Find a dedicated server provider and rent the hardware. These companies rent out part of a datacenter (or sometimes build their own). Bonus points if they offer KVM - as in remote console, not the Linux hypervisor. Also ask if they do hardware monitoring and proactively replace failed parts. All of this is still way cheaper than cloud, usually with unmetered networking.
Way less hassle. They'll even take your existing stuff and put it into the same rack with the rented hardware.
The difference from cloud, apart from the price, is mainly that they have a sales rep instead of an API. And getting a server may take from a few hours to a few days. But in the end you get the same SSH login details you would get from a cloud provider.
Or, if you really want to just colocate your own boxes, the providers offer a "remote hands" service, so you can have geo-redundancy or just choose a better deal instead of one that's physically close to your place.
There was a single security guard, I signed in and he gave me directions and a big keychain. The keys opened most of the rooms and most of the cages around the racks. To this day I remain mystified at the level of trust (or nonchalance) that security guard had in a spotty teenager.
" 1. Install Linux on the box. Turn everything off but sshd. Turn off password access to sshd. If you just locked yourself out of sshd because you didn't install ssh keys first, STOP HERE. You are not ready for this. "
If you blindly followed the directions and got locked out, you can do exactly the same thing with other directions. You were not ready.
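For reference, the hardening step the article describes amounts to a few lines of sshd_config; this is a sketch using stock OpenSSH option names (your distro may also ship drop-in files under /etc/ssh/sshd_config.d/ that can override the main file):

```shell
# Sketch: key-only SSH, assuming stock OpenSSH. Add to /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   KbdInteractiveAuthentication no
#   PermitRootLogin prohibit-password
#   PubkeyAuthentication yes

# Validate the config BEFORE restarting, so a typo can't lock you out:
sshd -t && systemctl restart sshd
```

And, as the article says: put your key in ~/.ssh/authorized_keys and confirm a key login works from a second terminal before you close your current session.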
If your server/device has an IPMI... get a PiKVM (or the like) anyway. Not only will it last more than 2 seconds on the network without being hacked, it'll have more functionality and be much faster.
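If you do end up leaning on the BMC anyway, the usual tool is ipmitool; a hedged sketch below (the hostname and credentials are placeholders), and all of it belongs on a private management VLAN, never the public internet:

```shell
# Placeholders throughout: bmc.example.internal, admin/secret.
# Check power and chassis state out-of-band:
ipmitool -I lanplus -H bmc.example.internal -U admin -P 'secret' chassis status

# Attach a serial-over-LAN console (your text-mode "remote screen"):
ipmitool -I lanplus -H bmc.example.internal -U admin -P 'secret' sol activate

# Power-cycle a hung box without a trip to the colo:
ipmitool -I lanplus -H bmc.example.internal -U admin -P 'secret' chassis power cycle
```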
If you're in the US there are lots of places in the Kansas City area that have ridiculously cheap pricing and it's decently centrally located in the country.
I remember the look in the admin's eyes when they asked "alright, what kind of hardware are you looking to install?" and I said "oh, I have it right here" and pulled two Intel NUCs out of my backpack
> Consider bringing a screwdriver and a flashlight (any halfway decent place will provide those for you, but you never know).
two multitools minimum, sometimes you need to hold a nut with one while loosening the bolt with the other
the best space is the one that is right next to a Fry's/Micro Center/whathaveyou
> Plug everything in and turn it on.
Most server racks will have C13 or C19 outlets for power distribution. For consumer-type devices designed to plug into a wall socket, suitable cables or an adapter strip would probably be required.
For example, in ZFS you can mirror disks 1 and 2, while having 3 and 4 as hot spares with:
zpool create pool mirror $d1 $d2 spare $d3 $d4
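To sanity-check the layout and handle a failure later, something like the following (device names are placeholders, and this assumes manual spare handling rather than autoreplace):

```shell
# Verify the layout: expect one mirror-0 vdev and two spares marked AVAIL.
zpool status pool

# If a mirror member ($d1) dies, swap in a hot spare by hand:
zpool replace pool $d1 $d3

# After the resilver completes and the bad disk is physically replaced,
# return the spare to the spare pool:
zpool detach pool $d3
```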
Also remember to ask your provider to assign you a private VLAN, and VPN into it for your own access only.
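If the provider hands you the VLAN but leaves the VPN to you, a minimal WireGuard config on a gateway box is one common approach. A sketch, where every address, key, and interface name is a made-up placeholder:

```ini
# /etc/wireguard/wg0.conf on the colo gateway (all values are placeholders)
[Interface]
Address = 10.77.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# your laptop
PublicKey = <laptop-public-key>
AllowedIPs = 10.77.0.2/32
```

Then `wg-quick up wg0` on the gateway, a mirror-image config on the laptop, and firewall everything else off from the outside.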
Well, we could run decentralized systems for common SME/startup needs out of the modern homes of remote workers: just rent a room from them with the proper local equipment the modern remote worker probably already has (mechanical ventilation, A/C, "the big UPS" fed from domestic PV), for not much more than the cost of using someone else's computer, also known as "the cloud". Then, if and when needs grow, small sheds with PV + local storage and good connections can pop up. Who needs a CDN in such a setup? From there, a real small datacenter could be born.
Are you horrified? Well, now look at recent history: Amazon chose not to buy expensive, reliable Sun servers, choosing cheap, crappy PCs instead; Google chose to save money with rack-mounted, velcro-assembled PCs, each with a "built-in UPS", i.e. a naked battery velcro-strapped on. See the history of buying cheap, low-quality RAM because "we have ECC", low-quality storage because "we have checksums anyway", and so on. Then think about some recent, very real datacenter fires and what they told us about how "brilliant and safe" those designs were.
Long story short: just as we ditched big iron and mainframes in the past, it's about time to think about ditching the datacenter model as well, in favor of a spread-out set of small, individually unreliable but numerous machine rooms at homelab scale. Like FLOSS, this gives us reliability, cheapness, and ownership. It also decouples the hardware and software giants, which is a very good thing in terms of free markets. It would also force OEMs to be flexible in their designs and slowly move away from the modern crappy fast-tech that even some giants deprecate [2] as unsustainable, toward more balanced tech - because such homelabs last much longer than the typical ready-made PCs, craptops, or NASes from the mass market.
[1] buying from China directly instead of feeding local importers, who skyrocket the price instead of lowering it when Chinese suppliers lower theirs
[2] https://unctad.org/system/files/official-document/der2024_en... of course suggesting the wrong answer
When you have a better handle on what works for you, then go with colo (meaning you don't rent servers, but buy and set up your own servers and Ethernet switches, which go into your rack space).
I guess I could leave them here at home and probably pay less in power than the total colo cost, but the bandwidth here at home is not great.
HE will give you a full cabinet for $600/month https://he.net/colocation.html?a=707336661752&n=g&m=e&k=hurr...
And if you failed to do this, and to follow all the advice in the article: kindly ask the colo provider to attach an IP KVM (if your "server" is not an actual server with IPMI/BMC); 99.9% of places have these, and most of them will provide one free of charge for a limited amount of time.
RIP Alchemy.
Also, you could see if your local hackerspace has a server rack.
Also, test that it's properly disabled with something like `ssh -v yourserver : 2>&1 | grep continue`, because there are a surprising number of ways for that to go wrong (did you know that sshd lets you Include multiple config files together in a way that can override the main one? I know that now.)
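A belt-and-braces companion check: `sshd -T` dumps the configuration sshd will actually enforce, after all Include files and Match blocks are resolved, so it catches exactly the override problem described above. A sketch:

```shell
# Run on the server, as root: dump the effective config and check the
# two options that matter for password logins.
sshd -T | grep -Ei 'passwordauthentication|kbdinteractiveauthentication'
# Both should say "no"; if not, some included file is overriding you.
```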