yjftsjthsd-h
> 1. Install Linux on the box. Turn everything off but sshd. Turn off password access to sshd.

Also, test that it's properly disabled with something like `ssh -v yourserver : 2>&1 | grep continue`, because there are a surprising number of ways for that to go wrong (did you know that sshd lets you Include multiple config files together in a way that can override the main one? I know that now.)
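Two other checks that tend to hold up with stock OpenSSH (the user and host below are placeholders):

  # on the server: dump the effective config, with Include files and Match blocks already resolved
  sudo sshd -T | grep -iE 'passwordauthentication|kbdinteractive|challengeresponse'

  # from another machine: refuse pubkey and insist on a password; this should end in "Permission denied"
  ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password,keyboard-interactive you@yourserver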

krab
A bit less terrible way in my opinion:

Find a dedicated server provider and rent the hardware. These companies rent some part of the datacenter (or sometimes build their own). Bonus points if they offer KVM - as in remote console, not the Linux hypervisor. Also ask if they do hardware monitoring and proactively replace the failed parts. All of this is still way cheaper than cloud. Usually with unmetered networking.

Way less hassle. They'll even take your existing stuff and put it into the same rack with the rented hardware.

The difference from cloud, apart from the price, is mainly that they have a sales rep instead of an API. And getting a server may take from a few hours to a few days. But in the end you get the same SSH login details you would get from a cloud provider.

Or, if you really want to just colocate your boxes, the providers offer a "remote hands" service, so you can have geo-redundancy or just choose a better deal instead of one that's physically close to your place.

elric
Back in the late 90s/early 00s when I was a precocious teenager, I ran a somewhat popular website. At some point it made sense to just buy a 1U rack-mountable server and have it colocated (commercial webhosting was expensive then). I couldn't find anyone to give me a ride to the datacenter, so I took a bus. By the time I got there my arms were numb from carrying the bloody thing.

There was a single security guard, I signed in and he gave me directions and a big keychain. The keys opened most of the rooms and most of the cages around the racks. To this day I remain mystified at the level of trust (or nonchalance) that security guard had in a spotty teenager.

mjevans
They're absolutely correct:

" 1. Install Linux on the box. Turn everything off but sshd. Turn off password access to sshd. If you just locked yourself out of sshd because you didn't install ssh keys first, STOP HERE. You are not ready for this. "

If you blindly followed the directions and got locked out, you would do exactly the same thing with any other set of directions. You were not ready.

mastazi
She wrote a "part 2" just today https://rachelbythebay.com/w/2024/09/23/colo/
zamadatix
Get something like a PiKVM and drop all the stuff about staying very local: you can find a cheaper provider elsewhere and use smart hands once a year, either for free or still cheaper than picking the local place you can drive to. Even if you do everything in this guide perfectly, it'll break/hang/get misconfigured at some point, and a PiKVM (or the like) lets you remotely hard-boot a box instantly without having to drive over or open a ticket. It also lets you reinstall the entire OS remotely if you need to.

If your server/device has an IPMI... get a PiKVM (or the like) anyway. Not only will you last more than 2 seconds without being hacked, but it'll have more functionality and be much faster.

If you're in the US there are lots of places in the Kansas City area that have ridiculously cheap pricing and it's decently centrally located in the country.

seszett
The most difficult step, I find, is just barely mentioned: finding colocation space at a reasonable price is difficult these days.
jareklupinski
> the bare minimum required to run your own hardware in a colocation environment

I remember the look in the admin's eyes when they asked "alright, what kind of hardware are you looking to install?" and I said "oh, I have it right here" and pulled two Intel NUCs out of my backpack.

> Consider bringing a screwdriver and a flashlight (any halfway decent place will provide those for you, but you never know).

Two multitools minimum; sometimes you need to hold a nut with one while loosening the bolt with the other.

The best space is the one that's right next to a Frys/Microcenter/what-have-you.

neomantra
Don't forget hearing protection! All the fans in there make it crazy loud. Facilities often supply earplugs, but consider pairing them with over-the-ear protection.
andyjohnson0
Interesting article.

> Plug everything in and turn it on.

Most server racks will have C13 or C19 outlets for power distribution. For consumer-type devices designed to plug into a wall socket, suitable cables or an adapter strip would probably be required.

efortis
My tip is to add hot-spare disks so you don't have to go on-site to replace broken ones right away.

For example, in ZFS you can mirror disks 1 and 2, while having 3 and 4 as hot spares with:

  zpool create pool mirror $d1 $d2 spare $d3 $d4
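Depending on the platform, the ZFS event daemon may attach a spare automatically when a mirror member dies; the manual path, reusing the same pool and device names, is roughly:

  zpool status pool            # spares are listed as AVAIL until needed
  zpool replace pool $d1 $d3   # swap the failed disk for one of the spares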
methou
IPMI/BMC is a must if you plan to keep the server running longer than a release cycle of your Linux distribution.

Also remember to ask your provider to assign you a private VLAN, and VPN into it for your own access only.

kkfx
I suggest a very different line of reasoning: we know WFH works, we agree that in a large part of the world there is enough sunlight for p.v., battery storage can be cheap [1], and FTTH connections are starting to be very good. Can you see where I'm going with this?

Well, we could run decentralized systems for the common needs of SMEs/startups out of the modern homes of remote workers, just renting a room from them with the local equipment the modern remote worker probably already has (VMC, A/C, "the big UPS" from domestic p.v.), for not much more than the cost of using someone else's computer, also known as "the cloud". Then, if and when needs grow, sheds with p.v. + local storage and good connections can pop up. Who needs a CDN in such a setup? Eventually a real small datacenter could be born from it.

Are you horrified? Well, look at recent history: Amazon chose not to buy expensive, reliable Sun servers and went with cheap, crappy PCs instead; Google chose to save money with rack-mounted, velcro-assembled PCs with a built-in UPS, i.e. a bare battery strapped on with velcro. Look at the history of buying cheap, low-quality RAM because we have ECC, low-quality storage because we have checksums anyway, and so on. Then think about some recent real-world datacenter fires and what they revealed about "how brilliant and safe" those designs were.

Long story short: just as we ditched big iron and mainframes in the past, it's about time to think about ditching the datacenter model as well, in favor of a spread of many small, individually unreliable machine rooms at homelab scale. Like FLOSS, this gives us reliability, cheapness and ownership. It ALSO decouples the hardware and software giants, which is a very good thing in terms of free markets. It would also force OEMs to be flexible in their designs and to slowly move away from the modern crappy fast-tech that even some giants deprecate [2] as unsustainable, toward more balanced tech, because such homelabs last much longer than the typical ready-made PCs, craptops or NASes from the mass market.

[1] Buying directly from China instead of feeding local importers, who jack up prices instead of lowering them when Chinese suppliers lower theirs.

[2] https://unctad.org/system/files/official-document/der2024_en... of course suggesting the wrong answer

cuu508
What's the importance of having a switch/hub? Is it because you may want to add more servers later, but the colo host only provides one port in their router?
shrubble
If someone asked me "right now", and they didn't have unusual hardware needs and were just dipping their toes into it, I would suggest wholesaleinternet.net: get one or more cheap servers there to practice with. I rent something there for about $25/month, though you can get a better setup for a bit more.

When you have a better handle on what works for you, then go with colo (meaning you don't rent servers, but buy and set up your own servers and Ethernet switches, which go into your rack space).

theideaofcoffee
Missing step -1: get someone on your staff who knows what they are doing, or at least find a contractor with a good track record of doing this kind of thing. After a certain point, you really shouldn't wing it. But if you're at that point, you should know whether you actually need your own hardware, or whether public cloud or other options are good enough for you. Some need their own; most don't. There's a lot of nuance there that is difficult to express in a short HN comment.
divbzero
Does anyone have a sense of how much colocation costs? Assuming you have a modest setup that would otherwise run on a home server or VPS?
ggm
Remote power management can be a godsend. If you can get an ipmi console in, you want it.
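As a rough sketch of what that buys you, assuming a BMC reachable via IPMI-over-LAN and plain ipmitool (the address and credentials are placeholders):

  # power state and a hard power-cycle from anywhere
  ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'hunter2' chassis power status
  ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'hunter2' chassis power cycle

  # serial-over-LAN console, handy when sshd is the thing that broke
  ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'hunter2' sol activate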
whalesalad
I have two R720s that I'm thinking of dropping into a colo here in Michigan. They'll be running Proxmox in a cluster (no HA, as you need 3+ nodes for that). Can someone talk me out of this? It'll be $200/mo for both at Smarthost (looking glass here: http://mi1.smarthost.net/; I get ~12ms ping to this datacenter).

I guess I could leave them here at home and probably pay less in power than the total colo cost, but the bandwidth here at home is not great.

renewiltord
Imho, the easiest way is to get a MikroTik switch, set up WireGuard on it, allow only WireGuard in, then use the WireGuard Mac app. Build your machines with ASRock Rack or Gigabyte motherboards. If you have existing machines, add a PiKVM. You'll get an IPv4 with a half-cab in most DCs.
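Roughly, the "only WireGuard in" part looks something like this on a RouterOS 7 box (interface name, subnet, port and key are made up for illustration, and rule order matters):

  /interface/wireguard add name=wg-admin listen-port=13231
  /interface/wireguard/peers add interface=wg-admin public-key="<laptop public key>" allowed-address=10.77.0.2/32
  /ip/address add address=10.77.0.1/24 interface=wg-admin
  /ip/firewall/filter add chain=input connection-state=established,related action=accept
  /ip/firewall/filter add chain=input protocol=udp dst-port=13231 action=accept comment="WireGuard handshake"
  /ip/firewall/filter add chain=input in-interface=wg-admin action=accept comment="management only via the tunnel"
  /ip/firewall/filter add chain=input action=drop comment="everything else"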

HE will give you a full cabinet for $600/month https://he.net/colocation.html?a=707336661752&n=g&m=e&k=hurr...

thatsit
"If you just locked yourself out of sshd because you didn't install ssh keys first, STOP HERE. You are not ready for this." That one had me chuckling.
moandcompany
Documenting and testing the hard reset/reboot procedure, as well as any expectations, for your colocated gear sounds like a good thing to add to this list.
omgtehlion
It is also useful to configure multiple IPs on the same interface: one for the colo, and one for your lab. You can do this on all OSes, even on Windows.
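On Linux that's a one-liner per address; a minimal, non-persistent sketch assuming the NIC is eth0 and using placeholder addresses (your distro's network config is what makes it stick across reboots):

  ip addr add 203.0.113.10/24 dev eth0   # colo address
  ip addr add 192.168.1.10/24 dev eth0   # lab address on the same interface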

And if you failed to do this and to follow all the advice in the article: kindly ask the colo provider to attach an IP KVM (if your "server" is not an actual server with an IPMI/BMC); 99.9% of places have these, and most will provide one free of charge for a limited amount of time.

subjectsigma
Why is the tone of the article so negative and grouchy? On one hand the author doesn’t owe anybody fake kindness and it’s their blog. On the other hand - if you don’t like the topic or giving advice, then why write a grumpy article about it?
ForHackernews
Why would you do this instead of having the hardware at your house? For a better/faster connection?
vfclists
Is colo necessary if you have a 1000Mbit symmetrical provider at home?
johnklos
I'm writing up a how-to about colocating using one of those Pi colocation offers. Glad to see colocation get some more attention!
N8works
This is very 2004. I like it.

RIP Alchemy.

brodouevencode
Is this cloud repatriation?
emptiestplace
What audience was this written for?
behringer
Forget the switch and Pi and get a Ubiquiti router. Much more powerful and simple to set up. Does require some know-how.

Also, you could see if your local hackerspace has a server rack.