This makes me feel old.

I'm amused by some of the JS community acting like server-side rendering and hydration are akin to discovering fire, when they've just brought back progressive enhancement from circa 2009.

Next week we're going to have a lesson on semaphores.

On a more serious note, there are a lot of places where we could take some lessons from the past to heart (read: stop reinventing the wheel). Permissions systems spring to mind... something Unix/LDAP-like would fit a lot of use cases and be much clearer than some of the awful things I have seen. Good database design needs to make a comeback (and people need to stop being afraid of SQL). And letting go of data: why do some people have so much trouble purging old logs?

I could go on but... damn, you kids get off my lawn!

I am surprised this doesn't list two major resources for learning how caching works:

- https://www.mnot.net/cache_docs/ (for a long time this was the best online resource)

- https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching (extremely detailed, and based on the previous link too, from what I can tell)

ETags are also a built-in way to avoid conflicts when multiple clients are mutating the same data. Instead of each client sending

  PUT /doc
and blindly overwriting what’s there, they send

  PUT /doc
  If-Match: <etag they fetched>
If the current server-side ETag has a different value, the server can return a 412 Precondition Failed. Then the client can re-fetch the document, re-apply its changes, and re-PUT in a loop until it succeeds.

You wouldn’t do that for huge, frequently changing docs like a collaborative spreadsheet. It’s perfect for small documents where you’d expect success most of the time, but where failure is frequent enough that you want some sort of smart conflict handling.
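That fetch/mutate/re-PUT loop can be sketched with an in-memory stand-in for the server. `DocServer` and `update_with_retry` are made-up names for illustration; a real client would use an HTTP library, and the spec's status for a failed `If-Match` is 412 Precondition Failed (some APIs use 409 instead):

```python
# Hypothetical in-memory stand-in for the server side of PUT /doc.
# A monotonically increasing version number plays the role of the ETag.
class DocServer:
    def __init__(self, doc):
        self.doc = doc
        self.version = 0

    def get(self):
        """GET /doc: returns the document and its current ETag."""
        return self.doc.copy(), f'"{self.version}"'

    def put(self, doc, if_match):
        """PUT /doc with If-Match: reject the write if the ETag is stale."""
        if if_match != f'"{self.version}"':
            return 412  # Precondition Failed: someone else wrote first
        self.doc = doc.copy()
        self.version += 1
        return 204


def update_with_retry(server, mutate, max_attempts=5):
    """Fetch, apply `mutate`, and PUT with If-Match; retry on 412."""
    for _ in range(max_attempts):
        doc, etag = server.get()
        mutate(doc)
        if server.put(doc, if_match=etag) == 204:
            return doc
    raise RuntimeError("gave up: too much contention")
```

The retry loop only loses work proportional to one `mutate` call per conflict, which is why it suits small documents with occasional contention.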

ETags are brilliant at reducing bandwidth and work especially well if the origin webserver is close to the client. Unfortunately, there isn't much they can do about round-trip latency. Even if the payload resides on a proxy cache close to the client, that proxy cannot instantly answer 304 Not Modified, because it needs to revalidate its own cache with its upstream (using If-None-Match).

So, serving a (relatively small) csvbase from the Bay Area to Australia will still be slow unless you're willing to accept stale data (i.e. Cache-Control: max-age / Expires headers).
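A minimal sketch of the trade-off being described, loosely following the freshness rules of HTTP caching (`can_serve_locally` and the directive dict are illustrative, not a real cache implementation): without an explicit freshness lifetime, every hit pays the upstream round trip to revalidate, no matter how close the proxy is.

```python
def can_serve_locally(age_seconds, cache_control):
    """Decide whether a cached response may be served without contacting
    the origin. `cache_control` is a dict of parsed directives, e.g.
    {"max-age": 3600} or {"no-cache": True}."""
    if cache_control.get("no-cache"):
        return False  # must always revalidate (an If-None-Match round trip)
    max_age = cache_control.get("max-age")
    if max_age is None:
        return False  # no freshness lifetime: revalidate every time
    # Fresh only while younger than max-age; stale responses need a
    # round trip (or an explicit willingness to serve stale data).
    return age_seconds < max_age
```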

The Varnish caching lifecycle [1] has some great additional features: coalescing and holding multiple requests, refreshing the cache while immediately serving cached content to the requester, and serving stale items when the backend is down.

[1] https://docs.varnish-software.com/tutorials/object-lifetime/
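The first of those features, request coalescing, can be sketched as a "single-flight" wrapper: concurrent requests for the same key share one backend fetch instead of stampeding. `Coalescer` and `fetch` are hypothetical names for illustration, not Varnish APIs (Varnish does this natively via its waiting list):

```python
import threading

class Coalescer:
    def __init__(self, fetch):
        self.fetch = fetch
        self.lock = threading.Lock()
        self.inflight = {}  # key -> (done event, result box)

    def get(self, key):
        with self.lock:
            entry = self.inflight.get(key)
            if entry is None:
                # First requester becomes the leader and does the fetch.
                entry = (threading.Event(), {})
                self.inflight[key] = entry
                leader = True
            else:
                leader = False
        event, box = entry
        if leader:
            try:
                box["value"] = self.fetch(key)  # one backend hit for everyone
            finally:
                with self.lock:
                    del self.inflight[key]
                event.set()  # release the waiters
        else:
            event.wait()  # held until the leader's fetch completes
        return box["value"]
```

(Error handling is elided: if the leader's fetch raises, waiters would need a retry path rather than a missing result.)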

For a file-based HTTP server's weak ETags, I've used a concatenation of st_dev, st_ino, and the inode generation number. For strong ETags I've used a SHA-512 hash.

In combination with If-Match: and If-None-Match: this is very powerful.
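A rough Python sketch of both schemes. One assumption to flag: `os.stat` doesn't expose the inode generation number, so the modification time stands in for it here; the function names are made up for illustration:

```python
import hashlib
import os

def weak_etag(path):
    """Weak ETag from cheap file metadata: device, inode, and mtime
    (substituting for the inode generation number)."""
    st = os.stat(path)
    return f'W/"{st.st_dev:x}-{st.st_ino:x}-{st.st_mtime_ns:x}"'

def strong_etag(path):
    """Strong ETag from a full content hash: byte-for-byte equality,
    at the cost of reading the whole file."""
    with open(path, "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()
    return f'"{digest}"'
```

The weak variant is O(1) per request; the strong variant is what you want for byte-range requests and other cases where "semantically equivalent" isn't good enough.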

I really wish HTTP had a no-build solution for caching static assets that didn't require either a revalidation request or a stale-asset period. For example, the ability to declare that static assets under a path won't change unless a build version at some other path changes.
csvbase looks cool.

Apache mod_cache is pretty good on correctness, not so much on speed. I once layered Varnish on top of mod_cache on top of the real backend. It was enough. It could even handle moderate-traffic WebDAV (that was my main use case).

If Windows hadn't dropped native support for WebDAV, I would recommend taking a look at it. If I'm not mistaken, macOS still supports it out of the box, as does GNOME through gvfs.

Is `no-cache` necessary if using `must-revalidate`?
HTTP/1.1 was a mistake, mostly driven by audience analytics (in other words, advertising).

No complex caching (i.e. aggressively cache everything), plus a user who can discern how to operate a simple refresh button, was the best solution.

Caching today is a joke: you cannot press back after going offline anywhere.