serialx
PSA: You can also use singleflight[1] to prevent the thundering herd problem. Pocache is an interesting alternative way to solve thundering herd indeed!

[1]: https://pkg.go.dev/golang.org/x/sync/singleflight
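For the curious, a minimal sketch of the singleflight pattern (fetchFromDB and the key are hypothetical stand-ins for the real backing-store call):

```go
package main

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// fetchFromDB stands in for the expensive backing-store query.
func fetchFromDB(key string) (string, error) {
	return "value-for-" + key, nil
}

// get collapses concurrent lookups for the same key into one fetch;
// every caller blocks until that single fetch completes and shares
// its result, so the backing store sees one query, not a herd.
func get(key string) (string, error) {
	v, err, shared := group.Do(key, func() (interface{}, error) {
		return fetchFromDB(key)
	})
	if err != nil {
		return "", err
	}
	_ = shared // true when this caller reused another caller's in-flight fetch
	return v.(string), nil
}

func main() {
	v, _ := get("foo")
	fmt.Println(v)
}
```

Note that singleflight only deduplicates fetches; it doesn't cache results or refresh them preemptively, which is where Pocache differs.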

bww
You may be interested in Groupcache's method for filling caches; it solves the same problem that I believe this project is aimed at.

Groupcache has a similar goal of limiting the number of fetches required to fill a cache key to one, regardless of the number of concurrent requests for that key. It doesn't try to speculatively fetch data; it just coordinates fetching so that all the routines attempting to query the same key make one fetch between them and share the same result.

https://github.com/golang/groupcache?tab=readme-ov-file#load...
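Roughly, the fill pattern looks like this (a sketch assuming the current golang/groupcache API; fetchUser is a made-up stand-in for the real backing query):

```go
package main

import (
	"context"
	"fmt"

	"github.com/golang/groupcache"
)

// fetchUser is a hypothetical backing-store query.
func fetchUser(id string) (string, error) {
	return "user-record-for-" + id, nil
}

func main() {
	// All concurrent Get calls for the same key are coalesced into a
	// single invocation of this getter; every caller shares the result,
	// which is then cached in the group's LRU.
	users := groupcache.NewGroup("users", 64<<20, groupcache.GetterFunc(
		func(ctx context.Context, key string, dest groupcache.Sink) error {
			v, err := fetchUser(key)
			if err != nil {
				return err
			}
			return dest.SetString(v)
		}))

	var data string
	if err := users.Get(context.Background(), "42", groupcache.StringSink(&data)); err != nil {
		panic(err)
	}
	fmt.Println(data)
}
```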

pluto_modadic
So... if I initially got the key "foo" at time T=00:00:00, this library wouldn't re-query the backing system until T=00:01:00? Even if I re-query it at T=00:00:01? Versus being a write-through cache? I guess you're expecting other writers to go around the cache and update the DB behind your back.

If you are in that threshold window, why not a period where stale data is okay? For T=0-60 seconds, use the first query's result (don't retrigger a query). For T=60-120 seconds, use the first result, but trigger a single DB query and use the new result. Repeat until the key has been stale for 600 seconds.

That is, a minimum of 2 queries: the initial one, plus the first preemptive one at 60 seconds (in the cache for 10 minutes total).

And a maximum of 11 queries over 10 minutes: the initial one that entered the key, plus, if people keep asking for it once a minute, a preemptive one at the end of each of those minutes (for 20 minutes total in the cache). See the sketch below.
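Something like this stale-while-revalidate sketch (not Pocache's actual behavior; Cache, freshFor, staleFor, and fetch are all made up for illustration):

```go
package cache

import (
	"sync"
	"time"
)

// entry is a cached value plus the time it was last fetched.
type entry struct {
	value      string
	fetchedAt  time.Time
	refreshing bool // a background refresh is already in flight
}

// Cache serves values without querying for freshFor, then serves
// stale values while triggering at most one background refresh,
// and treats anything older than freshFor+staleFor as a miss.
type Cache struct {
	mu       sync.Mutex
	data     map[string]*entry
	freshFor time.Duration // e.g. 60 * time.Second
	staleFor time.Duration // e.g. 600 * time.Second
	fetch    func(key string) (string, error)
}

func NewCache(freshFor, staleFor time.Duration, fetch func(string) (string, error)) *Cache {
	return &Cache{
		data:     make(map[string]*entry),
		freshFor: freshFor,
		staleFor: staleFor,
		fetch:    fetch,
	}
}

func (c *Cache) Get(key string) (string, error) {
	c.mu.Lock()
	e, ok := c.data[key]
	if !ok {
		c.mu.Unlock()
		return c.fetchAndStore(key) // cold miss: query synchronously
	}
	age := time.Since(e.fetchedAt)
	switch {
	case age < c.freshFor:
		// Fresh window: serve the cached value, no query at all.
		v := e.value
		c.mu.Unlock()
		return v, nil
	case age < c.freshFor+c.staleFor:
		// Stale window: serve the old value, but trigger at most
		// one background query to replace it.
		if !e.refreshing {
			e.refreshing = true
			go func() {
				v, err := c.fetch(key)
				c.mu.Lock()
				defer c.mu.Unlock()
				if err != nil {
					e.refreshing = false // let a later read retry
					return
				}
				c.data[key] = &entry{value: v, fetchedAt: time.Now()}
			}()
		}
		v := e.value
		c.mu.Unlock()
		return v, nil
	default:
		// Stale past the limit: treat as a miss.
		c.mu.Unlock()
		return c.fetchAndStore(key)
	}
}

func (c *Cache) fetchAndStore(key string) (string, error) {
	v, err := c.fetch(key)
	if err != nil {
		return "", err
	}
	c.mu.Lock()
	c.data[key] = &entry{value: v, fetchedAt: time.Now()}
	c.mu.Unlock()
	return v, nil
}
```

A real version would also collapse concurrent cold misses (e.g. with singleflight, per the comment above) so the synchronous path can't herd either.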

tbiehn
Interesting idea - do you handle ‘dead keys’ as well? Let’s say you optimistically re-fetch a few times, but no client ever re-requests the key?