pzmarzly

  -g kB       Remove old log lines when the in-memory database crosses x kB
Seems like garbage collection is only implemented for the in-memory database (by reading SQLITE_DBSTATUS_CACHE_USED). Maybe logrotate could be set up to do it instead, but nothing in the documentation suggests so.
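For context, the project's mechanism reads SQLITE_DBSTATUS_CACHE_USED through the C API (`sqlite3_db_status`). A rough sketch of the same size-capped idea in Python follows; the `logs` table, column names, and the 64 kB threshold are all hypothetical, and page-count arithmetic stands in for the C-level cache statistic:

```python
import sqlite3

LIMIT_BYTES = 64 * 1024  # hypothetical stand-in for the -g threshold

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, line TEXT)")

def db_size_bytes(conn):
    # Approximate the database footprint from page statistics.
    # (The real tool reads SQLITE_DBSTATUS_CACHE_USED via the C API.)
    (pages,) = conn.execute("PRAGMA page_count").fetchone()
    (free,) = conn.execute("PRAGMA freelist_count").fetchone()
    (page_size,) = conn.execute("PRAGMA page_size").fetchone()
    return (pages - free) * page_size

def gc(conn, limit=LIMIT_BYTES):
    # Drop the oldest lines, a batch at a time, until under the limit.
    while db_size_bytes(conn) > limit:
        cur = conn.execute(
            "DELETE FROM logs WHERE id IN "
            "(SELECT id FROM logs ORDER BY id LIMIT 100)"
        )
        if cur.rowcount == 0:  # table empty; nothing left to reclaim
            break

for i in range(10_000):
    conn.execute("INSERT INTO logs (line) VALUES (?)", (f"line {i}" * 10,))

gc(conn)
```

The batch delete keeps the loop from issuing one DELETE per row; freed pages go to SQLite's freelist, which is why the size estimate subtracts `freelist_count`.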

Otherwise looks like a great project.

thebeardisred
What's the maximum write speed? At what point do you start losing log messages?
Spivak
I'm actually kinda surprised they went with SQLite here. Log messages are about the most trivial data format there is, and you could surely beat SQLite's speed just by not having database logic in the middle at all. Just being able to BYOAllocator for the logs themselves, with such predictable linear memory usage, would make this thing scream.
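For what it's worth, the no-database version of this is roughly a bounded ring buffer. A toy sketch in Python, where `collections.deque` stands in for the custom allocator the comment imagines:

```python
from collections import deque

class LogRing:
    """Fixed-capacity in-memory log store: O(1) append, and the
    oldest lines are dropped once capacity is reached, so memory
    usage is flat and predictable."""

    def __init__(self, capacity):
        self.lines = deque(maxlen=capacity)

    def append(self, line):
        self.lines.append(line)  # evicts the oldest line when full

    def tail(self, n):
        return list(self.lines)[-n:]

ring = LogRing(capacity=3)
for i in range(5):
    ring.append(f"msg {i}")
# oldest two entries were evicted:
# ring.tail(3) -> ["msg 2", "msg 3", "msg 4"]
```

With SQLite you trade that flat memory profile for SQL queryability over the retained lines, which is presumably the point of the project.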
simscitizen
How does this work exactly? Is every log line a separate transaction in autocommit mode? Because I don't see any begin/commit statements in this codebase so far...
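Background for the question: SQLite runs in autocommit mode unless a BEGIN is issued, so each bare INSERT is its own implicit transaction (with its own journal write and fsync on a disk-backed database). A minimal sketch of the difference in Python; the `logs` table is hypothetical:

```python
import sqlite3

def insert_lines(conn, lines, batch=False):
    # In autocommit mode every INSERT is a separate implicit
    # transaction; an explicit BEGIN/COMMIT amortizes that
    # per-transaction cost across the whole batch.
    if batch:
        conn.execute("BEGIN")
        conn.executemany(
            "INSERT INTO logs (line) VALUES (?)",
            ((line,) for line in lines),
        )
        conn.execute("COMMIT")
    else:
        for line in lines:  # one implicit transaction per statement
            conn.execute("INSERT INTO logs (line) VALUES (?)", (line,))

conn = sqlite3.connect(":memory:", isolation_level=None)  # true autocommit
conn.execute("CREATE TABLE logs (line TEXT)")
insert_lines(conn, [f"msg {i}" for i in range(1000)], batch=True)
```

If the codebase really has no begin/commit statements, the autocommit path is what it is doing, which matters a lot for write throughput on disk.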
marcrosoft
I did something similar, though not open source: centrallogging.com. It is surprising how well SQLite can scale for smallish amounts of logs (~1 TB).
juvenn
Why not use DuckDB? It is a columnar database, and seems better suited for log entries (to me, at least).
righthand
This looks right up my alley. I am experimenting to see how much systemd I can strip from my everyday laptop, as an exercise in futility and to understand how entangled with it a distribution like Debian has become.