1. Allow building against packages without installing them. The core issue here is that Gentoo builds and installs each package as a single step: one can't "build a bunch of things that depend on one another" and then "atomically install all the built items into place". This means that Gentoo can easily end up partially broken mid-update when an ABI change occurs (modulo `.so` versioning, see next option). The 64-bit time_t switch is an example of an ABI change that folks are very aware of and that is very widespread. It's also an example of an ABI change that isn't handled by the normal `.so` versioning scheme (see next option).
2. Extend the normal `.so` versioning to capture changes to a package's ABI caused by the packages it depends on. Normally, every `.so` (shared object/library) embeds a version number, and is also installed with that version number in the file name (`libfoo.so.1.0.0`, for example, would be the real `.so` file, with a `libfoo.so` symlink to tell the linker which `.so` to use). This shared object version is normally managed internally by the package itself (in other words: each package chooses its own ABI version number to track its own internal ABI breakages). This allows Gentoo to upgrade without breaking everything while an update is going on, as long as every package manages its `.so` version correctly (not a given, but it does help in many cases). There is a process in Gentoo to remove old `.so.x.y.z` files that are no longer used once an install completes. What we'd need to do to support 64-bit time_t is add another component to this version, controlled by the ABI inherited from each `.so`'s dependencies. This is very similar in result to the "use a different libdir" option from the post; while it has the potential to enable the same kinds of ABI changes in the future, fixing things this way would likely be more invasive than using a different libdir.
For Debian it was extremely painful. A few people probably burned out. Lots of people pointed to source-based distributions and said "they will have it very easy".
The amd64 architecture did have some interesting features that made this much easier than it might have been for other cpu architectures. One of which was the automatic cast of 32 bit function arguments to 64 bit during the function call. In most cases, if you passed a 32 bit time integer to a function expecting a 64 bit time_t, it Just Worked(TM) during the platform bringup. This meant that a lot of the work on the loose ends could be deferred.
We did have some other 64 bit platforms at the time, but they did not have a 64 bit time_t. FreeBSD/amd64 was the first in its family, back in 2003/2004/2005. If I remember correctly, sparc64 migrated to 64 bit time_t.
The biggest problem that I faced was that (at the time) tzcode was not 64 bit safe. It used some algorithms in its 'struct tm' normalization that ended up in some rather degenerate conditions, eg: iteratively trying to calculate the day/month/year for time_t(2^62). IIRC, I cheated rather than change tzcode significantly, and made it simply fail for years before approx 1900, or after approx 10000. I am pretty sure that this has been fixed upstream in tzcode long ago.
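A hedged sketch of the kind of range guard described (the cutoff years, constants, and function name are illustrative, not the actual FreeBSD change): reject time_t values whose year falls outside a sane window, instead of letting an iterative `struct tm` normalization grind through astronomical ranges.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative guard: fail fast for years before ~1900 or after ~10000
 * rather than normalizing time_t(2^62) one step at a time. */
#define SECS_PER_YEAR 31556952LL   /* average Gregorian year in seconds */
#define MIN_YEAR 1900
#define MAX_YEAR 10000

static bool time_in_range(int64_t t) {
    int64_t year = 1970 + t / SECS_PER_YEAR;
    return year >= MIN_YEAR && year <= MAX_YEAR;
}
```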
We did have a few years of whack-a-mole with occasional 32/64 time mixups where 3rd party code would be sloppy in its handling of int/long/time_t when handling data structures in files or on the network.
But for the most part, it was a non-issue for us. Being able to have 64 bit time_t on day 1 avoided most of the problem. Doing it from the start was easy. Linux missed a huge opportunity to do the same when it began its amd64/x86_64 port.
Aside: I did not finish 64 bit ino_t at the time. 32 bit inode numbers were heavily exposed in many, many places. Even on-disk file systems, directory structures in UFS, and many, many more. There was no practical way to handle it for FreeBSD/amd64 from the start while it was a low-tier platform without being massively disruptive to the other tier-1 architectures. I did the work - twice - but somebody else eventually finished it - and fixed a number of other unfortunately short constants as well (eg: mountpoint path lengths etc).
> struct {
>     int a;
>     time_t b;
>     int c;
> };
>
> With 32-bit time_t, the offset of c is 8. With the 64-bit type, it’s 12.

Surely it's 16, not 12, as b needs to be 64-bit aligned, so padding is added between a and b? Which also kind of makes the point the author is trying to make stronger.
> Musl has already switched to that, glibc supports it as an option. A number of other distributions such as Debian have taken the leap and switched. Unfortunately, source-based distributions such as Gentoo don’t have it that easy.
While I applaud their efforts, I just think that as a user I want to be over and done with this problem, by switching to a non-source-based distribution such as Debian.
If legacy dates were a concern, one could shift the epoch by a couple of decades, or even reduce the time granularity from 1 second to 2 seconds. Each alternative has subtle problems of its own. It depends on the use case.
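A rough sketch of the two alternatives (the 30-year shift and all names below are invented for illustration): shifting the epoch stored in a 32-bit field, or halving the granularity to 2-second ticks.

```c
#include <stdint.h>

/* Hypothetical encodings that stretch a 32-bit on-disk/on-wire field. */
#define EPOCH_SHIFT ((int64_t)946080000)   /* ~30 years of seconds */

/* Shifted epoch: same granularity, window moved ~30 years later. */
static int32_t encode_shifted(int64_t t) { return (int32_t)(t - EPOCH_SHIFT); }
static int64_t decode_shifted(int32_t s) { return (int64_t)s + EPOCH_SHIFT; }

/* 2-second ticks: doubles the range, loses odd seconds. */
static int32_t encode_2s(int64_t t) { return (int32_t)(t / 2); }
static int64_t decode_2s(int32_t s) { return (int64_t)s * 2; }
```

Both round-trip cleanly inside their window; the subtle problems show up at the edges (pre-shift legacy dates in the first case, lost precision in the second).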
Then the linker and loader could error out if one attempted to link together two objects with incompatible ABIs.
For instance, I suspect you could have fields to denote the size of certain types. I guess DWARF has that... But DWARF is optional and sucks to parse.
I must say, mgorny's posts are always a treat for those who like to peer under the hood! (The fact that Gentoo has remained my happy place for years doesn't influence this position, though there's probably some correlation)
Yet another is to freeze the time for those programs at the last 32-bit POSIX second. They would just appear to execute incredibly fast :). Of course some will still break, and it’s obviously not suitable for many use cases (but neither is running in the past), but some might be just fine.
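The freeze idea is essentially a clamp on the clock (function name invented for this sketch): anything past the last 32-bit POSIX second, 2^31 - 1 (2038-01-19T03:14:07Z), reports that second forever, so a legacy program sees time stop rather than wrap negative.

```c
#include <stdint.h>

#define LAST_32BIT_SECOND 2147483647LL   /* 2^31 - 1 */

/* Hypothetical shim a compat layer could interpose for time(2). */
static int32_t frozen_time(int64_t real_now) {
    return (int32_t)(real_now > LAST_32BIT_SECOND ? LAST_32BIT_SECOND
                                                  : real_now);
}
```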
Instead, the OS and its SDK are versioned, and at build time you can also specify the earliest OS version your compiled binary needs to run on. Using this, the headers ensured the proper macros were selected automatically. (This is the same mechanism by which new/deprecated-in-some-version annotations would get set, to enable weak linking for a symbol or to generate warnings for it respectively.)
And it was all handled initially via the preprocessor, though now the compilers have a much more sophisticated understanding of what Apple refers to as “API availability.” So it should be feasible to use the same mechanisms on any other platform too.
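A hypothetical sketch of the preprocessor form of that mechanism (all macro and type names below are invented; Apple's real macros live in `<Availability.h>`, e.g. `__MAC_OS_X_VERSION_MIN_REQUIRED`): the build system defines the minimum OS version, and the headers pick declarations accordingly.

```c
/* Set by the build system: earliest OS this binary must run on. */
#define MY_OS_VERSION_MIN  110000
/* The release that introduced the new 64-bit type. */
#define MY_OS_V11          110000

#if MY_OS_VERSION_MIN >= MY_OS_V11
typedef long long my_time_t;   /* safe to require the new ABI */
#else
typedef long my_time_t;        /* legacy ABI; new symbols weak-linked */
#endif
```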