Copying a compiled binary package to another Linux machine running whatever ancient distribution, and having it actually run, is more a dream than a reality. That is why tools like Docker are so useful. But it would not feel natural to use Docker to run, say, vim with YouCompleteMe built with a particular set of options.
So the solution usually boils down to copying all dependencies along with whatever package I am going to use. But dependencies have dependencies too, so the right answer is really “all recursive dependencies”. That is still often doable with the help of tools like ldd. But some libraries are less forgiving than others; one in particular, glibc, does not simply work if you copy it to another machine and expect that machine's dynamic loader to load it. Of course, one can also copy the dynamic loader, ld.so, but that puts many extra limitations on what you can run and what it can interact with.
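For the easy cases, the ldd approach can be sketched in a few lines of shell. This is only a sketch: it assumes a glibc system where ldd prints `lib => /path (addr)` lines, and it deliberately copies the whole transitive closure, since ldd already resolves dependencies of dependencies.

```shell
# Collect the recursively resolved shared-library dependencies of a
# binary into a directory, so they can be shipped alongside it.
collect_deps() {
    bin="$1"; dest="$2"
    mkdir -p "$dest"
    # Take the third field of "libfoo.so => /path/libfoo.so (0x...)" lines.
    ldd "$bin" | awk '/=>/ { print $3 }' | while read -r lib; do
        [ -f "$lib" ] && cp "$lib" "$dest/"
    done
}

# Example: stage the dependencies of /bin/sh into ./deps
collect_deps /bin/sh ./deps
```

This works until one of the copied libraries is glibc itself, which is exactly the case the rest of this post is about.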
Even worse, part of the dependency chain actually comes from the toolchain itself. If your libstdc++ depends on a newer version of glibc, trying to run anything written in C++ and built with that toolchain on a machine with an older glibc would be futile.
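You can see this requirement directly in any binary or library: glibc symbols are versioned, and the highest GLIBC_x.y a file references is the oldest glibc it can run on. As a quick sketch (using /bin/sh only as a stand-in; objdump -T shows the same information more properly):

```shell
# Quick-and-dirty: grep the GLIBC_x.y version strings embedded in the
# binary's dynamic symbol/version tables.  The highest version listed
# is the minimum glibc required at run time.
grep -aho 'GLIBC_[0-9.]*' /bin/sh | sort -Vu
```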
So the solution would be a new toolchain. It turns out the problem we are going to solve is structurally similar to that of cross-compilation. The machine we build on (the host) is different from the machine our code should run on (the target).
The build process is more or less described at this place. Usually, the fact that one needs to interleave the builds of gcc and glibc is an annoyance. But since what we want is precisely to take control of both the gcc and the glibc versions, the chore is gone and all that is left is clarity.
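The interleaving can be sketched roughly as below. This is only an illustration of the ordering; the triplet, prefix, and flag set are placeholder assumptions, not the exact commands from my scripts, and a real build needs many more options.

```shell
# Illustrative ordering of a gcc/glibc cross-toolchain build.
TARGET=x86_64-linux-gnu
PREFIX="$HOME/opt/cross"

# 1. binutils for the target.
../binutils/configure --prefix="$PREFIX" --target="$TARGET"
make && make install

# 2. A minimal gcc, just enough to compile glibc (no libc headers yet).
../gcc/configure --prefix="$PREFIX" --target="$TARGET" \
    --enable-languages=c,c++ --without-headers
make all-gcc && make install-gcc

# 3. glibc headers and startup files, built with that minimal gcc.
../glibc/configure --prefix="$PREFIX/$TARGET" --host="$TARGET"
make install-headers

# 4. libgcc, then full glibc, then the rest of gcc (libstdc++ etc.).
```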
The very act of building a high-versioned gcc together with a low-versioned glibc has its own troubles. Their configuration scripts are written with autotools, which, while famous for its compatibility, is unfortunately pretty poor at supporting compilers and tools newer than the scripts themselves, even when those tools are perfectly standard. That is, the configure command gets confused by the presence of things like a modern make or a modern gcc, even though it really shouldn't.
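For instance, older glibc configure scripts match tool versions against hard-coded patterns and reject anything newer. A common workaround is to widen the check by hand; the exact pattern below is an illustrative assumption and varies between glibc releases, so adjust it to your tree.

```shell
# Old configure scripts check tool versions with case patterns along
# the lines of
#   3.79* | 3.[89]*)  -> ok
# which rejects make 4.x outright, even though 4.x works fine.
# Widening the pattern in the generated configure script (not
# configure.ac) is a blunt but effective fix.
sed -i 's/3\.79\* | 3\.\[89\]\*/3.* | 4.*/' configure
```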
Some of the other compatibility issues are even harder to tackle. Fortunately, I am not the only one in the world who tries to combine an old glibc with a new gcc! This blog post lists a number of important things to take care of when doing so.
My build scripts are available here, and the binary files here.
While browsing the various discussions, I discovered an interesting project called glibc_version_header, which seems to be a more lightweight approach to the same problem. However, its list of caveats is also much longer than that of a full-fledged cross compiler.
Oftentimes one also wants to make sure the cross compiler itself can run on various old machines. To describe that requirement we need the full terminology of cross compilation: we want to build a cross compiler on a very new build machine, that runs on a decently new host machine, and produces outputs compatible with even older targets. In other words, a “Canadian” cross compiler with all three architectures happening to be minor variations of the same one. A Canadian Cross of the amd64, by the amd64, for the amd64.
A slightly simplified route is to simply let the host and the target be the same. Then I should be able to run my build script twice, with the second iteration using the output of the first as its build toolchain, to obtain such a cross compiler.
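In script form the two-pass idea looks something like the following; build.sh and its flag are hypothetical placeholders for my actual build scripts.

```shell
# Pass 1: build the cross toolchain with whatever toolchain the build
# machine already has.  The result targets the old glibc, but the
# compiler binaries themselves may still need the build machine's
# (new) glibc to run.
./build.sh --prefix="$HOME/cross-stage1"

# Pass 2: rebuild with stage 1 on PATH, so the toolchain is itself
# compiled against the old glibc and therefore runs on old hosts too.
PATH="$HOME/cross-stage1/bin:$PATH" ./build.sh --prefix="$HOME/cross-stage2"
```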