It should come as no surprise that the execution speed of programs is a primary concern in high-performance computing (HPC). Many HPC practitioners would tell you that among their top concerns are the performance of the high-speed networks used by the Message Passing Interface (MPI) and the use of the latest vectorization extensions of modern CPUs.
High-performance networks have been evolving constantly, and sometimes in hard-to-decipher ways. Once upon a time, hardware vendors would pre-install an MPI implementation (often an in-house fork of one of the free MPI implementations) specially tailored to their hardware. Fortunately, that era appears to be over. Nevertheless, the belief persists that MPI cannot be packaged in a way that achieves the best performance on the variety of contemporary high-speed networking hardware.
Guix follows a transparent source/binary deployment model: it will
download pre-built binaries when they’re available, much like
yum does, and otherwise fall back to building from source. Most of the
time the project’s build farm provides binaries so that users don’t have
to spend resources building from source. Pre-built binaries may be
missing when you’re installing a custom package, or when the build farm
hasn’t caught up yet. However, deployment of binaries is often seen as
incompatible with high-performance requirements—binaries are “generic”,
so how can they take advantage of cutting-edge HPC hardware? In this
post, we explore the issue and solutions.
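To make the source/binary duality concrete, here is a sketch of how one might observe it from the command line, assuming Guix is installed and taking the `openmpi` package as an example (exact output depends on the state of the build farm):

```shell
# Ask whether the build farm currently offers pre-built binaries
# ("substitutes") for the package.
guix weather openmpi

# Install the package: a substitute is downloaded when one is
# available, transparently to the user...
guix install openmpi

# ...whereas --no-substitutes forces a local build from source,
# producing the same result.
guix install --no-substitutes openmpi
```

Either way the installed package is the same; substitutes are merely a cache of builds the user would otherwise perform locally.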