Friday, April 17, 2026

Multi merge sort, or when optimizations aren't

In our previous episode we wrote a merge sort implementation that runs a bit faster than the one in stdlibc++. The question then becomes: could it be made even faster? If you go through the relevant literature, one potential improvement is to do a multiway merge. That is, instead of merging two arrays into one, you merge four into one using, for example, a priority queue. This seems like a slam dunk for performance:

- Doubling the number of arrays to merge at a time halves the number of total passes needed.
- The priority queue has a known static maximum size, so it can be put on the stack, which is guaranteed to be in the cache all the time.
- Processing an element takes only log(#lists) comparisons.

Implementing multimerge was conceptually straightforward, but getting all the gritty details right took a fair bit of time. Once I got it working, the end result was slower. And not by a little, either, but more than 30% slower. Trying some optimizations made it a bit faster, but not noticeably so.

Why is this so? Maybe there are bugs that cause it to do extra work? Assuming that is not the case, what is actually going on? Measuring seems to indicate that a notable fraction of the runtime is spent in the priority queue code. Beyond that I got very little to nothing.

The best hypothesis I could come up with has to do with the number of comparisons made. A classical merge sort does two if statements per output element: one to determine which of the two lists has the smaller element at the front, and one to see whether removing the element exhausted the list. The former is basically random and the latter is always false except when the last element is processed. This amounts to 0.5 mispredicted branches per element per round. A priority queue has to do a bunch more work to preserve the heap property. The first iteration needs to check the root and its two children. That's three comparisons for values and two checks whether the children actually exist.
Those are much less predictable than the comparisons in merge sort. Computers are really efficient at doing simple things, so it may be that the additional bookkeeping is so expensive that it negates the advantage of fewer rounds. Or maybe it's something else. Who's to say? Certainly not me. If someone wants to play with the code, the implementation is here. I'll probably delete it at some point as it does not really have any advantage over the regular merge sort.

📝Nibble Stew
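To make the comparison concrete, here is a minimal sketch of the multiway-merge idea using std::priority_queue. This is my own illustration, not the post's implementation; the pop/push pair per output element is where the extra heap comparisons and bookkeeping come from.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

// Minimal k-way merge: a min-heap holds the front element of each
// sorted run. Every output element costs one pop and (usually) one
// push, each doing O(log k) comparisons plus bounds checks -- the
// overhead suspected of eating the savings from fewer merge passes.
std::vector<int> multiway_merge(const std::vector<std::vector<int>>& runs) {
    using Entry = std::tuple<int, std::size_t, std::size_t>; // value, run, index
    std::priority_queue<Entry, std::vector<Entry>, std::greater<>> heap;
    for (std::size_t r = 0; r < runs.size(); ++r)
        if (!runs[r].empty())
            heap.emplace(runs[r][0], r, 0);
    std::vector<int> out;
    while (!heap.empty()) {
        auto [value, r, i] = heap.top();
        heap.pop();
        out.push_back(value);
        if (i + 1 < runs[r].size())            // refill from the same run
            heap.emplace(runs[r][i + 1], r, i + 1);
    }
    return out;
}
```

A two-way merge by comparison needs only a single, fairly predictable branch to pick the smaller front element, which is the asymmetry the hypothesis above rests on.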


Thursday, April 16, 2026

Proposal: faster and smaller MC/DC

This post describes a proposal and is a call for clients, if you will. I have an idea for an optimization in the GCC MC/DC instrumentation (-fcondition-coverage) that I estimate would reduce the overhead in compile time, object size, and run time, roughly 2-3 times, and I am looking for clients to fund this work.

GCC has supported MC/DC since version 14 (released May 2024) and it works quite well. I recently fixed a performance bug that made a massive difference for (very) large expressions, which went unnoticed for a while because such large expressions are quite uncommon and the performance has been adequate. I do believe we can improve both the compile time and the quality of instrumentation further.

I did some more experiments and compared the compile times for a couple of real-world programs. The baseline is built with no flags, coverage with -ftest-coverage -fprofile-arcs, and MC/DC with -ftest-coverage -fcondition-coverage on GCC 14.2. This table shows that compile times are very sensitive to the program structure and that MC/DC slows down compiles significantly, sometimes up to 3 times. All compile times are in seconds.

           Baseline  Coverage  MC/DC
  SQLite   2.3       3.4       5.2
  TagLib   2.2       3.2       2.9
  FreeType 1.3       1.7       2.5
  JUCE     1.4       1.9       1.9

Object sizes too are greatly affected by instrumentation, albeit slightly less consistently, but the numbers do support that the generated instrumentation (and not the analysis) is the main driver for compile times. The object sizes are in megabytes (MB).

           Baseline  Coverage  MC/DC
  SQLite   1.4       2.5       3.9
  TagLib   3.0       6.5       4.5
  FreeType 1.0       2.3       3.8
  JUCE     1.5       3.4       2.8

Faster compiles are important and go beyond simple ergonomics. Increased latency in the edit-compile-test cycle is very detrimental to both efficiency and effectiveness; reduced latency empowers engineers to solve harder problems and, in my experience, in a better way.
The faster compile does not just save time (which is already precious and expensive), but reduces context switches and makes for a stronger feedback loop. The importance of fast and solid feedback loops is well understood, and has been covered by countless books, articles, blog posts, and talks. I would argue it is vital to optimize the edit-compile-test cycle when working with MC/DC, since reaching (and understanding) coverage is often the most expensive and time-consuming phase of system development. There’s efficiency to be gained, too, just by reducing the time wasted waiting for the compiler to finish. When developing the test suite for MC/DC, the program will be recompiled many times, time which is effectively spent idle, so just a few seconds here and there add up fast.

The improvements in size and runtime overhead mean even larger programs can fit on small embedded systems, which is particularly relevant in the context of MC/DC, as safety-critical systems often run on small computers, ECUs, and microcontrollers. This goes beyond just edit-compile-test friction; if instrumented programs cannot run on the device, it is not a mere inconvenience, it is a barrier to progress.

The compile time and size tests show that the overhead of coverage instrumentation greatly depends on the structure of the program. Note that compilation is a complicated process and the 2-3x reduction in overhead is an estimate. To make this improvement in GCC happen, send an email to j@patch.no and I will send you a formal proposal and discuss the terms. After we sign the contract I will implement these optimizations and take care of upstreaming and integrating the changes into GCC.

📝patch – Blog
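For readers unfamiliar with what -fcondition-coverage measures, here is a standard MC/DC illustration (textbook material, not GCC internals): each condition in a decision must be shown to independently flip the outcome, which a set of roughly n+1 test vectors can do instead of the 2^n needed for exhaustive testing.

```cpp
// A three-condition decision of the kind MC/DC instrumentation
// tracks. Exhaustive testing needs 8 input vectors; MC/DC is
// satisfied by 4, where each condition has a pair of vectors that
// differ only in that condition and flip the decision's outcome.
bool decision(bool a, bool b, bool c) { return a && (b || c); }

// a flips the outcome: (1,1,0) -> true  vs (0,1,0) -> false
// b flips the outcome: (1,1,0) -> true  vs (1,0,0) -> false
// c flips the outcome: (1,0,1) -> true  vs (1,0,0) -> false
// Covering set: {TTF, FTF, TFF, TFT}
```

The instrumentation overhead the proposal targets comes from the extra bookkeeping the compiler emits to record which of these condition outcomes were observed at run time.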

Wednesday, April 15, 2026

MrDocs Bootstrap: One Script to Build Them All

When new developers joined the MrDocs team, we expected the usual ramp-up: learning the codebase, understanding the architecture, and getting comfortable with the review process. What we did not expect was that building and testing the project would be the hardest part. People dedicated to the project full-time spent weeks just trying to get a working build. Even when they succeeded, each person ended up with their own set of workarounds: a custom script here, a patched flag there, an undocumented environment variable somewhere else. One unrelated commit from someone else could silently break another developer’s local setup. And even after all of that, they didn’t know how to run the commands to test the project.

As the complexity grew, we naturally reached for a package manager. We adopted vcpkg, but over time we discovered that our problem was too complex for what any package manager is designed to handle. The build type combinations, the sanitizer propagation, the cross-platform toolchain differences, and the IDE configurations: these are workflow problems that kept accumulating. That realization, combined with an onboarding crisis where new contributors could not build the project at all, led us to write our own bootstrap script.

The idea was not unfamiliar: at the C++ Alliance, we work closely with the Boost libraries, and Boost has shipped a bootstrap script for years. We knew the pattern worked. We just needed to apply it to our own dependency problem.

This post explains why robust C++ workflows are fundamentally difficult, not only for dependency management but also for supporting multiple platforms, compilers, and testing configurations. It describes what we learned from our experience with vcpkg and how a bootstrap script solved the problem for MrDocs.
Contents:

- Why Dependency Management Is Hard
  - A Combinatorial Explosion
  - Why C++ Makes It Worse
- What Went Wrong for MrDocs
  - Where vcpkg Fell Short
  - The Problems No Package Manager Solves
  - Five Workflows and Counting
- The Bootstrap Script
  - How It Evolved
  - Key Design Decisions
- What We Learned

Why Dependency Management Is Hard

A Combinatorial Explosion

Suppose your project depends on Package A >=1.0 and Package B >=2.0, but all options where A >=1.0 require B …

Why C++ Makes It Worse

A developer using Clang 20 on a fresh Ubuntu machine gets build errors from the standard library, not from their own code. Testing every Clang version with every GCC’s libstdc++ is infeasible, but specific combinations matter, and the mismatch is not obvious to the developer when it happens.

- Platform explosion: Windows/Linux/macOS multiplied by Debug/Release/OptimizedDebug, GCC/Clang/MSVC/AppleClang, shared/static, and sanitizer variants creates a combinatorial explosion of configurations that all need to be tested. Each platform also has its own quirks: git symlinks behave differently on Windows, Ninja availability varies, and even the way you specify compiler flags differs between MSVC and GCC/Clang.

- Conditional dependencies: in C++, build options frequently add or remove entire dependencies. An image processing library might support PNG, JPEG, and WebP, each requiring its own codec library. Enabling or disabling a format changes the dependency graph. Build scripts also commonly look for host dependencies (system libraries for talking to the OS, GPU, or network) that you are not expected to build yourself but that must be present on the machine. The dependency graph is not static; it depends on the configuration.

- Closed-source dependencies: all of the problems above assume you have the source code and can rebuild with the correct flags. Sometimes you do not. When a dependency is distributed only as a pre-built binary, there is no way to adjust the ABI, propagate sanitizer flags, or change the build type. If it was compiled with incompatible settings, there is nothing you can do about it. It becomes a hard constraint on the entire system.

[Mindmap: the dimensions of the C++ dependency problem — no standard format (built from source, closed-source binaries); compatibility (ABI, API/templates, build type/CRT); propagation (viral flags, viral macros, sanitizers, categorical options); dependencies (conditional on build options, host/system libraries, closed-source binaries); platform (toolchain setup, compiler + stdlib combos, combinatorial explosion)]

In C++, the general case involves so many dimensions that no existing tool handles all of them well.

What about CPS?

The Common Package Specification (CPS) is an interesting effort to standardize how C++ packages are consumed. A .cps file describes everything a build system needs to find and link against an already-built package: include paths, library paths, compiler flags. This is valuable, but it operates at the point of consumption, where we have already made all the decisions about platform, compiler, build type, and sanitizers. It assumes the dependency has already been built in a compatible way. It does not describe how to build the dependency with the correct flags in the first place.

For example, if we need AddressSanitizer, all dependencies must be built with ASan instrumentation. A CPS file tells us how to consume a package that was built with ASan, but it does not know how to rebuild that package with ASan if it was not. The problems described above are all about making those upstream decisions correctly, which happens before CPS enters the picture.

What Went Wrong for MrDocs

MrDocs depends on LLVM, Duktape, Lua, and libxml2 (and previously also fmt). Over time, three categories of problems accumulated.
Where vcpkg Fell Short

For over a year, we used vcpkg to manage these dependencies. MrDocs is a tool, not a library, so we only needed vcpkg for acquiring our own dependencies rather than for making ourselves easy to consume downstream. It worked at first, but the complexity of our workflows gradually outgrew what vcpkg was designed to handle:

- Build types: MrDocs developers frequently need a Debug build with optimization enabled because the codebase is large enough that an unoptimized debug build is painfully slow. On MSVC, Debug and Release are ABI-incompatible, so a “Debug with optimization” configuration does not fit neatly into vcpkg’s Debug/Release binary model.

- Patches and dual paths: vcpkg applies patches to libraries that do not follow CMake conventions. This meant we had to support two ways to find the same library: the vcpkg-patched version and the upstream version. When libraries do follow CMake conventions, we do not need vcpkg as much. But when they do not, the patches make vcpkg less useful rather than more. Contributors kept opening PRs proposing yet another way to locate a dependency. In a build script, every new path is expensive to test.

- Rigid baseline: vcpkg’s baseline model pins all libraries to a single snapshot. We are tightly coupled to a specific LLVM commit, so we could not use vcpkg for LLVM from the start. That alone meant vcpkg could only manage a subset of our dependencies. On top of that, when fmt bumped a major version and broke downstream consumers, it showed that the baseline approach is too rigid for projects that use a few unrelated libraries. Sometimes the entire baseline would be updated and libraries we had no reason to touch just got upgraded, introducing unexpected breakage. Different developers also had different baseline expectations, so the same vcpkg.json could produce different results depending on when someone last updated.
- Missing dependencies: some dependencies were not in vcpkg at all, or not configured the way we needed them. LLVM is the classic example: we need a specific commit, built with specific flags. Tools do not provide their own vcpkg integration; everything is centralized in the vcpkg repository. This forced us into mixed-source dependency management where some deps come from vcpkg and some from custom scripts.

- No variant support: when we needed sanitizer builds (ASan, MSan, UBSan, TSan), vcpkg had nothing to offer. It knows Debug and Release. Building sanitized variants required custom scripts or custom environment variables to pass the information to the package manager internally.

- Manifest vs. classic mode: vcpkg offers two modes for specifying dependencies. Some users simply did not like one of the modes, and we had so many complaints that we ended up supporting both. Unlike npm’s local and global modes, vcpkg’s manifest and classic modes do not play well together, so supporting both effectively meant maintaining two separate dependency workflows.

The vcpkg team has done outstanding work on a genuinely difficult problem, and vcpkg handles a lot of it well. Many of these limitations may simply be the best anyone can do given the complexity of the language. Most of the problems listed above do have external solutions: you can set custom triplets, configure environment variables, pass flags manually, and configure build types from outside vcpkg. That is how we handled it for a long time. The issue is that those solutions live outside the vcpkg workflow. We owned that part, and maintaining it was hard. Having vcpkg in the equation meant one more workflow to support, even when the problem was not vcpkg’s fault. The accumulated complexity of maintaining vcpkg alongside our own custom scripts is what eventually became unsustainable.
The Problems No Package Manager Solves

- Dependency acquisition at configure time: we once had FetchContent as an optional alternative to find_package, so CMake could download dependencies if they were not already present. A team member’s internet went down during a build and CMake failed. The reaction was strong: nobody should be required to have internet to compile a project they already downloaded. The feature was removed entirely. This reinforced that dependency acquisition needed to be a separate, explicit step that completes before the build system even runs.

- IDE integration: developers had to manually configure run configurations for CLion, VS Code, or Visual Studio, and those configurations broke whenever the application changed, build options were added, or targets were renamed.

- Platform-specific toolchain setup: on macOS with Homebrew Clang, the standard tool paths (llvm-ar, llvm-ranlib, ld.lld) are not where the system expects them. On Windows, MSVC requires a Developer Command Prompt with specific environment variables. Setting up either of these correctly from scratch is its own project.

- Debugger integration: there was no automated way to set up LLDB formatters or GDB pretty printers for Clang and MrDocs symbols. Developers working on the AST had to inspect raw memory layouts.

- The sheer volume of instructions: the build script should not assume a package manager, so you end up documenting both the manual and the package manager path. For each dependency, for each variant (sanitizers, special build types), for each platform. When the package manager path does not work for a given configuration, the developer falls back to the manual path, and that path has to be maintained too.

Five Workflows and Counting

The proliferation was gradual. We started with manual CMake commands, then added FetchContent as an alternative, then adopted vcpkg, then had to support both vcpkg modes, then needed custom CI scripts.
By mid-2025, we had accumulated five different workflows for installing dependencies:

- Manual CMake: the original path, configuring everything by hand
- FetchContent: later removed after the internet incident
- vcpkg (manifest mode): the “official” package manager path
- vcpkg (classic mode): because some users did not like manifest mode
- Custom CI scripts: CI uses its own language to describe workflows, and there was no single command that could configure all possible build variants

[Flowchart: a new developer faces the question “Which workflow?” with five possible answers — manual CMake, FetchContent, vcpkg manifest, vcpkg classic, CI scripts]

We tried to create a set of instructions that would describe what the user could do for each dependency. For each dependency, we would explain each of the ways to fetch and build it: manual, vcpkg manifest, vcpkg classic. On top of that, for each special variant (sanitizer builds, special build type combinations), there would be yet another set of instructions per dependency per workflow. The documentation grew combinatorially, and people got lost.

The Bootstrap Script

The core principle was separation of concerns: CMake builds the project, but something else manages the dependencies. The bootstrap script fills that gap.

Before:

    # Clone and build LLVM (specific commit)
    git clone https://github.com/llvm/llvm-project.git
    cd llvm-project && git checkout dc4cef81d47c...
    cmake -S llvm -B build -DCMAKE_BUILD_TYPE=Release ...
    cmake --build build
    cmake --install build
    cd ..

    # Download and build Duktape
    curl -L https://github.com/.../duktape-2.7.0.tar.xz | tar xJ
    cmake -S duktape -B duktape/build ...
    cmake --build duktape/build
    cmake --install duktape/build

    # Repeat for libxml2, Lua...
    # Then configure MrDocs with all the install paths
    cmake -S mrdocs -B mrdocs/build \
        -DLLVM_ROOT=/path/to/llvm/install \
        -Dduktape_ROOT=/path/to/duktape/install \
        -Dlibxml2_ROOT=/path/to/libxml2/install \
        ...
    cmake --build mrdocs/build

After:

    python bootstrap.py

The script handles everything else:

- Probes MSVC (Windows only): detects and imports the Visual Studio development environment
- Checks system prerequisites: validates that cmake, git, python, and a C/C++ compiler are available
- Sets up compilers: resolves compiler paths, detects Homebrew Clang on macOS
- Configures build options: prompts for build type, sanitizer, and preset name (or accepts defaults in non-interactive mode for CI)
- Probes compilers: runs a dummy CMake project to extract the compiler ID, version, and capabilities before building anything
- Sets up Ninja: finds or downloads the Ninja build system
- Installs dependencies: fetches and builds Duktape, Lua, libxml2, and LLVM in topological order, each with the correct flags for the chosen configuration
- Generates CMake presets: writes a CMakeUserPresets.json with all dependency paths, compiler configuration, and IDE settings
- Generates IDE configurations: run/debug configs for CLion, VS Code, and Visual Studio, plus debugger pretty printers
- Builds MrDocs: configures, builds, and optionally installs MrDocs using the generated presets
- Runs tests: executes the test suite in parallel

[Sequence diagram: the developer runs python bootstrap.py; the script probes the MSVC environment (Windows), checks prerequisites (cmake, git, compiler), sets up compilers and Ninja, prompts for build type, sanitizer, and preset, probes the compiler ID and version, fetches and builds dependencies, generates CMakeUserPresets.json, generates IDE and debugger configs, builds and installs MrDocs, and runs the tests]

How It Evolved

The first commit landed on July 16, 2025. Over the next eight months, the script went through seven distinct phases of development across roughly 57 commits.

[Timeline: bootstrap.py evolution — Jul 2025: foundation and UX; Aug 2025: IDE configs, sanitizers, and Windows; Sep 2025: developer tooling and LLDB; Dec 2025: modularization into package; Mar 2026: CI integration]

The first week (July 16–19) was about getting the one-liner to work at all: the core workflow, colored prompts, parallel test execution, and the first installation docs.

Phase 1: Foundation (July 16–19, 2025)

- 521cc704 build: bootstrap script
- e32bb36e build: bootstrap uses another path for mrdocs source when not already called from source directory
- e7e3ef51 build: bootstrap build options list valid types
- 75c28e45 build: bootstrap prompts use colors
- c156a05f build: bootstrap removes redundant flags
- c14f071b build: bootstrap runs tests in parallel
- 1a9de28c docs: one-liner installation instructions
- 76611f93 build: bootstrap paths use cmake relative path shortcuts

The second and third weeks turned the script into a development environment setup tool by generating IDE run configurations for CLion, VS Code, and Visual Studio. By the end of July, the script also supported custom compilers, sanitizer builds, and Homebrew Clang on macOS.
Phase 2: IDE Integration (July 22–28, 2025)

- 502cfbd8 build: bootstrap generates debug configurations
- b546c260 build: bootstrap dependency refresh run configurations
- 83525d38 build: bootstrap documentation run configurations
- 2cfdd19e build: bootstrap website run configurations
- ca4b04d3 build: bootstrap MrDocs self-reference run configuration
- b5f53bd9 build: bootstrap XML lint run configurations

Phase 3: Build Variants and Sanitizers (July 29–August 1, 2025)

- 0a751acd build: bootstrap supports custom compilers
- ff62919f build: LLVM runtimes come from presets
- 2b757fac build: bootstrap debug presets with release dependencies
- 0d179e84 build: installation workflow uses Ninja for all projects
- 3d8fa853 build: installation workflow supports sanitizers
- 26cec9d8 build: installation workflow supports homebrew clang

August was the cross-platform month. Windows support required probing vcvarsall.bat, handling Visual Studio tool paths, and ensuring git symlinks worked. Paths were made relocatable so CMakeUserPresets.json files could be shared across machines.

Phase 4: Cross-Platform Polish (August 2025)

- fc2aa2d6 build: external include directories are relocatable
- 21c206b9 build: bootstrap vscode run configurations
- d2f9c204 build: Visual Studio run configurations
- 0ca523e7 build: bootstrap supports default Visual Studio tool paths on Windows
- 4b79ef41 build(bootstrap): probe vcvarsall environment
- 4d705c96 build(bootstrap): ensure git symlinks
- 524e7923 build(bootstrap): visual studio run configurations and tasks
- 94a5b799 build(bootstrap): remove dependency build directories after installation

September and October added developer tooling: LLDB data formatters for Clang and MrDocs symbols, pretty printer configurations, libcxx hardening mode, and the style guide documentation.
Phase 5: Developer Tooling (September–October 2025)

- fc98559a build(bootstrap): include pretty printers configuration
- 069bd8f4 feat(lldb): LLDB data formatters
- 1b39fdd7 fix(lldb): clang ast formatters
- 988e9ebc build(bootstrap): config info for docs
- f48bbd2f build: bootstrap enables libcxx hardening mode
- 5e16e3fa Fix support for clang cl-mode driver (#1069)

By December, the monolithic 2,700-line bootstrap.py was refactored into a proper Python package under util/bootstrap/ with 20+ modules organized by concern: core/ (platform detection, options, UI), configs/ (IDE run configurations), presets/ (CMake preset generation), recipes/ (dependency building), and tools/ (compiler detection). The package also includes its own test suite, which means one person changing the bootstrap script for their platform is not going to break it for someone else on a different platform.

Phase 6: Modularization (November–December 2025)

- 0d4a8459 build(bootstrap): modularize recipes
- 7ba4699b build(bootstrap): transition banner
- 99d61207 build(bootstrap): handle empty input and “none” in prompt retry
- e3b3fd02 build(bootstrap): convert script into package structure

In March 2026, the bootstrap script replaced the custom CI dependency scripts. This was a major milestone: users, developers, and CI now all use the same tool. CI was simplified significantly because the dependency steps are no longer custom shell commands maintained separately. And because CI runs the bootstrap on every push, the script itself is continuously tested across all platforms. If the bootstrap breaks on any platform, CI catches it immediately.

Phase 7: CI Integration (2026)

- 6cee4af2 use system libs by default (#1077)
- 9b4fafbf ci: dependency steps use bootstrap script

Key Design Decisions

Several technical challenges required careful design. Here are the most interesting ones.

Flag propagation. Not all flags should reach all dependencies, and the propagation rules vary per flag type and per dependency.
Some sanitizers require all dependencies to be instrumented, while others only need compile-time checks. Build type does not always propagate (libxml2 is always built as Release). Compiler paths always propagate. The script evaluates each dependency individually and checks ABI compatibility before deciding whether to honor or coerce the build type.

Windows ABI handling. On MSVC, Debug and Release are ABI-incompatible at the CRT level. When the script detects a mismatch, it coerces the dependency build to “OptimizedDebug” (Debug ABI with /O2 optimization). This is different from RelWithDebInfo, which uses the Release ABI with debug symbols and will not link with a Debug MrDocs.

Cross-platform compiler detection. On Linux, compiler detection is straightforward. On macOS with Homebrew Clang, the script detects and injects the correct llvm-ar, llvm-ranlib, ld.lld, and libc++ paths, which are not on the default search path. On Windows, the script locates Visual Studio via vswhere.exe, runs vcvarsall.bat with debug output, and parses the environment variables into Python for all subsequent CMake calls.

CMake preset generation. After building dependencies, the script generates a CMakeUserPresets.json with all dependency paths, compiler configuration, and platform conditions. Paths are made relocatable by replacing absolute prefixes with CMake variables (${sourceDir}, ${sourceParentDir}, $env{HOME}).

IDE run configurations. The script generates ready-to-use configurations for CLion, VS Code, and Visual Studio: building and debugging MrDocs, running tests, generating documentation, refreshing dependencies, generating config info and YAML schemas, validating XML output, running MrDocs on Boost libraries (auto-discovered), and reformatting source files. CMake custom commands can create build targets, but you cannot debug them from the IDE.

Recipe system. Dependencies are defined as JSON recipe files with source URLs, build steps, and dependency relationships. The bootstrap topologically sorts them and builds them in order. Each recipe tracks its state with a stamp file (recipe version, git ref, platform, build parameters). If any parameter changes, the dependency is rebuilt. The stamp system also generates CI cache keys like llvm-abc1234-release-ubuntu-24.04-clang-19-ASan.

Refresh command. Because of the stamp system, a developer can run the bootstrap with --refresh-all at any time. The script re-evaluates all stamps and rebuilds only the dependencies that are out of date with whatever configurations are needed. This makes updating dependencies after a configuration change (new sanitizer, different compiler, updated LLVM commit) a single command rather than a manual process of figuring out which dependencies need rebuilding.

What We Learned

Users, developers, and CI now all use the same tool. Users get a one-liner installation. Developers get IDE run configurations and debugger integration. CI gets non-interactive mode with sanitizer support. The exact same code path that builds dependencies on a developer’s laptop now builds dependencies in CI.

Separation of concerns. When your project’s requirements are complex enough (multiple build types, sanitizer variants, cross-platform quirks, heavy dependencies like LLVM), a custom script that owns the entire dependency lifecycle is simpler than trying to make a general-purpose tool handle every edge case. Existing tools solve the general case well. Our specific combination of requirements needed something tailored.

C++ has no unified build workflow. Every platform has its own conventions for finding compilers, setting up environments, and linking libraries. Just finding and setting up MSVC from a script is a project in itself.

New contributors can start working immediately. Before the bootstrap, getting a working build could take days. Now it takes a single command, and the IDE configurations are included.
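To make the recipe idea tangible, here is a sketch of what such a recipe file could look like. The field names are hypothetical, invented for illustration, and not the actual MrDocs schema; the Duktape URL placeholder is taken from the post's own example.

```json
{
  "name": "duktape",
  "version": "2.7.0",
  "source": "https://github.com/.../duktape-2.7.0.tar.xz",
  "depends": [],
  "build": {
    "generator": "Ninja",
    "build_type": "Release",
    "options": { "BUILD_SHARED_LIBS": "OFF" }
  },
  "stamp": ["version", "platform", "compiler", "build_type", "sanitizer"]
}
```

Hashing the fields listed under "stamp" is what would let the bootstrap decide that, say, switching the sanitizer invalidates this dependency while a preset rename does not.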
We still have small glitches as new compilers and platforms appear, but each fix is a localized change in one module rather than a cross-cutting update to five independent workflows. The complete bootstrap package is available in the MrDocs repository.

📝The C++ Alliance

Tuesday, April 14, 2026

Monday, April 13, 2026

Sunday, April 12, 2026

Saturday, April 11, 2026

Preventing Integer Overflow in Physical Computations

Integers overflow. That is not a controversial statement. What is surprising is how easily overflow can hide behind the abstraction of a units library.

Most developers immediately think of explicit or implicit scaling operations — calling .in(unit) to convert a quantity, constructing a quantity from a different unit, or assigning between quantities with different units. These are indeed places where overflow can occur, and the library cannot prevent it at compile time when the values are only known at runtime. But at least these operations are visible in your code: you wrote the conversion, you asked for the scaling, and you can reason about whether the multiplication or division might overflow your integer type.

The far more insidious problem is what happens when you don't ask for a conversion. When you write 1 * m + 1 * ft, the library must automatically convert both operands to a common unit before performing the addition. That conversion — which you never explicitly requested — involves multiplication or division by scaling factors. With integer representations, those scaling operations can overflow silently, producing garbage results that propagate through your calculations undetected.

No compile-time programming can prevent this. The values are only known at runtime. But very few libraries provide proper tools to detect it. This article explains why that limitation is real, how other libraries have tried to work around it, and what mp-units provides to close the gap as tightly as the language allows.

📝mp-units
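To see what detecting the silent wrap at runtime can look like, here is a small sketch in plain C++. It is my own illustration, not the mp-units API: a unit conversion ultimately multiplies the stored count by a scaling factor, and a checked multiply can refuse instead of wrapping.

```cpp
#include <cstdint>
#include <limits>
#include <optional>

// A hidden conversion to a common unit boils down to multiplying the
// stored count by a conversion factor. A raw `*` silently wraps (or is
// undefined for signed types); this version detects the overflow first.
// For simplicity it assumes a positive, non-zero factor.
std::optional<std::int32_t> scale_checked(std::int32_t count, std::int32_t factor) {
    if (count > std::numeric_limits<std::int32_t>::max() / factor ||
        count < std::numeric_limits<std::int32_t>::min() / factor)
        return std::nullopt;  // scaling would overflow the representation
    return count * factor;
}
```

For example, re-expressing 3000 km in millimetres multiplies the count by 1 000 000, which exceeds INT32_MAX; the checked version reports failure where a raw multiply would silently produce garbage that then propagates through the computation.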