Wednesday, May 6, 2026

If this page is useful, please consider supporting it.

Tuesday, May 5, 2026

Construct with Collaborators, Call with Work

This article was adapted from a Google Tech on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

By Shahar Roth

Classes require various objects and parameters to function. The "Construct with Collaborators, Call with Work" guideline can help you construct effective inputs:

* Use the constructor for collaborators: the dependencies that establish the object's identity. Collaborators stay with the object for its lifetime to enable it to fulfill its ongoing duties.
* Pass work to methods: the parameters that change with each interaction. Unique to each call, these inputs provide the specific data needed for an operation, such as a file path or database query.

Consider a ReportGenerator that needs a database, a formatter, and a date range to generate a report. The database and formatter, as collaborators, are injected via the constructor, while dateRange, which varies per report generation, is passed as a method parameter to the generate method:

```java
class ReportGenerator {
  private final Database database;
  private final Formatter formatter;

  // database and formatter are passed as collaborators.
  ReportGenerator(Database database, Formatter formatter) {
    this.database = database;
    this.formatter = formatter;
  }

  // dateRange is passed as a parameter.
  Report generate(Range dateRange) {
    return formatter.format(database.getRecords(dateRange));
  }
}
```

A single ReportGenerator object can generate multiple reports with different date ranges:

```java
ReportGenerator generator = new ReportGenerator(database, formatter);
Report report1 = generator.generate(dateRange1);
Report report2 = generator.generate(dateRange2);
```

Following the "Construct with Collaborators, Call with Work" guideline promotes:

* Reusability: Enables instances to be used for multiple, distinct operations.
* Testability: Separates dependency setup from business logic.
* Cleaner code: Hides implementation dependencies from the object's users.
* Predictable behavior: Locks in dependencies at creation time.

Note that the definition of "collaborator" versus "work" depends on the object's identity. For example, a RequestMessage could be a collaborator for a RequestHandler if the handler operates on a single request, or work if the handler processes different requests with each method call.📝Google Testing Blog
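To make that closing note concrete, here is a minimal C++ sketch (my own illustration; the SingleRequestHandler and RequestHandler names are hypothetical, not from the episode) of the same message type acting as a collaborator in one design and as work in another:

```cpp
#include <string>
#include <utility>

struct RequestMessage { std::string payload; };
struct Response {};

// One handler instance per request: the request is part of the handler's
// identity, so it is a collaborator and arrives via the constructor.
class SingleRequestHandler {
  RequestMessage request_;
public:
  explicit SingleRequestHandler(RequestMessage request)
      : request_(std::move(request)) {}
  Response handle() {
    // Placeholder; a real handler would operate on request_ here.
    return {};
  }
};

// One long-lived handler for many requests: each request is work and
// arrives as a method parameter.
class RequestHandler {
public:
  Response handle(const RequestMessage& request) {
    // Placeholder; a real handler would dispatch on request here.
    (void)request;
    return {};
  }
};
```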
Introducing the QML Profiler Skill for Agentic Development

Instead of painstaking row-by-row or slow flame-graph reviews, the QML profiler skill for agentic development allows developers to delegate code performance profiling to AI agents. The skill guides the developer through the workflow, triggers the QML profiler, crunches through the resulting raw data, presents the performance bottlenecks in a concise report, and suggests improvements. The skill targets 2D Qt Quick applications and supports four profiling modes: rendering, logic, memory, and full. It can also analyze an existing trace file directly, without re-running the application, for example when the performance trace has been captured on the target hardware.📝Qt Blog
ELF’s ways to combine potentially non-unique objects

Previously [I wrote](/blog/2026/04/24/define-static-array/): > [Template parameter objects of array type] are permitted to overlap or be > coalesced, just like `initializer_list`s and string literals. Clang trunk > isn't smart enough to coalesce potentially non-unique objects [but] > GCC, once it implements `define_static_array`, will presumably make them the same. Well, GCC 16 has an experimental implementation of `define_static_array` (compile with `g++ -std=c++26 -freflection`), and it does _not_ coalesce template parameter objects of array type in the way I expected. Digging deeper into why not, I learned that there are at least three ways compilers and linkers (on ELF — that is, non-Windows — platforms) conspire to "merge" potentially non-unique objects: * Merging at the compiler level (for `initializer_list` backing arrays) * Sections with `SHF_MERGE` (for string literals and backing arrays) * Sections with `SHF_GROUP`, a.k.a. COMDAT sections (for inline variables)📝Arthur O’Dwyer
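As a concrete illustration of the last two mechanisms (my own sketch, not from the post), here is roughly where a typical ELF toolchain puts the two simplest cases; `readelf -S` and `readelf --section-groups` on the resulting object file show the `SHF_MERGE` flags and the COMDAT groups:

```cpp
// String literal: GCC and Clang conventionally emit it into a mergeable
// section such as .rodata.str1.1 with flags SHF_ALLOC|SHF_MERGE|SHF_STRINGS
// ("aMS"), letting the linker coalesce identical strings across objects.
const char* greeting = "hello";

// Inline variable: every TU that ODR-uses it emits a definition in its own
// section wrapped in a COMDAT group (SHF_GROUP); the linker keeps one copy
// of the group and discards the rest.
inline int counter = 0;
```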
Introducing conan-py-build: Build Python Wheels with Conan

Packaging Python extensions that contain native C or C++ code has come a long way. PEP 517 defined a contract between Python build frontends (pip, build, uv) and the build backend that produces the wheel. That standard is what makes it possible today to connect a CMakeLists.txt to a pyproject.toml, declare a backend, and let pip wheel . drive the build.

The C/C++ dependency layer is a different story. Somewhere between pyproject.toml and CMakeLists.txt, a find_package(OpenSSL) has to resolve. In practice, most projects solve that outside the wheel build: through system packages, vendored source trees, FetchContent, or a separate native package manager install step. That means a separate step to manage before the Python build, often duplicated across CI configurations and developer setups.

Today, we are happy to introduce conan-py-build, a PEP 517 build backend that brings Conan's C/C++ dependency management directly into the Python wheel build. The project is currently in beta and under active development. We are releasing it now to gather early feedback, and we would love for you to try it and tell us what you think.

What is conan-py-build?

conan-py-build is a build backend for Python packages that contain native C/C++ extensions. You declare it in pyproject.toml, provide a conanfile.py that describes the C/C++ build and its dependencies, and build wheels through standard Python packaging commands such as pip wheel . . When a build runs, conan-py-build:

* Resolves the C/C++ dependency graph through Conan, downloading precompiled binaries where available and building the rest from source
* Prepares the build toolchain through the corresponding Conan generators
* Builds the extension using your project's build system
* When the extension links against shared libraries, copies those runtime dependencies next to the extension module and patches RPATH on Linux and macOS where applicable
* Packages the result into a standard Python wheel

Because it is a PEP 517 backend, it plugs into pip, build, and uv directly, and fits into cibuildwheel-based CI workflows for multi-platform builds.

A minimal example

Let's build a tiny Python package that exposes a single function, greet(name), which prints a colored greeting to the terminal. We'll use CMake for the native build, pybind11 for the Python bindings, and {fmt} as a dependency pulled in through Conan. The same setup extends to other build systems like Meson or Autotools.

The project layout:

```
mypackage/
├── pyproject.toml
├── conanfile.py
├── CMakeLists.txt
└── src/
    ├── mypackage/
    │   └── __init__.py
    └── mypackage.cpp
```

pyproject.toml declares the build backend and the project metadata:

```toml
[build-system]
requires = ["conan-py-build"]
build-backend = "conan_py_build.build"

[project]
name = "mypackage"
version = "0.1.0"
```

conanfile.py describes the C/C++ side: its dependencies (pybind11 and fmt) and how they are built and packaged:

```python
from conan import ConanFile
from conan.tools.cmake import CMake, cmake_layout

class MyPackageConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    generators = "CMakeToolchain", "CMakeDeps"

    def layout(self):
        cmake_layout(self)

    def requirements(self):
        self.requires("pybind11/3.0.1")
        self.requires("fmt/12.1.0")

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def package(self):
        cmake = CMake(self)
        cmake.install()
```

CMakeLists.txt builds the extension against pybind11 and fmt and installs the resulting module into the Python package directory so the backend picks it up when assembling the wheel:

```cmake
cmake_minimum_required(VERSION 3.15)
project(mypackage LANGUAGES CXX)

set(PYBIND11_FINDPYTHON ON)
find_package(pybind11 CONFIG REQUIRED)
find_package(fmt CONFIG REQUIRED)

pybind11_add_module(_core src/mypackage.cpp)
target_link_libraries(_core PRIVATE fmt::fmt)

install(TARGETS _core DESTINATION mypackage)
```

The C++ source defines greet(name) using fmt's color support and exposes it as a compiled _core module:

```cpp
#include <string>

#include <fmt/color.h>
#include <pybind11/pybind11.h>

void greet(const std::string& name) {
    fmt::print(fmt::fg(fmt::color::green), "Hello, {}!\n", name);
}

PYBIND11_MODULE(_core, m) {
    m.def("greet", &greet);
}
```

And src/mypackage/__init__.py re-exports it so callers see mypackage.greet:

```python
from mypackage._core import greet

__all__ = ["greet"]
```

With that in place, building the wheel is the standard Python packaging command:

```
$ pip wheel . -w dist/
```

Conan resolves pybind11 and fmt from Conan Center Index, CMake compiles the extension against them, and you get a platform-specific wheel in dist/. Install it and try it:

```
$ pip install dist/mypackage-*.whl
$ python -c "import mypackage; mypackage.greet('world')"
Hello, world!
```

You should see Hello, world! printed in green.

More examples: the repo has nanobind bindings, shared library dependencies, C++ sources fetched at build time, and a full multi-platform cibuildwheel setup for Linux, macOS, and Windows.

What conan-py-build brings

Some of the advantages of bringing Conan into the wheel build:

* One build entry point. The usual pip wheel . command can drive both the Python packaging step and the native C/C++ dependency/build step.
* Conan Center. A large catalog of C/C++ libraries with recipes tested across a broad compiler and OS matrix.
* Binary caching. Compiled dependencies are reused across builds and CI runs via the Conan cache or a shared remote, and rebuilt only when settings change.
* Profiles and lockfiles. Profiles define the native build configuration of each wheel (compiler, architecture, C++ standard, dependency options), and lockfiles pin the graph for reproducible builds.
* Shared library handling. Conan-managed runtime libraries are deployed next to the extension module, and RPATH is adjusted on Linux and macOS where applicable.

Conclusions

conan-py-build pulls the C/C++ dependency layer inside the PEP 517 build so the Python build and the C/C++ build are one problem, not two. If you have been maintaining a separate dependency step alongside your Python packaging, it is worth a look.

Check out the documentation and browse the examples. conan-py-build is still in beta and available on PyPI. If something does not work, or there is a workflow you want supported, please open an issue on GitHub and let us know where it fits, or where it does not yet. Looking forward to your feedback.📝Conan C/C++ Package Manager Blog

Monday, May 4, 2026

Kitware Awarded $4M DARPA SABER Contract to Evaluate AI-Enabled Battlefield Systems

Clifton Park, NY — May 4, 2026 — Kitware has been awarded a two-year, $4 million contract from the Defense Advanced Research Projects Agency (DARPA), through its Information Innovation Office (I2O), as part of the Securing Artificial Intelligence for Battlefield Effective Robustness (SABER) program. The effort will focus on developing software and participating in field tests to evaluate the resilience of AI-enabled battlefield systems against external threats.📝Kitware Inc
Giving Copilot more C++ context using custom instructions in VS Code

In February, we announced how GitHub Copilot can now use C++ symbol context and CMake build configuration awareness to deliver smarter suggestions in Visual Studio Code. Today, we're excited to share new ways to further enhance your C++ development experience with Copilot and get the most out of the language-driven suggestions, by leveraging custom instructions […]📝C++ Team Blog
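For a flavor of the mechanism (a minimal sketch of my own, not taken from the post, and the guideline content is entirely hypothetical): in VS Code, Copilot picks up repository-wide guidance from a `.github/copilot-instructions.md` file, so a C++ project might include something like:

```
# Copilot instructions for this repository

- Target C++20 and prefer standard library facilities over hand-rolled ones.
- Use RAII for resource management; do not suggest raw new/delete.
- Follow the build presets defined in CMakePresets.json when suggesting build steps.
```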
Introducing the Qt Code Review Skills for Agentic Code Review

Reviewing, auditing, or sanity-checking code usually means running separate linters, reading through checklists, and manually verifying Qt-specific patterns across dozens of files. The Qt code review skills help developers automate part of this code review phase. Instead of a laborious manual walkthrough of every file, the AI agent runs a deterministic linter followed by six parallel deep-analysis agents, surfacing real issues with mitigations in a few minutes. AI-Powered Code Reviews with Reliable Results📝Qt Blog

Sunday, May 3, 2026

15 Different Ways to Filter Containers in Modern C++

Do you know how many ways we can implement a filter function in C++? While the problem is relatively easy to understand - take a container, copy elements that match a predicate, and return a new container - it's a good exercise with the C++ Standard Library and a chance to check a few ideas. We can also apply some Modern C++ techniques, including C++23. Let's start!

The article was written in 2021 and recently updated in late 2025 to include additional techniques from C++23. Additionally, the text is also republished on the ACCU website: ACCU Overload, 33: 15 Different Ways to Filter Containers in Modern C++.

The Problem Statement

To be precise, by a filter I mean a function with the following interface:

```cpp
auto Filter(const Container& cont, UnaryPredicate p) {}
```

It takes a container and a predicate, and creates an output container with elements (copies) that satisfy the predicate. We can use it like the following:

```cpp
const std::vector<std::string> vec { "Hello", "**txt", "World", "error",
                                     "warning", "C++", "****" };

auto filtered = Filter(vec, [](auto& elem) { return !elem.starts_with('*'); });
// filtered should have "Hello", "World", "error", "warning", "C++"
```

Writing such a function can be a good exercise with various options and algorithms in the Standard Library. What's more, our function hides internal mechanisms like iterators, so it's more like a range-based version. Let's start with the first option:

Good old Raw Loops

While it's good to avoid raw loops, they might help us to fully understand the problem. For our filtering problem, we can write the following code:

```cpp
// filter v1
template <typename T, typename Pred>
auto FilterRaw(const std::vector<T>& vec, Pred p) {
    std::vector<T> out;
    for (auto&& elem : vec)
        if (p(elem))
            out.push_back(elem);
    return out;
}
```

Simple yet very effective. Please note some nice features of this straightforward implementation. The code uses auto return type deduction, so there's no need to write the explicit type (although it could be just std::vector). It returns the output vector by value, but the compiler will leverage copy elision (named return value optimization - NRVO), or move semantics at worst.

Since we're at raw loops, we can take a moment and appreciate the range-based for loops that we got with C++11. Without this functionality, our code would look much worse:

```cpp
// filter v1 - old way
template <typename T, typename Pred>
std::vector<T> FilterRawOld(const std::vector<T>& vec, Pred p) {
    std::vector<T> out;
    for (typename std::vector<T>::const_iterator it = begin(vec); it != end(vec); ++it)
        if (p(*it))
            out.push_back(*it);
    return out;
}
```

And now let's move to something better and see some of the existing std:: algorithms that might help us with the implementation.

Filter by std::copy_if

std::copy_if is probably the most natural choice. We can leverage back_inserter to push matched elements into the output vector.

```cpp
// filter v2
template <typename T, typename Pred>
auto FilterCopyIf(const std::vector<T>& vec, Pred p) {
    std::vector<T> out;
    std::copy_if(begin(vec), end(vec), std::back_inserter(out), p);
    return out;
}
```

std::remove_copy_if

We can also do the reverse:

```cpp
// filter v3
template <typename T, typename Pred>
auto FilterRemoveCopyIf(const std::vector<T>& vec, Pred p) {
    std::vector<T> out;
    std::remove_copy_if(begin(vec), end(vec), std::back_inserter(out), std::not_fn(p));
    return out;
}
```

Depending on the requirements, we can also use remove_copy_if, which copies elements that do not satisfy the predicate. For our implementation, I had to add std::not_fn to reverse the predicate. One remark: std::not_fn has been available since C++17.

The Famous Remove Erase Idiom

One thing to remember: remove_if doesn't remove elements; it only moves them to the end of the container. So we need to use erase to do the final work:

```cpp
// filter v4
template <typename T, typename Pred>
auto FilterRemoveErase(const std::vector<T>& vec, Pred p) {
    auto out = vec;
    out.erase(std::remove_if(begin(out), end(out), std::not_fn(p)), end(out));
    return out;
}
```

Here's a minor inconvenience. Because we don't want to modify the input container, we had to copy it first. This might cause some extra processing and is less efficient than using back_inserter.

Adding Some C++20

After seeing a few examples that can be implemented in C++11, we can leverage a convenient feature from C++20: erase_if:

```cpp
// filter v5
template <typename T, typename Pred>
auto FilterEraseIf(const std::vector<T>& vec, Pred p) {
    auto out = vec;
    std::erase_if(out, std::not_fn(p));
    return out;
}
```

This function is superior to the remove/erase idiom, as you can just use a single function. One minor drawback: this approach copies all elements first, so it might be slower than the approach with copy_if.

Adding Some C++20 Ranges

C++20 also brought us powerful ranges and range algorithms, and we can use them as follows:

```cpp
// filter v6
template <typename T, typename Pred>
auto FilterRangesCopyIf(const std::vector<T>& vec, Pred p) {
    std::vector<T> out;
    std::ranges::copy_if(vec, std::back_inserter(out), p);
    return out;
}
```

The code is super simple, and we might even say that our Filter function has no point here, since the Ranges interface is so easy to use in code directly.

Making it More Generic

So far, I showed you code that operates on std::vector. But how about other containers? Let's try and make our Filter function more generic. This is easy with std::erase_if, which has overloads for many standard containers:

```cpp
// filter v7
template <typename TCont, typename Pred>
auto FilterEraseIfGen(const TCont& cont, Pred p) {
    auto out = cont;
    std::erase_if(out, std::not_fn(p));
    return out;
}
```

And another version for ranges:

```cpp
// filter v8
template <typename TCont, typename Pred>
auto FilterRangesCopyIfGen(const TCont& cont, Pred p) {
    TCont out;
    std::ranges::copy_if(cont, std::back_inserter(out), p);
    return out;
}
```

Right now, it can work with other containers, not only with std::vector:

```cpp
std::set<std::string> mySet { "Hello", "**txt", "World", "error",
                              "warning", "C++", "****" };

auto filtered = FilterEraseIfGen(mySet, [](auto& elem) { return !elem.starts_with('*'); });
```

On the other hand, if you prefer not to copy all elements upfront, we need a bit more work.

Generic "copy_if" Approach

The main problem is that we cannot use back_inserter on associative containers, or on containers that don't support the push_back() member function. In that case, we can fall back to the std::inserter adapter. That's why a possible solution is to detect if a given container supports push_back:

```cpp
// filter v9
template <typename T, typename = void>
struct has_push_back : std::false_type {};

template <typename T>
struct has_push_back<T,
    std::void_t<decltype(std::declval<T>().push_back(std::declval<typename T::value_type>()))>>
    : std::true_type {};

template <typename TCont, typename Pred>
auto FilterCopyIfGen(const TCont& cont, Pred p) {
    TCont out;
    if constexpr (has_push_back<TCont>::value)
        std::copy_if(begin(cont), end(cont), std::back_inserter(out), p);
    else
        std::copy_if(begin(cont), end(cont), std::inserter(out, out.begin()), p);
    return out;
}
```

Above, I used a technique available up to C++17 with void_t and SFINAE; read more here: How To Detect Function Overloads in C++17, std::from_chars Example - C++ Stories. But since C++20, we can leverage concepts and make the code much more straightforward:

```cpp
template <typename T>
concept has_push_back = requires(T container, typename T::value_type v) {
    container.push_back(v);
};
```

And see more in Simplify Code with if constexpr and Concepts in C++17/C++20 - C++ Stories.

More C++20 Concepts

We can add more concepts and restrict other template parameters. For example, if I write:

```cpp
auto filtered = FilterCopyIf(vec, [](auto& elem, int a) { return !elem.starts_with('*'); });
```

In the above code, I tried to use two arguments for the unary predicate. In Visual Studio, I'm getting the following error message:

```
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29333\include\algorithm(1713,13): error C2672: 'operator __surrogate_func': no matching overloaded function found
1> C:\Users\Admin\Documents\GitHub\articles\filterElements\filters.cpp(38): message : see reference to function template instantiation '_OutIt std::copy_if >>,std::back_insert_iterator >>,Pred>(_InIt,_InIt,_OutIt,_Pr)' being compiled
1> with
```

Not very helpful… but then after a few lines, we have some clear reason:

```
error C2780: 'auto main:: ::operator ()(_T1 &,int) const': expects 2 arguments - 1 provided
```

We can experiment with concepts and restrict our predicate to be std::predicate, an existing concept from the Standard Library. In our case, we need a function that takes one argument and then returns a type convertible to bool:

```cpp
// filter v10
template <typename T, std::predicate<const T&> Pred>
auto FilterCopyIfConcepts(const std::vector<T>& vec, Pred p) {
    std::vector<T> out;
    std::copy_if(begin(vec), end(vec), std::back_inserter(out), p);
    return out;
}
```

And then the problematic code:

```cpp
auto filtered = FilterCopyIfConcepts(vec, [](auto& elem, int a) { return !elem.starts_with('*'); });
```

This results in the following message:

```
1> filters.cpp(143,19): error C2672: 'FilterCopyIfConcepts': no matching overloaded function found
1> filters.cpp(143,101): error C7602: 'FilterCopyIfConcepts': the associated constraints are not satisfied
```

It's better, as we have messages about our top-level function rather than internals, but it would be great to see why and which constraint wasn't satisfied.

Making it Parallel?

Since C++17, we also have parallel algorithms, so why not add them to our list? As it appears, the parallel std::copy_if is not supported in Visual Studio, so this problem is a bit more complicated. We'll leave this topic for now and try to solve it next time. For completeness, we can write the following naive code:

```cpp
// filter v11
std::mutex mut;
std::for_each(std::execution::par, begin(vec), end(vec),
    [&out, &mut, p](auto&& elem) {
        if (p(elem)) {
            std::unique_lock lock(mut);
            out.push_back(elem);
        }
    });
```

This is, of course, a naive version, and the lock largely serializes the process. The topic is quite advanced, so please have a look at my other text and experiment (filter v12): Implementing Parallel copy_if in C++ - C++ Stories.

Direct filter support with ranges::filter_view, C++20

In C++20, we got std::ranges::filter_view and std::views::filter. So the code is much simpler now:

```cpp
// filter v13
template <typename T, std::predicate<const T&> Pred>
auto FilterRangesFilter(const std::vector<T>& vec, Pred p) {
    std::vector<T> out;
    for (const auto& elem : vec | std::views::filter(p))
        out.push_back(elem);
    return out;
}
```

Play at Compiler Explorer

Adding ranges::to, C++23

What's more, we can use ranges::to to automatically create a container:

```cpp
// filter v14
template <typename T, std::predicate<const T&> Pred>
auto FilterRangesFilterTo(const std::vector<T>& vec, Pred p) {
    return vec | std::views::filter(p) | std::ranges::to<std::vector>();
}
```

Additionally, ranges::to works with any container type and determines an appropriate way to populate it. So it works with more than just std::vector: See here @Compiler Explorer. Here's an example:

```cpp
template <typename Cont, std::predicate<const typename Cont::value_type&> Pred>
auto FilterRangesFilterTo(const Cont& cont, Pred p) {
    return cont | std::views::filter(p) | std::ranges::to<Cont>();
}
```

C++23: Lazy Filtering with std::generator

All previous versions of Filter in this article return a materialised container - a std::vector, std::set, or something similar. That's often what we want, but sometimes it's more efficient to:

* avoid allocating a separate container,
* process elements on the fly (e.g. streaming input, large ranges), or
* combine filtering with another lazy pipeline.

C++23 adds std::generator, a coroutine-based type that models a range. We can use it to express a lazy filter:

```cpp
template <typename T, std::predicate<const T&> Pred>
std::generator<const T&> FilterLazy(const std::vector<T>& vec, Pred p) {
    for (const auto& elem : vec) {
        if (p(elem))
            co_yield elem;
    }
}
```

Usage is straightforward:

```cpp
std::vector<std::string> vec { "Hello", "**txt", "World", "error",
                               "warning", "C++", "****" };

auto gen = FilterLazy(vec, [](const auto& s) { return !s.starts_with('*'); });

// Elements are produced lazily, on demand:
for (const auto& s : gen) {
    std::cout << s << '\n';
}
```

A few essential properties of this approach:

* Lazy evaluation - elements are filtered only when you iterate the generator.
* No intermediate container - no extra allocation by default.

Summary

In this article, I've shown at least 15 possible ways to filter elements from various containers. We started from code that worked on std::vector, and you've also seen multiple ways to make it more generic and applicable to other container types. For example, we used std::erase_if from C++20, concepts, and even a custom type trait. We also reached for the "holy grail" of C++23: ranges::to, combined with views::filter.
See my code, with all the examples, on this repository: https://github.com/fenbf/articles/blob/master/filterElements/filters.cpp📝C++ Stories

Saturday, May 2, 2026

Every float on one page

https://vitaut.net/posts/2026/every-float/

In my previous post about Żmij, a high-performance binary-to-decimal floating-point conversion library, I drew a small diagram of a rounding interval around a single floating-point value. That worked well enough for the local picture, but it doesn't say much about the global one. Where do the irregular intervals at powers of two come from? What does the subnormal range actually look like? And how do the binades (the $[2^e, 2^{e+1})$ slices of the real line on which all FP numbers share an exponent) fit together? So I tried to draw that instead: the entire set of representable numbers, laid out so the interesting structure is visible at a glance.

A small format that is actually used

To draw all the floats you really need a small format. Luckily small formats have recently escaped the textbook: 8-bit floating-point formats are now used in production for AI training and inference. The one we will look at here is E4M3: 1 sign bit, 4 exponent bits, 3 significand bits, bias 7, with subnormals and a single NaN slot. It is specified in FP8 Formats for Deep Learning by NVIDIA, Arm and Intel, three companies that famously agree on very little, and is supported in hardware by GPUs such as the H100, H200 and B200, where it is the workhorse format for low-precision inference. E4M3 has only 256 encodings, so we can comfortably show every value at once and still have room to think.

The usual picture is a single line

The traditional way to visualize floating-point numbers is to put them on a real number line. For E4M3 the result is accurate but not very useful: most of the action is squeezed near zero, huge gaps open up near the maximum, and there is nothing on the page that tells you why the spacing changes the way it does. (You can see for yourself in the linear number line panel at the bottom of the embedded explorer further down.) The structure that makes floating-point floating-point, the partition into binades, is exactly the thing the linear axis hides. Zooming in helps a bit, but it doesn't scale: at any zoom level you only ever see a slice of one or two binades, and the relationship between the binary representation and the decimal positions is still mostly invisible.

A 2-D picture, suggested by an LLM

After some back and forth with an LLM, a much better idea came up: forget the linear axis, just plot every value by its exponent. The first sketch was crude but got the point across, which was more than I had any right to expect from a vibe-coding session:

```
Log₂ scale (this is the "real" structure)

E = -9   • • • • • • •      (subnormals)
E = -6   • • • • • • • •
E = -5   • • • • • • • •
E = -4   • • • • • • • •
E = -3   • • • • • • • •
E = -2   • • • • • • • •
E = -1   • • • • • • • •
E =  0   • • • • • • • •
E =  1   • • • • • • • •
E =  2   • • • • • • • •
E =  3   • • • • • • • •
E =  4   • • • • • • • •
E =  5   • • • • • • • •
E =  6   • • • • • • • •
E =  7   • • • • • • • •

👉 Every exponent bucket has exactly 8 evenly spaced values
👉 That's why FP behaves like a logarithmic number system
```

The reason this works is that every finite floating-point value can be written as $c \cdot 2^{e_2}$ with $c$ an integer in a small range. For E4M3 normals $c \in \{8, \ldots, 15\}$ and $e_2 = E - 10$; for subnormals $c \in \{1, \ldots, 7\}$ and $e_2 = -9$. So every value sits at integer coordinates $(c, e_2)$: the rows are binades, the columns are integer significands, and within each row the dots are linearly spaced.
The mysterious “logarithmic spacing” of floating-point numbers is just what you see when you stack these linear rows and look at them from the side.

Vibe-coded into something usable

A few more iterations and the sketch turned into the interactive explorer below (standalone page, source). Click any dot to inspect the value, toggle subnormals and NaN, or scrub through the encoding directly. Two axes, two scales: the binary exponent $e_2$ runs down the left, and the matching decimal exponent $e_{10} = \lfloor e_2 \log_{10} 2 \rfloor$ down the right.

A few things to look at:

* The horizontal line through each row is the binade, extended by half a cell on each side. Those half-cells are exactly the rounding interval: real numbers between them round back to the dot in the middle (modulo rounding-mode tie-breaks I'm glossing over).
* Crossing each row are vertical decimal ticks at two scales: minor every $10^{e_{10}}$, major every $10^{e_{10}+1}$. They are the only two decimal grids that matter in that binade. Anything coarser misses the rounding interval, anything finer just adds digits.

So shortest-decimal conversion is one row's worth of work: pick a dot, find the coarsest tick that lands inside its rounding interval.

Reading the shortest decimal by hand

Let's pick a value and walk through it. Set the encoding to 51 in the explorer (or click the dot at row $e_2 = -4$, column $c = 11$). The bits are 0 0110 011, i.e. $E = 6$, $M = 3$, so

$$ v = (8 + 3) \cdot 2^{6 - 10} = 11 \cdot 2^{-4} = 0.6875. $$

Its row is $e_2 = -4$, with $e_{10} = \lfloor -4 \log_{10} 2 \rfloor = -2$. So on this row: minor ticks are at multiples of $10^{-2} = 0.01$, major ticks are at multiples of $10^{-1} = 0.1$. The dot's neighbors in the binade are at $10 \cdot 2^{-4} = 0.625$ on the left and $12 \cdot 2^{-4} = 0.75$ on the right. The half-cells around the dot therefore span $(0.65625, 0.71875)$, the rounding interval.

Now eyeball the ticks in that interval: the major tick at $0.7$ sits right inside it. (Several minor ticks $0.66, 0.67, \ldots, 0.71$ are also inside, but we don't care: the major tick already gives the shortest answer.) Therefore the shortest decimal that round-trips through E4M3 to $0.6875$ is simply 0.7. No tables, no big-integer arithmetic, just one dot and the ticks on its row.

This is exactly what Schubfach (and Dragonbox, and Żmij) compute for double, just at a much larger scale and with a lot more arithmetic to keep track of: pick the coarsest decimal grid whose spacing still fits inside the rounding interval, then round the value to that grid.

Where the special cases go

Subnormals turn out to be just the bottom row $e_2 = -9$, with the column index running $1 \ldots 7$ instead of $8 \ldots 15$ and the leftmost half-cell extending a little further than usual. The irregular rounding intervals at powers of two show up at the leftmost normal column ($c = 8$ in every row except the bottom), where the left half-cell is shorter than the right because the predecessor sits in the binade below and is only half a ULP away. Even the $e_{10}$ vs $e_{10} + 1$ decision in Schubfach has an obvious reading: “does the major tick fit inside the interval, or do we have to fall back to minor ticks?”.

The full explorer is a single HTML/JS/SVG file with no build step; if you want to fork it, adapt it to a different format (E5M2? bfloat16? Whatever your hardware vendor invents next quarter?), or just read how it's wired together, the source is in the website's repo. Happy floatspotting!
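To double-check the worked example programmatically, here is a tiny C++ sketch (my own, not part of the post or its explorer) that decodes an E4M3 byte using the $(c, e_2)$ formulation above:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Decode an E4M3 byte: 1 sign bit, 4 exponent bits (bias 7), 3 significand
// bits. The single NaN encoding (S.1111.111) is ignored here for brevity;
// E4M3 has no infinities.
double decodeE4M3(std::uint8_t bits) {
    int sign = (bits >> 7) & 1;
    int E    = (bits >> 3) & 0xF;  // biased exponent field
    int M    = bits & 0x7;         // significand field
    double v = (E == 0)
        ? std::ldexp(M, -9)          // subnormal: c = M in {0..7}, e2 = -9
        : std::ldexp(8 + M, E - 10); // normal: c = 8 + M, e2 = E - 10
    return sign ? -v : v;
}

int main() {
    // Encoding 51 is 0 0110 011: (8 + 3) * 2^(6 - 10) = 0.6875.
    std::printf("%g\n", decodeE4M3(51));
}
```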
📝vitaut.net

Friday, May 1, 2026

Thursday, April 30, 2026

_Adventure:_ Is there light in the cobble crawl?

The original _Colossal Cave Adventure_ consists basically of a Fortran source file and a textual data file. These files would often travel from one installation to another via paper printouts: printed out at one site, typed in by hand at another. The lines of WOOD0350's Fortran source (intentionally or not) never exceed 80 columns regardless of your tab stop. But the data file fits within 80 columns only with a tab stop of four. With an eight-space tab stop, four lines of the data file exceed 80 columns:📝Arthur O’Dwyer
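To see why the same file can fit at one tab stop and overflow at another, here is a small C++ sketch (my own illustration, not from the post) computing the printed width of a line for a given tab stop:

```cpp
#include <cstddef>
#include <string>

// Tabs advance to the next multiple of `tabstop`, so the printed width of a
// line depends on where its tabs fall, not just on its character count.
std::size_t expandedWidth(const std::string& line, std::size_t tabstop) {
    std::size_t col = 0;
    for (char c : line) {
        if (c == '\t')
            col += tabstop - col % tabstop;  // jump to the next tab stop
        else
            ++col;
    }
    return col;
}

// A line with expandedWidth(line, 4) <= 80 can still have
// expandedWidth(line, 8) > 80, which is exactly what happens to four
// lines of the Adventure data file.
```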