Monday, April 13, 2026


Sunday, April 12, 2026

Saturday, April 11, 2026

Preventing Integer Overflow in Physical ComputationsIntegers overflow. That is not a controversial statement. What is surprising is how easily overflow can hide behind the abstraction of a units library. Most developers immediately think of explicit or implicit scaling operations — calling .in(unit) to convert a quantity, constructing a quantity from a different unit, or assigning between quantities with different units. These are indeed places where overflow can occur, and the library cannot prevent it at compile time when the values are only known at runtime. But at least these operations are visible in your code: you wrote the conversion, you asked for the scaling, and you can reason about whether the multiplication or division might overflow your integer type. The far more insidious problem is what happens when you don't ask for a conversion. When you write 1 * m + 1 * ft, the library must automatically convert both operands to a common unit before performing the addition. That conversion — which you never explicitly requested — involves multiplication or division by scaling factors. With integer representations, those scaling operations can overflow silently, producing garbage results that propagate through your calculations undetected. No compile-time programming can prevent this. The values are only known at runtime. But very few libraries provide proper tools to detect it. This article explains why that limitation is real, how other libraries have tried to work around it, and what mp-units provides to close the gap as tightly as the language allows.📝mp-units

Friday, April 10, 2026

Building C/C++ libraries for HarmonyOS with vcpkgWe're currently working on porting Qt to HarmonyOS. For our CI and developer machines, we need a number of third-party libraries built for HarmonyOS. Cross-compiling open-source C and C++ libraries for this platform has been a manual, error-prone process. Each library has its own build system, whether CMake, Autotools, or Meson. Each needs individual attention to produce correct binaries for the OHOS target. We have been maintaining a hand-written shell script that builds libraries one by one, with per-library workarounds for cross-compilation quirks. With our vcpkg fork, that script is now a single command.📝Qt Blog

Thursday, April 9, 2026

A brief history of C/C++ programming languagesInitially, we had languages like Fortran (1957), Pascal (1970), and C (1972). Fortran was designed for number crunching and scientific computing. Pascal was restrictive with respect to low-level access (it was deliberately “safe”, as meant for teaching structured programming). So C won out as a language that allowed low-level/unsafe programming (pointer arithmetic, direct memory access) …📝Daniel Lemire's blog

Wednesday, April 8, 2026

Bazel Q1 2026 Community UpdateAnnouncements
Mark Your Calendars: BazelCon 2026 is Heading to Amsterdam! Get ready to build at scale in the heart of Europe. We are thrilled to announce that BazelCon 2026 will be taking place in the vibrant city of Amsterdam from October 13th–15th. Whether you’re a seasoned build engineer or just starting your journey with monorepos and hermeticity, this is the place to be. Expect three days of deep-dive technical sessions, hands-on workshops, and the chance to connect with the global community of maintainers and power users.
The Details at a Glance:
What: BazelCon 2026
Where: Postillion Hotel & Convention Centre Amsterdam, Netherlands
When: October 13–15, 2026
October 13th - Training Day
October 14th & 15th - 2 conference days filled with technical sessions, Birds of a Feather and networking with the best in the field.
Registration opens April 22nd - register via the BazelCon website!
For the most up-to-date BazelCon news and updates, follow the Bazel X account and the #bazelcon Bazel Slack channel.
Call for Proposals
Do you have a Bazel story to tell? We want to hear how you're using Bazel to solve complex problems, whether you’re optimizing massive monorepos or building custom rules for your team. Sharing your real-world wins and lessons learned helps the entire community grow, so don't hesitate to submit a proposal regardless of your experience level.
CFP Opens: April 22
CFP Closes: June 21
Review Period: June 22 – July 8
Speaker Notifications: July 20
Schedule Announcement: July 22
The CfP submission form will be available via the BazelCon website.
Want to become a Sponsor of BazelCon 2026? Hosted by the Bazel Community in partnership with The Linux Foundation, BazelCon is the premier annual event for build enthusiasts, maintainers, and contributors to connect in an inclusive environment. Sponsoring BazelCon 2026 puts your company right in front of the best build and platform engineers in the business.
It is a great way to show your support for the community, meet key decision-makers, and find top talent to join your team. By becoming a partner, you help make the event possible while making sure the smartest people in tech know exactly who you are. Download the 2026 Sponsorship Prospectus Here. Email us at bazelcon-planning@bazel.build to reserve your sponsorship, ask questions, or talk about different options.
Product Updates
Upcoming Bazel releases
Bazel 9.1.0 is expected to release on 2026-04-16. Please send cherry-pick PRs against the release-9.1.0 branch before the RC1 cutoff on 2026-04-09.
Bazel 8.7.0 is expected to release on 2026-05-04. Please send cherry-pick PRs against the release-8.7.0 branch before the RC1 cutoff on 2026-04-27.
Q4 & Q1 releases
9.0.0 was released in January ‘26, followed by patch 9.0.1.
8.6.0 was released in February ‘26.
8.5.0 was released in December ‘25, followed by patch 8.5.1.
7.7.0 was released in October ‘25, followed by patch 7.7.1.
6.6.0 was released in January ‘26. This release also marked the end of support for Bazel 6.
Community Corner
Bazel for CLion plugin updates
A few updates from the JetBrains* team:
The plugin supports Bazel 9 and now comes with a Starlark REPL.
C++ code insight under transitions is being rolled out. CLion Classic lets the user select the resolve configuration for such files if more than one configuration is available; CLion Nova support for configuration switching is on the way.
The GoogleTest TEST_P macro is supported for individual test runs.
Code insight takes into account conlyopts and cxxopts attributes.
A new PTY-capable view is enabled by default for all outputs.
BUILD Foundation
Announced at BazelCon 2025, the BUILD Foundation has been established as a Linux Foundation Directed Fund to accelerate the community roadmap for Bazel and related build technologies.
While Google maintains governance of the core Bazel "kernel," the new fiscal entity provides a formal structure to fund improved documentation, rulesets, and open-source infrastructure. The BUILD Foundation is now enrolling founding members. Read the Prospectus and Membership Entitlements to learn more about the values of becoming a member.
Web Updates: Previews for the BCR and Bazel.build
A New Look for the BCR: The community is working together on a more modern way to explore the Registry. Head over to bcr.stack.build to see the new UI in action and how the ecosystem is evolving.
Evolving Bazel.build: Thanks to a collaborative effort, the next version of our homepage is taking shape. Visit preview.bazel.build to see how we’re making documentation and resources easier for everyone to navigate.
Upcoming Meetup.build events
Build Meetup Munich - May 11, 2026
Community created content
Articles
Goodbye Dockerfile, Hello Bazel: Doubling Our CI Speed - by Nikita Chepanov and Oleg Dashevskii at Plaid
Build less, merge faster: avoiding diamond merges with a merge queue - by Nikita Chepanov and Oleg Dashevskii at Plaid
Bazel 9 Migration: How to Get Faster Builds Before the Bzlmod Refactor - by Pratik Mahalle
Bazel for SONiC: What We've Learned and Contributed - by Şahin Yort
Managing Bazel Flags in Monorepos with Flagsets (PROJECT.scl) - by Adin Ćebić
Composing Bazel rules with subrules - by Adin Ćebić
Lightning-fast BUILD file generation with Gazelle lazy indexing - by Jay Conrod @EngFlow
Bazel rule extensions - by Keith Smiley
Build Snippets #1 - Affected Target Analysis with Bazel - by Chris McDonald
Migrating to Bazel symbolic macros - by Alexey Tereshenkov @Tweag
Videos
Bazel 9 is here! - by Aspect Build
Tutorial: Set up Gazelle to automatically create your Bazel BUILD files - & other beginner-friendly videos by Jon Block here.
Bazel and Rust at OpenAI with David Zbarsky - by Aspect Build
Zero-sysroot hermetic LLVM cross-compilation using Bazel - FOSDEM talk by David Zbarsky and Corentin Kerisit
Resources
GitHub repository: https://github.com/bazelbuild/bazel
Releases: https://github.com/bazelbuild/bazel/releases
Slack chat: https://slack.bazel.build
Google group: bazel-discuss@googlegroups.com
Special Interest Groups (SIG): Reach out to the email(s) listed below if you’d like to be added to the SIG calendar invites.
SIG - meeting frequency - point of contact:
Rules authors - Every two weeks - bazel-contrib@googlegroups.com
Android app development - Monthly - ahumesky@google.com
Bazel plugin for IntelliJ - Monthly - en@jetbrains.com
Remote execution API working group - Monthly - chiwang@google.com
Supply chain security / SBOM - Weekly - fwe@google.com
Interested in learning about SIGs or starting a new one? Find more information on our website. Want to get your SIG listed? Please add it to the Community repository.
Ideas, feedback, and submissions are welcome! Thank you for reading this edition! Let us know if you’d like to see any new information or changes in future community updates by reaching out to product@bazel.build. We look forward to hearing from you. Thanks, Google Bazel team
* Copyright © 2026 JetBrains s.r.o. JetBrains and IntelliJ are registered trademarks of JetBrains s.r.o.📝Bazel Blog
Joining Community, Detecting Communities, Making Community.Joining Community Early in Q1 2026, I joined the C++ Alliance. A very exciting moment. So I began to work in early January under Joaquin’s mentorship, with the idea of having a clear contribution to Boost.Graph by the end of Q1. After a few days of auditing the current state of the library versus the literature, it became clear that community detection methods (aka graph clustering algorithms) were sorely lacking in Boost.Graph, and that implementing one would be a great start to revitalizing the library and filling perhaps the largest methodological gap in its current algorithmic coverage. Detecting Communities The vision was (and still is) simple: i) implement the Louvain algorithm, ii) build upon it to extend to the more complex Leiden algorithm, iii) finally get started with the Stochastic Block Model. If the plan is straightforward, the Louvain literature is not, and the BGL abstractions even less so. But under the review and guidance of Joaquin and Jeremy Murphy (maintainer of the BGL), I was able to put up a satisfying implementation. Using the Newman-Girvan Modularity as the quality function to optimize, one can simply call:

double Q = boost::louvain_clustering(
    g, cluster_map, weight_map, gen,
    boost::newman_and_girvan{}, // quality function (default)
    1e-7, // min_improvement_inner (per-pass convergence)
    0.0   // min_improvement_outer (cross-level convergence)
);
// Q = 0.42, cluster_map = {0,0,0, 1,1,1}

As often happens with heuristics, there is a large number of quality functions out there, and this is not because of a lack of consensus: in a 2002 paper, computer scientist Jon Kleinberg proved that no clustering quality function (Modularity, Goldberg density, Surprise…) can simultaneously be: scale-invariant (doubling all edges should not change the clusters), rich (all partitions should be achievable), consistent (shortening distances inside a cluster and expanding distances between clusters
should lead to similar results). In other words, there is no way to implement a single function hoping it would exhibit three basic properties we would genuinely expect. All we can do is explore different trade-offs using different quality functions. So I left some doors open to be able to inject an arbitrary quality function. If this function exposes a minimal, “naive” interface, the algorithm will statically use a slow but generic path, and iterate across all the edges of the graph to compute the quality. It is slow, yes, but it makes the study of qualities easier, as one does not have to figure out the local mathematical decomposition of the function to get started with coding:

struct my_quality {
    template <typename G, typename CMap, typename WMap>
    typename boost::property_traits<WMap>::value_type
    quality(const G& g, const CMap& c, const WMap& w) {
        // your custom partition quality function
    }
};

double Q = boost::louvain_clustering(g, cluster_map, weight_map, gen, my_quality{});

However, the Louvain algorithm is extremely popular because it is fast, as it is able to update the quality computational state for each vertex it tries to “insert” into or “remove” from a neighboring putative community. This locality decomposition has to be figured out mathematically for each quality function, so it’s not trivial. I defined a GraphPartitionQualityFunctionIncrementalConcept that refines the GraphPartitionQualityFunctionConcept: if the algorithm detects that the injected quality function exposes an interface for this incremental update, the fast path is taken. One thing I figured out is that the GraphPartitionQualityFunctionIncrementalConcept is for now too specific to the Modularity family. I am currently working on a proposal to increase its scope in future work. The current PR has been carefully tested and benchmarked for correctness and performance, and validated by Jeremy to be merged into the develop branch.
I wrote a paper to be submitted to the Journal of Open Source Software to publish the current results and benchmarks, as we are at least as fast as our competitors, and more generic. There is no equivalent I am aware of. Making Community Concurrently, I worked on summoning the Boost.Graph user base, and it quickly became clear a small local workshop would be a tremendous start: the Louvain algorithm community is based in Louvain (Belgium), its extension was formulated in Leiden (Netherlands), and my PhD graphs network is based in Paris (France), in what has been presented to me as “the Temple of the Stochastic Block Model”! Quite a sign: life finds ways to run in (tight) circles. So the goal of this workshop is to bring together a small group (10-15 people) of researchers, open-source implementers, and industrial users for a day of honest conversation on May 6th 2026. Three questions will anchor the discussions: What types of graphs and data structures do you use in practice? What performance, scalability, and interpretability requirements matter most to you? What algorithms are missing today that Boost.Graph could offer? Ray and Collier from the C++ Alliance will also be there to record the lightning talks and document the process. It will also be the occasion to show off the Python-based animations I put together for the French C++ User Group presentation on March 24th. Those were well received and drew many compliments, as the format pairs well with the visual and dynamic nature of graphs and their algorithms, and I hope it will contribute to the repopularization of Boost.Graph. Graphliiings asseeeeemble!📝The C++ Alliance

Tuesday, April 7, 2026

Frictionless Implementation of Production-Grade GUI on Torizon Embedded LinuxEvaluating and starting to develop professional, production-grade GUIs on embedded Linux should be frictionless. With that goal in mind, we are always working with our partners to improve the Qt developer experience. Together with Toradex we recently made major improvements to the Torizon Qt VS Code template, making it easier for you as a developer to use Qt Device Creation Enterprise workflows inside the same template that you might already have been using with the Device Creation Community Edition. On top of that, there is a brand-new Qt demo in the Torizon Demo Gallery which you can try right away. Torizon is a production-ready, container-based embedded Linux platform that simplifies how Qt applications are deployed and maintained. Qt developers may already be familiar with Boot2Qt, which is a useful tool to get a Qt prototype running quickly. However, scaling that prototype into a secure, maintainable, and updatable product usually requires building and managing your own Yocto stack. Torizon removes this burden, providing a pre-integrated OS, a hardware-optimized Qt runtime, automated OTA updates, CVE tracking and a consistent containerized workflow, letting you focus entirely on your Qt application instead of maintaining the underlying Linux distribution. Below you’ll find what’s new, why it helps Qt developers, and exactly how to try it.📝Qt Blog

Monday, April 6, 2026

Sorting performance rabbit holeIn an earlier blog post we found out that Pystd's simple sorting algorithm implementations were 5-10% slower than their stdlibc++ counterparts. The obvious follow-up nerd snipe is to ask "can we make the Pystd implementation faster than stdlibc++?" For all tests below the data set used was 10 million consecutive 64 bit integers shuffled in a random order. The order was the same for all algorithms. Stable sort It turns out that the answer for stable sorting is "yes, surprisingly easily". I made a few obvious tweaks (whose details I don't even remember any more) and got the runtime down to 0.86 seconds. This is approximately 5% faster than std::stable_sort. Done. Onwards to unstable sort. Unstable sort This one was not, as they say, a picnic. I suspect that stdlib developers have spent more time optimizing std::sort than std::stable_sort simply because it is used a lot more. After all the improvements I could think of were done, Pystd's implementation was consistently 5-10% slower. At this point I started cheating and examined how stdlibc++'s implementation worked to see if there were any optimization ideas to steal. Indeed there were, but they did not help. Pystd's insertion sort moves elements by pairwise swaps. Stdlibc++ does it by moving the last item to a temporary, shifting the array elements onwards and then moving the stored item to its final location. I implemented that. It made things slower. Stdlibc++'s moves use memmove instead of copying (at least according to code comments). I implemented that. It made things slower. Then I implemented shell sort to see if it made things faster. It didn't. It made them a lot slower. So did radix sort. Then I reworked the way pivot selection is done and realized that if you do it in a specific way, some elements move to their correct partitions as a side effect of median selection. I implemented that and it did not make things faster.
It did not make them slower, either, but the end result should be more resistant against bad pivot selection so I left it in. At some point the implementation grew a bug which only appeared with very large data sets. For debugging purposes I reduced the limit where introsort switches from qsort to insertion sort from 16 to 8. I got the bug fixed but the change made sorting a lot slower. As it should. But this raises a question, namely: would increasing the limit from 16 to 32 make things faster? It turns out that it did. A lot. Out of all the perf improvements I implemented, this was the one that yielded the biggest improvement by a fairly wide margin. Going to 64 elements made it even faster, but that made other algorithms using insertion sort slower, so 32 it is. For now at least. After a few final tweaks I managed to finally beat stdlibc++. By how much, you ask? Pystd's best observed time was 0.754 seconds while stdlibc++'s was 0.755 seconds. And it happened only once. But that's enough for me.📝Nibble Stew
IEEE International Symposium on Biomedical Imaging (ISBI) 2026Kitware is excited to announce our participation in the IEEE International Symposium on Biomedical Imaging (ISBI) 2026, taking place in London, UK. ISBI brings together experts working on the theory, algorithms, and computational methods that power modern biomedical imaging—from microscopic analysis to whole-body systems. The conference creates a space where different imaging disciplines connect, exchange ideas, and push the field forward through collaboration.📝Kitware Inc
Mr.Docs: Niebloids, Reflection, Code Removal, New XML GeneratorThis quarter, I focused on two areas of Mr.Docs: adding first-class support for function objects, the pattern behind C++20 Niebloids and Ranges CPOs, and overhauling how the tool turns C++ metadata into documentation output (the reflection layer). Function objects: documenting what users actually call In modern C++ libraries, many “functions” are actually global objects whose type has operator() overloads. The Ranges library, for instance, defines std::ranges::sort() not as a function template but as a variable of some unspecified callable type. Users call it like a function and expect it to be documented like one. Before this quarter, Mr.Docs didn’t know the difference: it would document the variable and its cryptic implementation type. The new function-object support (roughly 4,600 lines across 38 files) bridges this gap. When Mr.Docs encounters a variable whose type is a record with no public members but operator() overloads and special member functions, it now synthesizes free-function documentation entries named after the variable. The underlying type is marked implementation-defined and hidden from the output. Multi-overload function objects are naturally grouped by the existing overload machinery. So, given:

struct abs_fn {
    double operator()(double x) const noexcept;
};
inline constexpr abs_fn abs = {};

Mr.Docs documents it as simply:

double abs(double x) noexcept;

For cases where auto-detection isn’t quite right — for example, when the type has extra public members — library authors can use the new @functionobject or @functor doc commands. There is also an auto-function-objects config option to control the behavior globally. The feature comes with a comprehensive test fixture covering single and multi-overload function objects, templated types, and types that live in nested detail namespaces.
Reflection: from boilerplate to a single generic template The bigger effort — and the one that kept surprising me with its scope — was the reflection refactoring. Mr.Docs converts its internal C++ metadata into a DOM (a tree of lazy objects) that drives the Handlebars template engine. Before this quarter, every type in the system required a hand-written tag_invoke() overload: one function to map the type’s fields to DOM properties, another to convert it to a dom::Value. Adding a new symbol kind meant touching half a dozen files and following a pattern that was easy to get wrong. The goal was simple to state: replace all of that with a single generic template that works for any type carrying a describe macro. Phase 1: Boost.Describe The first attempt used Boost.Describe. I added BOOST_DESCRIBE_STRUCT() annotations to every metadata type and wrote generic merge() and mapReflectedType() templates that iterated over the described members. This proved the concept and eliminated a great deal of boilerplate. However, we didn’t want a public dependency on Boost.Describe, which meant the dependency was hidden in .cpp files and couldn’t be used in templates living in public headers. Phase 2: custom reflection macros So I wrote our own. MRDOCS_DESCRIBE_STRUCT() and MRDOCS_DESCRIBE_CLASS() provide the same compile-time member and base-class iteration as Boost.Describe, but with no external dependency. The macros live in Describe.hpp and produce constexpr descriptor lists that the rest of the system iterates with describe::for_each(). Phase 3: removing the overloads With the describe macros in place, I could write generic implementations of tag_invoke() for both LazyObjectMapTag (DOM mapping) and ValueFromTag (value conversion), plus a generic merge(). Each one replaces dozens of per-type overloads with a single constrained template.
The mapMember() function handles the dispatch: optionals are unwrapped, vectors become lazy arrays, described enums become kebab-case strings, and compound described types become lazy objects — all automatically. Removing the overloads was not as straightforward as I had hoped. The old overloads were entangled with: The Handlebars templates, which assumed specific DOM property names. Renaming symbol to id, type to underlyingType, and description to document required updating templates and golden tests in lockstep. The XML generator, which silently skipped types that weren’t described. Adding MRDOCS_DESCRIBE_STRUCT() to TemplateInfo and MemberPointerType made the XML output more complete, requiring schema updates and golden-test regeneration. The result Out of the original 39 custom tag_invoke(LazyObjectMapTag) overloads, only 7 remain — each with genuinely non-reflectable logic (computed properties, polymorphic dispatch, or member decomposition). Roughly 60 tag_invoke(ValueFromTag) boilerplate overloads were also removed. Adding a new metadata type to Mr.Docs now requires nothing beyond MRDOCS_DESCRIBE_STRUCT() at the point of definition. The XML Generator: a full rewrite in 350 lines The XML generator was the first major payoff of the reflection work (although it was initially done when we were using Boost.Describe). The old generator had its own hand-written serialization for every metadata type, completely independent of the DOM layer. It was a parallel set of per-type functions that had to be kept in sync with every schema change. I replaced it with a generic implementation built entirely on the describe macros. The core is about 350 lines: writeMembers() walks describe_bases and describe_members, writeElement() dispatches on type traits for primitives, optionals, vectors, and enums, and writePolymorphic() handles the handful of type hierarchies (Type, TParam, TArg, Block, Inline) via .inc-generated switches. 
The old generator needed a new function for every type; the new one handles them all, and the 241 files changed in that commit were almost entirely golden-test updates reflecting the now-more-complete and totally changed output. Smaller fixes Alongside the two main efforts, I fixed several bugs that came up during development or were reported by users: Markdown inline formatting (bold, italic, code) and bullet lists were not rendering correctly in certain combinations. Wrapper tags were missing around HTML code blocks. bottomUpTraverse() was silently skipping ListBlock items, causing doc-comment content to be lost. Several CI improvements: faster PR demos, better failure detection, increased test coverage for the XML generator. Looking ahead The reflection infrastructure is now in good shape, and most of the mechanical boilerplate is gone. The remaining tag_invoke() overloads are genuinely custom — they compute properties that don’t exist as C++ members, or they dispatch polymorphically across type hierarchies. Those are worth keeping. Going forward, I’d like to explore whether the describe macros can replace more of the manual visitor code throughout the codebase. As always, feedback and suggestions are welcome — feel free to open an issue or reach out on Slack.📝The C++ Alliance
Speed and SafetyIn my last post I mentioned that the int128 library would be getting CUDA support in the future. The good news is that the future is now! Nearly all the functions in the library are available on both host and device. Any function that has BOOST_INT128_HOST_DEVICE in its signature in the documentation is available for usage. An example of how to use the types in CUDA kernels has been added as well. These can be as simple as:

using test_type = boost::int128::uint128_t;

__global__ void cuda_mul(const test_type* in1, const test_type* in2, test_type* out, int num_elements)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < num_elements)
    {
        out[i] = in1[i] * in2[i];
    }
}

({255}, {2})'
   18 | constexpr u8 z {x + y};
      |                ^~~~~
1 error generated.

Our runtime error reporting system fundamentally uses Boost.Throw_Exception so it can report not only the type, operation, file and line, but also up to an entire stack trace when leveraging the optional linking with Boost.Stacktrace. Not to forget our discussion of CUDA so quickly, the Safe_Numbers library will have CUDA support. One thing that we will continue to refine is synchronizing error reporting on device, as one cannot throw an exception on device. We are always looking for users of all the libraries discussed. If you are a current or prospective user, feel free to reach out and let us know what you’re using it for, or any issues that you find.📝The C++ Alliance
The road to C++20 modules, Capy and RedisModules in using std::cpp 2026 C++20 modules have been in the standard for 6 years already, but we’re not seeing widespread adoption. The ecosystem is still getting ready. As a quick example, import std, an absolute blessing for compile times, requires build system support, and this is still experimental as of CMake 4.3.1. And yet, I’ve realized that writing module-native applications is really enjoyable. The system is well thought out and allows for better encapsulation, just as you’d write in a modern programming language. I’ve been using my Servertech Chat project (a webserver that uses Boost.Asio and companion libraries) to get a taste of what modules really look like in real code. When writing this, I saw clearly that having big dependencies that can’t be consumed via import is a big problem. With the scheme I used, compile times got 66% worse instead of improving. This is because when writing modules, you tend to have a bigger number of translation units. These are supposed to be much more lightweight, but if you’re relying on #include for third-party libraries, they’re not. For example:

//
// File: redis_client.cppm. Contains only the interface declaration (somewhat like headers do)
//
module;

// No import boost yet - must be in the global module fragment
#include <...>  // Boost.Asio headers (paths lost in the excerpt)

module servertech_chat:redis_client;

import std;

namespace chat {

class redis_client {
public:
    virtual ~redis_client() {}
    virtual boost::asio::awaitable<...> get_int_key(std::string_view key) = 0;
    // ...
};

}

//
// File: redis_client.cpp. Contains the implementation
//
module;

#include <...>  // implementation headers (paths lost in the excerpt)

module servertech_chat;

import :redis_client;
import std;

namespace {

class redis_client_impl final : public redis_client { /* ... */ };

}

I analyze this in much more depth in the talk I had the pleasure of giving at using std::cpp this March in Madrid. The TL;DR is that supporting import boost natively is very important for any serious usage of Boost in the modules world.
import boost is upon us As you may know, I prefer doing to saying, and I’ve been writing a prototype to support import boost natively while keeping today’s header code as is. This prototype has seen substantial advancements during these months. I’ve developed a systematic approach for modularization, and we’ve settled on the ABI-breaking style, with compatibility headers. I’ve added support for GCC (the remaining compiler) to the core libraries that we already supported (Config, Mp11, Core, Assert, ThrowException, Charconv), and I’ve added modular bindings for Variant2, Compat, Endian, System, TypeTraits, Optional, ContainerHash, IO and Asio. These are only tested under Clang so far - it’s part of a discovery process. The idea is to modularize the flagship libraries to verify that the approach works, and to measure compile-time improvements. There is still a lot to do before things become functional. I’ve received helpful feedback from many community members, which has been invaluable. Redis meets Capy If you’re a user of Boost.Asio and coroutines, you probably know that there’s a new player in town - Capy and Corosio. They’re a coroutines-native Asio replacement which promises a range of benefits, from improved expressiveness to saner compile times, without performance loss. Since I maintain Boost.MySQL and co-maintain Boost.Redis, I know the pain of writing operations using the universal Asio model. Lifetime management is difficult to follow, testing is complex, and things must remain header-only (and usually heavily templatized). Coroutine code is much simpler to write and understand, and it’s what I use whenever I can. So obviously I’m interested in this project. My long-term idea is creating a v2 version of MySQL and Redis that exposes a Capy/Corosio interface. As a proof of concept, I migrated Boost.Redis and some of its tests. Still some polishing needed, but - it works! You can read the full report on the Boost mailing list.
Some sample code as an appetizer:

capy::task run_request(connection& conn)
{
    // A request containing only a ping command.
    request req;
    req.push("PING", "Hello world");

    // Response where the PONG response will be stored.
    response<std::string> resp;

    // Executes the request.
    auto [ec] = co_await conn.exec(req, resp);
    if (ec)
        co_return;

    std::cout << std::get<0>(resp).value() << std::endl;
}

capy::task<void> co_main()
{
    connection conn{(co_await capy::this_coro::executor).context()};

    co_await capy::when_any(
        // Sends the request
        run_request(conn),
        // Performs connection establishment, re-connection, pings...
        conn.run(config{})
    );
}

Redis PubSub improvements

Working with PubSub messages in Boost.Redis has always been more involved than in other libraries. For example, we support transparent reconnection, but (before 1.91) the user had to explicitly re-establish subscriptions:

request req;
req.push("SUBSCRIBE", "channel");

while (conn->will_reconnect()) {
    // Reconnect to the channels.
    co_await conn->async_exec(req, ignore);
    // ...
}

Boost 1.91 has added PubSub state restoration. A fancy name but an easy feature: established subscriptions are recorded, and when a reconnection happens, the subscription is re-established automatically:

// Subscribe to the channel 'mychannel'. If a re-connection happens,
// an appropriate SUBSCRIBE command is issued to re-establish the subscription.
request req;
req.subscribe({"mychannel"});
co_await conn->async_exec(req);

Boost 1.91 also adds flat_tree, a specialized container for Redis messages with an emphasis on memory reuse, performance, and usability. This container is especially appropriate when dealing with PubSub. We've also added connection::async_receive2(), a higher-performance replacement for connection::async_receive() that consumes messages in batches, rather than one by one, eliminating re-scheduling overhead. And push_parser, a view that transforms raw RESP3 nodes into user-friendly structures.
With these improvements, code goes from:

// Loop while reconnection is enabled
while (conn->will_reconnect()) {
    // Reconnect to channels.
    co_await conn->async_exec(req, ignore);

    // Loop reading Redis push messages.
    for (error_code ec;;) {
        // First try to read any buffered pushes.
        conn->receive(ec);
        if (ec == error::sync_receive_push_failed) {
            ec = {};
            // Wait for pushes
            co_await conn->async_receive(asio::redirect_error(asio::use_awaitable, ec));
        }
        if (ec)
            break; // Connection lost, break so we can reconnect to channels.

        // Left to the user: resp contains raw RESP3 nodes, which need to be parsed manually!
        // Remove the nodes corresponding to one message
        consume_one(resp);
    }
}

To:

// Loop to read Redis push messages.
while (conn->will_reconnect()) {
    // No need to reconnect, we now have PubSub state restoration

    // Wait for pushes
    auto [ec] = co_await conn->async_receive2(asio::as_tuple);
    if (ec)
        break; // Cancelled

    // Consume the messages
    for (push_view elem : push_parser(resp.value()))
        std::cout << "Received message from channel " << elem.channel << ": " << elem.payload << "\n";

    // Clear all the batch
    resp.value().clear();
}

📝The C++ Alliance
Range-Validated Quantity Points

Physical units libraries have always been very good at preventing dimensional errors and unit mismatches. But there is a category of correctness that they have universally ignored: domain constraints on quantity point values. A latitude is not just a length divided by a radius. It is a value that lives in $[-90°, +90°]$; anything outside that range is physically meaningless. An angle used in bearing navigation wraps cyclically around a circle; treating it as an unbounded real number ignores a fundamental property of the domain. A clinical body-temperature sensor should reject a reading of $44\ \mathrm{°C}$ at the API boundary, not silently pass it downstream. Type-level constraint enforcement for quantity points with this level of flexibility is a relatively unexplored area in mainstream physical units libraries. The approach we present here is novel and experimental — we are certain there are edge cases and design considerations we haven't yet discovered. This article describes the motivation in depth, the design we arrived at, and the open questions we would love the community's help to answer.📝mp-units