Wednesday, March 11, 2026

Corosio Beta: Coroutine-Native Networking for C++20

The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review.

The Gap C++20 Left Open

C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over.

What Corosio Is

Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write co_await and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver.

auto [socket] = co_await acceptor.async_accept();
auto n = co_await socket.async_read_some(buffer);
co_await socket.async_write(response);

Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake.

Built on Capy

Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained. Capy’s IoAwaitable protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch.

Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup.

What We Are Asking For

We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically:

- Does the executor affinity model hold up under production conditions?
- Does cancellation behave correctly across complex coroutine chains?
- Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends?
- Does the zero-allocation model hold in your deployment scenarios?

We are inviting serious C++ developers, especially those who have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny.

Get It

git clone https://github.com/cppalliance/corosio.git
cd corosio
cmake -S . -B build -G Ninja
cmake --build build

Or with CMake FetchContent:

include(FetchContent)
FetchContent_Declare(corosio
  GIT_REPOSITORY https://github.com/cppalliance/corosio.git
  GIT_TAG develop
  GIT_SHALLOW TRUE)
FetchContent_MakeAvailable(corosio)
target_link_libraries(my_app Boost::corosio)

Requires: CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+

Resources

- Corosio on GitHub – https://github.com/cppalliance/corosio
- Corosio Docs – https://develop.corosio.cpp.al/
- Capy on GitHub – https://github.com/cppalliance/capy
- Capy Docs – https://develop.capy.cpp.al/
- File an Issue – https://github.com/cppalliance/corosio/issues

📝The C++ Alliance

Tuesday, March 10, 2026

The Way of TDD

This article was adapted from a Google Tech on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

By Bartosz Papis

Test-Driven Development (TDD) is the practice of working in a structured cycle where writing tests comes before writing production code. The process involves three steps, sometimes called the red-green-refactor cycle:

1. Write a failing test
2. Make the test pass by writing just enough production code
3. Refactor the production code to meet your quality standards

Research shows TDD has several benefits: it improves test coverage, reduces the number of bugs, increases confidence, and facilitates code reuse. This practice also helps reduce distractions and keep you in the flow. TDD also has its limitations and is not a silver bullet! See the Wikipedia article about TDD for a detailed explanation and references.

Here is a short practical example. Assume you need to modify the following voting algorithm to support the option for voters to abstain:

def outcome(ballots):
  if ballots.count(Vote.FOR) > len(ballots) / 2:
    return "Approved"
  return "Rejected"

1. We start by writing a failing test - as expected, the test doesn't even compile:

def test_abstain_doesnt_count(self):
  self.assertEqual(outcome([Vote.FOR, Vote.FOR, Vote.AGAINST, Vote.ABSTAIN]), "Approved")

2. We fix the compilation error by including the missing enum option:

class Vote(Enum):
  FOR = 1
  AGAINST = 2
  ABSTAIN = 3

Now that the test compiles, we fix the production code to get all tests passing:

def outcome(ballots):
  if ballots.count(Vote.FOR) > (len(ballots) - ballots.count(Vote.ABSTAIN)) / 2:
    return "Approved"
  return "Rejected"

3. We now refactor the code to improve clarity, and complete an iteration of the TDD cycle:

def outcome(ballots):
  counts = collections.Counter(ballots)
  return "Approved" if counts[Vote.FOR] > counts[Vote.AGAINST] else "Rejected"

Learn more about TDD in the book Test Driven Development: By Example, by Kent Beck.

📝Google Testing Blog
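For readers who want to run the finished iteration end to end, here is a self-contained sketch assembling the article's pieces (the module layout and test-class name are illustrative, not from the original):

```python
import collections
import unittest
from enum import Enum

class Vote(Enum):
    FOR = 1
    AGAINST = 2
    ABSTAIN = 3

def outcome(ballots):
    # Refactored version: abstentions fall out naturally because
    # FOR is compared directly against AGAINST.
    counts = collections.Counter(ballots)
    return "Approved" if counts[Vote.FOR] > counts[Vote.AGAINST] else "Rejected"

class OutcomeTest(unittest.TestCase):
    def test_abstain_doesnt_count(self):
        self.assertEqual(
            outcome([Vote.FOR, Vote.FOR, Vote.AGAINST, Vote.ABSTAIN]),
            "Approved")

if __name__ == "__main__":
    unittest.main()
```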
REST Better with the Support of OpenAPI in Qt 6

Some of you are following our work to improve the connectivity of Qt-based apps. For example, in this blog post we explained enhancements in Qt's network stack for more efficient use of RESTful APIs starting with Qt 6.7. So it might sound like we are done with REST. Why bother with OpenAPI, then? Well, while around 70% of all web services run on REST, around 20-30% of them use code generated from an OpenAPI specification. How could Qt leave that out, rather than helping our users to code less and create more?

The new Qt 6 OpenAPI module will become available with Qt 6.11 as a Technical Preview. The module introduces the Qt 6 OpenAPI generator, which generates Qt HTTP clients using Qt Network RESTful APIs. It is important to note here that an OpenAPI generator for Qt 5 was originally developed by the OpenAPI community. We took it into Qt 6, refactored it, and extended it. In this blog post, you will learn about the new OpenAPI generator in Qt 6 and see how the new module can be used to implement a simple, Qt-based ChatGPT client application using the specification of its API provided in the OpenAPI format.

📝Qt Blog
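For readers who have not worked with the format: an OpenAPI specification is a YAML (or JSON) document describing endpoints, operations, and schemas, which a generator turns into client code. A minimal sketch of what such an input looks like (the paths and fields here are illustrative, not taken from the ChatGPT API or the Qt blog post):

```yaml
openapi: "3.0.0"
info:
  title: Example Chat API     # illustrative service
  version: "1.0"
paths:
  /chat:
    post:
      operationId: sendMessage
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                message:
                  type: string
      responses:
        "200":
          description: Model reply
```

A generator consumes a document like this and emits one client method per operationId, which is what saves the hand-written boilerplate the post alludes to.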

Monday, March 9, 2026

Sunday, March 8, 2026

Saturday, March 7, 2026

Some fixes and improvements in GCC

GCC 16 will probably release in a couple of months, and comes with a couple of my patches. There's nothing too big this time, but a couple of bug fixes and some quality-of-life changes.

You no longer need to explicitly pass -ftest-coverage for -fcondition-coverage and -fpath-coverage to be useful; it is now implied. The -ftest-coverage flag controls whether GCC creates the .gcno files gcov needs to create the report. The coverage support in GCC is built on top of the arc profiling which underpins profile-guided optimization (PGO), and PGO doesn't need the .gcno files, only the .gcda (counters). Coverage was a sort of side effect, and MC/DC and prime path coverage were built on that framework. Unlike arc profiling, it doesn't make much sense to ask for MC/DC and prime path coverage without also wanting to read the reports, so this makes GCC a bit easier to use.

I fixed a bug where gcov-dump printed the wrong offset for condition blocks, and taught it how to print the PATHS tag. gcov-dump is mostly useful for developing gcov itself, but it's nice that it's there. There's another bugfix in there too, which caused bad counter updates, but that bug was never included in a release.

I have revised my paper on MC/DC and collected some data for it on how the instrumentation affects compile time, runtime overhead, and object size. What I found was that compile times suffered greatly when analyzing expressions with many conditions joined by a single operator. A phase of the CFG analysis is figuring out which other conditions to mask when we take an edge, and this step evaluated all possible candidates. What I realised is that we don't need to evaluate all of them: the search starting at the left-adjacent operand (which we must also include) will dominate and always find all the masked conditions. This had a massive impact on compile times.

This is one of those problems that don't really show up in testing that easily, because under normal circumstances this isn't a problem. I wrote a small test program which causes the worst behaviour, a single (x && y && ... && z). I tested two cases: case 1 is (x && y && ... && z) and case 4 is (x1 || ... || x8) && .... These are the compile times before and after the fix:

before: 20822.303 ms (41.645 ms per expression)
after:   1288.548 ms ( 2.577 ms per expression)

Those numbers are for all of GCC, including parsing, code generation, and linking. I also measured just the MC/DC analysis pass (finding the masking table and emitting the instrumentation code), and got somewhere between a 15–20 times speedup, not bad at all. As it turns out, algorithms matter.

As you can see from the graphs, the performance hit really starts to kick in past 16 conditions, which is quite rare in practice. Still, faster is nice. I did find one case with 27 conditions in GNU ls, but that's very much the exception.

📝patch – Blog
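The worst-case shape described above is easy to reproduce. The author mentions writing a small test program for it; the generator below is my own illustration of that idea, not the author's script — it emits a C function whose body is a single N-condition conjunction, the shape that stressed the MC/DC masking analysis:

```python
def worst_case_c(n):
    """Emit a C function whose return expression is a single
    n-condition conjunction: x0 && x1 && ... && x(n-1)."""
    params = ", ".join(f"int x{i}" for i in range(n))
    conj = " && ".join(f"x{i}" for i in range(n))
    return f"int f({params}) {{ return {conj}; }}\n"

# Write the output to a file and compile it with a gcov-enabled
# build, e.g. with -fcondition-coverage, to measure the analysis cost.
print(worst_case_c(4))
```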

Friday, March 6, 2026

Thursday, March 5, 2026

Beyond Affine: Thin Plate Splines for Serial Histology Alignment

When Your Images Don't Quite Line Up

If you work with serial histology slides, you are familiar with the routine: zoom into a perivascular region. Toggle to the adjacent stain. Pan. Nudge. Recenter. Repeat.

Serial sections are routinely used for cross-stain interrogation and volumetric reconstruction. But consecutive sections from the same tissue block rarely align well enough for direct comparison, especially at high magnification. Sections may be skipped, and tissue deforms during cutting and mounting in ways that undermine direct spatial correspondence. Those distortions may seem small, but repeated hundreds of times a day, they compound. Misalignment undermines comparative analysis, annotation quality, and downstream machine learning workflows.

At first glance, this seems like a straightforward fitting problem that a simple affine transform should be able to handle. But once applied, an affine transform is clearly insufficient.

📝Kitware Inc
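As background (not spelled out in the excerpt above), the standard 2D thin plate spline named in the title generalizes the affine model by adding a sum of radial kernels centered on the landmark points $p_i$:

```latex
% Thin plate spline in 2D: affine part plus radial basis terms
f(x, y) = a_0 + a_1 x + a_2 y
          + \sum_{i=1}^{n} w_i \, U\!\left( \lVert (x, y) - p_i \rVert \right),
\qquad U(r) = r^2 \log r
```

The coefficients $a_j$ and weights $w_i$ are obtained by solving a linear system over the landmark correspondences, with the side conditions $\sum_i w_i = 0$ and $\sum_i w_i p_i = 0$ so that the radial part carries no net affine component; among all interpolants, this choice minimizes the bending energy of the deformation. With all $w_i = 0$ the model reduces to exactly the affine transform the post says is insufficient.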

Wednesday, March 4, 2026

Tuesday, March 3, 2026

Accessing inactive union members through char: the aliasing rule you didn't know about

I recently published an article on a new C++26 standard library facility, std::is_within_lifetime. As one of my readers, Andrey, pointed out, one of the examples contains code that seems like undefined behavior. But it's also taken — almost directly — from the original proposal, so it's probably not UB. And that's correct: it's not undefined behavior. Let's first examine the example and the UB...

📝Sandor Dargo's Blog
Set Safe Defaults for Flags

This article was adapted from a Google Tech on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

By Zhe Lu

We all make mistakes. But big mistakes can cause big headaches! Suppose you're writing a utility to update production data for a launch. Before making changes to production data, you want to perform a dry run to validate the expected changes. In your excitement, you forget to include the --dry_run flag in your command:

$ /scripts/credit_accounts --amount=USD10  # Oops, I forgot to include --dry_run

You realize your mistake too late. Safe flag defaults can prevent a simple mistake from turning into a major outage.

Flag has unsafe default:

cliArgs.addBoolFlag(name="dry_run", default=False,
    help="If set, print change summary, but do NOT change data.")

Flag has safe default:

cliArgs.addBoolFlag(name="dry_run", default=True,
    help="If set, print change summary, but do NOT change data.")

Safety depends on context: when defining flags, choose the default that minimizes the cost of potential mistakes. This might involve defaulting to a "dry" run, asking for user confirmation before irreversible actions, requiring a confirmation flag on the command line, or other strategies.

If you're writing documentation that contains commands, always set values to minimize the damage if run blindly.

Flag in documentation has unsafe default:

## How to commit changes
Use this command to commit changes. Use --dry_run to test and compute and report changes.
```shell
/scripts/credit_accounts --amount=[value] --filter=[conditions]
```

Flag in documentation has safe default:

## How to commit changes
Use this command to compute and report changes. Use --nodry_run to commit the changes.
```shell
/scripts/credit_accounts --amount=[value] --filter=[conditions]
```

Similarly, consider requiring that environment-specific flags (e.g., backend addresses and output folders) be explicitly set. In this situation, unspecified environment flags will crash your program, instead of potentially mixing configuration across environments.

📝Google Testing Blog
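The same two recommendations can be sketched with Python's standard argparse (the script and flag names here are illustrative; the article's cliArgs API is pseudocode): BooleanOptionalAction gives a --dry-run/--no-dry-run pair whose safe default is the dry run, and required=True makes the environment flag fail fast when forgotten.

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Credit accounts utility (illustrative).")
    # Safe default: the tool only reports changes unless
    # --no-dry-run is passed explicitly.
    parser.add_argument(
        "--dry-run", action=argparse.BooleanOptionalAction, default=True,
        help="If set, print change summary, but do NOT change data.")
    # Environment-specific flag with no default: forgetting it exits
    # with an error instead of silently hitting the wrong environment.
    parser.add_argument(
        "--backend", required=True,
        help="Address of the backend to modify.")
    return parser

args = build_parser().parse_args(["--backend", "staging.example.com"])
print(args.dry_run)  # True: the safe default applies
```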