Concurrent Composition
This section explains how to run multiple tasks concurrently using when_all and when_any.
Prerequisites
- Completed Stop Tokens and Cancellation
- Understanding of stop token propagation
Overview
Sequential execution—one task after another—is the default when using co_await:
```cpp
task<> sequential()
{
    co_await task_a(); // Wait for A
    co_await task_b(); // Then wait for B
    co_await task_c(); // Then wait for C
}
```
For independent operations, concurrent execution is more efficient:
```cpp
task<> concurrent()
{
    // Run A, B, C simultaneously
    co_await when_all(task_a(), task_b(), task_c());
}
```
when_all: Wait for All Tasks
when_all launches multiple io_task children concurrently and waits for all of them to complete. It returns task<io_result<R1, R2, …, Rn>>, a single ec plus the flattened payloads:
```cpp
#include <boost/capy/when_all.hpp>

io_task<int> fetch_a() { co_return io_result<int>{{}, 1}; }
io_task<int> fetch_b() { co_return io_result<int>{{}, 2}; }
io_task<std::string> fetch_c() { co_return io_result<std::string>{{}, "hello"}; }

task<> example()
{
    auto [ec, a, b, c] = co_await when_all(fetch_a(), fetch_b(), fetch_c());
    // ec == std::error_code{} (success)
    // a == 1
    // b == 2
    // c == "hello"
}
```
Result Type
when_all returns io_result<R1, …, Rn> where each Ri is the child’s payload flattened: io_result<T> contributes T, io_result<> contributes tuple<>. Check ec first; values are only meaningful when !ec.
Void io_tasks
io_task<> children contribute tuple<> to the result:
```cpp
io_task<> void_task() { co_return io_result<>{}; }
io_task<int> int_task() { co_return io_result<int>{{}, 42}; }

task<> example()
{
    auto [ec, a, b, c] = co_await when_all(int_task(), void_task(), int_task());
    // a == 42 (int)
    // b == tuple<> (from the void io_task)
    // c == 42 (int)
}
```
When all children are io_task<>, just check r.ec:
```cpp
task<> example()
{
    auto r = co_await when_all(void_task_a(), void_task_b());
    if (r.ec)
    {
        // handle error
    }
}
```
Error Handling
I/O errors are reported through the ec field of the io_result. When any child returns a non-zero ec:
- Stop is requested for sibling tasks
- All tasks complete (or respond to stop)
- The first ec is propagated in the outer io_result
```cpp
task<> example()
{
    auto [ec, a, b] = co_await when_all(task_a(), task_b());
    if (ec)
        std::cerr << "Error: " << ec.message() << "\n";
}
```
If a task throws an exception, it is captured and rethrown after all tasks complete. Exceptions take priority over ec.
```cpp
io_task<int> might_throw(bool fail)
{
    if (fail)
        throw std::runtime_error("failed");
    co_return io_result<int>{{}, 42};
}

task<> example()
{
    try
    {
        co_await when_all(might_throw(true), might_throw(false));
    }
    catch (std::runtime_error const& e)
    {
        // Catches the exception from the failing task
    }
}
```
Stop Propagation
When one task fails, when_all requests stop for its siblings. Well-behaved tasks should check their stop token and exit promptly:
```cpp
io_task<> long_running()
{
    auto token = co_await this_coro::stop_token;
    for (int i = 0; i < 1000; ++i)
    {
        if (token.stop_requested())
            co_return io_result<>{}; // Exit early when a sibling fails
        co_await do_iteration();
    }
    co_return io_result<>{};
}
```
when_any: First-to-Succeed Wins
when_any launches multiple io_task children concurrently and returns when the first one succeeds (!ec):
```cpp
#include <boost/capy/when_any.hpp>

task<> example()
{
    auto result = co_await when_any(
        fetch_int(),   // io_task<int>
        fetch_string() // io_task<std::string>
    );
    // result is std::variant<std::error_code, int, std::string>
    // index 0: all tasks failed (error_code)
    // index 1: fetch_int won
    // index 2: fetch_string won
}
```
The result is a variant with error_code at index 0 (failure/no winner) and one alternative per input task at indices 1..N. Only tasks returning !ec can win; errors and exceptions do not count as winning. When a winner is found, stop is requested for all siblings. All tasks complete before when_any returns.
For detailed coverage including error handling, cancellation, and the range overload, see Racing Tasks.
Practical Patterns
Parallel Fetch
Fetch multiple resources simultaneously:
```cpp
io_task<page_data> fetch_page_data(std::string url)
{
    auto [ec, header, body, sidebar] = co_await when_all(
        fetch_header(url),
        fetch_body(url),
        fetch_sidebar(url)
    );
    if (ec)
        co_return io_result<page_data>{ec, {}};
    co_return io_result<page_data>{{}, {
        std::move(header),
        std::move(body),
        std::move(sidebar)
    }};
}
```
Fan-Out/Fan-In
Process items in parallel, then combine results using the range overload:
```cpp
io_task<int> process_item(item const& i);

task<int> process_all(std::vector<item> const& items)
{
    std::vector<io_task<int>> tasks;
    for (auto const& item : items)
        tasks.push_back(process_item(item));

    auto [ec, results] = co_await when_all(std::move(tasks));
    if (ec)
        co_return 0;

    int total = 0;
    for (auto v : results)
        total += v;
    co_return total;
}
```
Timeout
The timeout combinator races an awaitable against a deadline:
```cpp
#include <boost/capy/timeout.hpp>

task<> example()
{
    auto [ec, n] = co_await timeout(sock.read_some(buf), 50ms);
    if (ec == cond::timeout)
    {
        // deadline expired before read completed
    }
}
```
timeout returns the same io_result type as the inner awaitable. On timeout, ec compares equal to cond::timeout and the payload values are default-initialized. Unlike when_any, exceptions from the inner awaitable are always propagated and never swallowed by the timer.
Implementation Notes
Task Storage
when_all stores all tasks in its coroutine frame. Tasks are moved from the arguments, so the original task objects become empty after the call.
Reference
| Header | Description |
|---|---|
| boost/capy/when_all.hpp | Concurrent composition with when_all |
| boost/capy/when_any.hpp | First-completion racing with when_any |
| boost/capy/timeout.hpp | Race an awaitable against a deadline |
You have now learned how to compose tasks concurrently with when_all and when_any. In the next section, you will learn about frame allocators for customizing coroutine memory allocation.