The Boost C++ Libraries

Scalability and Multithreading

Developing a program based on a library like Boost.Asio differs from the usual C++ style. Functions that may take longer to return are no longer called in a sequential manner. Instead of calling blocking functions, Boost.Asio starts asynchronous operations. Functions that should be called after an operation has finished are now invoked from the corresponding handler. The drawback of this approach is the physical separation of sequentially executed functions, which can make code more difficult to understand.

A library such as Boost.Asio is typically used to achieve greater efficiency. With no need to wait for an operation to finish, a program can perform other tasks in between. Therefore, it is possible to start several asynchronous operations that are all executed concurrently – remember that asynchronous operations are usually used to access resources outside of a process. Since these resources can be different devices, they can work independently and execute operations concurrently.

Scalability describes the ability of a program to effectively benefit from additional resources. With Boost.Asio it is possible to benefit from the ability of external devices to execute operations concurrently. If threads are used, several functions can be executed concurrently on available CPU cores. Boost.Asio with threads improves the scalability because your program can take advantage of internal and external devices that can execute operations independently or in cooperation with each other.

If the member function run() is called on an object of type boost::asio::io_service, the associated handlers are invoked within the same thread. By using multiple threads, a program can call run() multiple times. Once an asynchronous operation is complete, the I/O service object will execute the handler in one of these threads. If a second operation is completed shortly after the first one, the I/O service object can execute the handler in a different thread. Now, not only can operations outside of a process be executed concurrently, but handlers within the process can be executed concurrently, too.

Example 32.3. Two threads for the I/O service object to execute handlers concurrently
#include <boost/asio/io_service.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>
#include <thread>
#include <iostream>

using namespace boost::asio;

int main()
{
  io_service ioservice;

  steady_timer timer1{ioservice, std::chrono::seconds{3}};
  timer1.async_wait([](const boost::system::error_code &ec)
    { std::cout << "3 sec\n"; });

  steady_timer timer2{ioservice, std::chrono::seconds{3}};
  timer2.async_wait([](const boost::system::error_code &ec)
    { std::cout << "3 sec\n"; });

  std::thread thread1{[&ioservice](){ ioservice.run(); }};
  std::thread thread2{[&ioservice](){ ioservice.run(); }};
  thread1.join();
  thread2.join();
}

The previous example has been converted to a multithreaded program in Example 32.3. With std::thread, two threads are created in main(), and each thread calls run() on the one and only I/O service object. This makes it possible for the I/O service object to use both threads to execute handlers when asynchronous operations complete.

In Example 32.3, both alarm clocks should ring after three seconds. Because two threads are available, both lambda functions can be executed concurrently. If the second alarm clock rings while the handler of the first alarm clock is being executed, the handler can be executed in the second thread. If the handler of the first alarm clock has already returned, the I/O service object can use any thread to execute the second handler.

Of course, it doesn’t always make sense to use threads. Example 32.3 might not write the messages sequentially to the standard output stream; instead, the output might be interleaved. Both handlers, which may run concurrently in two threads, share the global resource std::cout. To avoid interleaved output, access to std::cout would need to be synchronized. The advantage of threads is lost if handlers can’t be executed concurrently.
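For completeness, here is a minimal sketch of how the handlers in Example 32.3 could synchronize their output. It is not one of the book’s examples; the mutex cout_mutex and the use of std::lock_guard are additions that serialize access to std::cout, which also means the two handlers briefly block each other.

#include <boost/asio/io_service.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>
#include <thread>
#include <mutex>
#include <iostream>

using namespace boost::asio;

std::mutex cout_mutex;  // hypothetical addition: protects std::cout

int main()
{
  io_service ioservice;

  steady_timer timer1{ioservice, std::chrono::seconds{3}};
  timer1.async_wait([](const boost::system::error_code &ec)
    { std::lock_guard<std::mutex> lock{cout_mutex};
      std::cout << "3 sec\n"; });

  steady_timer timer2{ioservice, std::chrono::seconds{3}};
  timer2.async_wait([](const boost::system::error_code &ec)
    { std::lock_guard<std::mutex> lock{cout_mutex};
      std::cout << "3 sec\n"; });

  std::thread thread1{[&ioservice](){ ioservice.run(); }};
  std::thread thread2{[&ioservice](){ ioservice.run(); }};
  thread1.join();
  thread2.join();
}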

Example 32.4. One thread for each of two I/O service objects to execute handlers concurrently
#include <boost/asio/io_service.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>
#include <thread>
#include <iostream>

using namespace boost::asio;

int main()
{
  io_service ioservice1;
  io_service ioservice2;

  steady_timer timer1{ioservice1, std::chrono::seconds{3}};
  timer1.async_wait([](const boost::system::error_code &ec)
    { std::cout << "3 sec\n"; });

  steady_timer timer2{ioservice2, std::chrono::seconds{3}};
  timer2.async_wait([](const boost::system::error_code &ec)
    { std::cout << "3 sec\n"; });

  std::thread thread1{[&ioservice1](){ ioservice1.run(); }};
  std::thread thread2{[&ioservice2](){ ioservice2.run(); }};
  thread1.join();
  thread2.join();
}

Calling run() repeatedly on a single I/O service object is the recommended method to make a program based on Boost.Asio more scalable. However, instead of providing several threads to one I/O service object, you could also create multiple I/O service objects.

In Example 32.4, two I/O service objects are used alongside the two alarm clocks of type boost::asio::steady_timer. The program still uses two threads, but each thread is now bound to its own I/O service object. The I/O objects timer1 and timer2 are no longer bound to the same I/O service object; each is bound to a different one.

Example 32.4 works the same as before. It’s not possible to give general advice about when it makes sense to use more than one I/O service object. Because boost::asio::io_service represents an operating system interface, any decision depends on the particular interface.

On Windows, boost::asio::io_service is usually based on I/O completion ports (IOCP); on Linux, it is based on epoll(). Having several I/O service objects means that several I/O completion ports will be used or that epoll() will be called multiple times. Whether this is better than using just one I/O completion port or a single call to epoll() depends on the individual case.
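When the single-I/O-service approach from Example 32.3 is scaled up, the number of threads is often derived from the number of available CPU cores. The following sketch is not taken from the book; it uses std::thread::hardware_concurrency() to start one thread per reported core, each calling run() on the same I/O service object.

#include <boost/asio/io_service.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>
#include <thread>
#include <vector>
#include <iostream>

using namespace boost::asio;

int main()
{
  io_service ioservice;

  steady_timer timer{ioservice, std::chrono::seconds{3}};
  timer.async_wait([](const boost::system::error_code &ec)
    { std::cout << "3 sec\n"; });

  // One worker thread per CPU core; hardware_concurrency() may return 0
  // if the number of cores cannot be determined.
  unsigned int cores = std::thread::hardware_concurrency();
  if (cores == 0)
    cores = 2;

  std::vector<std::thread> threads;
  for (unsigned int i = 0; i < cores; ++i)
    threads.emplace_back([&ioservice](){ ioservice.run(); });
  for (auto &thread : threads)
    thread.join();
}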