The reactor software design pattern is an event handling strategy that can respond to many potential service requests concurrently.
Its key component is an event loop, running in a single thread or process, which demultiplexes incoming requests and dispatches them to the correct request handler.[1]
By relying on event-based mechanisms rather than blocking I/O or multi-threading, a reactor can handle many concurrent I/O bound requests with minimal delay.[2] A reactor also allows for easily modifying or expanding specific request handler routines, though the pattern does have some drawbacks and limitations.[1]
With its balance of simplicity and scalability, the reactor has become a central architectural element in several server applications and software frameworks for networking. Derivations such as the multireactor and proactor also exist for special cases where even greater throughput, performance, or request complexity are necessary.[1][2][3][4]
Overview
Practical considerations for the client–server model in large networks, such as the C10k problem for web servers, were the original motivation for the reactor pattern.[5]
A naive approach to handle service requests from many potential endpoints, such as network sockets or file descriptors, is to listen for new requests from within an event loop, then immediately read the earliest request. Once the entire request has been read, it can be processed and forwarded on by directly calling the appropriate handler. An entirely "iterative" server like this, which handles one request from start-to-finish per iteration of the event loop, is logically valid. However, it will fall behind once it receives multiple requests in quick succession. The iterative approach cannot scale because reading the request blocks the server's only thread until the full request is received, and I/O operations are typically much slower than other computations.[2]
One strategy to overcome this limitation is multi-threading: by immediately splitting off each new request into its own worker thread, the first request will no longer block the event loop, which can immediately iterate and handle another request. This "thread per connection" design scales better than a purely iterative one, but it still contains multiple inefficiencies and will struggle past a point. From a standpoint of underlying system resources, each new thread or process imposes overhead costs in memory and processing time (due to context switching). The fundamental inefficiency of each thread waiting for I/O to finish isn't resolved either.[1][2]
From a design standpoint, both approaches also tightly couple the general demultiplexer to the specific request handlers, making the server code brittle and tedious to modify. These considerations suggest a few major design decisions:
- Retain a single-threaded event handler; multi-threading introduces overhead and complexity without resolving the real issue of blocking I/O
- Use an event notification mechanism to demultiplex requests only after I/O is complete (so I/O is effectively non-blocking)
- Register request handlers as callbacks with the event handler for better separation of concerns
Usage
The reactor pattern can be a good starting point for any concurrent, event-handling problem. The pattern is not restricted to network sockets either; hardware I/O, file system or database access, inter-process communication, and even abstract message passing systems are all possible use-cases.[citation needed]
However, the reactor pattern does have limitations, a major one being the use of callbacks, which make program analysis and debugging more difficult, a problem common to designs with inverted control.[1] The simpler thread-per-connection and fully iterative approaches avoid this and can be valid solutions if scalability or high-throughput are not required.[a][citation needed]
Single-threading can also become a drawback in use-cases that require maximum throughput, or when requests involve significant processing. Different multi-threaded designs can overcome these limitations, and in fact, some still use the reactor pattern as a sub-component for handling events and I/O.[1]
Applications
The reactor pattern (or a variant of it) has found a place in many web servers, application servers, and networking frameworks:
- Adaptive Communication Environment[1]
- EventMachine[citation needed]
- Netty[3]
- Nginx[4]
- Node.js[2][6]
- Perl Object Environment[citation needed]
- POCO C++ Libraries[7]
- Spring Framework (version 5 and later)[8]
- Twisted[9]
- Vert.x[3]
Structure
UML 2 component diagram of a reactive application.[1]
UML 2 sequence diagram of a reactive server.[1]
A reactive application consists of several moving parts and will rely on some support mechanisms:[1]
Handle
An identifier and interface to a specific request, with I/O and data. This will often take the form of a socket, file descriptor, or similar mechanism, which most modern operating systems provide.
Demultiplexer
An event notifier that can efficiently monitor the status of a handle, then notify other subsystems of a relevant status change (typically an I/O handle becoming "ready to read"). Traditionally this role was filled by the select() system call, but more contemporary examples include epoll, kqueue, and IOCP.
Dispatcher
The actual event loop of the reactive application, this component maintains the registry of valid event handlers, then invokes the appropriate handler when an event is raised.
Event Handler
Also known as a request handler, this is the specific logic for processing one type of service request. The reactor pattern suggests registering these dynamically with the dispatcher as callbacks for greater flexibility. By default, a reactor does not use multi-threading but invokes a request handler within the same thread as the dispatcher.
Event Handler Interface
An abstract interface class, representing the general properties and methods of an event handler. Each specific handler must implement this interface while the dispatcher will operate on the event handlers through this interface.
Simple example
class Reactor
/**
* \brief Reactor.
*
* An implementation of the reactor design pattern.
*/
class Reactor {
public:
/**
* \brief Constructor of Reactor.
*/
Reactor();
/**
* \brief Destructor of Reactor.
*/
~Reactor();
/**
* \brief Registers an event handler for a file descriptor.
*
* \param handle A file descriptor.
* \param event_handler A pointer to an event handler.
* \param event_type_mask A mask of event types for which the passed event handler should be registered.
* \param one_shot Specifies if the event handler should be automatically unregistered after the first event
* occurrence.
*/
void register_event_handler(int handle, EventHandler* event_handler, unsigned int event_type_mask,
bool one_shot = false);
/**
* \brief Unregisters a previously registered event handler for a file descriptor.
*
* \param handle A file descriptor.
* \param event_type_mask A mask of event types for which the event handler should be unregistered.
*/
void unregister_event_handler(int handle, unsigned int event_type_mask);
/**
* \brief Lets the reactor check whether some events are pending on any registered file descriptors and dispatch
* the corresponding event handlers.
*
* \param timeout A pointer to a time value to wait for events. In case of the null pointer the reactor
* will not block waiting for events.
*/
void handle_events(const struct timeval* timeout = nullptr);
/**
* \brief Causes the reactor to return from the blocking call of the function handle_events.
*/
void unblock();
private:
/* Log context */
const std::string log = "Reactor";
/**
* \brief Structure for storing registered event handlers.
*/
struct EventHandlerMapEntry {
EventHandler* event_handler_;
bool one_shot_;
};
/**
* \brief Notifies registered event handlers for which events are available.
*/
void dispatch_event_handlers();
/**
* \brief Sets up file descriptor sets required for the select system call.
*/
int setup_fd_sets();
/**
* \brief Sends a wakeup signal to unblock the select system call being executed in the function handle_events.
*/
void send_wakeup();
/**
* \brief Handles wakeup signal sent by the function send_wakeup.
*/
void handle_wakeup();
/**
* \brief Pipe used for unblocking a blocked select call.
*/
std::array<int, 2> wakeup_pipe_;
/**
* \brief Indicates whether the reactor is currently dispatching events.
*/
std::atomic_bool dispatching_;
/**
* \brief File descriptor sets required for the select system call.
*/
fd_set rfds, wfds, efds;
/**
* \brief Event handlers for read events.
*/
std::map<int, EventHandlerMapEntry> read_event_handlers_;
/**
* \brief Event handlers for write events.
*/
std::map<int, EventHandlerMapEntry> write_event_handlers_;
/**
* \brief Event handlers for exception events.
*/
std::map<int, EventHandlerMapEntry> exception_event_handlers_;
};
event_handler
/**
* \brief Reactor event handler.
*/
namespace reactor {
class EventHandler {
public:
/**
* \brief Destructor of EventHandler.
*/
virtual ~EventHandler() = default;
/**
* \brief Read event handler.
*
* \param handle A file descriptor.
*
* This function is called by the reactor whenever the file descriptor becomes readable.
*/
virtual void handle_read(int handle) {}
/**
* \brief Write event handler.
*
* \param handle A file descriptor.
*
* This function is called by the reactor whenever the file descriptor becomes writable.
*/
virtual void handle_write(int handle) {}
/**
* \brief Exception event handler.
*
* \param handle A file descriptor.
*
* This function is called by the reactor whenever an exception condition occurs on the file descriptor.
*/
virtual void handle_exception(int handle) {}
};
} /* namespace reactor */
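The listings reference reactor::EventType bit flags (kReadEvent and so on) that are never shown. A definition consistent with the bitmask tests in register_event_handler might look as follows; the exact values are an assumption, only their distinctness as bits matters.

```cpp
namespace reactor {

/* Assumed event-type bit flags, inferred from the bitmask tests
 * (event_type_mask & EventType::kReadEvent) in the Reactor listing;
 * the exact values are an assumption, only their distinctness matters. */
struct EventType {
  static constexpr unsigned int kReadEvent = 1u << 0;
  static constexpr unsigned int kWriteEvent = 1u << 1;
  static constexpr unsigned int kExceptionEvent = 1u << 2;
};

} /* namespace reactor */
```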
source code
Reactor::Reactor() : dispatching_(false) {
/* Create a pipe for unblocking the select system call */
int ret = pipe(wakeup_pipe_.data());
if (ret != 0) {
throw std::runtime_error("pipe");
}
}
Reactor::~Reactor() {
unblock();
// TODO: Verify that no method of Reactor is still running when the object is destroyed.
::close(wakeup_pipe_[0]);
::close(wakeup_pipe_[1]);
}
void Reactor::register_event_handler(int handle, EventHandler* event_handler, unsigned int event_type_mask,
bool one_shot) {
if (event_type_mask & EventType::kReadEvent) {
read_event_handlers_.insert(std::pair<int, EventHandlerMapEntry>(handle, {event_handler, one_shot}));
}
if (event_type_mask & EventType::kWriteEvent) {
write_event_handlers_.insert(std::pair<int, EventHandlerMapEntry>(handle, {event_handler, one_shot}));
}
if (event_type_mask & EventType::kExceptionEvent) {
exception_event_handlers_.insert(std::pair<int, EventHandlerMapEntry>(handle, {event_handler, one_shot}));
}
if (!dispatching_) {
send_wakeup();
}
}
void Reactor::unregister_event_handler(int handle, unsigned int event_type_mask) {
size_t count = 0;
if (event_type_mask & EventType::kReadEvent) {
count += read_event_handlers_.erase(handle);
}
if (event_type_mask & EventType::kWriteEvent) {
count += write_event_handlers_.erase(handle);
}
if (event_type_mask & EventType::kExceptionEvent) {
count += exception_event_handlers_.erase(handle);
}
if (count > 0 && !dispatching_) {
send_wakeup();
}
}
void Reactor::handle_events(const struct timeval* timeout) {
struct timeval tv;
struct timeval* tv_ptr = nullptr;
if (timeout != nullptr) {
// select() on Linux modifies the object under tv_ptr, therefore we copy the passed data.
tv = *timeout;
tv_ptr = &tv;
}
int maxfd = setup_fd_sets();
int count = select(maxfd + 1, &rfds, &wfds, &efds, tv_ptr);
if (count > 0) {
dispatch_event_handlers();
}
}
void Reactor::unblock() { send_wakeup(); }
int Reactor::setup_fd_sets() {
FD_ZERO(&rfds);
FD_ZERO(&wfds);
FD_ZERO(&efds);
FD_SET(wakeup_pipe_[0], &rfds);
int maxfd = wakeup_pipe_[0];
for (const auto& p : read_event_handlers_) {
FD_SET(p.first, &rfds);
maxfd = std::max(maxfd, p.first);
}
for (const auto& p : write_event_handlers_) {
FD_SET(p.first, &wfds);
maxfd = std::max(maxfd, p.first);
}
for (const auto& p : exception_event_handlers_) {
FD_SET(p.first, &efds);
maxfd = std::max(maxfd, p.first);
}
return maxfd;
}
void Reactor::dispatch_event_handlers() {
dispatching_ = true;
if (FD_ISSET(wakeup_pipe_[0], &rfds)) {
handle_wakeup();
}
for (auto it = read_event_handlers_.begin(); it != read_event_handlers_.end();) {
int handle = it->first;
auto entry = it->second;
if (FD_ISSET(handle, &rfds)) {
std::cout << log << ": " << "Dispatching read event" << std::endl;
if (entry.one_shot_) {
it = read_event_handlers_.erase(it);
} else {
++it;
}
entry.event_handler_->handle_read(handle);
} else {
++it;
}
}
for (auto it = write_event_handlers_.begin(); it != write_event_handlers_.end();) {
int handle = it->first;
auto entry = it->second;
if (FD_ISSET(handle, &wfds)) {
std::cout << log << ": " << "Dispatching write event" << std::endl;
if (entry.one_shot_) {
it = write_event_handlers_.erase(it);
} else {
++it;
}
entry.event_handler_->handle_write(handle);
} else {
++it;
}
}
for (auto it = exception_event_handlers_.begin(); it != exception_event_handlers_.end();) {
int handle = it->first;
auto entry = it->second;
if (FD_ISSET(handle, &efds)) {
std::cout << log << ": " << "Dispatching exception event" << std::endl;
if (entry.one_shot_) {
it = exception_event_handlers_.erase(it);
} else {
++it;
}
entry.event_handler_->handle_exception(handle);
} else {
++it;
}
}
dispatching_ = false;
}
void Reactor::send_wakeup() {
char dummy = 0;
ssize_t ret = ::write(wakeup_pipe_[1], &dummy, sizeof(dummy));
if (ret != static_cast<ssize_t>(sizeof(dummy))) {
throw std::runtime_error("write");
}
}
void Reactor::handle_wakeup() {
char dummy = 0;
ssize_t ret = ::read(wakeup_pipe_[0], &dummy, sizeof(dummy));
if (ret != static_cast<ssize_t>(sizeof(dummy))) {
throw std::runtime_error("read");
}
}
IPCClient
class IPCClient: public reactor::EventHandler
{
public:
IPCClient(reactor::Reactor& reactor, std::string local_address);
virtual ~IPCClient ();
/**
* \brief Starts IPCClient
*/
void start();
/**
* \brief Stops IPCClient
*/
void stop();
private:
/* Log context */
const std::string log = "IPCClient";
/**
* \brief A file descriptor representing a Unix domain socket file
*/
int handle_;
/**
* \brief A name of a socket file representing the local address of the Unix domain socket
*/
std::string local_address_;
/**
* \brief A flag to control the client processing thread
*/
// Controls whether the client processing thread should finish
std::atomic<bool> ipcclient_done_;
/**
* \brief A reactor which is used for notifications about non-blocking writing to the Unix domain socket
*/
reactor::Reactor& reactor_;
};
source code
namespace ipcclient {
/* Reactor not used since this client can write anytime to the used Unix domain socket (no contention) */
IPCClient::IPCClient(reactor::Reactor& reactor, std::string local_address)
: local_address_(std::move(local_address)), reactor_(reactor) {}
IPCClient::~IPCClient (){
std::cout << log << ": " << "Destructing IPCClient" << std::endl << std::endl;
}
void IPCClient::start(){
std::cout << log << ": " << "Starting IPCClient" << std::endl << std::endl;
/* If Reactor is used here you can register your write handler */
/* Not a very good use case; it sends all the time */
//reactor_.register_event_handler(handle_, this, reactor::EventType::kWriteEvent);
std::thread thread([this](){
while (1) {
if ( (handle_ = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
perror("socket error");
exit(1);
}
struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;
strncpy(addr.sun_path, local_address_.c_str(), sizeof(addr.sun_path) - 1);
socklen_t lenAddr_ = sizeof(addr);
if (connect(handle_, (struct sockaddr*)&addr, lenAddr_) != 0 ){
perror("connect error");
exit(1);
}
std::cout << log << ": " << "Connected to the IPCServer..." << std::endl;
const std::string msg = "Hello from Client";
std::cout << log << ": " << "Sending data to server: " << msg << std::endl;
if (write(handle_, msg.c_str(), msg.size()) != static_cast<ssize_t>(msg.size())){
perror("write");
exit(1);
}
close(handle_);
std::this_thread::sleep_for(std::chrono::milliseconds(5000));
}});
thread.detach();
}
void IPCClient::stop(){
std::cout << log << ": " << "Stopping IPCClient" << std::endl << std::endl;
close(handle_);
}
}
IPCServer
namespace ipcserver {
class IPCServer: public reactor::EventHandler
{
public:
IPCServer(reactor::Reactor& reactor, std::string local_address);
virtual ~IPCServer ();
/**
* \brief Starts reading messages from the Unix domain socket connected to a server.
*/
void start();
/**
* \brief Stops reading messages from the Unix domain socket connected to a server
* and closes the connection.
*/
void stop();
private:
/* Log context */
const std::string log = "IPCServer";
/**
* \brief handler for asynchronous event notification on the Unix domain socket connected to a server.
*
* \param handle The file descriptor that became readable.
*/
void handle_read(int handle) override;
/**
* \brief A file descriptor representing an established connection to an application.
*/
int handle_;
/**
* \brief Demonstrates that the server is not blocked while waiting for new connections.
*/
void do_something_else();
/**
* \brief Unix path representing the local address of the accepting Unix domain socket.
*/
std::string local_address_;
/**
* \brief A flag to control the server processing thread
*/
std::atomic<bool> ipcserver_done_;
/**
* \brief A reactor which is used for notification about incoming connections.
*/
reactor::Reactor& reactor_;
};
}
source code
namespace ipcserver {
IPCServer::IPCServer(reactor::Reactor& reactor, std::string local_address)
: reactor_(reactor), local_address_(local_address){
if ( (handle_ = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
perror("socket error");
exit(1);
}
struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;
strcpy(addr.sun_path, local_address.c_str());
if (bind(handle_, (struct sockaddr*)&addr, sizeof(addr)) == -1) {
perror("bind error");
exit(1);
}
if (listen(handle_, 1) == -1) {
perror("listen error");
exit(1);
}
}
IPCServer::~IPCServer (){
std::cout << log << ": " << "Destructing IPCServer" << std::endl << std::endl;
unlink(local_address_.c_str());
::close(handle_);
}
/**
* Process some other things while waiting on new connections
*/
void IPCServer::do_something_else(){
std::thread thread = std::thread([this](){
while(!ipcserver_done_){
std::cout << log << ": " << "I am not blocked, I process some other stuff..." << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
}});
thread.detach();
}
/**
* \brief Starts reading messages from the Unix domain socket connected to a server.
*/
void IPCServer::start(){
std::cout << log << ": " << "Starting IPCServer" << std::endl << std::endl;
reactor_.register_event_handler(handle_, this, reactor::EventType::kReadEvent);
do_something_else();
}
/**
* \brief Stops reading messages from the Unix domain socket connected to a server
* and closes the connection.
*/
void IPCServer::stop(){
std::cout << log << ": " << "Stopping IPCServer" << std::endl << std::endl;
ipcserver_done_ = true;
}
/**
* \brief handler for asynchronous event notification on the Unix domain socket connected to a server.
*
* \param handle The file descriptor that became readable.
*/
void IPCServer::handle_read(int handle) {
/* Note: the thread is joined immediately below, so this handler is effectively
 * synchronous; a detached or pooled thread would be needed for real concurrency. */
std::thread thread = std::thread([this]() {
int cl, rc;
char buf[100];
std::cout << log << ": " << "handle_read called" << std::endl;
if ( (cl = accept(handle_, NULL, NULL)) == -1) {
std::cout << log << ": " << "Accept error" << std::endl;
return;
}
std::cout << log << ": " << "New connection accepted" << std::endl;
if ( (rc=read(cl,buf,sizeof(buf))) > 0) {
printf("%s: Read %d bytes: %.*s\n\n", log.c_str(), rc, rc, buf);
}
close(cl);
});
thread.join();
}
}