Threadpool does not really fit into the same category as poll and epoll, so I will assume you are referring to threadpool as in "threadpool to handle many connections with one thread per connection".
Pros and cons
threadpool
- Reasonably efficient for small and medium concurrency, can even outperform other techniques.
- Makes use of multiple cores.
- Does not scale well beyond "several hundreds" even though some systems (e.g. Linux) can in principle schedule 100,000s of threads just fine.
- Naive implementation exhibits "thundering herd" problem.
- Apart from context switching and thundering herd, one must consider memory. Each thread has a stack (typically at least a megabyte). A thousand threads therefore take a gigabyte of RAM just for stack. Even if that memory is not committed, it still takes away considerable address space under a 32 bit OS (not really an issue under 64 bits).
- Threads can actually use epoll, though the obvious way (all threads block on epoll_wait) is of no use, because epoll will wake up every thread waiting on it, so it will still have the same issues.
- Optimal solution: a single thread listens on epoll, does the input multiplexing, and hands complete requests to a threadpool (a minimal sketch follows this list). futex is your friend here, in combination with e.g. a fast forward queue per thread. Although badly documented and unwieldy, futex offers exactly what's needed. epoll may return several events at a time, and futex lets you efficiently and in a precisely controlled manner wake N blocked threads at a time (N being min(num_cpu, num_events) ideally), and in the best case it does not involve an extra syscall/context switch at all.
- Not trivial to implement, takes some care.
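A minimal sketch of the "one epoll thread feeding a worker pool" idea. This is not the answer's exact design: it assumes a pthread mutex/condvar queue in place of raw futex (on Linux the condvar is itself built on futex), and a pipe stands in for client sockets; the queue layout, worker count, and "request" format are inventions of this example.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>

#define NWORKERS 4
#define QCAP     64

static struct {                         /* tiny fixed-size job queue */
    char *jobs[QCAP];
    int head, tail, count, done;
    pthread_mutex_t mtx;
    pthread_cond_t  cv;
} q = { .mtx = PTHREAD_MUTEX_INITIALIZER, .cv = PTHREAD_COND_INITIALIZER };

static void enqueue(char *job)
{
    pthread_mutex_lock(&q.mtx);
    q.jobs[q.tail] = job;
    q.tail = (q.tail + 1) % QCAP;
    q.count++;
    pthread_cond_signal(&q.cv);         /* wake exactly one worker */
    pthread_mutex_unlock(&q.mtx);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q.mtx);
        while (q.count == 0 && !q.done)
            pthread_cond_wait(&q.cv, &q.mtx);
        if (q.count == 0 && q.done) {   /* shutting down */
            pthread_mutex_unlock(&q.mtx);
            return NULL;
        }
        char *job = q.jobs[q.head];
        q.head = (q.head + 1) % QCAP;
        q.count--;
        pthread_mutex_unlock(&q.mtx);

        printf("worker handling request: %s", job);
        free(job);
    }
}

int main(void)
{
    int pfd[2];
    if (pipe(pfd) < 0)                  /* stands in for client sockets */
        return 1;

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = pfd[0] };
    epoll_ctl(ep, EPOLL_CTL_ADD, pfd[0], &ev);

    pthread_t tid[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);

    /* pretend two complete requests arrive on the "connection" */
    write(pfd[1], "GET /a\n", 7);
    write(pfd[1], "GET /b\n", 7);
    close(pfd[1]);

    struct epoll_event evts[16];
    int running = 1;
    while (running) {                   /* the single multiplexing thread */
        int n = epoll_wait(ep, evts, 16, -1);
        if (n <= 0)
            break;
        for (int i = 0; i < n; i++) {
            char buf[256];
            ssize_t len = read(evts[i].data.fd, buf, sizeof buf - 1);
            if (len <= 0) { running = 0; continue; }    /* "connection" closed */
            buf[len] = '\0';
            enqueue(strdup(buf));       /* hand the assembled request to the pool */
        }
    }

    pthread_mutex_lock(&q.mtx);
    q.done = 1;
    pthread_cond_broadcast(&q.cv);
    pthread_mutex_unlock(&q.mtx);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}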
fork (a.k.a. old-fashioned threadpool)
- Reasonably efficient for small and medium concurrency.
- Does not scale well beyond "few hundreds".
- Context switches are much more expensive (different address spaces!).
- Scales significantly worse on older systems where fork is much more expensive (deep copy of all pages). Even on modern systems fork is not "free", although the overhead is mostly mitigated by the copy-on-write mechanism. On large datasets which are also modified, a considerable number of page faults following fork may negatively impact performance.
- However, proven to work reliably for over 30 years.
- Ridiculously easy to implement and rock solid: If any of the processes crash, the world does not end. There is (almost) nothing you can do wrong.
- Very prone to "thundering herd".
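For comparison, a minimal sketch of the fork-per-connection server described above; the port number and the trivial echo handler are arbitrary choices for the example.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);               /* let the kernel reap children */

    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);            /* arbitrary example port */
    if (bind(lsock, (struct sockaddr *)&addr, sizeof addr) < 0 || listen(lsock, 16) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        int csock = accept(lsock, NULL, NULL);
        if (csock < 0)
            continue;
        if (fork() == 0) {                  /* child: owns exactly one connection */
            close(lsock);
            char buf[4096];
            ssize_t n;
            while ((n = read(csock, buf, sizeof buf)) > 0)
                write(csock, buf, n);       /* blocking I/O, no multiplexing needed */
            close(csock);
            _exit(0);
        }
        close(csock);                       /* parent keeps only the listener */
    }
}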
poll / select
- Two flavours (BSD vs. System V) of more or less the same thing.
- Somewhat old and slow, somewhat awkward usage, but there is virtually no platform that does not support them.
- Waits until "something happens" on a set of descriptors
- Allows one thread/process to handle many requests at a time.
- No multi-core usage.
- Needs to copy list of descriptors from user to kernel space every time you wait. Needs to perform a linear search over descriptors. This limits its effectiveness.
- Does not scale well to "thousands" (in fact, hard limit around 1024 on most systems, or as low as 64 on some).
- Use it because it's portable if you only deal with a dozen descriptors anyway (no performance issues there), or if you must support platforms that don't have anything better. Don't use it otherwise.
- Conceptually, a server becomes a little more complicated than a forked one, since you now need to maintain many connections and a state machine for each connection, and you must multiplex between requests as they come in, assemble partial requests, etc. A simple forked server just knows about a single socket (well, two, counting the listening socket), reads until it has what it wants or until the connection is half-closed, and then writes whatever it wants. It doesn't worry about blocking or readiness or starvation, nor about some unrelated data coming in, that's some other process's problem.
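A minimal sketch of such a poll() loop, with the per-connection "state machine" reduced to a plain echo; the port number and array size are arbitrary choices for the example.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAXFDS 128

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);
    bind(lsock, (struct sockaddr *)&addr, sizeof addr);
    listen(lsock, 16);

    struct pollfd fds[MAXFDS];
    int nfds = 1;
    fds[0].fd = lsock;
    fds[0].events = POLLIN;

    for (;;) {
        if (poll(fds, nfds, -1) < 0)        /* whole set copied to the kernel every call */
            break;

        if ((fds[0].revents & POLLIN) && nfds < MAXFDS) {
            int csock = accept(lsock, NULL, NULL);
            fds[nfds].fd = csock;           /* track the new connection */
            fds[nfds].events = POLLIN;
            fds[nfds].revents = 0;
            nfds++;
        }

        for (int i = 1; i < nfds; i++) {    /* linear scan over all descriptors */
            if (!(fds[i].revents & (POLLIN | POLLHUP)))
                continue;
            char buf[4096];
            ssize_t n = read(fds[i].fd, buf, sizeof buf);
            if (n <= 0) {                   /* closed or error: drop the slot */
                close(fds[i].fd);
                fds[i] = fds[--nfds];
                i--;
            } else {
                write(fds[i].fd, buf, n);   /* echo back; a real server would feed a
                                               per-connection state machine here */
            }
        }
    }
    return 0;
}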
epoll
- Linux only.
- Concept of expensive modifications vs. efficient waits:
- Copies information about descriptors to kernel space when descriptors are added (epoll_ctl)
- This is usually something that happens rarely.
- Does not need to copy data to kernel space when waiting for events (epoll_wait)
- This is usually something that happens very often.
- Adds the waiter (or rather its epoll structure) to the descriptors' wait queues.
- The descriptor therefore knows who is listening and directly signals the waiters when appropriate, rather than the waiters searching a list of descriptors.
- The opposite of how poll works.
- O(1) with small k (very fast) with respect to the number of descriptors, instead of O(n).
- Works very well with timerfd and eventfd (stunning timer resolution and accuracy, too).
- Works nicely with signalfd, eliminating the awkward handling of signals, making them part of the normal control flow in a very elegant manner.
- An epoll instance can host other epoll instances recursively
- Assumptions made by this programming model:
- Most descriptors are idle most of the time, few things (e.g. "data received", "connection closed") actually happen on few descriptors.
- Most of the time, you don't want to add/remove descriptors from the set.
- Most of the time, you're waiting on something to happen.
- Some minor pitfalls:
- A level-triggered epoll wakes all threads waiting on it (this is "works as intended"), therefore the naive way of using epoll with a threadpool is useless. At least for a TCP server, it is no big issue since partial requests would have to be assembled first anyway, so a naive multithreaded implementation won't do either way.
- Does not work as one would expect with file read/writes ("always ready").
- Could not be used with AIO until recently, now possible via eventfd, but requires a (to date) undocumented function.
- If the above assumptions are not true, epoll can be inefficient, and poll may perform equally or better.
- epoll cannot do "magic", i.e. it is still necessarily O(N) with respect to the number of events that occur.
- However, epoll plays well with the new recvmmsg syscall, since it returns several readiness notifications at a time (as many as are available, up to whatever you specify as maxevents). This makes it possible to receive e.g. 15 EPOLLIN notifications with one syscall on a busy server, and read the corresponding 15 messages with a second syscall (a 93% reduction in syscalls!). Unfortunately, all operations on one recvmmsg invocation refer to the same socket, so it is mostly useful for UDP based services (for TCP, there would have to be a kind of recvmmsmsg syscall which also takes a socket descriptor per item!).
- Descriptors should always be set to nonblocking and one should check for EAGAIN even when using epoll, because there are exceptional situations where epoll reports readiness and a subsequent read (or write) will still block. This is also the case for poll/select on some kernels (though it has presumably been fixed).
- With a naive implementation, starvation of slow senders is possible. When blindly reading until EAGAIN is returned upon receiving a notification, it is possible to indefinitely read new incoming data from a fast sender while completely starving a slow sender (as long as data keeps coming in fast enough, you might not see EAGAIN for quite a while!). Applies to poll/select in the same manner. The sketch after this list caps the amount read per wakeup for exactly this reason.
- Edge-triggered mode has some quirks and unexpected behaviour in some situations, since the documentation (both man pages and TLPI) is vague ("probably", "should", "might") and sometimes misleading about its operation.
The documentation states that several threads waiting on one epoll are all signalled. It further states that a notification tells you whether IO activity has happened since the last call to epoll_wait (or since the descriptor was opened, if there was no previous call).
The true, observable behaviour in edge-triggered mode is much closer to "wakes the first thread that has called epoll_wait, signalling that IO activity has happened since anyone last called either epoll_wait or a read/write function on the descriptor, and thereafter only reports readiness again to the next thread calling or already blocked in epoll_wait, for any operations happening after anyone called a read (or write) function on the descriptor". It kind of makes sense, too... it just isn't exactly what the documentation suggests.
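A sketch of the nonblocking read pattern the last few points call for: the descriptor is made nonblocking up front, reads continue until EAGAIN, and the bytes consumed per wakeup are capped so a fast sender cannot starve everyone else. The budget constant and the idea of a caller-maintained "ready list" are conventions of this sketch, not part of the epoll API; the main() is only a self-contained demo using a pipe.

#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define PER_WAKEUP_BUDGET (64 * 1024)

static void make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Returns true if the descriptor may still hold unread data, i.e. the caller
 * must remember it on its own ready list and come back later (mandatory with
 * edge-triggered epoll, since no further notification will arrive for it). */
static bool drain_fd(int fd, void (*consume)(const char *, size_t))
{
    char buf[4096];
    size_t used = 0;
    while (used < PER_WAKEUP_BUDGET) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0) { consume(buf, (size_t)n); used += (size_t)n; continue; }
        if (n == 0)
            return false;                          /* peer closed the connection */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return false;                          /* really drained, despite "ready" */
        if (errno == EINTR)
            continue;                              /* interrupted, just retry */
        return false;                              /* genuine error: caller closes fd */
    }
    return true;                                   /* budget used up, revisit later */
}

static void consume(const char *data, size_t len)
{
    printf("consumed %zu bytes: %.*s\n", len, (int)len, data);
}

int main(void)
{
    int pfd[2];
    if (pipe(pfd) < 0)
        return 1;
    make_nonblocking(pfd[0]);
    write(pfd[1], "hello", 5);         /* pretend epoll just reported EPOLLIN */
    bool more = drain_fd(pfd[0], consume);
    printf(more ? "more data may remain\n" : "descriptor drained\n");
    return 0;
}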
kqueue
- The BSD analogue to epoll; different usage, similar effect.
- Also works on Mac OS X.
- Rumoured to be faster (I've never used it, so cannot tell if that is true).
- Registers events and returns a result set in a single syscall.
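A minimal kqueue sketch (BSD / macOS) showing how a single kevent() call both submits a registration and collects results; stdin is used here only to have a concrete descriptor to watch.

#include <stdio.h>
#include <sys/event.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int kq = kqueue();

    struct kevent change;
    EV_SET(&change, STDIN_FILENO, EVFILT_READ, EV_ADD, 0, 0, 0);

    struct kevent events[8];
    /* Register the change and wait for events in one syscall (blocks until
     * something is typed on stdin). */
    int n = kevent(kq, &change, 1, events, 8, NULL);

    for (int i = 0; i < n; i++)
        printf("fd %d readable, %ld bytes pending\n",
               (int)events[i].ident, (long)events[i].data);

    close(kq);
    return 0;
}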
IO Completion ports
- Epoll for Windows, or rather epoll on steroids.
- Works seamlessly with everything that is waitable or alertable in some way (sockets, waitable timers, file operations, threads, processes)
- If Microsoft got one thing right in Windows, it is completion ports:
- Works worry-free out of the box with any number of threads
- No thundering herd
- Wakes threads one by one in a LIFO order
- Keeps caches warm and minimizes context switches
- Respects number of processors on machine or delivers the desired number of workers
- Allows the application to post events, which lends itself to a very easy, failsafe, and efficient parallel work queue implementation (schedules upwards of 500,000 tasks per second on my system).
- Minor disadvantage: Does not easily remove file descriptors once added (must close and re-open).
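A sketch of using a completion port purely as the work queue described in the last point; real servers would also associate sockets or files with the port. The worker count and the key-based shutdown convention are inventions of this example.

#include <stdio.h>
#include <windows.h>

#define NWORKERS 4

static DWORD WINAPI worker(LPVOID arg)
{
    HANDLE port = (HANDLE)arg;
    DWORD bytes;
    ULONG_PTR key;
    LPOVERLAPPED ov;

    /* Threads are woken one at a time, LIFO, as completions arrive. */
    while (GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE)) {
        if (key == 0)                       /* our made-up "shut down" token */
            return 0;
        printf("worker %lu got task %lu\n", GetCurrentThreadId(), (unsigned long)key);
    }
    return 0;
}

int main(void)
{
    /* 0 concurrent threads means "one per CPU". */
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    HANDLE threads[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)
        threads[i] = CreateThread(NULL, 0, worker, port, 0, NULL);

    for (ULONG_PTR task = 1; task <= 100; task++)
        PostQueuedCompletionStatus(port, 0, task, NULL);    /* post work items */

    for (int i = 0; i < NWORKERS; i++)                      /* one stop token each */
        PostQueuedCompletionStatus(port, 0, 0, NULL);

    WaitForMultipleObjects(NWORKERS, threads, TRUE, INFINITE);
    CloseHandle(port);
    return 0;
}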
Frameworks
libevent -- The 2.0 version also supports completion ports under Windows.
ASIO -- If you use Boost in your project, look no further: You already have this available as boost-asio.
Any suggestions for simple/basic tutorials?
The frameworks listed above come with extensive documentation. The Linux man pages and MSDN explain epoll and completion ports extensively.
Mini-tutorial for using epoll:
int my_epoll = epoll_create1(0);   // epoll_create's size argument is ignored nowadays (but must be > 0), so prefer epoll_create1
struct epoll_event e;
e.events = EPOLLIN;                // we want to know when there is something to read
e.data.fd = some_socket;           // some_socket is whichever descriptor you want to watch
epoll_ctl(my_epoll, EPOLL_CTL_ADD, some_socket, &e);
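Waiting then looks roughly like this (the event buffer size of 10 and the do_something() handler are placeholders, not part of the API):

struct epoll_event evt[10];
int num;
while ((num = epoll_wait(my_epoll, evt, 10, -1)) > 0)   // -1 = block until something happens
    for (int i = 0; i < num; i++)
        do_something(evt[i].data.fd);                   // dispatch on whichever descriptor fired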