The Single-Queue Dispatcher Pattern
Today is Black Friday in the States: the day after Thanksgiving when the Christmas shopping season officially begins. I usually try to avoid stores on this day, but my wife needs a new PC, so we went to Fry's.
There was quite a crowd there as we were picking out components, but the experience was incredible. They had extra staff helping customers on the floor, and even more manning the registers. The line moved quickly and smoothly due to their excellent planning.
The system that Fry's uses is a single queue. One person stands at the head of the queue and dispatches customers to multiple registers. Unlike a grocery store, with its one queue per register, this system doesn't force you to choose your server before getting in line. Have you ever gotten stuck behind the person with three price checks, only to watch people who queued up after you get served first in the next line? That won't happen at Fry's.
The single-queue solution is so simple and elegant that it is my favorite parallel-service software pattern. It applies any time independent units of work flow concurrently into a system. Examples I've seen include HTTP requests, financial transactions, and remote procedure calls.
This pattern has three components. First is the queue, which holds units of work. Second is the dispatcher, which draws units of work from the queue. Third is the pool, which holds the processors to which the dispatcher assigns work.
The queue is a thread-safe container. It may be implemented in memory if it is OK for work items to be lost (e.g. HTTP requests, which will be retried). Or it may be implemented in a database if work items cannot be lost (e.g. financial transactions).
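For the in-memory case, Python's standard library provides such a container out of the box. A minimal sketch (the work items here are illustrative strings, not a real protocol):

```python
import queue

# queue.Queue is thread-safe: producers and consumers may share it
# freely without additional locking.
work_queue = queue.Queue()

# A producer enqueues independent units of work.
work_queue.put("GET /index.html")
work_queue.put("GET /style.css")

# A consumer blocks until a unit of work is available, then takes
# the oldest one (FIFO).
item = work_queue.get()
```

The database-backed variant has the same interface but survives a crash, at the cost of a round trip per unit of work.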
The dispatcher is a single thread in a tight loop. It blocks on the queue until a unit of work is available, then it blocks on the pool until a processor can be allocated. More sophisticated dispatchers may even grow and shrink the pool based on demand.
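That loop can be sketched in a few lines. This is a toy illustration, not any particular library's API: the processors are just callables that record the work they receive and return themselves to the pool when done.

```python
import queue
import threading

work_queue = queue.Queue()      # units of work
processor_pool = queue.Queue()  # idle processors
results = []                    # what each processor handled
workers = []                    # threads spawned by the dispatcher

def make_processor(name):
    """A processor: a callable that notionally owns its resources."""
    def process(work):
        results.append((name, work))
        processor_pool.put(process)  # done: back to the pool, resources intact
    return process

for i in range(2):
    processor_pool.put(make_processor(f"proc-{i}"))

def dispatcher():
    """Single thread, tight loop: block on the queue, then on the pool."""
    while True:
        work = work_queue.get()
        if work is None:                  # sentinel: shut the loop down
            break
        processor = processor_pool.get()  # blocks until one is free
        worker = threading.Thread(target=processor, args=(work,))
        workers.append(worker)
        worker.start()

t = threading.Thread(target=dispatcher)
t.start()
for job in ["a", "b", "c"]:
    work_queue.put(job)
work_queue.put(None)  # stop once the backlog drains
t.join()
for w in workers:
    w.join()
```

Note that with two processors and three jobs, the dispatcher naturally blocks on the pool for the third job until a processor comes back. Growing and shrinking the pool would happen at that blocking point.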
The pool is another thread-safe collection, this one of processors. A processor carries with it the resources it needs to do the work, such as threads or database connections. Allocation of these resources can be expensive, so processors are returned to the pool with their resources intact. Usually the pool is implemented as a stack, so that a small number of processors will be kept as busy as possible. Processors that sink to the bottom of the pool will go stale, and will be the first to be reclaimed by a sophisticated dispatcher.
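Python's `queue.LifoQueue` gives the stack discipline directly. A toy illustration, with strings standing in for processors:

```python
import queue

# A pool implemented as a stack (LIFO): the most recently returned
# processor is handed out first, so a small hot set stays busy while
# the rest sink to the bottom and go stale.
pool = queue.LifoQueue()
for name in ["proc-a", "proc-b", "proc-c"]:
    pool.put(name)

first = pool.get()   # last in, first out
pool.put(first)      # returned to the top of the stack
again = pool.get()   # the same hot processor is reused immediately
```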
This pattern is supported by Berkeley Sockets. The accept() function blocks on a queue within the TCP/IP stack. The ideal use of this function is for a dispatcher thread to call it in a tight loop and hand the sockets it returns to threads in a pool.
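A sketch of that shape using Python's socket module (a real dispatcher would loop forever and draw its workers from a pool; here one connection and a freshly spawned thread keep the example self-contained):

```python
import socket
import threading

# The listening socket's backlog is the queue inside the TCP/IP stack.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(5)
port = listener.getsockname()[1]

def handle(conn):
    """Worker: echo one message back, then close the connection."""
    with conn:
        conn.sendall(conn.recv(1024))

def dispatcher(n):
    # Tight loop: accept() blocks on the stack's queue, then each
    # connection is handed off to a worker thread.
    for _ in range(n):
        conn, _addr = listener.accept()
        threading.Thread(target=handle, args=(conn,)).start()

t = threading.Thread(target=dispatcher, args=(1,))
t.start()

# A client, to exercise the loop once:
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
t.join()
listener.close()
```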
This pattern is implemented in .NET's ThreadPool. The QueueUserWorkItem method places a unit of work into a managed queue, and the .NET Framework dispatches that work to a pooled thread for processing.
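Python's standard library offers the same shape: `ThreadPoolExecutor.submit` plays the role of QueueUserWorkItem (an analogy in Python, not the .NET API itself):

```python
from concurrent.futures import ThreadPoolExecutor

# submit() enqueues a unit of work; the executor dispatches it to a
# pooled thread and hands back a future for the result.
with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(sum, [1, 2, 3])
    result = future.result()  # blocks until a pooled thread has run it
```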
There are countless other implementations of this pattern, from JMS to MSMQ. But even if one of these solutions is more than you need, you can implement the pattern yourself. It is simple, efficient, and effective.