Wading in the shallow end of the pool - Thread Pools

A few weeks ago I was following with interest Intel's announcement that they were open-sourcing their Threading Building Blocks (TBB) library. OK, so it is written in C++, makes heavy use of templates, and, like nearly every C++ library out there, leans on heavy macro usage with umpteen levels of nested macros... Even so, either I've gotten way better at reading and understanding C++ or these libraries are getting simpler, because I was able to understand it quite well. This got me thinking a bit, and so I decided to take a quick look at Windows thread pools.

A thread pool is simply a way to utilize the CPU(s) more efficiently while incurring as little thread context-switching overhead as possible. If your application needs to do lots of quick little tasks and can do them concurrently, the build-up and tear-down overhead of threads may actually cancel out any gains from trying to push those tasks onto threads. Think of thread pooling as a way to queue up a bunch of work items and let them run on a number of available threads that are created once and used over and over. Thus you can amortize the build-up and tear-down of the threads across all the work they do. Once a work item is assigned to a thread, that work item has exclusive access to that particular thread until it finishes and allows the thread to return to the pool. A thread pool is not the solution for everything, though. For instance, if you have a task that spends most of its time blocked, you should not run it on a thread pool thread.

Off I went on a discovery mission to see how Windows thread pooling works. I wanted to visually see what was going on as I scheduled or queued work items. I came up with a simple TThreadPool class. Here's the declaration of what I came up with:

type
  TThreadPool = class
  private
    type
      TUserWorkItem = class
        FSender: TObject;
        FWorkerEvent: TNotifyEvent;
      end;

    class procedure QueueWorkItem(Sender: TObject; WorkerEvent: TNotifyEvent; Flags: ULONG); overload; static;
  public
    class procedure QueueWorkItem(Sender: TObject; WorkerEvent: TNotifyEvent); overload; static;
    class procedure QueueIOWorkItem(Sender: TObject; WorkerEvent: TNotifyEvent); static;
    class procedure QueueUIWorkItem(Sender: TObject; WorkerEvent: TNotifyEvent); static;
  end;


You'll notice that this class is never meant to be instantiated, since all of its methods are class static. The method of interest is QueueWorkItem, which simply schedules WorkerEvent to be called from a thread pool thread whenever one becomes available. It is up to you to make sure that the instance on which the WorkerEvent handler is called is still valid at the time it runs. The other two public methods correspond to some of the flags you can pass to the Windows QueueUserWorkItem API; they're not used right now. Sender is passed through to the event handler specified by WorkerEvent, so that object should carry the context in which the work item is to run.

Now here's the implementation of that class:


function InternalThreadFunction(lpThreadParameter: Pointer): Integer; stdcall;
begin
  Result := 0;
  try
    try
      with TThreadPool.TUserWorkItem(lpThreadParameter) do
        if Assigned(FWorkerEvent) then
          FWorkerEvent(FSender);
    finally
      TThreadPool.TUserWorkItem(lpThreadParameter).Free;
    end;
  except
    // Eventually this will need to somehow synchronously notify the main thread and either
    // re-raise the exception over there or otherwise provide some information about the
    // exception to the main thread.
  end;
end;

{ TThreadPool }

class procedure TThreadPool.QueueWorkItem(Sender: TObject; WorkerEvent: TNotifyEvent);
begin
  QueueWorkItem(Sender, WorkerEvent, WT_EXECUTEDEFAULT);
end;

class procedure TThreadPool.QueueIOWorkItem(Sender: TObject; WorkerEvent: TNotifyEvent);
begin
  QueueWorkItem(Sender, WorkerEvent, WT_EXECUTEINIOTHREAD);
end;

class procedure TThreadPool.QueueUIWorkItem(Sender: TObject; WorkerEvent: TNotifyEvent);
begin
  QueueWorkItem(Sender, WorkerEvent, WT_EXECUTEINUITHREAD);
end;

class procedure TThreadPool.QueueWorkItem(Sender: TObject; WorkerEvent: TNotifyEvent; Flags: ULONG);
var
  WorkItem: TUserWorkItem;
begin
  if Assigned(WorkerEvent) then
  begin
    IsMultiThread := True;
    WorkItem := TUserWorkItem.Create;
    try
      WorkItem.FWorkerEvent := WorkerEvent;
      WorkItem.FSender := Sender;
      if not QueueUserWorkItem(InternalThreadFunction, WorkItem, Flags) then
        RaiseLastOSError;
    except
      WorkItem.Free;
      raise;
    end;
  end;
end;


To see just what is going on I wrote this little application:


[Screenshot: ThreadPoolApp]


The numbers in the list box represent the thread ID of the thread that is currently running. The band of colors visually shows how the threads are scheduled. What is interesting is that this is what it looks like after about the third or fourth run. The first time it runs, each color is painted in sequence in a clearly serialized manner; subsequent iterations seem to interleave more and more. This is running on a dual-core system. You can get the whole application in Code Central.

As multi-core systems become more and more mainstream (aren't they already??), your applications really should begin to take advantage of them. The problem is that multi-threaded, or concurrent, programming is not easy: we humans tend to think serially, so it is conceptually tricky to grasp all the various nuances of concurrency. This is where CodeGear is looking to help. By providing simple, easy-to-understand tools and libraries, we can help bring multi-core programming out of the realm of voodoo and black magic and into the hands of developers of all skill levels. This will involve providing both library and compiler/tool support.