I’ve been speaking recently at an Intel Software Conference in Barcelona about “thinking parallel”. I was saying, for instance, that there is little point in highly concurrent parallel programming producing reports faster if the report then sits on someone’s desk for a week, queuing up in a serial authorisation process with a single overloaded (human) processor.
Thinking parallel means thinking about concurrency in business processes too, something that isn’t often mentioned because (I think) many of the thinkers in the parallel processing space come from engineering and HPC (High Performance Computing) backgrounds where the problems are somewhat different to those in the mainstream business space. As are the architectures—general business computing is usually on a “shared memory” architecture so what one processor is doing can easily affect what others are doing; whereas HPC is often “shared nothing” (which, in some ways, is easier).
A real issue, now that performance improvements in mainstream computing will come from increasing (energy efficient) parallelism rather than ever-increasing clock speeds, is that many of the tools that developers need to help them cope with concurrency are built for HPC academics and performance specialists rather than mainstream business programmers. This makes some of the developer-friendly enhancements for parallelism in Microsoft’s Visual Studio 2010 welcome.
However, I was a bit taken aback when Steve Teixeira (Product Unit Manager of the Parallel Computing Platform Developer Division at Microsoft Corporation; where do they get these snappy job titles?) said that feedback from developers using VS 2010 was that help with the “correctness” of parallel programming was what they wanted next. I’d have thought that “correctness” (avoiding race-related result errors and deadlocks) would be about the first thing a developer needs! Still, if there’s one thing Microsoft is good at, it’s making development tools that developers like, so I’m looking forward, as usual, to the next release.
James Reinders (Director of Marketing and Business for Intel Software Developer Products) did pull me up on a rather throwaway remark in my talk, something like “you can’t just rely on the compiler to look after concurrency”. “It’s true,” he says, “but it deserves a whole article, not just a sentence”. Well, yes, perhaps its full implications were a bit hard for the audience to take in after lunch, and James goes on to make a good point: you can’t rely on the compiler to choose your algorithm for you; you can’t rely on the compiler to discover parallelism, in general; but you can rely on a compiler to package up parallelism you’ve identified, and no programmer should try to compete with a compiler in coding up the details of parallelism.
So, here’s what I mean, in a bit more detail than I went into in the presentation (which you can find, with my explanatory notes, here). “Push button” programming is a dream that hasn’t been fully realised even for serial programming, so promising it for parallel programming (as in “write serial logic, and let the compiler take care of breaking it up into threads and tasks and scheduling these across parallel processors”) seems rather optimistic; and, of course, neither Intel nor Microsoft (just two examples) actually makes this promise. They promise to assist developers by providing high-level abstractions for parallel programming, not to do all the work for them. Nevertheless, I remember, many years ago, telling a developer that he needed to think about concurrency issues when developing a “groupware” application for the enterprise, and being told that “the compiler will look after it”. It’s an attractive idea for many developers, I suspect.
And James confirms this: “Oh, if I had a penny for every time a customer said ‘Won’t the compiler just do it?’... well, I’d be rich,” he says, and goes on: “I’ve had to break this news over and over. Where did people get this high opinion of compiler magic? Compilers are amazing, but not that amazing... and they aren’t going to get there any time soon. Taking a serial program and transforming it into a parallel one is generally an algorithm transformation. That won’t happen without the sort of intelligence humans still have and machines do not. In scientific code, loop-based mathematics can have parallelism that is ‘discovered’. This may feel common in scientific code, but it doesn’t apply to programs outside this domain... because scientific loops run a long time and process a lot of data... that is key”.
Think of the issues the compiler faces. For a start, it has to know how many processors are available at run time; the latest compilers and compiler extensions handle this pretty well, but you do need to be using the latest compiler technology, which can abstract “tasks” and handle allocating them to processors at run time. James claims that Intel’s “OpenMP and TBB [Intel’s supported abstractions for parallel processing, explained below] both do this with virtually no overhead”.
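The tools named above are C/C++ libraries, but the general idea — ask the system for its processor count at run time, and let a task abstraction do the mapping — can be sketched in any language. Here is a minimal illustration using Python’s standard library (the `square` function and the work items are invented for the example):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # A stand-in for a unit of real work.
    return n * n

# Discover the processor count at run time instead of hard-coding it.
workers = os.cpu_count() or 1

# The executor packages the work as "tasks" and maps them onto worker
# threads: the programmer identifies the parallelism, the library
# handles allocating it to processors.
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(square, range(10)))

print(results)
```

Note that `pool.map` preserves the input order of results, so the answer is the same however many workers actually ran.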
In theory, the compiler could perform a static analysis of the code and determine code blocks that are candidates for parallel running. However, I believe that it would have to be very conservative about using this information, as the safety (or not) of running code in parallel threads can only be fully determined by dynamic analysis at run time, on the specific workload of concern.
So, what you can do is give the compiler “hints”: tell it which loops, say, can be parallelised and which processing threads can run in parallel. However, even then there are issues, not least of which is whether the “hint” is correct. Perhaps two pieces of code can safely run in parallel with the business process employed in London, say; but a slightly different business process employed in Shanghai might mean that, sometimes, a race condition or a deadlock can develop. There’s a more complete description of races and deadlocks than I want to go into here, with code examples, on the Microsoft support site; it refers to multithreaded Visual Basic .NET but the issues are general.
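A “hint” of this kind amounts to the programmer asserting that pieces of work are independent. A small Python sketch (the pricing figures and `tax` function are invented) of the difference between a loop that can safely be handed to a parallel map and one that carries a dependency between iterations:

```python
from concurrent.futures import ThreadPoolExecutor

prices = [100, 250, 75]

def tax(p):
    # Each item is processed independently of the others, so the
    # programmer's "hint" that this loop is parallelisable is correct.
    return p + p // 5   # add 20%, in integer arithmetic

with ThreadPoolExecutor() as pool:
    taxed = list(pool.map(tax, prices))

# A running total, by contrast, depends on the previous iteration's
# result; handing this loop to a naive parallel map would be an
# incorrect "hint".
total = 0
for t in taxed:
    total += t
```

The danger the article describes is exactly a hint like the first one turning out, under some workload, to behave like the second.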
I believe that there’s partly a computer science issue here: you can’t, in general, prove that a computer program (the compiler) can completely and correctly automate a solution to a problem (such as scheduling program threads across an arbitrary number of processors for an arbitrary workload). This is probably good, as it keeps developers in a job! James agrees, “Compilers are a collection of approximations to NP-hard (or worse) problems. Yes, compilers cannot, in general, be proven correct—but compilers using parallelism is not unique in that regard”.
However, that said, just think of the simple practical difficulties a compiler would face even with “good enough push button automation” of the production of multithreaded object code which will automatically schedule itself (or allow itself to be scheduled) across an arbitrary number of processors.
To start with, thread handling is an overhead, so (regardless of workload) the code mustn’t over-provision itself with threads (though what “over-provision” means rather depends on the number of processors and the system configuration found at run time).
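In other words, the amount of concurrent work should be derived from the machine found at run time, not fixed in the source. A sketch in Python (the workload is invented) that creates roughly one chunk of work per available processor, rather than one thread per item:

```python
import os
from concurrent.futures import ThreadPoolExecutor

items = list(range(1000))
workers = os.cpu_count() or 1

# One thread per item would drown the useful work in thread-management
# overhead; instead, split the work into (roughly) one chunk per worker.
chunk = -(-len(items) // workers)   # ceiling division
chunks = [items[i:i + chunk] for i in range(0, len(items), chunk)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    partial_sums = list(pool.map(sum, chunks))

total = sum(partial_sums)
```

The same code sizes itself sensibly on a two-core laptop or a many-core server, which is the point of the task abstractions the article mentions.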
Then, it must avoid any possibility of deadlock, which is (roughly) a consequence of holding locks on resources, particularly locks held for a long time. However, you must have some locks as, otherwise, things may process in the wrong order. Even if the locking gives rise to a potential deadlock, this may not matter in practice if the locks are released quickly; but releasing locks quickly depends on the actual workload at run time (both on workload volumes, since a particular thread on an overloaded machine may run unexpectedly slowly, and on workload characteristics).
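The standard discipline for ruling out deadlock is to acquire locks in one fixed global order and hold them as briefly as possible. A small Python sketch (the two “account balance” variables are invented for the example):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
balance_a, balance_b = 100, 100

def transfer(amount):
    global balance_a, balance_b
    # Every thread takes lock_a before lock_b; because no thread ever
    # waits for lock_a while holding lock_b, the circular wait that
    # produces a deadlock cannot arise. The locks are also released
    # quickly: they are held only for the two updates.
    with lock_a:
        with lock_b:
            balance_a -= amount
            balance_b += amount

threads = [threading.Thread(target=transfer, args=(10,))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If one thread took the locks in the opposite order, the program could (depending on timing and workload) hang forever, which is exactly the run-time-dependence problem described above.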
And, of course, the code must always give correct results regardless of possible race conditions (a race condition arises, roughly, when two threads are updating the same object and whichever happens to run last overwrites the changes made by the other thread). This means that each thread must be processing independent units of work; or, if not, that critical operations must be protected with locks (remembering that, in general, the more locks the less concurrency you’ll achieve). Again, whether a potential race condition is met in practice depends on workload (on an overloaded machine a thread which usually completes well before another thread wants to update a potentially shared object, might not finish in time on occasion). Intel provides tools which will let a skilled developer recognise and address potential race conditions but they do need user understanding and input.
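The canonical example of a race is two threads incrementing a shared counter: each performs “read, add one, write back”, and an unlucky interleaving makes one write overwrite the other. A Python sketch of the lock-protected (correct) version, with invented counts:

```python
import threading

counter = 0
counter_lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        # Without the lock, two threads could both read the same old
        # value, each add one, and write back; one increment would be
        # lost. The lock makes read-add-write one critical section.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note the trade-off from the paragraph above in miniature: the lock makes the result correct, but it also serialises the updates, so this loop gains nothing from running on four threads.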
These issues with developing code for optimal concurrent processing are not new, and in the thirty or more years since IBM first produced multiprocessor mainframes for business computing, things haven’t changed much, except in degree. We expect many more processors to work in parallel these days, on much less specialised systems (concurrency in business transaction processing systems using a mainframe database management system is comparatively straightforward).
The tools which help us cope with concurrency have got a lot better and we now have coping strategies around using multithreaded utilities (database management systems, application servers, message queue managers etc.) which let us get some of the benefits of parallel processing without thinking about it too much. Nevertheless, the issue is arguably harder to deal with these days (more sophisticated platforms, more complex applications and developers used to serial coding), so we need better (and more developer-friendly) tools.
Nevertheless, there is hope. As James says: “I think most developers think the new models are about making the coding easier, or the parallelism clearer. Yes—they do that. But their real value is reducing (or eliminating) the sources of bugs—both in correctness and scaling. OpenMP, TBB, Cilk and Ct—all help enormously with this. As do Microsoft’s TBB-like solutions of PPL and TPL. That’s the real revolution going on. One that asks programmers to code in a way that steers them away from disaster, and asks the compiler to help only in ways that it can”.
- OpenMP (Open Multi-Processing; see Wikipedia) is a portable, scalable model for developing multi-threaded shared memory parallel applications in C/C++ and Fortran (Intel has one implementation, there are others).
- TBB is Intel’s popular Threading Building Blocks library for parallel applications in C++. In Intel’s words, TBB “represents a higher-level, task-based parallelism that abstracts platform details and threading mechanisms for scalability and performance”, it’s rather more than just a threads-replacement library.
- Cilk is a general-purpose programming language based on GNU C designed for multithreaded parallel computing in HPC applications—Cilk introduces a few keywords to direct parallel programming; if you take these out of a Cilk program, it compiles as a serial C program. Cilk Arts developed Cilk++ for more general commercial applications in C++ and this was acquired by Intel in 2009.
- Ct (“C for throughput”—only a working name, not what it will be called if it is released commercially) is a generalised data parallel programming solution under development by Intel, that aims to free application developers from dependencies on particular low-level parallelism mechanisms or hardware architectures—you can download the beta here.
- TPL is Microsoft’s “Task Parallel Library” in Visual Studio 2010, a domain-specific embedded language for expressing concurrency that lets programmers express potential parallelism in existing sequential code.
- PPL is Microsoft’s Parallel Patterns Library in Visual Studio 2010, which provides support for task parallelism; generic parallel algorithms that act on collections of data in parallel; and, generic parallel containers and objects that provide safe concurrent access to their elements.
“If you have been affected by any of the issues raised in this article, professional help and counselling is available.” Well, for example, starting May 12th Intel’s parallelism evangelist, James Reinders, is interviewing Microsoft evangelists Steve Teixeira and Herb Sutter and they’ll be talking about concurrency issues—register to join in here. I haven’t seen this video in advance, but I always enjoy talking to James—he talks about the real issues rather than just about marketing Intel, in my experience.