A thread is a context of execution within a program. Multithreaded programming deals with designing a program to have parts of it execute concurrently.
A site devoted to lock-free algorithms, scalable architecture, multicore design patterns, parallel computations, threading libraries, tooling support and related topics.
Application-Level Abstractions for Lock-Free Data Sharing
Describes lock-free data sharing, otherwise known as "wait-free data sharing" as an alternative to the use of locks.
Apply Critical Sections Consistently
Critical sections are the One True Tool for guaranteeing mutual exclusion on shared variables. Like most tools, they must be applied consistently, and with the intended meanings.
Avoid Exposing Concurrency: Hide It Inside Synchronous Methods
Explains where to start when trying to add concurrency to a mass of existing code.
Bibliography on Threads and Multithreading
Part of the Computer Science Bibliography Collection.
Break Up and Interleave Work to Keep Threads Responsive
Breaking up is hard to do, but interleaving can be even subtler.
Frequently asked questions (by Bryan O'Sullivan).
Fundamental Concepts of Parallel Programming
Explains fundamental concepts for moving from a linear to a parallel programming model.
Generic Synchronization Policies in C++
Most uses of synchronization code in multi-threaded applications fall into a small number of high-level "usage patterns", or what can be called generic synchronization policies (GSPs). This paper illustrates how the use of such GSPs simplifies the writing of thread-safe classes. In addition, this paper presents a C++ class library that implements commonly-used GSPs.
Introduction to Priority Inversion
Gives an introduction to priority inversion and shows a pair of techniques to avoid it.
It's Not Always Nice To Share
It isn't just languages that have poor support for thread-local storage; operating systems do too.
Presents a solution to races and deadlocks based on a well-known deadlock-avoidance protocol and shows how it can be enforced by the compiler. It can be applied to programs in which the number of locks is fixed and known up front.
Lock-Free Code: A False Sense of Security
Writing lock-free code can confound anyone, even expert programmers, as Herb shows in this article.
Lock-free Interprocess Communication
Interprocess communication is an essential component of modern software engineering. Often, lock-free IPC is accomplished via special processor instructions. This article proposes a communication mechanism that requires only the atomic writing of a processor word from the processor cache into main memory, and the atomic reading of a processor word from main memory into a register or the processor cache.
The Many Faces of Deadlock
Explains that deadlock can happen whenever there is a blocking (or waiting) cycle among concurrent tasks.
Maximize Locality, Minimize Contention
Explains why, in the concurrent world, locality is a first-order issue that trumps most other performance considerations. Locality is no longer just about fitting well into cache and RAM, but about avoiding scalability busters by keeping tightly coupled data physically close together and separately used data far, far apart.
Measuring Parallel Performance: Optimizing a Concurrent Queue
Shows several ways to write a fast, internally synchronized queue, one that callers can use without any explicit external locking or other synchronization, and compares their performance.
Multi-threaded Debugging Techniques
Describes a number of general purpose debugging techniques for multi-threaded applications.
Multithreaded File I/O
So far, multithreaded file I/O is an under-researched field. Although it is simple to measure, there is not much common knowledge about it. The measurements presented here show that multithreading can improve the performance of file access directly, as well as indirectly by utilizing available cores to process the data read.
The Pillars of Concurrency
This article makes the case that a consistent mental model is needed to talk about concurrency.
Very lightweight stackless threads that provide linear code execution for event-driven systems and are designed to use little memory; the library is pure C, with no platform-specific assembly, and is usable with or without an OS. Open source, BSD-type license.
Describes some key principles that will help in mastering the "black art" of writing multithreaded code.
Higher order threads for C++; tutorial and reference manual.
Sharing Is the Root of All Contention
Sharing requires waiting and overhead, and is a natural enemy of scalability. This article focuses on one important case, namely mutable (writable) shared objects in memory, which are an inherent bottleneck to scalability on multicore systems.
Software and the Concurrency Revolution
Focuses on the implications of concurrency for software and its consequences for both programming languages and programmers. (Herb Sutter and James Larus)
State Threads Library
Small application library for writing fast, highly scalable Internet programs on Unix-like platforms. Open source, MPL or GPL.
A Thread Performance Comparison
Compares Windows NT and Solaris on a symmetric multiprocessor machine.
Understanding Parallel Performance
Explains how to accurately analyze the real performance of parallel code and lists some basic considerations and common costs.
Use Lock Hierarchies to Avoid Deadlock
Explains how to use lock hierarchies to avoid deadlock by assigning each shared resource a level that corresponds to its architectural layer.
Use Thread Pools Correctly: Keep Tasks Short and Nonblocking
A thread pool hides a lot of details, but using one effectively requires some awareness of what a pool does under the covers, so as to avoid inadvertently hitting performance and correctness pitfalls.
Use Threads Correctly = Isolation + Asynchronous Messages
Motivates and illustrates best practices for using threads: techniques that will make concurrent code easier to write correctly and to reason about with confidence.
volatile - Multithreaded Programmer's Best Friend
Discusses the usage of the volatile keyword in multithreaded C++ programs.
What's New in Boost Threads?
The Boost.Thread library, which enables the use of multiple threads of execution with shared data in portable C++ code, has undergone some major changes.
Writing Lock-Free Code: A Corrected Queue
Explores lock-free code by focusing on creating a lock-free queue.
Welcome to the Jungle
Herb Sutter is looking at how mainstream hardware is becoming permanently parallel, heterogeneous, and distributed. (December 29, 2011)
Concurrency in the D Programming Language
Andrei Alexandrescu explains recent hardware changes allowing concurrency and how the D programming language addresses these possibilities. (July 06, 2010)
Prefer Futures to Baked-In "Async APIs"
Explains that it's important to separate "what" from "how" when designing concurrent APIs. (January 15, 2010)
Prefer Structured Lifetimes: Local, Nested, Bounded, Deterministic
What's good for the function and the object is also good for the thread, the task, and the lock. (November 11, 2009)
Avoiding the Perils of C++0x Data Races
Find out what dangers race conditions in general and C++0x data races in particular pose to concurrent code, as well as the strategies for avoiding them. (September 10, 2009)
Practical Lock-Free Buffers
Looks at how lock-free programming avoids system failure by tolerating individual process failures. (August 26, 2009)
Multi-threaded Algorithm Implementations
Explores effective uses of threads by looking at a multi-threaded implementation of the QuickSort algorithm and reports on situations where using threads will not help. (June 18, 2009)
Deadlock: The Problem and a Solution
This article explains what deadlocks are and describes ways of circumventing deadlocks. (September 18, 2008)
It's (Not) All Been Done
Every decade or so there is a major revolution in the way software is developed. But, unlike the object and web revolutions, the concurrency revolution can be seen coming. (September 01, 2006)
The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software
The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency. (March 01, 2005)