A thread is a context of execution within a program. Multithreaded programming deals with designing a program to have parts of it execute concurrently.
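For example, in standard C++ each std::thread object represents one such context of execution (a minimal sketch; any mainstream threading API is similar):

```cpp
#include <iostream>
#include <thread>

void worker(int id) {
    // Each call runs in its own thread; output from the two workers
    // may interleave, since they execute concurrently.
    std::cout << "worker " << id << " running\n";
}

int main() {
    std::thread t1(worker, 1);  // start a second context of execution
    std::thread t2(worker, 2);  // and a third, sharing one address space
    t1.join();                  // wait for both before the program exits
    t2.join();
}
```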
A site devoted to lock-free algorithms, scalable architecture, multicore design patterns, parallel computations, threading libraries, tooling support and related topics.
Describes lock-free data sharing, otherwise known as "wait-free data sharing" as an alternative to the use of locks.
Critical sections are the One True Tool for guaranteeing mutual exclusion on shared variables. Like most tools, these must be applied consistently, and with the intended meanings.
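For illustration (standard C++ primitives, not the article's own code): "applied consistently" means every access to the shared variable, reads included, goes through the same lock:

```cpp
#include <mutex>
#include <thread>
#include <vector>

std::mutex counter_mutex;  // the one lock that guards `counter`
long counter = 0;          // shared variable: touch only while holding the lock

void increment_many(int n) {
    for (int i = 0; i < n; ++i) {
        std::lock_guard<std::mutex> guard(counter_mutex);  // enter critical section
        ++counter;                                         // exclusive access
    }                                                      // lock released here
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t) threads.emplace_back(increment_many, 100000);
    for (auto& t : threads) t.join();
    // counter is exactly 400000 because every writer used the same mutex
}
```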
Explains where to start when trying to add concurrency to a mass of existing code.
Part of the Computer Science Bibliography Collection.
Breaking up is hard to do, but interleaving can be even subtler.
Frequently asked questions (by Bryan O'Sullivan).
Explains fundamental concepts for moving from a linear to a parallel programming model.
Most uses of synchronization code in multi-threaded applications fall into a small number of high-level "usage patterns", or what can be called generic synchronization policies (GSPs). This paper illustrates how the use of such GSPs simplifies the writing of thread-safe classes. In addition, this paper presents a C++ class library that implements commonly-used GSPs.
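The paper's library is not reproduced here, but the flavor of one such policy, "serialize all access", can be sketched as a standard C++ wrapper (names are illustrative, not the paper's API):

```cpp
#include <mutex>
#include <utility>
#include <vector>

// Illustrative policy: every access to the wrapped object happens under
// the object's own mutex, so the wrapped class needs no locking code.
template <typename T>
class Synchronized {
public:
    template <typename F>
    auto with_lock(F f) -> decltype(f(std::declval<T&>())) {
        std::lock_guard<std::mutex> guard(mutex_);
        return f(value_);
    }
private:
    std::mutex mutex_;
    T value_{};
};

int main() {
    Synchronized<std::vector<int>> v;
    // The caller never names the lock; the policy applies it consistently.
    v.with_lock([](std::vector<int>& vec) { vec.push_back(42); });
}
```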
Gives an introduction to priority inversion and shows a pair of techniques to avoid it.
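One classic remedy, priority inheritance, is directly available on POSIX systems; a minimal sketch (assuming the platform supports PTHREAD_PRIO_INHERIT):

```cpp
#include <pthread.h>

// A mutex with this protocol temporarily boosts a low-priority holder to
// the priority of the highest-priority waiter, preventing the classic
// inversion scenario where a medium-priority thread starves the holder.
void init_priority_inheriting_mutex(pthread_mutex_t* m) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
}
```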
It isn't just languages that have poor support for thread-local storage; operating systems do, too.
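At the language level, modern C++ does provide a portable facility, the thread_local keyword; a quick sketch of per-thread state:

```cpp
#include <iostream>
#include <thread>

thread_local int calls = 0;  // every thread gets its own independent copy

void work() {
    ++calls;  // increments only this thread's counter
    std::cout << "this thread has made " << calls << " call(s)\n";
}

int main() {
    std::thread a(work), b(work);
    a.join();
    b.join();
    work();  // main's copy was still 0 before this call
}
```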
Presents a solution to races and deadlocks based on a well-known deadlock-avoidance protocol and shows how it can be enforced by the compiler. It can be applied to programs in which the number of locks is fixed and known up front.
Writing lock-free code can confound anyone, even expert programmers, as Herb shows in this article.
Interprocess communication is an essential component of modern software engineering. Often, lock-free IPC is accomplished via special processor commands. This article proposes a communication type that requires only the atomic writing of a processor word from the processor cache into main memory, and the atomic reading of a processor word from main memory into a processor register or the processor cache.
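The gist, communication built only on atomic loads and stores of a machine word, can be sketched as a one-slot, single-writer/single-reader mailbox in standard C++ (illustrative, not the article's exact protocol):

```cpp
#include <atomic>
#include <cstdint>

// One writer publishes a word, one reader consumes it. Only atomic word
// reads and writes are used: no locks, no special read-modify-write commands.
class WordMailbox {
public:
    bool try_post(std::uint64_t value) {          // writer side
        if (full_.load(std::memory_order_acquire))
            return false;                         // previous word not yet read
        payload_.store(value, std::memory_order_relaxed);
        full_.store(true, std::memory_order_release);  // publish after payload
        return true;
    }
    bool try_take(std::uint64_t& out) {           // reader side
        if (!full_.load(std::memory_order_acquire))
            return false;                         // acquire pairs with release
        out = payload_.load(std::memory_order_relaxed);
        full_.store(false, std::memory_order_release);
        return true;
    }
private:
    std::atomic<std::uint64_t> payload_{0};
    std::atomic<bool> full_{false};
};
```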
Explains that deadlock can happen whenever there is a blocking (or waiting) cycle among concurrent tasks.
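The canonical two-lock cycle, and the standard C++17 way to break it by acquiring both locks as one step, looks like this (a minimal sketch):

```cpp
#include <mutex>
#include <thread>

std::mutex a, b;

// Deadlock-prone: thread 1 holds `a` and waits for `b`, while thread 2
// holds `b` and waits for `a`: a blocking cycle.
//   thread 1: lock(a); lock(b);
//   thread 2: lock(b); lock(a);

// Cycle-free: std::scoped_lock acquires the whole set deadlock-free,
// so the argument order no longer matters.
void t1() { std::scoped_lock lock(a, b); /* use both resources */ }
void t2() { std::scoped_lock lock(b, a); /* same locks, opposite order */ }

int main() {
    std::thread x(t1), y(t2);
    x.join();
    y.join();
}
```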
Explains why, in the concurrent world, locality is a first-order issue that trumps most other performance considerations. Locality is no longer just about fitting well into cache and RAM, but also about avoiding scalability busters by keeping tightly coupled data physically close together and separately used data far, far apart.
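A concrete instance of keeping separately used data far apart is avoiding false sharing: per-thread counters that land on the same cache line ping-pong between cores. A minimal sketch, assuming 64-byte cache lines:

```cpp
#include <atomic>
#include <cstddef>

constexpr std::size_t kCacheLine = 64;  // assumption: 64-byte lines

// Bad: adjacent counters share a cache line, so every increment by one
// core invalidates the line in all the other cores' caches:
//   std::atomic<long> counters[4];

// Better: pad each counter to its own line so threads that never share
// data never contend on the same line either.
struct alignas(kCacheLine) PaddedCounter {
    std::atomic<long> value{0};
};
PaddedCounter counters[4];  // one per worker thread
```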
Shows several ways to write a fast, internally synchronized queue, one that callers can use without any explicit external locking or other synchronization, and compares their performance.
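The baseline variant, a mutex-and-condition-variable queue that callers use with no external locking, looks roughly like this (a sketch, not the article's fastest version):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class SyncQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(item));   // all state changes under the lock
        }
        cv_.notify_one();               // wake one waiting consumer
    }
    T pop() {                           // blocks until an item is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};
```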
Describes a number of general purpose debugging techniques for multi-threaded applications.
So far, multithreaded file I/O is an under-researched field. Although it is simple to measure, there is not much common knowledge about it. The measurements presented here show that multithreading can improve the performance of file access directly, as well as indirectly by utilizing available cores to process the data read.
This article makes the case that a consistent mental model is needed to talk about concurrency.
Very lightweight stackless threads that provide linear code execution for event-driven systems; designed to use little memory; the library is pure C, with no platform-specific assembly; usable with or without an OS. Open source, BSD-style license.
Describes some key principles that will help in mastering the "black art" of writing multithreaded code.
Higher order threads for C++; tutorial and reference manual.
Sharing requires waiting and overhead, and is a natural enemy of scalability. This article focuses on one important case, namely mutable (writable) shared objects in memory, which are an inherent bottleneck to scalability on multicore systems.
Focuses on the implications of concurrency for software and its consequences for both programming languages and programmers. (Herb Sutter and James Larus)
Small application library for writing fast, highly scalable Internet programs on Unix-like platforms. Open source, MPL or GPL.
Compares Windows NT and Solaris on a symmetric multiprocessor machine.
Explains how to accurately analyze the real performance of parallel code and lists some basic considerations and common costs.
Explains how to use lock hierarchies to avoid deadlock by assigning each shared resource a level that corresponds to its architectural layer.
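The rule can also be checked at runtime with a mutex wrapper that tracks, per thread, the level of the most recently acquired lock and refuses to lock "upward" (an illustrative sketch; the level numbers would come from the architectural layering):

```cpp
#include <cassert>
#include <mutex>

// A thread may only acquire a mutex whose level is strictly below the one
// it already holds, so a cross-layer waiting cycle can never form.
class LeveledMutex {
public:
    explicit LeveledMutex(int level) : level_(level) {}
    void lock() {
        assert(level_ < current_level() && "lock hierarchy violation");
        m_.lock();
        previous_ = current_level();  // safe: only the holder touches this
        current_level() = level_;
    }
    void unlock() {
        current_level() = previous_;
        m_.unlock();
    }
private:
    static int& current_level() {
        thread_local int level = 1 << 30;  // sentinel: no lock held yet
        return level;
    }
    std::mutex m_;
    int level_;
    int previous_ = 0;
};

// LeveledMutex io_mutex(1000);   // lower architectural layer
// LeveledMutex app_mutex(2000);  // higher layer: must be locked first
```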
A thread pool hides a lot of details, but to use one effectively some awareness of what a pool does under the covers is needed, to avoid inadvertently hitting performance and correctness pitfalls.
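For context, the skeleton such a pool hides, a fixed set of workers draining one shared task queue, fits in a few lines (a teaching sketch; production pools handle much more):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // run outside the lock: a blocking task must not hold it
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    std::vector<std::thread> workers_;
    bool done_ = false;
};
```

One pitfall is visible even in the sketch: a task that blocks forever occupies a worker forever, which is why pool-unfriendly work (long blocking I/O, or tasks that wait on other pooled tasks) can starve or deadlock the pool.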
Motivates and illustrates best practices for using threads: techniques that will make concurrent code easier to write correctly and to reason about with confidence.
Discusses the usage of the volatile keyword in multithreaded C++ programs.
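Whatever position one takes in that discussion, the load-bearing fact in modern C++ is that volatile provides neither atomicity nor inter-thread ordering; std::atomic provides both. A minimal sketch:

```cpp
#include <atomic>

// As a thread flag, `volatile bool ready;` would be a data race: volatile
// suppresses some compiler optimizations but gives no atomicity and no
// happens-before ordering. std::atomic gives both.
std::atomic<bool> ready{false};
int payload = 0;

void producer() {
    payload = 42;                                  // ordinary write...
    ready.store(true, std::memory_order_release);  // ...published by the release
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {
        // spin: acquire pairs with the producer's release
    }
    // payload is guaranteed to be 42 here
}
```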
The Boost.Thread library, which enables the use of multiple threads of execution with shared data in portable C++ code, has undergone some major changes.
Explores lock-free code by focusing on creating a lock-free queue.
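A common starting point for such a queue is the single-producer, single-consumer ring buffer, which needs nothing stronger than atomic index updates (a sketch under that SPSC assumption, not the article's code):

```cpp
#include <atomic>
#include <cstddef>

// The producer owns `head_`, the consumer owns `tail_`; each only reads
// the other's index, so no lock is ever needed.
template <typename T, std::size_t N>
class SpscQueue {
public:
    bool try_push(const T& item) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                              // queue full
        buffer_[head] = item;                          // fill the slot first
        head_.store(next, std::memory_order_release);  // then publish it
        return true;
    }
    bool try_pop(T& out) {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                              // queue empty
        out = buffer_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return true;
    }
private:
    T buffer_[N];
    std::atomic<std::size_t> head_{0}, tail_{0};
};
```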
Herb Sutter looks at how mainstream hardware is becoming permanently parallel, heterogeneous, and distributed. (December 29, 2011)
Andrei Alexandrescu explains recent hardware changes allowing concurrency and how the D programming language addresses these possibilities. (July 06, 2010)
Explains that it's important to separate "what" from "how" when designing concurrent APIs. (January 15, 2010)
What's good for the function and the object is also good for the thread, the task, and the lock. (November 11, 2009)
Looks at how lock-free programming avoids system failure by tolerating individual process failures. (August 26, 2009)
Explores effective uses of threads by looking at a multi-threaded implementation of the QuickSort algorithm and reports on situations where using threads will not help. (June 18, 2009)
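The shape of that experiment, recursing in parallel only while a subrange is large enough to pay for a thread, can be sketched with std::async (illustrative; the cutoff is a tuning assumption, not a value from the article):

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

constexpr std::size_t kCutoff = 10000;  // assumption: tune per machine

// Three-way-partition quicksort that hands one half to a new task while
// the range is large; below the cutoff, thread overhead exceeds the work.
void quicksort(std::vector<int>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo < 2) return;
    int pivot = v[lo + (hi - lo) / 2];
    auto first = v.begin() + lo, last = v.begin() + hi;
    auto mid1 = std::partition(first, last, [pivot](int x) { return x < pivot; });
    auto mid2 = std::partition(mid1, last, [pivot](int x) { return x == pivot; });
    std::size_t m1 = mid1 - v.begin(), m2 = mid2 - v.begin();
    if (hi - lo > kCutoff) {
        auto left = std::async(std::launch::async, quicksort,
                               std::ref(v), lo, m1);  // one half in a new task
        quicksort(v, m2, hi);  // this thread keeps the other half
        left.get();
    } else {                   // small range: a thread would only add cost
        quicksort(v, lo, m1);
        quicksort(v, m2, hi);
    }
}
```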
Every decade or so there is a major revolution in the way software is developed. But, unlike the object and web revolutions, the concurrency revolution can be seen coming. (September 01, 2006)
The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency. (March 01, 2005)