22.12 Summary
Rust's concurrency and parallel processing features enable developers to write code that is both efficient and safe; several of the points below are illustrated by short code sketches after the list:
- **Threads**
  - Each Rust thread corresponds to a native OS thread, suitable for CPU-bound tasks and simpler concurrency.
  - Scoped threads (`std::thread::scope`) let you borrow data from the parent stack safely, without requiring `'static` lifetimes.
- **Async**
  - Uses cooperative scheduling, making it ideal for I/O-bound tasks.
  - Tasks must periodically yield (`.await`) so other tasks can run.
- **Shared Data**
  - `Arc<Mutex<T>>` for shared mutable data; the mutex ensures exclusive access.
  - `RwLock<T>` for multiple readers or a single writer.
  - `Condvar` to manage waiting/notification patterns.
  - Atomic types (such as `AtomicUsize`) for lock-free concurrency.
- **Channels**
  - `mpsc::channel()` for message passing between threads.
  - `recv()` (blocking) vs. `try_recv()` (non-blocking).
  - Use two channels if you need bidirectional communication.
- **Rayon**
  - Automatically parallelizes operations on collections using a thread pool.
  - Especially suitable for CPU-bound tasks on large datasets.
- **SIMD**
  - Exploits vector instructions on the CPU to process multiple data items simultaneously.
  - Rust can auto-vectorize, or you can use libraries such as `std::simd`.
- **Send and Sync**
  - Determine whether a type can be moved (`Send`) or shared (`Sync`) safely between threads.
  - Checked by Rust's compiler to prevent data races and undefined behavior.
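To make these points concrete, here is a minimal sketch of scoped threads: the closures borrow a local `Vec` directly, with no `'static` bound or `Arc` required. The data and thread bodies are placeholders.

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4];

    // The scope guarantees both threads finish before `data` is dropped,
    // so the closures may borrow it without a `'static` lifetime.
    thread::scope(|s| {
        s.spawn(|| println!("first half: {:?}", &data[..2]));
        s.spawn(|| println!("second half: {:?}", &data[2..]));
    });
}
```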
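For async, a minimal sketch assuming the external `tokio` crate (not part of the standard library): each `.await` is a point where the task yields so the runtime can schedule other tasks.

```rust
// Assumes `tokio` as a dependency with the "full" feature enabled in Cargo.toml.
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Two tasks interleave cooperatively; sleeping does not block an OS thread.
    let a = tokio::spawn(async {
        tokio::time::sleep(Duration::from_millis(10)).await;
        println!("task a done");
    });
    let b = tokio::spawn(async {
        tokio::time::sleep(Duration::from_millis(5)).await;
        println!("task b done");
    });

    a.await.unwrap();
    b.await.unwrap();
}
```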
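For shared data, a sketch contrasting a counter behind `Arc<Mutex<T>>` with a lock-free `AtomicUsize`; the counter itself is just an illustrative workload.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Exclusive access through a mutex, shared across threads via Arc.
    let counter = Arc::new(Mutex::new(0u64));
    // Lock-free alternative for simple integer updates.
    let hits = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            let hits = Arc::clone(&hits);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;        // exclusive access
                hits.fetch_add(1, Ordering::Relaxed); // no lock needed
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("mutex counter: {}", *counter.lock().unwrap());
    println!("atomic counter: {}", hits.load(Ordering::Relaxed));
}
```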
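For channels, a sketch of one-way message passing with `mpsc::channel()`; the blocking `recv()` loop ends once the sender is dropped. The message values are placeholders.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // The producer sends a few messages, then drops `tx`, closing the channel.
    thread::spawn(move || {
        for n in 0..3 {
            tx.send(n).unwrap();
        }
    });

    // `recv()` blocks until a message arrives or the channel closes;
    // `try_recv()` would instead return immediately when nothing is queued.
    while let Ok(n) = rx.recv() {
        println!("got {n}");
    }
}
```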
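For Rayon, a sketch assuming the external `rayon` crate is added as a dependency; swapping `iter()` for `par_iter()` spreads the work over Rayon's thread pool.

```rust
// Assumes `rayon = "1"` in Cargo.toml.
use rayon::prelude::*;

fn main() {
    let numbers: Vec<u64> = (1..=1_000_000).collect();

    // Same shape as `numbers.iter().map(..).sum()`, but run on a thread pool.
    let sum_of_squares: u64 = numbers.par_iter().map(|n| n * n).sum();

    println!("sum of squares: {sum_of_squares}");
}
```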
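For SIMD, a sketch of a loop the optimizer can often auto-vectorize in release builds; whether vector instructions are actually emitted depends on the target CPU and optimization settings, while `std::simd` offers explicit vector types instead.

```rust
// With optimizations enabled (e.g. `cargo build --release`), the compiler
// can often turn this loop into SIMD instructions automatically, because
// the index is provably in range for all three slices.
fn add_arrays(a: &[f32], b: &[f32], out: &mut [f32]) {
    let len = a.len().min(b.len()).min(out.len());
    for i in 0..len {
        out[i] = a[i] + b[i];
    }
}

fn main() {
    let a = vec![1.0f32; 8];
    let b = vec![2.0f32; 8];
    let mut out = vec![0.0f32; 8];

    add_arrays(&a, &b, &mut out);
    println!("{out:?}");
}
```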
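For `Send` and `Sync`, a sketch using a common compile-time assertion pattern (the helper functions are illustrative, not a standard API): the calls only compile for types the compiler considers thread-safe.

```rust
use std::sync::Arc;

// Helper functions that only accept types implementing the marker trait.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    // Arc<i32> may be moved to (`Send`) and shared between (`Sync`) threads.
    assert_send::<Arc<i32>>();
    assert_sync::<Arc<i32>>();

    // A non-thread-safe type such as std::rc::Rc<i32> fails both checks;
    // uncommenting the next line is rejected at compile time.
    // assert_send::<std::rc::Rc<i32>>();
}
```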
Be mindful that context switches, locks, and thread management involve costs. Always measure performance, consider the task size, and determine whether it is I/O-bound or CPU-bound. By benchmarking and profiling, you can select the best concurrency approach for your application.
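As a minimal starting point for such measurements, `std::time::Instant` can time a candidate approach before reaching for a dedicated benchmarking or profiling tool; the workload below is a placeholder.

```rust
use std::time::Instant;

fn main() {
    let start = Instant::now();

    // Placeholder workload; substitute the code path you want to compare.
    let total: u64 = (0..10_000_000u64).sum();

    println!("total = {total}, elapsed = {:?}", start.elapsed());
}
```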