22.5 Sharing Data Between Threads
Safe data sharing is essential in multithreaded code. In Rust, you typically rely on:
- Arc<T>: Atomically reference-counted pointers for shared ownership.
- Mutex<T> or RwLock<T>: Enforcing exclusive or shared mutability.
- Atomics: Lock-free synchronization on single values when appropriate.
22.5.1 Arc<Mutex<T>>
A common pattern is Arc<Mutex<T>>:
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..5 {
        let c = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            for _ in 0..10 {
                let mut guard = c.lock().unwrap();
                *guard += 1;
            }
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final count = {}", *counter.lock().unwrap());
}
Each thread locks the mutex before modifying the counter, and the lock is automatically released when the guard goes out of scope.
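Because the guard follows normal scoping rules, you can also release the lock early by ending a block (or calling drop on the guard) before doing work that does not need the shared state. A minimal sketch of this idea, where the sleep merely stands in for unrelated work:

use std::sync::Mutex;
use std::thread;
use std::time::Duration;

fn main() {
    let counter = Mutex::new(0);

    let value = {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
        *guard
        // The guard is dropped at the end of this block, releasing the lock.
    };

    // The lock is no longer held while we do unrelated work.
    thread::sleep(Duration::from_millis(10));
    println!("Read value {value} without holding the lock");
}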
22.5.2 RwLock<T>
A read-write lock lets multiple threads read simultaneously but allows only one writer at a time:
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let data = Arc::new(RwLock::new(vec![1, 2, 3]));

    let reader = Arc::clone(&data);
    let handle_r = thread::spawn(move || {
        let read_guard = reader.read().unwrap();
        println!("Reader sees: {:?}", *read_guard);
    });

    let writer = Arc::clone(&data);
    let handle_w = thread::spawn(move || {
        let mut write_guard = writer.write().unwrap();
        write_guard.push(4);
        println!("Writer appended 4");
    });

    handle_r.join().unwrap();
    handle_w.join().unwrap();

    println!("Final data: {:?}", data.read().unwrap());
}
For read-heavy scenarios, RwLock can improve performance by letting multiple readers proceed in parallel.
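To make the read-heavy case concrete, the following sketch (the thread count and sleep duration are arbitrary) spawns several readers that hold read locks at the same time; none of them blocks the others:

use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

fn main() {
    let config = Arc::new(RwLock::new(String::from("verbose=false")));
    let mut handles = vec![];

    // Several readers can hold the read lock concurrently.
    for id in 0..4 {
        let cfg = Arc::clone(&config);
        handles.push(thread::spawn(move || {
            let guard = cfg.read().unwrap();
            println!("reader {id} sees: {guard}");
            thread::sleep(Duration::from_millis(50)); // reads overlap in time
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }
}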
22.5.3 Condition Variables
Use condition variables (Condvar) to synchronize on specific events:
use std::sync::{Arc, Mutex, Condvar};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair_clone = Arc::clone(&pair);

    // Thread that waits on a condition
    let waiter = thread::spawn(move || {
        let (lock, cvar) = &*pair_clone;
        let mut started = lock.lock().unwrap();
        while !*started {
            started = cvar.wait(started).unwrap();
        }
        println!("Condition met, proceeding...");
    });

    thread::sleep(std::time::Duration::from_millis(500));

    {
        let (lock, cvar) = &*pair;
        let mut started = lock.lock().unwrap();
        *started = true;
        cvar.notify_one();
    }

    waiter.join().unwrap();
}
Typical usage involves:
- A mutex-protected boolean (or other state).
- A thread calling cvar.wait(guard) to suspend until notified.
- Another thread calling cvar.notify_one() or notify_all() once the condition changes.
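The explicit while loop can also be written with Condvar::wait_while, which re-checks the predicate for you on every wakeup. A minimal sketch of the same flag-based handshake:

use std::sync::{Arc, Mutex, Condvar};
use std::thread;
use std::time::Duration;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair_clone = Arc::clone(&pair);

    let waiter = thread::spawn(move || {
        let (lock, cvar) = &*pair_clone;
        // Blocks as long as the closure returns true (i.e., the flag is still false).
        let _guard = cvar
            .wait_while(lock.lock().unwrap(), |started| !*started)
            .unwrap();
        println!("Flag was set, proceeding...");
    });

    thread::sleep(Duration::from_millis(100));
    let (lock, cvar) = &*pair;
    *lock.lock().unwrap() = true;
    cvar.notify_one();

    waiter.join().unwrap();
}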
22.5.4 Rust’s Atomic Types
For lock-free operations on single values, Rust offers atomic types:
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static GLOBAL_COUNTER: AtomicUsize = AtomicUsize::new(0);

fn main() {
    let mut handles = vec![];

    for _ in 0..5 {
        handles.push(thread::spawn(|| {
            for _ in 0..10 {
                GLOBAL_COUNTER.fetch_add(1, Ordering::Relaxed);
            }
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Global counter: {}", GLOBAL_COUNTER.load(Ordering::SeqCst));
}
You must understand memory ordering to use atomics correctly, but they work similarly to C++'s <atomic> types.
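To make the ordering question concrete, here is a minimal message-passing sketch (the static names DATA and READY are purely illustrative): the producer writes DATA and then sets READY with Release ordering, and a consumer that observes READY as true with an Acquire load is guaranteed to also see the earlier write to DATA.

use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::thread;

static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);     // write the payload
        READY.store(true, Ordering::Release);  // publish it
    });

    let consumer = thread::spawn(|| {
        // Spin until the flag is observed with Acquire ordering.
        while !READY.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
        // The Acquire load synchronizes with the Release store,
        // so the payload written before it is visible here.
        assert_eq!(DATA.load(Ordering::Relaxed), 42);
        println!("Consumer read {}", DATA.load(Ordering::Relaxed));
    });

    producer.join().unwrap();
    consumer.join().unwrap();
}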
22.5.5 Scoped Threads (Rust 1.63+)
Before Rust 1.63, sharing non-'static references with threads typically required reference counting or 'static lifetimes. Scoped threads allow threads that cannot outlive a given scope:
use std::thread;

fn main() {
    let mut numbers = vec![10, 20, 30];
    let mut x = 0;

    thread::scope(|s| {
        s.spawn(|| {
            println!("Numbers are: {:?}", numbers); // Immutable borrow
        });
        s.spawn(|| {
            x += numbers[0]; // Mutably borrows 'x' and reads 'numbers'
        });
        println!("Hello from the main thread in the scope");
    });

    // All scoped threads have finished here.
    numbers.push(40);
    assert_eq!(numbers.len(), 4);
    println!("x = {x}, numbers = {:?}", numbers);
}
Here, the closures borrow data from the parent function, and the compiler ensures all scoped threads finish before thread::scope returns, preventing dangling references.
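Because scoped threads can borrow directly, they work well for fork-join parallelism over borrowed data. A small sketch (the data and chunk size are arbitrary) that sums a vector in parallel without any Arc:

use std::thread;

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    let chunk_size = 250;

    // Each scoped thread borrows a disjoint chunk of `data`; no Arc is needed.
    let total: u64 = thread::scope(|s| {
        let mut handles = Vec::new();
        for chunk in data.chunks(chunk_size) {
            handles.push(s.spawn(move || chunk.iter().sum::<u64>()));
        }
        // Join every chunk's partial sum inside the scope.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });

    assert_eq!(total, 500_500);
    println!("total = {total}");
}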