Note: None of this is implemented.
Concurrency and parallelism can sometimes improve program throughput by:
- Distributing work across multiple CPUs (many hands make light work).
- Keeping the CPU(s) busy with other work, rather than pausing all work while waiting on i/o to complete.
Most of the heavy lifting for Cone's concurrency and parallelism capability is baked into library packages. Depending on the library packages chosen, many work distribution and scheduling approaches can be taken (ordered by context switching cost):
- Cooperative concurrency (generators or coroutines), where context switching is explicitly baked into the logic.
- Green (m:n) threads, where context switching is cooperatively handled by the runtime's work-stealing scheduler, rather than the OS.
- OS threads, where context-switching is handled by the OS scheduler. Threads share memory and resources.
- OS processes, where context-switching is handled by the OS. Processes are isolated, largely communicating via pipes.
- Distributed computing, where work is distributed across different machines, communicating via network sockets.
As always there are trade-offs: sometimes a higher context-switching cost (latency) can be justified by improvements in scalability, fault tolerance, i/o handling, etc.
All the above models support structured concurrency. Every new process or thread is effectively an owned resource. Like all owned resources, there is a programmatic point-in-time when owned threads are joined, terminated, and freed according to the logic of the finalizer. Libraries also support splitting m pieces of work across n independent threads or processes.
Concurrency (and parallelism) introduce safety risks when implemented poorly:
- Race conditions, where a system's behavior varies inconsistently, depending on the sequence or timing of independent operations.
- Deadlocks, where forward progress has stalled out because independent operations have unresolved contention over the temporary ownership of multiple resources.
- Livelocks, where forward progress has stalled out even though the operations remain active, because they keep reacting to each other's state changes without any of them completing.
Subsequent pages highlight Cone language features (e.g., permissions and queued methods) that can be used to make synchronized, inter-thread communications easier and safer.