Note: None of this is implemented.

Concurrency and parallelism can sometimes improve program throughput by:

Most of the heavy lifting for Cone's concurrency and parallelism capability is baked into library packages. Depending on the library packages chosen, a variety of work-distribution and scheduling approaches can be taken (ordered by context-switching cost):

As always, there are trade-offs: sometimes a higher context-switching cost (latency) can be justified by improvements in scalability, fault tolerance, I/O handling, etc.

All of the above models support structured concurrency. Every new process or thread is effectively an owned resource. Like all owned resources, there is a programmatic point in time when owned threads are joined, terminated, and freed according to the logic of the finalizer. Libraries also support splitting m pieces of work across n independent processes or threads.
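Since Cone's own syntax for this is not yet implemented, here is a rough analogue in Rust: scoped threads are owned by the enclosing scope and are joined automatically when it ends (structured concurrency), and m pieces of work are split across n worker threads. The names `work`, `n`, and `chunk` are illustrative, not part of any Cone or library API.

```rust
use std::thread;

fn main() {
    let work: Vec<u64> = (1..=8).collect(); // m = 8 pieces of work
    let n = 2; // n worker threads
    let chunk = (work.len() + n - 1) / n; // ceiling division

    // Threads spawned inside the scope are owned by it: they are
    // joined automatically when the scope ends, so no spawned thread
    // outlives this block (structured concurrency).
    let total: u64 = thread::scope(|s| {
        let handles: Vec<_> = work
            .chunks(chunk)
            .map(|slice| s.spawn(move || slice.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });

    println!("{total}"); // 36
}
```

The key property mirrored here is the "programmatic point in time" for cleanup: leaving the scope is exactly when every owned thread is joined and freed.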

Concurrency (and parallelism) introduces safety risks when implemented poorly:

Subsequent pages highlight Cone language features (e.g., permissions and queued methods) that can be used to make synchronized inter-thread communication easier and safer.
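Cone's queued methods are not shown here, but the underlying idea, serializing all access to shared state by sending messages to a single owner, can be sketched with a Rust channel. Everything in this example (`tx`, `rx`, the counter) is illustrative, not a Cone API.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Only the receiving thread ever touches the counter, so no lock
    // is needed: messages are processed one at a time, a rough
    // analogue of invoking a queued method on an owning thread.
    let (tx, rx) = mpsc::channel::<u64>();

    let worker = thread::spawn(move || {
        let mut count = 0u64;
        for msg in rx {
            count += msg;
        }
        count
    });

    for i in 1..=4 {
        tx.send(i).unwrap();
    }
    drop(tx); // closing the channel lets the worker finish

    let total = worker.join().unwrap();
    println!("{total}"); // 10
}
```

The design point is the same one the subsequent pages develop: making the synchronization mechanism part of the interface, rather than leaving each caller to lock correctly.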