There has been a lot of discussion about the different concurrency models provided in languages like Java (shared-state) vs those provided in languages like Scala and Erlang (message-passing). The latter languages use an actor-based model that relies on setting up independent actors and (reliably) passing messages between them, rather than coordinating access to shared state. The actors run as lightweight thread-like entities and are expected to exist in large numbers (far more than the typical number of threads in a Java program).
The advantage of the message-passing model is that it is harder to run into many of the problems common in shared-state models. Each piece of code is independent and deterministic, which makes it easier to test. The pieces fit together in ways that are easy to model and reason about (well, easier than in shared-state systems, anyway).
Scala makes it easy to use actors and message-passing on top of the JVM. This gives us interesting new possibilities for using message-passing paradigms with the stability of the JVM. It may also let us do that a bit under the radar, as most enterprise ops centers know how to run a JVM but might not be as comfortable running something like Erlang.
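To make the idea concrete, here's a minimal sketch of the message-passing style in plain Scala. This is not the Scala actors library's API itself, just the core pattern it embodies: the "actor" (a hypothetical `CounterActor`) owns its state and a mailbox, and the only way other code can affect that state is by sending it a message.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Messages are immutable values; the reply channel travels inside the message.
sealed trait Msg
case class Increment(n: Int) extends Msg
case class Get(reply: LinkedBlockingQueue[Int]) extends Msg

class CounterActor extends Thread {
  private val mailbox = new LinkedBlockingQueue[Msg]()

  // "Send": enqueue a message; never touch the actor's state directly.
  def !(m: Msg): Unit = mailbox.put(m)

  override def run(): Unit = {
    var count = 0 // state confined to this thread: no locks, no data races
    while (true) {
      mailbox.take() match {
        case Increment(n) => count += n
        case Get(reply)   => reply.put(count)
      }
    }
  }
}

val counter = new CounterActor
counter.setDaemon(true)
counter.start()

counter ! Increment(5)
counter ! Increment(2)

val replyQ = new LinkedBlockingQueue[Int]()
counter ! Get(replyQ)
println(replyQ.take()) // prints 7
```

Because messages are processed one at a time from a single mailbox, the counter's logic is sequential and deterministic even though many senders may be running concurrently, which is exactly the testability win described above. A real actor library replaces the thread-per-actor here with lightweight scheduled entities, so you can have far more actors than threads.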
So, this lets us build interesting new systems based on message-passing, but how do we scale those systems out once they outgrow a single JVM instance? Today mad scientist Jonas Bonér announced support for clustering Scala actors using Terracotta.
Terracotta is typically used to cluster Java applications by sharing heap and lock state across nodes. It does this through runtime bytecode manipulation, and the key machinery targets the JVM itself, not the Java language. Since Scala compiles to JVM bytecode, it was possible to extend the Terracotta infrastructure to handle Scala actors as well.
If you’re interested in Scala or Erlang actor-based concurrency, give the Terracotta integration a try and build a distributed version as well!
[ Note: I work at Terracotta, so I am entirely biased. But c’mon tell me this isn’t cool! :) ]