SimGrid 3.16: Versatile Simulation of Distributed Systems
SimGrid is a discrete event simulator of distributed systems: it does not simulate the world by small fixed-size steps but determines the date of the next event (such as the end of a communication or the end of a computation) and jumps to this date.
A number of actors executing user-provided code run on top of the simulation kernel. The interactions between these actors and the simulation kernel are very similar to the ones between the system processes and the Operating System (except that the actors and simulation kernel share the same address space in a single OS process).
When an actor needs to interact with the outer world (e.g. to start a communication), it issues a simcall (simulation call), just like a system process issues a syscall to interact with its environment through the Operating System. Any simcall freezes the actor until it is woken up by the simulation kernel (e.g. when the communication is finished).
Mimicking the OS behavior may seem over-engineered here, but it is mandatory for the model checker. The simcalls, representing the actors' actions, are the transitions of the formal system, and verifying the system requires manipulating these transitions explicitly. This also allows the actors to run safely in parallel, even though this is less commonly used by our users.
So, the key ideas here are:

* the simulation kernel is a discrete event simulator which jumps from one event date to the next;
* the actors run user-provided code on top of this kernel and interact with the rest of the simulated world only through simcalls;
* a simcall freezes the calling actor until the simulation kernel wakes it up.

This leads to some very important consequences:

* An actor must not block with OS-level primitives such as `pthread_mutex_lock()` or `std::mutex`: the simulation kernel would wait for the actor to issue a simcall and would deadlock. Instead it must use simulation-level synchronization primitives (such as `simcall_mutex_lock()`).
* Similarly, an actor must not sleep with `std::this_thread::sleep_for()`, which waits in the real world; it must instead wait in the simulation with `simgrid::s4u::Actor::this_actor::sleep_for()`, which waits in simulated time.

Futures are a nice, classical programming abstraction, present in many languages. Wikipedia defines a future as an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is yet incomplete. This concept is thus perfectly suited to represent, in the kernel, the asynchronous operations corresponding to the actors' simcalls.
Futures can be manipulated using two kinds of APIs:

* a blocking API, where the consumer blocks until the result is available and then reads it (`res = f.get()`);
* a continuation-based API, where the consumer registers a callback to be executed once the result becomes available (`future.then(something_to_do_with_the_result)`). This is heavily used in ECMAScript, which exhibits the same kind of never-blocking asynchronous model as our discrete event simulator.

C++11 includes a generic class (`std::future<T>`) which implements a blocking API. The continuation-based API is not available in the standard (yet) but is already described in the Concurrency Technical Specification.
`Promise`s are the counterparts of `Future`s: `std::future<T>` is used by the consumer of the result. On the other hand, `std::promise<T>` is used by the producer of the result. The producer calls `promise.set_value(42)` or `promise.set_exception(e)` in order to set the result, which will be made available to the consumer by `future.get()`.
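As a quick illustration of this standard producer/consumer split (plain C++11, nothing SimGrid-specific here):

```cpp
#include <future>
#include <iostream>
#include <thread>

int main()
{
  std::promise<int> promise;                        // producer side
  std::future<int> future = promise.get_future();   // consumer side

  std::thread producer([&promise] {
    promise.set_value(42);                          // publish the result
  });

  std::cout << future.get() << '\n';                // blocks until the result is set
  producer.join();
}
```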
The blocking API provided by the standard C++11 futures does not suit our needs since the simulation kernel cannot block, and since we want to explicitly schedule the actors. Instead, we need to reimplement a continuation-based API to be used in our event-driven simulation kernel.
Our futures are based on the C++ Concurrency Technical Specification API, with a few differences:

* since the simulation kernel cannot block, `f.wait()` is not meaningful in this context;
* `future.get()` does an implicit wait. Calling this method in the simulation kernel only makes sense if the future is already ready; if the future is not ready, this would deadlock the simulator and an error is raised instead;
* the continuations are not executed immediately when they are set or when the future becomes ready (i.e. not within the `future.then()` or `promise.set_value()` calls). That way, we don't have to fear problems like invariants not being restored when the callbacks are called :fearful: or stack overflows triggered by deeply nested continuation chains :cold_sweat:. The continuations are all called in a nice and predictable place in the simulator, with a nice and predictable state :relieved:.

The `simgrid::kernel::Future` and `simgrid::kernel::Promise` use a shared state defined as follows:
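The actual class is not reproduced here, but a rough single-threaded sketch of such a shared state could look like this (names and details are illustrative, not the real SimGrid source):

```cpp
#include <functional>
#include <memory>
#include <stdexcept>
#include <utility>
#include <boost/optional.hpp>

enum class FutureStatus { not_ready, ready, done };

// No locking: everything happens sequentially inside the simulation kernel.
template <class T>
class FutureState {
public:
  bool is_ready() const { return status_ == FutureStatus::ready; }

  void set_value(T value)
  {
    value_ = std::move(value);
    status_ = FutureStatus::ready;
    run_continuation();
  }

  void set_exception(std::exception_ptr e)
  {
    exception_ = std::move(e);
    status_ = FutureStatus::ready;
    run_continuation();
  }

  void set_continuation(std::function<void()> continuation)
  {
    continuation_ = std::move(continuation);
    if (is_ready())
      run_continuation();
  }

  T get()
  {
    if (!is_ready())
      throw std::logic_error("Future is not ready");
    status_ = FutureStatus::done;
    if (exception_)
      std::rethrow_exception(exception_);
    return std::move(*value_);
  }

private:
  void run_continuation()
  {
    // In the real kernel the continuation is not called right here: it is
    // scheduled and executed later, at a well-defined point of the main loop.
    if (continuation_) {
      std::function<void()> k = std::move(continuation_);
      continuation_ = nullptr;
      k();
    }
  }

  FutureStatus status_ = FutureStatus::not_ready;
  boost::optional<T> value_;
  std::exception_ptr exception_ = nullptr;
  std::function<void()> continuation_;
};
```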
Both `Future` and `Promise` have a reference to the shared state:
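Still as an illustrative sketch, both classes can simply share ownership of the same `FutureState<T>`:

```cpp
template <class T>
class Future {
public:
  Future() = default;
  explicit Future(std::shared_ptr<FutureState<T>> state) : state_(std::move(state)) {}

  bool valid() const { return state_ != nullptr; }
  bool is_ready() const { return state_ && state_->is_ready(); }

  T get();                                         // shown below
  template <class F>
  auto then(F continuation)
      -> Future<decltype(continuation(std::declval<Future<T>>()))>;  // shown below
  void then_(std::function<void()> continuation);  // shown below

private:
  std::shared_ptr<FutureState<T>> state_;
};

template <class T>
class Promise {
public:
  Promise() : state_(std::make_shared<FutureState<T>>()) {}

  Future<T> get_future() { return Future<T>(state_); }
  void set_value(T value) { state_->set_value(std::move(value)); }
  void set_exception(std::exception_ptr e) { state_->set_exception(std::move(e)); }

private:
  std::shared_ptr<FutureState<T>> state_;
};
```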
The crux of `future.then()` is:
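In sketch form, it registers a continuation on the current shared state which feeds the result of the user callback (or the exception it throws) into the promise of a freshly created future:

```cpp
template <class T>
template <class F>
auto Future<T>::then(F continuation)
    -> Future<decltype(continuation(std::declval<Future<T>>()))>
{
  typedef decltype(continuation(std::declval<Future<T>>())) R;
  auto promise = std::make_shared<Promise<R>>();
  Future<R> chained = promise->get_future();
  std::shared_ptr<FutureState<T>> state = state_;

  // When this future becomes ready, call the user continuation with it and
  // forward its result (or exception) into the chained future.
  // (A void-returning continuation would need a small specialization, omitted.)
  state_->set_continuation([promise, state, continuation] {
    try {
      promise->set_value(continuation(Future<T>(state)));
    } catch (...) {
      promise->set_exception(std::current_exception());
    }
  });
  return chained;
}
```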
We added a (much simpler) `future.then_()` method which does not create a new future:
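Sketched the same way, it merely hands the callback over to the shared state, without creating any new promise or future:

```cpp
template <class T>
void Future<T>::then_(std::function<void()> continuation)
{
  state_->set_continuation(std::move(continuation));
}
```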
The `.get()` method delegates to the shared state. As we mentioned previously, an error is raised if the future is not ready:
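Continuing the sketch:

```cpp
template <class T>
T Future<T>::get()
{
  if (!is_ready())
    throw std::logic_error("This future is not ready and the simulation kernel cannot block");
  return state_->get();   // returns the value or rethrows the stored exception
}
```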
So a simcall is a way for the actor to push a request to the simulation kernel and yield control until the request is fulfilled. The performance requirements are very high because the actors usually issue an inordinate number of simcalls during the simulation.

As with real syscalls, the basic idea is to write the wanted call and its arguments in a memory area that is specific to the actor, and to yield control to the simulation kernel. Once in kernel mode, the simcalls of each requesting actor are evaluated sequentially in a strictly reproducible order, which makes the whole simulation reproducible.
In the very first implementation, everything was written by hand and highly optimized, making our software very hard to maintain and evolve. We decided to sacrifice some performance for maintainability. In a second attempt (which is still in use in SimGrid v3.13), a lot of boilerplate code is generated by a Python script, taking the list of simcalls as input. It looks like this:
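The exact syntax of that list is not reproduced here; roughly, each entry declares one simcall with its result type, its arguments and whether it blocks the calling actor, along these (illustrative, possibly inexact) lines:

```
void process_suspend(smx_actor_t process) [[block]];
int  process_sleep(double duration) [[block]];
void mutex_lock(smx_mutex_t mutex) [[block]];
int  mutex_trylock(smx_mutex_t mutex);
```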
At runtime, a simcall is represented by a structure containing a simcall number and its arguments (among some other things):
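Schematically (the field names below are approximate, not a verbatim copy of the SimGrid headers):

```cpp
struct s_smx_simcall {
  e_smx_simcall_t call;        // which simcall (generated enum)
  smx_actor_t issuer;          // actor which issued the simcall
  union u_smx_scalar args[11]; // marshaled arguments
  union u_smx_scalar result;   // marshaled result
  // ... a few other bookkeeping fields ...
};
```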
with a scalar union type:
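Roughly (again an approximation of the real header), it is a big union of every scalar type a simcall argument or result may have:

```cpp
union u_smx_scalar {
  char           c;
  short          s;
  int            i;
  long           l;
  unsigned int   ui;
  unsigned long  ul;
  float          f;
  double         d;
  size_t         sz;
  void*          dp;    // data pointer
  const void*    cp;    // const data pointer
  void         (*fp)(); // function pointer
};
```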
When manually calling the relevant Python script, this generates a bunch of C++ files:
* an enum of all the simcall numbers;
* user-side wrappers responsible for marshaling the parameters into `struct s_smx_simcall` and unmarshaling the result out of it;
* accessors to get/set the values of `struct s_smx_simcall`;
* a simulation-kernel-side big switch handling all the simcall numbers.
Then one has to write the code of the kernel-side handler for the simcall and the code of the simcall itself (which calls the code-generated marshaling/unmarshaling stuff).
In order to simplify this process, we added two generic simcalls which can be used to execute a function in the simulation kernel:
The first one (`simcall_run_kernel()`) executes a function in the simulation kernel context and returns immediately (without blocking the actor):
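Its prototype is essentially the following (sketch; the exact SimGrid declaration may differ slightly):

```cpp
void simcall_run_kernel(std::function<void()> const& code);
```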
On top of this, we add a wrapper which can be used to return a value of any type and properly handles exceptions:
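A simplified sketch of such a wrapper (the real SimGrid helper is `kernelImmediate()`, which has a few more refinements):

```cpp
template <class F>
typename std::result_of<F()>::type kernelImmediate(F&& code)
{
  typedef typename std::result_of<F()>::type R;
  Result<R> result;
  simcall_run_kernel([&] {
    // Kernel context: run the code and capture its outcome.
    // (A void-returning code would need a small specialization, omitted here.)
    try {
      result.set_value(std::forward<F>(code)());
    } catch (...) {
      result.set_exception(std::current_exception());
    }
  });
  // Back in the actor: return the value or rethrow the captured exception.
  return result.get();
}
```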
where `Result<R>` can store either a `R` or an exception.
Example of usage:
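For instance (a made-up example; `shared_counter` simply stands for some state owned by the simulation kernel):

```cpp
int shared_counter = 0;  // pretend this belongs to the simulation kernel

// Runs in the kernel, serialized with every other simcall, so no race is possible:
int seen = kernelImmediate([&]() {
  shared_counter++;
  return shared_counter;
});
```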
The second generic simcall (`simcall_run_blocking()`) executes a function in the SimGrid simulation kernel immediately but does not wake up the calling actor immediately:
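Its prototype mirrors the previous one (sketch):

```cpp
void simcall_run_blocking(std::function<void()> const& f);
```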
The `f` function is expected to set up some callbacks in the simulation kernel which will wake up the actor (with `simgrid::simix::unblock(actor)`) when the operation is completed.
This is wrapped in a higher-level primitive as well. The `kernelSync()` function expects a function-object which is executed immediately in the simulation kernel and returns a `Future<T>`. The simulator blocks the actor and resumes it when the `Future<T>` becomes ready with its result:
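A simplified sketch of it (helper names such as `SIMIX_process_self()` and the error handling are approximations):

```cpp
template <class F>
auto kernelSync(F code) -> decltype(code().get())
{
  typedef decltype(code().get()) T;
  smx_actor_t self = SIMIX_process_self();
  Result<T> result;

  simcall_run_blocking([&result, self, &code] {
    try {
      simgrid::kernel::Future<T> future = code();   // kernel context
      // When the kernel future becomes ready, store its outcome and wake the actor:
      future.then_([&result, self, future]() mutable {
        try {
          result.set_value(future.get());
        } catch (...) {
          result.set_exception(std::current_exception());
        }
        simgrid::simix::unblock(self);
      });
    } catch (...) {
      result.set_exception(std::current_exception());
      simgrid::simix::unblock(self);
    }
  });
  // The actor was blocked until unblock(self); return the value or rethrow:
  return result.get();
}
```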
A contrived example of this would be:
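(`kernel_wait_until()` is a hypothetical kernel-side helper returning a `simgrid::kernel::Future<void>` which becomes ready at the given simulated date; it only serves the example.)

```cpp
int res = kernelSync([&] {
  return kernel_wait_until(30).then(
      [](simgrid::kernel::Future<void> f) {
        f.get();   // rethrows if the underlying operation failed
        return 42;
      });
});
// The actor resumes at simulated time 30 with res == 42.
```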
We can write the related `kernelAsync()` which wakes up the actor immediately and returns a future to the actor. As this future is used in the actor context, it is a different future (`simgrid::simix::Future` instead of `simgrid::kernel::Future`) which implements a C++11 `std::future` wait-based API:
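A sketch of this actor-side future: it wraps the kernel future and adds the blocking behaviour (illustrative, not the exact class):

```cpp
namespace simgrid {
namespace simix {

template <class T>
class Future {
public:
  Future() = default;
  explicit Future(simgrid::kernel::Future<T> future) : future_(std::move(future)) {}

  bool valid() const { return future_.valid(); }
  T get();   // blocks the calling actor, see below

private:
  simgrid::kernel::Future<T> future_;
};

}
}
```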
The `future.get()` method is implemented as[^getcompared]:
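In spirit (a simplified sketch rather than the verbatim implementation), it blocks the actor until a continuation set on the wrapped kernel future wakes it up with the result:

```cpp
template <class T>
T simgrid::simix::Future<T>::get()
{
  smx_actor_t self = SIMIX_process_self();
  Result<T> result;

  simcall_run_blocking([this, &result, self] {
    // Kernel context: when the wrapped kernel future becomes ready,
    // store its outcome and wake the actor up.
    future_.then_([this, &result, self] {
      try {
        result.set_value(future_.get());
      } catch (...) {
        result.set_exception(std::current_exception());
      }
      simgrid::simix::unblock(self);
    });
  });
  return result.get();
}
```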
`kernelAsync()` simply :wink: calls `kernelImmediate()` and wraps the `simgrid::kernel::Future` into a `simgrid::simix::Future`:
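In sketch form:

```cpp
template <class F>
auto kernelAsync(F code) -> simgrid::simix::Future<decltype(code().get())>
{
  typedef decltype(code().get()) T;

  // Execute the code in the kernel and get the kernel-side future:
  simgrid::kernel::Future<T> future = kernelImmediate(std::move(code));

  // Wrap it into an actor-side future:
  return simgrid::simix::Future<T>(std::move(future));
}
```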
A contrived example of this would be:
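(Reusing the hypothetical `kernel_wait_until()` helper from above; `do_some_other_things_in_the_actor()` is also made up.)

```cpp
simgrid::simix::Future<int> future = kernelAsync([&] {
  return kernel_wait_until(30).then(
      [](simgrid::kernel::Future<void> f) {
        f.get();
        return 42;
      });
});

do_some_other_things_in_the_actor();   // the actor is not blocked yet
int res = future.get();                // blocks until simulated time 30, res == 42
```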
`kernelSync()` could be rewritten as:
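That is, roughly:

```cpp
template <class F>
auto kernelSync(F code) -> decltype(code().get())
{
  return kernelAsync(std::move(code)).get();
}
```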
The semantics are equivalent, but this form would require two simcalls instead of one to do the same job (one in `kernelAsync()` and one in `.get()`).
Similarly, SimGrid already had simulation-level condition variables, which can be exposed using the same API as `std::condition_variable`:
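A sketch of the exposed actor-side API (modeled on `std::condition_variable`; the real class has a few more overloads):

```cpp
class ConditionVariable {
public:
  // Untimed waits:
  void wait(std::unique_lock<Mutex>& lock);
  template <class P>
  void wait(std::unique_lock<Mutex>& lock, P pred);

  // Timed waits, in simulated time, accepting plain doubles...
  std::cv_status wait_for(std::unique_lock<Mutex>& lock, double duration);
  std::cv_status wait_until(std::unique_lock<Mutex>& lock, double timeout_time);

  // ...or std::chrono durations/time points:
  template <class Rep, class Period>
  std::cv_status wait_for(std::unique_lock<Mutex>& lock,
                          std::chrono::duration<Rep, Period> duration);

  void notify_one();
  void notify_all();

private:
  smx_cond_t cond_;   // underlying simulation-kernel condition variable
};
```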
We currently accept both `double` (for simplicity and consistency with the current codebase) and `std::chrono` types (for compatibility with C++ code) as durations and timepoints. One important thing to notice here is that `cond.wait_for()` and `cond.wait_until()` work in simulated time, not in real time.
The simple `cond.wait()` and `cond.wait_for()` delegate to pre-existing simcalls:
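In spirit (sketch; the simcall names, the timeout reporting and the access to the underlying kernel handles are approximations of the real code):

```cpp
void ConditionVariable::wait(std::unique_lock<Mutex>& lock)
{
  simcall_cond_wait(cond_, lock.mutex()->mutex_);
}

std::cv_status ConditionVariable::wait_for(std::unique_lock<Mutex>& lock, double timeout)
{
  if (timeout < 0)
    timeout = 0.0;
  try {
    simcall_cond_wait_timeout(cond_, lock.mutex()->mutex_, timeout);
    return std::cv_status::no_timeout;
  } catch (xbt_ex&) {
    // In this sketch a timeout surfaces as an exception; the mutex has to be
    // reacquired before returning, as std::condition_variable would do.
    lock.mutex()->lock();
    return std::cv_status::timeout;
  }
}
```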
Other methods are simple wrappers around those two:
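For instance (sketch):

```cpp
std::cv_status ConditionVariable::wait_until(std::unique_lock<Mutex>& lock,
                                             double timeout_time)
{
  double now = SIMIX_get_clock();               // current simulated time
  double duration = timeout_time < now ? 0.0 : timeout_time - now;
  return this->wait_for(lock, duration);
}

template <class P>
void ConditionVariable::wait(std::unique_lock<Mutex>& lock, P pred)
{
  while (!pred())
    this->wait(lock);
}
```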
We wrote two future implementations based on the `std::future` API:
* the first one is a non-blocking, event-based (`future.then(stuff)`) future, used inside our (non-blocking, event-based) simulation kernel;
* the second one is a wait-based (`future.get()`) future, used in the actors, which waits using a simcall.
These futures are used to implement `kernelSync()` and `kernelAsync()`, which expose asynchronous operations in the simulation kernel to the actors.
In addition, we wrote variations of some other C++ standard library classes (`SimulationClock`, `Mutex`, `ConditionVariable`) which work in the simulation:

* using simulated time;
* using simcalls for synchronisation.
Reusing the same API as the C++ standard library is very useful because:

* we use a proven API with clearly defined semantics;
* people already familiar with those APIs can use ours easily;
* users can rely on documentation, examples and tutorials made by other people;
* we can reuse generic code with our types (`std::unique_lock`, `std::lock_guard`, etc.).
This type of approach might be useful for other libraries which define their own execution contexts. An example of this is Mordor, an I/O library using fibers (cooperative scheduling): it implements cooperative/fiber mutexes and recursive mutexes which are compatible with the `BasicLockable` requirements (see `[thread.req.lockable.basic]` in the C++14 standard).
Result
`Result` is like a mix of `std::future` and `std::promise` in a single object, without shared state and synchronisation:
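A sketch of such a class (illustrative; the real one has a few more accessors):

```cpp
#include <exception>
#include <stdexcept>
#include <utility>
#include <boost/blank.hpp>
#include <boost/variant.hpp>

template <class T>
class Result {
public:
  bool is_valid() const { return value_.which() != 0; }

  void set_value(T value) { value_ = std::move(value); }
  void set_exception(std::exception_ptr e) { value_ = std::move(e); }

  // Consumer side: return the value or rethrow the stored exception.
  T get()
  {
    switch (value_.which()) {
      case 1:
        return std::move(boost::get<T>(value_));
      case 2:
        std::rethrow_exception(boost::get<std::exception_ptr>(value_));
      default:
        throw std::logic_error("This Result does not hold anything yet");
    }
  }

private:
  // Either nothing yet, a value, or an exception, all stored in place:
  boost::variant<boost::blank, T, std::exception_ptr> value_;
};
```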
These helpers are useful for dealing with generic future-based code:
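They are not reproduced here; in spirit they look like the following (hypothetical names, shown only to illustrate the idea): run some code, or read a ready future, and store the outcome into any promise-like object.

```cpp
// Run `code` and store its result, or the exception it throws, into a
// promise-like object (Promise, Result, std::promise, ...).
// (A void-returning `code` would need a small specialization, omitted here.)
template <class P, class F>
void fulfill_promise(P& promise, F&& code)
{
  try {
    promise.set_value(std::forward<F>(code)());
  } catch (...) {
    promise.set_exception(std::current_exception());
  }
}

// Forward the outcome of a ready future into a promise-like object:
template <class P, class Fut>
void set_promise(P& promise, Fut&& future)
{
  fulfill_promise(promise, [&] { return std::forward<Fut>(future).get(); });
}
```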
`Task<R(F...)>` is a type-erased callable object similar to `std::function<R(F...)>` but which works for move-only types. It is similar to `std::packaged_task<R(F...)>` but does not wrap the result in a `std::future<R>` (it is not packaged).
|  | `std::function` | `std::packaged_task` | `simgrid::xbt::Task` |
|---|---|---|---|
| Copyable | Yes | No | No |
| Movable | Yes | Yes | Yes |
| Call | `const` | non-`const` | non-`const` |
| Callable | multiple times | once | once |
| Sets a promise | No | Yes | No |
It could be implemented as:
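For instance, a rough approximation simply reuses `std::packaged_task`, whose call operator is also move-only and single-shot, while ignoring the `std::future` it would give us:

```cpp
#include <future>

template <class F>
using Task = std::packaged_task<F>;
```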
but we don't need a shared state.
This is useful in order to bind move-only type arguments:
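For example (the exact construction API of `simgrid::xbt::Task` is assumed here), a `std::unique_ptr` argument can be moved into the callable and consumed when the task runs, which `std::function` cannot store because it requires copyable callables:

```cpp
#include <memory>
#include <string>
#include <utility>

auto message = std::make_unique<std::string>("hello");

// The move-only unique_ptr is moved into the task (C++14 init-capture):
simgrid::xbt::Task<void()> task(
    [msg = std::move(message)]() { /* consume *msg exactly once */ });

task();   // single-shot, non-const call
```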
[^getcompared]:
You might want to compare this method with `simgrid::kernel::Future::get()`, which we showed previously: the method of the kernel future does not block and raises an error if the future is not ready; the method of the actor future blocks after having set a continuation to wake the actor when the future is ready.
[^lock]:
`std::lock()` might kinda work too, but it may not be such a good idea to use it, as it may use a [<q>deadlock avoidance algorithm such as try-and-back-off</q>](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf#page=1199). A backoff would probably uselessly wait in real time instead of simulated time. The deadlock avoidance algorithm might also add non-determinism to the simulation, which we would like to avoid. `std::try_lock()` should be safe to use, though.