How can we validate simulations? Or: How can we teach human behaviour to a computer?

In general, simulations are used to investigate how a system or process will evolve over time, given a complex set of interdependent variables, in a way that produces human-readable results. Microscopic, agent-based simulations build on scientific models influenced by physics, mathematics, psychology and traffic engineering, to name a few. We can check whether a model sufficiently imitates reality using various validation methods.

To validate microscopic, agent-based simulators, we can use:

Theoretical indicators

Any simulator should operate on or reproduce scientifically validated statistics. For example, crowd:it uses the results of studies carried out by Weidmann and NIST to assign agent walking speeds and body circumferences.
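As a toy illustration of how such statistics can feed a simulator, the sketch below samples agent walking speeds from a normal distribution using the mean and standard deviation commonly cited from Weidmann (1.34 m/s and 0.26 m/s). The distribution shape, the clipping bounds, and the function names are assumptions for illustration, not crowd:it's actual implementation.

```python
import random

# Commonly cited values from Weidmann for free walking speed.
# Distribution shape and clipping bounds are illustrative assumptions.
WEIDMANN_MEAN = 1.34  # m/s
WEIDMANN_STD = 0.26   # m/s

def sample_walking_speed(rng: random.Random) -> float:
    """Draw one agent's free walking speed, clipped to a plausible range."""
    speed = rng.gauss(WEIDMANN_MEAN, WEIDMANN_STD)
    return min(max(speed, 0.5), 2.2)  # clip extreme draws

rng = random.Random(42)
speeds = [sample_walking_speed(rng) for _ in range(10_000)]
mean = sum(speeds) / len(speeds)
print(f"mean sampled speed: {mean:.2f} m/s")
```

Over many agents, the sampled population then reproduces the empirically measured speed distribution, which is exactly the property a theoretical indicator checks.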

Experiments

Unusual pedestrian movement patterns can be investigated through experiments. The results can then be implemented in the software, e.g. those from Forschungszentrum Jülich for the BaSiGo project. In general, a distinction is made between real experiments and controlled experiments.

Test suites

A selection of simple test cases can be run with the simulator. These tests verify whether the software meets predefined requirements. Common test suites include:

  • The RiMEA test suite, the most important in Germany (see the next section).
  • In the US, NIST tests (NIST Technical Note 1822) are used.
  • For the maritime sector, ISO 1238 exists.
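To illustrate what such a test case checks, the sketch below encodes a corridor test in the spirit of RiMEA's first test: an agent with a given free walking speed should traverse a 40 m corridor in roughly length/speed seconds. The ±10 % tolerance here is an assumption for illustration, not the normative band from the guideline.

```python
def check_corridor_test(measured_time_s: float,
                        length_m: float = 40.0,
                        speed_mps: float = 1.33,
                        tolerance: float = 0.10) -> bool:
    """Pass if measured traversal time is within +/- tolerance of length/speed."""
    expected_s = length_m / speed_mps  # about 30 s for these defaults
    return abs(measured_time_s - expected_s) <= tolerance * expected_s

# A plausible simulator result passes; a far-off one fails.
print(check_corridor_test(29.5))
print(check_corridor_test(45.0))
```

A real test suite runs dozens of such cases automatically after every model change, so regressions in the movement model are caught immediately.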

Currently, work is underway to unify these tests under the ISO standard on fire safety engineering.


Validation of crowd:it

Regular RiMEA tests

Since its foundation in 2014, accu:rate has been a member of RiMEA e.V. (Guideline for Microscopic Evacuation Analysis). The goal of RiMEA is to support authorities and those developing microscopic pedestrian simulators with a guideline. It defines minimum standards for input parameters and simulation models.
Appendix 1 of the guideline presents a number of test cases. These test cases allow a user of simulation software to better understand the results of their simulation.
We run these test cases with crowd:it on a regular basis, enabling you to better understand our simulation model. Transparency is one of our key values! On request, we will also be happy to provide you with our older test cases. Simply contact us for this.

The latest RiMEA guidelines are available here. The latest results from crowd:it are available here.

Comparison of simulation results with experiments

We regularly compare the results of a crowd:it simulation with results from both real and controlled experiments. In one case, we observed an evacuation exercise and then simulated the same scenario with crowd:it; the evacuation time deviated by less than 5% between simulation and experiment. To further develop our staircase model, we carried out both real and controlled experiments in collaboration with the University of Applied Sciences Munich and subsequently used the results for validation.
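The acceptance check behind such a comparison is simple: the relative deviation between the simulated and observed evacuation times must stay below a threshold. The numbers below are illustrative only; the real exercise data is not published here.

```python
def relative_deviation(simulated_s: float, observed_s: float) -> float:
    """Relative deviation of simulated vs. observed evacuation time."""
    return abs(simulated_s - observed_s) / observed_s

# Illustrative values, not the actual exercise measurements.
sim_time_s, observed_time_s = 312.0, 300.0
dev = relative_deviation(sim_time_s, observed_time_s)
print(f"deviation: {dev:.1%}")  # 4.0%
```

With these example numbers the deviation is 4 %, within the 5 % band mentioned above.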

Comparison of simulation results with other calculation methods

In order to compare the results of crowd:it against macroscopic simulation methods, we considered the capacity analyses and hydraulic methods of Predteschenski & Milinski.
The evacuation times of a three-storey office building with a 1.20 m wide staircase and a simple floor plan are shown in the following diagram:
crowd:it's results lie between those of the two macroscopic procedures, and unlike macroscopic procedures, crowd:it remains accurate even at low agent numbers.
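For intuition, the capacity-analysis side of such a comparison reduces, in its simplest form, to a flow computation: evacuation time scales with the number of persons divided by the flow an opening can carry. The sketch below is this reduced form only, with an assumed specific flow value; it is not the full Predteschenski & Milinski procedure, which derives flows from density-dependent tables.

```python
def capacity_evacuation_time(persons: int, width_m: float,
                             specific_flow: float = 1.3) -> float:
    """Simplest capacity estimate: time = persons / (specific_flow * width).

    specific_flow in persons/(m*s) is an illustrative assumption, not a
    normative value from Predteschenski & Milinski.
    """
    return persons / (specific_flow * width_m)

# Illustrative: 150 persons through a 1.20 m wide staircase.
t = capacity_evacuation_time(150, 1.20)
print(f"capacity estimate: {t:.0f} s")
```

Such a formula returns a single aggregate time regardless of how few people are involved, which is why macroscopic methods lose accuracy at low agent numbers, while an agent-based simulation still resolves each individual.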