SUMo'2011
International Workshop on
Scalable and Usable Model Checking for
Petri Nets and other models of Concurrency

Please find on the "program" page the results and data from the model checking contest.

April 6, 2011 - a bug was corrected in the BNF for formulae provided here.
April 13, 2011 - an inconsistency in the comments of the various setup.sh files was corrected in the submission kit.

Contents

  1. Objectives

  2. Evaluation of Model Checkers

  3. The Models

  4. Classification of Model Checkers

  5. How to Participate

Objectives

To verify highly concurrent systems, should we use a symmetry-based or a partial order-based model checker?

For models with large variable domains, should we use a decision diagram-based or a symmetry-based model checker?

Results that help answer these questions are spread across numerous papers in numerous conferences. Moreover, since benchmarks are executed on different platforms and composed of different models, it is difficult to draw firm conclusions.

The objective of the Model Checking Contest is to compare the efficiency of verification techniques according to the characteristics of models. To do so, the Model Checking Contest compares tools on academic models with scaling capabilities (i.e. parameters that determine the "size" of the associated state space).

Feedback on tool efficiency over the selected benchmarks will be provided during the SUMo workshop. It will give a hint as to which techniques are suited to which model characteristics.

Since this is the first edition of the Model Checking Contest, we have selected a limited set of capabilities to evaluate; based on this experiment, future editions will propose more sophisticated ones. So far, we only check state space generation, evaluation of safety (reachability) properties, and deadlock detection.

Depending on the success of the contest and the quality of the results, a report paper will be published in TOPNOC, thus officially stating some clues for potential users of our technologies. If so, this report will not only focus on performance but will also provide a tentative classification of the properties and techniques analyzed during the contest.

Your submission must be provided before May 15, 2011.

Evaluation of Model Checkers

During this contest (and for the sake of simplicity in this first edition), model checkers can take three types of tests:

  1. Generation of the state space (time and memory used),

  2. Detection of deadlocks in the state space,

  3. Evaluation of reachability properties from generated formulae following this syntax (an illustrative example is shown after this list).
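
As a purely illustrative example (the notation below is ours and does not follow the contest's actual BNF, which is linked above), a reachability property asks whether some marking satisfying a given condition can be reached from the initial marking. Here, EF reads "some reachable marking satisfies ...", and CriticalSection is a hypothetical place name:

    EF ( tokens(CriticalSection) >= 2 )

A tool passes such a test by correctly answering whether any reachable marking puts at least two tokens in the place CriticalSection.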

In the submission kit, property files are provided for each model (we may add some properties for the final evaluation ;-) ). There is a way to indicate that your tool does not participate in some benchmarks (models) or some tests.

To run these tests and extract information (memory usage, computation time, etc.), an execution script is provided. Integrating your tool into this script simply requires that you provide your binaries and descriptions of the required models, and fill in the execution script. Detailed instructions are provided in the archive containing a dummy submission.
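
As a rough, purely hypothetical sketch of what this integration looks like (the variable and function names below are invented for illustration; the authoritative template is the setup.sh shipped in the submission kit):

    #!/bin/sh
    # Hypothetical integration sketch; the real hooks are defined
    # in the setup.sh of the submission kit.

    TOOL_NAME="mychecker"        # name reported in the results
    TOOL_BIN="./bin/mychecker"   # statically linked binary

    # Called by the execution script with one model instance ($1)
    # and the test to perform ($2).
    run_test () {
        case "$2" in
            StateSpace)   "$TOOL_BIN" --statespace "$1" ;;
            Deadlocks)    "$TOOL_BIN" --deadlocks  "$1" ;;
            Reachability) "$TOOL_BIN" --check "$1" formulae.txt ;;
        esac
    }

The execution script then measures time and memory around each such invocation and parses the tool's output.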

Submissions will be executed under Linux on a dedicated hyper-threaded 2.4 GHz dual Pentium Xeon machine (32-bit) with 2 GB of memory (the machines will be fully dedicated to tool execution). The operating system will be Mandriva Linux 2010.2.

The Models

The selected models are the following:

  1. P/T models: FMS, Kanban, MAPK;

  2. Coloured models: Peterson, Philosophers, SharedMemory, TokenRing.

Each model has a scaling factor that we use to generate several instances and assess the scalability of each participating tool (for instance, in the Philosophers model, the scaling factor is the number of philosophers around the table). See the dedicated model pages for the impact of this scaling factor on each model.

IMPORTANT: in order to open the contest as widely as possible, we provide reference models in PNML format (now an official ISO/IEC standard), together with a graphical representation and a description; a minimal PNML skeleton is sketched below. You may provide their counterparts in your tool's own format, but you must ensure that both models are equivalent; checks will be performed to verify this equivalence. Read the integration instructions in the submission kit for more details.
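
For readers who have never seen the format, here is a minimal P/T net written against the PNML 2009 grammar (a sketch with made-up identifiers, not one of the contest models; the reference files in the submission kit are authoritative):

    <?xml version="1.0" encoding="UTF-8"?>
    <pnml xmlns="http://www.pnml.org/version-2009/grammar/pnml">
      <net id="net1" type="http://www.pnml.org/version-2009/grammar/ptnet">
        <page id="page1">
          <!-- one place holding one initial token -->
          <place id="p1">
            <initialMarking><text>1</text></initialMarking>
          </place>
          <transition id="t1"/>
          <!-- arc consuming the token of p1 when t1 fires -->
          <arc id="a1" source="p1" target="t1"/>
        </page>
      </net>
    </pnml>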

Classification of Model Checkers

So far, the following classification of model checkers along three axes is foreseen, according to the planned evaluation:

  1. Techniques:

     - Explicit,

     - Abstractions (CEGAR, slicing, etc.),

     - Partial order (stubborn sets, persistent sets, McMillan's unfoldings, ...),

     - Parallel processing (multi-threaded model checkers),

     - Decision-diagram based (BDD, ZBDD, MDD, DDD, SDD, etc.),

     - Symmetry-based (such as equivalence classes or permutation of values),

     - State compression (such as delta markings, bitstate hashing, recursive indexing, etc.),

     - etc.

  2. Model Characteristics:

     - Coloured,

     - P/T.

  3. Properties:

     - Deadlocks,

     - Reachability.

How to Participate

To participate, please follow this procedure:

  1. Step 0: read the rules to check the submission conditions.

  2. Step 1: download the submission kit (http://sumo.lip6.fr/SUMo-2011-MCC-BASE.tgz), which contains the execution scripts and the PNML files.

  3. Step 2: read the instructions and perform the integration, which basically amounts to the following (complete instructions are located in the README and setup.sh files):

     - fill in the setup.sh file (and possibly some others if you need to);

     - provide executable binaries for your tools and, if needed (e.g. if the tool does not support PNML), models in your own format. Binaries must be statically linked, since no libraries will be installed other than those of the standard Mandriva Linux 2010.2 system (ldd reports "not a dynamic executable" for a statically linked binary).
       Please be aware that you must respect a specific output format that will be parsed to extract data on the execution machine; a hypothetical sketch is shown after this list. Some output options allow you to state that you do not participate in some categories of properties or models.
       Please note that the submission kit features a "dummy tool" that produces appropriately formatted outputs to help you understand what is required;

     - check the execution with the script in order to get an idea of what you get.

  4. Step 3: when your submission is OK, run the command that builds the distribution (see the README).

  5. Step 4: upload your file via this web page. To perform the submission, you must create a login and then use it to access the upload form. Creation of the login must be validated (to avoid submissions by robots). In case of trouble, please contact Fabrice.Kordon[at]lip6.fr.
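
As a concrete but purely hypothetical illustration of the output constraint (the keywords below are invented; the dummy tool in the submission kit demonstrates the actual format expected by the analysis scripts), a tool wrapper might emit one machine-readable line per test:

    # Invented keywords, for illustration only:
    echo 'STATESPACE STATES 2546432 TIME 12.4 MEM 312'   # one result line per test
    echo 'DO_NOT_COMPETE Reachability'                   # opting out of a category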

Your submission must be provided before May 15, 2011, to leave us enough time for the evaluation.

A satellite event of PETRI NETS'2011 and ACSD'2011 - June 21, 2011

