Our Docker container is built upon a container for the probabilistic model checker Storm (see [this documentation](https://www.stormchecker.org/documentation/obtain-storm/docker.html) for details).
To use the Docker container, open a terminal and navigate to the folder on your host machine where you want to save the results. Then, run the following command (for Windows platforms, please see the [documentation on the Storm website](https://www.stormchecker.org/documentation/obtain-storm/docker.html#run-the-docker-image-windows)):
```
docker run --mount type=bind,source="$(pwd)",target=/opt/upmdps/data -w /opt/upmdps --rm -it --name upmdps thombadings/upmdps:sttt
```
Note that this command uses a shared folder between the host machine (in the directory where you run the command) and the Docker container (in the `/opt/upmdps/data` folder).
You will see a prompt inside the Docker container. The README in this folder is the file you are currently reading. Now you are ready to run the code for a single model (Section 3) or to replicate the experiments presented in [1] (Section 4).
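To check that the shared folder works as expected, you can, for example, create a test file in the `data/` folder from inside the container and verify that it appears on the host (the file name `mount_test.txt` is only an illustration):

```
# Inside the container: create an empty test file in the shared folder
touch data/mount_test.txt
# The file should now also be visible on the host machine, in the
# directory where you ran the `docker run` command above
```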
## 3. How to run the code for a single model?
A minimal command to run the BRP (256,5) benchmark is then as follows:

```
python3 sampler.py --model models/brp/brp_256_5.pm --property models/brp/brp.prctl --threshold 0.5 --bisimulation weak --comparator 'leq' --num_samples 10000 --beta 0.999 --outfile 'brp_256_5_output';
```
The `model` and `property` arguments refer to the PRISM model and the property file, respectively, and `bisimulation` can be either `strong`, `weak`, or `none`. Moreover, the `threshold` is the probability threshold that the property is checked against. Finally, `num_samples` is the number of parameter samples that are used, `beta` is the confidence level (for which we compute the lower bound on the satisfaction probability), and `outfile` is the file (prefix) in which the results are stored (note that this path is relative to the `data/` folder).
For details on all possible arguments and their default values, see below.
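For instance, to run the same benchmark with strong bisimulation, fewer parameter samples, and a lower confidence level, one could run the following (a sketch that only uses the arguments described above; the concrete values are illustrative):

```
python3 sampler.py --model models/brp/brp_256_5.pm --property models/brp/brp.prctl --threshold 0.5 --bisimulation strong --comparator 'leq' --num_samples 1000 --beta 0.99 --outfile 'brp_256_5_strong';
```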
### Inspecting results
The results for individual experiments are saved in the `data/` folder. Depending on the arguments provided to the script, at most three output files are created here: one .txt file with some logs, and two .csv files with results that are also shown in the tables in the main paper, [1].
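For example, after running the BRP command from Section 3, the generated files can be inspected from inside the container as follows (assuming the output file names start with the `outfile` prefix `brp_256_5_output`; the exact names may differ):

```
# List all output files for the chosen prefix
ls data/brp_256_5_output*
# Print the log file and preview the CSV results
cat data/brp_256_5_output*.txt
head data/brp_256_5_output*.csv
```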
## 4. How to run experiments?
The tables in the experimental section of [1] can be reproduced by running the shell script `run_experiments.sh` in the `/opt/upmdps/data` folder. To speed up run times, consider reducing the number of parameter samples or the number of iterations. Note that the shell script is located in the shared Docker folder by default, so you can easily change this file on the host machine.
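From the prompt inside the container (with working directory `/opt/upmdps`, as set by the `docker run` command above), the script can, for example, be started as follows:

```
bash data/run_experiments.sh
```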
All results are stored in the `data/` folder, and the tables in [1] can be replicated as follows:
Table 1 can be replicated by putting together the CSV output files from the drone experiment that end with `_satprob.csv`. Note that three such CSV files are created, one for each wind condition in the experiment, and each of these files represents a row in Table 1.
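For example, assuming the drone results are the only files in the `data/` folder whose names end in `_satprob.csv`, the three rows of Table 1 could be collected into a single file from inside the container (the output file name `table1.csv` is hypothetical):

```
# Concatenate the satisfaction-probability CSVs into one overview file
cat data/*_satprob.csv > data/table1.csv
```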