diff --git a/README.md b/README.md
index 09b90da..2cad4e2 100644
--- a/README.md
+++ b/README.md
@@ -67,6 +67,8 @@ $ pip install -e .
```bash
$ python ./src/experiments/dataset.py
```
+4. (Optional) Run the template extraction tool (see *Template extraction* below)
+5. (Optional) Run the dataset splitter tool (see *Dataset splitting* below)
### Run CVSuite (for the first time)
1. Create `config.json` in the `./src/config/` folder and copy the contents of the template
@@ -103,11 +105,35 @@ $ python ./src/helpers/test/knn.py -i ./out/result-(date/time).csv -o ./out/mode
- `-m` Model to train; `dectree`, `randforest` or `extratree`
- `-s` Scaler file to use (`.pkl` file)
```sh
-python ./src/helpers/test/decision_tree.py -i ./out/result-(date/time).csv -o ./out/models/ -m 'dectree' -s ./out/models/scale_(date/time).pkl
+$ python ./src/helpers/test/decision_tree.py -i ./out/result-(date/time).csv -o ./out/models/ -m 'dectree' -s ./out/models/scale_(date/time).pkl
```
2. The script generates one `.pkl` file based on the chosen model
3. Edit your `config.json` to include the newly created model
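+
+The exact structure of `config.json` is defined by the template in `./src/config/`; the snippet below is only a hypothetical illustration of pointing the configuration at the newly trained model and scaler files (the `model` and `scaler` key names are placeholders, not the project's real keys):
+```json
+{
+  "model": "./out/models/model_(date/time).pkl",
+  "scaler": "./out/models/scale_(date/time).pkl"
+}
+```
+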
+### Template extraction
+> :warning: **Please note:**
+> This tool uses the legacy dataset format:
+> images are sorted into folders instead of by name.
+
+1. Images should have four standard ArUco markers clearly visible
+2. Run the template extraction tool with the input directory as its argument:
+```sh
+$ python ./src/experiments/template_extraction/script.py ./dataset
+```
+3. The script generates new folders ending in `_out`
+4. The paths of any images that could not be processed are saved in `skipped.txt`
+
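+For context, this kind of marker-based extraction typically detects the four markers and warps the bounded region onto a fixed-size rectangle. The sketch below only illustrates that idea with OpenCV's `cv2.aruco` module (OpenCV >= 4.7 and marker ids 0-3 are assumptions); it is not the repository's `script.py`:
+```python
+# Illustrative sketch only -- not the repository's template extraction script.
+# Assumes OpenCV >= 4.7 and four markers from a known dictionary with ids 0-3,
+# placed clockwise starting at the top-left corner of the template.
+import cv2
+import numpy as np
+
+def extract_template(path, out_size=(1000, 1000)):
+    img = cv2.imread(path)
+    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
+
+    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
+    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
+    corners, ids, _ = detector.detectMarkers(gray)
+    if ids is None or len(ids) < 4:
+        return None  # caller can log the path, e.g. to skipped.txt
+
+    # Use the centre of each marker as a reference point, keyed by marker id.
+    centres = {int(i): c.reshape(4, 2).mean(axis=0) for i, c in zip(ids.flatten(), corners)}
+    src = np.float32([centres[i] for i in range(4)])
+    w, h = out_size
+    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
+
+    # Warp the region bounded by the markers onto the output rectangle.
+    matrix = cv2.getPerspectiveTransform(src, dst)
+    return cv2.warpPerspective(img, matrix, out_size)
+```
+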
+### Dataset splitting
+1. Ensure that the dataset is in `./res/dataset`
+2. Run the dataset splitter tool:
+```sh
+$ python ./src/experiments/dataset.py
+```
+3. Three new folders will be created, containing the following percentages of images:
+   - `./res/dataset/training`, 70%
+   - `./res/dataset/validation`, 20%
+   - `./res/dataset/testing`, 10%
+4. Images are split pseudorandomly but deterministically, so the same datasets are created on every machine (see the sketch below).
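+
+A minimal sketch of how such a deterministic split is commonly implemented (illustrative only, not the repository's `dataset.py`; the ratios restate the 70/20/10 split above, but the seed value is an assumption):
+```python
+# Illustrative sketch only -- not the repository's dataset.py.
+# A fixed seed makes the pseudorandom shuffle reproducible across machines.
+import random
+
+def split_dataset(image_paths, seed=42):
+    rng = random.Random(seed)      # seed value is an assumption
+    paths = sorted(image_paths)    # sort first so the input order is stable
+    rng.shuffle(paths)
+
+    n_train = int(len(paths) * 0.7)
+    n_val = int(len(paths) * 0.2)
+    return {
+        "training": paths[:n_train],
+        "validation": paths[n_train:n_train + n_val],
+        "testing": paths[n_train + n_val:],  # remaining ~10%
+    }
+```
+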
---
Arne van Iterson