From 0bd7b7809abe2b85f8f91ac5ef2c5c63fe8de14e Mon Sep 17 00:00:00 2001
From: Tom Selier
Date: Sun, 22 Oct 2023 18:59:50 +0200
Subject: [PATCH 1/2] added template extraction to readme

---
 README.md | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 09b90da..a890566 100644
--- a/README.md
+++ b/README.md
@@ -67,6 +67,8 @@ $ pip install -e .
 ```bash
 $ python ./src/experiments/dataset.py
 ```
+4. (optional) Run the template extraction tool
+5. (optional) Run the dataset splitter tool
 
 ### Run CVSuite (for the first time)
 1. Create `config.json` in the `./src/config/` folder and copy the contents of the template
@@ -103,11 +105,23 @@ $ python ./src/helpers/test/knn.py -i ./out/result-(date/time).csv -o ./out/mode
 - `-m` Model to train; `dectree`, `randforest` or `extratree`
 - `-s` Scaler file to use (`.pkl` file)
 ```sh
-python ./src/helpers/test/decision_tree.py -i ./out/result-(date/time).csv -o ./out/models/ -m 'dectree' -s ./out/models/scale_(date/time).pkl
+$ python ./src/helpers/test/decision_tree.py -i ./out/result-(date/time).csv -o ./out/models/ -m 'dectree' -s ./out/models/scale_(date/time).pkl
 ```
 2. The script generates one `.pkl` file based on the chosen model
 3. Edit your `config.json` to include the newly created model
 
+### Template extraction
+> :warning: **Please note:**<br>
+> This tool uses the legacy format for datasets.<br>
+> Images are sorted using folders, instead of by name
+
+1. Images should have four standard Aruco markers clearly visible
+2. Run the template extraction tool with an input directory as an argument
+```sh
+$ python ./src/experiments/template_extraction/script.py ./dataset
+```
+3. The script generates new folders, ending with `_out`
+4. The paths to any failed images are saved in `skipped.txt`
 
 ---
 Arne van Iterson<br>
From 39d30708ec5e1f53ade8516e3ef5298f18e9b2d5 Mon Sep 17 00:00:00 2001
From: Tom Selier
Date: Sun, 22 Oct 2023 19:07:57 +0200
Subject: [PATCH 2/2] added dataset splitter to readme

---
 README.md | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a890566..2cad4e2 100644
--- a/README.md
+++ b/README.md
@@ -113,7 +113,7 @@ $ python ./src/helpers/test/decision_tree.py -i ./out/result-(date/time).csv -o
 ### Template extraction
 > :warning: **Please note:**<br>
 > This tool uses the legacy format for datasets.<br>
-> Images are sorted using folders, instead of by name
+> Images are sorted using folders, instead of by name.
 
 1. Images should have four standard Aruco markers clearly visible
 2. Run the template extraction tool with an input directory as an argument
@@ -122,6 +122,18 @@ $ python ./src/experiments/template_extraction/script.py ./dataset
 ```
 3. The script generates new folders, ending with `_out`
 4. The paths to any failed images are saved in `skipped.txt`
+
+### Dataset splitting
+1. Ensure that the dataset is in `./res/dataset`
+2. Run the dataset splitter tool:
+```sh
+$ python ./src/experiments/dataset.py
+```
+3. Three new folders will be created, containing the following percentages of images:
+   - `./res/dataset/training`, 70%
+   - `./res/dataset/validation`, 20%
+   - `./res/dataset/testing`, 10%
+4. Images are split pseudorandomly, so the same datasets are created on different machines.
 
 ---
 Arne van Iterson<br>
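The template extraction steps above only state that four Aruco markers must be clearly visible. As a rough illustration of how four such markers are typically used to locate and rectify a region with OpenCV, here is a minimal sketch; it is not the actual `./src/experiments/template_extraction/script.py`, and the marker dictionary, function name, and output size are assumptions. It assumes opencv-contrib-python 4.7 or newer for the `cv2.aruco.ArucoDetector` API.

```python
# Illustrative sketch only -- not the actual ./src/experiments/template_extraction/script.py.
# Assumes opencv-contrib-python >= 4.7 (cv2.aruco.ArucoDetector) and a 4x4_50 dictionary.
import cv2
import numpy as np

def rectify_with_aruco(image_path: str, size: int = 1000):
    """Detect four ArUco markers and warp the quadrilateral they span."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)

    if ids is None or len(ids) < 4:
        return None  # a marker is missing or occluded; the caller can log the path

    # Take the first corner of each marker and order the four points as
    # top-left, top-right, bottom-right, bottom-left using the usual
    # sum/difference trick on (x, y) coordinates.
    points = np.array([marker[0][0] for marker in corners[:4]], dtype=np.float32)
    sums = points.sum(axis=1)
    diffs = np.diff(points, axis=1).ravel()  # y - x
    src = np.array([points[np.argmin(sums)],    # top-left
                    points[np.argmin(diffs)],   # top-right
                    points[np.argmax(sums)],    # bottom-right
                    points[np.argmax(diffs)]],  # bottom-left
                   dtype=np.float32)
    dst = np.array([[0, 0], [size, 0], [size, size], [0, size]], dtype=np.float32)

    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, matrix, (size, size))
```

Returning `None` when fewer than four markers are detected mirrors the behaviour described above, where the paths of failed images end up in `skipped.txt`: the caller can record the path and move on.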
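The dataset splitting section states that the split is pseudorandom yet produces the same datasets on every machine. Below is a minimal sketch of how such a reproducible 70/20/10 split can be implemented with a sorted file list and a fixed seed; it is not the actual `./src/experiments/dataset.py`, and the seed value and the `testing` folder name are assumptions.

```python
# Illustrative sketch of a reproducible 70/20/10 split -- not the actual
# ./src/experiments/dataset.py. The seed value and the "testing" folder
# name are assumptions.
import random
import shutil
from pathlib import Path

def split_dataset(root: str = "./res/dataset", seed: int = 42) -> None:
    splits = {"training": 0.7, "validation": 0.2, "testing": 0.1}

    # Sort first so every machine starts from the same ordering, then
    # shuffle with a fixed seed so the permutation is identical everywhere.
    images = sorted(p for p in Path(root).iterdir() if p.is_file())
    random.Random(seed).shuffle(images)

    counts = [round(fraction * len(images)) for fraction in splits.values()]
    counts[-1] = len(images) - sum(counts[:-1])  # remainder goes to the last split

    start = 0
    for name, count in zip(splits, counts):
        target = Path(root) / name
        target.mkdir(parents=True, exist_ok=True)
        for image in images[start:start + count]:
            shutil.copy2(image, target / image.name)
        start += count

if __name__ == "__main__":
    split_dataset()
```

Sorting before shuffling matters: directory listing order is filesystem-dependent, so without it the same seed could still yield different splits on different machines.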