From 35cd856c13188b8ee83adf4b9118eecaa750e5ad Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tereso=20del=20R=C3=ADo=20Almajano?=
Date: Tue, 11 Apr 2023 10:39:57 +0100
Subject: [PATCH] README.md updated

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 0d81db0..965368b 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ This repository contains proof of data balancing and data augmentation's impact

 - It also contains an installable package called dataset_manipulation that can balance and augment polynomial data. It is not necessary to install it.

-- Running the module main.py, a file called ml_results.csv (also included in the repository) will be generated showing a comparison between a variety of models trained either in data without manipulation, in balanced data or in augmented data.
+- Running the module main.py generates files called ml_tested_in_normal.csv and ml_tested_in_normal.csv (also included in the repository), which compare a variety of models trained on data without manipulation, on balanced data, and on augmented data.

 - For some models there is not a big difference, but keep in mind that these models performed no better than random guessing (accuracy close to 0.167) when using the hyperparameters found by Florescu in [1].
 - However, there is a marked improvement for random forest and k-nearest-neighbours, whose accuracy increases by up to 50% when data is augmented.
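As a rough illustration of the workflow the updated README bullet describes (not part of the patch itself), the sketch below shows one way to regenerate the comparison files and inspect one of them. The invocation `python main.py` and the use of `subprocess` and pandas are assumptions for the sake of the example; the repository may expose the module differently.

```python
# Illustrative sketch only: regenerate and inspect the comparison CSVs
# mentioned in the patched README bullet. Running main.py this way and
# reading the output with pandas are assumptions, not confirmed by the patch.
import subprocess

import pandas as pd

# Run the module that trains the models and writes the comparison files.
subprocess.run(["python", "main.py"], check=True)

# Load one of the result files named in the patch and preview the first rows.
results = pd.read_csv("ml_tested_in_normal.csv")
print(results.head())
```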