Update README.md
charilaouk committed Apr 18, 2024
# Dissertation Code

## Description

This is my implementation of the deep learning models as described in the dissertation. Below is a description of the structure, dependencies, and example usage.

## Structure

---

### multi_lettuce

This directory includes the code for the deep lettuce detection model. It is split into the following subdirectories:

#### datasets:

This directory includes the original data, in YOLO format, used for training and testing, under the subdirectories:

- **datasets/data/training**: Training data
- **datasets/data/validation**: Testing data
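In the YOLO label format, each line of a label file describes one object as `class x_center y_center width height`, with all coordinates normalized to [0, 1]. As a minimal sketch (the image size below is an illustrative assumption, not taken from this dataset), such a line can be converted to pixel corner coordinates like this:

```python
def yolo_to_corners(line, img_w, img_h):
    """Convert one YOLO label line to (cls, x1, y1, x2, y2) in pixels."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return int(cls), x1, y1, x2, y2

# Example: a centered box covering half the image in each dimension
print(yolo_to_corners("0 0.5 0.5 0.5 0.5", 640, 480))
# → (0, 160.0, 120.0, 480.0, 360.0)
```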

#### multi_lettuce_model.ipynb:

Includes the code to define, instantiate, train, test, and validate the model. The first half of the code, which performs the training, takes a long time to run. For this reason, the resulting model is saved in the parent directory (yolov8s.pt). The second half loads the saved model's weights for training and evaluation; it is faster to run, and its outputs are already included in the parent directory:

- **Training metrics**: multi_lettuce/runs/detect/ (last training dir: train44)
- **Predicted images**: multi_lettuce/prediction_images (re-running the code will overwrite them)
- **Predicted detection labels**: runs/detect/predict{n} (last predicted labels: predict10)
- **Cropped RGB images**: multi_lettuce/identified_lettuces_images_rgb (re-running the code will overwrite them)
- **Cropped depth images**: multi_lettuce/identified_lettuces_images_depth (re-running the code will overwrite them)
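The faster second path — loading the saved weights and running detection — might look roughly like the sketch below, assuming the Ultralytics package; the helper names, source directory, and confidence threshold are illustrative assumptions, not the notebook's actual code:

```python
def detect_lettuces(weights="yolov8s.pt", source="datasets/data/validation", conf=0.5):
    """Load saved YOLOv8 weights and run detection over a directory of images.

    Hypothetical helper; paths and threshold are assumptions for illustration.
    """
    from ultralytics import YOLO  # heavyweight import kept local to the helper
    model = YOLO(weights)
    # save=True writes annotated images, save_txt=True writes YOLO-format labels,
    # both under a new runs/detect/predict{n} directory
    return model.predict(source=source, conf=conf, save=True, save_txt=True)

def filter_confident(detections, threshold=0.5):
    """Keep (x1, y1, x2, y2, conf) tuples at or above a confidence threshold."""
    return [d for d in detections if d[4] >= threshold]
```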

#### identified_lettuces_rgb_images:

Includes the single-lettuce RGB images extracted from the detections predicted on the last run.

#### identified_lettuces_depth_images:

Includes the single-lettuce depth images extracted from the detections predicted on the last run.
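Extracting the single-lettuce crops from the detected boxes can be sketched with NumPy slicing; the box format (pixel corner coordinates) and the helper name are assumptions for illustration, and the same boxes would be applied to the aligned depth frames to produce the depth crops:

```python
import numpy as np

def extract_crops(image, boxes):
    """Cut out each detected box (x1, y1, x2, y2) from an H x W x C image."""
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        # clamp the box to the image bounds before slicing
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), w), min(int(y2), h)
        crops.append(image[y1:y2, x1:x2])
    return crops

frame = np.zeros((480, 640, 3), dtype=np.uint8)
crop = extract_crops(frame, [(100, 50, 300, 250)])[0]
print(crop.shape)  # → (200, 200, 3)
```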

---

### single_lettuce

This directory includes the code for the deep CNN Regressor. It is split into the following subdirectories:

#### images:

This directory contains the preprocessing directory:

- **preprocessing**: This directory contains the code for the image preprocessing conducted before the model's training.
  - **depth_images**: Contains the original single-lettuce depth images
  - **rgb_images**: Contains the original single-lettuce RGB images
  - **depth_images_cropped**: Contains the cropped depth images
  - **rgb_images_cropped**: Contains the cropped RGB images
  - **depth_images_cropped_grayscale**: Contains the cropped depth images converted to grayscale
  - **depth_to_grayscale.ipynb**: Contains the code to convert the cropped depth images to grayscale. If rerun, it will overwrite the depth_images_cropped_grayscale directory.
  - **image_cropping.ipynb**: Contains the code to crop both RGB and depth images around a manually specified center. If run, it will overwrite the cropped images, requiring manual re-specification of the centers of 388 lettuce images.
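The two preprocessing steps above can be sketched as follows with NumPy; the function names, the crop size, and the min-max normalization used for the grayscale conversion are illustrative assumptions rather than the notebooks' exact code:

```python
import numpy as np

def crop_around_center(img, center, size=224):
    """Crop a size x size window around a manually specified (row, col) center."""
    r, c = center
    half = size // 2
    # shift the window so it stays fully inside the image
    r0 = min(max(r - half, 0), img.shape[0] - size)
    c0 = min(max(c - half, 0), img.shape[1] - size)
    return img[r0:r0 + size, c0:c0 + size]

def depth_to_grayscale(depth):
    """Min-max normalize a depth map into an 8-bit grayscale image."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # epsilon guards flat maps
    return (d * 255.0).astype(np.uint8)
```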

#### training_logs:

Contains the training event logs from each fold, which are used to visualize the training with TensorBoard.

#### single_lettuce_cv_model.ipynb:

Contains the code to define, initialize, train, test, and evaluate the model. Training is performed with cross-validation and takes substantial time. To avoid repeating it, the models from each fold are saved, along with visualization images comparing the model's predictions to the actual labels. If the training code is re-run, training_logs and the saved models will be overwritten. Conducting the cross-validation also consumes a large amount of memory.
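The fold-splitting at the heart of the cross-validation can be sketched as below — a minimal, framework-free illustration; the fold count, seed, and helper name are assumptions, and the notebook itself trains the CNN regressor inside this loop while logging each fold for TensorBoard:

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once up front
    folds = np.array_split(idx, k)            # k near-equal folds
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # each fold's model and event logs would be saved separately,
        # e.g. under training_logs/fold_{i}, so the run never has to repeat
        yield train_idx, val_idx
```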

## Dependencies

The code for each model was run in a separate Anaconda environment, each with its own installed packages, due to a collision between the typing-extensions versions required by tensorflow and torch.

### Environment 1: single_lettuce code (deep CNN Regressor model)

Package Version
---------------------------- --------------------
absl-py 2.1.0
...
wheel 0.41.2
wrapt 1.16.0
zipp 3.17.0

### Environment 2: multi_lettuce code (deep YOLOv8 lettuce detection model)

Package Version
---------------------------- --------------------
...
wheel 0.42.0
widgetsnbextension 4.0.9
wrapt 1.14.1
