ML-Challenge: SeaScout

This program is a deep-sea organism detector and classifier built for the 2023 MATE Machine Learning Challenge.

Developed by Srihari Krishnaswamy and Vivian Wang, part of the Underwater Remotely Operated Vehicles Team at the University of Washington.

Project Overview

SeaScout uses a YOLOv5 object detection model for detection and classification. Specifically, training started from the MBARI Monterey Bay Benthic Object Detector, also found in FathomNet's Model Zoo. During training, some layers of the network were kept frozen, while enough layers were unfrozen to reach good model performance. The model was trained on data from last year's Deepsea-Detector project, with the dataset expanded to include more data from FathomNet. The training data for our model can be found in our Roboflow project.
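For reference, YOLOv5's stock train.py exposes a --freeze flag for exactly this kind of partial fine-tuning. A minimal sketch of such a run (the dataset YAML, weights file name, and hyperparameters here are illustrative, not our exact training settings):

python train.py --data benthic.yaml --weights mbari-benthic.pt --freeze 10 --img 640 --epochs 100

Here --freeze 10 keeps the first 10 layers (the backbone) frozen while the detection head continues to train.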

Getting Started

Downloading the Project

This project requires Python 3.10 or higher. It also uses Git LFS to store large files such as our models and base videos.

Run the following commands in a terminal to clone the repository, initialize LFS, and install the requirements:

git clone https://github.com/srihariKrishnaswamy/ML-Challenge.git
cd ML-Challenge
git lfs install; git lfs fetch; git lfs pull
pip3 install -qr requirements.txt
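If the model weights or videos show up as small pointer files instead of their full contents, you can check which files Git LFS is tracking (and re-run git lfs pull if needed):

git lfs ls-files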

Running the UI

To launch the project's UI, run this command:

python ui.py

[Screenshot of the SeaScout UI]

After the UI starts, you are free to process videos. To be processed, videos must be placed in the videos folder inside the project folder. Similarly, any model you want to use must be in the iterations folder inside the main project folder. You can process multiple videos and log them all to the same Excel file, which will appear in the latest folder inside the output folder once processing finishes. Processed videos will appear there as well. You are free to kill video processing at any point, but if you do, no Excel file or processed videos will be generated.
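Roughly, the relevant folder layout looks like this (folder names as described above; the individual file names are illustrative):

ML-Challenge/
├── videos/            input .mp4 files to process
├── iterations/        .pt model weights (e.g. SeaScout.pt)
└── output/
    └── latest/        Excel log + processed videos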

Running in the Command Line/Terminal or Google Colab

The UI for our project is a wrapper around a Python script which invokes object detection and detection logging. To run this script on its own, use the following command:

python master_detect_data.py --videos FIRST_VIDEO_HERE.mp4 SECOND_VIDEO_HERE.mp4 --model MODEL_HERE.pt

Just as in the UI, the entered videos and model must be valid and located in the videos and iterations (NOT models) folders, respectively. For instance, a valid invocation of the script would be:

python master_detect_data.py --videos descent.mp4 seafloor.mp4 --model SeaScout.pt

since the file SeaScout.pt is in the iterations folder and each of the .mp4 files is in the videos folder.
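As a convenience, here is a minimal, hypothetical sketch for batch-processing every video in the videos folder in one call, assuming only the flags documented above (this helper is ours, not part of the repo):

import subprocess
from pathlib import Path

# Collect every .mp4 in the videos folder and pass them all in one run
videos = sorted(p.name for p in Path("videos").glob("*.mp4"))
subprocess.run(
    ["python", "master_detect_data.py", "--videos", *videos, "--model", "SeaScout.pt"],
    check=True,
)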

Resources:

Dataset: Roboflow project
Model Training Notebook: Colab Notebook
Additional In-Depth Documentation: Documentation

Acknowledgements:

We would like to thank the following people and organizations:
