Pallet and Ground Detection-Segmentation in ROS2

This project implements a pallet and ground detection-segmentation application in ROS2, designed for manufacturing or warehousing environments.

Prerequisites

Before running this project, ensure you have the following installed:

  1. ROS2 Humble (on Ubuntu 22.04).
  2. Python libraries: pip install ultralytics opencv-python numpy torch torchvision matplotlib
  3. cv_bridge (ships with ROS2; if it is missing, install it with sudo apt install ros-humble-cv-bridge).

Installation Instructions

Step 1: Clone the Repository

Clone this repository as your ROS2 workspace (the repository root contains the src/ folder):

git clone git@github.com:a-daksh/pallet_ground_detection_ros2.git ~/ros2_ws
cd ~/ros2_ws

Step 2: Download Required Files

Download the required files from the provided Drive link:

  • db3_files folder (containing camera_data.db3).
  • best.pt (trained YOLO model weights).

Place them in the following locations:

  • db3_files/ folder should be placed inside ros2_ws/.
  • best.pt should be placed in ros2_ws/src/yolo_inference/yolo_inference/ (next to yolo_node.py).

Step 3: Build the Workspace

Build the ROS2 workspace:

cd ~/ros2_ws
colcon build --packages-select yolo_inference
source install/setup.bash

Usage Instructions

Step 1: Run the YOLO Inference Node

On the first terminal, run the YOLO inference node:

cd ~/ros2_ws
ros2 run yolo_inference yolo_node
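The node's annotation step (blending the model's segmentation masks onto the incoming camera frame before republishing it) can be sketched with a numpy-only helper. This is an illustrative sketch, not the code in yolo_node.py; the function name overlay_masks and its signature are assumptions:

```python
import numpy as np

def overlay_masks(image, masks, colors, alpha=0.4):
    """Blend boolean segmentation masks onto an RGB image.

    image:  (H, W, 3) uint8 array (the camera frame)
    masks:  list of (H, W) boolean arrays, one per detected region
    colors: list of (r, g, b) tuples, same length as masks
    alpha:  blend weight of the mask color over the image
    """
    out = image.astype(np.float32)
    for mask, color in zip(masks, colors):
        # Weighted blend only where the mask is True.
        out[mask] = (1 - alpha) * out[mask] + alpha * np.array(color, dtype=np.float32)
    return out.astype(np.uint8)
```

In the node, such a helper would sit between the YOLO forward pass and the publisher for the annotated image topic.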

Step 2: Replay Camera Data

On the second terminal, replay the .db3 bag file to simulate camera data:

ros2 bag play ~/ros2_ws/db3_files/camera_data.db3 --loop

Step 3: Visualize Outputs

On the third terminal, visualize the raw and annotated images using rqt_image_view (select the desired image topic from the drop-down menu):

ros2 run rqt_image_view rqt_image_view

Dataset Preparation

The dataset_preparation/ folder contains two Jupyter notebooks that streamline the process of preparing the dataset and training the YOLO model.

1. Annotate Dataset Notebook

  • File: annotate_dataset.ipynb
  • Purpose:
    • This notebook is used to annotate images in the dataset using Grounding DINO.
    • It generates bounding boxes and segmentation masks for pallets and the ground, which are then saved in Pascal VOC format.
  • How to Use:
    1. Open the notebook in Google Colab or your local Jupyter environment.
    2. Upload your image dataset and update the dataset path in the code.
    3. Run the cells and download the annotated dataset for training.
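The Pascal VOC export step mentioned above can be sketched with the standard library alone. This is a hedged illustration (boxes_to_voc_xml is a hypothetical name; the notebook's actual writer may differ), showing the annotation layout the notebook produces for each image:

```python
import xml.etree.ElementTree as ET

def boxes_to_voc_xml(filename, width, height, boxes):
    """Serialize detections as a Pascal VOC annotation string.

    boxes: list of (label, xmin, ymin, xmax, ymax) tuples,
           e.g. from the Grounding DINO output.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"
    for label, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bnd = ET.SubElement(obj, "bndbox")
        ET.SubElement(bnd, "xmin").text = str(xmin)
        ET.SubElement(bnd, "ymin").text = str(ymin)
        ET.SubElement(bnd, "xmax").text = str(xmax)
        ET.SubElement(bnd, "ymax").text = str(ymax)
    return ET.tostring(root, encoding="unicode")
```

One such XML file is written per image, next to the image it annotates.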

2. Augmenting and Converting the Data

  • For converting the dataset to YOLO format, I used Roboflow, an online platform that provides tools for better visualization and management of annotated data.
  • Roboflow also supports data augmentation techniques such as exposure adjustment, noise addition, flipping, and more. These tools were used to augment the dataset, effectively increasing its size and diversity to simulate various real-world conditions.
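Roboflow handled the augmentation here, but the same transforms can be reproduced offline. A minimal numpy sketch of the three techniques named above (the augment helper and its parameter ranges are illustrative choices, not Roboflow's defaults):

```python
import numpy as np

def augment(image, rng):
    """Apply simple augmentations: random horizontal flip,
    exposure adjustment, and additive Gaussian noise.

    image: (H, W, 3) uint8 array; rng: numpy random Generator.
    """
    out = image.astype(np.float32)
    if rng.random() < 0.5:                    # random horizontal flip
        out = out[:, ::-1, :]
    out *= rng.uniform(0.8, 1.2)              # exposure (brightness) adjustment
    out += rng.normal(0.0, 5.0, out.shape)    # additive Gaussian noise
    return np.clip(out, 0, 255).astype(np.uint8)
```

Running this a few times per source image multiplies the dataset size while simulating varied lighting and sensor noise. Note that geometric transforms such as flips must also be applied to the corresponding annotations.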

3. Train YOLO Model Notebook

  • File: train_yolo_model.ipynb
  • Purpose:
    • This notebook is used to train a YOLO model on the annotated dataset.
    • It supports both object detection (for pallets) and semantic segmentation (for pallets + ground).
  • How to Use:
    1. Open the notebook in Google Colab or your local Jupyter environment.
    2. Upload the annotated dataset generated by annotate_dataset.ipynb.
    3. Configure training parameters such as epochs, batch size, and learning rate.
    4. Train the model and download the trained weights (best.pt) for deployment.
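Ultralytics training reads the class and path layout from a dataset YAML. A hypothetical config for this two-class task (the root path and split folder names are assumptions; only the class names come from the dataset described above):

```yaml
# data.yaml - illustrative dataset config for YOLO segmentation training
path: /content/pallet_ground_dataset   # dataset root (assumed Colab path)
train: images/train
val: images/val

names:
  0: pallet
  1: ground
```

This file is passed to the trainer via its data argument, and the resulting best.pt is the weights file deployed in Step 2 of the installation instructions.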
