Symbol Recognizing Robot Car

This repository contains the source code for an autonomous robot car which uses symbol recognition to perform tasks such as following directions, stopping, counting shapes and measuring distances. The robot operates on a Raspberry Pi 3 and utilizes functions available in the OpenCV library.

Components

  • Raspberry Pi 3 Model B
  • Raspberry Pi Camera Module 2
  • L298N Motor Driver

Dependencies

  • Python 3.6 or higher
  • OpenCV
  • PyTesseract
  • RPi.GPIO
  • NumPy

Run these commands in the terminal:

pip install opencv-python
pip install pytesseract
pip install RPi.GPIO
pip install numpy

PyTesseract also requires the Tesseract OCR engine itself, which can be installed on Raspberry Pi OS with sudo apt-get install tesseract-ocr.

Features

Symbol Recognition

  • Methodology

    In order to prevent background imagery from interfering with the symbol recognition process of the delivery robot, color masking is utilized to extract the region of interest (ROI) within each symbol. The ROI is identified by the purple border surrounding each symbol: cv2.inRange extracts only the specific purple color range from the input camera feed, and cv2.boundingRect is then used to obtain the x, y, w, h coordinates of the border and its width-to-height ratio. Only a purple border with the expected ratio is accepted, which prevents purple objects in the background from interrupting recognition. A blue border is drawn around the ROI in the video output using cv2.rectangle and the same x, y, w, h coordinates, informing the user that the purple border has been recognized and showing where in the camera feed the symbol recognition will take place.
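
The snippet below is a minimal sketch of this masking and ratio check. The HSV range, the minimum size filter and the square-ratio threshold are assumptions chosen for illustration, and cv2.findContours is used here to obtain the point set passed to cv2.boundingRect; the repository's exact values may differ.

import cv2
import numpy as np

# Assumed HSV range for the purple border; the real values are tuned to the symbol cards.
PURPLE_LOWER = np.array([125, 80, 80])
PURPLE_UPPER = np.array([155, 255, 255])

def find_symbol_roi(frame):
    """Return (roi, x, y, w, h) for a purple-bordered symbol, or None if absent."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, PURPLE_LOWER, PURPLE_UPPER)            # keep only purple pixels
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]       # [-2] works on OpenCV 3 and 4
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w < 60 or h < 60:                                       # ignore small purple specks
            continue
        if 0.8 < w / float(h) < 1.2:                               # border is roughly square (assumed)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)   # blue ROI outline
            return frame[y:y + h, x:x + w], x, y, w, h
    return None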

Direction Following

  • Methodology

    Color dominance was determined to be the most effective way to recognize and differentiate arrows pointing in four different directions. K-means clustering (cv2.kmeans from the OpenCV library) is used to obtain the centroid of each color in the ROI. The blue circle of the arrow symbol is split into nine zones, and the white centroids of the top, bottom, left and right zones are compared to locate the tip of the arrow. Once the tip is located, the direction is identified.



    Color masking was used to isolate the red color of the stop sign in the ROI. The extracted octagon, being close enough to circular, was then detected using cv2.HoughCircles from the OpenCV library. If one or more circles are detected, the robot knows that it should halt. A sketch of both the direction and stop checks follows this list.
  • Symbols
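
Below is a minimal sketch of these two checks. The nine-zone split, the assumption that the tip zone contains the most white pixels, and the HSV ranges are illustrative guesses rather than the repository's exact parameters.

import cv2
import numpy as np

def white_cluster_mask(roi, k=3):
    """Quantize the ROI into k colors with cv2.kmeans and return a 0/1 map of the
    pixels belonging to the brightest (white) cluster."""
    data = roi.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
    brightest = int(np.argmax(centers.sum(axis=1)))
    return (labels.reshape(roi.shape[:2]) == brightest).astype(np.uint8)

def arrow_direction(roi):
    """Split the ROI into a 3x3 grid and compare the white content of the
    top/bottom/left/right zones to locate the arrow tip (assumed heuristic)."""
    white = white_cluster_mask(roi)
    h, w = white.shape
    zones = {
        "up":    white[:h // 3,           w // 3:2 * w // 3],
        "down":  white[2 * h // 3:,       w // 3:2 * w // 3],
        "left":  white[h // 3:2 * h // 3, :w // 3],
        "right": white[h // 3:2 * h // 3, 2 * w // 3:],
    }
    return max(zones, key=lambda name: int(zones[name].sum()))

def stop_sign_detected(roi):
    """Mask the red of the stop sign and confirm it with cv2.HoughCircles
    (the octagon is close enough to a circle to be found this way)."""
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))   # assumed red range
    circles = cv2.HoughCircles(red, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=100, param2=30, minRadius=10, maxRadius=0)
    return circles is not None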

Distance Measuring

  • Methodology

    The right rectangle is first isolated, and cv2.boundingRect is then used to determine its width-to-height ratio. This ratio ensures that the correct rectangle in the video feed is being recognized.



    Each time the begin symbol is recognized, the delivery robot travels 9.5 cm over 0.3 s; the more times the symbol is shown, the farther it travels.

    A stop measuring symbol is used to end the distance measuring process, and the measured distance is displayed in the output camera feed viewed by the user. cv2.inRange isolates the red circle of the traffic light symbol, and cv2.HoughCircles is then used to locate any red circles in the ROI.



    Once the stop symbol is recognized, the total travelled distance is displayed on the output camera feed for the user's view using cv2.putText, and the distance counter resets to prepare for the next distance measuring process (see the sketch after this list).
  • Symbols
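
A minimal sketch of this distance-measuring loop is shown below. The threshold values, the assumed width-to-height ratio of the begin rectangle, and the drive_forward motor helper are hypothetical placeholders; only the 9.5 cm per 0.3 s step comes from the description above.

import cv2
import numpy as np

CM_PER_STEP = 9.5          # distance travelled per recognized begin symbol (0.3 s of driving)
distance_cm = 0.0

def begin_symbol_seen(roi):
    """Isolate the rectangle and check its width-to-height ratio with cv2.boundingRect."""
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if 1.8 < w / float(h) < 2.2:                # assumed ratio of the begin rectangle
            return True
    return False

def traffic_light_seen(roi):
    """Isolate the red circle of the traffic light symbol and confirm it with cv2.HoughCircles."""
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))
    circles = cv2.HoughCircles(red, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                               param1=100, param2=25, minRadius=5, maxRadius=0)
    return circles is not None

def update_distance(roi, frame, drive_forward):
    """Advance 9.5 cm per begin symbol; on the traffic light symbol, display the total and reset."""
    global distance_cm
    if begin_symbol_seen(roi):
        drive_forward(0.3)                          # hypothetical motor helper: drive for 0.3 s
        distance_cm += CM_PER_STEP
    elif traffic_light_seen(roi):
        cv2.putText(frame, "Distance: %.1f cm" % distance_cm, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        distance_cm = 0.0                           # reset for the next measuring run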

Shape Counting

  • Methodology

    The shapes were converted to orange to prevent background interference during the shape counting process. From the OpenCV library, cv2.findContours returns the edges of any shape within the ROI, and cv2.approxPolyDP then joins connecting edges to return polygons. A minimum arc length is enforced so that micro contours do not form and affect the count. Each shape is identified by its number of edges, and its name is displayed on the output camera feed for the user's view (see the sketch after this list).

  • Symbol
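
The sketch below illustrates this contour-based counting. The orange HSV range, the minimum arc length, and the approxPolyDP epsilon are assumed values chosen for illustration.

import cv2
import numpy as np

SHAPE_NAMES = {3: "triangle", 4: "rectangle", 5: "pentagon", 6: "hexagon"}

def count_shapes(roi):
    """Mask the orange shapes, approximate their contours to polygons and label each
    shape by its number of edges. Labels are drawn directly onto the ROI, which is a
    view into the output frame when it was sliced from it."""
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    orange = cv2.inRange(hsv, np.array([10, 100, 100]), np.array([25, 255, 255]))  # assumed range
    contours = cv2.findContours(orange, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    count = 0
    for cnt in contours:
        arc = cv2.arcLength(cnt, True)
        if arc < 100:                                   # skip micro contours (assumed threshold)
            continue
        approx = cv2.approxPolyDP(cnt, 0.03 * arc, True)
        name = SHAPE_NAMES.get(len(approx), "circle")   # many edges are treated as a circle
        x, y = approx[0][0]
        cv2.putText(roi, name, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 165, 255), 2)
        count += 1
    return count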

Code Structure

  • direction.py: This code allows the robot car to follow directions by recognizing the four directional symbols and the stop symbol.
  • distance.py: This code allows the robot car to measure distances by recognizing the start measuring symbol and the traffic light symbol.
  • shapes.py: This code allows the robot car to count shapes by recognizing the shapes symbol.

Poster

License

This project is licensed under the MIT License.
