# Depthai ROS Repository
Hi and welcome to the main depthai-ros repository! Here you can find ROS related code for OAK cameras from Luxonis. Don't have one? You can get them [here!](https://shop.luxonis.com/)

Main features:

* You can use the cameras as classic RGBD sensors for your 3D vision needs.
* You can also load Neural Networks and get the inference results straight from the camera!

You can develop your ROS applications in the following ways:

* Use the classes provided in `depthai_bridge` to construct your own driver (see the `stereo_inertial_node` example for how to do that)
* Use the `depthai_ros_driver` class (currently available on ROS2 Humble) to get the default experience (see details below)

Supported ROS versions:
- Noetic
- Galactic
- Humble

For usage check out the respective git branches.

### Install from ROS binaries

Add USB rules to your system:
```
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
sudo udevadm control --reload-rules && sudo udevadm trigger
```

Install depthai-ros (available for Noetic, Foxy, Galactic and Humble):

`sudo apt install ros-<distro>-depthai-ros`

## Docker

You can additionally build and run docker images on your local machine. To do that, add the USB rules as in the step above, clone the repository and run the docker build inside it (note that it matters which branch you are on).
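Purely as an illustrative, hypothetical sketch of what that can look like (the image tag, flags and USB passthrough below are assumptions, not the command documented in the repository):

```
# Build an image from the current branch and run it with USB access for the OAK camera.
docker build -t depthai-ros .
docker run -it --rm --privileged -v /dev/bus/usb:/dev/bus/usb depthai-ros
```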
Currently, the recommended way to launch cameras is to use the executables from the `depthai_ros_driver` package.

This runs your camera as a ROS2 Component and gives you the ability to customize it using ROS parameters.
Parameters that begin with `r_` can be freely modified during runtime, for example with rqt.
Parameters that begin with `i_` are set while the camera is initializing; to change them you have to call the `stop` and `start` services. This can be used to hot-swap NNs during runtime, change resolutions, etc. Below you can see some examples:
#### Setting RGB parameters
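As a rough command-line sketch (the node name `/oak` and the parameter name used below are assumptions; check `camera.yaml` or `ros2 param list` for the names your camera actually exposes):

```
# List RGB-related parameters on the running camera node (node name assumed):
ros2 param list /oak | grep rgb
# Set a runtime (r_) parameter; this particular name is hypothetical - use one from the list above:
ros2 param set /oak rgb.r_exposure 10000
```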
#### Setting Stereo parameters
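Stereo parameters follow the same pattern; again, the node name and the parameter name below are assumptions:

```
# Inspect stereo parameters (node name assumed):
ros2 param list /oak | grep stereo
# Read one back; this particular name is hypothetical:
ros2 param get /oak stereo.i_subpixel
```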
#### Stopping/starting camera for power saving/reconfiguration
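A minimal sketch of the stop/start cycle described above, assuming the services live under a node named `/oak` and use the `std_srvs/srv/Trigger` type (verify with `ros2 service list -t`):

```
# Stop the camera - the pipeline is removed from the device and topics disappear.
ros2 service call /oak/stop std_srvs/srv/Trigger
# Change i_ parameters here (resolution, NN config, ...), then bring the camera back up:
ros2 service call /oak/start std_srvs/srv/Trigger
```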

Stopping the camera can also be used for power saving, as the pipeline is removed from the device. Topics are also removed when the camera is stopped.

As for the parameters themselves, there are a few crucial ones that decide how the camera behaves:
* `camera.i_pipeline_type` can be either `RGB` or `RGBD`. This tells the camera whether it should load stereo components. Defaults to `RGBD`.
* `camera.i_nn_type` can be either `none`, `rgb` or `spatial`. This decides whether the NN that we load should also take depth information (and, for example, provide detections in 3D format). Defaults to `spatial`.
* `camera.i_mx_id`/`camera.i_ip` are for connecting to a specific camera. If not set, the driver automatically connects to the next available device.
* `nn.i_nn_config_path` is the path to a JSON file that describes what type of NN to load and what parameters to use. Currently we provide options to load MobileNet, Yolo and Segmentation (not in spatial) models. To see their example configs, navigate to `depthai_ros_driver/config/nn`. Defaults to `mobilenet.json` from `depthai_ros_driver`.

To use the provided example NNs, you can set the path to one of the following (a usage sketch follows the list):
* `depthai_ros_driver/segmentation`
* `depthai_ros_driver/mobilenet`
* `depthai_ros_driver/yolo`
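For example, a hedged sketch of hot-swapping to the Yolo example config at runtime using the stop/start flow from above (node name `/oak` assumed; your driver version may expect a slightly different workflow):

```
ros2 service call /oak/stop std_srvs/srv/Trigger
ros2 param set /oak nn.i_nn_config_path depthai_ros_driver/yolo
ros2 service call /oak/start std_srvs/srv/Trigger
```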
All available camera-specific parameters and their default values can be seen in `depthai_ros_driver/config/camera.yaml`.
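You can also dump the parameters of a running camera and compare them against that file (node name `/oak` assumed):

```
ros2 param dump /oak
```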

Currently, we provide a few examples (a short usage sketch follows the list):

* `camera.launch.py` launches the camera in RGBD mode and the NN in spatial (MobileNet) mode.
* `rgbd_pcl.launch.py` launches the camera in a basic RGBD configuration and doesn't load any NNs. It also loads ROS depth processing nodes for the RGBD pointcloud.
* `example_multicam.launch.py` launches several cameras at once, each in a different container. Edit the `multicam_example.yaml` config file in the `config` directory to change parameters.
* `example_segmentation.launch.py` launches the camera in RGBD + semantic segmentation mode (pipeline type=RGBD, nn_type=rgb).
* `pointcloud.launch.py` - similar to `rgbd_pcl.launch.py`, but doesn't use the RGB component for the pointcloud.
* `example_marker_publish.launch.py` launches `camera.launch.py` plus a small Python node that publishes detected objects as markers/TFs.
* `rtabmap.launch.py` launches the camera and RTAB-Map RGBD SLAM (you need to install it first - `sudo apt install ros-$ROS_DISTRO-rtabmap-ros`). You might need to set manual focus via parameters here.
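A minimal usage sketch, assuming `depthai_ros_driver` is installed and your workspace is sourced; check `--show-args` before overriding anything, since the available launch arguments are not listed here:

```
# Launch the default RGBD + spatial NN pipeline:
ros2 launch depthai_ros_driver camera.launch.py
# See which arguments a launch file accepts:
ros2 launch depthai_ros_driver rgbd_pcl.launch.py --show-args
```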