
Update readme
felixchenfy committed Feb 5, 2019
1 parent db39757 commit 67b0cc1
Showing 4 changed files with 47 additions and 15 deletions.
23 changes: 11 additions & 12 deletions README.md
@@ -2,16 +2,15 @@
Monocular Visual Odometry
=======================================

**Content:** A simple **Monocular Visual Odometry** (VO) with initialization, tracking, local map, and optimization.
(Currently, optimization is done on a single frame and its points. I'm still debugging the bundle adjustment for multiple frames.)
**Content:** A **Monocular Visual Odometry** (VO) with initialization, tracking, local map, and bundle adjustment.

**Video demo**: http://feiyuchen.com/wp-content/uploads/vo_with_opti.mp4
On the left: **White line** is the estimated camera trajectory; its **red dots** are keyframes. **Green line** is the ground truth. **Red map points** are triangulated from the last keyframe.
On the right: **Green** are keypoints. **Red** are inlier matches with map points.

![](https://github.com/felixchenfy/Monocular-Visual-Odometry-Data/raw/master/result/vo_with_opti.gif)

**Project purpose:** This is a practice after I read the [Slambook](https://github.com/gaoxiang12/slambook). This will also be my final project of the course EESC-432 Advanced Computer Vision, so this repo will be kept updated and completed by **March 15th**.

**Data files:** Please download from here: https://github.com/felixchenfy/Monocular-Visual-Odometry-Data

@@ -23,7 +22,7 @@ Monocular Visual Odometry
- [1.1. Initialization](#11-initialization)
- [1.2. Tracking](#12-tracking)
- [1.4. Local Map](#14-local-map)
- [1.3. Optimization](#13-optimization)
- [1.3. Bundle Adjustment](#13-bundle-adjustment)
- [1.5. Other details](#15-other-details)
- [2. File Structure](#2-file-structure)
- [2.1. Folders](#21-folders)
@@ -61,7 +60,7 @@ Keep on estimating the next camera pose. First, find map points that are in the
## 1.4. Local Map

**Insert keyframe:** If the relative pose between the current frame and the previous keyframe is large enough, i.e., the translation or rotation exceeds the threshold, insert the current frame as a keyframe.
Do feature matching between the current and previous keyframes. Get inliers by the epipolar constraint. If an inlier keypoint hasn't been triangulated before, triangulate it and push it to the local map.
Do feature matching between the current and previous keyframes. Get inliers by the epipolar constraint. If an inlier keypoint hasn't been triangulated before, triangulate it and push it to the local map. All map points have a unique ID.
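
A minimal sketch of the keyframe test and the triangulation step, using OpenCV. The helper names (`isNewKeyframe`, `triangulateNewPoints`) and threshold parameters are hypothetical stand-ins, roughly corresponding to settings like `MIN_ROTATED_ANGLE` in `config.yaml`; the repo's actual code may differ.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Keyframe test: translation or rotation relative to the previous keyframe exceeds a threshold.
bool isNewKeyframe(const cv::Mat& R_rel, const cv::Mat& t_rel,
                   double min_translation, double min_rotated_angle) {
    cv::Mat r_vec;
    cv::Rodrigues(R_rel, r_vec);  // 3x3 rotation matrix -> 3x1 axis-angle vector
    return cv::norm(t_rel) > min_translation || cv::norm(r_vec) > min_rotated_angle;
}

// Triangulate matched 2D points that have no 3D point yet.
// P_prev / P_curr are the 3x4 projection matrices K * [R | t] of the two keyframes,
// assumed CV_32F here so the homogeneous output can be read with at<float>().
std::vector<cv::Point3f> triangulateNewPoints(const cv::Mat& P_prev, const cv::Mat& P_curr,
                                              const std::vector<cv::Point2f>& pts_prev,
                                              const std::vector<cv::Point2f>& pts_curr) {
    cv::Mat pts_4d;  // 4xN homogeneous coordinates
    cv::triangulatePoints(P_prev, P_curr, pts_prev, pts_curr, pts_4d);
    std::vector<cv::Point3f> pts_3d;
    for (int i = 0; i < pts_4d.cols; ++i) {
        cv::Mat x = pts_4d.col(i);
        x /= x.at<float>(3, 0);  // homogeneous -> Euclidean
        pts_3d.emplace_back(x.at<float>(0, 0), x.at<float>(1, 0), x.at<float>(2, 0));
    }
    return pts_3d;  // each point gets a new unique map-point ID when pushed into the local map
}
```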

**Clean up local map:** Remove map points that are (1) not in the current view, (2) whose viewing angle is larger than the threshold, or (3) rarely matched as inlier points. (See Slambook Chapter 9.4.)
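
Roughly, the clean-up pass could look like the sketch below. The `MapPoint` fields and thresholds are hypothetical stand-ins for whatever the repo's map-point class actually stores.

```cpp
#include <map>

struct MapPoint {
    double view_angle;     // angle between the current viewing ray and the point's mean viewing direction
    int matched_times;     // how often it was matched against a keypoint
    int inlier_times;      // how often that match survived as a PnP inlier
    bool in_current_view;
};

void cleanUpLocalMap(std::map<int, MapPoint>& local_map,
                     double max_view_angle, double min_inlier_ratio) {
    for (auto it = local_map.begin(); it != local_map.end(); ) {
        const MapPoint& p = it->second;
        bool rarely_inlier = p.matched_times > 0 &&
            static_cast<double>(p.inlier_times) / p.matched_times < min_inlier_ratio;
        if (!p.in_current_view || p.view_angle > max_view_angle || rarely_inlier)
            it = local_map.erase(it);   // criteria (1), (2), (3) above
        else
            ++it;
    }
}
```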

@@ -71,15 +70,15 @@ Graphs are built at two stages of the algorithm:
2) During triangulation, I also update the 2D-3D correspondence between current keypoints and the triangulated map points, either by a direct link or by going through previous keypoints that have already been triangulated (see the sketch below).
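
One way to keep that 2D-3D bookkeeping is a per-frame map from keypoint index to map-point ID; this is only an illustrative sketch, not the repo's actual data structure.

```cpp
#include <unordered_map>

// keypoint index -> map-point ID for one frame (the "unique ID" mentioned above)
using KptToMapPoint = std::unordered_map<int, int>;

// Link keypoint `i` of the current frame to a 3D point: either the point already attached
// to its match `j` in the previous keyframe, or a freshly triangulated point `new_mp_id`.
void linkKeypoint(int i, int j, int new_mp_id,
                  const KptToMapPoint& prev, KptToMapPoint& curr) {
    auto it = prev.find(j);
    if (it != prev.end())
        curr[i] = it->second;   // go through the previously triangulated keypoint
    else
        curr[i] = new_mp_id;    // direct link to the newly triangulated map point
}
```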


## 1.3. Optimization
## 1.3. Bundle Adjustment

Apply optimization to this single frame: using the inlier 3D-2D correspondences from PnP, we can compute the sum of the reprojection errors of all point pairs to form the cost function. By computing the derivative with respect to (1) the points' 3D positions and (2) the camera pose, we can solve the optimization problem using the Gauss-Newton method and its variants. These are done by **g2o** and its built-in data types `VertexSBAPointXYZ`, `VertexSE3Expmap`, and `EdgeProjectXYZ2UV`. See Slambook Chapter 4 and Chapter 7.8.2 for more details.
Since I've built the graph in the previous step, I know what the 3D-2D point correspondences are in all frames.

Currently, I only feed the camera pose and keypoints of the current frame into the optimizer. Though I leave the points unfixed during optimization, I don't use the result to update the points' positions (simply because the result looks better that way).

TODO: Bundle Adjustment. I've coded this part, but the program throws an error. I need some time to fix it.
Apply optimization to the previous N frames, where the cost function is the sum of the reprojection errors of all 3D-2D point pairs. By computing the derivative with respect to (1) the points' 3D positions and (2) the camera poses, we can solve the optimization problem using the Gauss-Newton method and its variants. These are done by **g2o** and its built-in data types `VertexSBAPointXYZ`, `VertexSE3Expmap`, and `EdgeProjectXYZ2UV`. See Slambook Chapter 4 and Chapter 7.8.2 for more details.
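
To show how these g2o types fit together, here is a minimal single-frame sketch in the Slambook style. The function name, the `fix_points` flag, and the dense linear solver are my assumptions rather than the repo's actual code; also note that recent g2o releases require `std::unique_ptr` when constructing the solvers, while the raw-pointer style below matches the older (Slambook-era) API.

```cpp
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/solvers/dense/linear_solver_dense.h>
#include <g2o/types/sba/types_sba.h>
#include <g2o/types/sba/types_six_dof_expmap.h>
#include <vector>

// Optimize one camera pose (and optionally the map points) from its 3D-2D inlier pairs.
// fx, cx, cy: pinhole intrinsics (g2o's CameraParameters uses a single focal length, so fx ~= fy is assumed).
// fix_points mirrors the FIX_MAP_PTS switch in config.yaml.
void bundleAdjustOneFrame(Eigen::Matrix3d& R, Eigen::Vector3d& t,
                          std::vector<Eigen::Vector3d>& pts3d,
                          const std::vector<Eigen::Vector2d>& pts2d,
                          double fx, double cx, double cy, bool fix_points) {
    typedef g2o::BlockSolver_6_3 Block;  // 6-DoF pose, 3-DoF landmarks

    // Solver setup (older g2o API; newer versions want std::unique_ptr here).
    Block::LinearSolverType* linear_solver = new g2o::LinearSolverDense<Block::PoseMatrixType>();
    Block* block_solver = new Block(linear_solver);
    g2o::SparseOptimizer optimizer;
    optimizer.setAlgorithm(new g2o::OptimizationAlgorithmLevenberg(block_solver));

    // Camera pose vertex.
    g2o::VertexSE3Expmap* v_pose = new g2o::VertexSE3Expmap();
    v_pose->setId(0);
    v_pose->setEstimate(g2o::SE3Quat(R, t));
    optimizer.addVertex(v_pose);

    // Shared camera intrinsics for all projection edges.
    g2o::CameraParameters* cam = new g2o::CameraParameters(fx, Eigen::Vector2d(cx, cy), 0);
    cam->setId(0);
    optimizer.addParameter(cam);

    // One landmark vertex and one reprojection edge per 3D-2D pair.
    for (size_t i = 0; i < pts3d.size(); ++i) {
        g2o::VertexSBAPointXYZ* v_point = new g2o::VertexSBAPointXYZ();
        v_point->setId(static_cast<int>(i) + 1);
        v_point->setEstimate(pts3d[i]);
        v_point->setMarginalized(true);
        v_point->setFixed(fix_points);   // fix map points to optimize the pose only
        optimizer.addVertex(v_point);

        g2o::EdgeProjectXYZ2UV* edge = new g2o::EdgeProjectXYZ2UV();
        edge->setVertex(0, v_point);
        edge->setVertex(1, v_pose);
        edge->setMeasurement(pts2d[i]);                      // observed pixel
        edge->setParameterId(0, 0);
        edge->setInformation(Eigen::Matrix2d::Identity());   // cf. "information_matrix" in config.yaml
        optimizer.addEdge(edge);
    }

    optimizer.initializeOptimization();
    optimizer.optimize(20);

    // Write back the optimized pose (and points, if they were free).
    g2o::SE3Quat pose = v_pose->estimate();
    R = pose.rotation().toRotationMatrix();
    t = pose.translation();
    if (!fix_points)
        for (size_t i = 0; i < pts3d.size(); ++i)
            pts3d[i] = static_cast<g2o::VertexSBAPointXYZ*>(
                optimizer.vertex(static_cast<int>(i) + 1))->estimate();
}
```

For multiple frames, the same pattern repeats with one `VertexSE3Expmap` per frame and edges connecting each map point to every frame that observes it.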

TODO: Currently, I only optimize the camera poses. There is a bug causing huge error when I simultaneously optimize the map points, which might be due to a wrong weighting between map points and camera poses. Debug later.

Besides, my results show that the best result is achieved when I optimize only a single frame (instead of multiple frames), which is kind of weird. (I don't think it's a bug in the program, since the performance of optimizing over 5 or 15 frames is about the same.)

## 1.5. Other details

@@ -183,9 +182,9 @@ I also put two video links here, which I recorded on my computer:
[1. VO video, with optimization on single frame](
https://github.com/felixchenfy/Monocular-Visual-Odometry-Data/blob/master/result/vo_with_opti.mp4)
[2. VO video, no optimization](https://github.com/felixchenfy/Monocular-Visual-Odometry-Data/blob/master/result/vo_no_opti.mp4)
With optimization on a single frame, the result is slightly better, as you can see that the shape of the estimated trajectory is closer to the ground-truth trajectory.
(You may mentally scale the two camera trajectories to compare their shapes.)
With bundle adjustment (camera pose only, on a single frame), the result improves, as you can see that the shape of the estimated trajectory is much closer to the ground-truth trajectory.

TODO: Debug BA and figure out why optimizing the map points at the same time fails.

# 6. Reference

2 changes: 1 addition & 1 deletion config/config.yaml
@@ -67,7 +67,7 @@ MIN_ROTATED_ANGLE: 0.08

# ------------------- Optimization -------------------
USE_BA: "true" # Use bundle adjustment for the camera pose and points in a single frame. "true" or "false".
MAX_NUM_FRAMES_FOR_BA: 15 # <= 20. I set the "BUFF_SIZE" in "vo.h" as 20, so only previous 20 frames are stored.
MAX_NUM_FRAMES_FOR_BA: 1 # <= 20. I set the "BUFF_SIZE" in "vo.h" as 20, so only previous 20 frames are stored.
information_matrix: "1.0 0.0 0.0 1.0"
FIX_MAP_PTS: "true" # TO DEBUG: If I set this to true and optimize both the camera pose and map points, there is a huge error.
UPDATE_MAP_PTS: "false"
33 changes: 33 additions & 0 deletions doc/buglist.md
@@ -108,3 +108,36 @@ if (is_in) cout << p1->content; or delete this sentence

Then, a segmentation fault occurs at some random location.
It seems that `p1 = mappoints[mappoint_id]` has corrupted the original data.

# ==========================
# C++

### vector

* speed: almost the same as an array
  https://stackoverflow.com/questions/8848575/fastest-way-to-reset-every-value-of-stdvectorint-to-0

* set all to zero
  https://stackoverflow.com/questions/8848575/fastest-way-to-reset-every-value-of-stdvectorint-to-0
  `std::fill(v.begin(), v.end(), 0);` // 13.4s, but this is the recommended style
  `memset(&v[0], 0, v.size() * sizeof v[0]);` // 0.125s

* resize and clear

* copy
  `vector<int> B(3, 0); vector<int> A = B;` // This is a deep copy. If B is changed later, A won't change.
  To avoid the copy, use swap (a small sketch follows below).
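
A minimal example of the swap alternative, using only the standard library:

```cpp
#include <cassert>
#include <vector>

int main() {
    std::vector<int> B(3, 0);
    std::vector<int> A;
    A.swap(B);  // O(1): A takes over B's buffer and B becomes empty; no element-wise copy
    assert(A.size() == 3 && B.empty());
    return 0;
}
```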

### deque
A deque is basically the same as a vector, but it supports constant-time insertion and deletion at both ends, while a vector only supports one end.
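
A tiny usage example:

```cpp
#include <deque>
#include <iostream>

int main() {
    std::deque<int> d = {1, 2, 3};
    d.push_front(0);  // O(1) at the front, which std::vector does not offer
    d.push_back(4);   // O(1) at the back, same as std::vector
    d.pop_front();
    std::cout << d.front() << " " << d.back() << std::endl;  // prints "1 4"
    return 0;
}
```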

## Template
For a function whose template parameter appears only in the return type instead of the input types, there can be some trouble. I cannot manually do a switch inside the function, such as `if (T == bool) { return bool_value; } else if (T == string) { return string_value; }`. It gives an error that bool cannot be converted to string, and vice versa.
I have figured out a good solution (one possible approach is sketched below).
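
One possible approach (not necessarily the one used in this repo; all names below are made up): give each return type its own specialization, or in C++17 use `if constexpr` so the non-matching branch is discarded at compile time.

```cpp
#include <string>
#include <type_traits>

// Option 1: full specializations, one body per return type.
template <typename T> T getValue();
template <> bool getValue<bool>() { return true; }
template <> std::string getValue<std::string>() { return "hello"; }

// Option 2 (C++17): if constexpr removes the non-matching branch at compile time,
// so the "bool cannot be converted to string" error never appears.
template <typename T>
T getValue2() {
    if constexpr (std::is_same_v<T, bool>)
        return true;
    else
        return std::string("hello");
}

// Usage: getValue<bool>(), getValue<std::string>(), getValue2<bool>(), ...
```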

# Rotation vector of cv::Rodrigues(input, output)
The axis of r_vec is (x, y, z); its norm is the rotation angle (an axis-angle representation, not Euler angles).
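
A small usage sketch illustrating the convention:

```cpp
#include <iostream>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

int main() {
    // Rotation vector: direction = rotation axis (x, y, z), norm = rotation angle in radians.
    cv::Mat r_vec = (cv::Mat_<double>(3, 1) << 0.0, 0.0, CV_PI / 2.0);  // 90 deg about z
    cv::Mat R;
    cv::Rodrigues(r_vec, R);         // 3x1 rotation vector -> 3x3 rotation matrix
    cv::Rodrigues(R, r_vec);         // and back again
    double angle = cv::norm(r_vec);  // recovered angle, approximately pi/2
    std::cout << "angle = " << angle << std::endl;
    return 0;
}
```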

4 changes: 2 additions & 2 deletions src/vo.cpp
@@ -217,9 +217,9 @@ void VisualOdometry::callBundleAdjustment()
my_basics::Config::get<string>("information_matrix"));
static const bool FIX_MAP_PTS = my_basics::Config::getBool("FIX_MAP_PTS");
static const bool UPDATE_MAP_PTS =my_basics::Config::getBool("UPDATE_MAP_PTS");
cout << FIX_MAP_PTS << UPDATE_MAP_PTS << endl;
// cout << FIX_MAP_PTS << UPDATE_MAP_PTS << endl;

if (USE_BA != 1)
if (USE_BA != true)
{
printf("\nNot using bundle adjustment ... \n");
return;
