@@ -7,7 +7,7 @@ object in the case of computer vision area to do later some 3D rendering or in t
obtain an object pose in order to grasp it and do some manipulation. However, this is not a trivial
problem to solve due to the fact that the most common issue in image processing is the computational
cost of applying a lot of algorithms or mathematical operations for solving a problem which is basic
- and immediateley for humans.
+ and immediately for humans.

Goal
----
@@ -63,7 +63,7 @@ The tutorial consists of two main programs:

-#  **Model registration**

- This applicaton is exclusive to whom don't have a 3D textured model of the object to be detected.
+ This application is exclusive to whom don't have a 3D textured model of the object to be detected.
You can use this program to create your own textured 3D model. This program only works for planar
objects, then if you want to model an object with complex shape you should use a sophisticated
software to create it.
@@ -110,7 +110,7 @@ The tutorial consists of two main programs:
-c, --confidence (value:0.95)
RANSAC confidence
-e, --error (value:2.0)
- RANSAC reprojection errror
+ RANSAC reprojection error
-f, --fast (value:true)
use of robust fast match
-h, --help (value:true)
@@ -269,7 +269,7 @@ Here is explained in detail the code for the real time application:

Firstly, we have to set which matcher we want to use. In this case is used
@ref cv::FlannBasedMatcher matcher which in terms of computational cost is faster than the
- @ref cv::BFMatcher matcher as we increase the trained collectction of features. Then, for
+ @ref cv::BFMatcher matcher as we increase the trained collection of features. Then, for
FlannBased matcher the index created is *Multi-Probe LSH: Efficient Indexing for High-Dimensional
Similarity Search* due to *ORB* descriptors are binary.

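For reference, a minimal sketch of how such a matcher can be set up with an LSH index for binary descriptors follows; the LSH table number, key size, probe level and search checks used here are illustrative values, not taken from this commit.
@code{.cpp}
// Sketch: FLANN-based matcher backed by a Multi-Probe LSH index, suited to binary ORB descriptors.
// Requires <opencv2/features2d.hpp> and <opencv2/flann.hpp>; parameter values are assumed defaults.
cv::Ptr<cv::flann::IndexParams> indexParams = cv::makePtr<cv::flann::LshIndexParams>(6, 12, 1);  // 6 tables, 12-bit keys, 1 probe level
cv::Ptr<cv::flann::SearchParams> searchParams = cv::makePtr<cv::flann::SearchParams>(50);        // number of checks during search
cv::Ptr<cv::DescriptorMatcher> matcher = cv::makePtr<cv::FlannBasedMatcher>(indexParams, searchParams);
@endcode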
@@ -381,12 +381,12 @@ Here is explained in detail the code for the real time application:
as not, there are false correspondences or also called *outliers*. The [Random Sample
Consensus](http://en.wikipedia.org/wiki/RANSAC) or *Ransac* is a non-deterministic iterative
method which estimate parameters of a mathematical model from observed data producing an
- aproximate result as the number of iterations increase. After appyling *Ransac* all the *outliers*
+ approximate result as the number of iterations increases. After applying *Ransac* all the *outliers*
will be eliminated to then estimate the camera pose with a certain probability to obtain a good
solution.

For the camera pose estimation I have implemented a *class* **PnPProblem**. This *class* has 4
- atributes : a given calibration matrix, the rotation matrix, the translation matrix and the
+ attributes : a given calibration matrix, the rotation matrix, the translation matrix and the
rotation-translation matrix. The intrinsic calibration parameters of the camera which you are
using to estimate the pose are necessary. In order to obtain the parameters you can check
@ref tutorial_camera_calibration_square_chess and @ref tutorial_camera_calibration tutorials.
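As a rough sketch of that class layout (member names here are illustrative assumptions, not taken from this commit), the declaration could look along these lines:
@code{.cpp}
// Sketch of a PnPProblem-style class holding the four attributes described above; names are assumed.
class PnPProblem
{
public:
    explicit PnPProblem(const double param[]);  // custom constructor given the intrinsic camera parameters

private:
    cv::Mat _A_matrix;  // given calibration matrix
    cv::Mat _R_matrix;  // rotation matrix
    cv::Mat _t_matrix;  // translation matrix
    cv::Mat _P_matrix;  // rotation-translation matrix
};
@endcode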
@@ -406,7 +406,7 @@ Here is explained in detail the code for the real time application:

PnPProblem pnp_detection(params_WEBCAM); // instantiate PnPProblem class
@endcode
- The following code is how the *PnPProblem class* initialises its atributes :
+ The following code is how the *PnPProblem class* initialises its attributes :
@code{.cpp}
// Custom constructor given the intrinsic camera parameters
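// Illustrative continuation, not part of this hunk: a plausible constructor body that fills
// the four attributes from the intrinsic parameters; member names and parameter ordering
// (fu, fv, u0, v0) are assumptions for the sketch.
PnPProblem::PnPProblem(const double params[])
{
    _A_matrix = cv::Mat::zeros(3, 3, CV_64FC1);  // intrinsic calibration matrix
    _A_matrix.at<double>(0, 0) = params[0];      // fu: focal length in x
    _A_matrix.at<double>(1, 1) = params[1];      // fv: focal length in y
    _A_matrix.at<double>(0, 2) = params[2];      // u0: principal point x
    _A_matrix.at<double>(1, 2) = params[3];      // v0: principal point y
    _A_matrix.at<double>(2, 2) = 1;
    _R_matrix = cv::Mat::zeros(3, 3, CV_64FC1);  // rotation matrix
    _t_matrix = cv::Mat::zeros(3, 1, CV_64FC1);  // translation matrix
    _P_matrix = cv::Mat::zeros(3, 4, CV_64FC1);  // rotation-translation matrix
}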
@@ -431,13 +431,13 @@ Here is explained in detail the code for the real time application:
surfaces and sometimes the pose estimation seems to have a mirror effect. Therefore, in this
tutorial is used ITERATIVE method due to the object to be detected has planar surfaces.

- The OpenCV Ransac implementation wants you to provide three parameters: the maximum number of
+ The OpenCV RANSAC implementation wants you to provide three parameters: the maximum number of
iterations until stop the algorithm, the maximum allowed distance between the observed and
computed point projections to consider it an inlier and the confidence to obtain a good result.
- You can tune these paramaters in order to improve your algorithm performance. Increasing the
+ You can tune these parameters in order to improve your algorithm performance. Increasing the
number of iterations you will have a more accurate solution, but will take more time to find a
solution. Increasing the reprojection error will reduce the computation time, but your solution
- will be unaccurate. Decreasing the confidence your arlgorithm will be faster, but the obtained
+ will be inaccurate. Decreasing the confidence your algorithm will be faster, but the obtained
solution will be inaccurate.

The following parameters work for this application:
@@ -446,7 +446,7 @@ Here is explained in detail the code for the real time application:

int iterationsCount = 500; // number of Ransac iterations.
float reprojectionError = 2.0; // maximum allowed distance to consider it an inlier.
- float confidence = 0.95; // ransac successful confidence.
+ float confidence = 0.95; // RANSAC successful confidence.
@endcode
The following code corresponds to the *estimatePoseRANSAC()* function which belongs to the
*PnPProblem class*. This function estimates the rotation and translation matrix given a set of
@@ -458,7 +458,7 @@ Here is explained in detail the code for the real time application:
void PnPProblem::estimatePoseRANSAC( const std::vector<cv::Point3f> &list_points3d, // list with model 3D coordinates
const std::vector<cv::Point2f> &list_points2d, // list with scene 2D coordinates
int flags, cv::Mat &inliers, int iterationsCount, // PnP method; inliers container
- float reprojectionError, float confidence ) // Ransac parameters
+ float reprojectionError, float confidence ) // RANSAC parameters
{
cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64FC1); // vector of distortion coefficients
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1); // output rotation vector
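// Illustrative continuation, not part of this hunk: the estimation itself would rely on
// cv::solvePnPRansac, roughly as below; _A_matrix is assumed to be the calibration matrix member.
cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64FC1);   // output translation vector
bool useExtrinsicGuess = false;                  // no initial guess of rvec and tvec

cv::solvePnPRansac( list_points3d, list_points2d, _A_matrix, distCoeffs, rvec, tvec,
                    useExtrinsicGuess, iterationsCount, reprojectionError, confidence,
                    inliers, flags );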
@@ -479,8 +479,8 @@ Here is explained in detail the code for the real time application:
}
@endcode
In the following code are the 3rd and 4th steps of the main algorithm. The first, calling the
- above function and the second taking the output inliers vector from Ransac to get the 2D scene
- points for drawing purpose. As seen in the code we must be sure to apply Ransac if we have
+ above function and the second taking the output inliers vector from RANSAC to get the 2D scene
+ points for drawing purpose. As seen in the code we must be sure to apply RANSAC if we have
matches, in the other case, the function @ref cv::solvePnPRansac crashes due to any OpenCV *bug*.
@code{.cpp}
if(good_matches.size() > 0) // None matches, then RANSAC crashes
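{
    // Illustrative continuation, not shown in this hunk: estimate the pose with RANSAC (step 3)
    // and collect the inlier 2D points for drawing (step 4); container and parameter names
    // such as list_points3d_model_match, pnpMethod and inliers_idx are assumptions.
    pnp_detection.estimatePoseRANSAC( list_points3d_model_match, list_points2d_scene_match,
                                      pnpMethod, inliers_idx,
                                      iterationsCount, reprojectionError, confidence );

    for (int i = 0; i < inliers_idx.rows; ++i)
    {
        int n = inliers_idx.at<int>(i);                                  // index of an inlier match
        list_points2d_inliers.push_back(list_points2d_scene_match[n]);  // 2D inlier point to draw
    }
}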
@@ -558,7 +558,7 @@ Here is explained in detail the code for the real time application:

\f[X = (x,y,z,\dot x,\dot y,\dot z,\ddot x,\ddot y,\ddot z,\psi,\theta,\phi,\dot \psi,\dot \theta,\dot \phi,\ddot \psi,\ddot \theta,\ddot \phi)^T\f]

- Secondly, we have to define the number of measuremnts which will be 6: from \f$R\f$ and \f$t\f$ we can
+ Secondly, we have to define the number of measurements which will be 6: from \f$R\f$ and \f$t\f$ we can
extract \f$(x,y,z)\f$ and \f$(\psi,\theta,\phi)\f$. In addition, we have to define the number of control
actions to apply to the system which in this case will be *zero*. Finally, we have to define the
differential time between measurements which in this case is \f$1/T\f$, where *T* is the frame rate of
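Putting those choices together, a minimal sketch of the corresponding filter setup could look as follows; using cv::KalmanFilter directly and a 30 fps frame rate are assumptions for illustration, not values taken from this commit.
@code{.cpp}
// Sketch of the filter dimensions implied above; requires <opencv2/video/tracking.hpp>.
int nStates = 18;        // position, velocity and acceleration of (x,y,z) and (psi,theta,phi)
int nMeasurements = 6;   // measured (x,y,z) and (psi,theta,phi) extracted from R and t
int nInputs = 0;         // no control actions applied to the system
double dt = 1.0 / 30.0;  // differential time between measurements, 1/T with T an assumed 30 fps

cv::KalmanFilter KF(nStates, nMeasurements, nInputs, CV_64F);  // linear Kalman filter instance
@endcode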