There are a few different functions that would be nice to have.
- color_to_depth
- Various transformation matrices
- IR image x,y to Color x,y and back
- IR cloud xyz to color x,y and back
- Getting the intrinsic matrix
- I know there is Freenect2Device::ColorCameraParams, but for some reason once I put it into a matrix and called `cv::decomposeHomographyMat(H, K, rotations, translations, normals);` from OpenCV, none of the solutions have been projecting correctly for me (a sketch for assembling K from those params follows this list).
- There is that interesting predefined constant... does that matter in that case?
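A minimal sketch, assuming only the documented fx/fy/cx/cy fields of Freenect2Device::ColorCameraParams, of how a pinhole intrinsic matrix K could be built. The color camera's distortion terms are not exposed, so this K describes an ideal undistorted camera, which may be part of why the homography decomposition does not project correctly:

```cpp
// Sketch, not a definitive recipe: build a 3x3 pinhole intrinsic matrix
// from the focal lengths and principal point that ColorCameraParams does
// expose. The color camera's distortion terms are not available here.
#include <libfreenect2/libfreenect2.hpp>
#include <opencv2/core.hpp>

cv::Matx33f colorIntrinsics(const libfreenect2::Freenect2Device::ColorCameraParams &p)
{
  return cv::Matx33f(p.fx, 0.0f, p.cx,
                     0.0f, p.fy, p.cy,
                     0.0f, 0.0f, 1.0f);
}
```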
xlz commented on Dec 17, 2015
Previous discussion: #223 #41
There are some problems with extracting these matrices losslessly from the built-in parameters.
ahundt commented on Dec 27, 2015
Here are some example motivating use cases that can help determine what is actually needed:
@xlz those previous discussions are definitely relevant, and alternative ways to achieve this aside from the standard transformation matrices are worth considering, particularly if they are more accurate.
Haven't thought about it yet, but it is probably also worth determining whether color_to_depth can be inverted in a reasonable manner.
floe commented on Dec 27, 2015
My knowledge of multiple-view geometry is a bit rusty, but IIRC, for the special case of the Kinect (one color camera & one depth camera), the exact transformation from one camera space to the other always depends on the live depth values and therefore can't be expressed as a "classic" transformation matrix, right?
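To make that concrete, here is a minimal sketch (not libfreenect2's actual code path) of the per-pixel mapping. `R` and `t` stand in for a hypothetical externally calibrated depth-to-color extrinsic, which, as discussed below, libfreenect2 does not expose directly; the division by the transformed z is where the live depth value enters:

```cpp
// Sketch of mapping one depth pixel into the color image. Kdepth/Kcolor
// are pinhole intrinsic matrices; R and t are an assumed depth->color
// extrinsic obtained by external calibration.
#include <opencv2/core.hpp>

cv::Point2f depthPixelToColorPixel(float u, float v, float z,
                                   const cv::Matx33f &Kdepth,
                                   const cv::Matx33f &Kcolor,
                                   const cv::Matx33f &R,
                                   const cv::Vec3f &t)
{
  // Back-project the depth pixel to a 3D point in the depth camera frame.
  cv::Vec3f pd((u - Kdepth(0,2)) * z / Kdepth(0,0),
               (v - Kdepth(1,2)) * z / Kdepth(1,1),
               z);
  // Rigid transform into the color camera frame.
  cv::Vec3f pc = R * pd + t;
  // Perspective projection: this division by pc[2] is why the mapping
  // between the two image spaces depends on the live depth value.
  return cv::Point2f(Kcolor(0,0) * pc[0] / pc[2] + Kcolor(0,2),
                     Kcolor(1,1) * pc[1] / pc[2] + Kcolor(1,2));
}
```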
Did you already have a look at the `bigdepth` parameter to `Registration::apply`? (https://openkinect.github.io/libfreenect2/classlibfreenect2_1_1Registration.html#a8be3a548ff440e30f5e89a4b053e04fa) This may be useful to address at least some of your issues.
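For reference, a usage sketch based on the documented signature of `Registration::apply`: `bigdepth` is a 1920x1082 float frame that yields a depth value for every color pixel (with one padding row at the top and bottom), which covers the color-to-depth direction:

```cpp
// Usage sketch of Registration::apply with the optional bigdepth output,
// following the documented API. Frame sizes are the standard Kinect v2
// resolutions (512x424 depth, 1920x1080 color plus 2 padding rows).
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/registration.h>
#include <libfreenect2/frame_listener.hpp>

void mapWithBigdepth(libfreenect2::Freenect2Device *dev,
                     const libfreenect2::Frame *rgb,
                     const libfreenect2::Frame *depth)
{
  libfreenect2::Registration reg(dev->getIrCameraParams(),
                                 dev->getColorCameraParams());
  libfreenect2::Frame undistorted(512, 424, 4);
  libfreenect2::Frame registered(512, 424, 4);
  libfreenect2::Frame bigdepth(1920, 1082, 4); // float depth per color pixel

  reg.apply(rgb, depth, &undistorted, &registered,
            /*enable_filter=*/true, &bigdepth);
  // bigdepth.data now holds, for each color pixel, a depth value in mm
  // (inf where no depth pixel maps there).
}
```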
ahundt commented on Dec 27, 2015
@floe Yeah, it seems there is a bit more to it, because the depth emitter and receiver both need to be accounted for in addition to the RGB sensor.
It looks like there are some potentially useful resources (papers, etc.) in the following Google results, and these results.
Some of those are for the original Kinect, so of course structured-light vs. ToF differences apply where appropriate.
floe commented on Dec 27, 2015
You're right, my previous post was slightly incorrect. From one camera space to the other, you can indeed use undistortion + regular transformation matrix. However, to actually map one pixel to the corresponding one in the other image (i.e. from one image space to the other), you need the live depth values.
ahundt commented on Dec 27, 2015
Sorry, my edits made this confusing. For future readers, I originally made a note of the similarity between camera field of views in stereo vs depth + rgb cameras. I then realized there is more complexity to this than the stereo case and edited my post.
xlz commented on Dec 27, 2015
Let me describe the current status:
- The depth-to-color transformation is implemented (`kinect2_depth_to_color`), but it is not in a matrix multiplication form.
- Some intrinsic parameters (distortion terms) of the color camera are not known. (Focal lengths and principal points are known.)
- I gave it a go in #223 (comment) trying to figure out the extrinsic matrix from `kinect2_depth_to_color` but did not succeed. You can always look at it again.
- Using a matrix for the extrinsic transformation requires that you either supply the matrix by external means, or figure out the extrinsic matrix from `kinect2_depth_to_color`.
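If anyone wants to take another crack at it, here is a speculative sketch (my assumption, not an existing libfreenect2 facility) of recovering an approximate extrinsic numerically: generate depth-pixel-to-color-pixel correspondences through the single-point `Registration::apply` overload, back-project both sides, and fit a 3x4 transform with OpenCV. Note the back-projection on the color side reuses the depth-camera z, which ignores the (small) z component of the true extrinsic; that approximation is one reason a lossless extraction is hard:

```cpp
// Speculative sketch: fit an approximate depth->color extrinsic from the
// existing registration mapping. Kdepth/Kcolor are pinhole intrinsics
// built from the device parameters (see colorIntrinsics above).
#include <vector>
#include <libfreenect2/registration.h>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

cv::Mat fitExtrinsic(const libfreenect2::Registration &reg,
                     const cv::Matx33f &Kdepth, const cv::Matx33f &Kcolor)
{
  std::vector<cv::Point3f> src, dst;
  for (int dy = 0; dy < 424; dy += 32)
    for (int dx = 0; dx < 512; dx += 32)
      for (float z = 500.0f; z <= 4500.0f; z += 1000.0f) // depth in mm
      {
        float cx, cy;
        reg.apply(dx, dy, z, cx, cy); // single-point depth->color mapping
        // Back-project the depth pixel into the depth camera frame.
        src.emplace_back((dx - Kdepth(0,2)) * z / Kdepth(0,0),
                         (dy - Kdepth(1,2)) * z / Kdepth(1,1), z);
        // Back-project the color pixel, approximating its depth as z;
        // this ignores the z component of the true extrinsic.
        dst.emplace_back((cx - Kcolor(0,2)) * z / Kcolor(0,0),
                         (cy - Kcolor(1,2)) * z / Kcolor(1,1), z);
      }
  cv::Mat affine, inliers;
  cv::estimateAffine3D(src, dst, affine, inliers); // 3x4 [R|t] estimate
  return affine;
}
```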
floe commented on Dec 27, 2015
IIRC `kinect2_depth_to_color` also subsumes the intrinsic parameters of the color camera?
xlz commented on Dec 27, 2015
I think yes, in #223 there is a plot showing that there is some (tiny) distortion produced by `kinect2_depth_to_color`.
xlz commented on Dec 27, 2015
By the way, I found one old proposal: