Is it possible to record kinect data to a file and play it back?
I'm calling this "data logging", but I don't want to confuse it with the status/error logging components of the library that I came across while searching for support.
If it doesn't exist, consider this a request for it! 👍
Activity
xlz commented on Oct 21, 2015
ROS can do this. I can't think of a use case of playback where there is no ROS.
Record and export to PCL would be a useful addition to examples.
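For those who do use ROS, recording is essentially a one-liner with rosbag, e.g. rosbag record -a -O kinect_session.bag to record all topics into a bag that can later be played back against the same nodes; the output name here is just an example, and topic selection depends on the driver in use.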
svddries commented on Oct 22, 2015
I also think this would be very valuable, even when not using ROS or PCL (I'm currently working on a Kinect project in which neither of them is used).
I guess the easiest way would be to store the raw RGB data packets obtained from the device (which are already nicely JPEG-compressed) and either store the depth values of the depth stream directly or compress them with PNG (which, in my experience, gives good compression results). A sketch of the PNG route follows below.
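A minimal sketch of the depth-as-PNG idea, assuming the depth values are already in a CV_32FC1 cv::Mat of millimeters and using OpenCV 3 constant names; the function name and path handling are illustrative:
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>
// Store depth with 1mm precision as a 16-bit grayscale PNG; anything up to ~65m fits.
void saveDepthAsPng(const cv::Mat& depthMm, const std::string& path) {
  cv::Mat depth16;
  depthMm.convertTo(depth16, CV_16UC1);
  std::vector<int> params = {cv::IMWRITE_PNG_COMPRESSION, 9};  // favor size over speed
  cv::imwrite(path, depth16, params);
}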
davudadiguezel commented on Oct 22, 2015
It is possible. You can use the NiViewer that comes with OpenNI2 to record; you'll get a .oni file that you can play back. But if you want to do skeleton detection with NiTE2, you'll have to work around some bugs in NiTE2, and I don't know if anybody has ever done that on Linux. If somebody has, please let me know. Recording and playback with OpenNI2, however, are no problem.
ahundt commented on Oct 22, 2015
I don't need skeleton detection; I just have to log the data stream, because not everyone in my group has USB 3 support, and it would also make it possible to test new code away from the physical rig using a log of the data. I'm not using ROS, and since I hope to stay cross-platform and minimize overhead, I don't think I should start using it either, unfortunately.
xlz commented on Oct 22, 2015
Raw JPEG images are 20MB/s. "Raw" 32-bit float depth images are 25MB/s. Do you think you need compression?
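For reference, the depth figure follows directly from the stream format: 512 x 424 pixels x 4 bytes (32-bit float) at the nominal 30 fps is roughly 25MB/s. The color figure is an average, since JPEG size varies with scene content.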
ahundt commented on Oct 30, 2015
Hmm, at that data rate it could be tough to log to anything other than an SSD without compression, so it would probably be a good option.
ttsesm commented on Dec 17, 2015
@ahundt did you manage to store raw data?
ahundt commented on Dec 27, 2015
@Theodoret Right now I'm just saving single frame point clouds out with this: https://github.com/ahundt/libfreenect2pclgrabber
A real implementation would likely still prove useful.
ttsesm commented on Dec 27, 2015
@ahundt thanks for the feedback, I see. Actually, at the moment I am saving individual frames as well. I am using OpenCV to convert libfreenect2::Frames into cv::Mat and then save them either as CV_16UC1 .png files (for the depth images; CV_8UC4 for the RGB and registered images) or as CV_32FC1 .exr files:
Mat registeredInv;
registeredInvMat.convertTo(registeredInv, CV_16UC1);
imwrite(filepath + "registeredInv/" + filename + ".png", registeredInv);
or
imwrite(filepath + "registeredInv/" + filename + ".exr", registeredInvMat);
However, although I prefer the 16-bit .png files due to their file size, I am facing the problem I describe in #509 with the bigdepth image and the white background when I try to restore it.
I also noticed in your source code from the link above, in the depth grabber function, that you are converting the libfreenect2::Frame to a CV_8UC4 cv::Mat; shouldn't that be a CV_32FC1 cv::Mat?
ahundt commented on Dec 30, 2015
You may be right; I believe the depth values are 32-bit floats. Since I'm only flipping the image and nothing else, and 4 bytes and a 32-bit float are the same size, that would explain why it works even though I'm using the wrong type... I should fix that, thanks :-)
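For reference, a minimal sketch of wrapping a libfreenect2 depth frame as CV_32FC1 and flipping it; it assumes the usual 512x424 layout with 4 bytes (one float, in millimeters) per pixel, and the function name is illustrative:
#include <libfreenect2/frame_listener.hpp>
#include <opencv2/opencv.hpp>
// Wrap the frame buffer without copying, then flip into an owned Mat.
cv::Mat depthFrameToMat(const libfreenect2::Frame* frame) {
  cv::Mat wrapped(static_cast<int>(frame->height), static_cast<int>(frame->width),
                  CV_32FC1, frame->data);
  cv::Mat flipped;
  cv::flip(wrapped, flipped, 1);  // mirror horizontally; adjust the flip code as needed
  return flipped;
}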
xlz commented on Jan 1, 2016
Tentative specification for such a recording tool.
JPEG re-compression. Raw JPEG images are 20MB/s. Maybe use TurboJPEG to scale them down to half size and then re-save them at lower quality.
Depth compression. "Raw" 32-bit float depth images are 25MB/s. Maybe use a 16-bit fixed-point representation instead (with 1mm precision, a range of 65m is more than enough).
Use a format compatible with PCL. The most efficient one is PCLZF, which can be converted directly to point clouds with no further registration. Storing full point clouds is very inefficient.
It may be implemented like this:
kinect2_record [-raw | -pclzf] [cl | gl] [serial]
-raw: unsynchronized, raw JPEG, 16-bit depth, 16-bit IR, 40MB/s; this format saves frames into a directory
-pclzf: PCLZF format with registered color (512x424) and undistorted depth
[cl | gl] [serial]: the same arguments as Protonect.cpp
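A minimal sketch of the 16-bit fixed-point depth packing described above, assuming the depth values arrive as 32-bit floats in millimeters (as libfreenect2 produces them); the rounding and clamping choices are illustrative:
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>
// Pack 32-bit float depth (millimeters) into 16-bit integers with 1mm precision.
// Invalid (<= 0) or out-of-range values become 0.
std::vector<std::uint16_t> packDepth(const float* depthMm, std::size_t count) {
  std::vector<std::uint16_t> packed(count);
  for (std::size_t i = 0; i < count; ++i) {
    const float d = depthMm[i];
    packed[i] = (d > 0.0f && d < 65535.0f) ? static_cast<std::uint16_t>(std::lround(d)) : 0;
  }
  return packed;
}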
ahundt commented on Jan 5, 2016
Will the overhead of opening and closing many files per second be a problem? What about an optional dependency on a library designed for this sort of thing, like flatbuffers or HDF5?
xlz commented on Jan 6, 2016
The JPEG images are baseline mode.
I don't want a libpng dependency, so I'll save the IR data as raw 16-bit.
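A minimal sketch of that raw 16-bit route, assuming the IR frame arrives as 32-bit floats (as libfreenect2 delivers it) and that a bare, headerless file layout is acceptable; the function name is illustrative:
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>
// Clamp float IR values into 16 bits and append one frame to an open file.
void appendIrRaw(std::FILE* out, const float* ir, std::size_t count) {
  std::vector<std::uint16_t> buf(count);
  for (std::size_t i = 0; i < count; ++i) {
    float v = ir[i];
    if (v < 0.0f) v = 0.0f;
    if (v > 65535.0f) v = 65535.0f;
    buf[i] = static_cast<std::uint16_t>(v);
  }
  std::fwrite(buf.data(), sizeof(std::uint16_t), count, out);
}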
floe commented on Jan 7, 2016
I'm not a JPEG expert, but it is apparently possible to convert baseline to progressive without re-encoding (at least jpegtran can do that). Then you can discard an adjustable percentage of coefficients while avoiding extra CPU load.
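For example, the lossless baseline-to-progressive rewrite itself is just jpegtran -progressive in.jpg > out.jpg; discarding coefficients would presumably be a separate truncation pass over the progressive scans and is not covered by that command.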
ahundt commented on Jan 11, 2016
@xlz beyond overhead, also consider the issue of forwards/backwards compatibility and versioning. That will either have to be maintained manually by the project or offloaded to a tool that has already dealt with the issue; versioning and serialization are well supported by flatbuffers, for example. At a minimum, if you go with a directory of files, I suggest writing out a "header" or "index" file that includes a version number and a way to quickly organize and look up the data.
xlz commented on Jan 11, 2016
@ahundt camera.txt and frame_$epochsecond.microsecond.xml should suffice for a metadata file.
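As a purely hypothetical illustration (the actual fields were never specified in this thread), such a per-frame XML entry might carry little more than a version number, timestamps, and the names of the image files it indexes:
<frame version="1" sequence="42" timestamp="1452556800.123456" color="color_1452556800.123456.jpg" depth="depth_1452556800.123456.raw"/>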
ahundt commented on Jan 11, 2016
Great, camera.txt and a frame XML would suffice, particularly if there is a version number at the beginning.
What about a "raw" format that just writes the raw incoming/outgoing Kinect USB byte stream to a file? Wireshark's USB pcap might be a way to achieve this! Then the data dump to a folder you describe would be perfect for analysis, and pcap files could be used for raw recording if libfreenect2 could consume them when Wireshark plays them back.
In addition to kinect2_record, were you imagining an API to interact programmatically with recording and playback functionality?
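On Linux, one way to obtain such a capture outside libfreenect2 would be the kernel's usbmon interface, e.g. sudo modprobe usbmon followed by recording the relevant bus with tshark -i usbmon1 -w kinect.pcapng; whether libfreenect2 could ever consume such a dump is a separate question.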
xlz commented on Jan 12, 2016
libfreenect2 is not low level enough to capture USB byte stream.
andreydung commented on Jul 25, 2017
@xlz is this resolved yet? Could you please tell me how to use it?
I went through the source code and could not find kinect2_record.