hdmi2csi

The HDMI2CSI module

WARNING: This repository is deprecated! All new development is done on 28.1, based on Kernel version 4.4 (see new Development Wiki).

The High-Performance Multimedia Group has developed a High Definition Multimedia Interface (HDMI®) to MIPI® Camera Serial Interface Type 2 (CSI-2) converter module (HDMI2CSI) as a plug-in to the NVIDIA Jetson TX1 development kit.

The HDMI2CSI module supports 4K video resolution for next-generation embedded Ultra High Definition video applications. It allows two 4K/2K HDMI video and audio streams to be converted simultaneously into MIPI CSI-2 video and TDM audio formats that can be processed by the Jetson TX1 processor.

More information about the hardware, as well as contact information for ordering, is available on the HPMM-blog.

Fig. 1: The HDMI2CSI Board attached to the Nvidia TX1 Evaluation Board

Drivers

The drivers for the HDMI2CSI board are available in the https://github.com/InES-HPMM/linux-l4t/ repository. The OS is Linux4Tegra (L4T) from Nvidia, which is based on Linux kernel 3.10. Our branches are forks of Nvidia's original kernel with our custom drivers for the TC358840 HDMI-to-CSI bridge chip on top. This repository supports L4T versions 24.1 and 24.2.1. All new development is done on 28.1, based on kernel version 4.4 (see the new Development Wiki).

Features

Our drivers allow capturing HDMI sources in formats up to 2160p30 (UHD). In the current state, capturing works in most cases, but this is not production-ready code! It is suitable for evaluating the capabilities of the HDMI-to-CSI bridge in a prototype phase. More complex HDMI functionality (e.g. changing the resolution of the HDMI source on the fly while it is plugged in) is not yet supported. If you are interested in improving the drivers and moving towards more production-ready code, contact us!

The supported features vary based on the branch:

hdmi2csi/l4t-r24-2.1 (recommended version)
  • L4T version: R24.2.1
  • Dynamic format resolve: Yes
  • HDMI-In ports: A and B (requires U-Boot modification, see #1)
  • Max. resolution: 2160p30
  • Status: Under active development
  • EDID: Fixed. Native: 2160p30, 1080p50. Extended: 2160p30, 1080p60, 720p29.97/30, 1080p30, 1080p50, 1080p29.97/30.
  • Audio: Stereo capture on HDMI-In B with separate cable (see Capturing HDMI Audio)

hdmi2csi/l4t-r24-1-dev-4K
  • L4T version: R24.1
  • Dynamic format resolve: Yes
  • HDMI-In ports: A (B can be used in parallel from commit a2b2fc9)
  • Max. resolution: 2160p30
  • Status: Development stopped
  • EDID: Fixed (checksum error)
  • Audio: -

Deprecated: hdmi2csi/l4t-r23-1, hdmi2csi/l4t-r23-1-dev-4K, hdmi2csi/l4t-r24-1

Drivers for L4T R24.2.1

The drivers have now been ported to L4T 24.2.1.

Please note:

  • Changes in U-Boot require that the user replaces the U-Boot files; see Workaround Pinmux
  • Note that we are building the drivers on R24.2.1 (even newer than 24.2). Please use a compatible root filesystem (e.g. the Ubuntu 16.04 that comes with the new JetPack 2.3.1)
  • Nvidia introduced a driver for the TC358840 in 24.2, but we are not (yet) compatible with their camera implementation. Nvidia currently seems to support only up to 4 CSI lanes and capturing at 1080p60. We will therefore wait until they support the full 8 CSI lanes and 2160p30 capture before possibly using their interfaces for the TC358840.

Background Information

Background information about the drivers and their structure is available at driver background information.

Compiling the kernel and drivers

Quickstart

To get started quickly, flash the TX1 with our prebuilt L4T image. The new 24.2.1 image is now available.
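
A typical flashing invocation with the standard L4T tools is sketched below; the exact paths and options depend on your L4T release, so treat this as a sketch rather than the authoritative procedure from the image download page. It assumes the prebuilt image has been copied to Linux_for_Tegra/bootloader/system.img and the TX1 is connected over USB in recovery mode.

# Sketch: flash a prebuilt system image with the standard L4T flashing script
cd Linux_for_Tegra
sudo ./flash.sh -r jetson-tx1 mmcblk0p1   # -r reuses the existing system.img instead of rebuilding it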

Custom Kernel Compilation

Users familiar with Linux may want to compile the kernel from our sources. More information about compiling the kernel is available at Custom Kernel Compilation.
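
For orientation, a minimal cross-compilation sketch is shown here, assuming an aarch64 cross toolchain on the host and the tegra21_defconfig shipped with the TX1 kernel sources; see Custom Kernel Compilation for the exact, supported procedure.

# Sketch: cross-compile the kernel from our sources on a host PC (assumed toolchain/defconfig names)
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
make tegra21_defconfig
make -j"$(nproc)" Image dtbs modules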

Capturing HDMI video with the HDMI2CSI module

EDID / HDMI Source

The hdmi2csi board acts as an HDMI sink (with a limited feature set). Every HDMI sink defines the formats (resolutions, timings, ...) it supports in an EDID. In our case the EDID (v1.3) is fixed and included in the tc358840 driver. Currently these formats are enabled:

  • 16:9
    • 3840x2160p30
    • 1920x1080p24/25/30/50/60
    • 1280x720p30
  • 64:27 ("21:9")
    • 1920x1080p29.97/30/60
    • 1280x720p29.97/30

Tested Sources:

  • Nvidia Shield (1920x1080p60, 3840x2160p30)
  • WD Live TV (1920x1080p60)
  • GoPro Hero 4 (1920x1080p60)
  • QuantumData 780B (all enabled formats)

Load Driver

Make sure the subdevice driver is loaded:

sudo modprobe tc358840
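
To check that the driver is present and that the bridge detects the connected source, the standard kernel and V4L2 tools can be used. This is a quick sketch, assuming v4l-utils is installed and HDMI-In A is /dev/video0:

# Confirm the module is loaded and look for driver messages (hotplug, detected resolution)
lsmod | grep tc358840
dmesg | grep -i tc358840

# Query the DV timings detected on HDMI-In A; optionally apply them (standard V4L2 workflow)
v4l2-ctl -d /dev/video0 --query-dv-timings
v4l2-ctl -d /dev/video0 --set-dv-bt-timings query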

The GStreamer framework is supported for video capturing.

Capture 1080p60 with any branch

The following command can be used to capture an HDMI input at 1080p60 plugged into the HDMI-A input and display it on a display attached to the HDMI output:

gst-launch-1.0 v4l2src ! 'video/x-raw, width=1920, height=1080, framerate=60/1, format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1, format=I420' ! nvoverlaysink sync=false

Capture 2160p30 with hdmi2csi/l4t-r24-1-dev-4K or hdmi2csi/l4t-r24-2.1

Capturing 2160p30 is currently only supported by the hdmi2csi/l4t-r24-1-dev-4K and hdmi2csi/l4t-r24-2.1 branches. It is also necessary to set the TX1 to maximum performance by running the performance script jetson_clocks_max.sh from the Nvidia documentation. A similar script is provided in the Tegra Linux Driver Package R24.1 Release Notes PDF, chapter 2.3 "Maximizing TX1 performance".

sudo su
./jetson_clocks_max.sh
./max_perf.sh

Use one of the following two methods to capture 2160p30 video from the HDMI input and display it on the HDMI output.

MMAP mode and hardware-accelerated path

This approach uses the (slower) MMAP io-mode for passing buffers through the pipeline. Its advantage is that it works with the default GStreamer version 1.2.4, but it is not the best-performing method in terms of CPU usage.

It uses the MMAP io-mode (the v4l2src default) together with the hardware-accelerated path: the Nvidia video converter/scaler nvvidconv operates in video memory (memory:NVMM) and nvoverlaysink writes directly on top of the video buffer.

# Set display resolution to 2160p30 (if not automatically done)
export DISPLAY=:0
xrandr --output HDMI-0 --mode 3840x2160 --rate 30.0

# Launch capture pipeline
gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true ! 'video/x-raw, width=3840, height=2160, framerate=30/1, format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=3840, height=2160, framerate=30/1, format=I420' ! nvoverlaysink sync=false

## io-mode=2 is the default for v4l2src (MMAP method for buffer passing)

Userptr/Dmabuf mode

Alternatively, we can use other modes of passing buffers through the pipeline (e.g. the Userptr or Dmabuf mode). This requires a recent version of GStreamer (at least 1.6.0) and some patches (see build GStreamer manually for details). These modes can also be used to write to an OpenGL display sink and can achieve better CPU performance than MMAP. More information about the io-modes is available here.

Some example pipelines are:

#Dmabuf
gst-launch-1.0 v4l2src io-mode=4 device=/dev/video0 do-timestamp=true ! 'video/x-raw, width=3840, height=2160, framerate=30/1, format=UYVY' ! xvimagesink sync=false
#Userptr
gst-launch-1.0 v4l2src io-mode=3 device=/dev/video0 do-timestamp=true ! 'video/x-raw, width=3840, height=2160, framerate=30/1, format=UYVY' ! xvimagesink sync=false

Capturing Audio from HDMI Input

See Capturing HDMI Audio

Future Development

The performance achievable for 2160p30 video capturing varies between the different methods of buffer passing. There are also limitations on which elements can be used with certain modes (especially the hardware-accelerated video converter nvvidconv). Furthermore, the Userptr and Dmabuf modes require specific GStreamer versions and additional workarounds to function.

The next development step is to improve the usability of the different modes and hopefully allow the use of nvvidconv with the better-performing modes.

The drivers in the dev branches will also be rewritten to improve readability and maintainability.

The current state of support for io-modes is described in Support for io-modes.

If you are interested in collaborating, we are happy to test out changes you made to the code or talk about possible improvements. You can contact us via email to [email protected].

Examples

Note that the device= parameter of v4l2src selects the HDMI input (a sketch for listing the available device nodes follows this list):

  • /dev/video0 selects HDMI-In A (maximum resolution 2160p30)
  • /dev/video2 selects HDMI-In B (maximum resolution 1080p60)
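
To double-check which node maps to which input on a given setup, the registered V4L2 devices and their formats can be listed. A quick sketch, assuming v4l-utils is installed:

# List all registered V4L2 devices and their device nodes
v4l2-ctl --list-devices

# Show the formats and resolutions offered by HDMI-In A
v4l2-ctl -d /dev/video0 --list-formats-ext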

GStreamer

Some of these pipelines may require GStreamer plugins that are only available in the custom-built GStreamer 1.8.0. In that case it is necessary to switch to it by changing environment variables:

export DISPLAY=:0
export LD_LIBRARY_PATH=/home/ubuntu/build/gst_1.8.0/out/lib/
export PATH=/home/ubuntu/build/gst_1.8.0/out/bin/:$PATH
export GST_PLUGIN_PATH=$LD_LIBRARY_PATH
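
After exporting these variables, it is worth verifying that the custom build is actually picked up. A quick check, assuming GStreamer 1.8.0 was installed to the prefix above:

# Both should report the custom 1.8.0 build, not the system-wide 1.2.4
which gst-launch-1.0
gst-inspect-1.0 --version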

Capturing and Rendering

  • Capture 2160p30 on HDMI-In A and render on HDMI Display
    • gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=3840, height=2160, framerate=30/1, format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=3840, height=2160, framerate=30/1, format=I420' ! nvoverlaysink sync=false
  • Capture 1080p60 on HDMI-In B and render on HDMI Display
    • gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, width=1920, height=1080, framerate=60/1, format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1, format=I420' ! nvoverlaysink sync=false
  • Userptr mode for buffer passing for improved performance (requires modifications to GStreamer)
    • gst-launch-1.0 v4l2src io-mode=3 device=/dev/video0 do-timestamp=true ! 'video/x-raw, width=3840, height=2160, framerate=30/1, format=UYVY' ! xvimagesink sync=false
  • Dmabuf mode for buffer passing for improved performance (requires modifications to GStreamer)
    • gst-launch-1.0 v4l2src io-mode=4 device=/dev/video0 do-timestamp=true ! 'video/x-raw, width=3840, height=2160, framerate=30/1, format=UYVY' ! xvimagesink sync=false

Video Processing on GPU

  • Use the GPU plugin nvivafilter for video processing
    • gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=3840, height=2160, format=UYVY, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12' ! nvtee ! nvivafilter cuda-process=true pre-process=true post-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink display-id=0 -e

Streaming

H.265 Encode and Send on TX1, receive on PC with VLC

  • Stream RTP (H.265 encoded) of HDMI-In A on TX1 (warning: the H.265 encoder has some problems with 2160p30)
    • gst-launch-1.0 v4l2src ! 'video/x-raw, width=3840, height=2160, framerate=30/1, format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! queue ! omxh265enc bitrate=2000000 ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! mpegtsmux ! rtpmp2tpay ! udpsink port=5000 async=false sync=false host=192.168.0.1
    • Change host=... to the IP of the receiver
  • On the receiver, create a file mpeg_ts.sdp with the following content and open it with VLC:

v=0
m=video 5000 RTP/AVP 33
c=IN IP4 192.168.0.2
a=rtpmap:33 MP2T/90000
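
The SDP file can also be opened directly from the command line on the receiver; a one-line sketch, assuming VLC is installed and mpeg_ts.sdp is in the current directory:

vlc mpeg_ts.sdp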

H.264 Encode and Send on TX1, receive on PC with GStreamer

  • Stream RTP (H.264 encoded) of HDMI-In B on TX1
    • gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, width=1920, height=1080, framerate=60/1, format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! queue ! omxh264enc bitrate=20000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! mpegtsmux ! rtpmp2tpay ! udpsink port=5000 async=false sync=false host=192.168.0.1
    • Change host=... to the IP of the receiver
    • The encoding bitrate may need to be adjusted based on the video content
  • Receive it on a host PC with GStreamer (or with VLC, as described above):
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)MP2T-ES" ! rtpbin ! rtpmp2tdepay ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! xvimagesink sync=false -vvv -e

Recording

  • Save to disk (H.264 encoded)
    • gst-launch-1.0 v4l2src ! 'video/x-raw, width=3840, height=2160, framerate=30/1, format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! queue ! omxh264enc bitrate=8000000 ! h264parse ! matroskamux ! filesink location=test_4k_h264.mkv -e
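
The resulting file can be checked with any GStreamer-based player. A sketch assuming the recording finished cleanly (stop the pipeline with Ctrl+C; the -e flag forces EOS so the Matroska file is finalized) and that gst-play-1.0 from the GStreamer base tools is available:

# Play back the recorded file (software decode on a PC; on the TX1 the Nvidia decoder elements can be used instead)
gst-play-1.0 test_4k_h264.mkv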

Visionworks

The HDMI2CSI board can be used as a video source in Visionworks/OpenCV. Currently the performance is limited to 4-6 FPS due to a color space conversion that is executed on the CPU, but better performance is expected. The only modification necessary is a single line in the nvxio package that defines the video format. For details please consult this forum thread: https://devtalk.nvidia.com/default/topic/980505/jetson-tx1/visionworks-cannot-fetch-the-frame-from-dev-video0-tc358840-/1

Alternatively, the GStreamer OpenCV plugins can be used for limited use cases (custom-compiled GStreamer 1.8 plugins-bad: cvdilate, cvequalizehist, cverode, cvlaplace, cvsmooth, cvsobel, edgedetect, faceblur, facedetect, motioncells, pyramidsegment, templatematch, opencvtextoverlay, handdetect, skindetect, retinex, segmentation, grabcut, disparity).
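
As an illustration, a hedged example pipeline that runs the OpenCV edge detector from these plugins on HDMI-In A is shown below; it assumes the custom GStreamer 1.8 environment from the GStreamer section is active, and the CPU-side videoconvert steps will limit the achievable frame rate:

# 1080p60 capture -> CPU colour conversion -> OpenCV edge detection -> display
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=1920, height=1080, framerate=60/1, format=UYVY' ! videoconvert ! edgedetect ! videoconvert ! xvimagesink sync=false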

OpenCV

Basic support for capturing with OpenCV was added in L4T 24.2.1 with commit https://github.com/InES-HPMM/linux-l4t/commit/b09bf71699a1c3e0b2f8f6a16e437c3ef218b57a

OpenCV Capture API

To set up video capture in OpenCV, use cv2.VideoCapture(device), where device is the camera index (the same number N as in /dev/videoN, i.e. 0-2). Compare: http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html

Current issues: format negotiation, clean exit.

Multimedia API

To be investigated.