Setup Details:
PCIe Link Speeds Tested: Gen1 x1 (2.5 GT/s) and Gen2 x1 (5 GT/s)
XDMA Transfer: C2H (FPGA to IMX)
Data Type: RAW RGB32 video
IMX Linux Kernel Version: 6.6.52
Vivado Version: 2022.2
Issue Description:
When using the XDMA driver for C2H transfers, the observed throughput is consistently capped at ~120 MB/s, regardless of whether the PCIe link operates at Gen1 x1 (2.5 GT/s) or Gen2 x1 (5 GT/s). This suggests a bottleneck in the driver, the DMA engine, or the PCIe configuration.
Steps Taken:
Verified the PCIe link speed with lspci -vvv, which confirms Gen2 x1 (5 GT/s) operation; see the attached lspci_xdma_log.txt and the sysfs sketch after this list.
Ensured the XDMA module is correctly loaded and initialized.
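As a scriptable alternative to parsing lspci output, the negotiated link state can also be read from sysfs. A minimal sketch, assuming the endpoint sits at BDF 0000:01:00.0 (a placeholder; substitute your device's address):

```c
#include <stdio.h>

/* Print one sysfs attribute exposed by the Linux PCI core. */
static void print_attr(const char *path)
{
    char buf[64];
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return; }
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", path, buf);
    fclose(f);
}

int main(void)
{
    /* 0000:01:00.0 is an assumed BDF for the XDMA endpoint. */
    print_attr("/sys/bus/pci/devices/0000:01:00.0/current_link_speed");
    print_attr("/sys/bus/pci/devices/0000:01:00.0/current_link_width");
    return 0;
}
```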
Expected Behavior:
At Gen2 x1 (5 GT/s), throughput should be roughly double the Gen1 x1 figure.
Performance should scale with the PCIe link speed.
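For rough reference (back-of-envelope numbers, not measurements from this setup): Gen1 x1 signals at 2.5 GT/s with 8b/10b encoding, i.e. 250 MB/s of raw link bandwidth, and Gen2 x1 doubles that to 500 MB/s. After TLP and protocol overhead, something on the order of 180-200 MB/s (Gen1) and 350-450 MB/s (Gen2) is typically reachable. A cap of ~120 MB/s at both link speeds sits below even the Gen1 practical ceiling, which points away from the link itself.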
Questions:
Is there any known limitation in the XDMA driver for i.MX8MP?
Are there additional tuning parameters for increasing throughput?
Would appreciate any insights or recommendations for debugging this further.
Logs and additional details can be provided upon request.
Hi @krithick14, try the following recommendations:
Set up an ILA to watch the XDMA AXI bus, and perform multiple transactions.
Your transfer rate depends on the AXI clock frequency, so raise it as high as your application requires (see the worked numbers after this list).
If you are using AXI MM, make sure it is configured as full AXI, not AXI-Lite, because only full AXI supports burst transactions.
Check whether your data source can supply data continuously without dropping the VALID signal; otherwise it becomes a bottleneck.
Check whether your data source can provide data immediately when requested; otherwise the added latency also reduces transfer speed. If the source delivers data immediately and continuously, the rate within a single transaction will be near the maximum possible for your AXI frequency, which means any remaining bottleneck lies between transactions (see the measurement sketch after this list).
A large transfer is split by the driver into smaller chunks; when one chunk finishes, the driver receives an interrupt and requests the next chunk. If interrupt handling on your system is slow, try switching the driver to polling mode: insmod xdma.ko poll_mode=1
Make sure the debug output in the XDMA driver is disabled.
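On the AXI clock recommendation above: the ceiling for an AXI MM datapath is roughly bus width × clock frequency. As an illustration (assumed numbers, not taken from this thread), a 64-bit AXI bus at 125 MHz tops out at 8 B × 125 MHz = 1000 MB/s, while a 32-bit bus at 62.5 MHz allows only 250 MB/s; real throughput lands below these figures once handshake stalls and gaps between bursts are counted.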
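And on locating a bottleneck between transactions: below is a minimal user-space probe, assuming the stock Xilinx XDMA character devices (/dev/xdma0_c2h_0) and a data source that keeps supplying data. Running it once with a large CHUNK and again with a small one hints where time is lost: if throughput collapses for small chunks, per-chunk (interrupt/driver) overhead dominates rather than the AXI side.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)     /* bytes per read(); vary this */
#define TOTAL (256LL * CHUNK)   /* total bytes to pull */

int main(void)
{
    int fd = open("/dev/xdma0_c2h_0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(CHUNK);
    if (!buf) { perror("malloc"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    long long done = 0;
    while (done < TOTAL) {
        ssize_t n = read(fd, buf, CHUNK);   /* one DMA request per call */
        if (n <= 0) { perror("read"); break; }
        done += n;
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s (%lld bytes in %.2f s)\n", done / secs / 1e6, done, secs);

    free(buf);
    close(fd);
    return 0;
}
```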
Below are my results with different AXI frequencies and PCIe modes, mostly in AXI-Lite mode with one result in full AXI mode, measured on an iMX8MM with the community patch set, MSI interrupt mode, XDMA 2018.2, and small bottlenecks in the data source. If you use these results in a scientific paper, please add a link to me; I'd really appreciate it.