
The xdma test combined with bram succeeded, but the test failed when bram was replaced with ddr4 #324

Open
LOCKEDGATE opened this issue Feb 25, 2025 · 3 comments


@LOCKEDGATE
I first connected the xdma and bram IPs in a block design using connection automation. I then put the design on the board and tested it with run-test.sh, and the test passed.
Next I used a block design again to connect xdma to ddr4 (without bram), and when I ran run-test.sh this time, an error was reported:
Info: Number of enabled h2c channels = 1
Info: Number of enabled c2h channels = 1
Info: The PCIe DMA core is memory mapped.
Info: Running PCIe DMA memory mapped write read test
 transfer size: 1024
 transfer count: 1
Info: Writing to h2c channel 0 at address offset 0.
Info: Wait for current transactions to complete.
[ 77.929332] xdma:xdma_xfer_submit: xfer 0xa4f43bf6,1024, s 0x1 timed out, ep 0x400.
[ 77.936967] xdma:engine_reg_dump: 0-H2C0-MM: ioread32(0xa581890c) = 0x1fc00006 (id).
[ 77.944674] xdma:engine_reg_dump: 0-H2C0-MM: ioread32(0xf9083608) = 0x00000001 (status).
[ 77.952725] xdma:engine_reg_dump: 0-H2C0-MM: ioread32(0xa8e2faa2) = 0x00f83e1f (control)
[ 77.960777] xdma:engine_reg_dump: 0-H2C0-MM: ioread32(0xbf40cb75) = 0x98450040 (first_desc_lo)
[ 77.969346] xdma:engine_reg_dump: 0-H2C0-MM: ioread32(0xb7c74409) = 0x00000000 (first_desc_hi)
[ 77.977911] xdma:engine_reg_dump: 0-H2C0-MM: ioread32(0xe71bd4ca) = 0x00000000 (first_desc_adjacent).
[ 77.987079] xdma:engine_reg_dump: 0-H2C0-MM: ioread32(0xf6a7ae72) = 0x00000000 (completed_desc_count).
[ 77.996333] xdma:engine_reg_dump: 0-H2C0-MM: ioread32(0xc8811a87) = 0x00f83e1e (interrupt_enable_mask)
[ 78.005590] xdma:engine_status_dump: SG engine 0-H2C0-MM status: 0x00000001: BUSY
[ 78.013030] xdma:transfer_abort: abort transfer 0xa4f43bf6, desc 1, engine desc queued 0.
/dev/xdma0_h2c_0, W off 0x4a57c4, 0x0 != 0x0.
write file: Unknown error 512
Info: Writing to h2c channel 0 at address offset 1024.
Info: Wait for current transactions to complete.

Figure 1 below shows the block design (BD) of the project, and Figure 2 shows the address allocation. Looking forward to your reply.

[Figure 1: block design]

[Figure 2: address allocation]

@ivansun1688

It seems we are hitting the same problem!

@dmitrym1

Hi @LOCKEDGATE. Based on "write file: Unknown error 512", I assume you are not using alonbl's latest patch #240. Give it a try; it may fix your issue.
Based on the register dump, the DMA engine reports BUSY status, which means it is still doing something, or at least thinks it is. I see you already have the AXI bus connected to a System ILA; that is where you should direct your debugging effort. Check whether the transaction continues after you get the timeout error, and whether anything is wrong with the protocol itself. Remember that DDR is fast only in burst mode: if for some reason you are getting single transfers or very narrow bursts, that is where the performance drop occurs. Try a smaller transaction size and see if it works.
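One way to try that last suggestion is to step through transfer sizes with the `dma_to_device` utility from the dma_ip_drivers test tools. This is only a sketch: the device node `/dev/xdma0_h2c_0` and the `-d`/`-s`/`-c`/`-a` flags assume the stock test setup, and the script simply skips if the driver is not loaded.

```shell
# Sketch: bisect the H2C transfer size to see where DMA starts timing out.
# Assumes dma_to_device (from dma_ip_drivers/XDMA/linux-kernel/tools) is in
# the current directory and the xdma driver has created /dev/xdma0_h2c_0.
DEV=/dev/xdma0_h2c_0
if [ -e "$DEV" ]; then
  for SIZE in 64 128 256 512 1024; do
    echo "trying transfer size $SIZE"
    ./dma_to_device -d "$DEV" -s "$SIZE" -c 1 -a 0 \
      || { echo "transfer failed at size $SIZE"; break; }
  done
  STATUS=done
else
  echo "xdma device not found; load the xdma driver first"
  STATUS=skipped
fi
```

If small transfers succeed but larger ones hang, that points at the burst behavior on the AXI side rather than the driver, which is exactly what the System ILA capture should confirm.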

@LOCKEDGATE
Author


Yes, thank you very much for your help. I am using driver version 2020.2, which may have the problem you mentioned. I will try the patch you suggested and check the waveform carefully. If I get any results, I will report them here. Thank you again!
