Detects Intel GPU but doesn't use it #3
which intel gpu? vertd only supports arc gpus because of their insanely good media encoders
Integrated GPU of an N100. Idk how/why only Arc would be supported when everything since Kaby Lake uses the same driver and works the same over QSV and VAAPI
i can't find a single thing about the iGPU (not even the model?) but if it prints that line, it means it will try to use qsv:

```rust
pub fn encoder_priority(&self) -> Vec<&str> {
    match self {
        ConverterGPU::AMD => vec!["amf"],
        ConverterGPU::Intel => vec!["qsv"],
        ConverterGPU::NVIDIA => vec!["nvenc"],
        ConverterGPU::Apple => vec!["videotoolbox"],
    }
}
```

i don't have an intel gpu to test this, so if this behaviour is wrong, please let me know. vertd has only been tested internally on an M3 MacBook Pro, an RTX 3080 and an RTX 4000 Ada Lovelace
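For context, ffmpeg names its hardware encoders `<codec>_<backend>` (e.g. `h264_qsv`, `hevc_nvenc`), so a priority token like `"qsv"` would typically be combined with a codec name to pick the concrete encoder. The following is an illustrative sketch only, not vertd's actual code; the function name is hypothetical:

```rust
// Hypothetical helper (NOT from vertd): combine a priority token like "qsv"
// with a codec name to form ffmpeg's hardware encoder name, e.g. "h264_qsv".
fn ffmpeg_encoder_name(priority_token: &str, codec: &str) -> String {
    // ffmpeg's convention: h264_qsv, hevc_nvenc, av1_amf, h264_videotoolbox...
    format!("{codec}_{priority_token}")
}

fn main() {
    // For an Intel GPU the priority list above is ["qsv"], so an H.264 job
    // would request ffmpeg's h264_qsv encoder.
    println!("{}", ffmpeg_encoder_name("qsv", "h264")); // h264_qsv
}
```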
also keep in mind your gpu is an iGPU and, as such, probably has very limited encoder support (but, again, can't find any docs to verify that)
When exactly is it supposed to use this? In htop I only see an ffprobe command running for a few seconds, and there is no parameter set in there to set up hardware acceleration. Also, sorry, I can't really follow Rust syntax, so idk where I would find that in the code. Is there supposed to be an ffmpeg command following this where the hardware is being used?
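For reference, a QSV-accelerated ffmpeg invocation generally carries flags like `-init_hw_device qsv=hw` and `-c:v h264_qsv`, which would be visible on the ffmpeg process in htop if hardware encoding were active. The sketch below assembles such a command in Rust; it is a generic example with an assumed helper name, not necessarily what vertd builds internally:

```rust
use std::process::Command;

// Sketch only (NOT vertd's actual code): build a generic QSV-accelerated
// ffmpeg command. These are standard ffmpeg flags for Intel Quick Sync.
fn qsv_command(input: &str, output: &str) -> Command {
    let mut cmd = Command::new("ffmpeg");
    cmd.args([
        "-init_hw_device", "qsv=hw", // open the Intel Quick Sync device
        "-i", input,
        "-c:v", "h264_qsv",          // hardware H.264 encoder
        "-global_quality", "25",     // ICQ-style quality target for QSV
        output,
    ]);
    cmd
}

fn main() {
    // Print the would-be command line instead of spawning it.
    let cmd = qsv_command("input.mp4", "output.mp4");
    let args: Vec<String> = cmd
        .get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect();
    println!("ffmpeg {}", args.join(" "));
}
```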
The de/encoding capabilities of the N100 are actually pretty good, arguably better than any AMD encoder/decoder apart from maybe the very latest ones on RDNA3. It also allows multiple concurrent streams to be encoded without driver trickery, unlike Nvidia. And it supports all the important codecs (H.264, H.265 and AV1, all in both decoding and encoding; even VP9, which both Nvidia and AMD can only decode, but Intel can do both). On things like Jellyfin or Plex it can handle multiple transcodes at a time unless you're working with stuff like 4K HDR, where it should only manage about one and a half streams.

I will investigate further. Right now I am having two issues
In a little while I will see if these are connected by removing the GPU from the docker container and send an update. Thanks for looking into this so far and for creating this project :)
Nope, removing the DRI device passthrough doesn't fix issue number one
After the conversion of a file it just outputs the "uploaded file" line in the log, with no error or anything else after that. For reference, here's my docker compose for vertd:
The commented out lines at the bottom are how I pass through the /dev/dri/renderD128 device, which is responsible for encoding/decoding. This exact way also works on Jellyfin. Also, isn't it a little wasteful to include CUDA for everyone? CUDA is multiple gigabytes by itself, isn't it?
the docker container (for now) is just for an official instance which is coming soon. getting it working was hard enough, i barely know how to use docker, so there's no shot i'll get a container working for hardware i don't own
If you do want to support these one day, I suggest you look at the Jellyfin docker container; they have implemented hardware acceleration beautifully and are also using ffmpeg under the hood
Hey, I passed through the intel dri device via docker with the
lines in the docker compose. The log output of vertd also shows the line
But when I try to convert a video it doesn't use the GPU at all, instead putting a 100% load on one CPU core for a few seconds and then cancelling the process with the error message "Error converting file: filename" on the website. No error in the vert or vertd log
Not sure if this is even supposed to work yet, but I'd imagine getting Intel and AMD support should be relatively easy, since over VAAPI they are the exact same, and passing through the /dev/dri folder (or just the specific device like /dev/dri/renderD128) should be enough
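For illustration, the kind of device passthrough being described usually looks like this in a compose file. This is a hedged sketch only: the service name and image tag are placeholders, not vertd's actual published configuration:

```yaml
# Sketch of /dev/dri passthrough (service name and image are placeholders).
services:
  vertd:
    image: vertd:latest          # placeholder image name
    devices:
      # expose the Intel render node so VAAPI/QSV inside the container can use it
      - /dev/dri/renderD128:/dev/dri/renderD128
```

This mirrors how containers like Jellyfin's are typically given access to the Intel iGPU on the host.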