
Detects intel gpu but doesn't use it #3

Open
MGThePro opened this issue Feb 16, 2025 · 8 comments

@MGThePro

Hey, I passed through the intel dri device via docker with the

    devices:
      - /dev/dri:/dev/dri

lines in the docker compose. The log output of virtd also shows the line

[2025-02-16T10:59:04Z INFO  vertd] detected a Intel GPU -- if this isn't your vendor, open an issue.

But when I try to convert a video, it doesn't use the GPU at all; instead it puts a 100% load on one CPU core for a few seconds and then cancels the process with the error message "Error converting file: filename" on the website. No error appears in the vert or vertd logs.

Not sure if this is even supposed to work yet, but I'd imagine getting Intel and AMD support should be relatively easy, since over VAAPI they behave exactly the same, and passing through the /dev/dri folder (or just the specific device, like /dev/dri/renderD128) should be enough.

@not-nullptr
Member

which intel gpu? vertd only supports arc gpus because of their insanely good media encoders

@MGThePro
Author

Integrated GPU of an N100

Idk how/why only Arc would be supported when everything since Kaby Lake uses the same driver and works the same over QSV and VAAPI

@not-nullptr
Member

not-nullptr commented Feb 16, 2025

i can't find a single thing about the iGPU (not even the model?) but if it prints that line, it means it will try to use qsv:

pub fn encoder_priority(&self) -> Vec<&str> {
    match self {
        ConverterGPU::AMD => vec!["amf"],
        ConverterGPU::Intel => vec!["qsv"],
        ConverterGPU::NVIDIA => vec!["nvenc"],
        ConverterGPU::Apple => vec!["videotoolbox"],
    }
}

i don't have an intel gpu to test this, so if this behaviour is wrong, please let me know. vertd has only been tested internally on an M3 macbook pro, an rtx 3080 and an rtx 4000 ada lovelace
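roughly, the first entry in that list gets combined with the codec name to pick an ffmpeg encoder (ffmpeg names its hardware encoders `<codec>_<backend>`, e.g. `h264_qsv`). a sketch of the idea, with made-up helper names rather than vertd's actual code:

```rust
// Sketch only: helper names are invented for illustration, not vertd's code.
// ffmpeg's hardware encoders are named "<codec>_<backend>", e.g. "h264_qsv".
fn encoder_name(codec: &str, backend: &str) -> String {
    format!("{codec}_{backend}")
}

// Build the argument list that would be handed to an ffmpeg process.
fn build_ffmpeg_args(input: &str, output: &str, backend: &str) -> Vec<String> {
    vec![
        "-i".into(), input.into(),
        "-c:v".into(), encoder_name("h264", backend),
        output.into(),
    ]
}

fn main() {
    // For Intel the first priority entry is "qsv", so this selects h264_qsv.
    println!("{}", build_ffmpeg_args("in.mp4", "out.mp4", "qsv").join(" "));
}
```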

@not-nullptr
Member

also keep in mind your gpu is an iGPU and, as such, probably has very limited encoder support (but, again, can't find any docs to verify that)

@MGThePro
Author

i can't find a single thing about the iGPU (not even the model?) but if it prints that line, it means it will try to use qsv:

pub fn encoder_priority(&self) -> Vec<&str> {
    match self {
        ConverterGPU::AMD => vec!["amf"],
        ConverterGPU::Intel => vec!["qsv"],
        ConverterGPU::NVIDIA => vec!["nvenc"],
        ConverterGPU::Apple => vec!["videotoolbox"],
    }
}

When exactly is it supposed to use this? In htop I only see an ffprobe command running for a few seconds, and there is no parameter in there to set up hardware acceleration. Also sorry, I can't really follow Rust syntax, so idk where I would find that in the code. Is there supposed to be an ffmpeg command following this where the hardware is being used?

also keep in mind your gpu is an iGPU and, as such, probably has very limited encoder support (but, again, can't find any docs to verify that)

The de/encoding capabilities of the N100 are actually pretty good, arguably better than any AMD encoder/decoder apart from maybe the very latest ones on RDNA3. It also allows encoding multiple concurrent streams without driver trickery, unlike Nvidia. And it supports all the important codecs (h264, h265 and AV1, all in both decoding and encoding; even VP9, which Nvidia and AMD can only decode, Intel can do both ways). On things like Jellyfin or Plex it can handle multiple transcodes at a time; only with stuff like 4K HDR does it drop to about one and a half streams.

I will investigate further. Right now I am having two issues

  1. any video conversion cancels after ~5 seconds
  2. Hardware isn't being used

In a little while I will see if these are connected by removing the GPU from the docker container, and I'll send an update. Thanks for looking into this so far, and for creating this project :)

@MGThePro
Author

Nope, removing the DRI device passthrough doesn't fix issue number one.
It's also autodetecting an Nvidia GPU for some reason 🤨

==========
== CUDA ==
==========
CUDA Version 12.8.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
WARNING: The NVIDIA Driver was not detected.  GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .
[2025-02-17T15:08:24Z INFO  vertd] starting vertd
[2025-02-17T15:08:24Z INFO  vertd] working w/ ffmpeg 6.1.1-3ubuntu5 and ffprobe 6.1.1-3ubuntu5
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
[2025-02-17T15:08:25Z WARN  vertd::converter::gpu] are you in a docker container? assuming NVIDIA, please open a PR and fix this if you're not.
[2025-02-17T15:08:25Z INFO  vertd] detected a NVIDIA GPU -- if this isn't your vendor, open an issue.

After the conversion of a file it just outputs the "uploaded file" line in the log, no error or anything else after that.
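Since that warning asks for a fix: one possible approach (just a sketch of my own, not vertd's actual code; the sysfs paths and fallback behaviour are assumptions) would be to read the PCI vendor id of the render node instead of assuming NVIDIA inside docker:

```rust
use std::fs;

// Sketch: detect the GPU vendor from the render node's PCI vendor id,
// which works inside a container as long as /dev/dri is passed through.
// Not vertd's real implementation.
fn vendor_from_pci_id(id: &str) -> Option<&'static str> {
    // Well-known PCI vendor ids: Intel 0x8086, AMD 0x1002, NVIDIA 0x10de.
    match id.trim() {
        "0x8086" => Some("Intel"),
        "0x1002" => Some("AMD"),
        "0x10de" => Some("NVIDIA"),
        _ => None,
    }
}

fn detect_gpu() -> Option<&'static str> {
    // Render nodes appear as /sys/class/drm/renderD*/device/vendor.
    let entries = fs::read_dir("/sys/class/drm").ok()?;
    for entry in entries.flatten() {
        if entry.file_name().to_string_lossy().starts_with("renderD") {
            if let Ok(id) = fs::read_to_string(entry.path().join("device/vendor")) {
                if let Some(vendor) = vendor_from_pci_id(&id) {
                    return Some(vendor);
                }
            }
        }
    }
    None
}

fn main() {
    // Prints Some("Intel"), Some("AMD"), etc., or None if no render node found.
    println!("{:?}", detect_gpu());
}
```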

For reference, here's my docker compose of vertd:

  vertd:
    container_name: vertd
    image: VERT-sh/vertd:latest
    environment:
      - PORT=24153
    ports:
      - 24153:24153
    #devices:
    #  - /dev/dri:/dev/dri

The commented-out lines at the bottom are how I pass through the /dev/dri devices (including /dev/dri/renderD128, which is responsible for encoding/decoding). This exact setup also works with Jellyfin
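If it helps when Intel/AMD support lands: Jellyfin's docs also add the container user to the host's render group alongside the device passthrough. A compose sketch along those lines (the group id here is an example, it varies per host; check yours with `getent group render`):

```yaml
  vertd:
    container_name: vertd
    image: VERT-sh/vertd:latest
    environment:
      - PORT=24153
    ports:
      - 24153:24153
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
    group_add:
      - "989"   # host "render" group id -- example value, host-specific
```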

Also, isn't it a little wasteful to include CUDA for everyone? CUDA is multiple gigabytes by itself, isn't it?

@not-nullptr
Member

the docker container (for now) is just for an official instance which is coming soon. getting it working was hard enough; i barely know how to use docker, so there's no shot i'll get a container working for hardware i don't own

@MGThePro
Author

If you do want to support these one day, I suggest you look at the Jellyfin docker container; they have implemented hardware acceleration beautifully and are also using ffmpeg under the hood.
The setup process, at least, is documented in a lot of detail as well
