BMP processing #28

Open
m1adow opened this issue Nov 20, 2024 · 0 comments


m1adow commented Nov 20, 2024

Hello! I have a question. I'm trying to encode and decode frames. On input I have a 32-bit-depth BMP, but on output I get a 24-bit-depth frame in Format24bppRgb format. How can I get 32-bit-depth Bgra output using your library?
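For reference, the bit-depth difference comes down to bytes per pixel: Bgra is 4 bytes/pixel and Format24bppRgb is 3 bytes/pixel, so the row stride differs. A quick sketch (the `Stride` helper is illustrative only, not part of the library):

```csharp
using System;

public static class StrideDemo
{
    // Illustrative helper (not part of Sdcb.FFmpeg): bytes per row of
    // tightly packed pixel data.
    public static int Stride(int width, int bytesPerPixel) => width * bytesPerPixel;

    public static void Main()
    {
        Console.WriteLine(Stride(1920, 4)); // Bgra: 7680, the Linesize[0] used below
        Console.WriteLine(Stride(1920, 3)); // Format24bppRgb: 5760
    }
}
```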

Encoding code:

```csharp
encoder = new CodecContext(Codec.FindEncoderById(AVCodecID.H264))
{
    Width = 1920,
    Height = 1080,
    Framerate = new AVRational(1, 30),
    TimeBase = new AVRational(1, 30),
    PixelFormat = AVPixelFormat.Yuv420p,
};

encoder.Open(null, new MediaDictionary
{
    ["crf"] = "30",
    ["tune"] = "zerolatency",
    ["preset"] = "veryfast"
});

rgbFrame.Width = 1920;
rgbFrame.Height = 1080;
rgbFrame.Format = (int)AVPixelFormat.Bgra;
rgbFrame.Linesize[0] = 7680; // 1920 px × 4 bytes (Bgra)
rgbFrame.Pts = 0;

yuvFrame.Width = 1920;
yuvFrame.Height = 1080;
yuvFrame.Format = (int)AVPixelFormat.Yuv420p;
yuvFrame.EnsureBuffer();
yuvFrame.MakeWritable();

unsafe
{
    // Keep the managed buffer pinned for the whole conversion:
    // the pointer is only valid inside the fixed block.
    fixed (byte* ptr = frame)
    {
        rgbFrame.Data[0] = (nint)ptr;
        videoFrameConverter.ConvertFrame(rgbFrame, yuvFrame);
    }
}
yuvFrame.Pts = 1;

var frames = new List<byte[]>();
foreach (var packet in encoder.EncodeFrame(yuvFrame, packetRef))
{
    frames.Add(packet.Data.AsSpan().ToArray());
}
```

Decoding code:

```csharp
decoder = new CodecContext(Codec.FindDecoderById(AVCodecID.H264))
{
    Width = 1920,
    Height = 1080,
    PixelFormat = AVPixelFormat.Yuv420p,
};
decoder.Open();

var decodedFrames = new List<byte[]>();
foreach (var packetBytes in packets)
{
    unsafe
    {
        fixed (byte* bodyPtr = packetBytes)
        {
            AVPacket avPacket = default;
            _ = ffmpeg.av_packet_from_data(&avPacket, bodyPtr, packetBytes.Length);
            using var packet = Packet.FromNative(&avPacket, false);
            packet.Flags = 1; // AV_PKT_FLAG_KEY
            decoder.SendPacket(packet);
        }
    }

    var decodeResult = decoder.ReceiveFrame(yuvFrame);
    if (decodeResult == CodecResult.Success)
    {
        using var rgbFrame = new Frame()
        {
            Width = yuvFrame.Width,
            Height = yuvFrame.Height,
            Format = (int)AVPixelFormat.Bgra,
        };

        rgbFrame.EnsureBuffer();
        rgbFrame.MakeWritable();
        videoFrameConverter.ConvertFrame(yuvFrame, rgbFrame);

        var frame = rgbFrame.EncodeToBytes(formatName: "image2pipe");
        decodedFrames.Add(frame);
    }
}
```
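If the decoded output still comes back as 24-bit RGB, one fallback outside the library is to expand each row to Bgra manually, padding alpha with 255. Note that H.264 with Yuv420p carries no alpha plane, so the original per-pixel alpha cannot be recovered after the round trip in any case. A sketch (the `Bgr24ToBgra32` helper is hypothetical, not a library function):

```csharp
using System;

public static class ExpandDemo
{
    // Expand a packed Bgr24 row to Bgra32, padding alpha with 255.
    // Sketch only: Yuv420p has no alpha plane, so the original alpha
    // values are gone and every pixel comes out fully opaque.
    public static byte[] Bgr24ToBgra32(byte[] bgr, int width)
    {
        var bgra = new byte[width * 4];
        for (int x = 0; x < width; x++)
        {
            bgra[x * 4 + 0] = bgr[x * 3 + 0]; // B
            bgra[x * 4 + 1] = bgr[x * 3 + 1]; // G
            bgra[x * 4 + 2] = bgr[x * 3 + 2]; // R
            bgra[x * 4 + 3] = 255;            // A: opaque
        }
        return bgra;
    }

    public static void Main()
    {
        var row = Bgr24ToBgra32(new byte[] { 10, 20, 30 }, 1);
        Console.WriteLine(string.Join(",", row)); // 10,20,30,255
    }
}
```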

So, what should I change in my code?
