avcodec/dpx: Fix heap-buffer-overflow in 16-bit decoding

Fixes a heap-buffer-overflow in `libavcodec/dpx.c` triggered by a stale
`unpadded_10bit` flag in the `DPXDecContext`. This flag, set for 10-bit
unpadded frames, persisted across `decode_frame` calls. If a subsequent
frame was 16-bit, the stale flag caused incorrect buffer size
validation, allowing truncated buffers to pass checks designed for
smaller 10-bit packed data. This led to an out-of-bounds read in
`av_image_copy_plane` during 16-bit decoding.
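
To make the mechanism concrete, here is a minimal sketch of the failure mode. `validate_and_copy`, its parameters, and the stand-in `AVERROR_INVALIDDATA` value are illustrative; only `unpadded_10bit` is taken from the actual code, and the real dpx.c validation differs in detail:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Persistent decoder state, as in DPXDecContext: the flag outlives a
 * single decode_frame() call unless it is explicitly reset. */
typedef struct DPXDecContext {
    int unpadded_10bit;
} DPXDecContext;

#define AVERROR_INVALIDDATA (-1) /* stand-in for the real FFmpeg macro */

/* Unpadded 10-bit rows are smaller than the padded stride, so a size
 * check along these lines would be relaxed when the flag is set. If the
 * flag is stale from a previous 10-bit frame, a truncated 16-bit packet
 * is waved through and the row copy reads past the end of `buf`. */
static int validate_and_copy(const DPXDecContext *dpx,
                             const uint8_t *buf, int64_t buf_size,
                             uint8_t *dst, int stride, int height)
{
    if ((int64_t)stride * height > buf_size && !dpx->unpadded_10bit)
        return AVERROR_INVALIDDATA;

    for (int y = 0; y < height; y++)        /* models av_image_copy_plane() */
        memcpy(dst + (ptrdiff_t)y * stride, /* out-of-bounds read when      */
               buf + (ptrdiff_t)y * stride, /* buf_size < stride * height   */
               stride);
    return 0;
}
```

A 10-bit unpadded frame leaves the flag set; if the next packet is 16-bit and truncated, the strict size comparison fails, but the stale relaxation still admits it.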

The fix resets `dpx->unpadded_10bit = 0` in `decode_frame`, immediately
before the device-specific check that can set it, so the flag is
re-derived for every frame rather than inherited from the previous one.
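
In isolation, the corrected control flow looks roughly like the following sketch. The helper name and parameters are hypothetical; the `memcmp` checks and the flag come from the code shown in the diff below:

```c
#include <string.h>

typedef struct DPXDecContext {
    int unpadded_10bit;
} DPXDecContext;

static void detect_unpadded_10bit(DPXDecContext *dpx,
                                  const char *input_device,
                                  const char *creator,
                                  int bits_per_raw_sample)
{
    /* Re-derive the flag on every call instead of inheriting whatever
     * the previous frame left behind. */
    dpx->unpadded_10bit = 0;
    if (!memcmp(input_device, "Scanity", 7) ||
        !memcmp(creator, "Lasergraphics Inc.", 18)) {
        if (bits_per_raw_sample == 10)
            dpx->unpadded_10bit = 1;
    }
}
```

Deriving the default and the exception in one place means no later frame can observe the previous frame's value.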

Fixes: out of array read
Fixes: 464471792/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DPX_DEC_fuzzer-5275522210004992
Fixes: https://issues.oss-fuzz.com/issues/464471792
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

@@ -612,6 +612,7 @@ static int decode_frame(AVCodecContext *avctx, AVFrame *p,
     av_dict_set(&p->metadata, "Input Device", input_device, 0);
     // Some devices do not pad 10bit samples to whole 32bit words per row
+    dpx->unpadded_10bit = 0;
     if (!memcmp(input_device, "Scanity", 7) ||
         !memcmp(creator, "Lasergraphics Inc.", 18)) {
         if (avctx->bits_per_raw_sample == 10)
             dpx->unpadded_10bit = 1;