Both were implemented with a minimalistic API (one enum and three functions) that works for all 35 pixelformats. First of all, we have an enum for the channel definitions:
typedef enum
  {
    GAVL_CCH_RED,   // Red
    GAVL_CCH_GREEN, // Green
    GAVL_CCH_BLUE,  // Blue
    GAVL_CCH_Y,     // Luminance (also grayscale)
    GAVL_CCH_CB,    // Chrominance blue (aka U)
    GAVL_CCH_CR,    // Chrominance red (aka V)
    GAVL_CCH_ALPHA, // Transparency (or, to be more precise, opacity)
  } gavl_color_channel_t;
To get the exact grayscale format for one color channel, you first call:

int gavl_get_color_channel_format(const gavl_video_format_t * frame_format,
                                  gavl_video_format_t * channel_format,
                                  gavl_color_channel_t ch);

It returns 1 on success or 0 if the format doesn't have the requested channel. Once you have the channel format, extracting and inserting are done with:

int gavl_video_frame_extract_channel(const gavl_video_format_t * format,
                                     gavl_color_channel_t ch,
                                     const gavl_video_frame_t * src,
                                     gavl_video_frame_t * dst);

int gavl_video_frame_insert_channel(const gavl_video_format_t * format,
                                    gavl_color_channel_t ch,
                                    const gavl_video_frame_t * src,
                                    gavl_video_frame_t * dst);
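As a rough usage sketch (not taken from the original post), here is how the three calls could be combined to pull the alpha channel out of a frame, process it as a grayscale image, and write it back. The helper name process_alpha is made up for illustration; it assumes the frame and its format were set up elsewhere (e.g. by a decoder), that the format argument of both functions is the format of the full frame, and that for insertion the channel frame is the source and the full frame the destination, as the src/dst parameter names suggest:

#include <gavl/gavl.h>

/* Sketch: extract the alpha channel into its own grayscale frame,
   process it, and insert it back. Returns 0 if the pixelformat
   has no alpha channel. */
static int process_alpha(const gavl_video_format_t * format,
                         gavl_video_frame_t * frame)
  {
  gavl_video_format_t channel_format;
  gavl_video_frame_t * channel_frame;

  /* Get the grayscale format describing just the alpha channel */
  if(!gavl_get_color_channel_format(format, &channel_format, GAVL_CCH_ALPHA))
    return 0;

  channel_frame = gavl_video_frame_create(&channel_format);

  /* Copy the alpha channel into the grayscale frame */
  gavl_video_frame_extract_channel(format, GAVL_CCH_ALPHA, frame, channel_frame);

  /* ... modify channel_frame here, e.g. blur or threshold it ... */

  /* Write the (modified) channel back into the original frame */
  gavl_video_frame_insert_channel(format, GAVL_CCH_ALPHA, channel_frame, frame);

  gavl_video_frame_destroy(channel_frame);
  return 1;
  }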
In the gmerlin tree, there are test programs extractchannel and insertchannel, which test the functions for all possible combinations of pixelformats and channels. They live in the gmerlin tree rather than in gavl because we load and save the test images with gmerlin plugins.