Audio/Video Development -- V4L2 Learning (Part 2)
2021/10/28 6:11:34
Continued from the previous post, "Audio/Video Development -- V4L2 Learning (Part 1)".
3. Adding the Correct Pixel Format to the V4L2 Sample
When running the sample as described above, the displayed picture was garbled. We suspected the UVC camera was not actually delivering YUV420, so we tried switching to a different pixel format.
Opening capturer_mmap.c, you can see that the sample only supports three pixel formats. We add YUV 4:2:2 support (in practice, the UVC camera under test only supports the YVYU variant).
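Before changing any code, it is worth confirming what formats the camera actually advertises. Below is a minimal sketch using the standard VIDIOC_ENUM_FMT ioctl (the /dev/video0 path is an assumption; running v4l2-ctl --list-formats-ext from v4l-utils reports the same information):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* adjust the device node as needed */
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_fmtdesc desc;
    memset(&desc, 0, sizeof(desc));
    desc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    /* walk the driver's format list until it runs out of entries */
    while (ioctl(fd, VIDIOC_ENUM_FMT, &desc) == 0) {
        printf("format %u: %.4s (%s)\n", desc.index,
               (char *)&desc.pixelformat, (const char *)desc.description);
        desc.index++;
    }
    return 0;
}
```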
The pixel format definitions live in the /usr/include/linux/videodev2.h header. Here we pick the YVYU format and add it to capturer_mmap.c.
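For reference, in current kernel headers the format is defined roughly as follows; the comment block spells out the packed byte layout that drives the buffer-size change below:

```c
/* from linux/videodev2.h (exact comment text may differ between kernel versions) */
#define V4L2_PIX_FMT_YVYU    v4l2_fourcc('Y', 'V', 'Y', 'U') /* 16  YVU 4:2:2 */

/*
 * YVYU is a packed 4:2:2 format: every 4 bytes describe 2 pixels,
 * i.e. 2 bytes per pixel on average:
 *
 *   byte 0   byte 1   byte 2   byte 3
 *    Y0       V0       Y1       U0
 *
 * Pixel 0 = (Y0, U0, V0), pixel 1 = (Y1, U0, V0).
 */
```

Because two pixels share four bytes, a full frame occupies width*height*2 bytes, which is exactly the Bpf value added in the diff further down.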
The main changes to capturer_mmap.c are:
1. Add the YVYU pixel format: when -p 3 is passed on the command line, this format is configured on the UVC camera through V4L2.
2. Update the corresponding per-frame read size (bytes per frame).
```
[root@localhost V4l2_samples-0.4.1]# diff capturer_mmap.c capturer_mmap.c.bak
98,100d97
<     case 3:
<         Bpf = width*height*2;
<         break;
336,338d332
<         break;
<     case 3:
<         fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YVYU;
```
With these changes, capturer_mmap reads out correct YVYU capture data.
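Stripped of the sample's surrounding plumbing, the format negotiation boils down to one VIDIOC_S_FMT call. The sketch below is a standalone illustration (set_yvyu_format is a hypothetical helper, not a function from the sample):

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* request YVYU frames from an already-opened capture device;
 * returns 0 on success, -1 if the driver could not be configured */
static int set_yvyu_format(int fd, unsigned int width, unsigned int height)
{
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = width;
    fmt.fmt.pix.height      = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YVYU;
    fmt.fmt.pix.field       = V4L2_FIELD_ANY;

    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        return -1;

    /* the driver may silently adjust the format; check what it actually gave us */
    if (fmt.fmt.pix.pixelformat != V4L2_PIX_FMT_YVYU)
        return -1;

    return 0;
}
```

V4L2 drivers are allowed to adjust the requested format, so checking what comes back is what separates a garbled picture from a correct one.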
Next, viewer.c is modified accordingly so that it displays the YVYU format correctly. The changes are:
```
[root@localhost V4l2_samples-0.4.1]# diff viewer.c viewer.c.bak
220,298d219
< //read the data from memory, converts that data to RGB, and call Put (shows the picture)
< static void process_image_yuv422 (uint8_t * videoFrame, image_context image_ctx, int width, int height)
< {
<     XImage * xImage1 = image_ctx.xImage;
<     uint8_t * imageLine1;
<     int xx,yy;
<     int x,y;
<     int bpl,Bpp,amp;
<     double r,g,b;
<     unsigned char Y,U,V;
<     unsigned char R,G,B,RG,GB;
<     imageLine1 = (uint8_t *) xImage1->data;
<     bpl=xImage1->bytes_per_line;
<     Bpp=xImage1->bits_per_pixel/8;
<     int ret;
<
<     ret = read(STDIN_FILENO, videoFrame , width*height*2);
<
<     switch (xImage1->depth)
<     {
<     case 16://process one entire frame
<         for (yy = 0; yy < (height); yy++)
<         {
<             for (xx =0; xx < (width/2); xx++)
<             {
<                 //in every loop 2 pixels are processed
<                 V = videoFrame[(width*2*y)+(1*x+1)];
<                 U = videoFrame[(width*2*y)+(1*x+3)];
<
<                 Y = videoFrame[(width*2*y)+(1*x)];
<                 yuv_to_rgb_16(Y, U, V, &RG, &GB);
<                 imageLine1[(bpl*y)+(Bpp*x)]=GB;
<                 imageLine1[(bpl*y)+(Bpp*x)+1]=RG;
<
<                 Y = videoFrame[(width*2*y)+(1*(x+2))];
<                 yuv_to_rgb_16(Y, U, V, &RG, &GB);
<                 imageLine1[(bpl*y)+(Bpp*(x+1))]=GB;
<                 imageLine1[(bpl*y)+(Bpp*(x+1))+1]=RG;
<             }
<         }
<         break;
<
<     case 24:
<         for (yy = 0; yy < (height); yy++)
<         {
<             for (xx =0; xx < (width/2); xx++)
<             {
<                 x=2*xx;
<                 y=yy;
<
<                 //in every loop 2 pixels are processed
<                 V = videoFrame[(width*2*y)+(2*x+1)];
<                 U = videoFrame[(width*2*y)+(2*x+3)];
<
<                 Y = videoFrame[(width*2*y)+(2*x)];
<                 yuv_to_rgb_24(Y, U, V, &R, &G, &B);
<                 imageLine1[(bpl*y)+(Bpp*x)]=B;
<                 imageLine1[(bpl*y)+(Bpp*x)+1]=G;
<                 imageLine1[(bpl*y)+(Bpp*x)+2]=R;
<
<                 Y = videoFrame[(width*2*y)+(2*(x+2))];
<                 yuv_to_rgb_24(Y, U, V, &R, &G, &B);
<                 imageLine1[(bpl*y)+(Bpp*(x+1))]=B;
<                 imageLine1[(bpl*y)+(Bpp*(x+1))+1]=G;
<                 imageLine1[(bpl*y)+(Bpp*(x+1))+2]=R;
<             }
<         }
<         break;
<     default:
<         fprintf(stderr,"\nError: Color depth not supported\n");
<         exit(EXIT_FAILURE);
<         break;
<     }
<     image_put(image_ctx, 0, 0, 0, 0, width, height);
<
<     if (XPending(image_ctx.display) > 0)
<         XNextEvent(image_ctx.display, image_ctx.event);    //refresh the picture
< }
<
467,468c388
<     PIX_FMT_RGB32,
<     PIX_FMT_YUYV
---
>     PIX_FMT_RGB32
778,780d697
<         break;
<     case PIX_FMT_YUYV:
<         process_image_yuv422 (videoFrame, img_ctx, width, height);
```
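The yuv_to_rgb_16/yuv_to_rgb_24 helpers already exist in viewer.c; conceptually they perform the usual BT.601-style YUV-to-RGB conversion. Below is a rough, standalone sketch of the 24-bit case (an integer approximation, not the sample's exact implementation):

```c
#include <stdint.h>

/* clamp an intermediate value into the valid 0..255 byte range */
static uint8_t clamp_u8(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

/* integer approximation of the BT.601 full-range YUV -> RGB transform */
static void yvyu_pixel_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                              uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = (int)y;
    int d = (int)u - 128;
    int e = (int)v - 128;

    *r = clamp_u8(c + ((359 * e) >> 8));              /* + 1.402 * V           */
    *g = clamp_u8(c - ((88 * d + 183 * e) >> 8));     /* - 0.344*U - 0.714*V   */
    *b = clamp_u8(c + ((454 * d) >> 8));              /* + 1.772 * U           */
}
```

In YVYU, two horizontally adjacent pixels share one U/V pair, which is why the loops in the diff above convert two pixels per iteration.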
After running make, execute:

```
./capturer_mmap -D /dev/video0 -w 640*480 -p 3 | ./viewer -w 640*480 -p 3
```

The test picture now plays back and displays correctly.