BT.656 Video Signal Decoding
The BT.656 Protocol Standard
ITU-R BT.601 and ITU-R BT.656 are standards issued by the ITU-R (the Radiocommunication Sector of the International Telecommunication Union). Strictly speaking, ITU-R BT.656 is a companion sub-protocol of ITU-R BT.601.
The two standards differ as follows:
ITU-R BT.601: 16-bit data transfer; Y, U and V are transmitted at the same time as parallel data, and the horizontal and vertical sync signals are output on separate lines.
ITU-R BT.656: 8/10-bit data transfer; no separate sync signals are needed. The samples are time-multiplexed onto a single bus, so the byte rate is twice that of BT.601, with luma and chroma interleaved (Cb, Y, Cr, Y, ...). The horizontal and vertical sync information is embedded in the data stream.
BT.656
For PAL at a resolution of 720×576, each frame has 576 active lines: 288 in the odd field and 288 in the even field.
De-interlacing method:
Two memory devices are used in a ping-pong (alternating read/write) fashion. The steps are: the odd-field data of the first input frame is first written into memory A, one line in every other row; the even-field data is then written into the rows between those odd-field lines, so that A holds one complete progressive frame. In the same way, the odd and even fields of the second frame are written into memory B, and while B is being written, the first frame previously stored in A is read out line by line. This continues in alternation, with memories A and B constantly swapping between write and read roles, which completes the conversion from interlaced to progressive scanning.
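As a rough illustration of this ping-pong control (not the code used later in this article), a minimal Verilog sketch could look like the following; the two frame buffers themselves and all signal names (pix_valid, line_in_field, frame_done, wr_sel, and so on) are hypothetical and only serve the example.

// Minimal sketch of the ping-pong de-interlacing control, assuming 720x576 PAL,
// one byte per pixel, and two external dual-port frame buffers selected by wr_sel.
// The write side interleaves the two fields of the current frame into one buffer;
// the read side scans the other buffer progressively (addresses 0 .. 720*576-1).
module deinterlace_pingpong
(
    input         clk,
    input         rst,
    input         field,          // 0 = odd (top) field, 1 = even (bottom) field
    input         pix_valid,      // active-pixel strobe from the BT.656 decoder
    input  [9:0]  pix_x,          // 0..719, column of the current pixel
    input  [8:0]  line_in_field,  // 0..287, line counter within the current field
    input         frame_done,     // pulses once when a full input frame has been stored
    output reg    wr_sel,         // 0: write buffer A / read buffer B, 1: the opposite
    output        wr_en,          // write strobe for the selected buffer
    output [18:0] wr_addr         // linear write address into the selected buffer
);
    // Interleave the two fields: progressive row = 2*line_in_field + field
    wire [9:0] frame_row = {line_in_field, 1'b0} + field;
    assign wr_en   = pix_valid;
    assign wr_addr = frame_row * 720 + pix_x;

    // Swap the roles of the two buffers at every frame boundary
    always @(posedge clk or posedge rst) begin
        if (rst)
            wr_sel <= 1'b0;
        else if (frame_done)
            wr_sel <= ~wr_sel;
    end
endmodule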
Almost every design that captures analog video runs into the BT.656 standard, because the common analog video decoder chips all support BT.656 digital output. So what exactly is the BT.656 format?
This article mainly describes the standard 8-bit BT.656 (4:2:2) YCbCr SDTV (standard definition) digital video format, and is intended as an introduction for readers just starting out with analog video capture.
1. The concept of a frame (Frame)
A video sequence consists of N frames. When images are captured there are generally two scanning methods: progressive scanning and interlaced scanning. With interlaced scanning each frame normally has two fields, a top field and a bottom field. If a frame has 720 lines, the top field contains all of its even lines and the bottom field contains all of its odd lines.
2. The concept of a field (Field)
Note that for the top and bottom fields above the word "contains" was used, rather than saying a field consists only of those lines, because in the BT.656 standard a field is made up of three parts:
Field = first vertical blanking (First Vertical Blanking) + active video lines (Active Video) + second vertical blanking (Second Vertical Blanking)
For the top field the active lines are all the even lines of the frame, and for the bottom field they are all the odd lines. The number of blanking lines also differs between the two fields. For a standard 8-bit BT.656 (4:2:2) SDTV video, one frame is laid out as follows:
From the figure above we can see that in PAL each frame has 625 lines, of which the top field carries 288 active lines and the bottom field also carries 288 active lines; the remaining lines are vertical blanking. Why 288 lines? Because the PAL SDTV (D1) resolution is 720×576, i.e. a frame has 576 active lines, so each field has 288.
The figure also shows that the active lines of the top field start at line 23, and the active lines of the bottom field start at line 335.
In addition, in the figure F marks the odd/even field and V marks whether the line is vertical blanking.
3. The composition of a line (Lines)
Each line is made up of four parts:
Line = end of active video code (EAV) + horizontal blanking (Horizontal Blanking) + start of active video code (SAV) + active video data (Active Video)
A typical line is organized as shown in the figure below:
The start code (SAV) and end code (EAV) are important markers for the beginning and end of a line, and they also carry other important information, described later.
As for why the horizontal blanking is 280 bytes, I have not checked this against the standard myself; it is at least consistent with the line totals, since a 625-line system has 864 samples per line, i.e. 864 x 2 = 1728 bytes, and 1728 - 1440 - 4 - 4 = 280.
Why are there 1440 bytes of active data in a line? Because the PAL SDTV (D1) resolution is 720×576, i.e. a line has 720 active pixels. Since the captured image is in color, a line carries both luma (Y) and chroma (CbCr) information; in YCbCr 4:2:2 format a line has 720 Y samples and 720 chroma samples (360 Cb and 360 Cr), so the number of active bytes per line is naturally 720 x 2 = 1440.
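For reference, the per-line byte budget for 8-bit BT.656 with 625/50 (PAL) timing can be written out as constants; this only restates the arithmetic above and is not part of the design later in this article.

// Per-line byte budget of 8-bit BT.656 for 625/50 (PAL) timing
localparam LINE_TOTAL_BYTES = 864 * 2;   // 1728: 864 samples per line, 2 bytes each (Y + C)
localparam ACTIVE_BYTES     = 720 * 2;   // 1440: 720 Y + 360 Cb + 360 Cr
localparam EAV_BYTES        = 4;         // FF 00 00 XY
localparam SAV_BYTES        = 4;         // FF 00 00 XY
localparam H_BLANK_BYTES    = LINE_TOTAL_BYTES - ACTIVE_BYTES - EAV_BYTES - SAV_BYTES; // 280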
4. EAV and SAV
EAV and SAV are each 4 bytes long. As the figure above shows, the active video data follows immediately after SAV. So what do EAV and SAV look like?
The 4 bytes of EAV and SAV are defined as follows (in hexadecimal): FF 00 00 XY
The first three bytes are fixed and must be FF 00 00, while the fourth byte (XY) depends on the field and blanking state. Its 8 bits are, from MSB to LSB: 1 F V H P3 P2 P1 P0
F marks the field: 0 while the top field is being transmitted, 1 for the bottom field.
V marks blanking: 1 while blanking data is being transmitted, 0 during active video.
H distinguishes SAV from EAV: 0 for SAV, 1 for EAV.
P0-P3 are protection bits whose values depend on F, V and H and act as a parity check; they are computed as P3 = V ^ H, P2 = F ^ H, P1 = F ^ V, P0 = F ^ V ^ H.
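To make the XY byte layout concrete, here is a minimal Verilog sketch (not the decoder used later in this article) that watches the byte stream for the FF 00 00 prefix and then splits the code byte into its F, V and H fields, checking the protection bits along the way; all signal names are chosen only for this example.

// Minimal SAV/EAV detector for an 8-bit BT.656 stream (a sketch, not the design below).
// It tracks the last three bytes and, when they equal FF 00 00, interprets the
// current byte as the XY code word and checks its protection bits.
module bt656_sav_eav_decode
(
    input            clk,
    input            rst,
    input      [7:0] d,          // BT.656 byte stream, one byte per clock
    output reg       code_valid, // pulses for one clock when a valid XY byte is seen
    output reg       f_bit,      // 0: field 1 (top), 1: field 2 (bottom)
    output reg       v_bit,      // 1: vertical blanking, 0: active video
    output reg       h_bit       // 0: SAV, 1: EAV
);
    reg [7:0] d1, d2, d3;        // last three bytes, d3 is the oldest
    wire preamble = (d3 == 8'hFF) && (d2 == 8'h00) && (d1 == 8'h00);

    // XY byte layout: {1, F, V, H, P3, P2, P1, P0}
    wire f = d[6], v = d[5], h = d[4];
    // Protection bits: P3 = V^H, P2 = F^H, P1 = F^V, P0 = F^V^H
    wire [3:0] p_expected = {v ^ h, f ^ h, f ^ v, f ^ v ^ h};

    always @(posedge clk or posedge rst) begin
        if (rst) begin
            {d1, d2, d3}                      <= 24'h0;
            {code_valid, f_bit, v_bit, h_bit} <= 4'b0;
        end
        else begin
            {d3, d2, d1} <= {d2, d1, d};     // shift the byte history
            if (preamble && d[7] && (d[3:0] == p_expected)) begin
                code_valid <= 1'b1;
                f_bit      <= f;
                v_bit      <= v;
                h_bit      <= h;
            end
            else
                code_valid <= 1'b0;
        end
    end
endmodule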
Decoding the 8-bit BT.656 (4:2:2) digital video data
(Note: the BT.656 video signal here is actually the output of a TW9912 decoder chip.)
The test platform is an Alinx 7020 board; the converted RGB data is encoded and output for display over HDMI (following the board's reference design). In the displayed test image the first few lines are black bars. My guess is that the module treats data as active right from the first line, whereas the BT.656 stream only carries active video from line 23 onwards; I have not investigated further.
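If that hypothesis is right, one possible (untested) workaround would be to gate the data-enable so that the first lines after vsync are dropped before the data reaches the processing chain; a rough sketch follows, with placeholder signal names that are not ports of the bt656_to_rgb module below.

// Untested sketch of one possible fix for the black lines at the top of the picture:
// suppress the data enable for the first SKIP_LINES lines after vsync.
module skip_leading_lines #(parameter SKIP_LINES = 22) // field-1 active video starts at line 23
(
    input  clk,
    input  rst,
    input  vsync,      // frame/field start pulse
    input  line_start, // one pulse at the start of every line
    input  de_in,      // data enable from the decoder
    output de_out      // data enable with the first SKIP_LINES lines removed
);
    reg [5:0] skip_cnt;
    always @(posedge clk or posedge rst) begin
        if (rst)
            skip_cnt <= 0;
        else if (vsync)
            skip_cnt <= 0;                       // restart the count every frame/field
        else if (line_start && skip_cnt < SKIP_LINES)
            skip_cnt <= skip_cnt + 1'b1;
    end
    assign de_out = de_in && (skip_cnt == SKIP_LINES);
endmodule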
The RTL top-level module is as follows:
`timescale 1ns / 1ps
//////////////////////////////////////////////////////////////////////////////////
// Company:
// Engineer:
//
// Create Date: 2020/05/09 11:09:41
// Design Name:
// Module Name: top
// Project Name:
// Target Devices:
// Tool Versions:
// Description:
//
// Dependencies:
//
// Revision:
// Revision 0.01 - File Created
// Additional Comments:
//
//////////////////////////////////////////////////////////////////////////////////
module bt656_to_rgb
(
    // system reset
    input               RST,              // system reset
    // BT656 input
    input               CLK_i,            // clock
    input  [7:0]        bt_data_i,        // video data
    input               VSYNC_i,          // bt656 vertical synchronization
    input               HSYNC_i,          // bt656 horizontal synchronization
    // output
    output              post_frame_clk_o, // pix_out_clock
    output wire         post_frame_href,  // Hs / de
    output wire         post_frame_vsync, // Vs
    output              Video_active,     // data enable
    output wire [23:0]  post_img_rgb_0    // video data output
);
//define
wire        clk27, clk54;
reg  [10:0] pixcnt_total;
reg  [7:0]  Yreg, Crreg, Cbreg;
reg         hs_in_d;
reg  [7:0]  pix_data;
reg  [15:0] pix_cnt;
reg         active_video;
reg  [15:0] row_cnt;
reg  [2:0]  state;
reg         hs_, vs_, hs_reg, vs_reg;
wire [7:0]  R_o1, G_o1, B_o1;
reg  [7:0]  R_o2, G_o2, B_o2;

always @(posedge CLK_i or posedge RST)
begin
    if(RST)
        hs_in_d <= 0;
    else begin
        hs_in_d <= HSYNC_i;
    end
end

reg active_video_de;
reg Video_active_reg;
//reg [2:0] state;
always @(posedge CLK_i or posedge RST)
begin
    if(RST) begin
        pixcnt_total    <= 0;
        pix_cnt         <= 0;
        active_video_de <= 0;
        row_cnt         <= 0;
        hs_             <= 0;
        vs_             <= 0;
    end
    else
        case(state)
        3'd0: begin
            if(VSYNC_i) begin
                state <= 3'd1;
            end
        end
        3'd1: begin
            if(HSYNC_i && !hs_in_d) begin
                pixcnt_total <= 0;
            end
            else begin
                pixcnt_total <= pixcnt_total + 1;
            end
            //720 pixels per line: 720 Y luma samples, 360 Cb and 360 Cr chroma samples (720*576)
            if((pixcnt_total >= 288) && (pixcnt_total <= 1728)) begin
                pix_cnt         <= pix_cnt + 1;
                active_video_de <= 1;
            end
            else begin
                pix_cnt         <= 0;
                active_video_de <= 0;
            end
            //line counter
            if(pix_cnt == 1440) begin
                hs_     <= 1;
                row_cnt <= row_cnt + 1;
            end
            else begin
                hs_ <= 0;
            end
            if(row_cnt == 576) begin
            //if(row_cnt == 720) begin
            //if(row_cnt == 582) begin
            //if(row_cnt == 625) begin
                state            <= 3'd0;
                row_cnt          <= 0;
                vs_              <= 1;
                Video_active_reg <= 1;
            end
            else begin
                vs_              <= 0;
                Video_active_reg <= 0;
            end
        end
        endcase
end

assign Video_active = Video_active_reg;

wire [7:0] post_img_red_0;   //Processed Image Red output
wire [7:0] post_img_green_0; //Processed Image Green output
wire [7:0] post_img_blue_0;  //Processed Image Blue output

Video_Image_Processor u_Video_Image_Processor_0
(
    //global clock
    .clk              (CLK_i),            //cmos video pixel clock
    .rst_n            (1),                //global reset
    .per_frame_vsync  (vs_),              //Prepared Image data vsync valid signal
    .per_frame_href   (active_video_de),  //Prepared Image data href valid signal
    .per_frame_clken  (active_video_de),  //Prepared Image data output/capture enable clock
    .per_frame_YCbCr  (bt_data_i),        //Prepared Image YCbCr data to be processed
    //Image data has been processed
    .post_frame_vsync (post_frame_vsync), //Processed Image data vsync valid signal
    .post_frame_href  (post_frame_href),  //Processed Image data href valid signal
    .post_frame_clken (),                 //Processed Image data output/capture enable clock
    .post_img_red     (post_img_red_0),   //Processed Image Red output
    .post_img_green   (post_img_green_0), //Processed Image Green output
    .post_img_blue    (post_img_blue_0)   //Processed Image Blue output
);

assign post_frame_clk_o = CLK_i;
assign post_img_rgb_0   = {post_img_red_0, post_img_green_0, post_img_blue_0};

endmodule
Submodule 1:
`timescale 1ns/1ns
module Video_Image_Processor
(
    //global clock
    input             clk,              //cmos video pixel clock
    input             rst_n,            //global reset
    //Image data prepared to be processed
    input             per_frame_vsync,  //Prepared Image data vsync valid signal
    input             per_frame_href,   //Prepared Image data href valid signal
    input             per_frame_clken,  //Prepared Image data output/capture enable clock
    input  [7:0]      per_frame_YCbCr,  //Prepared Image data of YCbCr 4:2:2 {CbY} {CrY}
    //Image data has been processed
    output            post_frame_vsync, //Processed Image data vsync valid signal
    output            post_frame_href,  //Processed Image data href valid signal
    output            post_frame_clken, //Processed Image data output/capture enable clock
    output [7:0]      post_img_red,     //Processed Image Red output
    output [7:0]      post_img_green,   //Processed Image Green output
    output [7:0]      post_img_blue     //Processed Image Blue output
);

//-------------------------------------
//Convert the YCbCr4:2:2 format to YCbCr4:4:4 format.
//CMOS YCbCr444 data output
wire       post1_frame_vsync; //Processed Image data vsync valid signal
wire       post1_frame_href;  //Processed Image data href valid signal
wire       post1_frame_clken; //Processed Image data output/capture enable clock
wire [7:0] post1_img_Y;       //Processed Image data of YCbCr 4:4:4
wire [7:0] post1_img_Cb;      //Processed Image data of YCbCr 4:4:4
wire [7:0] post1_img_Cr;      //Processed Image data of YCbCr 4:4:4

VIP_YCbCr422_YCbCr444 u_VIP_YCbCr422_YCbCr444
(
    //global clock
    .clk              (clk),               //cmos video pixel clock
    .rst_n            (rst_n),             //system reset
    //Image data prepared to be processed
    .per_frame_vsync  (per_frame_vsync),   //Prepared Image data vsync valid signal
    .per_frame_href   (per_frame_href),    //Prepared Image data href valid signal
    .per_frame_clken  (per_frame_clken),   //Prepared Image data output/capture enable clock
    .per_frame_YCbCr  (per_frame_YCbCr),   //Prepared Image YCbCr data to be processed
    //Image data has been processed
    .post_frame_vsync (post1_frame_vsync), //Processed Image data vsync valid signal
    .post_frame_href  (post1_frame_href),  //Processed Image data href valid signal
    .post_frame_clken (post1_frame_clken), //Processed Image data output/capture enable clock
    .post_img_Y       (post1_img_Y),       //Processed Image brightness output
    .post_img_Cb      (post1_img_Cb),      //Processed Image blue-difference output
    .post_img_Cr      (post1_img_Cr)       //Processed Image red-difference output
);

//-------------------------------------
//Convert the YCbCr444 format to RGB888 format.
VIP_YCbCr444_RGB888 u_VIP_YCbCr444_RGB888
(
    //global clock
    .clk              (clk),               //cmos video pixel clock
    .rst_n            (rst_n),             //system reset
    //Image data prepared to be processed
    .per_frame_vsync  (post1_frame_vsync), //Prepared Image data vsync valid signal
    .per_frame_href   (post1_frame_href),  //Prepared Image data href valid signal
    .per_frame_clken  (post1_frame_clken), //Prepared Image data output/capture enable clock
    .per_img_Y        (post1_img_Y),       //Prepared Image data of Y
    .per_img_Cb       (post1_img_Cb),      //Prepared Image data of Cb
    .per_img_Cr       (post1_img_Cr),      //Prepared Image data of Cr
    //Image data has been processed
    .post_frame_vsync (post_frame_vsync),  //Processed Image data vsync valid signal
    .post_frame_href  (post_frame_href),   //Processed Image data href valid signal
    .post_frame_clken (post_frame_clken),  //Processed Image data output/capture enable clock
    .post_img_red     (post_img_red),      //Processed Image Red output
    .post_img_green   (post_img_green),    //Processed Image Green output
    .post_img_blue    (post_img_blue)      //Processed Image Blue output
);

endmodule
Submodule 2:
`timescale 1ns/1ns
module VIP_YCbCr422_YCbCr444
(
    //global clock
    input             clk,              //cmos video pixel clock
    input             rst_n,            //global reset
    //CMOS 8Bit YCbCr 4:2:2 data input: {Cb, Y, Cr, Y, ...}
    input             per_frame_vsync,  //Prepared Image data vsync valid signal
    input             per_frame_href,   //Prepared Image data href valid signal
    input             per_frame_clken,  //Prepared Image data output/capture enable clock
    input  [7:0]      per_frame_YCbCr,  //Prepared Image data of YCbCr 4:2:2 CbYCrY
    //CMOS YCbCr444 data output
    output            post_frame_vsync, //Processed Image data vsync valid signal
    output            post_frame_href,  //Processed Image data href valid signal
    output            post_frame_clken, //Processed Image data output/capture enable clock
    output reg [7:0]  post_img_Y,       //Processed Image data of YCbCr 4:4:4
    output reg [7:0]  post_img_Cb,      //Processed Image data of YCbCr 4:4:4
    output reg [7:0]  post_img_Cr       //Processed Image data of YCbCr 4:4:4
);

//------------------------------------------
//lag n pixel clocks
reg [4:0] post_frame_vsync_r;
reg [4:0] post_frame_href_r;
reg [4:0] post_frame_clken_r;
always@(posedge clk or negedge rst_n)
begin
    if(!rst_n)
        begin
        post_frame_vsync_r <= 0;
        post_frame_href_r  <= 0;
        post_frame_clken_r <= 0;
        end
    else
        begin
        post_frame_vsync_r <= {post_frame_vsync_r[3:0], per_frame_vsync};
        post_frame_href_r  <= {post_frame_href_r[3:0],  per_frame_href};
        post_frame_clken_r <= {post_frame_clken_r[3:0], per_frame_clken};
        end
end
assign post_frame_vsync = post_frame_vsync_r[4];
assign post_frame_href  = post_frame_href_r[4];
assign post_frame_clken = post_frame_clken_r[4];

wire yuv_process_href  = per_frame_href  || post_frame_href_r[3];
wire yuv_process_clken = per_frame_clken || post_frame_clken_r[3];

//-------------------------------------------
//convert YCbCr422 to YCbCr444
reg [3:0] yuv_state;
reg [7:0] mY0, mY1, mY2, mY3;
reg [7:0] mCb0, mCb1;
reg [7:0] mCr0, mCr1;
always@(posedge clk or negedge rst_n)
begin
    if(!rst_n)
        begin
        yuv_state         <= 4'd0;
        {mY0, mCb0, mCr0} <= {8'h0, 8'h0, 8'h0};
        mY1               <= 8'h0;
        {mY2, mCb1, mCr1} <= {8'h0, 8'h0, 8'h0};
        mY3               <= 8'h0;
        {post_img_Y, post_img_Cb, post_img_Cr} <= {8'h0, 8'h0, 8'h0};
        end
    else if(yuv_process_href) //lag 2 data enable clock and need 2 more clocks
        begin
        if(yuv_process_clken) //lag 2 data enable clock and need 2 more clocks
            case(yuv_state) //---YCbCr
            4'd0: begin //reg p0
                yuv_state <= 4'd1;
                {mCb0}    <= per_frame_YCbCr;
                end
            4'd1: begin //reg p1
                yuv_state <= 4'd2;
                {mY0}     <= per_frame_YCbCr;
                end
            4'd2: begin //p0; reg p2
                yuv_state <= 4'd3;
                {mCr0}    <= per_frame_YCbCr;
                end
            4'd3: begin //p1; reg p4
                yuv_state <= 4'd4;
                {mY1}     <= per_frame_YCbCr;
                end
            4'd4: begin //p2; reg p0
                yuv_state <= 4'd5;
                {mCb1}    <= per_frame_YCbCr;
                {post_img_Y, post_img_Cb, post_img_Cr} <= {mY0, mCb0, mCr0};
                end
            4'd5: begin //p4; reg p1
                yuv_state <= 4'd6;
                {mY2}     <= per_frame_YCbCr;
                {post_img_Y, post_img_Cb, post_img_Cr} <= {mY1, mCb0, mCr0};
                end
            4'd6: begin //p2; reg p0
                yuv_state <= 4'd7;
                {mCr1}    <= per_frame_YCbCr;
                end
            4'd7: begin //p4; reg p1
                yuv_state <= 4'd8;
                {mY3}     <= per_frame_YCbCr;
                end
            4'd8: begin //p2; reg p0
                yuv_state <= 4'd9;
                {mCb0}    <= per_frame_YCbCr;
                {post_img_Y, post_img_Cb, post_img_Cr} <= {mY2, mCb1, mCr1};
                end
            4'd9: begin //p4; reg p1
                yuv_state <= 4'd10;
                {mY0}     <= per_frame_YCbCr;
                {post_img_Y, post_img_Cb, post_img_Cr} <= {mY3, mCb1, mCr1};
                end
            4'd10: begin //p2; reg p0
                yuv_state <= 4'd11;
                {mCr0}    <= per_frame_YCbCr;
                end
            4'd11: begin //p4; reg p1
                yuv_state <= 4'd4;
                {mY1}     <= per_frame_YCbCr;
                end
            endcase
        else
            begin
            yuv_state         <= yuv_state;
            {mY0, mCb0, mCr0} <= {mY0, mCb0, mCr0};
            mY1               <= mY1;
            {mY2, mCb1, mCr1} <= {mY2, mCb1, mCr1};
            mY3               <= mY3;
            {post_img_Y, post_img_Cb, post_img_Cr} <= {post_img_Y, post_img_Cb, post_img_Cr};
            end
        end
    else
        begin
        yuv_state         <= 4'd0;
        {mY0, mCb0, mCr0} <= {8'h0, 8'h0, 8'h0};
        {mY1, mCb1, mCr1} <= {8'h0, 8'h0, 8'h0};
        {post_img_Y, post_img_Cb, post_img_Cr} <= {8'h0, 8'h0, 8'h0};
        end
end
endmodule
Submodule 3:
`timescale 1ns/1ns
module VIP_YCbCr444_RGB888
(
    //global clock
    input             clk,              //cmos video pixel clock
    input             rst_n,            //global reset
    //CMOS YCbCr444 data input
    input             per_frame_vsync,  //Prepared Image data vsync valid signal
    input             per_frame_href,   //Prepared Image data href valid signal
    input             per_frame_clken,  //Prepared Image data output/capture enable clock
    input  [7:0]      per_img_Y,        //Prepared Image data of Y
    input  [7:0]      per_img_Cb,       //Prepared Image data of Cb
    input  [7:0]      per_img_Cr,       //Prepared Image data of Cr
    //CMOS RGB888 data output
    output            post_frame_vsync, //Processed Image data vsync valid signal
    output            post_frame_href,  //Processed Image data href valid signal
    output            post_frame_clken, //Processed Image data output/capture enable clock
    output [7:0]      post_img_red,     //Processed Image Red output
    output [7:0]      post_img_green,   //Processed Image Green output
    output [7:0]      post_img_blue     //Processed Image Blue output
);

//--------------------------------------------
/*********************************************
    R = 1.164(Y-16) + 1.596(Cr-128)
    G = 1.164(Y-16) - 0.391(Cb-128) - 0.813(Cr-128)
    B = 1.164(Y-16) + 2.018(Cb-128)
->
    R = 1.164Y + 1.596Cr - 222.912
    G = 1.164Y - 0.391Cb - 0.813Cr + 135.488
    B = 1.164Y + 2.018Cb - 276.928
->
    R << 9 = 596Y + 817Cr - 114131
    G << 9 = 596Y - 200Cb - 416Cr + 69370
    B << 9 = 596Y + 1033Cb - 141787
**********************************************/
reg [19:0] img_Y_r1; //8 + 9 + 1 = 18Bit
reg [19:0] img_Cb_r1, img_Cb_r2;
reg [19:0] img_Cr_r1, img_Cr_r2;
always@(posedge clk or negedge rst_n)
begin
    if(!rst_n)
        begin
        img_Y_r1  <= 0;
        img_Cb_r1 <= 0; img_Cb_r2 <= 0;
        img_Cr_r1 <= 0; img_Cr_r2 <= 0;
        end
    else
        begin
        img_Y_r1  <= per_img_Y  * 18'd596;
        img_Cb_r1 <= per_img_Cb * 18'd200;
        img_Cb_r2 <= per_img_Cb * 18'd1033;
        img_Cr_r1 <= per_img_Cr * 18'd817;
        img_Cr_r2 <= per_img_Cr * 18'd416;
        end
end

//--------------------------------------------
/**********************************************
    R << 9 = 596Y + 817Cr - 114131
    G << 9 = 596Y - 200Cb - 416Cr + 69370
    B << 9 = 596Y + 1033Cb - 141787
**********************************************/
reg [19:0] XOUT;
reg [19:0] YOUT;
reg [19:0] ZOUT;
always@(posedge clk or negedge rst_n)
begin
    if(!rst_n)
        begin
        XOUT <= 0;
        YOUT <= 0;
        ZOUT <= 0;
        end
    else
        begin
        XOUT <= (img_Y_r1 + img_Cr_r1 - 20'd114131) >> 9;
        YOUT <= (img_Y_r1 - img_Cb_r1 - img_Cr_r2 + 20'd69370) >> 9;
        ZOUT <= (img_Y_r1 + img_Cb_r2 - 20'd141787) >> 9;
        end
end

//------------------------------------------
//Divide by 512 and clamp the result
//{xx[19:11], xx[10:0]}
reg [7:0] R, G, B;
always@(posedge clk or negedge rst_n)
begin
    if(!rst_n)
        begin
        R <= 0;
        G <= 0;
        B <= 0;
        end
    else
        begin
        R <= XOUT[10] ? 8'd0 : (XOUT[9:0] > 9'd255) ? 8'd255 : XOUT[7:0];
        G <= YOUT[10] ? 8'd0 : (YOUT[9:0] > 9'd255) ? 8'd255 : YOUT[7:0];
        B <= ZOUT[10] ? 8'd0 : (ZOUT[9:0] > 9'd255) ? 8'd255 : ZOUT[7:0];
        end
end

//------------------------------------------
//lag n clocks signal sync
reg [2:0] post_frame_vsync_r;
reg [2:0] post_frame_href_r;
reg [2:0] post_frame_clken_r;
always@(posedge clk or negedge rst_n)
begin
    if(!rst_n)
        begin
        post_frame_vsync_r <= 0;
        post_frame_href_r  <= 0;
        post_frame_clken_r <= 0;
        end
    else
        begin
        post_frame_vsync_r <= {post_frame_vsync_r[1:0], per_frame_vsync};
        post_frame_href_r  <= {post_frame_href_r[1:0],  per_frame_href};
        post_frame_clken_r <= {post_frame_clken_r[1:0], per_frame_clken};
        end
end
assign post_frame_vsync = post_frame_vsync_r[2];
assign post_frame_href  = post_frame_href_r[2];
assign post_frame_clken = post_frame_clken_r[2];

assign post_img_red   = post_frame_href ? R : 8'd0;
assign post_img_green = post_frame_href ? G : 8'd0;
assign post_img_blue  = post_frame_href ? B : 8'd0;

endmodule