Perception (1.2.1-1.4)


4.1.2.2 Image Coordinate System

Based on the camera transformation matrix, another coordinate system is provided which applies to the camera image. The ImageCoordinateSystem is provided by the module CoordinateSystemProvider. The origin of the y-coordinate lies on the horizon within the image (even if it is not visible in the image). The x-axis points right along the horizon, while the y-axis points downwards, orthogonal to the horizon (cf. Fig. 4.4). For more information see also [29]. Using the stored camera transformation matrix of the previous cycle in which the same camera took an image enables the CoordinateSystemProvider to determine the rotation speed of the camera and thereby interpolate its orientation when recording each image row. As a result, the representation ImageCoordinateSystem provides a mechanism to compensate for the different recording times of images and joint angles as well as for image distortion caused by the rolling shutter. For a detailed description of this method, applied to the Sony AIBO, see [23].

4.1.3 Body Contour
If the robot sees parts of its body, it might confuse white areas with field lines or other robots. However, by using forward kinematics, the robot can actually know where its body is visible in the camera image and exclude these areas from image processing. This is achieved by modeling the boundaries of the body parts that are potentially visible in 3-D (cf. Fig. 4.5 left) and projecting them back into the camera image (cf. Fig. 4.5 right). The part of the projection that intersects with the camera image or lies above it is provided in the representation BodyContour. It is used by image processing modules as a lower clipping boundary. The projection relies on the representation ImageCoordinateSystem, i.e., on the linear interpolation of the joint angles to match the time when the image was taken.

4.1.4 Color Classification
Identifying the color classes of pixels in the image is done by the ECImageProvider when computing the ECImage. In order to be able to clearly distinguish different colors and easily define color classes while still being able to compute the ECImage for every camera image in real time,
the YHS2 color space is used.

YHS2 Color Space

The YHS2 color space is defined by applying the idea behind the HSV color space, i.e. defining
the chroma components as a vector in the RGB color wheel, to the YUV color space. In YHS2,
the hue component H describes the angle of the vector of the U and V components of the color
in the YUV color space, while the saturation component S describes the length of that vector
divided by the luminance of the corresponding pixel. The luminance component Y is just the
same as it is in YUV. By dividing the saturation by the luminance, the resulting saturation
value describes the actual saturation of the color more accurately, making it more useful for
separating black and white from actual colors. This is because in YUV, the chroma components
are somewhat dependent of the luminance (cf. Fig. 4.6).

Classification Method

Classifying a pixel’s color is done by first applying a threshold to the saturation channel. If it
is below the given threshold, the pixel is considered to describe a non-color, i.e. black or white.
In this case, whether the color is black or white is determined by applying another threshold
to the luminance channel. However, if the saturation of the given pixel is above the saturation
threshold, the pixel is of a certain color, if its hue value lies within the hue range defined for
that color.

This approach was used throughout RoboCup 2016 and provided very good results. It is, however, possible to use an alternative classification method by setting the simpleClassification parameter of the ECImageProvider to false. In this case, colors are defined not only by a hue
range, but by one range of values for each of H, S, and Y. Additionally, white and black are not
separated by a single threshold on the Y channel; instead, there is a minimum Y threshold for white
and a maximum Y threshold for black, so that low-saturation pixels in between remain unclassified.

In order to classify the whole camera image in real time, both the color conversion to YHS2 and
the color classification are done using SSE instructions.

Fig. 4.7 shows representations of an image from the upper camera in the YHS2 color space and a classification based on it for the colors white, black, green and “none” (displayed gray).
