Depth Image Colorization Formula
I am currently testing depth image colorization, which converts depth values to color values (in the HUE color space).
I also wrote a Python program according to the formula described in the whitepaper, and I wonder whether the following two equations are correct.
(1) According to the second condition for finding the colorized red pixel, what if we get negative values?
pr = 255-dnormal if (255 < dnormal <= 510)
(2) In the equation for finding dnormal in inverse colorization, which uses disparity values, should the dnormal formula be multiplied by 1529?
Thank you in advance.
1. In a recent discussion about this depth compression by colorization equation at the link below, it was advised that the values are negative because inverse (minus-value) colorization is being applied. The inverse colorization method uses disparity values instead of depth.
2. Are you referring to the equation in the image below, please?
If so then I could not find information to confirm how 1529 is handled in this equation. The discussion in the link below where a RealSense user shared the code of their depth recovery logic may provide some insights though.
Thank you for your reply and the discussions you mentioned.
(1) Although I've read your explanation and the discussion posts, I still do not understand clearly. I understand that the HUE color space has 6 gradations in the up and down directions of R, G, and B (-255 to 255), as described in the whitepaper. However, if we get negative values when using disparity values (pr = 255 - dnormal = negative value), how do we visualize them in an RGB image? Is there some relationship I am not aware of? It would be a great help if you could explain this in more detail with an example, or point me to some resources.
(2) I referred to the following equation, as I used disparity values for inverse colorization. In this equation, dnormal is not multiplied by 1529 as in the one you referred to.
Thank you very much.
My interpretation of the paper is that the color values are in the range 0-255, and the maximum hue value is arrived at with the following equation:
(color value x 6) - 1
So the maximum hue value of '1529' is achieved by multiplying the maximum color value of 255 by 6 to get 1530 and then subtracting 1 from 1530 to arrive at the final value of 1529.
(255 x 6) - 1 = 1529
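For illustration, here is a minimal Python sketch of how a dnormal value in 0-1529 could map to a hue triplet. This is only my own reading of the paper's piecewise equations, with the second red condition written as 510 - dnormal (and the final blue one as 1530 - dnormal) so that every channel stays within 0-255; the function name is my own.

```python
def hue_encode(d_normal):
    """Map d_normal in [0, 1529] to an (r, g, b) hue triplet.

    One reading of the whitepaper's piecewise equations, adjusted so
    that no branch can produce a negative channel value.
    """
    dn = int(d_normal)
    # Red channel
    if dn <= 255 or dn > 1275:
        r = 255
    elif dn <= 510:
        r = 510 - dn        # the paper prints 255 - d_normal, which would go negative
    elif dn <= 1020:
        r = 0
    else:                   # 1020 < dn <= 1275
        r = dn - 1020
    # Green channel
    if dn <= 255:
        g = dn
    elif dn <= 765:
        g = 255
    elif dn <= 1020:
        g = 1020 - dn
    else:
        g = 0
    # Blue channel
    if dn <= 510:
        b = 0
    elif dn <= 765:
        b = dn - 510
    elif dn <= 1275:
        b = 255
    else:
        b = 1530 - dn
    return r, g, b
```

With this form, dnormal = 0 gives pure red, 510 gives pure green and 1020 gives pure blue, and the ramp wraps back toward red at 1529.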
My rough understanding of the code in the paper is that it performs the depth colorization by applying a color scheme to the data, which determines how the depth data is colored by the colorizer. The color schemes (listed in the link below) range from 0 to 9, and the depth compression by colorization system applies color scheme number 9, called Hue.
My further understanding is that once the depth is colorized then the image is treated like an ordinary JPEG image "which can easily be compressed, stored, and transmitted using widely available HW and software tools" and is compressed. After it has been compressed, it can be converted back into a depth map image by making use of dnormal.
It is at this stage (recovery of the depth map from the jpeg) that the problems begin, as it is a process that needs to be performed outside of the RealSense SDK.
Because of the difficulties involved in this outside-SDK processing, it is recommended that the depth compression by colorization paper be considered a standalone experiment that will not be developed or supported any further beyond what is already described in the paper. Instead, a different depth compression method should be used if possible (for example, recording a bag file in compressed Z16H format, as described in the link below).
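As a rough sketch of that outside-SDK recovery step, the depth map can be rebuilt by inverting the hue mapping and then un-normalizing dnormal. This is only my own reading of the paper's recovery equations, not the SDK's implementation; the function names are mine, and clean (uncompressed) channel values are assumed.

```python
def hue_decode(r, g, b):
    """Recover d_normal (0-1529) from a hue triplet.

    One reading of the whitepaper's recovery equations: the dominant
    channel tells us which sixth of the hue wheel we are in.
    """
    if r >= g and r >= b:
        if g >= b:
            return g - b            # red-dominant, green rising
        return g - b + 1529         # red-dominant, blue falling
    if g >= r and g >= b:
        return b - r + 510          # green-dominant
    return r - g + 1020             # blue-dominant

def recover_depth_uniform(d_normal, d_min, d_max):
    """Map the recovered d_normal back to depth for uniform colorization."""
    return d_min + (d_normal / 1529.0) * (d_max - d_min)
```

In practice JPEG compression perturbs the channel values, so a real recovery pipeline would also need some tolerance around these exact comparisons.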
Thank you for your explanations.
You mentioned that the pixel values will be negative because of the disparity values (for inverse colorization). But we can also get negative pixel values if we perform uniform colorization, which uses depth values. I mean that the pixel values will be negative whenever the second condition of pr (and likewise the fourth condition of pb) holds, no matter whether we use depth values or disparity values.
My understanding of this condition is that we can get both negative and positive pixel values because the Hue color space is used, which has 6 gradations in the up and down directions of R, G, and B (according to the whitepaper).
I wonder whether my understanding is correct.
Thank you for your time.
There is not much that I can add, as there are no technical references for the depth compression by colorization technique outside of the paper itself, so there is insufficient information to construct accurate advice from.
I did research RGB in general though and confirmed that RGB hue can have 6 gradations of intensity in both directions.
Colors can be thought of as 'minus colors' in terms of their position on the color chart directly opposite another color on the other side of the chart. For example, Magenta can be called 'minus green' because it is opposite green on the color wheel.
I noticed there are two methods of colorization in the paper: uniform colorization and inverse colorization. The main difference between the two methods is their depth normalization. The following equation is the normalized depth in the inverse colorization method.
I think there should be a 1529 in the equation, since we are mapping depth into a 1529-level Hue color space.
Is there a 1529 missing?
My understanding is that this isn't something that needs to be thought about, because in your program script, setting the flag is_disparity to true (to use inverse colorization instead of uniform colorization) handles the change to colorization by disparity value for you, as described in the section of the paper linked to below.
Yes, we do not need to figure out the math behind it if we use the RealSense SDK.
Yet we are working on integrating RealSense into a board that communicates wirelessly with a PC. In this case, I have to find a way to compress the depth image and send it as a regular RGB image encoding. We are about to encode the depth image using the method proposed in the paper, so the math does matter to me.
It seems like a 1529 is missing in the equation, and I just want to verify whether I am understanding correctly.
Checking the formula images in the paper, my understanding from the text is:
1. If using uniform colorization then the formula is multiplied by 1529.
2. If inverse colorization is used then the formula is not multiplied by 1529.
3. When recovering an inverse colorized image, the formula is rearranged to include 1529 as shown below.
What I am suggesting is that a 1529 was probably unintentionally omitted while editing equation 2.
The recovery of depth is just deriving the depth d from the colorization equation.
I did some math, which I put in the picture below.
The uniform colorization equation works fine.
I tried the inverse colorization without 1529 and with 1529. The case with 1529 yields the correct recovered depth value, which shows that d_normal in inverse colorization should include the 1529 factor.
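The round-trip check described above can be sketched in a few lines of Python. The function names and sample values are mine, and the equations are my reading of the paper's inverse (disparity) normalization with the 1529 factor exposed as a `scale` parameter, so the with-1529 and without-1529 cases can be compared directly.

```python
def disparity_normalize(d, d_min, d_max, scale=1529):
    """Inverse (disparity-based) normalization; scale=1529 is the factor under discussion."""
    return scale * (1.0 / d - 1.0 / d_max) / (1.0 / d_min - 1.0 / d_max)

def disparity_recover(d_normal, d_min, d_max, scale=1529):
    """The recovery equation rearranged for depth d (divides d_normal by 1529)."""
    return 1.0 / ((d_normal / scale) * (1.0 / d_min - 1.0 / d_max) + 1.0 / d_max)

d_min, d_max, d = 0.5, 2.0, 1.0  # hypothetical depth range and test depth, in meters
# With 1529 in the normalization, the round trip recovers d exactly.
good = disparity_recover(disparity_normalize(d, d_min, d_max), d_min, d_max)
# Without it (scale=1) while the recovery still divides by 1529, it does not.
bad = disparity_recover(disparity_normalize(d, d_min, d_max, scale=1), d_min, d_max)
```

Here `good` matches the original depth to floating-point precision, while `bad` is far off, which is the mismatch the poster's hand calculation found.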
Thank you very much for your explanation.
An update to the depth compression through colorization white-paper document is not planned as it was a proof-of-concept experiment that is now not recommended for use. Compression with other methods such as LZ4 is instead recommended. The paper's method does clearly suit your particular project though.