1

LookUp Table for 16-Bit Images

asked 2014-11-24 00:06:13 -0600

updated 2014-11-25 00:39:43 -0600

Hi all!

I have a 16-bit grayscale image and I want to reduce the gray values of its pixels. I tried to use LUT, but it seems to work only for 8-bit images. What is an efficient way to reduce a matrix through a LUT? Any help is appreciated!


Comments

I guess your options are either to convert to 8 bit first or to implement the function for 16 bit. AFAIK it doesn't exist yet, but it should be quite similar to the 8-bit case. As for the answers, they will appear; the accept page is currently bugged. The devs are fixing it and moving the hosting elsewhere.

StevenPuttemans ( 2014-11-26 08:17:46 -0600 )

2 answers

3

answered 2014-12-01 03:30:51 -0600

Thank you for helping me solve this problem! Here is my code for a 16-bit lookup-table-based reduction. I hope it is useful for someone!

int main()
{
    Size Img_Size(320, 240);
    Mat Img_Source_16(Img_Size, CV_16UC1, Scalar::all(0));
    Mat Img_Destination_16(Img_Size, CV_16UC1, Scalar::all(0));

    // Inversion table over the 12-bit range 0..4095
    unsigned short LookupTable[4096];
    for (int i = 0; i < 4096; i++)
    {
        LookupTable[i] = 4095 - i;
    }

    // Fill the source with a repeating 0..4094 ramp as test data
    int i = 0;
    for (int Row = 0; Row < Img_Size.height; Row++)
    {
        for (int Col = 0; Col < Img_Size.width; Col++)
        {
            Img_Source_16.at<unsigned short>(Row, Col) = i;
            i++;
            if (i >= 4095)
                i = 0;
        }
    }

    imshow("Img_Source", Img_Source_16);

    t1.start();   // t1: external timer (declaration not shown)
    Img_Destination_16 = ScanImageAndReduceC_16UC1(Img_Source_16.clone(), LookupTable);
    t1.stop();

    imshow("Img_Destination", Img_Destination_16);
    waitKey(0);
}

Mat& ScanImageAndReduceC_16UC1(Mat& I, const unsigned short* const table)
{
    // accept only 16-bit unsigned matrices
    CV_Assert(I.depth() == CV_16U);

    int channels = I.channels();

    int nRows = I.rows;
    int nCols = I.cols * channels;

    // a continuous matrix can be treated as one long row
    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }

    for (int r = 0; r < nRows; ++r)
    {
        unsigned short* p = I.ptr<unsigned short>(r);
        for (int c = 0; c < nCols; ++c)
            p[c] = table[p[c]];
    }

    return I;
}

Comments

Nice one :) Might up to making a PR and integrating this in OpenCV?

StevenPuttemans ( 2014-12-01 05:29:02 -0600 )

Sure! It'll be my pleasure!

Balaji R ( 2014-12-01 10:16:33 -0600 )
0

answered 2014-11-25 04:09:46 -0600

kbarni

It's very easy to implement a custom function for LUT coloring.

See my answer in this topic: http://answers.opencv.org/question/50781/false-coloring-of-grayscale-image/

In short: you create an RGB lookup table of the desired length (65536 in this case); then, for each gray pixel P, you get the false-colored pixel C as:

C[0]=LUT[P][0];
C[1]=LUT[P][1];
C[2]=LUT[P][2];

Stats

Asked: 2014-11-24 00:06:13 -0600

Seen: 5,681 times

Last updated: Dec 01 '14