
Image Processing Filter

asked 2012-09-04 07:14:56 -0600 by Mark

updated 2018-03-30 00:58:06 -0600

I have to find the borders of the areas marked with red circles. Here's an example image of one object.

Can you help me find these four different borders?
Which filter should I use to find them?


Comments

I'm not even sure that this can be done manually in a consistent way (that several people will do the same thing). I think you need to define your problem much more precisely.

Adi (2012-09-06 02:27:51 -0600)

Thanks for the reply. I want to find only the border inside the red circle. I have tried different ideas, but without success.

Mark (2012-09-06 23:49:30 -0600)
@Mark why did you update your question?

LBerger (2018-03-30 02:19:32 -0600)

2 answers

answered 2012-09-09 04:10:42 -0600 by alexcv

updated 2012-09-09 04:13:29 -0600

Hi. The circled areas you want to catch have specific spatial features. If these features look similar across your whole dataset, then a feature detection + description process can be applied, using a classical feature detector/descriptor such as SIFT.

A simple way to do this: considering a dataset of image samples where these artifacts are visible, apply the SIFT detector via the FeatureDetector::create interface (http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html?highlight=featuredetector).

  1. First, check whether the detector provides keypoints on these features (and do not worry about the other detected features; they will be distinguished later). Use the drawKeypoints method for a visual check (http://docs.opencv.org/modules/features2d/doc/drawing_function_of_keypoints_and_matches.html?highlight=drawkeypoint). Test various keypoint detectors and choose the one that detects your features most consistently, whatever extra keypoints it finds.

  2. Describe the features in order to distinguish them. Use a feature descriptor, choosing the most appropriate one by testing with the flexible DescriptorExtractor::create method (http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html?highlight=features%20descriptor#descriptorextractor-create). Then use the DescriptorExtractor::compute method to describe each previously detected keypoint.

  3. Manually label the features that the detector finds and store them; this will help you distinguish your targets from the other detected keypoints.

  4. Finally, on a new image dataset, repeat steps 1 and 2 (detection + description) and match your stored hand-labelled target features against the keypoints detected and described in each new image. You can use a descriptor matcher obtained from the flexible DescriptorMatcher::create method (http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html?highlight=descriptor%20match#descriptormatcher-create).

This simple kind of spatial feature matching can be really efficient... IF your features are reproducible from one image to the next!

For more advanced feature matching, check out research papers on "bag of words" to push things further!

Happy coding and experimenting!


Comments

Can I know how to write the code for step 3? Thanks.

Yiyi (2012-10-19 11:48:03 -0600)

I mean, how do I write the code to label the features?

Yiyi (2012-10-19 11:56:24 -0600)
Hi. Actually, step 3 is mainly done by hand: you introduce human knowledge into the system by manually labelling a set of features, telling the system whether each one is a target or not. Typically, given a dataset, you (and collaborators) create a report file (XML?) that states, for each potential target, whether it should be considered a target. This is a long and boring (but required) manual step. Once it is done, in general high-level feature detection we apply a classification stage: a classifier (SVM, KNN, etc.) is trained on the labelled dataset and learns to distinguish the detected features, taking the hand-made ground-truth labels into account. In your case, this classification may be simplified if the feature signatures are easy to distinguish.

Hope it helps.

alexcv (2012-10-21 08:12:52 -0600)

Can you give me a sample or example of code for how to do it? I don't know how to manually label features. Thanks in advance.

Yiyi (2012-10-22 04:24:00 -0600)

Well, I think we misunderstood each other, but no problem. Actually, you already did this hand labelling on the image you provided at the beginning: you showed where your target is. To go further, once feature points are detected inside your hand-circled areas, just store the corresponding image descriptor you chose (SIFT or other) at each keypoint. Store these in a "positive samples descriptor matrix" and consider them reference target points that you can later try to match against new image samples. You should also take the feature points detected outside your hand-labelled areas and store their descriptor signatures in a "non-target matrix"; this allows better classification methods to work. This was a simplified description of feature matching methods; also consider Bag of Words.

alexcv (2012-10-23 04:57:08 -0600)
answered 2012-09-07 16:03:08 -0600 by unxnut

If you can specify the area in general (such as a slightly larger region than the object), you can apply thresholding to separate the image into foreground and background. Then apply blob analysis (using the cvBlobsLib add-on for OpenCV) and extract the outline of the object.


Question Tools

1 follower

Stats

Asked: 2012-09-04 07:14:56 -0600

Seen: 940 times

Last updated: Mar 30 '18