
coin date ocr

asked 2014-05-16 02:22:54 -0600 by coinocr

updated 2014-05-16 02:52:08 -0600

Hello, I am trying to set up an automated way to read the date from coins. I am keeping it simple and decided to first try it on pennies, since the date is always in the same location relative to the coin's circle. I am using Tesseract for the OCR of the date, but I am having difficulty processing the date region of the penny so that Tesseract will read it correctly.

My question: does anyone have a good idea of how to process the image of a penny's date field so that Tesseract will read it correctly? I have been trying morphological operations, but I can't find settings that work.

Here is my code (it needs a lot of cleanup, but it does the job):

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <tesseract/baseapi.h>
#include <iostream>

using namespace cv;
using namespace std;

static void help()
{
    cout << "\nThis program demonstrates circle finding with the Hough transform.\n"
            "Usage:\n"
            "./houghcircles <image_name>, Default is board.jpg\n" << endl;
}

/**
 * Rotate an image
 */
void rotate(cv::Mat& src, double angle, cv::Mat& dst)
{
    int len = std::max(src.cols, src.rows);
    cv::Point2f pt(len/2., len/2.);
    cv::Mat r = cv::getRotationMatrix2D(pt, angle, 1.0);

    cv::warpAffine(src, dst, r, cv::Size(len, len));
}

int main(int argc, char** argv)
{
    const char* filename = argc >= 2 ? argv[1] : "board.jpg";


    int centerx,centery, radius;

    Mat img = imread(filename, 0); // load as grayscale

    Mat org;
    img.copyTo(org);

    Mat date_img;

    if(img.empty())
    {
        help();
        cout << "can not open " << filename << endl;
        return -1;
    }

    Mat cimg;
    medianBlur(img, img, 5);
    cvtColor(img, cimg, COLOR_GRAY2BGR);

    vector<Vec3f> circles;
    HoughCircles(img, circles, HOUGH_GRADIENT, 1, 300,
                 100, 30, 400, 2000 // change the last two parameters
                                // (min_radius & max_radius) to detect larger circles
                 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        circle( cimg, Point(c[0], c[1]), c[2], Scalar(0,255,255), 3, LINE_AA);
        circle( cimg, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, LINE_AA);

        centerx = c[0];
        centery = c[1];
        radius  = c[2];

        cv::Mat mask = cv::Mat::zeros( img.rows, img.cols, CV_8UC1 );
        circle( mask, Point(c[0], c[1]), c[2], Scalar(255,255,255), -1, 8, 0 ); // -1 means filled
        org.copyTo( cimg, mask ); // copy values of org where mask is > 0

        break; // only the first detected circle is used
    }

    //cv::Mat roi( cimg, cv::Rect( centerx-radius, centery-radius, radius*2, radius*2 ) );
    cout << "RADIUS: " << radius << " CENTERX: " << centerx << " CENTERY: " << centery << endl;

    cv::Rect myROI(centerx-radius, centery-radius, radius*2, radius*2);
    myROI &= cv::Rect(0, 0, cimg.cols, cimg.rows); // clamp so the ROI stays inside the image
    cimg = cimg(myROI);

 //  rotate(cimg, 90, cimg);




 //   rectangle(cimg, Point(cimg.rows/1.45, cimg.cols/1.6),Point(cimg.rows/1.1,cimg.cols/1.35), Scalar(0,255,255), 3, 8, 0 );

    // crop the approximate date region and boost its contrast
    date_img = cimg(Rect( Point(cimg.rows/1.45, cimg.cols/1.6), Point(cimg.rows/1.1, cimg.cols/1.35) ) );
    date_img.convertTo(date_img, -1, 1.8, 1); // alpha = 1.8 (contrast), beta = 1 (brightness)


    Mat element = getStructuringElement(MORPH_ELLIPSE, Size(15,15), Point(-1,-1) );
    erode(date_img,date_img, element);
    //dilate ...

Comments

please, can you re-edit, and format your code ? [the 10101 button]

berak (2014-05-16 02:26:34 -0600)

no, please, try again. just mark the code, and press the button ...

berak (2014-05-16 02:46:51 -0600)

yea ;) like that.

berak (2014-05-16 02:56:51 -0600)

Just a small remark: OCR was designed to match printed text to a string. The letter style on the example coin, however, will be a large challenge for OCR to recognize. It is close to handwritten characters, and active research has shown that OCR for handwritten styles is far from easy.

StevenPuttemans (2014-05-16 04:33:04 -0600)

1 answer


answered 2014-05-16 09:36:17 -0600

Maybe instead of going the OCR route, you could train a model on individual digits or pairs of digits. Basically, I'm thinking of training it to recognize 18__, 19__, 20__, and then __[0-9]_ and ___[0-9]. It would also be useful to see whether the trained model could treat the nearest curve of the coin edge as part of the recognized object. On your penny, it would recognize "19__ )" (with the edge curve) and then recognize the tens digit 2 and the ones digit 0 separately.

Part of the reason I think this might be an easier approach is that a sufficiently trained model can recognize an object regardless of orientation.


Comments

Thanks for the input, I was thinking about training tesseract but it seems more appropriate to train OpenCV instead.

coinocr (2014-05-24 14:33:04 -0600)


Stats

Seen: 1,223 times