
ANN_MLP output training value of 1 causes error

asked 2017-12-12 17:27:12 -0600 by sjhalayka, updated 2017-12-14 18:56:52 -0600

When I try to train the network's output neurons to produce a value of 1, I get an error: "OpenCV Error: One of arguments' values is out of range (Some of new output training vector components run exceed the original range too much)".

The input images are:

dove.png
flowers.png
peacock.png
statue.png

My full code is listed below. Near the end of the code I assign output values of 0.9, and it works; the training and testing are successful. When I switch those values to 1, it fails. Thanks for any help you can provide.

This is odd, because the network can be trained to produce an output of 1 when learning the XOR operation: https://github.com/sjhalayka/opencv_x...
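For comparison, here is a minimal XOR sketch (not the exact code from that repository, just an illustration of the same API) that trains to a target of 1 without error, since the default output scaling is left enabled:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace ml;
using namespace std;

int main(void)
{
    // The four XOR cases, one sample per row
    float in[] = { 0,0,  0,1,  1,0,  1,1 };
    float out[] = { 0, 1, 1, 0 }; // targets of 1 train without error here
    Mat input(4, 2, CV_32FC1, in);
    Mat output(4, 1, CV_32FC1, out);

    Ptr<ANN_MLP> mlp = ANN_MLP::create();
    Mat layers = (Mat_<int>(3, 1) << 2, 5, 1);
    mlp->setLayerSizes(layers);
    mlp->setActivationFunction(ANN_MLP::SIGMOID_SYM);
    mlp->setTermCriteria(TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 10000, 0.000001));
    mlp->setTrainMethod(ANN_MLP::BACKPROP, 0.1);

    mlp->train(TrainData::create(input, ROW_SAMPLE, output));

    Mat prediction;
    mlp->predict(input, prediction);
    cout << prediction << endl;

    return 0;
}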

Anyway, here's the code:

#include <opencv2/opencv.hpp>
#pragma comment(lib, "opencv_world331.lib")

#include <iostream>
#include <iomanip>

using namespace cv;
using namespace ml;
using namespace std;

// Round to the nearest whole number (used to threshold the network's outputs)
float round_float(const float input)
{
    return floor(input + 0.5f);
}

// Blend each pixel with uniform random noise, then clamp the result to [0, 1]
void add_noise(Mat &mat, float scale)
{
    for (int j = 0; j < mat.rows; j++)
    {
        for (int i = 0; i < mat.cols; i++)
        {
            float noise = static_cast<float>(rand() % 256);
            noise /= 255.0f;

            mat.at<float>(j, i) = (mat.at<float>(j, i) + noise*scale) / (1.0f + scale);

            if (mat.at<float>(j, i) < 0)
                mat.at<float>(j, i) = 0;
            else if (mat.at<float>(j, i) > 1)
                mat.at<float>(j, i) = 1;
        }
    }
}

int main(void)
{
    const int image_width = 64;
    const int image_height = 64;

    // Read in 64 row x 64 column images
    Mat dove = imread("dove.png", IMREAD_GRAYSCALE);
    Mat flowers = imread("flowers.png", IMREAD_GRAYSCALE);
    Mat peacock = imread("peacock.png", IMREAD_GRAYSCALE);
    Mat statue = imread("statue.png", IMREAD_GRAYSCALE);

    // Reshape from 64 rows x 64 columns image to 1 row x (64*64) columns
    dove = dove.reshape(0, 1);
    flowers = flowers.reshape(0, 1);
    peacock = peacock.reshape(0, 1);
    statue = statue.reshape(0, 1);

    // Convert CV_8UC1 to CV_32FC1
    Mat flt_dove(dove.rows, dove.cols, CV_32FC1);

    for (int j = 0; j < dove.rows; j++)
        for (int i = 0; i < dove.cols; i++)
            flt_dove.at<float>(j, i) = dove.at<unsigned char>(j, i) / 255.0f;

    Mat flt_flowers(flowers.rows, flowers.cols, CV_32FC1);

    for (int j = 0; j < flowers.rows; j++)
        for (int i = 0; i < flowers.cols; i++)
            flt_flowers.at<float>(j, i) = flowers.at<unsigned char>(j, i) / 255.0f;

    Mat flt_peacock(peacock.rows, peacock.cols, CV_32FC1);

    for (int j = 0; j < peacock.rows; j++)
        for (int i = 0; i < peacock.cols; i++)
            flt_peacock.at<float>(j, i) = peacock.at<unsigned char>(j, i) / 255.0f;

    Mat flt_statue = Mat(statue.rows, statue.cols, CV_32FC1);

    for (int j = 0; j < statue.rows; j++)
        for (int i = 0; i < statue.cols; i++)
            flt_statue.at<float>(j, i) = statue.at<unsigned char>(j, i) / 255.0f;

    Ptr<ANN_MLP> mlp = ANN_MLP::create();

    // Slow the learning process
    //mlp->setBackpropMomentumScale(0.1);

    // Neural network elements
    const int num_input_neurons = dove.cols; // One input neuron per grayscale pixel
    const int num_output_neurons = 2; // 4 images to classify, so the number of bits needed is ceiling(ln(n)/ln(2))

    // ... (the rest of the question's code is truncated in the original post;
    // the full listing appears in the accepted answer below)

Comments

What are you trying to achieve here? The ANN needs one-hot encoding; setting both output neurons to 0.9 does not make any sense.

berak ( 2017-12-13 01:41:04 -0600 )
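For reference, one-hot targets for this four-image problem would look something like the following sketch (hypothetical code, assuming the includes and using-directives from the question's listing):

// One-hot targets for 4 classes: one row per sample, with a 1 in the
// column of the correct class and 0 everywhere else.
Mat targets = Mat::zeros(4, 4, CV_32FC1);
targets.at<float>(0, 0) = 1.0f; // dove
targets.at<float>(1, 1) = 1.0f; // flowers
targets.at<float>(2, 2) = 1.0f; // peacock
targets.at<float>(3, 3) = 1.0f; // statue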

This reminds me of an index-out-of-range error in Python 3.5. The 0.9 you set is of type float; when you set a value of 1, it is not a float. If you want to set a value of 1, change (0,1) to (0,0) and see what happens.

supra56 ( 2017-12-13 07:34:20 -0600 )

@berak Thanks for the information. I am using the encoding scheme where there are n classifications, and ceiling(ln(n)/ln(2)) output neurons. Does one-hot encoding learn faster?

sjhalayka ( 2017-12-13 08:20:50 -0600 )
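(For n = 4 classes, that scheme gives ceiling(ln(4)/ln(2)) = 2 output neurons; a hypothetical sketch of such binary-coded targets:)

// Binary-coded targets: ceil(log2(4)) = 2 output neurons, with each
// class assigned a distinct bit pattern.
Mat targets = (Mat_<float>(4, 2) <<
    0, 0,   // dove
    0, 1,   // flowers
    1, 0,   // peacock
    1, 1);  // statue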

@supra56 -- I just train to give the output values of 0.1 or 0.9 now. Thanks for your expertise.

sjhalayka ( 2017-12-13 08:22:19 -0600 )

Anyway, I didn't think having an output value of 1 would cause an error... Setting the activation function to SIGMOID_SYM gives you a range of [-1.7159, 1.7159]. That leaves lots of room for the output value to be 1. :(

sjhalayka ( 2017-12-13 08:56:47 -0600 )
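(Per the OpenCV documentation, SIGMOID_SYM with the default parameters computes f(x) = 1.7159 * tanh((2/3) * x), which is where that range comes from; a minimal sketch:)

#include <cmath>
#include <cstdio>

// OpenCV's default symmetrical sigmoid, per the ANN_MLP documentation:
// f(x) = 1.7159 * tanh((2/3) * x), so outputs lie in (-1.7159, 1.7159).
float sigmoid_sym(const float x)
{
    return 1.7159f * std::tanh(2.0f / 3.0f * x);
}

int main(void)
{
    std::printf("%f\n", sigmoid_sym(10.0f)); // approaches 1.7159 for large x
    return 0;
}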

@sjhalayka, if you have 15 classes, you have to set one of your outputs to 1 and the other 14 to 0.

The prediction will return the index of the largest number (the class ID); that's all there is to it.

The error stems from either setting more than one neuron to 1, or having all of them at 0.

berak ( 2017-12-13 09:25:27 -0600 )
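(A sketch of that prediction step, assuming mlp is a trained ANN_MLP and sample is a 1 x N CV_32FC1 row:)

// Run the sample through the network, then take the index of the
// largest output neuron as the class ID.
Mat response;
mlp->predict(sample, response);

Point max_loc;
minMaxLoc(response, 0, 0, 0, &max_loc);
int class_id = max_loc.x; // column index of the strongest output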

@berak I understand one-hot encoding; I just choose not to use it. My copy of "Practical Neural Network Recipes in C++" also uses the scheme that I'm using (see page 16 of 493, if you can find a copy). In fact, it warns against one-hot encoding. Who knows?

sjhalayka ( 2017-12-13 09:40:43 -0600 )

Your book is from 1993; that is 24 years ago now! (You probably weren't even born then.)

berak ( 2017-12-13 10:07:29 -0600 )

That is true, the book is old, but it's not like one-hot encoding is some kind of up-and-coming, leading-edge research area. :D

I was like 16 when that book came out, yeah.

sjhalayka ( 2017-12-13 10:15:30 -0600 )

2 answers


answered 2017-12-13 19:03:09 -0600 by sjhalayka, updated 2017-12-15 20:14:05 -0600

It appears that the problem only occurs when UPDATE_WEIGHTS is used.

I used the UPDATE_WEIGHTS | NO_INPUT_SCALE | NO_OUTPUT_SCALE parameter set, and it now works with values of -1 and 1: 100% success rate. I could have sworn that I had already tried this, without success. Anyway, thanks @LBerger!
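For reference, the relevant change is the flag set passed to train(); a sketch, assuming train_data is a Ptr<TrainData> holding one input row and its target row:

// First call: train from scratch, with input/output scaling disabled so
// that targets of -1 and 1 are used as-is.
mlp->train(train_data, ANN_MLP::NO_INPUT_SCALE | ANN_MLP::NO_OUTPUT_SCALE);

// Later calls: keep the learned weights and refine them on new samples.
mlp->train(train_data, ANN_MLP::UPDATE_WEIGHTS | ANN_MLP::NO_INPUT_SCALE | ANN_MLP::NO_OUTPUT_SCALE);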

Here is my final code:

#include <opencv2/opencv.hpp>
#pragma comment(lib, "opencv_world331.lib")

#include <iostream>
#include <iomanip>

using namespace cv;
using namespace ml;
using namespace std;

// Round to the nearest whole number (used to threshold the network's outputs)
float round_float(const float input)
{
    return floor(input + 0.5f);
}

// Blend each pixel with uniform random noise, then clamp the result to [0, 1]
void add_noise(Mat &mat, float scale)
{
    for (int j = 0; j < mat.rows; j++)
    {
        for (int i = 0; i < mat.cols; i++)
        {
            float noise = static_cast<float>(rand() % 256);
            noise /= 255.0f;

            mat.at<float>(j, i) = (mat.at<float>(j, i) + noise*scale) / (1.0f + scale);

            if (mat.at<float>(j, i) < 0)
                mat.at<float>(j, i) = 0;
            else if (mat.at<float>(j, i) > 1)
                mat.at<float>(j, i) = 1;
        }
    }
}

int main(void)
{
    const int image_width = 64;
    const int image_height = 64;

    // Read in 64 row x 64 column images
    Mat dove = imread("dove.png", IMREAD_GRAYSCALE);
    Mat flowers = imread("flowers.png", IMREAD_GRAYSCALE);
    Mat peacock = imread("peacock.png", IMREAD_GRAYSCALE);
    Mat statue = imread("statue.png", IMREAD_GRAYSCALE);

    // Reshape from 64 rows x 64 columns image to 1 row x (64*64) columns
    dove = dove.reshape(0, 1);
    flowers = flowers.reshape(0, 1);
    peacock = peacock.reshape(0, 1);
    statue = statue.reshape(0, 1);

    // Convert CV_8UC1 to CV_32FC1
    Mat flt_dove(dove.rows, dove.cols, CV_32FC1);

    for (int j = 0; j < dove.rows; j++)
        for (int i = 0; i < dove.cols; i++)
            flt_dove.at<float>(j, i) = dove.at<unsigned char>(j, i) / 255.0f;

    Mat flt_flowers(flowers.rows, flowers.cols, CV_32FC1);

    for (int j = 0; j < flowers.rows; j++)
        for (int i = 0; i < flowers.cols; i++)
            flt_flowers.at<float>(j, i) = flowers.at<unsigned char>(j, i) / 255.0f;

    Mat flt_peacock(peacock.rows, peacock.cols, CV_32FC1);

    for (int j = 0; j < peacock.rows; j++)
        for (int i = 0; i < peacock.cols; i++)
            flt_peacock.at<float>(j, i) = peacock.at<unsigned char>(j, i) / 255.0f;

    Mat flt_statue = Mat(statue.rows, statue.cols, CV_32FC1);

    for (int j = 0; j < statue.rows; j++)
        for (int i = 0; i < statue.cols; i++)
            flt_statue.at<float>(j, i) = statue.at<unsigned char>(j, i) / 255.0f;

    Ptr<ANN_MLP> mlp = ANN_MLP::create();

    // Slow the learning process
    //mlp->setBackpropMomentumScale(0.1);

    // Neural network elements
    const int num_input_neurons = dove.cols; // One input neuron per grayscale pixel
    const int num_output_neurons = 2; // 4 images to classify, so the number of bits needed is ceiling(ln(n)/ln(2))
    const int num_hidden_neurons = static_cast<int>(sqrtf(image_width*image_height*num_output_neurons));
    // setLayerSizes() expects an integer Mat, so use CV_32SC1 here
    Mat layersSize(3, 1, CV_32SC1);
    layersSize.row(0) = Scalar(num_input_neurons);
    layersSize.row(1) = Scalar(num_hidden_neurons);
    layersSize.row(2) = Scalar(num_output_neurons);
    mlp->setLayerSizes(layersSize);

    // Set various parameters
    mlp->setActivationFunction(ANN_MLP::ActivationFunctions::SIGMOID_SYM);
    TermCriteria termCrit = TermCriteria(TermCriteria::Type::COUNT + TermCriteria::Type::EPS, 1, 0.000001);
    mlp->setTermCriteria(termCrit);
    mlp->setTrainMethod(ANN_MLP::TrainingMethods::BACKPROP, 0.0001);

    Mat output_training_data(1, num_output_neurons, CV_32FC1);

    // Train the network once
    output_training_data.at<float>(0, 0) = -1.0f;

    // ... (the remainder of the code is truncated in the original post)
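Based on the question's code and the flags described above, the truncated remainder presumably proceeds along these lines (a sketch of the shape only, not the author's exact code):

// Assumed continuation: give each image a distinct 2-bit target,
// train once from scratch, then refine the weights on noisy copies.
output_training_data.at<float>(0, 1) = -1.0f; // e.g. dove -> (-1, -1)

Mat input = flt_dove.clone();
add_noise(input, 0.1f);

mlp->train(TrainData::create(input, ROW_SAMPLE, output_training_data),
    ANN_MLP::NO_INPUT_SCALE | ANN_MLP::NO_OUTPUT_SCALE);

// Subsequent iterations (and the other three images) would then pass
// ANN_MLP::UPDATE_WEIGHTS | ANN_MLP::NO_INPUT_SCALE | ANN_MLP::NO_OUTPUT_SCALE.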

Comments


For a good rule about training, you should watch this video starting at 3m43s.

LBerger ( 2017-12-15 15:20:55 -0600 )

I'll watch the whole video some time. Thank you.

sjhalayka ( 2017-12-15 20:11:54 -0600 )

answered 2017-12-14 12:03:57 -0600 by LBerger, updated 2017-12-15 11:33:39 -0600

There is no problem with your code when using 0.1. My result is:

100
200
300
400
500
600
700
800
900
Success rate: 100%
Press any key to continue...

Now, using 1.0, I get the same exception, but you should use UPDATE_WEIGHTS carefully. When you train the first time, the outputs are normalized to between -0.98 and 0.98, because the flag ANN_MLP::NO_OUTPUT_SCALE is not used. Every value x in any new output training data must then satisfy -0.98 < x < 0.98: if you don't use ANN_MLP::NO_OUTPUT_SCALE in the next call, the data are checked and the assertion is thrown. With ANN_MLP::NO_OUTPUT_SCALE, no assertion is thrown, and the result is:

100
200
300
400
500
600
700
800
900
Success rate: 68.25%
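In other words, the failure mode looks like this sketch (hypothetical minimal example, with input and targets as 1-row CV_32FC1 Mats):

// First call without NO_OUTPUT_SCALE: targets (e.g. 0.1 and 0.9) are
// rescaled internally, and the network records their range.
mlp->train(TrainData::create(input, ROW_SAMPLE, targets));

// Updating with a target outside the recorded range now throws
// "Some of new output training vector components run exceed the original range too much".
targets.at<float>(0, 0) = 1.0f;
mlp->train(TrainData::create(input, ROW_SAMPLE, targets),
    ANN_MLP::UPDATE_WEIGHTS);

// Passing ANN_MLP::NO_OUTPUT_SCALE on every call avoids this check.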

Comments

Cool. Thanks so much!

Wait, do you mean that the code ran untouched (going for values of 0.1 and 0.9), or that it ran going for values of 0 and 1?

sjhalayka ( 2017-12-14 12:09:39 -0600 )
