
Max ANN_MLP size?

asked 2019-02-28 16:19:07 -0600

WreckItTim

updated 2019-02-28 16:20:24 -0600

Hello dev team,

I have some general questions about the maximum MLP parameters:

- Maximum number of layers?
- Maximum number of nodes per layer (and is this dependent on the number of layers used)?
- Maximum number of epochs you can set for the training termination criteria?

I know these numbers are ultimately limited by the size of the variable types used, but I'm curious what the actual limits are in the implementation.

Thanks! -Tim


1 answer


answered 2019-02-28 16:35:35 -0600

sjhalayka

updated 2019-02-28 16:48:53 -0600

Here is some basic C++ code that solves the XOR problem using ANN_MLP:

https://github.com/sjhalayka/opencv_x...
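
The gist of it is a minimal sketch like this (the layer sizes, learning rate, and iteration count are arbitrary choices, not anything special from the repo):

    #include <opencv2/core.hpp>
    #include <opencv2/ml.hpp>
    #include <iostream>

    int main()
    {
        // XOR truth table: 4 samples, 2 inputs, 1 output.
        float in_data[]  = { 0,0,  0,1,  1,0,  1,1 };
        float out_data[] = { 0,    1,    1,    0 };
        cv::Mat inputs(4, 2, CV_32F, in_data);
        cv::Mat outputs(4, 1, CV_32F, out_data);

        cv::Ptr<cv::ml::ANN_MLP> mlp = cv::ml::ANN_MLP::create();

        // 2 input neurons, one hidden layer of 4, 1 output neuron.
        mlp->setLayerSizes((cv::Mat_<int>(3, 1) << 2, 4, 1));
        mlp->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1.0, 1.0);
        mlp->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.1, 0.1);

        // Stop after 10000 iterations, or when the error change < 1e-6.
        mlp->setTermCriteria(cv::TermCriteria(
            cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 10000, 1e-6));

        mlp->train(inputs, cv::ml::ROW_SAMPLE, outputs);

        cv::Mat prediction;
        mlp->predict(inputs, prediction);
        std::cout << prediction << std::endl; // should approach 0, 1, 1, 0

        return 0;
    }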

Here is the source code that implements the setLayerSizes function:

https://github.com/opencv/opencv/blob...

So it looks like there is no set maximum number of layers, nor a set maximum number of neurons per layer -- the only limit is what your RAM can handle.
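
For instance, nothing in the API stops you from requesting a deep, wide net like this (a sketch; the 10-layer shape is made up purely for illustration):

    // Hypothetical 10-layer network; setLayerSizes() accepts any length.
    // It's the resulting weight matrices that have to fit in memory.
    cv::Mat layers = (cv::Mat_<int>(1, 10) <<
        64, 512, 512, 512, 512, 512, 512, 512, 512, 8);
    mlp->setLayerSizes(layers);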

Here is the source code that implements setTermCriteria:

https://github.com/opencv/opencv/blob...

It looks like the maximum number of training sessions is 1000:

https://github.com/opencv/opencv/blob...
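
For reference, setting the criteria yourself looks something like this (a sketch; the values mirror what I believe the defaults are):

    // Stop after 1000 iterations, or earlier once the error change
    // drops below epsilon (0.01 appears to be the built-in default).
    mlp->setTermCriteria(cv::TermCriteria(
        cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 1000, 0.01));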


Comments


Thanks for the links to source code and response!

From the train method here: https://github.com/opencv/opencv/blob...

Line 866: termcrit.maxCount = std::max((params.termCrit.type & CV_TERMCRIT_ITER ? params.termCrit.maxCount : MAX_ITER), 1);

It seems like there is no limit on the number of epochs; it only falls back to the default (MAX_ITER = 1000) if the user did not satisfy (params.termCrit.type & CV_TERMCRIT_ITER), which I believe specifies whether to use a max number of epochs and/or epsilon for the termination criteria.
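
In other words, something like this should make line 866 use my own count (a sketch, assuming the usual constants, where cv::TermCriteria::COUNT is the same bit as CV_TERMCRIT_ITER):

    // The type includes the COUNT/ITER bit, so (type & CV_TERMCRIT_ITER)
    // is non-zero and train() honors maxCount = 50000 instead of the
    // MAX_ITER fallback of 1000.
    mlp->setTermCriteria(cv::TermCriteria(cv::TermCriteria::COUNT, 50000, 0.0));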

What you said about what my RAM can handle seems to be the issue. I was running a benchmark, and after 4000 epochs the time taken to train stays stagnant no matter how much more I increase the number of epochs.

WreckItTim (2019-02-28 16:57:18 -0600)

May I ask what you're using for input data? There are some problems that can't be solved, no matter how many hidden layers you use.

sjhalayka (2019-02-28 17:06:06 -0600)

Ya, my job right now is to determine whether this problem is solvable with an MLP; sorry I can't talk about it too much because I signed an NDA. Anyways, I had good preliminary results, but then found more dependencies in the problem which require more nodes and layers. I tracked my memory usage while training, and looking through the source code, it doesn't seem like each epoch increases memory usage, so I don't think that is the limit on the number of training sessions. The training time is still stagnant after 4000 epochs though, weird! It should definitely increase as I increase the number of epochs.

WreckItTim (2019-02-28 17:50:26 -0600)

And it stalls even if you alter the number of layers or number of neurons per layer? You shouldn't ever really have a need for more than two hidden layers.

Where i = number of input neurons and o = number of output neurons:

For one hidden layer, the ideal number of neurons is h = sqrtf(i*o).

For two hidden layers, where r = powf(i/o, 1.0f/3.0f), the ideal number of neurons for hidden layer 1 is h1 = o*r*r, and for hidden layer 2 is h2 = o*r.

I got these equations from Practical Neural Network Recipes in C++.
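
In code, those heuristics work out to something like this (a sketch; the function names are my own):

    #include <cmath>

    // Heuristic hidden-layer sizes from "Practical Neural Network
    // Recipes in C++" (i = input neurons, o = output neurons).
    int one_hidden_layer(int i, int o)
    {
        return (int)std::sqrt((float)(i * o)); // h = sqrtf(i*o)
    }

    void two_hidden_layers(int i, int o, int &h1, int &h2)
    {
        const float r = std::pow(i / (float)o, 1.0f / 3.0f);
        h1 = (int)(o * r * r); // first hidden layer
        h2 = (int)(o * r);     // second hidden layer
    }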

sjhalayka (2019-02-28 18:24:45 -0600)

I am not having any issues with the number of nodes/layers; I am just using two layers with 40 nodes each. The problem is that when I increase the number of epochs past 4000, it doesn't seem to train for more than 4000 epochs, because the training time is the same whether I use 4000, 5000, 10000, or a million epochs (what you call training sessions). If you use more training sessions, the training time should go up (simple math). That holds when I use between 1 and 4000 epochs; it's just after 4000 that the training time stagnates. I am just trying to find out the limitations of using an MLP in OpenCV so I know them going forward.

WreckItTim (2019-02-28 18:38:33 -0600)

Well, the best way to go forward, if you want to use modern networks, would be to pick up a book on TensorFlow. There may be a C++ TensorFlow book; I'm not sure. I wish you the best of luck.

Hopefully someone with more experience than I have will give you some advice too?

sjhalayka (2019-02-28 19:08:04 -0600)

Ya, thanks dude. I found the issue: the termination criteria is BROKEN in OpenCV. See this line: https://github.com/opencv/opencv/blob... Epsilon will get set to a non-zero value if you do not specifically set it to 0, which is an issue in the training loop here: https://github.com/opencv/opencv/blob... Even if the user specifies to terminate training based only on the number of epochs, that line will terminate training as soon as the error drops below the default epsilon value. I tried specifically setting the termination criteria type to either epsilon or max_iter and setting epsilon = 0, and it fixed the problem. Now I can train with any number of epochs! Oops..
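
For anyone hitting the same thing, the workaround boils down to this (a sketch):

    // Force epsilon to 0 so the default eps can't end training early;
    // training now runs for the full requested number of epochs.
    mlp->setTermCriteria(cv::TermCriteria(cv::TermCriteria::COUNT, 1000000, 0.0));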

WreckItTim (2019-02-28 19:21:00 -0600)

Glad you figured out the problem. I'm not sure how to submit a bug report on GitHub.

sjhalayka (2019-02-28 20:45:22 -0600)
