NeuNet Pro Tips & Tricks
Table Of Contents

  • Tips & Tricks
  • Choosing a Project
  • Importing Data
  • Configure Project
  • Mining Data
  • Training BackProp
  • Training SFAM
  • Making Predictions
  • Exporting Results
BackProp Training

[Screen shot of the BackProp Training screen, with these controls: Finish, Jog Weights, Reset, Revert, Stop, GO, Graph Zoom, Learn Rate, Momentum, Verify Rate, Cycles Completed, Best Error, Current Error, History of Error]
This screen is used for training the BackProp neural network.
Pressing the GO button runs repeated training cycles through your training set.
Press Finish when you wish to exit this screen; a test report on your testing set will be made automatically.

Finish:

  • Press this button when you wish to exit from the training screen.
  • A test will be run immediately on your testing set, and you will be taken to the Scatter Graph.
  • For data mining and anomaly detection, do not train too close to zero error; leave some error, because these residual errors are your anomalies (see the sketch after this list).
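
As a rough illustration of this idea (not NeuNet Pro's own code), the Python sketch below flags the records whose prediction error remains large after training; the function name and the 0.10 threshold are assumptions for the example:

    import numpy as np

    def flag_anomalies(actual, predicted, threshold=0.10):
        # Records the trained net still cannot fit closely are anomaly
        # candidates. Inputs are assumed normalized to 0..1, and the
        # threshold is illustrative, not a NeuNet Pro default.
        residual = np.abs(np.asarray(actual, dtype=float)
                          - np.asarray(predicted, dtype=float))
        return np.nonzero(residual > threshold)[0]  # indices of outliers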

Jog Weights:

  • This button may be pressed occasionally during BackProp training.
  • Random noise is applied to your network's weights, which causes the training to back up and redo a recent portion of its work.
  • If you suspect your training has become stuck in a local minimum, jogging the weights may enable it to find the global optimum (the idea is sketched after this list).
  • The use of this button is optional and is seldom required.
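
NeuNet Pro does not document the exact noise it applies, but the idea can be sketched in Python as adding small zero-mean noise to every weight matrix; the function name and the noise scale are assumptions:

    import numpy as np

    def jog_weights(weight_matrices, noise_scale=0.05, rng=None):
        # Add small zero-mean Gaussian noise to each layer's weights.
        # The perturbation knocks the net off its current slope, so the
        # training backs up and redoes some recent work, possibly on a
        # better path out of a local minimum.
        rng = rng or np.random.default_rng()
        return [w + rng.normal(0.0, noise_scale, w.shape)
                for w in weight_matrices]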

Reset Training:

  • This button clears all previous training, so you may begin fresh training.

Revert:

  • This button reverts your neural network to the network that produced the Best Error thus far.
  • If something goes wrong with your training session, use this button to revert without doing a total reset.
  • This feature is useful in BackProp training when your Learn Rate and Momentum settings have corrupted the current network (the checkpointing behind Revert is sketched after this list).
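
Revert is possible because NeuNet saves the best network whenever a new Best Error occurs (see Best Error below). A minimal Python sketch of that checkpoint-and-revert pattern, with illustrative names:

    import copy

    class BestNetCheckpoint:
        # Keeps a copy of the weights that produced the lowest error so far.
        def __init__(self):
            self.best_error = float("inf")
            self.best_weights = None

        def maybe_save(self, error, weights):
            # Called after every verify cycle; saves only on a new best.
            if error < self.best_error:
                self.best_error = error
                self.best_weights = copy.deepcopy(weights)

        def revert(self):
            # Restore the saved best network, discarding recent training.
            return copy.deepcopy(self.best_weights)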

STOP:

  • This button stops the training at the end of the next verify cycle.
  • If your Best Error is not improving after repeated attempts, it is time to Stop the training.

GO:

  • The GO button is the most important button on this screen.
  • Press GO to initiate repeated training cycles through all of your training set.
  • After repeating several training cycles, a "verify" cycle is run and the screen is updated to show the current prediction error.

Graph Zoom:

  • Press this button to zoom in on the vertical scale of the graph.
  • The zoom toggles: pressing the button a second time cancels it.

Learn Rate and Momentum:

  • These settings are required by the BackProp algorithm.
  • You should experiment with these settings interactively during BackProp training.
  • For most projects, the default values of 50 provide a good starting point.
  • If Learn Rate and Momentum are set too low, the training will be very slow with a smooth, gradual improvement.
  • If Learn Rate and Momentum are set too high, the training will be very choppy and chaotic.
  • Experienced NeuNet users learn to adjust Learn Rate and Momentum interactively during training. Keep Learn Rate greater than Momentum, and find a combination of the two settings that produces some up-and-down "choppiness" in the error history. During the early stages of training, this choppiness ensures that the entire data space is explored in search of the global optimum. As the current error gradually improves, you can be confident you are climbing the right hill. Then decrease Learn Rate and Momentum slowly, so you land on the peak of the hill without overshooting. The final training phase can be completed with Learn Rate less than 10 and Momentum near zero.
  • If these experiments make a mess, use Revert to return to your Best Error. (The underlying momentum update is sketched after this list.)
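
BackProp's use of these two settings can be sketched as the classic gradient-descent-with-momentum update below. How NeuNet Pro maps its 0-100 screen settings onto internal coefficients is not documented, so the scaling here is an assumption:

    def momentum_step(w, velocity, grad, learn_rate=50, momentum=50):
        # Map the 0-100 screen settings onto coefficients (assumed scaling).
        lr = learn_rate / 100.0
        mu = momentum / 100.0
        # Momentum carries a fraction of the previous step forward,
        # smoothing the descent; the Learn Rate scales the new push
        # downhill along the gradient.
        velocity = mu * velocity - lr * grad
        return w + velocity, velocity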

Verify Rate:

  • This setting determines how many training cycles are made before a verify cycle is run (the train/verify loop is sketched after this list).
  • Usually the default value of 5 works very well.
  • The verify cycle is necessary to evaluate the current network and report the error.
  • The best setting depends on the speed of your computer, the number of nodes in your project and the number of records and fields in your training set.
  • Try to find a setting where your History Graph is updating every 2 to 10 seconds.
  • On very large training sets, there might be several minutes of "hourglass" before your screen updates, and some of your mouse clicks may be ignored.
  • If too many verify cycles are run, your computer will be spending too much time reporting on the current neural net instead of improving it.
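
The GO/verify rhythm amounts to the loop sketched below; the net object and its methods are assumed names for illustration, not NeuNet Pro's API:

    def run_go(net, training_set, verify_rate=5, checkpoint=None,
               stop_requested=lambda: False):
        # Pressing GO starts this loop; STOP ends it after the next verify.
        while not stop_requested():
            for _ in range(verify_rate):          # training cycles
                net.train_one_cycle(training_set)
            error = net.verify(training_set)      # verify cycle: evaluate
            print(f"current error: {error:.2%}")  # ...and report the error
            if checkpoint is not None:            # see BestNetCheckpoint above
                checkpoint.maybe_save(error, net.weights)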

Cycles Completed:

  • This counter reports how many cycles have been run on this training set.

Best Error:

  • This error indicates the Root Mean Square (RMS) error for the best verify cycle thus far.
  • This error is also called "Standard Error of Estimate".
  • The error is calculated as SquareRoot{ SumOfAll[ (Actual - Predicted)² ] / NumberOfPredictions } (the Python equivalent is sketched after this list).
  • This calculation is performed using normalized values, so the error may be stated as a percent.
  • Whenever a new Best Error occurs, the neural net is saved into your project file.
  • This continual saving of the best net allows you to use the Revert button, and lets you continue this project in another NeuNet session.
  • The blue vertical line on the History Graph shows which previous cycle produced the Best Error.
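
The error formula above translates directly into Python; actual and predicted are assumed to be normalized values:

    import numpy as np

    def rms_error(actual, predicted):
        # Root Mean Square error, also called Standard Error of Estimate.
        # With normalized values the result can be read as a percent.
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return np.sqrt(np.mean((actual - predicted) ** 2))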

Current Error:

  • This error indicates the Root Mean Square (RMS) error for the most recent verify.
  • This error is also called "Standard Error of Estimate".
  • The error is calculated as SquareRoot{ SumOfAll[ (Actual - Predicted)² ] / NumberOfPredictions }, as in the sketch under Best Error above.
  • This calculation is performed using normalized values, so it may be stated as a percent.
  • The number that appears in this box is constantly graphed in the History Graph.

History of Error Graph:

  • This graph shows a history of the prediction error achieved during the previous verify cycles.
  • You should adjust Learn Rate and Momentum so there is some choppiness to the graph during the early stages of training.
  • The blue coloring marks which previous cycle was saved as the best.



A Complete Neural Network Development System

CorMac Technologies Inc.
34 North Cumberland Street ~ Thunder Bay ON P7A 4L3 ~ Canada