gorpropplus

package module
v0.0.0-...-0bc8680
Published: Mar 5, 2019 License: GPL-3.0 Imports: 3 Imported by: 0

README

Gorprop+


This project implements the neural network training algorithm "Rprop+" (resilient backpropagation with weight backtracking) with He-et-al initialization. The project is pure Go: no external dependencies!

Rprop, short for resilient backpropagation, is a first-order learning heuristic for supervised learning in feedforward artificial neural networks, created by Martin Riedmiller and Heinrich Braun in 1992.

Paper abstract

A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the errorfunction. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other wellknown adaptive techniques.

This initialization method became well known through a 2015 paper by He et al. and is similar to Xavier initialization, with the variance factor multiplied by two. The weights are initialized taking into account the size of the previous layer, which helps gradient descent reach a minimum of the cost function faster and more efficiently. The weights are still random, but their range depends on the number of neurons in the previous layer, giving a controlled initialization and hence faster, more efficient gradient descent.

Why should I use this algorithm?

Rprop+ can converge significantly faster than a standard backpropagation algorithm. Beyond that, we focus on producing code with a minimal memory footprint; we have many more ideas for reducing it, but a lot of work has already been done!

Ok cool, what should I know in order to start?

Our library is easy to use: just create a NeuralNetworkArguments value and pass it to the library. To keep things flexible, you can set many arguments. Pay particular attention to the ActivationFunction, DerivateActivation, ErrorFunction, and DerivateError inputs, since they are the activation and error functions the network will use. We provide the standard Logistic and Hyperbolic Tangent activation functions and the SSE error function, but you can also implement these functions yourself; just respect the required signatures. The following example creates a new NeuralNetwork, trains it, validates it, and predicts a new sample.

Example
    func main() {
        // Create the arguments to pass to the library.
        args := gorpropplus.NeuralNetworkArguments{
            HiddenLayer:        []int{2},
            InputSize:          3,
            OutputSize:         2,
            Threshold:          0.01,
            StepMax:            100,
            LifeSignStep:       100,
            LinearOutput:       false,
            Minus:              10,
            Plus:               100,
            ActivationFunction: gorpropplus.Logistic,
            DerivateActivation: gorpropplus.DerivateLogistic,
            ErrorFunction:      gorpropplus.SSE,
            DerivateError:      gorpropplus.DerivateSSE,
        }

        // Get a fresh new neural network.
        nn, err := gorpropplus.NewNeuralNetworkAndSetup(args)
        if err != nil {
            log.Fatalf("Error while creating a new neural network: %s", err.Error())
        }
        // Train the neural network (inputData and outputData are your datasets).
        err = nn.Train(inputData, outputData)
        if err != nil {
            log.Fatalf("Error while training the neural network: %s", err.Error())
        }
        // Validate the training.
        validation, err := nn.Validate(validationSetInput, validationSetOutput)
        if err != nil {
            log.Fatalf("Error while validating the neural network: %s", err.Error())
        }
        log.Printf("Confusion matrix: %v", validation.ConfusionMatrix)
        // Predict a new sample.
        prediction, err := nn.Predict(input)
        if err != nil {
            log.Fatalf("Error while predicting with the neural network: %s", err.Error())
        }
        log.Printf("Prediction result: %v", prediction)
    }
Ok, that seems easy. But what about performance?

Glad you asked. Even though the Go compiler already does a lot of work, we worked hard to optimise our code. Look at these benchmarks:

BenchmarkTrain-4                                           10000        174518 ns/op      152150 B/op       2573 allocs/op
BenchmarkActivationNeuronAndDerivateHiddenLayer-4        1000000          1326 ns/op         496 B/op         12 allocs/op
BenchmarkComputeNet-4                                     500000          3134 ns/op        1464 B/op         33 allocs/op
BenchmarkCalculateGradients-4                             500000          2673 ns/op        1664 B/op         40 allocs/op
BenchmarkPredictWithLinearOutputTRUE-4                   2000000           565 ns/op         400 B/op         14 allocs/op
BenchmarkPredictWithLinearOutputFALSE-4                  3000000           502 ns/op         360 B/op         12 allocs/op
This library seems very cool! Who created it?

This library was created by me (bestbug) and the best data scientist I've ever met, Franca Marinelli.

What if I find a bug or something wrong?

You can open a pull request (really appreciated), or, if you have no idea what is going wrong, you can always open an issue here on GitHub!

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func CE

func CE(nnResult float64, expected float64) float64

CE is the "cross-entropy" error function

func DerivateCE

func DerivateCE(nnResult float64, expected float64) float64

DerivateCE is the derivative of the CE ("cross-entropy") function

func DerivateIperbolicTangent

func DerivateIperbolicTangent(neuronValue float64) float64

DerivateIperbolicTangent returns the derivative of tanh

func DerivateLogistic

func DerivateLogistic(neuronValue float64) float64

DerivateLogistic is the derivative of the logistic function

func DerivateSSE

func DerivateSSE(nnResult float64, expected float64) float64

DerivateSSE is the derivative of the "sum of squared errors" function

func IperbolicTangent

func IperbolicTangent(neuronValue float64) float64

IperbolicTangent returns the tanh value

func Logistic

func Logistic(neuronValue float64) float64

Logistic is the classic logistic function

func SSE

func SSE(nnResult float64, expected float64) float64

SSE is the "sum of squared errors" error function

Types

type NeuralNetwork

type NeuralNetwork struct {
	// Neural Network properties
	Weights                [][][]float64
	TotalWeights           int
	NrCol                  []int
	NrRow                  []int
	ActivationFunction     func(float64) float64          `bson:"-" json:"-" dynamo:"-"`
	DerivateActivation     func(float64) float64          `bson:"-" json:"-" dynamo:"-"`
	ErrorFunction          func(float64, float64) float64 `bson:"-" json:"-" dynamo:"-"`
	DerivateError          func(float64, float64) float64 `bson:"-" json:"-" dynamo:"-"`
	ActivationFunctionName string                         `bson:"-" json:"-" dynamo:"-"`
	ErrorFunctionName      string                         `bson:"-" json:"-" dynamo:"-"`
	LearningRate           []float64
	// Neural Network configuration
	Threshold    float64
	StepMax      int64
	LifeSignStep int64
	LinearOutput bool
	Minus        float64
	Plus         float64
}

NeuralNetwork is the actual neural network object

func NewNeuralNetworkAndSetup

func NewNeuralNetworkAndSetup(args NeuralNetworkArguments) (*NeuralNetwork, error)

NewNeuralNetworkAndSetup creates a fresh new neural network struct and initialises its internal weights.

func (*NeuralNetwork) Predict

func (n *NeuralNetwork) Predict(input []float64) ([]float64, error)

Predict returns a prediction for the given input, or an error if something went wrong.

func (*NeuralNetwork) Train

func (n *NeuralNetwork) Train(input [][]float64, output [][]float64) error

Train fits the network parameters using a training dataset of examples. If something goes wrong, an error is returned.

func (*NeuralNetwork) Validate

func (n *NeuralNetwork) Validate(input [][]float64, output [][]float64) (*ValidationResult, error)

Validate runs the network on a validation dataset and returns a ValidationResult.

type NeuralNetworkArguments

type NeuralNetworkArguments struct {
	LearningRate       []float64
	HiddenLayer        []int
	InputSize          int
	OutputSize         int
	Threshold          float64
	StepMax            int64
	LifeSignStep       int64
	LinearOutput       bool
	Minus              float64
	Plus               float64
	ActivationFunction func(float64) float64
	DerivateActivation func(float64) float64
	ErrorFunction      func(float64, float64) float64
	DerivateError      func(float64, float64) float64
}

NeuralNetworkArguments gathers all the arguments needed by this library.

type ValidationResult

type ValidationResult struct {
	ConfusionMatrix   [][]int
	CorrectPrediction int
	PredictionResult  [][]float64
}

ValidationResult contains the following info:

  • ConfusionMatrix: a confusion matrix laid out as follows (rows are predicted values, columns are expected values):

                       Expected value
              +----+----+----+----+----+
              |    | V1 | V2 | V3 | V4 |
              +----+----+----+----+----+
              | V1 |  2 |  0 |  1 |  3 |
              +----+----+----+----+----+
    Predicted | V2 |  7 |  3 |  9 |  0 |
      value   +----+----+----+----+----+
              | V3 |  1 |  1 |  4 |  1 |
              +----+----+----+----+----+
              | V4 |  3 |  2 |  1 |  5 |
              +----+----+----+----+----+

  • CorrectPrediction: the number of predictions whose distance from the expected value is < threshold.
  • PredictionResult: all the predictions made.
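With that layout, correct predictions sit on the diagonal, so overall accuracy falls out directly. A small helper sketch (accuracy is not a field the library provides; this derives it from the ConfusionMatrix described above):

```go
package main

import "fmt"

// accuracy computes the fraction of correct predictions from a confusion
// matrix where rows are predicted classes and columns expected classes,
// so the diagonal holds the correct predictions.
func accuracy(cm [][]int) float64 {
	correct, total := 0, 0
	for i, row := range cm {
		for j, n := range row {
			total += n
			if i == j {
				correct += n
			}
		}
	}
	if total == 0 {
		return 0
	}
	return float64(correct) / float64(total)
}

func main() {
	// the 4x4 matrix from the example above
	cm := [][]int{
		{2, 0, 1, 3},
		{7, 3, 9, 0},
		{1, 1, 4, 1},
		{3, 2, 1, 5},
	}
	fmt.Printf("accuracy: %.3f\n", accuracy(cm)) // 14 correct out of 43
}
```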

Source Files

  • activationfunction.go
  • errorfunction.go
  • rpropplus.go
  • utility.go
