sense-neuralnetwork


Create simple neural networks within Node-RED!

This package contains two nodes:

Neural Network node: This node contains a single neural network without any learning algorithms. It can mutate itself semi-randomly, output its fitness score and convert the internal network to a string which can be parsed by another Neural Network node. The parameters of this node are as follows:

  • Layers: A JSON array of numbers. The length of the array determines the number of layers in the network, and each number determines the number of neurons in that layer. The first layer is the input layer, the last is the output layer. For example, the setting [5, 2, 1] produces a network with three layers: the input layer has five neurons, the hidden layer has two neurons, and the output layer has one neuron.

  • Activation Type: The method the network uses to calculate each neuron's value. Certain activation methods may be better suited to certain tasks. The available types are detailed at the bottom of this readme.

    Input a number array to receive the network's output as a new number array in JSON, depending on the layer settings. The first layer determines the length of the input array; the last layer determines the length of the output array.

    Set msg.topic when sending input to determine the node's action (see the example after the list below).

    • output: supply an array of numbers as input; the node outputs the network's resulting number array.
    • mutate: mutates the network randomly.
    • fitness: supply an empty payload to get the current fitness value, or a number to set a new fitness value.
    • tostring: converts the network to a string and outputs it.
    • parse: supply a network string to load it into the node.
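
As an illustration, a function node wired in front of the Neural Network node could prepare its input like the sketch below. The layer setting and payload values are assumptions for this example; the topic values are the ones listed above.

```javascript
// Hypothetical function node placed in front of the Neural Network node.
// With the layer setting [5, 2, 1], the input array must contain 5 numbers
// and the network's reply will contain 1 number.
//
// The other topics follow the same pattern, for example:
//   { topic: "mutate" }                 - mutate the network semi-randomly
//   { topic: "fitness", payload: 0.75 } - set the fitness; an empty payload reads it
//   { topic: "tostring" }               - export the network as a string
//   { topic: "parse", payload: str }    - load a previously exported network string
msg.topic = "output";
msg.payload = [0.2, 0.8, 0.1, 0.5, 0.9];
return msg;
```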

Reinforced Agent node: This node contains a neural network that can optimize itself if supplied with positive and negative reinforcement. The parameters are the same as the Neural Network node, with two extra ones detailed below:

  • Learn Evaluation Interval: After the agent has received this many pieces of positive or negative feedback, the network evaluates its current iteration and starts a new one. Setting this value higher may in some cases improve learning accuracy, but lowers the speed at which progress is made. Minimum of 1; defaults to 5.
  • Mutation Scale: How drastic the changes to the internal network are on each iteration. Minimum of 1.

After learning, this node can also output a network string of its internal network. That string can then be loaded into the Neural Network node above once learning is complete, or if another learning method is desired as a follow-up.

Input a number array to receive the network's output as a new number array in JSON, depending on the layer settings.
The first layer determines the length of the input array; the last layer determines the length of the output array.
Sending positive or negative feedback allows the agent to improve itself.

Set msg.topic when sending input to determine the node's action (see the feedback-loop sketch after the list below).
- output: supply an array of numbers as input; the node outputs the network's resulting number array.
- fitness: outputs the current fitness value.
- tostring: converts the agent to a string and outputs it.
- parse: supply an agent string to load it into the node.
- good: gives the agent positive feedback.
- bad: punishes the agent with negative feedback.
- getnetwork: outputs a network string of the agent's current internal neural network.
- getbestnetwork: outputs a network string of the last best iteration of the agent's internal neural network.
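
A minimal feedback loop could look like the sketch below: a function node receives the agent's output, compares it to a desired target, and sends good or bad feedback back into the agent. The target value and tolerance are assumptions made for this illustration.

```javascript
// Hypothetical function node wired between the Reinforced Agent's output
// and its input, closing the reinforcement loop.
const target = 1;                    // assumed desired output
const prediction = msg.payload[0];   // agent output (output layer of size 1)

// Reward the agent when it is close enough to the target, punish it otherwise.
const feedback = {
    topic: Math.abs(prediction - target) < 0.1 ? "good" : "bad",
    payload: null
};
return feedback;
```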

Activation Methods:

Sigmoid: 0...1 -> 1 / (1 + Math.exp(-input))

TanH: -1...1 -> Math.tanh(input)

ReLU: 0...infinity -> Math.max(0, input)

SoftPlus: 0...infinity -> Math.log(Math.exp(input) + 1)

Linear: -infinity...infinity -> outputs the weighted input directly
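
For reference, the formulas above correspond to the following plain JavaScript functions (a sketch for clarity, not the package's internal code):

```javascript
// Sketch of the activation formulas listed above.
const activations = {
    sigmoid:  (x) => 1 / (1 + Math.exp(-x)),    // 0...1
    tanh:     (x) => Math.tanh(x),              // -1...1
    relu:     (x) => Math.max(0, x),            // 0...infinity
    softplus: (x) => Math.log(Math.exp(x) + 1), // 0...infinity
    linear:   (x) => x                          // -infinity...infinity
};
```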

Install

npm i sense-neuralnetwork
