distributed parameter optimization through natural evolution strategies.
in `worker.js`:

```js
let worker = // configure the worker with the options listed below
```

in `master.js`:

```js
let master = 3001 // port or http/https server to bind to
```

that's it! run `node master.js`, and then you can run `node worker.js` on as many machines as you'd like to linearly speed up the optimization.
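the worker configuration above is elided, so here's a sketch of what an options object might contain. the option names come from the list below; the `fitness` signature (candidate parameters in, scalar reward out, higher is better) and the toy problem are assumptions for illustration, not this package's confirmed api:

```js
// a sketch of a worker options object — option names are from this
// readme's option list; the fitness signature is an assumption
const target = [0.5, -1.2, 3.0] // toy problem: recover this vector

const options = {
  master: 'http://localhost:3001', // url of the master server
  initialParameters: [0, 0, 0],    // same on every worker!
  // toy fitness: negative squared distance to the target — a real
  // environment would run an episode here and return its reward
  fitness: parameters =>
    -parameters.reduce((sum, p, i) => sum + (p - target[i]) ** 2, 0),
  optimizer: 'adam',
  alpha: 0.01,
  sigma: 0.1,
  syncEpisodes: true,
}

console.log(options.fitness([0, 0, 0])) // reward of the initial parameters
```

this object would then be handed to the package's worker entry point in `worker.js`.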
- `master`: string. required. url of the master server.
- `syncEpisodes`: boolean. optional, default: `true`. if set to `false`, episodes start as soon as they end rather than waiting for the other workers to finish theirs. you'll get stale updates to your parameters, but for some environments (especially realtime environments) this still seems to work.
- `initialParameters`: array. required. numbers representing the initial state of the parameters you're trying to optimize. make sure these are the same for all workers!
- `fitness`: function. required. see the example above; this is how you evaluate your parameters.
- `optimizer`: string. optional, default: `'adam'`. which optimizer to use; currently supports `'adam'` or `'sgd'`.
- `cacheNoise`: boolean. optional, default: `true`. when `true`, a block of gaussian noise is generated at startup instead of on the fly.
- `alpha`: number. optional, default: `0.01`. basically your stepsize / learning rate.
- `sigma`: number. optional, default: `0.1`. standard deviation of the gaussian noise added to trial parameters in the fitness function.
- `blockSize`: number or function. optional, default: `n => n`. how many episodes to run between parameter updates. if it's a function, it's passed the number of workers currently online.
- `savePath`: string. optional. if supplied, models will be saved to a file at this path and loaded from it when you start the master.
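since `blockSize` can be a function of the number of workers currently online, here's a tiny sketch — the ×4 scaling is an arbitrary illustration, not a recommendation:

```js
// blockSize as a function: it receives the number of workers currently
// online and returns how many episodes to run between parameter updates
const blockSize = workersOnline => workersOnline * 4

console.log(blockSize(8)) // 32 episodes per update with 8 workers online
```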