# online-autocovariance

## How it works

An online (single-pass) algorithm for computing the autocovariance of a data stream.

cov_k = (1/n) · Σ_{i=1..n-k} (x_{i+k} - μ)(x_i - μ)

Expanding the product in the brackets:

cov_k = (1/n) · Σ_{i=1..n-k} (x_{i+k}·x_i + μ² - μ·x_i - μ·x_{i+k})

Multiplying by (n-k)/(n-k) turns the sums into averages and removes the Σ:

cov_k = α·(β + μ² - μ·μ_i - μ·μ_{i+k}) = α·(β + μ·(μ - μ_i - μ_{i+k}))

Where:

α = (n - k) / n
β = (Σ_{i=1..n-k} x_{i+k}·x_i) / (n - k) = avg(x_{i+k}·x_i)
μ_i = (Σ_{i=1..n-k} x_i) / (n - k)
μ_{i+k} = (Σ_{i=1..n-k} x_{i+k}) / (n - k)

μ is constant here, so (Σ_{i=1..n-k} μ) / (n - k) = μ.
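The decomposition above can be checked numerically. The following sketch (not part of the package; variable names mirror the symbols above) computes lag-k autocovariance both from the direct definition and from the decomposed form, and confirms they agree:

```javascript
// Sanity check of cov_k = α·(β + μ·(μ - μ_i - μ_{i+k}))
// against the direct definition (1/n) Σ (x_{i+k} - μ)(x_i - μ).
const x = [2, 4, 6, 8, 10, 9, 7];
const n = x.length;
const k = 2;
const mu = x.reduce((s, v) => s + v, 0) / n; // global mean μ

// Direct definition
let direct = 0;
for (let i = 0; i < n - k; i++) direct += (x[i + k] - mu) * (x[i] - mu);
direct /= n;

// Decomposed form: α, β, μ_i, μ_{i+k} as defined above
const alpha = (n - k) / n;
let beta = 0, muI = 0, muIK = 0;
for (let i = 0; i < n - k; i++) {
  beta += x[i + k] * x[i];
  muI += x[i];
  muIK += x[i + k];
}
beta /= n - k; muI /= n - k; muIK /= n - k;
const decomposed = alpha * (beta + mu * (mu - muI - muIK));

console.log(Math.abs(direct - decomposed) < 1e-12); // the two forms agree
```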

Using the resulting formula, for each lag k from 0 to K we only need to accumulate:

• μ - average of the full (0, N) interval; it does not depend on k
• μ_i - average of the (0, N-k) interval
• μ_{i+k} - average of the (k, N) interval
• β - average of the products x_{i+k}·x_i

To update μ iteratively we can use the simple incremental algorithm from online-mean.
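A minimal sketch of that incremental mean update (the class name `OnlineMean` is illustrative, not the online-mean package's actual API):

```javascript
// Incremental (online) mean: μ_n = μ_{n-1} + (x - μ_{n-1}) / n
class OnlineMean {
  constructor() { this.n = 0; this.mean = 0; }
  push(x) {
    this.n += 1;
    this.mean += (x - this.mean) / this.n;
    return this.mean;
  }
}

const m = new OnlineMean();
[2, 4, 6].forEach(v => m.push(v));
console.log(m.mean); // → 4
```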

μ_i is just a lagged (t-k) value of μ:
◆◆◆◆◆ μ_i(0, N-k), k=0
◆◆◆◆◇ μ_i(0, N-k), k=1
◆◆◆◇◇ μ_i(0, N-k), k=2
To update μ_i we compute the current μ, push it to the front of the μ_i buffer, and shift the older values to the right.
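That push-and-shift update can be sketched with a fixed-size buffer (names are assumptions for illustration; `muI[k]` holds the mean of the first n-k observations):

```javascript
// μ_i buffer: after each observation, unshift the current mean to the
// front; older means shift right, so muI[k] is the mean over (0, N-k).
const K = 2;          // maximum lag tracked
const muI = [];       // muI[k] = mean of (0, N-k)
let n = 0, sum = 0;

function update(x) {
  n += 1; sum += x;
  muI.unshift(sum / n);               // push current μ to the front...
  if (muI.length > K + 1) muI.pop();  // ...dropping means older than lag K
}

[1, 2, 3, 4].forEach(update);
// muI[0] = mean(1..4), muI[1] = mean(1..3), muI[2] = mean(1, 2)
console.log(muI); // → [2.5, 2, 1.5]
```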

μ_{i+k} and β are "delayed":
◆◆◆◆◆ μ_{i+k}(k, N), k=0
◇◆◆◆◆ μ_{i+k}(k, N), k=1
◇◇◆◆◆ μ_{i+k}(k, N), k=2

To update their values we need to track the last k observations of x (xlag) and their weights (wlag) using the online-lag module:
xlag: [x[t], x[t-1], x[t-2], ...]
wlag: [w[t], w[t-1], w[t-2], ...]
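The lag tracking could be sketched like this (an illustration of the idea, not the online-lag package's actual API):

```javascript
// Fixed-size lag buffers, newest entry first: xlag holds the last K+1
// observations, wlag holds their weights (1 once a slot has been filled).
const K = 2;
const xlag = new Array(K + 1).fill(0); // [x[t], x[t-1], ..., x[t-K]]
const wlag = new Array(K + 1).fill(0); // weight 0 = slot not yet observed

function observe(x) {
  xlag.pop(); xlag.unshift(x); // shift observations right, insert newest
  wlag.pop(); wlag.unshift(1); // each observation carries weight 1
}

observe(5); observe(7);
console.log(xlag); // → [7, 5, 0]
console.log(wlag); // → [1, 1, 0]  (lag 2 still has zero weight)
```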

Until wlag[k] becomes non-zero, μ_{i+k} and β are zero for lag k. Each new observation pushes its weight into wlag, shifting the existing values to the right; after, say, 3 observations the 3rd entry of wlag is 1, which yields the first non-zero value for the corresponding lag.
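Putting the pieces together, the whole scheme could look like the following self-contained sketch (an assumed structure for illustration, not the package's actual source). For each lag k up to K it accumulates μ, μ_i, μ_{i+k} and β online, then combines them as cov_k = α·(β + μ·(μ - μ_i - μ_{i+k})):

```javascript
function makeAutocov(K) {
  let n = 0, sum = 0;                          // global count and sum for μ
  const xlag = [];                             // last K+1 observations, newest first
  const muI = [];                              // muI[k] = mean of (0, N-k)
  const sumDelayed = new Array(K + 1).fill(0); // Σ x_{i+k} over (k, N)
  const sumProd = new Array(K + 1).fill(0);    // Σ x_{i+k}·x_i

  return {
    push(x) {
      n += 1; sum += x;
      xlag.unshift(x); if (xlag.length > K + 1) xlag.pop();
      muI.unshift(sum / n); if (muI.length > K + 1) muI.pop();
      // Delayed accumulators only start once x[t-k] exists (i.e. n > k)
      for (let k = 0; k < xlag.length; k++) {
        sumDelayed[k] += x;        // contributes to μ_{i+k}
        sumProd[k] += x * xlag[k]; // contributes to β = avg(x_{i+k}·x_i)
      }
    },
    cov(k) {
      const m = n - k;             // number of terms in the lag-k sum
      if (m <= 0) return 0;
      const mu = sum / n;
      const alpha = m / n;
      const beta = sumProd[k] / m;
      const muIK = sumDelayed[k] / m;
      return alpha * (beta + mu * (mu - muI[k] - muIK));
    }
  };
}

const ac = makeAutocov(2);
[1, 2, 3].forEach(v => ac.push(v));
console.log(ac.cov(0)); // variance of [1, 2, 3] with 1/n normalization: 2/3
console.log(ac.cov(1)); // lag-1 autocovariance: 0 for this sequence
```

Each `push` is O(K), and querying any lag is O(1), which is the point of accumulating the four quantities instead of re-scanning the data.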


### Install

`npm i online-autocovariance`

### Repository

github.com/onlinestats/online-autocovariance

### License

ISC