koa-grounded
A distributed rate-limit middleware for Koa 2, inspired by Dcard's intern preliminary project.
Watch it in action on asciinema
Quick Start
Install
```
$ yarn add koa-grounded
```
Basic Usage
```javascript
const Koa = require('koa');
const Grounded = require('koa-grounded');

const app = new Koa();

// Remember to set the app.proxy flag if you are running under a reverse proxy
app.proxy = true;

const RateLimiter = Grounded({
  partitionKey: 'grounded',
  ratelimit: 10,
  globalEXP: 60 * 1000 * 1000, // in microseconds
  timeout: 2000,               // in milliseconds
  cacheSize: 500,
  dbStr: 'redis://127.0.0.1:6379',
  verbose: true,
});

// Grounded Middleware
app.use(RateLimiter);

// Ping Pong
app.use(async (ctx) => {
  ctx.body = 'pong';
});

// Using IPv4 instead of IPv6
app.listen(4000, '0.0.0.0');

console.log('Listening on port 4000');
```
See the API section for further information.
API
Param | Type | Description |
---|---|---|
`partitionKey` | string | Partition key for the current rate limiter to listen on. Creates the `${partitionKey}-exp`, `${partitionKey}-remaining` and `${partitionKey}-ban` keys on the Redis instance, and subscribes to the `${partitionKey}-ban` and `${partitionKey}-unban` channels. |
`ratelimit` | number | Rate limit for each user's session. |
`globalEXP` | number | Expiration time of a user's rate-limit session, in microseconds (10^-6 seconds). |
`timeout` | number | Worker-Redis synchronization interval, in milliseconds (10^-3 seconds). It is suggested to shorten this value if the worker's localQueue reaches MTU size within one interval. |
`cacheSize` | number | Maximum number of keys stored in the local LRU cache. |
`dbStr` | string | Connection string of the Redis instance; see luin/ioredis#connect-to-redis for further information. |
`verbose` | boolean | Whether to show access log information. |
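Putting the parameters together, an options object might look like the following sketch (the field names come from the table above; the concrete values are illustrative only, not recommendations):

```javascript
// Illustrative koa-grounded options; values are examples only.
const groundedOptions = {
  partitionKey: 'grounded',        // key/channel prefix on Redis
  ratelimit: 100,                  // max requests per user session
  globalEXP: 60 * 1000 * 1000,     // session expiry: 60 s, in microseconds
  timeout: 2000,                   // worker-Redis sync interval, in milliseconds
  cacheSize: 500,                  // max keys kept in the local LRU cache
  dbStr: 'redis://127.0.0.1:6379', // ioredis connection string
  verbose: false,                  // access-log output
};
```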
Running tests
```
$ yarn test
```
Overview
Concept
Introduction
Since Redis is fast enough at its in-memory data operations, the bottleneck of a Redis connection is the round-trip time (RTT), which can dramatically limit the throughput of services that use Redis as their centralized datastore.
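As a back-of-the-envelope sketch of that bottleneck: a client that waits for each reply before sending the next command completes at most 1/RTT operations per second per connection (the RTT figures here are the ping averages from the Benchmark section):

```javascript
// Per-connection throughput ceiling when every command costs one round trip.
function maxOpsPerSec(rttMs) {
  return 1000 / rttMs;
}

const localCeiling = maxOpsPerSec(0.093); // local Redis, avg RTT 0.093 ms
const remoteCeiling = maxOpsPerSec(67);   // remote Redis, avg RTT ~67 ms

console.log(Math.round(localCeiling));  // ~10753 ops/s per connection
console.log(Math.round(remoteCeiling)); // ~15 ops/s per connection
```

With a remote Redis, a naive one-command-per-round-trip limiter is capped three orders of magnitude below the local case, which is why the approach below batches work per RTT.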
This project implements an eventually consistent, availability- and partition-tolerant (AP) approach, using pipelined Lua scripts, a local LRU cache and Pub/Sub to optimize the throughput of the rate-limit service. It is also capable of:
- sharing state among all workers
- key-space partitioning
- rate limiting
As a result, these optimizations reach up to roughly 10x the throughput of a plain, non-pipelined Redis approach, with the largest gains over a remote Redis (see the Benchmark section for details).
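A minimal sketch of the core idea (assuming nothing about the actual internals beyond the description above): each worker answers rate-limit checks from its local view and reconciles with Redis once per `timeout` interval, so a request never waits a full RTT. Here `sendPipeline` is a hypothetical stand-in for the pipelined Lua call:

```javascript
// Sketch only: local-first rate limiting with deferred Redis sync.
// The real middleware reconciles via pipelined Lua scripts and Pub/Sub.
class LocalLimiter {
  constructor(ratelimit) {
    this.ratelimit = ratelimit;
    this.remaining = new Map(); // key -> remaining tokens (local view)
    this.pending = [];          // usage queued for the next Redis sync
  }

  // Decide from local state; never blocks on the network.
  check(key) {
    const left = this.remaining.has(key) ? this.remaining.get(key) : this.ratelimit;
    if (left <= 0) return false; // grounded: over the limit
    this.remaining.set(key, left - 1);
    this.pending.push(key);      // record usage for the sync round
    return true;
  }

  // Called every `timeout` ms: drain the queue in one pipelined round trip.
  flush(sendPipeline) {
    const batch = this.pending;
    this.pending = [];
    if (batch.length > 0) sendPipeline(batch);
    return batch.length;
  }
}
```

Batching many local decisions into one pipelined round trip is what decouples per-request latency from RTT, matching the remote-Redis numbers in the Benchmark section.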
Implementation
WIP
Benchmark
One Worker, local Redis
(Redis Instance on Intel Core i7 9850H, 16GB RAM, macOS 10.15.3)
Ping RTT
```
--- localhost ping statistics ---
20 packets transmitted, 20 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.050/0.093/0.113/0.017 ms
```
using koajs/ratelimit
```
➜ ~ wrk -t12 -c1200 -d5s
Running 5s test
  12 threads and 1200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.73ms   20.94ms 627.03ms   81.09%
    Req/Sec   760.13    236.27     1.42k    74.61%
  39505 requests in 5.07s, 11.05MB read
  Socket errors: connect 0, read 623, write 0, timeout 0
  Non-2xx or 3xx responses: 38505
Requests/sec:   7794.90
Transfer/sec:      2.18MB
```
our implementation
```
➜ ~ wrk -t12 -c1200 -d5s
Running 5s test
  12 threads and 1200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    13.74ms   11.99ms 479.43ms   98.00%
    Req/Sec     1.53k   592.69     3.90k    77.36%
  84128 requests in 5.10s, 18.43MB read
  Socket errors: connect 0, read 529, write 0, timeout 0
  Non-2xx or 3xx responses: 83128
Requests/sec:  16500.27
Transfer/sec:      3.61MB
```
One Worker, remote Redis
(Redis Instance located on TANet, vSphere6.7, 1vCPU 2GB RAM, CentOS7 + docker 19.03)
Ping RTT
```
--- Remote-Redis ping statistics ---
20 packets transmitted, 20 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 19.858/67.130/250.806/61.965 ms
```
using koajs/ratelimit
```
➜ ~ wrk -t12 -c1200 -d5s
Running 5s test
  12 threads and 1200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   129.69ms   56.67ms 632.29ms   71.43%
    Req/Sec   155.99     71.33   323.00     65.16%
  7712 requests in 5.10s, 2.10MB read
  Socket errors: connect 0, read 625, write 0, timeout 0
  Non-2xx or 3xx responses: 6712
Requests/sec:   1513.19
Transfer/sec:    422.69KB
```
our implementation
```
➜ ~ wrk -t12 -c1200 -d5s
Running 5s test
  12 threads and 1200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.75ms   12.07ms 449.36ms   76.08%
    Req/Sec     1.48k   526.48     3.13k    74.82%
  81097 requests in 5.10s, 17.76MB read
  Socket errors: connect 0, read 613, write 0, timeout 0
  Non-2xx or 3xx responses: 80097
Requests/sec:  15899.12
Transfer/sec:      3.48MB
```
Roadmap
- Worker-Threads
- Increase Unit Test Coverage
- Support for other Redis clients
- Lua script for cleaning expired keys on Redis