# Node-Kfk
## Why we need it

Kafka is not friendly to programmers who do not have deep knowledge of it. Since most of our use cases are similar, we want to provide a simple client for simple Kafka use cases.
## Usage

### Install

```bash
npm i kfk -S
```
### Kafka Producer

```js
const { KafkaProducer } = require('kfk')

const conf = {
  'client.id': 'kafka',
  'metadata.broker.list': '127.0.0.1:9092',
}
const topicConf = {}
const options = {
  debug: false,
}

const producer = new KafkaProducer(conf, topicConf, options)

await producer.connect()
console.log('producer connected')

while (true) {
  // build a unique payload, e.g. timestamp plus a random suffix
  const msg = `${new Date().toISOString()}-${Math.random().toString(16).slice(2)}`
  await producer.produce('test-topic', null, msg)
}
```
### Kafka ALO (at least once) Consumer

```js
const { KafkaALOConsumer } = require('kfk')

const conf = {
  'group.id': 'alo-consumer-test-1',
  'metadata.broker.list': '127.0.0.1:9092',
}
const topicConf = {
  'auto.offset.reset': 'largest',
}
const options = {
  debug: false,
}

const consumer = new KafkaALOConsumer(conf, topicConf, options)
await consumer.connect()
await consumer.subscribe(['test-topic'])

while (true) {
  await consumer.consume(message => {
    console.log(message.value.toString())
  })
}
```
### Kafka AMO (at most once) Consumer

```js
const { KafkaAMOConsumer } = require('kfk')

const conf = {
  'group.id': 'amo-consumer-test-1',
  'metadata.broker.list': '127.0.0.1:9092',
}
const topicConf = {
  'auto.offset.reset': 'largest',
}
const options = {
  debug: false,
}

const consumer = new KafkaAMOConsumer(conf, topicConf, options)
await consumer.connect()
await consumer.subscribe(['test-topic'])

while (true) {
  await consumer.consume(message => {
    console.log(message.value.toString())
  })
}
```
### Graceful Death

```js
// Disconnect the client cleanly before the process exits.
const gracefulDeath = async () => {
  await consumer.die()
  process.exit(0)
}

process.on('SIGINT', gracefulDeath)
process.on('SIGQUIT', gracefulDeath)
process.on('SIGTERM', gracefulDeath)
```
## Deep Dive

### Choose the right consumer

`node-kfk` provides two consumer choices: `KafkaALOConsumer` and `KafkaAMOConsumer`. `ALO` means `At Least Once`, and `AMO` means `At Most Once`.
#### At Least Once

If you cannot tolerate any message loss and your consumer function already handles repeated delivery correctly (it is idempotent), you want an `at least once` guarantee.

`KafkaALOConsumer` monitors the execution state of your consume callback. If an `Error` is thrown in the callback (or the process crashes), consumption resumes from the offsets you last consumed successfully.
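Because a failed batch is re-delivered, the same message can reach your callback twice, so the callback must be idempotent. Below is a minimal sketch of one common approach, deduplicating by the message's unique log position; the `processedOffsets` store and `handleMessage` helper are hypothetical illustrations, not part of `node-kfk`:

```js
// Track which messages have already been applied, keyed by
// topic:partition:offset. In production this set would live in a
// durable store (e.g. Redis or a database), not in process memory.
const processedOffsets = new Set()

function handleMessage(message) {
  const key = `${message.topic}:${message.partition}:${message.offset}`
  if (processedOffsets.has(key)) {
    return false // duplicate delivery after a retry; skip it
  }
  // ... apply side effects here ...
  processedOffsets.add(key)
  return true
}
```

A callback like this can be passed straight to `consumer.consume(...)`: the first delivery of a message is applied, and any redelivery after a crash is recognized and skipped.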
#### At Most Once

If you can tolerate losing a few messages when problems happen, but want every message to be handled at most once, you can simply use `KafkaAMOConsumer`.

`KafkaAMOConsumer` auto-commits offsets as soon as messages are fetched. It has better performance than `KafkaALOConsumer`, but does not guarantee that every message will be consumed.
Offset Management Detail
In KafkaAMOConsumer
, node-kfk
use the enable.auto.commit=true
and enable.auto.offset.store=true
options which completely depend on librdkafka to management the offsets and will auto commit the latest offsets periodically(the interval depends on auto.commit.interval.ms
, default is 1000
).
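Concretely, the AMO consumer's effective librdkafka configuration looks roughly like this (a sketch of the relevant keys only; the exact values `node-kfk` applies internally may differ):

```js
// Effective librdkafka settings for the at-most-once consumer.
const amoConf = {
  'group.id': 'amo-consumer-test-1',
  'metadata.broker.list': '127.0.0.1:9092',
  // librdkafka both stores and commits offsets on its own:
  'enable.auto.commit': true,
  'enable.auto.offset.store': true,
  // commit the stored offsets every second (librdkafka default):
  'auto.commit.interval.ms': 1000,
}
```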
In `KafkaALOConsumer`, we still want librdkafka to commit automatically, but we control the offset store manually (so we set `enable.auto.commit=true` and `enable.auto.offset.store=false`). Once `node-kfk` has ensured that all messages have been handled successfully, it stores the latest offsets in the offset store, where they wait to be committed by librdkafka.
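The "store the latest offsets" step can be sketched as follows: after a batch has been fully processed, compute the highest handled offset per topic/partition and store `offset + 1` (the next offset to consume — the convention Kafka offset stores expect). The `latestOffsets` helper below is a hypothetical illustration of that bookkeeping, not `node-kfk`'s actual internals:

```js
// Given a batch of successfully handled messages, compute the offsets
// to place in the offset store: per topic/partition, last offset + 1.
function latestOffsets(messages) {
  const latest = new Map()
  for (const m of messages) {
    const key = `${m.topic}:${m.partition}`
    const prev = latest.get(key)
    if (prev === undefined || m.offset + 1 > prev.offset) {
      latest.set(key, { topic: m.topic, partition: m.partition, offset: m.offset + 1 })
    }
  }
  return [...latest.values()]
}
```

The resulting array has the same shape node-rdkafka's `consumer.offsetsStore(...)` accepts; librdkafka then commits whatever sits in the store on its next auto-commit tick.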
## Others

The client has been tested on:

- os: linux, env: KAFKA_VERSION=0.10.2.2, node_js: 8
- os: linux, env: KAFKA_VERSION=0.10.2.2, node_js: 10
- os: linux, env: KAFKA_VERSION=0.11.0.3, node_js: 10
- os: linux, env: KAFKA_VERSION=1.1.0, node_js: 10
- os: linux, env: KAFKA_VERSION=2.0.0, node_js: 10
More detailed documentation for the `conf` and `topicConf` params can be found in the librdkafka and node-rdkafka docs.