Note: we're no longer publishing this package to npm. We're using the yolean/node-kafka-cache
Docker image. For regular npm dependencies use Yolean/kafka-cache#[commit]
in your package.json.
kafka-cache
The log-backed in-memory cache we need in almost every service
Usage
```js
const KafkaCache = require('kafka-cache');
const cache = KafkaCache.create({
  // broker and topic options were elided in the original snippet;
  // see the repository for the full option list
});
cache.onReady(() => {
  // read from the cache once the backing topic has been consumed
});
```
Sample data model
All IDs are UUIDs.
```graphql
type User {
  id: ID!
  email: String
  displayName: String!
  groups: [Organization]!
}

type Organization {
  id: ID!
  name: String!
}

enum SessionType {
  REGULAR
  MEETINGSCREEN
}

type Session {
  id: ID!
  user: User!
  type: SessionType!
  ipAddress: String
}
```
Operational aspects
We'll reference these assumptions using OPS[X], as they'll be essential for scoping.
1. We run Kubernetes (or equivalent), i.e. something with pods in which our microservice is a container.
2. Likewise, each service is configured and scaled using a Deployment.
3. We monitor this using Prometheus (or equivalent), i.e. a service can trust that a human will be paged if an important metric deviates from the expected.
Pod identity
Pods give each instance of our service a unique identity, typically through an environment variable:
```yaml
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```
The identifier becomes `$POD_NAMESPACE.[service name].$POD_NAME`.
With OPS2 we can reduce this to `$POD_NAMESPACE.$POD_NAME`,
because the deployment's name is part of the pod name
and can be assumed to reflect the service name.
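Under those assumptions the identifier can be derived in a few lines. This is a sketch: `buildIdentifier` is a hypothetical helper name, and the example pod name assumes the format Deployments produce (`<deployment>-<replicaset hash>-<random suffix>`).

```javascript
// Hypothetical helper: builds the identifier from the Downward API
// environment variables injected above.
function buildIdentifier(env) {
  // OPS2 lets us skip an explicit service-name segment, because the
  // Deployment (and thus service) name is a prefix of the pod name.
  return `${env.POD_NAMESPACE}.${env.POD_NAME}`;
}

// In a running pod this would read process.env directly:
console.log(buildIdentifier({
  POD_NAMESPACE: 'default',
  POD_NAME: 'my-service-5d4f8b9c77-x2x9k',
}));
// → default.my-service-5d4f8b9c77-x2x9k
```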
Caching rules
- The cache is only updated through topic events. In other words, the container is not allowed to write to the cache.
- Caches may lag behind their backing topic(s). We should be able to monitor this using regular Kafka consumer lag.
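The first rule can be sketched as an event-sourced map: the only code path that mutates the in-memory state is the topic consumer callback. The message shape here (`{ key, value }`, with a `null` value meaning deletion) is an assumption for illustration, not necessarily this package's actual wire format.

```javascript
// In-memory state, mutated ONLY by the consumer callback below.
const store = new Map();

// Assumed message shape: { key, value }; value === null means delete.
function onMessage(message) {
  if (message.value === null) {
    store.delete(message.key);
  } else {
    store.set(message.key, message.value);
  }
}

// Simulate consuming two events from the backing topic:
onMessage({ key: 'user-1', value: { displayName: 'Alice' } });
onMessage({ key: 'user-1', value: null });
console.log(store.has('user-1')); // → false
```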
Handling inconsistencies
Design decisions, probably per topic and/or per service.
- Do we accept invalid writes to the topic, i.e. do we technically enforce a schema?
- Do we validate individual messages at read?
- Do we validate the cache mutation that a message leads to?
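For per-message validation at read, a check against the `Session` type from the sample data model could look like the sketch below. The hand-rolled checks are an assumption for illustration; a service choosing to technically enforce a schema would use schema tooling instead.

```javascript
// Validate a consumed message against the Session shape from the
// sample data model. Hand-rolled for illustration; schema tooling
// (e.g. Avro or JSON Schema) would be the enforced alternative.
function isValidSession(msg) {
  return typeof msg.id === 'string'
    && msg.user !== null && typeof msg.user === 'object'
    && ['REGULAR', 'MEETINGSCREEN'].includes(msg.type)
    && (msg.ipAddress === undefined || msg.ipAddress === null
        || typeof msg.ipAddress === 'string');
}

console.log(isValidSession({ id: 's1', user: { id: 'u1' }, type: 'REGULAR' })); // → true
console.log(isValidSession({ id: 's1', user: { id: 'u1' }, type: 'LAPTOP' })); // → false
```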