
    @helm-charts/bitnami-kafka

    1.9.0-0.1.0 • Public • Published

    @helm-charts/bitnami-kafka

    Apache Kafka is a distributed streaming platform.

    Field Value
    Repository Name bitnami
    Chart Name kafka
    Chart Version 1.9.0
    NPM Package Version 0.1.0
    Helm chart `values.yaml` (default values)
    ## Global Docker image parameters 
    ## Please, note that this will override the image parameters, including dependencies, configured to use the global value 
    ## Current available global Docker image parameters: imageRegistry and imagePullSecrets 
    ## 
    # global: 
    #   imageRegistry: myRegistryName 
    #   imagePullSecrets: 
    #     - myRegistryKeySecretName 
     
    ## Bitnami Kafka image version 
    ## ref: https://hub.docker.com/r/bitnami/kafka/tags/ 
    ## 
    image:
      registry: docker.io
      repository: bitnami/kafka
      tag: 2.2.0
      ## Specify an imagePullPolicy 
      ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' 
      ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images 
      ## 
      pullPolicy: Always
      ## Optionally specify an array of imagePullSecrets. 
      ## Secrets must be manually created in the namespace. 
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 
      ## 
      # pullSecrets: 
      #   - myRegistryKeySecretName 
     
      ## Set to true if you would like to see extra information on logs 
      ## It turns on BASH and NAMI debugging in minideb 
      ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging 
      debug: false
     
    ## StatefulSet controller supports automated updates. There are two valid update strategies: RollingUpdate and OnDelete 
    ## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets 
    ## 
    updateStrategy: RollingUpdate
     
    ## Partition update strategy 
    ## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions 
    ## 
    # rollingUpdatePartition: 
     
    replicaCount: 1
     
    config: |-
    #  broker.id=-1 
    #  listeners=PLAINTEXT://:9092 
    #  advertised.listeners=PLAINTEXT://KAFKA_IP:9092 
    #  num.network.threads=3 
    #  num.io.threads=8 
    #  socket.send.buffer.bytes=102400 
    #  socket.receive.buffer.bytes=102400 
    #  socket.request.max.bytes=104857600 
    #  log.dirs=/opt/bitnami/kafka/data 
    #  num.partitions=1 
    #  num.recovery.threads.per.data.dir=1 
    #  offsets.topic.replication.factor=1 
    #  transaction.state.log.replication.factor=1 
    #  transaction.state.log.min.isr=1 
    #  log.flush.interval.messages=10000 
    #  log.flush.interval.ms=1000 
    #  log.retention.hours=168 
    #  log.retention.bytes=1073741824 
    #  log.segment.bytes=1073741824 
    #  log.retention.check.interval.ms=300000 
    #  zookeeper.connect=ZOOKEEPER_SERVICE_NAME 
    #  zookeeper.connection.timeout.ms=6000 
    #  group.initial.rebalance.delay.ms=0 
     
    ## Kafka docker image available customizations 
    ## https://github.com/bitnami/bitnami-docker-kafka#configuration 
    ## 
    ## Allow to use the PLAINTEXT listener. 
    allowPlaintextListener: true
     
    ## The address the socket server listens on. 
    # listeners: 
     
    ## Hostname and port the broker will advertise to producers and consumers. 
    # advertisedListeners: 
     
    ## ID of the Kafka node. 
    brokerId: -1
     
    ## Switch to enable topic deletion or not. 
    deleteTopicEnable: false
     
    ## Kafka's Java Heap size. 
    heapOpts: -Xmx1024m -Xms1024m
     
    ## The number of messages to accept before forcing a flush of data to disk. 
    logFlushIntervalMessages: 10000
     
    ## The maximum amount of time a message can sit in a log before we force a flush. 
    logFlushIntervalMs: 1000
     
    ## A size-based retention policy for logs. 
    logRetentionBytes: _1073741824
     
    ## The interval at which log segments are checked to see if they can be deleted. 
    logRetentionCheckIntervalMs: 300000
     
    ## The minimum age of a log file to be eligible for deletion due to age. 
    logRetentionHours: 168
     
    ## The maximum size of a log segment file. When this size is reached a new log segment will be created. 
    logSegmentBytes: _1073741824
     
    ## Log message format version 
    logMessageFormatVersion: ''
     
    ## A comma separated list of directories under which to store log files. 
    logsDirs: /opt/bitnami/kafka/data
     
    ## The largest record batch size allowed by Kafka 
    maxMessageBytes: _1000012
     
    ## Default replication factors for automatically created topics 
    defaultReplicationFactor: 1
     
    ## The replication factor for the offsets topic 
    offsetsTopicReplicationFactor: 1
     
    ## The replication factor for the transaction topic 
    transactionStateLogReplicationFactor: 1
     
    ## Overridden min.insync.replicas config for the transaction topic 
    transactionStateLogMinIsr: 1
     
    ## The number of threads doing disk I/O. 
    numIoThreads: 8
     
    ## The number of threads handling network requests. 
    numNetworkThreads: 3
     
    ## The default number of log partitions per topic. 
    numPartitions: 1
     
    ## The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. 
    numRecoveryThreadsPerDataDir: 1
     
    ## The receive buffer (SO_RCVBUF) used by the socket server. 
    socketReceiveBufferBytes: 102400
     
    ## The maximum size of a request that the socket server will accept (protection against OOM). 
    socketRequestMaxBytes: _104857600
     
    ## The send buffer (SO_SNDBUF) used by the socket server. 
    socketSendBufferBytes: 102400
     
    ## Timeout in ms for connecting to zookeeper. 
    zookeeperConnectionTimeoutMs: 6000
     
    ## Authentication parameters 
    ## https://github.com/bitnami/bitnami-docker-kafka#security 
    ## 
    auth:
      ## Switch to enable the kafka authentication. 
      enabled: false
     
      ## Name of the existing secret containing credentials for brokerUser, interBrokerUser and zookeeperUser. 
      #existingSecret: 
     
      ## Name of the existing secret containing the certificate files that will be used by Kafka. 
      #certificatesSecret: 
     
      ## Password for the above certificates if they are password protected. 
      #certificatesPassword: 
     
      ## Kafka client user. 
      brokerUser: user
     
      ## Kafka client password. 
      # brokerPassword: 
     
      ## Kafka inter broker communication user. 
      interBrokerUser: admin
     
      ## Kafka inter broker communication password. 
      # interBrokerPassword: 
      ## Kafka Zookeeper user. 
      #zookeeperUser: 
      ## Kafka Zookeeper password. 
      #zookeeperPassword: 
     
    ## Kubernetes Security Context 
    ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ 
    ## 
    securityContext:
      enabled: true
      fsGroup: 1001
      runAsUser: 1001
     
    ## Kubernetes configuration 
    ## For minikube, set this to NodePort, elsewhere use LoadBalancer 
    ## 
    service:
      type: ClusterIP
      port: 9092
     
      ## Specify the NodePort value for the LoadBalancer and NodePort service types. 
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport 
      ## 
      # nodePort: 
     
      ## Use loadBalancerIP to request a specific static IP, 
      # loadBalancerIP: 
     
      ## Service annotations done as key:value pairs 
      annotations:
     
    ## Kafka data Persistent Volume Storage Class 
    ## If defined, storageClassName: <storageClass> 
    ## If set to "-", storageClassName: "", which disables dynamic provisioning 
    ## If undefined (the default) or set to null, no storageClassName spec is 
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on 
    ##   GKE, AWS & OpenStack) 
    ## 
    persistence:
      enabled: true
      # storageClass: "-" 
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      annotations: {}
     
    ## Node labels and tolerations for pod assignment 
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector 
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature 
    nodeSelector: {}
    tolerations: []
     
    ## Configure resource requests and limits 
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ 
    ## 
    resources:
    #  limits: 
    #    cpu: 200m 
    #    memory: 1Gi 
    #  requests: 
    #    memory: 256Mi 
    #    cpu: 250m 
     
    ## Configure extra options for liveness and readiness probes 
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes 
    livenessProbe:
      enabled: true
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 2
      successThreshold: 1
     
    readinessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 6
      successThreshold: 1
     
    ## Prometheus Exporters / Metrics 
    ## 
    metrics:
      ## Prometheus Kafka Exporter: exposes complementary metrics to the JMX Exporter 
      kafka:
        enabled: false
     
        image:
          registry: docker.io
          repository: danielqsj/kafka-exporter
          tag: v1.0.1
          pullPolicy: Always
          ## Optionally specify an array of imagePullSecrets. 
          ## Secrets must be manually created in the namespace. 
          ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 
          ## 
          # pullSecrets: 
          #   - myRegistryKeySecretName 
     
        ## Interval at which Prometheus scrapes metrics, note: only used by Prometheus Operator 
        interval: 10s
     
        ## Port kafka-exporter exposes for Prometheus to scrape metrics 
        port: 9308
     
        ## Resource limits 
        resources: {}
      #      limits: 
      #        cpu: 200m 
      #        memory: 1Gi 
      #      requests: 
      #        cpu: 100m 
      #        memory: 100Mi 
     
      ## Prometheus JMX Exporter: exposes the majority of Kafka's metrics 
      jmx:
        enabled: false
     
        image:
          registry: docker.io
          repository: solsson/kafka-prometheus-jmx-exporter@sha256
          tag: a23062396cd5af1acdf76512632c20ea6be76885dfc20cd9ff40fb23846557e8
          pullPolicy: Always
          ## Optionally specify an array of imagePullSecrets. 
          ## Secrets must be manually created in the namespace. 
          ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 
          ## 
          # pullSecrets: 
          #   - myRegistryKeySecretName 
     
        ## Interval at which Prometheus scrapes metrics, note: only used by Prometheus Operator 
        interval: 10s
     
        ## Port jmx-exporter exposes Prometheus format metrics to scrape 
        exporterPort: 5556
     
        resources:
          {}
          # limits: 
          #   cpu: 200m 
          #   memory: 1Gi 
          # requests: 
          #   cpu: 100m 
          #   memory: 100Mi 
     
        ## Credits to the incubator/kafka chart for the JMX configuration. 
        ## https://github.com/helm/charts/tree/master/incubator/kafka 
        ## 
        ## Rules to apply to the Prometheus JMX Exporter.  Note while lots of stats have been cleaned and exposed, 
        ## there are still more stats to clean up and expose, others will never get exposed.  They keep lots of duplicates 
        ## that can be derived easily.  The configMap in this chart cleans up the metrics it exposes to be in a Prometheus 
        ## format, eg topic, broker are labels and not part of metric name. Improvements are gladly accepted and encouraged. 
        configMap:
          ## Allows disabling the default configmap, note a configMap is needed 
          enabled: true
          ## Allows setting values to generate the configmap 
          ## To allow all metrics through (warning its crazy excessive) comment out below `overrideConfig` and set 
          ## `whitelistObjectNames: []` 
          overrideConfig:
            {}
            # jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi 
            # lowercaseOutputName: true 
            # lowercaseOutputLabelNames: true 
            # ssl: false 
            # rules: 
            # - pattern: ".*" 
          ## If you would like to supply your own ConfigMap for JMX metrics, supply the name of that 
          ## ConfigMap as an `overrideName` here. 
          overrideName: ''
        ## Port the jmx metrics are exposed in native jmx format, not in Prometheus format 
        jmxPort: 5555
        ## JMX Whitelist Objects, can be set to control which JMX metrics are exposed.  Only whitelisted 
        ## values will be exposed via JMX Exporter.  They must also be exposed via Rules.  To expose all metrics 
        ## (warning its crazy excessive and they aren't formatted in a prometheus style) (1) `whitelistObjectNames: []` 
        ## (2) commented out above `overrideConfig`. 
        whitelistObjectNames: # [] 
          - kafka.controller:*
          - kafka.server:*
          - java.lang:*
          - kafka.network:*
          - kafka.log:*
     
    ## 
    ## Zookeeper chart configuration 
    ## 
    ## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml 
    ## 
    zookeeper:
      enabled: true
     
    externalZookeeper:
      ## This value is only used when zookeeper.enabled is set to false 
      ## Server or list of external zookeeper servers to use. 
      # servers: 

    Kafka

    Kafka is a distributed streaming platform used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.

    TL;DR;

    $ helm install bitnami/kafka

    Introduction

    This chart bootstraps a Kafka deployment on a Kubernetes cluster using the Helm package manager.

    Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of Bitnami Kubernetes Production Runtime (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.

    Prerequisites

    • Kubernetes 1.4+ with Beta APIs enabled
    • PV provisioner support in the underlying infrastructure

    Installing the Chart

    To install the chart with the release name my-release:

    $ helm install --name my-release bitnami/kafka

    The command deploys Kafka on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.

    Tip: List all releases using helm list

    Uninstalling the Chart

    To uninstall/delete the my-release deployment:

    $ helm delete my-release

    The command removes all the Kubernetes components associated with the chart and deletes the release.

    Configuration

    The following table lists the configurable parameters of the Kafka chart and their default values.

    Parameter Description Default
    global.imageRegistry Global Docker image registry nil
    global.imagePullSecrets Global Docker registry secret names as an array [] (does not add image pull secrets to deployed pods)
    image.registry Kafka image registry docker.io
    image.repository Kafka Image name bitnami/kafka
    image.tag Kafka Image tag {VERSION}
    image.pullPolicy Kafka image pull policy Always
    image.pullSecrets Specify docker-registry secret names as an array [] (does not add image pull secrets to deployed pods)
    image.debug Specify if debug values should be set false
    updateStrategy Update strategy for the stateful set RollingUpdate
    rollingUpdatePartition Partition update strategy nil
    replicaCount Number of Kafka nodes 1
    config Configuration file for Kafka nil
    allowPlaintextListener Allow to use the PLAINTEXT listener true
    listeners The address the socket server listens on. nil
    advertisedListeners Hostname and port the broker will advertise to producers and consumers. nil
    brokerId ID of the Kafka node -1
    deleteTopicEnable Switch to enable topic deletion or not. false
    heapOpts Kafka's Java Heap size. -Xmx1024m -Xms1024m
    logFlushIntervalMessages The number of messages to accept before forcing a flush of data to disk. 10000
    logFlushIntervalMs The maximum amount of time a message can sit in a log before we force a flush. 1000
    logRetentionBytes A size-based retention policy for logs. _1073741824
    logRetentionCheckIntervalMs The interval at which log segments are checked to see if they can be deleted. 300000
    logRetentionHours The minimum age of a log file to be eligible for deletion due to age. 168
    logSegmentBytes The maximum size of a log segment file. When this size is reached a new log segment will be created. _1073741824
    logMessageFormatVersion Logging message format version. ``
    logsDirs A comma separated list of directories under which to store log files. /opt/bitnami/kafka/data
    maxMessageBytes The largest record batch size allowed by Kafka. 1000012
    defaultReplicationFactor Default replication factors for automatically created topics 1
    offsetsTopicReplicationFactor The replication factor for the offsets topic 1
    transactionStateLogReplicationFactor The replication factor for the transaction topic 1
    transactionStateLogMinIsr Overridden min.insync.replicas config for the transaction topic 1
    numIoThreads The number of threads doing disk I/O. 8
    numNetworkThreads The number of threads handling network requests. 3
    numPartitions The default number of log partitions per topic. 1
    numRecoveryThreadsPerDataDir The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. 1
    socketReceiveBufferBytes The receive buffer (SO_RCVBUF) used by the socket server. 102400
    socketRequestMaxBytes The maximum size of a request that the socket server will accept (protection against OOM). _104857600
    socketSendBufferBytes The send buffer (SO_SNDBUF) used by the socket server. 102400
    zookeeperConnectionTimeoutMs Timeout in ms for connecting to zookeeper. 6000
    auth.enabled Switch to enable the kafka authentication. false
    auth.existingSecret Name of the existing secret containing credentials for brokerUser, interBrokerUser and zookeeperUser. nil
    auth.certificatesSecret Name of the existing secret containing the certificate files that will be used by Kafka. nil
    auth.certificatesPassword Password for the above certificates if they are password protected. nil
    auth.brokerUser Kafka client user. user
    auth.brokerPassword Kafka client password. nil
    auth.interBrokerUser Kafka inter broker communication user admin
    auth.interBrokerPassword Kafka inter broker communication password. nil
    auth.zookeeperUser Kafka Zookeeper user. nil
    auth.zookeeperPassword Kafka Zookeeper password. nil
    securityContext.enabled Enable security context true
    securityContext.fsGroup Group ID for the container 1001
    securityContext.runAsUser User ID for the container 1001
    service.type Kubernetes Service type ClusterIP
    service.port Kafka port 9092
    service.nodePort Kubernetes Service nodePort nil
    service.loadBalancerIP loadBalancerIP for Kafka Service nil
    service.annotations Service annotations ``
    persistence.enabled Enable persistence using PVC true
    persistence.storageClass PVC Storage Class for Kafka volume nil
    persistence.accessMode PVC Access Mode for Kafka volume ReadWriteOnce
    persistence.size PVC Storage Request for Kafka volume 8Gi
    persistence.annotations Annotations for the PVC {}
    nodeSelector Node labels for pod assignment {}
    tolerations Toleration labels for pod assignment []
    resources CPU/Memory resource requests/limits Memory: 256Mi, CPU: 250m
    livenessProbe.enabled Enable/disable the liveness probe true
    livenessProbe.initialDelaySeconds Delay before liveness probe is initiated 10
    livenessProbe.periodSeconds How often to perform the probe 10
    livenessProbe.timeoutSeconds When the probe times out 5
    livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. 2
    livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed 1
    readinessProbe.enabled Enable/disable the readiness probe true
    readinessProbe.initialDelaySeconds Delay before readiness probe is initiated 5
    readinessProbe.periodSeconds How often to perform the probe 10
    readinessProbe.timeoutSeconds When the probe times out 5
    readinessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. 6
    readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed 1
    metrics.kafka.enabled Whether or not to create a separate Kafka exporter false
    metrics.kafka.image.registry Kafka exporter image registry docker.io
    metrics.kafka.image.repository Kafka exporter image name danielqsj/kafka-exporter
    metrics.kafka.image.tag Kafka exporter image tag v1.0.1
    metrics.kafka.image.pullPolicy Kafka exporter image pull policy Always
    metrics.kafka.image.pullSecrets Specify docker-registry secret names as an array [] (does not add image pull secrets to deployed pods)
    metrics.kafka.interval Interval that Prometheus scrapes Kafka metrics when using Prometheus Operator 10s
    metrics.kafka.port Kafka Exporter Port which exposes metrics in Prometheus format for scraping 9308
    metrics.kafka.resources Allows setting resource limits for kafka-exporter pod {}
    metrics.jmx.resources Allows setting resource limits for jmx sidecar container {}
    metrics.jmx.enabled Whether or not to expose JMX metrics to Prometheus false
    metrics.jmx.image.registry JMX exporter image registry docker.io
    metrics.jmx.image.repository JMX exporter image name solsson/kafka-prometheus-jmx-exporter@sha256
    metrics.jmx.image.tag JMX exporter image tag a23062396cd5af1acdf76512632c20ea6be76885dfc20cd9ff40fb23846557e8
    metrics.jmx.image.pullPolicy JMX exporter image pull policy Always
    metrics.jmx.image.pullSecrets Specify docker-registry secret names as an array [] (does not add image pull secrets to deployed pods)
    metrics.jmx.interval Interval that Prometheus scrapes JMX metrics when using Prometheus Operator 10s
    metrics.jmx.exporterPort JMX Exporter Port which exposes metrics in Prometheus format for scraping 5556
    metrics.jmx.configMap.enabled Enable the default ConfigMap for JMX true
    metrics.jmx.configMap.overrideConfig Allows config file to be generated by passing values to ConfigMap {}
    metrics.jmx.configMap.overrideName Allows setting the name of the ConfigMap to be used ""
    metrics.jmx.jmxPort The jmx port which JMX style metrics are exposed (note: these are not scrapeable by Prometheus) 5555
    metrics.jmx.whitelistObjectNames Allows setting which JMX objects are exposed via the JMX Exporter (see values.yaml)
    zookeeper.enabled Switch to enable or disable the Zookeeper helm chart true
    externalZookeeper.servers Server or list of external zookeeper servers to use. nil

    Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

    $ helm install --name my-release \
      --set replicaCount=3,deleteTopicEnable=true \
        bitnami/kafka

    The above command deploys Kafka with three broker replicas and enables topic deletion.

    Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

    $ helm install --name my-release -f values.yaml bitnami/kafka

    Tip: You can use the default values.yaml
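
    For instance, a minimal override file might look like the sketch below. The parameter names come from the table above; the file name my-values.yaml and the concrete values are only illustrative.

    # my-values.yaml -- illustrative override file
    replicaCount: 3
    deleteTopicEnable: true
    heapOpts: -Xmx2048m -Xms2048m
    persistence:
      size: 20Gi
    metrics:
      kafka:
        enabled: true

    You would then pass it to the install command shown above, e.g. helm install --name my-release -f my-values.yaml bitnami/kafka.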

    Production and horizontal scaling

    The chart repository contains the recommended production settings for the Kafka server in an alternative values file, values-production.yaml. Read the comments in that file carefully to set up your environment.

    To horizontally scale this chart, first download the values-production.yaml file to your local folder, then:

    $ helm install --name my-release -f ./values-production.yaml bitnami/kafka
    $ kubectl scale statefulset my-release-kafka --replicas=3
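
    Alternatively, the broker count and the replication factors of the internal topics can be set declaratively in your values file. A sketch (values-production.yaml may already set some of these; the numbers are illustrative):

    # scale to three brokers and replicate internal topics across them
    replicaCount: 3
    defaultReplicationFactor: 3
    offsetsTopicReplicationFactor: 3
    transactionStateLogReplicationFactor: 3
    transactionStateLogMinIsr: 2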

    Enable security for Kafka and Zookeeper

    If you enabled authentication for Kafka, the SASL_SSL listener will be configured with the inputs you provide. In particular, you can set the following pairs of credentials:

    • brokerUser/brokerPassword: to authenticate Kafka clients against Kafka brokers.
    • interBrokerUser/interBrokerPassword: to authenticate Kafka brokers with each other.
    • zookeeperUser/zookeeperPassword: used when the Zookeeper chart is deployed with SASL authentication enabled.

    In order to configure authentication, you must create a secret containing the kafka.keystore.jks and kafka.truststore.jks certificates and pass the secret name with the --set auth.certificatesSecret option when deploying the chart.

    You can create the secret with this command assuming you have your certificates in your working directory:

    kubectl create secret generic kafka-certificates --from-file=./kafka.keystore.jks --from-file=./kafka.truststore.jks
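
    Equivalently, the secret can be defined as a manifest. This is a sketch only; the data fields must contain the base64-encoded contents of your JKS files:

    # kafka-certificates secret (illustrative manifest)
    apiVersion: v1
    kind: Secret
    metadata:
      name: kafka-certificates
    type: Opaque
    data:
      kafka.keystore.jks: <base64-encoded keystore>
      kafka.truststore.jks: <base64-encoded truststore>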

    As an example, you can install Kafka with authentication enabled using this command:

    helm install --name my-release bitnami/kafka --set auth.enabled=true \
                 --set auth.brokerUser=brokerUser --set auth.brokerPassword=brokerPassword \
                 --set auth.interBrokerUser=interBrokerUser --set auth.interBrokerPassword=interBrokerPassword \
                 --set auth.zookeeperUser=zookeeperUser --set auth.zookeeperPassword=zookeeperPassword \
                 --set zookeeper.auth.enabled=true --set zookeeper.auth.serverUser=zookeeperUser --set zookeeper.auth.serverPassword=zookeeperPassword \
                 --set zookeeper.auth.clientUser=zookeeperUser --set zookeeper.auth.clientPassword=zookeeperPassword \
                 --set auth.certificatesSecret=kafka-certificates

    Note: If the JKS files are password protected (recommended), you will need to provide the password to get access to the keystores. To do so, use the --set auth.certificatesPassword option to provide your password.
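
    The same authentication settings can also be kept in a values file instead of --set flags. A sketch mirroring the command above, assuming the kafka-certificates secret already exists and the JKS files are password protected; all credential values are placeholders:

    # authentication values (illustrative)
    auth:
      enabled: true
      brokerUser: brokerUser
      brokerPassword: brokerPassword
      interBrokerUser: interBrokerUser
      interBrokerPassword: interBrokerPassword
      zookeeperUser: zookeeperUser
      zookeeperPassword: zookeeperPassword
      certificatesSecret: kafka-certificates
      certificatesPassword: jksPassword   # only needed for password-protected keystores
    zookeeper:
      auth:
        enabled: true
        serverUser: zookeeperUser
        serverPassword: zookeeperPassword
        clientUser: zookeeperUser
        clientPassword: zookeeperPassword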

    Persistence

    The Bitnami Kafka image stores the Kafka data at the /bitnami/kafka path of the container.

    Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the Configuration section to configure the PVC or to disable persistence.
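
    To change the size of the volume or pin it to a specific storage class, override the persistence block shown in the default values. A sketch; the storage class name is only an example and must exist in your cluster:

    # persistence override (illustrative)
    persistence:
      enabled: true
      storageClass: standard   # use "-" to disable dynamic provisioning
      accessModes:
        - ReadWriteOnce
      size: 20Gi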

    Upgrading

    To 1.0.0

    Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments. Use the workaround below to upgrade from versions previous to 1.0.0. The following example assumes that the release name is kafka:

    $ kubectl delete statefulset kafka-kafka --cascade=false
    $ kubectl delete statefulset kafka-zookeeper --cascade=false

    Keywords

    none

    Install

    npm i @helm-charts/bitnami-kafka

    Weekly Downloads

    8

    Version

    1.9.0-0.1.0

    License

    MIT

    Unpacked Size

    170 kB

    Total Files

    39
