Task #10

CMS - Feature #23: Establish standards and practices to enhance DevOps CI/CD processes.

Epic #33: Apply standards and practices to enhance DevOps CI/CD processes.

Migrate Kafka Broker to Contabo Server.

Added by Fernando Jose Capeletto Neto over 1 year ago. Updated over 1 year ago.

Status:
Closed
Priority:
Immediate
Start date:
04/04/2023
Due date:
04/09/2023
% Done:

100%

Estimated time:
8:00 h
Spent time:

Description

Migrate Kafka Broker to local server:
  • to improve performance, since the CMS components are on the same network (to be verified)
  • the trial period for the Kafka broker at Confluent ends on Apr 10th.
Testing
  • Worked with the Docker Compose configuration below:
version: '2'

services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    expose:
      - "2181" 

  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - "9092:9092" 
    expose:
      - "9093" 
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,EXTERNAL://lab.fernando.engineer:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181" 
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf" 
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true" 
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 1000
      LOG4J_LOGGER_KAFKA: WARN
      LOG4J_LOGGER_ORG_APACHE_KAFKA: WARN
    depends_on:
      - zookeeper
    volumes:
      - /var/www/vhosts/kafka:/etc/kafka
[root@node1 kafka]# cat kafka_server_jaas.conf 
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin" 
  password="admin-secret" 
  user_admin="admin-secret";
};
Client {};
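For reference, the client side of this SASL/PLAIN setup can be sketched as plain producer properties. This is a minimal sketch, not the application's actual config: the bootstrap address and credentials simply mirror the compose file and kafka_server_jaas.conf above, and the keys are the standard Kafka client config names.

```java
import java.util.Properties;

public class SaslProducerProps {
    // Minimal sketch of a client config matching the broker setup above.
    static Properties saslProducerProps() {
        Properties p = new Properties();
        // EXTERNAL listener advertised by the broker
        p.put("bootstrap.servers", "lab.fernando.engineer:9092");
        // Must match KAFKA_LISTENER_SECURITY_PROTOCOL_MAP (EXTERNAL:SASL_PLAINTEXT)
        p.put("security.protocol", "SASL_PLAINTEXT");
        p.put("sasl.mechanism", "PLAIN");
        // Credentials must match the user_admin entry in kafka_server_jaas.conf
        p.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";");
        return p;
    }

    public static void main(String[] args) {
        saslProducerProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

A wrong username or password here reproduces the "Authentication failed" errors shown further below.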
  • Auth worked (the broker logs an error when an incorrect user/pass is used; see example below).
  • Topic configuration added as beans directly in the application.
  • Had to enable automatic topic creation (not ideal, but acceptable with authentication working).
  • Reference for configuring auth: https://habr.com/en/articles/529222/
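The bean-based topic configuration itself isn't captured here; as a stdlib-only sketch of the same idea, the topic names and retention.ms values below are assumed to match the init-kafka script in the Plan B compose file further down (the real application would register these as Spring NewTopic beans):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TopicDefinitions {
    // Hypothetical stand-in for the Spring NewTopic beans mentioned above.
    // Names and retention.ms values assumed from the Plan B init-kafka script.
    static Map<String, Long> topicRetentionMs() {
        Map<String, Long> topics = new LinkedHashMap<>();
        topics.put("AISMESSAGE_TOPIC", 3000L);
        topics.put("TACTICAL_ALERT_TOPIC", 5000L);
        topics.put("SYSTEM_ALERT_TOPIC", 3_600_000L);
        topics.put("TRACKDAO_PERFORMANCE_TOPIC", 1250L);
        topics.put("TRACKDAO_TOPIC", 1250L);
        return topics;
    }

    public static void main(String[] args) {
        topicRetentionMs().forEach((name, ms) ->
            System.out.println(name + " retention.ms=" + ms));
    }
}
```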

    Example of the bad user/pass error:

On the Kafka Server:

kafka        | [2023-04-06 05:56:13,788] INFO [GroupCoordinator 1001]: Group trackhandler-group with generation 2 is now empty (__consumer_offsets-25) (kafka.coordinator.group.GroupCoordinator)
kafka        | [2023-04-06 05:57:40,541] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Failed authentication with /45.29.40.136 (Authentication failed: Invalid username or password) (org.apache.kafka.common.network.Selector)
kafka        | [2023-04-06 05:57:41,486] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Failed authentication with /45.29.40.136 (Authentication failed: Invalid username or password) (org.apache.kafka.common.network.Selector)
kafka        | [2023-04-06 05:57:42,414] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Failed authentication with /45.29.40.136 (Authentication failed: Invalid username or password) (org.apache.kafka.common.network.Selector)
kafka        | [2023-04-06 05:57:43,823] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Failed authentication with /45.29.40.136 (Authentication failed: Invalid username or password) (org.apache.kafka.common.network.Selector)

On the app:

[2023-04-06 00:57:59.718] ERROR [o.a.k.c.NetworkClient] [663]: [Producer clientId=TrackHandler] Connection to node -1 failed authentication due to: Authentication failed: Invalid username or password
[2023-04-06 00:58:01.045] ERROR [o.a.k.c.NetworkClient] [663]: [Producer clientId=TrackHandler] Connection to node -1 failed authentication due to: Authentication failed: Invalid username or password

=== Plan B: another Docker Compose configuration, also tested ===
version: '2.1'

services:
  zoo1:
    image: confluentinc/cp-zookeeper:7.3.2
    hostname: zoo1
    container_name: zoo1
    ports:
      - "2181:2181" 
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zoo1:2888:3888

  kafka1:
    image: confluentinc/cp-kafka:7.3.2
    hostname: kafka1
    container_name: kafka1
    ports:
      - "9092:9092" 
      - "9999:9999" 
      - "29092:29092" 
    expose:
      - "29092" 
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:19092,EXTERNAL://lab.fernando.engineer:9092,DOCKER://host.docker.internal:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181" 
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO" 
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: lab.fernando.engineer
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true" 
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false" 
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 1000
    depends_on:
      - zoo1

  init-kafka:
      image: confluentinc/cp-kafka:7.3.2
      depends_on:
          - kafka1
      volumes:
        - /var/www/vhosts/kafka/log:/log
      entrypoint: [ '/bin/sh', '-c' ]
      command: |
          " 
          # blocks until kafka is reachable
          kafka-topics --bootstrap-server kafka1:19092 --list
          echo -e 'Creating kafka topics for lab.fernando.engineer'
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic AISMESSAGE_TOPIC --replication-factor 1 --partitions 1 --config retention.ms=3000
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic TACTICAL_ALERT_TOPIC --replication-factor 1 --partitions 1 --config retention.ms=5000
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic SYSTEM_ALERT_TOPIC --replication-factor 1 --partitions 1 --config retention.ms=3600000
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic TRACKDAO_PERFORMANCE_TOPIC --replication-factor 1 --partitions 1 --config retention.ms=1250
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic TRACKDAO_TOPIC --replication-factor 1 --partitions 1 --config retention.ms=1250
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic AISMESSAGE_TOPICTEST --replication-factor 1 --partitions 1 --config retention.ms=3000
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic TACTICAL_ALERT_TOPICTEST --replication-factor 1 --partitions 1 --config retention.ms=5000
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic SYSTEM_ALERT_TOPICTEST --replication-factor 1 --partitions 1 --config retention.ms=3600000
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic TRACKDAO_PERFORMANCE_TOPICTEST --replication-factor 1 --partitions 1 --config retention.ms=1250
          kafka-topics --bootstrap-server kafka1:19092 --create --if-not-exists --topic TRACKDAO_TOPICTEST --replication-factor 1 --partitions 1 --config retention.ms=1250

          # Log each topic's configuration during creation for later checking
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name AISMESSAGE_TOPIC --describe --all > /log/AISMESSAGE_TOPIC.conf
          echo -e 'Configuration for topic AISMESSAGE_TOPIC saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name TACTICAL_ALERT_TOPIC --describe --all > /log/TACTICAL_ALERT_TOPIC.conf
          echo -e 'Configuration for topic TACTICAL_ALERT_TOPIC saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name SYSTEM_ALERT_TOPIC --describe --all > /log/SYSTEM_ALERT_TOPIC.conf
          echo -e 'Configuration for topic SYSTEM_ALERT_TOPIC saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name TRACKDAO_PERFORMANCE_TOPIC --describe --all > /log/TRACKDAO_PERFORMANCE_TOPIC.conf
          echo -e 'Configuration for topic TRACKDAO_PERFORMANCE_TOPIC saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name TRACKDAO_TOPIC --describe --all > /log/TRACKDAO_TOPIC.conf
          echo -e 'Configuration for topic TRACKDAO_TOPIC saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name AISMESSAGE_TOPICTEST --describe --all > /log/AISMESSAGE_TOPICTEST.conf
          echo -e 'Configuration for topic AISMESSAGE_TOPICTEST saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name TACTICAL_ALERT_TOPICTEST --describe --all > /log/TACTICAL_ALERT_TOPICTEST.conf
          echo -e 'Configuration for topic TACTICAL_ALERT_TOPICTEST saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name SYSTEM_ALERT_TOPICTEST --describe --all > /log/SYSTEM_ALERT_TOPICTEST.conf
          echo -e 'Configuration for topic SYSTEM_ALERT_TOPICTEST saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name TRACKDAO_PERFORMANCE_TOPICTEST --describe --all > /log/TRACKDAO_PERFORMANCE_TOPICTEST.conf
          echo -e 'Configuration for topic TRACKDAO_PERFORMANCE_TOPICTEST saved'
          kafka-configs --bootstrap-server kafka1:19092 --entity-type topics --entity-name TRACKDAO_TOPICTEST --describe --all > /log/TRACKDAO_TOPICTEST.conf
          echo -e 'Configuration for topic TRACKDAO_TOPICTEST saved'
          "
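The "blocks until kafka is reachable" step in the script above relies on kafka-topics simply failing until the broker is up; a hypothetical standalone retry wrapper (function name assumed) makes that behavior explicit:

```shell
# Hypothetical helper generalizing the "block until kafka is reachable"
# step used by the init-kafka service above: retry any command until it
# exits successfully, pausing between attempts.
wait_for() {
  until "$@" >/dev/null 2>&1; do
    echo "waiting for command to succeed: $*" >&2
    sleep 2
  done
}

# Usage (broker address and CLI taken from the compose file above):
# wait_for kafka-topics --bootstrap-server kafka1:19092 --list
```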
Actions #1

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
  • Due date set to 04/04/2023
  • Priority changed from Urgent to High
  • Estimated time set to 8:00 h
Actions #2

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Assignee set to Fernando Jose Capeletto Neto
Actions #3

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Parent task set to #23
Actions #4

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Parent task changed from #23 to #33
Actions #5

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Due date changed from 04/04/2023 to 04/09/2023
  • Priority changed from High to Immediate
  • Start date changed from 03/28/2023 to 04/04/2023
Actions #7

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Status changed from New to In Progress
Actions #8

Updated by Fernando Jose Capeletto Neto over 1 year ago

kafka1 | [2023-04-05 06:36:19,413] INFO KafkaConfig values:
kafka1 | advertised.listeners = INTERNAL://kafka1:19092,EXTERNAL://144.126.159.51:9092,DOCKER://host.docker.internal:29092
kafka1 | alter.config.policy.class.name = null
kafka1 | alter.log.dirs.replication.quota.window.num = 11
kafka1 | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka1 | authorizer.class.name = kafka.security.authorizer.AclAuthorizer
kafka1 | auto.create.topics.enable = true
kafka1 | auto.leader.rebalance.enable = true
kafka1 | background.threads = 10
kafka1 | broker.heartbeat.interval.ms = 2000
kafka1 | broker.id = 1
kafka1 | broker.id.generation.enable = true
kafka1 | broker.rack = null
kafka1 | broker.session.timeout.ms = 9000
kafka1 | client.quota.callback.class = null
kafka1 | compression.type = producer
kafka1 | connection.failed.authentication.delay.ms = 100
kafka1 | connections.max.idle.ms = 600000
kafka1 | connections.max.reauth.ms = 0
kafka1 | control.plane.listener.name = null
kafka1 | controlled.shutdown.enable = true
kafka1 | controlled.shutdown.max.retries = 3
kafka1 | controlled.shutdown.retry.backoff.ms = 5000
kafka1 | controller.listener.names = null
kafka1 | controller.quorum.append.linger.ms = 25
kafka1 | controller.quorum.election.backoff.max.ms = 1000
kafka1 | controller.quorum.election.timeout.ms = 1000
kafka1 | controller.quorum.fetch.timeout.ms = 2000
kafka1 | controller.quorum.request.timeout.ms = 2000
kafka1 | controller.quorum.retry.backoff.ms = 20
kafka1 | controller.quorum.voters = []
kafka1 | controller.quota.window.num = 11
kafka1 | controller.quota.window.size.seconds = 1
kafka1 | controller.socket.timeout.ms = 30000
kafka1 | create.topic.policy.class.name = null
kafka1 | default.replication.factor = 1
kafka1 | delegation.token.expiry.check.interval.ms = 3600000
kafka1 | delegation.token.expiry.time.ms = 86400000
kafka1 | delegation.token.master.key = null
kafka1 | delegation.token.max.lifetime.ms = 604800000
kafka1 | delegation.token.secret.key = null
kafka1 | delete.records.purgatory.purge.interval.requests = 1
kafka1 | delete.topic.enable = true
kafka1 | early.start.listeners = null
kafka1 | fetch.max.bytes = 57671680
kafka1 | fetch.purgatory.purge.interval.requests = 1000
kafka1 | group.initial.rebalance.delay.ms = 3000
kafka1 | group.max.session.timeout.ms = 1800000
kafka1 | group.max.size = 2147483647
kafka1 | group.min.session.timeout.ms = 6000
kafka1 | initial.broker.registration.timeout.ms = 60000
kafka1 | inter.broker.listener.name = INTERNAL
kafka1 | inter.broker.protocol.version = 3.3-IV3
kafka1 | kafka.metrics.polling.interval.secs = 10
kafka1 | kafka.metrics.reporters = []
kafka1 | leader.imbalance.check.interval.seconds = 300
kafka1 | leader.imbalance.per.broker.percentage = 10
kafka1 | listener.security.protocol.map = INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
kafka1 | listeners = INTERNAL://0.0.0.0:19092,EXTERNAL://0.0.0.0:9092,DOCKER://0.0.0.0:29092
kafka1 | log.cleaner.backoff.ms = 15000
kafka1 | log.cleaner.dedupe.buffer.size = 134217728
kafka1 | log.cleaner.delete.retention.ms = 86400000
kafka1 | log.cleaner.enable = true
kafka1 | log.cleaner.io.buffer.load.factor = 0.9
kafka1 | log.cleaner.io.buffer.size = 524288
kafka1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka1 | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka1 | log.cleaner.min.cleanable.ratio = 0.5
kafka1 | log.cleaner.min.compaction.lag.ms = 0
kafka1 | log.cleaner.threads = 1
kafka1 | log.cleanup.policy = [delete]
kafka1 | log.dir = /tmp/kafka-logs
kafka1 | log.dirs = /var/lib/kafka/data
kafka1 | log.flush.interval.messages = 9223372036854775807
kafka1 | log.flush.interval.ms = null
kafka1 | log.flush.offset.checkpoint.interval.ms = 60000
kafka1 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka1 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka1 | log.index.interval.bytes = 4096
kafka1 | log.index.size.max.bytes = 10485760
kafka1 | log.message.downconversion.enable = true
kafka1 | log.message.format.version = 3.0-IV1
kafka1 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka1 | log.message.timestamp.type = CreateTime
kafka1 | log.preallocate = false
kafka1 | log.retention.bytes = -1
kafka1 | log.retention.check.interval.ms = 300000
kafka1 | log.retention.hours = 168
kafka1 | log.retention.minutes = null
kafka1 | log.retention.ms = null
kafka1 | log.roll.hours = 168
kafka1 | log.roll.jitter.hours = 0
kafka1 | log.roll.jitter.ms = null
kafka1 | log.roll.ms = null
kafka1 | log.segment.bytes = 1073741824
kafka1 | log.segment.delete.delay.ms = 60000
kafka1 | max.connection.creation.rate = 2147483647
kafka1 | max.connections = 2147483647
kafka1 | max.connections.per.ip = 2147483647
kafka1 | max.connections.per.ip.overrides =
kafka1 | max.incremental.fetch.session.cache.slots = 1000
kafka1 | message.max.bytes = 1048588
kafka1 | metadata.log.dir = null
kafka1 | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka1 | metadata.log.segment.bytes = 1073741824
kafka1 | metadata.log.segment.min.bytes = 8388608
kafka1 | metadata.log.segment.ms = 604800000
kafka1 | metadata.max.idle.interval.ms = 500
kafka1 | metadata.max.retention.bytes = -1
kafka1 | metadata.max.retention.ms = 604800000
kafka1 | metric.reporters = []
kafka1 | metrics.num.samples = 2
kafka1 | metrics.recording.level = INFO
kafka1 | metrics.sample.window.ms = 30000
kafka1 | min.insync.replicas = 1
kafka1 | node.id = 1
kafka1 | num.io.threads = 8
kafka1 | num.network.threads = 3
kafka1 | num.partitions = 1
kafka1 | num.recovery.threads.per.data.dir = 1
kafka1 | num.replica.alter.log.dirs.threads = null
kafka1 | num.replica.fetchers = 1
kafka1 | offset.metadata.max.bytes = 4096
kafka1 | offsets.commit.required.acks = -1
kafka1 | offsets.commit.timeout.ms = 5000
kafka1 | offsets.load.buffer.size = 5242880
kafka1 | offsets.retention.check.interval.ms = 600000
kafka1 | offsets.retention.minutes = 10080
kafka1 | offsets.topic.compression.codec = 0
kafka1 | offsets.topic.num.partitions = 50
kafka1 | offsets.topic.replication.factor = 1
kafka1 | offsets.topic.segment.bytes = 104857600
kafka1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka1 | password.encoder.iterations = 4096
kafka1 | password.encoder.key.length = 128
kafka1 | password.encoder.keyfactory.algorithm = null
kafka1 | password.encoder.old.secret = null
kafka1 | password.encoder.secret = null
kafka1 | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka1 | process.roles = []
kafka1 | producer.purgatory.purge.interval.requests = 1000
kafka1 | queued.max.request.bytes = -1
kafka1 | queued.max.requests = 500
kafka1 | quota.window.num = 11
kafka1 | quota.window.size.seconds = 1
kafka1 | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka1 | remote.log.manager.task.interval.ms = 30000
kafka1 | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka1 | remote.log.manager.task.retry.backoff.ms = 500
kafka1 | remote.log.manager.task.retry.jitter = 0.2
kafka1 | remote.log.manager.thread.pool.size = 10
kafka1 | remote.log.metadata.manager.class.name = null
kafka1 | remote.log.metadata.manager.class.path = null
kafka1 | remote.log.metadata.manager.impl.prefix = null
kafka1 | remote.log.metadata.manager.listener.name = null
kafka1 | remote.log.reader.max.pending.tasks = 100
kafka1 | remote.log.reader.threads = 10
kafka1 | remote.log.storage.manager.class.name = null
kafka1 | remote.log.storage.manager.class.path = null
kafka1 | remote.log.storage.manager.impl.prefix = null
kafka1 | remote.log.storage.system.enable = false
kafka1 | replica.fetch.backoff.ms = 1000
kafka1 | replica.fetch.max.bytes = 1048576
kafka1 | replica.fetch.min.bytes = 1
kafka1 | replica.fetch.response.max.bytes = 10485760
kafka1 | replica.fetch.wait.max.ms = 500
kafka1 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka1 | replica.lag.time.max.ms = 30000
kafka1 | replica.selector.class = null
kafka1 | replica.socket.receive.buffer.bytes = 65536
kafka1 | replica.socket.timeout.ms = 30000
kafka1 | replication.quota.window.num = 11
kafka1 | replication.quota.window.size.seconds = 1
kafka1 | request.timeout.ms = 30000
kafka1 | reserved.broker.max.id = 1000
kafka1 | sasl.client.callback.handler.class = null
kafka1 | sasl.enabled.mechanisms = [GSSAPI]
kafka1 | sasl.jaas.config = null
kafka1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka1 | sasl.kerberos.min.time.before.relogin = 60000
kafka1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka1 | sasl.kerberos.service.name = null
kafka1 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka1 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka1 | sasl.login.callback.handler.class = null
kafka1 | sasl.login.class = null
kafka1 | sasl.login.connect.timeout.ms = null
kafka1 | sasl.login.read.timeout.ms = null
kafka1 | sasl.login.refresh.buffer.seconds = 300
kafka1 | sasl.login.refresh.min.period.seconds = 60
kafka1 | sasl.login.refresh.window.factor = 0.8
kafka1 | sasl.login.refresh.window.jitter = 0.05
kafka1 | sasl.login.retry.backoff.max.ms = 10000
kafka1 | sasl.login.retry.backoff.ms = 100
kafka1 | sasl.mechanism.controller.protocol = GSSAPI
kafka1 | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka1 | sasl.oauthbearer.clock.skew.seconds = 30
kafka1 | sasl.oauthbearer.expected.audience = null
kafka1 | sasl.oauthbearer.expected.issuer = null
kafka1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka1 | sasl.oauthbearer.jwks.endpoint.url = null
kafka1 | sasl.oauthbearer.scope.claim.name = scope
kafka1 | sasl.oauthbearer.sub.claim.name = sub
kafka1 | sasl.oauthbearer.token.endpoint.url = null
kafka1 | sasl.server.callback.handler.class = null
kafka1 | sasl.server.max.receive.size = 524288
kafka1 | security.inter.broker.protocol = PLAINTEXT
kafka1 | security.providers = null
kafka1 | socket.connection.setup.timeout.max.ms = 30000
kafka1 | socket.connection.setup.timeout.ms = 10000
kafka1 | socket.listen.backlog.size = 50
kafka1 | socket.receive.buffer.bytes = 102400
kafka1 | socket.request.max.bytes = 104857600
kafka1 | socket.send.buffer.bytes = 102400
kafka1 | ssl.cipher.suites = []
kafka1 | ssl.client.auth = none
kafka1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka1 | ssl.endpoint.identification.algorithm = https
kafka1 | ssl.engine.factory.class = null
kafka1 | ssl.key.password = null
kafka1 | ssl.keymanager.algorithm = SunX509
kafka1 | ssl.keystore.certificate.chain = null
kafka1 | ssl.keystore.key = null
kafka1 | ssl.keystore.location = null
kafka1 | ssl.keystore.password = null
kafka1 | ssl.keystore.type = JKS
kafka1 | ssl.principal.mapping.rules = DEFAULT
kafka1 | ssl.protocol = TLSv1.3
kafka1 | ssl.provider = null
kafka1 | ssl.secure.random.implementation = null
kafka1 | ssl.trustmanager.algorithm = PKIX
kafka1 | ssl.truststore.certificates = null
kafka1 | ssl.truststore.location = null
kafka1 | ssl.truststore.password = null
kafka1 | ssl.truststore.type = JKS
kafka1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka1 | transaction.max.timeout.ms = 900000
kafka1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka1 | transaction.state.log.load.buffer.size = 5242880
kafka1 | transaction.state.log.min.isr = 1
kafka1 | transaction.state.log.num.partitions = 50
kafka1 | transaction.state.log.replication.factor = 1
kafka1 | transaction.state.log.segment.bytes = 104857600
kafka1 | transactional.id.expiration.ms = 604800000
kafka1 | unclean.leader.election.enable = false
kafka1 | zookeeper.clientCnxnSocket = null
kafka1 | zookeeper.connect = zoo1:2181
kafka1 | zookeeper.connection.timeout.ms = null
kafka1 | zookeeper.max.in.flight.requests = 10
kafka1 | zookeeper.session.timeout.ms = 18000
kafka1 | zookeeper.set.acl = false
kafka1 | zookeeper.ssl.cipher.suites = null
kafka1 | zookeeper.ssl.client.enable = false
kafka1 | zookeeper.ssl.crl.enable = false
kafka1 | zookeeper.ssl.enabled.protocols = null
kafka1 | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka1 | zookeeper.ssl.keystore.location = null
kafka1 | zookeeper.ssl.keystore.password = null
kafka1 | zookeeper.ssl.keystore.type = null
kafka1 | zookeeper.ssl.ocsp.enable = false
kafka1 | zookeeper.ssl.protocol = TLSv1.2
kafka1 | zookeeper.ssl.truststore.location = null
kafka1 | zookeeper.ssl.truststore.password = null
kafka1 | zookeeper.ssl.truststore.type = null

Actions #9

Updated by Fernando Jose Capeletto Neto over 1 year ago

kafka1 | [2023-04-05 06:54:43,034] INFO [GroupCoordinator 1]: Dynamic Member with unknown member id joins group trackhandler-group in Empty state. Created a new member id TrackHandler-244a8571-9336-428c-9137-e1bb16b3f08f for this member and add to the group. (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:54:43,042] INFO [GroupCoordinator 1]: Preparing to rebalance group trackhandler-group in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member TrackHandler-244a8571-9336-428c-9137-e1bb16b3f08f with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:54:46,056] INFO [GroupCoordinator 1]: Stabilized group trackhandler-group generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:54:46,096] INFO [GroupCoordinator 1]: Assignment received from leader TrackHandler-244a8571-9336-428c-9137-e1bb16b3f08f for group trackhandler-group for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:55:45,802] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka1 | [2023-04-05 06:59:20,316] INFO [GroupCoordinator 1]: Member TrackHandler-244a8571-9336-428c-9137-e1bb16b3f08f in group trackhandler-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:59:20,318] INFO [GroupCoordinator 1]: Preparing to rebalance group trackhandler-group in state PreparingRebalance with old generation 1 (__consumer_offsets-25) (reason: removing member TrackHandler-244a8571-9336-428c-9137-e1bb16b3f08f on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:59:20,319] INFO [GroupCoordinator 1]: Group trackhandler-group with generation 2 is now empty (__consumer_offsets-25) (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:59:34,830] INFO [GroupCoordinator 1]: Dynamic Member with unknown member id joins group trackhandler-group in Empty state. Created a new member id TrackHandler-ca3a72ea-bc18-4354-8d14-107d340f063a for this member and add to the group. (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:59:34,830] INFO [GroupCoordinator 1]: Preparing to rebalance group trackhandler-group in state PreparingRebalance with old generation 2 (__consumer_offsets-25) (reason: Adding new member TrackHandler-ca3a72ea-bc18-4354-8d14-107d340f063a with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:59:37,831] INFO [GroupCoordinator 1]: Stabilized group trackhandler-group generation 3 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 06:59:37,871] INFO [GroupCoordinator 1]: Assignment received from leader TrackHandler-ca3a72ea-bc18-4354-8d14-107d340f063a for group trackhandler-group for generation 3. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka1 | [2023-04-05 07:00:45,805] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)

Actions #10

Updated by Fernando Jose Capeletto Neto over 1 year ago

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.5.3)

[2023-04-05 02:08:02.587] INFO [e.f.c.t.h.TrackHandlerApplication] [55]: Starting TrackHandlerApplication using Java 17.0.1 on macbookpro-i9.attlocal.net with PID 81196 (/Users/fjcapeletto/workspace/TrackHandler/target/classes started by fjcapeletto in /Users/fjcapeletto/workspace/TrackHandler)
[2023-04-05 02:08:02.596] DEBUG [e.f.c.t.h.TrackHandlerApplication] [56]: Running with Spring Boot v2.5.3, Spring v5.3.9
[2023-04-05 02:08:02.598] INFO [e.f.c.t.h.TrackHandlerApplication] [663]: The following profiles are active: localh2
[2023-04-05 02:08:54.340] INFO [e.f.c.t.h.s.KafkaServiceImpl] [80]: TRACKHANDLER STARTED!
[2023-04-05 02:08:54.426] INFO [o.a.k.c.p.ProducerConfig] [279]: ProducerConfig values:
acks = all
batch.size = 16384
bootstrap.servers = [144.126.159.51:9092]
buffer.memory = 33554432
client.id = TrackHandler
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.IntegerSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 20000
retries = 0
retry.backoff.ms = 500
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class engineer.fernando.cms.tracks.handler.messages.KafkaJsonSerializer

[2023-04-05 02:08:54.508] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name bufferpool-wait-time
[2023-04-05 02:08:54.522] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name buffer-exhausted-records
[2023-04-05 02:08:54.538] DEBUG [o.a.k.c.Metadata] [278]: Updated cluster metadata version 1 to Cluster(id = null, nodes = [144.126.159.51:9092 (id: \-1 rack: null)], partitions = [], controller = null)
[2023-04-05 02:08:54.562] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name produce-throttle-time
[2023-04-05 02:08:54.603] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name connections-closed:
[2023-04-05 02:08:54.606] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name connections-created:
[2023-04-05 02:08:54.608] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name successful-authentication:
[2023-04-05 02:08:54.610] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name failed-authentication:
[2023-04-05 02:08:54.612] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name bytes-sent-received:
[2023-04-05 02:08:54.614] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name bytes-sent:
[2023-04-05 02:08:54.618] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name bytes-received:
[2023-04-05 02:08:54.621] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name select-time:
[2023-04-05 02:08:54.624] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name io-time:
[2023-04-05 02:08:54.641] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name batch-size
[2023-04-05 02:08:54.643] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name compression-rate
[2023-04-05 02:08:54.645] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name queue-time
[2023-04-05 02:08:54.647] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name request-time
[2023-04-05 02:08:54.649] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name records-per-request
[2023-04-05 02:08:54.651] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name record-retries
[2023-04-05 02:08:54.653] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name errors
[2023-04-05 02:08:54.655] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name record-size
[2023-04-05 02:08:54.660] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name batch-split-rate
[2023-04-05 02:08:54.667] DEBUG [o.a.k.c.p.i.Sender] [158]: [Producer clientId=TrackHandler] Starting Kafka producer I/O thread.
[2023-04-05 02:08:54.671] INFO [o.a.k.c.u.AppInfoParser] [109]: Kafka version : 2.0.0
[2023-04-05 02:08:54.672] INFO [o.a.k.c.u.AppInfoParser] [110]: Kafka commitId : 3402a8361b734732
[2023-04-05 02:08:54.676] DEBUG [o.a.k.c.p.KafkaProducer] [452]: [Producer clientId=TrackHandler] Kafka producer started
[2023-04-05 02:08:54.714] DEBUG [o.a.k.c.NetworkClient] [1034]: [Producer clientId=TrackHandler] Initialize connection to node 144.126.159.51:9092 (id: -1 rack: null) for sending metadata request
[2023-04-05 02:08:54.715] DEBUG [o.a.k.c.NetworkClient] [862]: [Producer clientId=TrackHandler] Initiating connection to node 144.126.159.51:9092 (id: -1 rack: null)
[2023-04-05 02:08:54.791] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node--1.bytes-sent
[2023-04-05 02:08:54.794] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node--1.bytes-received
[2023-04-05 02:08:54.797] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node--1.latency
[2023-04-05 02:08:54.803] DEBUG [o.a.k.c.n.Selector] [475]: [Producer clientId=TrackHandler] Created socket with SO_RCVBUF = 33304, SO_SNDBUF = 131768, SO_TIMEOUT = 0 to node -1
[2023-04-05 02:08:55.071] DEBUG [o.a.k.c.NetworkClient] [824]: [Producer clientId=TrackHandler] Completed connection to node -1. Fetching API versions.
[2023-04-05 02:08:55.072] DEBUG [o.a.k.c.NetworkClient] [838]: [Producer clientId=TrackHandler] Initiating API versions fetch from node -1.
[2023-04-05 02:08:55.138] DEBUG [o.a.k.c.NetworkClient] [792]: [Producer clientId=TrackHandler] Recorded API versions for node -1: (Produce(0): 0 to 9 [usable: 6], Fetch(1): 0 to 13 [usable: 8], ListOffsets(2): 0 to 7 [usable: 3], Metadata(3): 0 to 12 [usable: 6], LeaderAndIsr(4): 0 to 6 [usable: 1], StopReplica(5): 0 to 3 [usable: 0], UpdateMetadata(6): 0 to 7 [usable: 4], ControlledShutdown(7): 0 to 3 [usable: 1], OffsetCommit(8): 0 to 8 [usable: 4], OffsetFetch(9): 0 to 8 [usable: 4], FindCoordinator(10): 0 to 4 [usable: 2], JoinGroup(11): 0 to 9 [usable: 3], Heartbeat(12): 0 to 4 [usable: 2], LeaveGroup(13): 0 to 5 [usable: 2], SyncGroup(14): 0 to 5 [usable: 2], DescribeGroups(15): 0 to 5 [usable: 2], ListGroups(16): 0 to 4 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 7 [usable: 3], DeleteTopics(20): 0 to 6 [usable: 2], DeleteRecords(21): 0 to 2 [usable: 1], InitProducerId(22): 0 to 4 [usable: 1], OffsetForLeaderEpoch(23): 0 to 4 [usable: 1], AddPartitionsToTxn(24): 0 to 3 [usable: 1], AddOffsetsToTxn(25): 0 to 3 [usable: 1], EndTxn(26): 0 to 3 [usable: 1], WriteTxnMarkers(27): 0 to 1 [usable: 0], TxnOffsetCommit(28): 0 to 3 [usable: 1], DescribeAcls(29): 0 to 3 [usable: 1], CreateAcls(30): 0 to 3 [usable: 1], DeleteAcls(31): 0 to 3 [usable: 1], DescribeConfigs(32): 0 to 4 [usable: 2], AlterConfigs(33): 0 to 2 [usable: 1], AlterReplicaLogDirs(34): 0 to 2 [usable: 1], DescribeLogDirs(35): 0 to 4 [usable: 1], SaslAuthenticate(36): 0 to 2 [usable: 0], CreatePartitions(37): 0 to 3 [usable: 1], CreateDelegationToken(38): 0 to 3 [usable: 1], RenewDelegationToken(39): 0 to 2 [usable: 1], ExpireDelegationToken(40): 0 to 2 [usable: 1], DescribeDelegationToken(41): 0 to 3 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 1, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, 
UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0)
[2023-04-05 02:08:55.141] DEBUG [o.a.k.c.NetworkClient] [1018]: [Producer clientId=TrackHandler] Sending metadata request (type=MetadataRequest, topics=TACTICAL_ALERT_TOPIC) to node 144.126.159.51:9092 (id: -1 rack: null)
[2023-04-05 02:08:55.178] INFO [o.a.k.c.Metadata] [273]: Cluster ID: TrjbRr0zSbemhBEHXdWVbA
[2023-04-05 02:08:55.179] DEBUG [o.a.k.c.Metadata] [278]: Updated cluster metadata version 2 to Cluster(id = TrjbRr0zSbemhBEHXdWVbA, nodes = [144.126.159.51:9092 (id: 1 rack: null)], partitions = [Partition(topic = TACTICAL_ALERT_TOPIC, partition = 0, leader = 1, replicas = [1], isr = [1], offlineReplicas = [])], controller = 144.126.159.51:9092 (id: 1 rack: null))
[2023-04-05 02:08:55.307] DEBUG [o.a.k.c.NetworkClient] [862]: [Producer clientId=TrackHandler] Initiating connection to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:08:55.350] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-1.bytes-sent
[2023-04-05 02:08:55.370] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-1.bytes-received
[2023-04-05 02:08:55.372] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-1.latency
[2023-04-05 02:08:55.391] DEBUG [o.a.k.c.n.Selector] [475]: [Producer clientId=TrackHandler] Created socket with SO_RCVBUF = 33304, SO_SNDBUF = 131768, SO_TIMEOUT = 0 to node 1
[2023-04-05 02:08:55.392] DEBUG [o.a.k.c.NetworkClient] [824]: [Producer clientId=TrackHandler] Completed connection to node 1. Fetching API versions.
[2023-04-05 02:08:55.393] DEBUG [o.a.k.c.NetworkClient] [838]: [Producer clientId=TrackHandler] Initiating API versions fetch from node 1.
[2023-04-05 02:08:55.422] DEBUG [o.a.k.c.NetworkClient] [792]: [Producer clientId=TrackHandler] Recorded API versions for node 1: (Produce(0): 0 to 9 [usable: 6], Fetch(1): 0 to 13 [usable: 8], ListOffsets(2): 0 to 7 [usable: 3], Metadata(3): 0 to 12 [usable: 6], LeaderAndIsr(4): 0 to 6 [usable: 1], StopReplica(5): 0 to 3 [usable: 0], UpdateMetadata(6): 0 to 7 [usable: 4], ControlledShutdown(7): 0 to 3 [usable: 1], OffsetCommit(8): 0 to 8 [usable: 4], OffsetFetch(9): 0 to 8 [usable: 4], FindCoordinator(10): 0 to 4 [usable: 2], JoinGroup(11): 0 to 9 [usable: 3], Heartbeat(12): 0 to 4 [usable: 2], LeaveGroup(13): 0 to 5 [usable: 2], SyncGroup(14): 0 to 5 [usable: 2], DescribeGroups(15): 0 to 5 [usable: 2], ListGroups(16): 0 to 4 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 7 [usable: 3], DeleteTopics(20): 0 to 6 [usable: 2], DeleteRecords(21): 0 to 2 [usable: 1], InitProducerId(22): 0 to 4 [usable: 1], OffsetForLeaderEpoch(23): 0 to 4 [usable: 1], AddPartitionsToTxn(24): 0 to 3 [usable: 1], AddOffsetsToTxn(25): 0 to 3 [usable: 1], EndTxn(26): 0 to 3 [usable: 1], WriteTxnMarkers(27): 0 to 1 [usable: 0], TxnOffsetCommit(28): 0 to 3 [usable: 1], DescribeAcls(29): 0 to 3 [usable: 1], CreateAcls(30): 0 to 3 [usable: 1], DeleteAcls(31): 0 to 3 [usable: 1], DescribeConfigs(32): 0 to 4 [usable: 2], AlterConfigs(33): 0 to 2 [usable: 1], AlterReplicaLogDirs(34): 0 to 2 [usable: 1], DescribeLogDirs(35): 0 to 4 [usable: 1], SaslAuthenticate(36): 0 to 2 [usable: 0], CreatePartitions(37): 0 to 3 [usable: 1], CreateDelegationToken(38): 0 to 3 [usable: 1], RenewDelegationToken(39): 0 to 2 [usable: 1], ExpireDelegationToken(40): 0 to 2 [usable: 1], DescribeDelegationToken(41): 0 to 3 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 1, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, 
UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0)
[2023-04-05 02:08:55.444] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TACTICAL_ALERT_TOPIC.records-per-batch
[2023-04-05 02:08:55.445] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TACTICAL_ALERT_TOPIC.bytes
[2023-04-05 02:08:55.447] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TACTICAL_ALERT_TOPIC.compression-rate
[2023-04-05 02:08:55.448] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TACTICAL_ALERT_TOPIC.record-retries
[2023-04-05 02:08:55.449] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TACTICAL_ALERT_TOPIC.record-errors
[2023-04-05 02:08:55.503] DEBUG [e.f.c.t.h.s.KafkaServiceImpl] [137]: Kafka TACTICAL_ALERT_TOPIC topic message 1 sent successfully, topic-partition=TACTICAL_ALERT_TOPIC-0 offset=3 timestamp=2023-04-05T07:08:55.285Z
[2023-04-05 02:08:58.131] INFO [o.s.a.r.c.CachingConnectionFactory] [638]: Attempting to connect to: [gull.rmq.cloudamqp.com:5671]
[2023-04-05 02:08:58.820] INFO [o.s.a.r.c.CachingConnectionFactory] [589]: Created new connection: rabbitConnectionFactory#3802c799:0/SimpleConnection@6fd06ee3 [delegate=amqp::5671/vfsemzbv, localPort= 60315]
[2023-04-05 02:08:58.834] INFO [o.s.a.r.c.RabbitAdmin] [591]: Auto-declaring a non-durable, auto-delete, or exclusive Queue (systemtrack) durable:false, auto-delete:false, exclusive:false. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
[2023-04-05 02:08:58.835] INFO [o.s.a.r.c.RabbitAdmin] [591]: Auto-declaring a non-durable, auto-delete, or exclusive Queue (nmea) durable:false, auto-delete:false, exclusive:false. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
[2023-04-05 02:08:59.610] DEBUG [e.f.c.t.h.c.SchedulerConfig$CustomTaskScheduler] [181]: Initializing ExecutorService
[2023-04-05 02:08:59.627] INFO [e.f.c.t.h.TrackHandlerApplication] [61]: Started TrackHandlerApplication in 58.066 seconds (JVM running for 80.263)
[2023-04-05 02:09:04.378] INFO [o.a.k.c.c.ConsumerConfig] [279]: ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [144.126.159.51:9092]
check.crcs = true
client.id = TrackHandler
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = trackhandler-group
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 20000
retry.backoff.ms = 500
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class engineer.fernando.cms.tracks.handler.messages.KafkaJsonDeserializer

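The ConsumerConfig dump above can be expressed in code. A minimal sketch (stdlib only, as an illustration, not the project's actual configuration class) of the non-default properties this consumer appears to use, with the broker address, group id, and deserializer class names taken from the log:

```java
import java.util.Properties;

public class TrackHandlerConsumerProps {

    // Builds the non-default consumer properties shown in the ConsumerConfig dump above.
    public static Properties build() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "144.126.159.51:9092");
        p.put("client.id", "TrackHandler");
        p.put("group.id", "trackhandler-group");
        p.put("auto.offset.reset", "earliest");   // start from offset 0 when no committed offset exists
        p.put("enable.auto.commit", "false");     // offsets are committed by the application, not the client
        p.put("request.timeout.ms", "20000");
        p.put("retry.backoff.ms", "500");
        p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer", "engineer.fernando.cms.tracks.handler.messages.KafkaJsonDeserializer");
        return p;
    }

    public static void main(String[] args) {
        Properties p = build();
        System.out.println(p.getProperty("group.id"));
    }
}
```

Note that the dump shows `security.protocol = PLAINTEXT` and `sasl.jaas.config = null`; for the SASL_PLAINTEXT external listener configured in the docker-compose above, a client would presumably also need `security.protocol=SASL_PLAINTEXT`, `sasl.mechanism=PLAIN`, and a `sasl.jaas.config` entry matching the broker's `kafka_server_jaas.conf` (to be verified).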
[2023-04-05 02:09:04.380] DEBUG [o.a.k.c.c.KafkaConsumer] [668]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Initializing the Kafka consumer
[2023-04-05 02:09:04.386] DEBUG [o.a.k.c.Metadata] [278]: Updated cluster metadata version 1 to Cluster(id = null, nodes = [144.126.159.51:9092 (id: 1 rack: null)], partitions = [], controller = null)
[2023-04-05 02:09:04.402] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name fetch-throttle-time
[2023-04-05 02:09:04.415] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name connections-closed:
[2023-04-05 02:09:04.416] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name connections-created:
[2023-04-05 02:09:04.417] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name successful-authentication:
[2023-04-05 02:09:04.418] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name failed-authentication:
[2023-04-05 02:09:04.419] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name bytes-sent-received:
[2023-04-05 02:09:04.420] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name bytes-sent:
[2023-04-05 02:09:04.421] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name bytes-received:
[2023-04-05 02:09:04.422] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name select-time:
[2023-04-05 02:09:04.423] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name io-time:
[2023-04-05 02:09:04.459] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name heartbeat-latency
[2023-04-05 02:09:04.461] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name join-latency
[2023-04-05 02:09:04.461] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name sync-latency
[2023-04-05 02:09:04.466] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name commit-latency
[2023-04-05 02:09:04.474] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name bytes-fetched
[2023-04-05 02:09:04.475] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name records-fetched
[2023-04-05 02:09:04.477] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name fetch-latency
[2023-04-05 02:09:04.478] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name records-lag
[2023-04-05 02:09:04.479] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name records-lead
[2023-04-05 02:09:04.480] INFO [o.a.k.c.u.AppInfoParser] [109]: Kafka version : 2.0.0
[2023-04-05 02:09:04.481] INFO [o.a.k.c.u.AppInfoParser] [110]: Kafka commitId : 3402a8361b734732
[2023-04-05 02:09:04.482] DEBUG [o.a.k.c.c.KafkaConsumer] [793]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Kafka consumer initialized
[2023-04-05 02:09:04.483] DEBUG [o.a.k.c.c.KafkaConsumer] [921]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Subscribed to topic(s): TRACKDAO_TOPIC, AISMESSAGE_TOPIC, TACTICAL_ALERT_TOPIC, TRACKDAO_PERFORMANCE_TOPIC
[2023-04-05 02:09:04.484] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [651]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending FindCoordinator request to broker 144.126.159.51:9092 (id: -1 rack: null)
[2023-04-05 02:09:04.491] DEBUG [o.a.k.c.NetworkClient] [862]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Initiating connection to node 144.126.159.51:9092 (id: -1 rack: null)
[2023-04-05 02:09:04.524] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node--1.bytes-sent
[2023-04-05 02:09:04.526] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node--1.bytes-received
[2023-04-05 02:09:04.526] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node--1.latency
[2023-04-05 02:09:04.527] DEBUG [o.a.k.c.n.Selector] [475]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Created socket with SO_RCVBUF = 66608, SO_SNDBUF = 131768, SO_TIMEOUT = 0 to node -1
[2023-04-05 02:09:04.528] DEBUG [o.a.k.c.NetworkClient] [824]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Completed connection to node -1. Fetching API versions.
[2023-04-05 02:09:04.528] DEBUG [o.a.k.c.NetworkClient] [838]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Initiating API versions fetch from node -1.
[2023-04-05 02:09:04.556] DEBUG [o.a.k.c.NetworkClient] [792]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Recorded API versions for node -1: (Produce(0): 0 to 9 [usable: 6], Fetch(1): 0 to 13 [usable: 8], ListOffsets(2): 0 to 7 [usable: 3], Metadata(3): 0 to 12 [usable: 6], LeaderAndIsr(4): 0 to 6 [usable: 1], StopReplica(5): 0 to 3 [usable: 0], UpdateMetadata(6): 0 to 7 [usable: 4], ControlledShutdown(7): 0 to 3 [usable: 1], OffsetCommit(8): 0 to 8 [usable: 4], OffsetFetch(9): 0 to 8 [usable: 4], FindCoordinator(10): 0 to 4 [usable: 2], JoinGroup(11): 0 to 9 [usable: 3], Heartbeat(12): 0 to 4 [usable: 2], LeaveGroup(13): 0 to 5 [usable: 2], SyncGroup(14): 0 to 5 [usable: 2], DescribeGroups(15): 0 to 5 [usable: 2], ListGroups(16): 0 to 4 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 7 [usable: 3], DeleteTopics(20): 0 to 6 [usable: 2], DeleteRecords(21): 0 to 2 [usable: 1], InitProducerId(22): 0 to 4 [usable: 1], OffsetForLeaderEpoch(23): 0 to 4 [usable: 1], AddPartitionsToTxn(24): 0 to 3 [usable: 1], AddOffsetsToTxn(25): 0 to 3 [usable: 1], EndTxn(26): 0 to 3 [usable: 1], WriteTxnMarkers(27): 0 to 1 [usable: 0], TxnOffsetCommit(28): 0 to 3 [usable: 1], DescribeAcls(29): 0 to 3 [usable: 1], CreateAcls(30): 0 to 3 [usable: 1], DeleteAcls(31): 0 to 3 [usable: 1], DescribeConfigs(32): 0 to 4 [usable: 2], AlterConfigs(33): 0 to 2 [usable: 1], AlterReplicaLogDirs(34): 0 to 2 [usable: 1], DescribeLogDirs(35): 0 to 4 [usable: 1], SaslAuthenticate(36): 0 to 2 [usable: 0], CreatePartitions(37): 0 to 3 [usable: 1], CreateDelegationToken(38): 0 to 3 [usable: 1], RenewDelegationToken(39): 0 to 2 [usable: 1], ExpireDelegationToken(40): 0 to 2 [usable: 1], DescribeDelegationToken(41): 0 to 3 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 1, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0)
[2023-04-05 02:09:04.558] DEBUG [o.a.k.c.NetworkClient] [1018]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending metadata request (type=MetadataRequest, topics=TRACKDAO_TOPIC,AISMESSAGE_TOPIC,TACTICAL_ALERT_TOPIC,TRACKDAO_PERFORMANCE_TOPIC) to node 144.126.159.51:9092 (id: -1 rack: null)
[2023-04-05 02:09:04.588] INFO [o.a.k.c.Metadata] [273]: Cluster ID: TrjbRr0zSbemhBEHXdWVbA
[2023-04-05 02:09:04.589] DEBUG [o.a.k.c.Metadata] [278]: Updated cluster metadata version 2 to Cluster(id = TrjbRr0zSbemhBEHXdWVbA, nodes = [144.126.159.51:9092 (id: 1 rack: null)], partitions = [Partition(topic = AISMESSAGE_TOPIC, partition = 0, leader = 1, replicas = [1], isr = [1], offlineReplicas = []), Partition(topic = TACTICAL_ALERT_TOPIC, partition = 0, leader = 1, replicas = [1], isr = [1], offlineReplicas = []), Partition(topic = TRACKDAO_TOPIC, partition = 0, leader = 1, replicas = [1], isr = [1], offlineReplicas = []), Partition(topic = TRACKDAO_PERFORMANCE_TOPIC, partition = 0, leader = 1, replicas = [1], isr = [1], offlineReplicas = [])], controller = 144.126.159.51:9092 (id: 1 rack: null))
[2023-04-05 02:09:04.591] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [662]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1680678544590, latencyMs=104, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=2, clientId=TrackHandler, correlationId=0), responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='NONE', error=NONE, node=144.126.159.51:9092 (id: 1 rack: null)))
[2023-04-05 02:09:04.591] INFO [o.a.k.c.c.i.AbstractCoordinator] [677]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Discovered group coordinator 144.126.159.51:9092 (id: 2147483646 rack: null)
[2023-04-05 02:09:04.592] DEBUG [o.a.k.c.NetworkClient] [862]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Initiating connection to node 144.126.159.51:9092 (id: 2147483646 rack: null)
[2023-04-05 02:09:04.596] INFO [o.a.k.c.c.i.ConsumerCoordinator] [462]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Revoking previously assigned partitions []
[2023-04-05 02:09:04.597] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [979]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Disabling heartbeat thread
[2023-04-05 02:09:04.596] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [1002]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Heartbeat thread started
[2023-04-05 02:09:04.597] INFO [o.a.k.c.c.i.AbstractCoordinator] [509]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] (Re)joining group
[2023-04-05 02:09:04.602] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [517]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending JoinGroup ((type: JoinGroupRequest, groupId=trackhandler-group, sessionTimeout=10000, rebalanceTimeout=300000, memberId=, protocolType=consumer, groupProtocols=org.apache.kafka.common.requests.JoinGroupRequest$ProtocolMetadata@5a541d2f)) to coordinator 144.126.159.51:9092 (id: 2147483646 rack: null)
[2023-04-05 02:09:04.619] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-2147483646.bytes-sent
[2023-04-05 02:09:04.620] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-2147483646.bytes-received
[2023-04-05 02:09:04.621] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-2147483646.latency
[2023-04-05 02:09:04.622] DEBUG [o.a.k.c.n.Selector] [475]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Created socket with SO_RCVBUF = 66608, SO_SNDBUF = 131768, SO_TIMEOUT = 0 to node 2147483646
[2023-04-05 02:09:04.623] DEBUG [o.a.k.c.NetworkClient] [824]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Completed connection to node 2147483646. Fetching API versions.
[2023-04-05 02:09:04.623] DEBUG [o.a.k.c.NetworkClient] [838]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Initiating API versions fetch from node 2147483646.
[2023-04-05 02:09:04.653] DEBUG [o.a.k.c.NetworkClient] [792]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Recorded API versions for node 2147483646: (Produce(0): 0 to 9 [usable: 6], Fetch(1): 0 to 13 [usable: 8], ListOffsets(2): 0 to 7 [usable: 3], Metadata(3): 0 to 12 [usable: 6], LeaderAndIsr(4): 0 to 6 [usable: 1], StopReplica(5): 0 to 3 [usable: 0], UpdateMetadata(6): 0 to 7 [usable: 4], ControlledShutdown(7): 0 to 3 [usable: 1], OffsetCommit(8): 0 to 8 [usable: 4], OffsetFetch(9): 0 to 8 [usable: 4], FindCoordinator(10): 0 to 4 [usable: 2], JoinGroup(11): 0 to 9 [usable: 3], Heartbeat(12): 0 to 4 [usable: 2], LeaveGroup(13): 0 to 5 [usable: 2], SyncGroup(14): 0 to 5 [usable: 2], DescribeGroups(15): 0 to 5 [usable: 2], ListGroups(16): 0 to 4 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 7 [usable: 3], DeleteTopics(20): 0 to 6 [usable: 2], DeleteRecords(21): 0 to 2 [usable: 1], InitProducerId(22): 0 to 4 [usable: 1], OffsetForLeaderEpoch(23): 0 to 4 [usable: 1], AddPartitionsToTxn(24): 0 to 3 [usable: 1], AddOffsetsToTxn(25): 0 to 3 [usable: 1], EndTxn(26): 0 to 3 [usable: 1], WriteTxnMarkers(27): 0 to 1 [usable: 0], TxnOffsetCommit(28): 0 to 3 [usable: 1], DescribeAcls(29): 0 to 3 [usable: 1], CreateAcls(30): 0 to 3 [usable: 1], DeleteAcls(31): 0 to 3 [usable: 1], DescribeConfigs(32): 0 to 4 [usable: 2], AlterConfigs(33): 0 to 2 [usable: 1], AlterReplicaLogDirs(34): 0 to 2 [usable: 1], DescribeLogDirs(35): 0 to 4 [usable: 1], SaslAuthenticate(36): 0 to 2 [usable: 0], CreatePartitions(37): 0 to 3 [usable: 1], CreateDelegationToken(38): 0 to 3 [usable: 1], RenewDelegationToken(39): 0 to 2 [usable: 1], ExpireDelegationToken(40): 0 to 2 [usable: 1], DescribeDelegationToken(41): 0 to 3 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 1, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 2, 
UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0)
[2023-04-05 02:09:04.710] DEBUG [o.a.k.c.NetworkClient] [1018]: [Producer clientId=TrackHandler] Sending metadata request (type=MetadataRequest, topics=TRACKDAO_TOPIC,TACTICAL_ALERT_TOPIC) to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:04.738] DEBUG [o.a.k.c.Metadata] [278]: Updated cluster metadata version 3 to Cluster(id = TrjbRr0zSbemhBEHXdWVbA, nodes = [144.126.159.51:9092 (id: 1 rack: null)], partitions = [Partition(topic = TACTICAL_ALERT_TOPIC, partition = 0, leader = 1, replicas = [1], isr = [1], offlineReplicas = []), Partition(topic = TRACKDAO_TOPIC, partition = 0, leader = 1, replicas = [1], isr = [1], offlineReplicas = [])], controller = 144.126.159.51:9092 (id: 1 rack: null))
[2023-04-05 02:09:04.749] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_TOPIC.records-per-batch
[2023-04-05 02:09:04.751] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_TOPIC.bytes
[2023-04-05 02:09:04.752] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_TOPIC.compression-rate
[2023-04-05 02:09:04.753] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_TOPIC.record-retries
[2023-04-05 02:09:04.753] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_TOPIC.record-errors
[2023-04-05 02:09:04.783] DEBUG [e.f.c.t.h.s.KafkaServiceImpl] [137]: Kafka TRACKDAO_TOPIC topic message 1 sent successfully, topic-partition=TRACKDAO_TOPIC-0 offset=547 timestamp=2023-04-05T07:09:04.748Z
[2023-04-05 02:09:06.660] DEBUG [e.f.c.t.h.s.KafkaServiceImpl] [137]: Kafka TRACKDAO_TOPIC topic message 1 sent successfully, topic-partition=TRACKDAO_TOPIC-0 offset=548 timestamp=2023-04-05T07:09:06.625Z
[2023-04-05 02:09:07.769] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [532]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Received successful JoinGroup response: JoinGroupResponse(throttleTimeMs=0, error=NONE, generationId=7, groupProtocol=range, memberId=TrackHandler-ddc17ea4-7b58-4e9b-9081-cc7e3f0814c2, leaderId=TrackHandler-ddc17ea4-7b58-4e9b-9081-cc7e3f0814c2, members=TrackHandler-ddc17ea4-7b58-4e9b-9081-cc7e3f0814c2)
[2023-04-05 02:09:07.771] DEBUG [o.a.k.c.c.i.ConsumerCoordinator] [407]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Performing assignment using strategy range with subscriptions {TrackHandler-ddc17ea4-7b58-4e9b-9081-cc7e3f0814c2=Subscription(topics=[TRACKDAO_TOPIC, AISMESSAGE_TOPIC, TACTICAL_ALERT_TOPIC, TRACKDAO_PERFORMANCE_TOPIC])}
[2023-04-05 02:09:07.775] DEBUG [o.a.k.c.c.i.ConsumerCoordinator] [444]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Finished assignment for group: {TrackHandler-ddc17ea4-7b58-4e9b-9081-cc7e3f0814c2=Assignment(partitions=[TRACKDAO_TOPIC-0, AISMESSAGE_TOPIC-0, TACTICAL_ALERT_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0])}
[2023-04-05 02:09:07.777] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [597]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending leader SyncGroup to coordinator 144.126.159.51:9092 (id: 2147483646 rack: null): (type=SyncGroupRequest, groupId=trackhandler-group, generationId=7, memberId=TrackHandler-ddc17ea4-7b58-4e9b-9081-cc7e3f0814c2, groupAssignment=TrackHandler-ddc17ea4-7b58-4e9b-9081-cc7e3f0814c2)
[2023-04-05 02:09:07.811] INFO [o.a.k.c.c.i.AbstractCoordinator] [473]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Successfully joined group with generation 7
[2023-04-05 02:09:07.812] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [970]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Enabling heartbeat thread
[2023-04-05 02:09:07.814] INFO [o.a.k.c.c.i.ConsumerCoordinator] [280]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Setting newly assigned partitions [TRACKDAO_TOPIC-0, AISMESSAGE_TOPIC-0, TACTICAL_ALERT_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0]
[2023-04-05 02:09:07.816] DEBUG [o.a.k.c.c.i.ConsumerCoordinator] [891]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Fetching committed offsets for partitions: [TRACKDAO_TOPIC-0, AISMESSAGE_TOPIC-0, TACTICAL_ALERT_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0]
[2023-04-05 02:09:07.847] DEBUG [o.a.k.c.c.i.ConsumerCoordinator] [941]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Found no committed offset for partition TRACKDAO_TOPIC-0
[2023-04-05 02:09:07.847] DEBUG [o.a.k.c.c.i.ConsumerCoordinator] [941]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Found no committed offset for partition AISMESSAGE_TOPIC-0
[2023-04-05 02:09:07.848] DEBUG [o.a.k.c.c.i.ConsumerCoordinator] [941]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Found no committed offset for partition TACTICAL_ALERT_TOPIC-0
[2023-04-05 02:09:07.848] DEBUG [o.a.k.c.c.i.ConsumerCoordinator] [941]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Found no committed offset for partition TRACKDAO_PERFORMANCE_TOPIC-0
[2023-04-05 02:09:07.850] DEBUG [o.a.k.c.c.i.Fetcher] [728]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending ListOffsetRequest (type=ListOffsetRequest, replicaId=-1, partitionTimestamps={TRACKDAO_TOPIC-0=-2, AISMESSAGE_TOPIC-0=-2, TACTICAL_ALERT_TOPIC-0=-2, TRACKDAO_PERFORMANCE_TOPIC-0=-2}, isolationLevel=READ_UNCOMMITTED) to broker 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:07.853] DEBUG [o.a.k.c.NetworkClient] [862]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Initiating connection to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:07.882] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-1.bytes-sent
[2023-04-05 02:09:07.884] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-1.bytes-received
[2023-04-05 02:09:07.886] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name node-1.latency
[2023-04-05 02:09:07.887] DEBUG [o.a.k.c.n.Selector] [475]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Created socket with SO_RCVBUF = 66608, SO_SNDBUF = 131768, SO_TIMEOUT = 0 to node 1
[2023-04-05 02:09:07.887] DEBUG [o.a.k.c.NetworkClient] [824]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Completed connection to node 1. Fetching API versions.
[2023-04-05 02:09:07.888] DEBUG [o.a.k.c.NetworkClient] [838]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Initiating API versions fetch from node 1.
[2023-04-05 02:09:07.917] DEBUG [o.a.k.c.NetworkClient] [792]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Recorded API versions for node 1: (Produce(0): 0 to 9 [usable: 6], Fetch(1): 0 to 13 [usable: 8], ListOffsets(2): 0 to 7 [usable: 3], Metadata(3): 0 to 12 [usable: 6], LeaderAndIsr(4): 0 to 6 [usable: 1], StopReplica(5): 0 to 3 [usable: 0], UpdateMetadata(6): 0 to 7 [usable: 4], ControlledShutdown(7): 0 to 3 [usable: 1], OffsetCommit(8): 0 to 8 [usable: 4], OffsetFetch(9): 0 to 8 [usable: 4], FindCoordinator(10): 0 to 4 [usable: 2], JoinGroup(11): 0 to 9 [usable: 3], Heartbeat(12): 0 to 4 [usable: 2], LeaveGroup(13): 0 to 5 [usable: 2], SyncGroup(14): 0 to 5 [usable: 2], DescribeGroups(15): 0 to 5 [usable: 2], ListGroups(16): 0 to 4 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 7 [usable: 3], DeleteTopics(20): 0 to 6 [usable: 2], DeleteRecords(21): 0 to 2 [usable: 1], InitProducerId(22): 0 to 4 [usable: 1], OffsetForLeaderEpoch(23): 0 to 4 [usable: 1], AddPartitionsToTxn(24): 0 to 3 [usable: 1], AddOffsetsToTxn(25): 0 to 3 [usable: 1], EndTxn(26): 0 to 3 [usable: 1], WriteTxnMarkers(27): 0 to 1 [usable: 0], TxnOffsetCommit(28): 0 to 3 [usable: 1], DescribeAcls(29): 0 to 3 [usable: 1], CreateAcls(30): 0 to 3 [usable: 1], DeleteAcls(31): 0 to 3 [usable: 1], DescribeConfigs(32): 0 to 4 [usable: 2], AlterConfigs(33): 0 to 2 [usable: 1], AlterReplicaLogDirs(34): 0 to 2 [usable: 1], DescribeLogDirs(35): 0 to 4 [usable: 1], SaslAuthenticate(36): 0 to 2 [usable: 0], CreatePartitions(37): 0 to 3 [usable: 1], CreateDelegationToken(38): 0 to 3 [usable: 1], RenewDelegationToken(39): 0 to 2 [usable: 1], ExpireDelegationToken(40): 0 to 2 [usable: 1], DescribeDelegationToken(41): 0 to 3 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 1, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0 to 2, UNKNOWN: 0 to 1, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0, UNKNOWN: 0)
[2023-04-05 02:09:07.948] DEBUG [o.a.k.c.c.i.Fetcher] [784]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Handling ListOffsetResponse response for TRACKDAO_TOPIC-0. Fetched offset 0, timestamp -1
[2023-04-05 02:09:07.950] DEBUG [o.a.k.c.c.i.Fetcher] [784]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Handling ListOffsetResponse response for AISMESSAGE_TOPIC-0. Fetched offset 0, timestamp -1
[2023-04-05 02:09:07.950] DEBUG [o.a.k.c.c.i.Fetcher] [784]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Handling ListOffsetResponse response for TACTICAL_ALERT_TOPIC-0. Fetched offset 0, timestamp -1
[2023-04-05 02:09:07.951] DEBUG [o.a.k.c.c.i.Fetcher] [784]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Handling ListOffsetResponse response for TRACKDAO_PERFORMANCE_TOPIC-0. Fetched offset 0, timestamp -1
[2023-04-05 02:09:07.952] INFO [o.a.k.c.c.i.Fetcher] [583]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Resetting offset for partition TRACKDAO_TOPIC-0 to offset 0.
[2023-04-05 02:09:07.953] INFO [o.a.k.c.c.i.Fetcher] [583]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Resetting offset for partition AISMESSAGE_TOPIC-0 to offset 0.
[2023-04-05 02:09:07.953] INFO [o.a.k.c.c.i.Fetcher] [583]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Resetting offset for partition TACTICAL_ALERT_TOPIC-0 to offset 0.
[2023-04-05 02:09:07.954] INFO [o.a.k.c.c.i.Fetcher] [583]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Resetting offset for partition TRACKDAO_PERFORMANCE_TOPIC-0 to offset 0.
[2023-04-05 02:09:07.960] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition AISMESSAGE_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:07.961] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TACTICAL_ALERT_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:07.962] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:07.962] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_PERFORMANCE_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:07.963] DEBUG [o.a.k.c.FetchSessionHandler] [199]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 1 with 4 partition(s).
[2023-04-05 02:09:07.966] DEBUG [o.a.k.c.c.i.Fetcher] [202]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending READ_UNCOMMITTED FullFetchRequest(AISMESSAGE_TOPIC-0, TACTICAL_ALERT_TOPIC-0, TRACKDAO_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0) to broker 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:08.203] DEBUG [o.a.k.c.FetchSessionHandler] [404]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Node 1 sent a full fetch response that created a new incremental fetch session 419554687 with 4 response partition(s)
[2023-04-05 02:09:08.205] DEBUG [o.a.k.c.c.i.Fetcher] [227]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Fetch READ_UNCOMMITTED at offset 0 for partition AISMESSAGE_TOPIC-0 returned fetch data (error=NONE, highWaterMark=0, lastStableOffset = 0, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
[2023-04-05 02:09:08.207] DEBUG [o.a.k.c.c.i.Fetcher] [227]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Fetch READ_UNCOMMITTED at offset 0 for partition TACTICAL_ALERT_TOPIC-0 returned fetch data (error=NONE, highWaterMark=4, lastStableOffset = 4, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=849)
[2023-04-05 02:09:08.207] DEBUG [o.a.k.c.c.i.Fetcher] [227]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Fetch READ_UNCOMMITTED at offset 0 for partition TRACKDAO_TOPIC-0 returned fetch data (error=NONE, highWaterMark=549, lastStableOffset = 549, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=212258)
[2023-04-05 02:09:08.207] DEBUG [o.a.k.c.c.i.Fetcher] [227]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Fetch READ_UNCOMMITTED at offset 0 for partition TRACKDAO_PERFORMANCE_TOPIC-0 returned fetch data (error=NONE, highWaterMark=196, lastStableOffset = 196, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=201385)
[2023-04-05 02:09:08.211] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name AISMESSAGE_TOPIC-0.records-lag
[2023-04-05 02:09:08.213] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name AISMESSAGE_TOPIC-0.records-lead
[2023-04-05 02:09:08.321] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name TACTICAL_ALERT_TOPIC-0.records-lag
[2023-04-05 02:09:08.323] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name TACTICAL_ALERT_TOPIC-0.records-lead
[2023-04-05 02:09:08.659] DEBUG [e.f.c.t.h.s.KafkaServiceImpl] [137]: Kafka TRACKDAO_TOPIC topic message 1 sent successfully, topic-partition=TRACKDAO_TOPIC-0 offset=549 timestamp=2023-04-05T07:09:08.623Z
[2023-04-05 02:09:08.981] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name TRACKDAO_TOPIC-0.records-lag
[2023-04-05 02:09:08.982] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name TRACKDAO_TOPIC-0.records-lead
[2023-04-05 02:09:08.983] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition AISMESSAGE_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:08.984] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TACTICAL_ALERT_TOPIC-0 at offset 4 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:08.985] DEBUG [o.a.k.c.FetchSessionHandler] [252]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Built incremental fetch (sessionId=419554687, epoch=1) for node 1. Added 0 partition(s), altered 1 partition(s), removed 2 partition(s) out of 2 partition(s)
[2023-04-05 02:09:08.985] DEBUG [o.a.k.c.c.i.Fetcher] [202]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(TACTICAL_ALERT_TOPIC-0), toForget=(TRACKDAO_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0), implied=(AISMESSAGE_TOPIC-0)) to broker 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.199] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_TOPIC.bytes-fetched
[2023-04-05 02:09:09.200] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_TOPIC.records-fetched
[2023-04-05 02:09:09.201] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.AISMESSAGE_TOPIC.bytes-fetched
[2023-04-05 02:09:09.201] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.AISMESSAGE_TOPIC.records-fetched
[2023-04-05 02:09:09.202] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TACTICAL_ALERT_TOPIC.bytes-fetched
[2023-04-05 02:09:09.203] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TACTICAL_ALERT_TOPIC.records-fetched
[2023-04-05 02:09:09.204] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_PERFORMANCE_TOPIC.bytes-fetched
[2023-04-05 02:09:09.204] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name topic.TRACKDAO_PERFORMANCE_TOPIC.records-fetched
[2023-04-05 02:09:09.205] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name TRACKDAO_PERFORMANCE_TOPIC-0.records-lag
[2023-04-05 02:09:09.206] DEBUG [o.a.k.c.m.Metrics] [414]: Added sensor with name TRACKDAO_PERFORMANCE_TOPIC-0.records-lead
[2023-04-05 02:09:09.517] DEBUG [o.a.k.c.FetchSessionHandler] [423]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Node 1 sent an incremental fetch response for session 419554687 with 0 response partition(s), 2 implied partition(s)
[2023-04-05 02:09:09.518] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition AISMESSAGE_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.519] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TACTICAL_ALERT_TOPIC-0 at offset 4 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.519] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_TOPIC-0 at offset 549 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.520] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_PERFORMANCE_TOPIC-0 at offset 196 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.521] DEBUG [o.a.k.c.FetchSessionHandler] [252]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Built incremental fetch (sessionId=419554687, epoch=2) for node 1. Added 2 partition(s), altered 0 partition(s), removed 0 partition(s) out of 4 partition(s)
[2023-04-05 02:09:09.522] DEBUG [o.a.k.c.c.i.Fetcher] [202]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(TRACKDAO_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0), toForget=(), implied=(AISMESSAGE_TOPIC-0, TACTICAL_ALERT_TOPIC-0)) to broker 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.551] DEBUG [o.a.k.c.FetchSessionHandler] [423]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Node 1 sent an incremental fetch response for session 419554687 with 2 response partition(s), 2 implied partition(s)
[2023-04-05 02:09:09.552] DEBUG [o.a.k.c.c.i.Fetcher] [227]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Fetch READ_UNCOMMITTED at offset 549 for partition TRACKDAO_TOPIC-0 returned fetch data (error=NONE, highWaterMark=550, lastStableOffset = 550, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=186)
[2023-04-05 02:09:09.553] DEBUG [o.a.k.c.c.i.Fetcher] [227]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Fetch READ_UNCOMMITTED at offset 196 for partition TRACKDAO_PERFORMANCE_TOPIC-0 returned fetch data (error=NONE, highWaterMark=196, lastStableOffset = 196, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
[2023-04-05 02:09:09.555] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition AISMESSAGE_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.555] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TACTICAL_ALERT_TOPIC-0 at offset 4 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.556] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_PERFORMANCE_TOPIC-0 at offset 196 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.556] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_TOPIC-0 at offset 550 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:09.557] DEBUG [o.a.k.c.FetchSessionHandler] [252]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Built incremental fetch (sessionId=419554687, epoch=3) for node 1. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s) out of 4 partition(s)
[2023-04-05 02:09:09.557] DEBUG [o.a.k.c.c.i.Fetcher] [202]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(TRACKDAO_TOPIC-0), toForget=(), implied=(AISMESSAGE_TOPIC-0, TACTICAL_ALERT_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0)) to broker 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.123] DEBUG [o.a.k.c.FetchSessionHandler] [423]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Node 1 sent an incremental fetch response for session 419554687 with 0 response partition(s), 4 implied partition(s)
[2023-04-05 02:09:10.124] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition AISMESSAGE_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.124] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TACTICAL_ALERT_TOPIC-0 at offset 4 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.125] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_PERFORMANCE_TOPIC-0 at offset 196 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.125] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_TOPIC-0 at offset 550 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.126] DEBUG [o.a.k.c.FetchSessionHandler] [252]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Built incremental fetch (sessionId=419554687, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 4 partition(s)
[2023-04-05 02:09:10.126] DEBUG [o.a.k.c.c.i.Fetcher] [202]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(AISMESSAGE_TOPIC-0, TACTICAL_ALERT_TOPIC-0, TRACKDAO_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0)) to broker 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.654] DEBUG [o.a.k.c.FetchSessionHandler] [423]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Node 1 sent an incremental fetch response for session 419554687 with 1 response partition(s), 3 implied partition(s)
[2023-04-05 02:09:10.654] DEBUG [e.f.c.t.h.s.KafkaServiceImpl] [137]: Kafka TRACKDAO_TOPIC topic message 1 sent successfully, topic-partition=TRACKDAO_TOPIC-0 offset=550 timestamp=2023-04-05T07:09:10.623Z
[2023-04-05 02:09:10.655] DEBUG [o.a.k.c.c.i.Fetcher] [227]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Fetch READ_UNCOMMITTED at offset 550 for partition TRACKDAO_TOPIC-0 returned fetch data (error=NONE, highWaterMark=551, lastStableOffset = 551, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=186)
[2023-04-05 02:09:10.657] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition AISMESSAGE_TOPIC-0 at offset 0 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.657] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TACTICAL_ALERT_TOPIC-0 at offset 4 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.658] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_PERFORMANCE_TOPIC-0 at offset 196 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.658] DEBUG [o.a.k.c.c.i.Fetcher] [882]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Added READ_UNCOMMITTED fetch request for partition TRACKDAO_TOPIC-0 at offset 551 to node 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.659] DEBUG [o.a.k.c.FetchSessionHandler] [252]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Built incremental fetch (sessionId=419554687, epoch=5) for node 1. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s) out of 4 partition(s)
[2023-04-05 02:09:10.659] DEBUG [o.a.k.c.c.i.Fetcher] [202]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(TRACKDAO_TOPIC-0), toForget=(), implied=(AISMESSAGE_TOPIC-0, TACTICAL_ALERT_TOPIC-0, TRACKDAO_PERFORMANCE_TOPIC-0)) to broker 144.126.159.51:9092 (id: 1 rack: null)
[2023-04-05 02:09:10.814] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [833]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Sending Heartbeat request to coordinator 144.126.159.51:9092 (id: 2147483646 rack: null)
[2023-04-05 02:09:10.846] DEBUG [o.a.k.c.c.i.AbstractCoordinator] [846]: [Consumer clientId=TrackHandler, groupId=trackhandler-group] Received successful Heartbeat response

Actions #11

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • % Done changed from 0 to 30

After adjusting the configurations on TrackHandler and debugging them until they were correct (using the values logged during Kafka service initialization in the broker and in the Spring Boot application), we succeeded in producing messages to the Kafka broker on lab.fernando.engineer and consuming them with TrackHandler (tested with TrackHandler running locally under the 'localh2' profile and CMSDisplay running on VStudio bound to that local TrackHandler).
We created and manoeuvred tracks and it worked, but after rebooting TrackHandler we noticed a 'replay' effect: messages were re-consumed for tracks that no longer exist in H2. This happens because the retention/deletion time for the Kafka messages has not been configured yet.
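
The replay effect above could likely be bounded by configuring broker-level retention. A minimal sketch of the extra `environment:` entries for the kafka service (values are illustrative assumptions, not tested here; the check-interval variable already appears in the working wurstmeister compose file in the description):

```
# Sketch: hypothetical retention settings for the kafka service's environment block.
# Records older than the retention window become eligible for deletion; the check
# interval controls how often the broker scans for expired log segments.
KAFKA_LOG_RETENTION_MS: 60000                  # keep messages ~1 minute (illustrative value)
KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 30000   # scan for expired segments every 30 s
```

Topic-level overrides (e.g. setting `retention.ms` per topic with the `kafka-configs` tool) would be an alternative if only some topics should expire.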

Actions #12

Updated by Fernando Jose Capeletto Neto over 1 year ago

Authentication still needs to be configured.

Preliminary docker-compose file:

version: '2.1'

services:
  zoo1:
    image: confluentinc/cp-zookeeper:7.3.2
    hostname: zoo1
    container_name: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zoo1:2888:3888

  kafka1:
    image: confluentinc/cp-kafka:7.3.2
    hostname: kafka1
    container_name: kafka1
    ports:
      - "9092:9092"
      - "29092:29092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:19092,EXTERNAL://144.126.159.51:9092,DOCKER://host.docker.internal:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: 144.126.159.51
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    depends_on:
      - zoo1
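
For the missing authentication, one option is to reuse the SASL/PLAIN setup that already worked with the wurstmeister image (see the compose file in the task description). A sketch of the additional `environment:` entries for kafka1, assuming the same kafka_server_jaas.conf is mounted into the container at /etc/kafka/kafka_server_jaas.conf via a volume, as before:

```
# Sketch only: enable SASL/PLAIN on the EXTERNAL listener, mirroring the
# working wurstmeister configuration. Assumes kafka_server_jaas.conf is
# mounted at /etc/kafka/kafka_server_jaas.conf.
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT,DOCKER:PLAINTEXT
KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
```

Clients would then need matching `sasl.jaas.config` / `security.protocol=SASL_PLAINTEXT` settings on their side.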
Actions #13

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #14

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #15

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #16

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
  • % Done changed from 30 to 40
Actions #17

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #18

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #19

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #20

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #21

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #22

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #23

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
  • % Done changed from 40 to 70
Actions #24

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #25

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Description updated (diff)
Actions #26

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Status changed from In Progress to Testing
  • % Done changed from 70 to 90
Actions #27

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Status changed from Testing to Merging
  • % Done changed from 90 to 100
Actions #28

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Status changed from Merging to Testing
Actions #29

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Status changed from Testing to Deploy
Actions #30

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Status changed from Deploy to Done
Actions #31

Updated by Fernando Jose Capeletto Neto over 1 year ago

  • Status changed from Done to Closed