Kafka Optimization

Apache Kafka was created by a team of engineers at LinkedIn and later open-sourced, so today anyone who needs an excellent message broker can use Apache Kafka and its vast ecosystem of tools. Organizations mainly use Kafka to communicate asynchronously between services. Its user interface and ease of use are what make Apache Kafka popular, but that same ease means we sometimes make mistakes while using it.

Today we will talk about the most common of those mistakes.

  1. Default Settings:

Many users who are just starting out with Kafka simply keep the default settings. But note that the defaults define a single partition and a replication factor of 1. That is not acceptable for any sort of production usage, for two reasons: first, with a replication factor of 1 any broker failure can cause data loss; second, a single partition limits scalability and prevents you from spreading the load across a wider cluster.

To move away from the defaults, first disable auto.create.topics.enable at the broker level. This forces every topic to be created explicitly and deliberately. Alternatively, you can adjust the broker-level settings default.replication.factor and num.partitions so that auto-created topics get saner values.
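Here is a minimal sketch of explicit topic creation with the Java AdminClient; the broker address, topic name, partition count, and replication factor are placeholder values for illustration, not recommendations:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Explicitly choose the partition count and replication factor
            // instead of relying on broker defaults.
            NewTopic orders = new NewTopic("orders", 6, (short) 3);
            admin.createTopics(List.of(orders)).all().get();
        }
    }
}
```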


  2. Making Do With Only One Partition:

Companies using Kafka often make the mistake of sticking with a single partition. Partitions define the maximum number of consumers from a single consumer group that can read a topic in parallel: with two partitions, two consumers from the same group can consume messages at the same time. If you create too few partitions, cluster utilization becomes nonuniform, and both the maximum consuming speed and the producing speed are capped by that low partition count. On the other hand, if you create far more partitions than you need, producers must buffer batches of messages for each partition separately, which costs memory and reduces batching efficiency. The best way to deal with this problem is to run performance tests that simulate your expected load, so you get a realistic idea of how many partitions to create.
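A rough sizing heuristic that often accompanies those load tests: measure the per-partition throughput you can achieve on the producing and consuming sides, then take the larger of the two ratios against your target throughput. A minimal sketch, where all the throughput numbers are hypothetical measurements from such a test:

```java
public class PartitionEstimate {
    // Estimate a partition count from measured per-partition throughput.
    static int estimatePartitions(double targetMbPerSec,
                                  double producerMbPerSecPerPartition,
                                  double consumerMbPerSecPerPartition) {
        double forProducing = targetMbPerSec / producerMbPerSecPerPartition;
        double forConsuming = targetMbPerSec / consumerMbPerSecPerPartition;
        return (int) Math.ceil(Math.max(forProducing, forConsuming));
    }

    public static void main(String[] args) {
        // Hypothetical numbers from a load test: target 100 MB/s,
        // 10 MB/s per partition producing, 20 MB/s per partition consuming.
        System.out.println(estimatePartitions(100.0, 10.0, 20.0)); // -> 10
    }
}
```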

  3. Default Publisher Configurations:

The Kafka producer actually needs only three configuration keys to work: bootstrap.servers, key.serializer, and value.serializer. But as you have probably noticed, that is rarely enough. Kafka exposes many settings that influence message ordering, performance, and the probability of data loss. Also, under the hood the producer groups messages into batches and sends them asynchronously; a message has only actually been sent once the send() callback gets called or the returned Future completes.
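A minimal sketch of a producer configured more deliberately than the defaults; the broker address and topic name are placeholders, and the chosen values illustrate durability-oriented choices rather than universal recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaferProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Durability over latency: wait for all in-sync replicas and retry on failure.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous: the message is only confirmed sent
            // once this callback fires (or the returned Future completes).
            producer.send(new ProducerRecord<>("orders", "key", "value"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.printf("sent to %s-%d @ offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                    }
                });
        }
    }
}
```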


  4. Basic Java Consumer:

The Kafka Java client is actually very powerful, but its consumer application programming interface (API) is not very convenient to use. For example, a KafkaConsumer instance may only be used from a single thread, and you are required to write an 'infinite' loop that polls the broker for messages. The key thing to watch out for is how the timeouts and heartbeats work. Heartbeats are handled by an additional background thread that tells the broker the consumer is still alive. Separately, by default the consumer must call poll() at least once every 5 minutes (max.poll.interval.ms); if it does not, it is considered failed and leaves the group, triggering a rebalance, so keep this in mind.
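A minimal sketch of that poll loop; the broker address, group id, and topic name are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BasicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // KafkaConsumer is NOT thread-safe: use it from one thread only.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // Each poll() must happen within max.poll.interval.ms
                // (5 minutes by default) or the consumer leaves the group.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d @ %d: %s%n",
                        record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```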

  5. Semantics:

Apache Kafka supports exactly-once delivery/processing semantics. On the producer side, you enable it by setting enable.idempotence, which in turn requires specific values for max.in.flight.requests.per.connection, retries, and acks. On the consumer side, the process is more difficult: if the downstream database fails mid-processing, you can usually only recover correctly by making the writes idempotent, for example with a data upsert. Another solution is Kafka Streams, which has an explicit processing.guarantee setting for exactly-once processing.
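A minimal sketch of both sides as configuration builders; the broker address and application id are placeholders, and the producer values shown are one compatible combination rather than the only valid one:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfigs {
    // Producer side: idempotent writes.
    static Properties idempotentProducerProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        // Idempotence constrains these settings; the values below are compatible.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        return props;
    }

    // Streams side: exactly-once processing via processing.guarantee.
    static Properties exactlyOnceStreamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Older clients used "exactly_once"; newer versions offer "exactly_once_v2".
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        return props;
    }
}
```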

  6. Lack of Monitoring:

Apache Kafka is a distributed system, and in architectures like these a lot of things can go wrong very fast. So you should always monitor the system and act on every alert. Kafka's built-in metrics, exposed through JMX and the client APIs, are a good way to see what exactly is going well or badly. They are quite easy to consume, so have your engineering team wire them into your monitoring stack and protect your data: a data disaster can be prevented if you see the metrics in time.
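As one small example, every Kafka client exposes its metrics programmatically. A minimal sketch that dumps a consumer's lag-related metrics, assuming an already-configured KafkaConsumer like the one in the poll-loop example above:

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class MetricsDump {
    // Print lag-related metrics from a running consumer.
    static void dumpLagMetrics(KafkaConsumer<?, ?> consumer) {
        for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
            MetricName name = entry.getKey();
            // "records-lag-max" reports how far the consumer trails
            // the end of its assigned partitions.
            if (name.name().contains("records-lag")) {
                System.out.printf("%s = %s%n", name.name(), entry.getValue().metricValue());
            }
        }
    }
}
```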

Conclusion

Apache Kafka is an excellent tool, but a handful of common mistakes can bring the entire architecture down and, as a result, cause a data disaster. So apply the solutions to the most common mistakes laid out in this article, and start using Apache Kafka the right way today.
