
Showing posts from March, 2022

Improving performance of Kafka Producer

  As we know, Kafka uses an asynchronous publish/subscribe model. When our producer calls send(), the result returned is a Future, which offers methods to check the status of the record in flight. Once a batch is ready, the producer sends it to the broker; the broker receives the batch, writes it, and responds that the transaction is complete. For latency and throughput, two parameters are particularly important for Kafka performance tuning. Batch Size: instead of the number of messages, batch.size measures batch size in total bytes, i.e. it controls how many bytes of data to collect before sending messages to the Kafka broker. So, without exceeding available memory, set this as high as possible; the default value is 16384. However, if we increase the size of our buffer, it might never get full. On the basis of other triggers, such as linger time in milliseconds, the producer sends the informa...
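A minimal sketch of how these two knobs are set on a Java producer (the broker address, topic name, and the chosen values are placeholders for illustration, not recommendations):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // batch.size is measured in bytes (default 16384), not in messages.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);
        // linger.ms gives a not-yet-full batch extra time to fill before it is sent.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous: it returns a Future<RecordMetadata> and
            // also accepts a callback invoked once the broker acknowledges the write.
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("partition %d, offset %d%n",
                                    metadata.partition(), metadata.offset());
                        }
                    });
        }
    }
}

Raising batch.size together with linger.ms trades a little latency for fewer, larger requests and better throughput.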

Decorator design pattern in Java

  The intent of the Decorator Design Pattern is to attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to sub-classing for extending functionality. The Decorator Pattern is used to extend the functionality of an object dynamically without having to change the original class source or use inheritance. This is accomplished by creating an object wrapper, referred to as a Decorator, around the actual object. The Decorator object is designed to have the same interface as the underlying object. This allows a client object to interact with the Decorator object in exactly the same manner as it would with the underlying actual object. The Decorator object contains a reference to the actual object. The Decorator object receives all requests (calls) from a client and, in turn, forwards these calls to the underlying object. The Decorator object adds some additional functionality before or after forwarding requests to the underlying object. This ...
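A compact sketch of that structure (Notifier, EmailNotifier, and SmsDecorator are hypothetical names chosen for illustration):

// Component interface shared by the real object and its decorators.
interface Notifier {
    void send(String message);
}

// Concrete component: the actual object being wrapped.
class EmailNotifier implements Notifier {
    @Override
    public void send(String message) {
        System.out.println("Email: " + message);
    }
}

// Decorator: same interface, holds a reference to the wrapped object,
// and adds behavior before or after forwarding each call.
class SmsDecorator implements Notifier {
    private final Notifier wrapped;

    SmsDecorator(Notifier wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void send(String message) {
        wrapped.send(message);                 // forward to the underlying object
        System.out.println("SMS: " + message); // additional responsibility
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        // The client uses the decorator exactly as it would the real object.
        Notifier notifier = new SmsDecorator(new EmailNotifier());
        notifier.send("Build finished");
    }
}

Because every decorator shares the component interface, wrappers can be stacked at runtime to combine responsibilities without touching EmailNotifier.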

CRUD Goes Even Easier With JPABuddy

Create, Read, Update and Delete are the four basic operations of persistent storage, known collectively by the acronym CRUD. These operations can be implemented in JPA. JPA is a standard for ORM: an API layer that maps Java objects to database tables. ORM stands for Object-Relational Mapping; it converts data between the incompatible type systems of object-oriented programming languages and relational databases. JPA Buddy is an advanced plugin for IntelliJ IDEA intended to simplify and accelerate everything related to JPA and the surrounding mainstream technology, a faithful coding assistant for projects built on JPA. In fact, you can develop an entire CRUD application or a simple microservice while spending nearly zero time writing boilerplate code. The video demonstrates the features of JPA Buddy by creating a simple CRUD application from ...
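For context, the kind of boilerplate such a plugin helps generate looks roughly like this sketch (a hypothetical Customer entity and repository, assuming a javax.persistence / Spring Data JPA stack):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;

// A minimal JPA entity: maps the Customer class to a database table.
@Entity
public class Customer {
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // getters and setters omitted for brevity
}

// Spring Data JPA derives the CRUD operations (save, findById,
// findAll, deleteById, ...) from this interface declaration alone.
interface CustomerRepository extends JpaRepository<Customer, Long> {
}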

Multi-threaded Apache Kafka Consumer

Why do we need a multi-threaded consumer model?  Suppose we implement a notification module that allows users to subscribe to notifications from other users and applications. Our module reads the messages those users and applications write to a Kafka cluster. In this case, we can have all their notifications written to a Kafka topic, and our module will create a consumer to subscribe to that topic.   Everything seems fine at the beginning. However, what happens if the number of notifications produced by other applications and users grows quickly and exceeds the rate at which our module can process them?   All the messages/notifications that haven't been processed by our module remain in the Kafka topic. However, things get more dangerous when the backlog grows too large. Some messages will be lost once the retention policy is met (note that Kafka retention can be time-based, partition size-based, or key-based via log compaction). And more im...
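One common way to decouple processing speed from the poll loop is to dispatch records to a worker pool. A minimal sketch of that pattern (broker address, group id, and topic name are placeholders, and offset-commit coordination is deliberately left out for brevity):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NotificationConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "notification-module");     // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        ExecutorService workers = Executors.newFixedThreadPool(4);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("notifications")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // Hand each record to the pool so slow processing never
                    // blocks the single polling thread. Caution: with auto-commit
                    // enabled this can acknowledge records before they finish,
                    // so real code must coordinate offset commits itself.
                    workers.submit(() -> process(record));
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("notification at offset %d: %s%n", record.offset(), record.value());
    }
}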