Kafka With Java, Spring, and Docker — Asynchronous Communication Between Microservices | by Pedro Luiz | Apr, 2022

Photo by Jefferson Santos on Unsplash
  1. What we're going to build
  2. Kafka in a nutshell
  3. Topics
  4. Partitions
  5. Setting up the projects
  6. Docker environment for Kafka
  7. Producer Microservice
  8. Consumer Microservice
  9. Advanced features
  10. Conclusion

In this article, we'll discuss how Kafka works as a message broker and how to use it to communicate between microservices, by creating two Spring microservices.

The idea is to create a Producer Microservice that receives food orders to be created and passes them along to the Consumer Microservice through Kafka, to be persisted in a database.

A Kafka cluster is highly scalable and fault-tolerant, meaning that if any of its servers fails, the other servers will take over its work to ensure continuous operation without any data loss.

An event records the fact that something happened, carrying a message that can be virtually anything, for example a string, an array, or a JSON object. When you read or write data to Kafka, you do so in the form of these events.

Producers are those that publish (write) events to Kafka, and consumers are those that subscribe to (read and process) these events.

Events are organized and durably stored in topics. A topic is similar to a folder, and the events are the files in that folder. Topics are multi-producer and multi-subscriber, meaning a topic can have zero, one, or many producers and consumers.

Events can be read as many times as needed; unlike traditional messaging systems, events are not deleted after consumption. Instead, you can define how long Kafka should retain these events.

Topics are partitioned, meaning a topic is spread over a number of buckets. When a new event is published to a topic, it is actually appended to one of the topic's partitions. Events with the same event key are written to the same partition. Kafka guarantees that any consumer of a given topic partition will always read that partition's events in exactly the same order as they were written.
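
To make that concrete, here's a minimal sketch using the plain Kafka Java client (the topic, key, and payloads are made up for this example): both records share a key, so they land on the same partition and keep their relative order for any consumer.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedSendExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Both records use the key "customer-42", so Kafka hashes them
            // to the same partition, preserving their relative order.
            producer.send(new ProducerRecord<>("t.food.order", "customer-42", "order created"));
            producer.send(new ProducerRecord<>("t.food.order", "customer-42", "order updated"));
        }
    }
}
```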

To make your data fault-tolerant and highly available, every topic can be replicated, even across regions or data centers, so that there are always multiple brokers with a copy of the data just in case things go wrong (they will).

Go to start.spring.io and create the projects with the following dependencies:

Producer Microservice: at minimum, Spring Web and Spring for Apache Kafka (judging by the code used later in this article).

Consumer Microservice: at minimum, Spring Web, Spring for Apache Kafka, Spring Data JPA, and a database driver (H2 is assumed in the examples below).

At the root of one of the projects, it doesn't matter which one (or both), create a docker-compose.yml file containing the configuration needed to run Kafka, Kafdrop, and ZooKeeper in Docker containers.
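
A minimal sketch of what that file could look like, assuming the Confluent images and Kafdrop's default port (adjust image versions and listeners to your setup):

```yaml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.0.1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Internal listener for containers, external one for the host machine.
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  kafdrop:
    image: obsidiandynamics/kafdrop
    depends_on:
      - kafka
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
```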

From the root folder of one of the projects, run docker-compose up in the terminal. You can then access Kafdrop, a web interface for managing Kafka, at http://localhost:9000.

There you can see your topics, create and delete them, and much more.

Architecture:

Steps

  • Create configuration beans
  • Create a food order topic
  • Create the FoodOrder controller, service, and producer
  • Convert orders into messages in string format to send to the broker

Environment variables and the port for our API to run on:
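
A sketch of the producer's application.yml, assuming the port, topic name, and custom topic.name property carried through the rest of the examples:

```yaml
server:
  port: 8080

# Custom property referenced by the Config and Producer classes below.
topic:
  name: t.food.order

spring:
  kafka:
    bootstrap-servers: localhost:9092
```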

Config: responsible for creating the KafkaTemplate bean, which will be used to send the messages, and for creating the food order topic.
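
A sketch of that configuration class, assuming string-serialized keys and values and a single-partition topic:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class Config {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${topic.name}")
    private String orderTopic;

    // Factory defining how the producer serializes keys and values.
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    // The template the producer uses to publish messages.
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    // Declares the food order topic so it is created on startup if missing.
    @Bean
    public NewTopic foodOrderTopic() {
        return TopicBuilder.name(orderTopic).partitions(1).replicas(1).build();
    }
}
```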

Here's the model class for FoodOrder:
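
A minimal version; the fields are assumptions, since only the class name is given:

```java
// A plain POJO representing an incoming food order.
public class FoodOrder {

    private String item;
    private Double price;

    public FoodOrder() {
    }

    public FoodOrder(String item, Double price) {
        this.item = item;
        this.price = price;
    }

    public String getItem() { return item; }
    public void setItem(String item) { this.item = item; }
    public Double getPrice() { return price; }
    public void setPrice(Double price) { this.price = price; }
}
```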

FoodOrderController: responsible for receiving a food order request and passing it along to the service layer.
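
A sketch of the controller, assuming an /order endpoint:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/order")
public class FoodOrderController {

    private final FoodOrderService foodOrderService;

    public FoodOrderController(FoodOrderService foodOrderService) {
        this.foodOrderService = foodOrderService;
    }

    // Receives a food order request and hands it to the service layer.
    @PostMapping
    public String createFoodOrder(@RequestBody FoodOrder foodOrder) throws JsonProcessingException {
        return foodOrderService.createFoodOrder(foodOrder);
    }
}
```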

FoodOrderService: responsible for receiving the food order and passing it along to the producer.
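
A sketch of the service:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import org.springframework.stereotype.Service;

@Service
public class FoodOrderService {

    private final Producer producer;

    public FoodOrderService(Producer producer) {
        this.producer = producer;
    }

    // Simply forwards the order to the Kafka producer.
    public String createFoodOrder(FoodOrder foodOrder) throws JsonProcessingException {
        return producer.sendMessage(foodOrder);
    }
}
```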

Producer: responsible for receiving the food order and publishing it as a message to Kafka.
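
A sketch of the producer, assuming Jackson's ObjectMapper (auto-configured by Spring Boot) for the JSON conversion:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class Producer {

    @Value("${topic.name}")
    private String orderTopic;

    private final ObjectMapper objectMapper;
    private final KafkaTemplate<String, String> kafkaTemplate;

    public Producer(ObjectMapper objectMapper, KafkaTemplate<String, String> kafkaTemplate) {
        this.objectMapper = objectMapper;
        this.kafkaTemplate = kafkaTemplate;
    }

    public String sendMessage(FoodOrder foodOrder) throws JsonProcessingException {
        // Convert the FoodOrder object into a JSON string ...
        String orderAsMessage = objectMapper.writeValueAsString(foodOrder);
        // ... and publish it to the food order topic.
        kafkaTemplate.send(orderTopic, orderAsMessage);
        return "food order produced";
    }
}
```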

In sendMessage we first convert the FoodOrder object into a string in JSON format, so it can be received as a string in the consumer microservice. We then actually send the message, passing the topic in which to publish (referenced through the topic.name environment variable) and the order as the message.

When running the application, we should be able to see the topic created in Kafdrop, and when sending a food order, we should see in the logs that the message was sent.

Now, if under the Topics section in Kafdrop we access the t.food.order topic we created, we should be able to see the message.

Architecture:

Steps

  • Create a configuration for beans and the group-id
  • Set up database access
  • Create the FoodOrder consumer and service
  • Create a FoodOrderRepository

We'll start by configuring the port for our API to run on, the topic to listen to, a group-id for our consumer, and the database configuration.
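
A sketch of the consumer's application.yml, assuming an in-memory H2 database and the same topic name as on the producer side:

```yaml
server:
  port: 8081

topic:
  name: t.food.order

spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: default
  datasource:
    url: jdbc:h2:mem:fooddb
    driver-class-name: org.h2.Driver
  jpa:
    hibernate:
      ddl-auto: update
```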

Config: responsible for configuring the ModelMapper bean, a library used for mapping one object to another (when using the DTO pattern, for example), which we'll make use of here.
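
A sketch of that configuration class:

```java
import org.modelmapper.ModelMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class Config {

    // ModelMapper maps between DTOs and entities, as used in the service below.
    @Bean
    public ModelMapper modelMapper() {
        return new ModelMapper();
    }
}
```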

Here are the model classes:
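
Sketches of the entity and the DTO; the fields mirror the producer-side assumptions:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// The entity that gets persisted, with a database-generated ID.
@Entity
public class FoodOrder {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String item;
    private Double price;

    // constructors, getters, and setters omitted for brevity
}
```

```java
// The DTO consumed from Kafka: no ID, since the database generates it.
public class FoodOrderDto {

    private String item;
    private Double price;

    // constructors, getters, and setters omitted for brevity
}
```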

Consumer: responsible for listening to the food order topic and consuming any message published to it. We'll convert the messages we listen to into a FoodOrderDto object, which doesn't contain everything related to the entity that will be persisted, such as the ID.
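
A sketch of the consumer, wiring the topic and group-id from the properties above:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class Consumer {

    private final ObjectMapper objectMapper;
    private final FoodOrderService foodOrderService;

    public Consumer(ObjectMapper objectMapper, FoodOrderService foodOrderService) {
        this.objectMapper = objectMapper;
        this.foodOrderService = foodOrderService;
    }

    // Invoked whenever a new message lands on the food order topic.
    @KafkaListener(topics = "${topic.name}", groupId = "${spring.kafka.consumer.group-id}")
    public void listen(String message) throws JsonProcessingException {
        FoodOrderDto foodOrderDto = objectMapper.readValue(message, FoodOrderDto.class);
        foodOrderService.persistFoodOrder(foodOrderDto);
    }
}
```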

FoodOrderService: responsible for mapping the consumed order into a FoodOrder object and passing it along to the persistence layer to be persisted.
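
A sketch of the service, using ModelMapper to go from DTO to entity:

```java
import org.modelmapper.ModelMapper;
import org.springframework.stereotype.Service;

@Service
public class FoodOrderService {

    private final ModelMapper modelMapper;
    private final FoodOrderRepository foodOrderRepository;

    public FoodOrderService(ModelMapper modelMapper, FoodOrderRepository foodOrderRepository) {
        this.modelMapper = modelMapper;
        this.foodOrderRepository = foodOrderRepository;
    }

    // Maps the DTO onto the entity and hands it to the persistence layer.
    public void persistFoodOrder(FoodOrderDto foodOrderDto) {
        FoodOrder foodOrder = modelMapper.map(foodOrderDto, FoodOrder.class);
        foodOrderRepository.save(foodOrder);
    }
}
```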

The FoodOrderRepository is a plain Spring Data JPA interface; a minimal version (assuming a Long ID, as in the entity sketch above) looks like this:
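
```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

// Spring Data generates the CRUD implementation at runtime.
@Repository
public interface FoodOrderRepository extends JpaRepository<FoodOrder, Long> {
}
```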

Now, just by running the Consumer Microservice, the messages already published will be consumed from the order topic.

An important detail to notice here: if we go to Kafdrop and check the message we just consumed, it will still be there. That's something that wouldn't happen with RabbitMQ, for example.

We can send scheduled messages by making use of Spring's scheduling support.

Enable it by adding the @EnableScheduling annotation to the configuration class in the Producer Microservice.

The Scheduler is responsible for sending the messages at a certain rate; we'll be sending them at a fixed rate of 1000 milliseconds.
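
A sketch of the scheduler (the sample order payload is invented):

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Requires @EnableScheduling on the configuration class
// for @Scheduled methods to be picked up.
@Component
public class Scheduler {

    private final Producer producer;

    public Scheduler(Producer producer) {
        this.producer = producer;
    }

    // Publishes a sample order every 1000 milliseconds.
    @Scheduled(fixedRate = 1000)
    public void sendScheduledMessage() throws JsonProcessingException {
        producer.sendMessage(new FoodOrder("scheduled pizza", 1.0));
    }
}
```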

The topic will be created automatically, but we could define the bean as we did previously.

The output is a new order message being produced and logged every second.

The main idea here was to give an introduction to using Kafka with Java and Spring, so you can implement this solution inside a much more complex system.

If this article helped you in any way, consider giving it a clap, following me, and sharing it.

The project on GitHub can be found here.

References

  1. Apache Kafka Documentation
  2. Kafka: The Definitive Guide, O’Reilly
  3. Apache Kafka, Matthias J. Sax
