
How does a Kafka client get notifications?

Kafka clients (both producers and consumers) interact with the Kafka brokers through client-initiated requests rather than broker-initiated pushes: the client always opens the connection and issues the request. Here’s how it works for both producers and consumers:

Kafka Producers

  1. Sending Messages: Producers send messages to Kafka topics. They use the send method to asynchronously send records to the specified topic. For efficiency, the producer batches records and ships a batch to the broker when it is full (batch.size) or when a linger timeout expires (linger.ms).
  2. Acknowledgments: Depending on the acks configuration, the producer waits for acknowledgments from the broker. This is not polling in the strict sense: the broker’s response arrives asynchronously and is surfaced through the Future returned by send or through an optional callback.
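The batching behavior described above can be sketched without a real broker. The following is a minimal, illustrative simulation (MiniProducer, batch_size, and linger_s are invented names that loosely mirror Kafka’s batch.size and linger.ms; none of this is the actual Kafka client API):

```python
import time

class MiniProducer:
    """Toy producer: buffers records and 'sends' a batch when the
    buffer is full or a linger timeout expires."""

    def __init__(self, batch_size=3, linger_s=5.0):
        self.batch_size = batch_size
        self.linger_s = linger_s
        self.batch = []
        self.batch_started = None
        self.sent_batches = []          # stands in for network round trips

    def send(self, record):
        if not self.batch:
            self.batch_started = time.monotonic()
        self.batch.append(record)
        self._maybe_flush()

    def _maybe_flush(self):
        full = len(self.batch) >= self.batch_size
        expired = (bool(self.batch) and
                   time.monotonic() - self.batch_started >= self.linger_s)
        if full or expired:
            self.flush()

    def flush(self):
        if self.batch:
            # one batch == one request to the broker
            self.sent_batches.append(list(self.batch))
            self.batch.clear()

p = MiniProducer(batch_size=3)
for i in range(7):
    p.send(f"record-{i}")
p.flush()                               # drain the partial final batch
print([len(b) for b in p.sent_batches]) # [3, 3, 1]
```

Seven individual send calls turn into three broker requests, which is the point of batching: fewer, larger requests for higher throughput.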

Kafka Consumers

  1. Polling for Messages: Consumers use the poll method to fetch records from Kafka. This method requests records from the Kafka brokers, and the consumer blocks until records are available or a specified timeout occurs.
  2. Fetching Records: When the poll method is called, the consumer contacts the brokers to fetch records from the specified topic partitions.
  3. Heartbeat Mechanism: Consumers also send heartbeats to the Kafka brokers to maintain their membership in the consumer group and prevent their sessions from expiring. These heartbeats are sent automatically by a background thread; the application’s main obligation is to keep calling poll often enough that it does not exceed max.poll.interval.ms.
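The poll-with-timeout behavior can be sketched as a broker-free simulation, using a queue.Queue to stand in for a topic partition (the function name poll and the max_records parameter are chosen to mirror the real consumer API, but this is only an illustration):

```python
import queue

partition = queue.Queue()          # stands in for one topic partition

def poll(timeout_s, max_records=500):
    """Block up to timeout_s for the first record, then drain up to
    max_records that are already buffered — roughly what Consumer.poll does."""
    records = []
    try:
        records.append(partition.get(timeout=timeout_s))
        while len(records) < max_records:
            records.append(partition.get_nowait())
    except queue.Empty:
        pass                        # timeout or buffer drained
    return records

for msg in ("a", "b", "c"):
    partition.put(msg)

print(poll(timeout_s=0.1))   # ['a', 'b', 'c'] — a batch in one call
print(poll(timeout_s=0.1))   # [] — blocked briefly, then timed out empty
```

Note the two outcomes: when data is buffered, poll returns a batch immediately; when nothing is available, it blocks until the timeout and returns empty, and the application simply loops and calls poll again.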

Key Concepts

  • Batching: Both producers and consumers use batching to improve efficiency. Producers batch records before sending them, and consumers fetch batches of records in a single poll call.
  • Offsets: Consumers keep track of their position in each topic partition by maintaining offsets. After processing a batch of records, the consumer commits the offsets, which can be done automatically or manually.
  • Asynchronous Operations: Many of the interactions between clients and the Kafka server are asynchronous, especially on the producer side, where sending records and receiving acknowledgments can happen independently of the main application logic.
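The offset bookkeeping described above can be shown in a few lines. This sketch separates the consumer’s fetch position from the last committed offset, which is what makes replay-after-crash work (the variable names position and committed are illustrative, not Kafka API):

```python
log = ["m0", "m1", "m2", "m3", "m4"]   # one partition's record log
position = 0          # where the next fetch starts (in-memory)
committed = 0         # durable: where a restarted consumer resumes

def poll(max_records):
    """Fetch the next batch of (offset, record) pairs and advance position."""
    global position
    batch = [(position + i, r)
             for i, r in enumerate(log[position:position + max_records])]
    position += len(batch)
    return batch

def commit():
    """Mark everything fetched so far as consumed."""
    global committed
    committed = position

print(poll(3))        # [(0, 'm0'), (1, 'm1'), (2, 'm2')]
commit()              # committed is now 3
# After a crash, the consumer restarts from the committed offset:
position = committed
print(poll(2))        # [(3, 'm3'), (4, 'm4')] — no records lost or repeated
```

If the consumer had crashed *before* commit, the restart would reset position to the older committed value and records would be redelivered — the at-least-once behavior Kafka consumers exhibit by default.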

Example Workflow for a Consumer

  1. Polling: The consumer calls poll to request records from the broker.
  2. Processing: The consumer processes the records fetched from Kafka.
  3. Committing Offsets: After processing, the consumer commits the offsets to mark the records as consumed.

Example Workflow for a Producer

  1. Sending: The producer sends a record using the send method.
  2. Batching: Records are batched together and sent to the broker.
  3. Acknowledgment: The broker sends an acknowledgment back to the producer based on the acks configuration.
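The send-then-acknowledge flow above can be mimicked with a thread pool standing in for the producer’s background I/O thread. This is only a sketch of the asynchronous-ack pattern (append_to_log, send, and on_ack are invented for the illustration); the real client similarly returns a Future from send and accepts an optional callback:

```python
from concurrent.futures import ThreadPoolExecutor

broker = ThreadPoolExecutor(max_workers=1)   # stands in for the I/O thread
log = []                                     # the 'broker's' partition log
acks = []                                    # offsets seen by the callback

def append_to_log(record):
    log.append(record)
    return len(log) - 1                      # the ack carries the offset

def send(record, on_ack=None):
    """Asynchronous send: returns a Future; optionally invokes a
    callback with the acked offset once the 'broker' has written it."""
    fut = broker.submit(append_to_log, record)
    if on_ack:
        fut.add_done_callback(lambda f: on_ack(f.result()))
    return fut

f = send("hello", on_ack=acks.append)
print(f.result())            # 0 — blocks only if you ask for the result
broker.shutdown(wait=True)   # flush: wait for all outstanding sends
print(acks)                  # [0] — the callback fired independently
```

The caller can either block on the Future (synchronous style) or rely solely on the callback and keep producing — which is why sending and acknowledgment can proceed independently of the main application logic.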

In summary, both Kafka producers and consumers interact with the Kafka server on a pull, client-initiated basis. Producers send batches of messages and receive acknowledgments asynchronously, while consumers repeatedly poll the brokers to fetch new messages.

Is that polling a good design? Are there any alternatives?

Polling in Kafka, while effective and efficient for many use cases, has its own set of advantages and drawbacks. Let’s examine the design of polling and some potential alternatives.

Advantages of Polling

  1. Control over Consumption Rate: Polling allows consumers to control how many records they fetch and process at a time, which can be useful for managing load and ensuring the application can handle the incoming data rate.
  2. Batch Processing: Consumers can process messages in batches, which can improve efficiency and performance.
  3. Backpressure Handling: Polling allows consumers to implement backpressure, slowing down consumption when the processing load is high.
  4. Scalability: Kafka’s polling model scales well, as each consumer can independently poll messages at its own pace, and consumer groups can be used to distribute load.

Disadvantages of Polling

  1. Latency: Naive polling introduces latency, as consumers periodically check for new messages rather than being notified the moment they arrive. (Kafka softens this: the broker can hold a fetch request open until fetch.min.bytes of data is available or fetch.max.wait.ms elapses, which is effectively long polling.)
  2. Complexity: Handling poll intervals, timeouts, and ensuring the consumer is correctly managing offsets and processing can add complexity to the application.
  3. Resource Utilization: Constant polling can lead to inefficient resource utilization, especially if the polling intervals are too short or the broker is queried frequently with no new data available.

Alternatives to Polling

  1. Push Model (Event-Driven):
    • Description: In a push model, the server (broker) pushes messages to the clients as soon as they become available. This is common in systems like message queues (e.g., RabbitMQ) where the broker sends messages to consumers without the consumers having to poll.
    • Advantages: Lower latency since consumers receive messages as they arrive. Potentially more efficient resource utilization.
    • Disadvantages: Requires more complex flow control and backpressure mechanisms to prevent consumers from being overwhelmed. Can be harder to scale compared to the pull model.
  2. Long Polling:
    • Description: Long polling is a variant of polling where the server holds the request open until new data is available or a timeout occurs. When new data arrives, the server immediately responds to the held request.
    • Advantages: Reduces the number of empty responses and minimizes latency compared to simple polling. More efficient resource utilization.
    • Disadvantages: More complex to implement on both client and server sides. Still involves some level of polling.
  3. WebSockets or Server-Sent Events (SSE):
    • Description: WebSockets provide a full-duplex communication channel over a single TCP connection, allowing servers to send messages to clients in real-time. SSE allows servers to push updates to clients over an HTTP connection.
    • Advantages: Real-time updates with low latency and efficient use of network resources. Suitable for applications needing instant updates.
    • Disadvantages: Requires additional infrastructure and handling for maintaining persistent connections. More complex than traditional HTTP-based polling.

Kafka’s Choice of Polling

Kafka’s design prioritizes high throughput, fault tolerance, and scalability. Polling fits well with these goals because:

  • High Throughput: Polling allows batching of messages, which is crucial for achieving high throughput.
  • Scalability: Polling decouples the rate of message production from consumption, enabling better scalability.
  • Fault Tolerance: Consumers manage their own state and offsets, contributing to Kafka’s robust fault tolerance.


While polling has its downsides, it is a well-suited design choice for Kafka given its requirements for high throughput, scalability, and fault tolerance. Alternatives like the push model, long polling, or WebSockets might be better suited for scenarios where low latency and real-time updates are more critical than the other factors Kafka optimizes for. However, each alternative comes with its own trade-offs in terms of complexity, resource utilization, and scalability.

Can you explain more about long polling? How does it work? How does it differ from normal polling?

Long polling is an alternative to the traditional polling mechanism, designed to reduce latency and improve resource utilization when clients need to receive updates from a server. Let’s delve into how long polling works and how it differs from normal polling.

How Long Polling Works

  1. Client Request: The client sends a request to the server asking for updates. Unlike normal polling, the server doesn’t immediately respond if no new data is available.
  2. Server Wait: The server holds the request open until new data becomes available or a specified timeout is reached.
  3. Server Response: Once new data is available, the server immediately responds to the client with the data.
  4. Client Handling: Upon receiving the data, the client processes it and typically sends another long poll request to the server to wait for more updates.
  5. Timeout Handling: If no new data is available within the timeout period, the server responds to the client with an empty response or a message indicating no new data. The client then reissues the request.
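The hold-until-data-or-timeout behavior in the steps above can be demonstrated in-process with a queue and a timer. This is a minimal sketch (long_poll and the 0.2-second publisher delay are invented for the demo):

```python
import threading
import queue
import time

updates = queue.Queue()

def long_poll(timeout_s):
    """Server side: hold the 'request' until data arrives or the timeout."""
    try:
        return {"data": updates.get(timeout=timeout_s)}  # respond the moment data exists
    except queue.Empty:
        return {"data": None}                            # timeout: 'no new data'

# A publisher thread makes data available 0.2s after the client starts waiting.
threading.Timer(0.2, updates.put, args=("price: 101",)).start()

t0 = time.monotonic()
reply = long_poll(timeout_s=5.0)
waited = time.monotonic() - t0
print(reply)                 # {'data': 'price: 101'}
print(waited < 5.0)          # True — returned when data arrived, not at the timeout
```

Two things to notice: the client got the update roughly 0.2 seconds in, far sooner than the 5-second timeout, and a call made when no publisher is active would simply return the "no new data" response after the timeout, prompting the client to reissue the request.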

Normal Polling vs. Long Polling

Normal Polling

  • Request-Response Cycle: The client periodically sends requests to the server at regular intervals, asking for new data.
  • Immediate Response: The server responds immediately, either with new data or an indication that no new data is available.
  • Resource Utilization: Frequent requests can lead to inefficient use of server and network resources, especially if there is little or no new data to send.
  • Latency: There is inherent latency as clients wait for the next polling interval to check for new data.

Long Polling

  • Request-Response Cycle: The client sends a request to the server and waits. The server holds the request until new data is available or a timeout occurs.
  • Delayed Response: The server only responds when new data is available, reducing the number of empty responses.
  • Resource Utilization: More efficient as the server is not constantly handling requests with no new data to send. The number of requests is reduced compared to normal polling.
  • Latency: Lower latency for receiving new data since the server responds immediately when new data is available, rather than waiting for the next polling interval.

Example Workflow for Long Polling

  1. Initial Request: The client sends a request to the server.

    Client -> Server: GET /updates
  2. Server Holds Request: The server holds the request open, checking for new data.
  3. New Data Arrives: When new data is available, the server responds to the client.

    Server -> Client: HTTP/1.1 200 OK
    Content: {new data}
  4. Client Processes Data: The client processes the received data and immediately sends another request to wait for more updates.

    Client -> Server: GET /updates
  5. Timeout Scenario: If no new data is available within the timeout period, the server responds with an empty or “no new data” message.

    Server -> Client: HTTP/1.1 200 OK
    Content: {no new data}
  6. Client Reissues Request: The client processes the response and reissues the request.

    Client -> Server: GET /updates
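The GET /updates exchange above can be made concrete with Python’s standard-library HTTP server. This is a self-contained sketch, not production code: the /updates path, the 2-second server-side hold, and the JSON shape {"data": ...} are all choices made for the demo.

```python
import json
import queue
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

updates = queue.Queue()   # data the server will hand to waiting clients

class LongPollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hold the request open until data arrives or a 2-second timeout.
        try:
            body = json.dumps({"data": updates.get(timeout=2.0)}).encode()
        except queue.Empty:
            body = json.dumps({"data": None}).encode()   # 'no new data'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), LongPollHandler)   # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/updates"

# A publisher makes data available 0.3s after the client starts waiting.
threading.Timer(0.3, updates.put, args=("new data",)).start()

with urllib.request.urlopen(url, timeout=5) as resp:     # blocks ~0.3s, not 2s
    reply = json.loads(resp.read())
print(reply)   # {'data': 'new data'}

server.shutdown()
server.server_close()
```

In a real client, the code after processing the reply would immediately issue the next GET /updates, so the server almost always has a parked request ready to answer the instant data arrives.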

Advantages and Use Cases of Long Polling

  • Reduced Server Load: By reducing the number of requests that return no new data, long polling can significantly lower the load on the server and network.
  • Lower Latency: Clients receive updates as soon as they are available, reducing the time they wait for new data compared to normal polling.
  • Simplicity: While more complex than normal polling, long polling is still relatively simple to implement compared to WebSockets or other real-time communication mechanisms.
  • Compatibility: Long polling works with existing HTTP/1.1 infrastructure, making it easy to integrate into systems that may not support WebSockets or other protocols.

Disadvantages of Long Polling

  • Connection Overhead: Holding connections open can increase the number of concurrent connections the server must manage, which can be a limitation for some servers.
  • Complexity: More complex to implement than simple polling, requiring careful handling of timeouts and reissues of requests.
  • Latency for Timeout: If the timeout is reached without new data, there can still be some latency as the client reissues the request.


Long polling provides a more efficient and lower-latency alternative to normal polling by holding client requests open until new data is available. It reduces unnecessary network traffic and server load while providing timely updates to clients. However, it comes with increased complexity and potential overhead in managing long-lived connections.
