What Is the Kafka Message Size Limit?

The Kafka message size limit is the maximum size of a message that can be successfully produced to a Kafka topic. By default, Apache Kafka® is optimized for small messages, but it also provides configuration options to adjust message sizes. Kafka’s default message size limit is 1 MB, a limit that helps maintain high throughput by reducing network congestion and preserving system resources.

Kafka messages and keys are stored on disk; therefore, excessively large keys increase storage demands. Large keys also hurt throughput, since they require more bandwidth and lead to performance degradation.

It’s best to keep keys within a manageable size, ideally below a few KB. Use only essential identifiers (e.g., customer ID, order ID) as keys, and avoid complex structures or large metadata as message keys. This keeps the Kafka ecosystem efficient and cost-effective in terms of storage, processing, and bandwidth usage.
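
As a quick illustration, the sketch below produces a record whose key is just a short order ID rather than a large structure. The broker address and topic name (localhost:9092, orders) are placeholders, not part of any particular setup.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompactKeyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key is a compact identifier (an order ID), not a metadata blob.
            producer.send(new ProducerRecord<>("orders", "order-84213", "{\"status\":\"CREATED\"}"));
        }
    }
}
```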

Default Kafka Message Size Limit

Apache Kafka’s default maximum message size is 1 MB, a limit designed to help brokers manage memory effectively. While Kafka can process larger batches if there’s sufficient disk space, this often reduces throughput and is generally seen as inefficient; very large messages are considered an anti-pattern in Kafka. Although you can raise this limit by adjusting the default settings, many recommend against it to preserve platform stability.

Kafka Configuration Parameters for Message Size

Kafka provides several configuration parameters that influence message size limits at both the broker and producer levels. Here’s a look at the key parameters:

  1. Broker-Level Parameters:

    • message.max.bytes: Sets the maximum size for a message that can be accepted by the broker. The default is 1 MB.

    • replica.fetch.max.bytes: Controls the maximum size of a message that a broker replica can fetch. This setting should be at least as large as message.max.bytes to prevent issues during replication.

    • max.message.bytes (topic-level / Confluent Cloud): The per-topic override of message.max.bytes, allowing an individual topic to accept larger (or smaller) messages than the broker default. In Confluent Cloud, this is the setting adjusted to change a topic’s message size limit (see the Confluent Cloud section below for cluster-type maximums).

  2. Producer-Level Parameters:

    • max.request.size: Defines the largest message size a producer can send. Setting this appropriately prevents producers from encountering message rejection errors when sending large messages.

    • buffer.memory: Controls the total amount of memory available to a producer, indirectly impacting how large messages are handled.

  3. Consumer-Level Parameters:

    • fetch.max.bytes: Determines the maximum size of data a consumer can fetch in a single request. It’s useful for consumers expecting large messages.

    • max.partition.fetch.bytes: This parameter sets the maximum size of data that can be fetched per partition, which can prevent issues when consuming large messages.

Configuring these parameters correctly is essential when handling larger-than-average message sizes in Kafka.
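
As a rough illustration of where these parameters live, the sketch below collects the producer- and consumer-side settings in one place, with the matching broker-side properties shown as comments. The 10 MB values, broker address, and group ID are illustrative placeholders, not recommendations.

```java
import java.util.Properties;

public class LargeMessageConfigs {
    // Broker-side settings (server.properties), shown here only for reference:
    //   message.max.bytes=10485760          // largest message the broker accepts
    //   replica.fetch.max.bytes=10485760    // must be >= message.max.bytes for replication

    static Properties producerProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");   // placeholder
        p.put("max.request.size", "10485760");          // largest request the producer will send (10 MB)
        p.put("buffer.memory", "67108864");             // total memory available for buffering records
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        return p;
    }

    static Properties consumerProps() {
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");   // placeholder
        c.put("group.id", "large-message-consumers");   // placeholder
        c.put("fetch.max.bytes", "10485760");           // max data returned per fetch request
        c.put("max.partition.fetch.bytes", "10485760"); // max data returned per partition per fetch
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        return c;
    }
}
```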

How Kafka Handles Large Messages

By default, Kafka is designed to handle small to moderate message sizes. When messages exceed the default limit of 1 MB, Kafka will reject them unless configurations are modified.

    • Compression: Kafka supports compression codecs (e.g., gzip, snappy, lz4, zstd) that reduce message sizes, allowing larger payloads to fit within the configured limits (see the producer sketch after this list).

  • Batched Messages: Kafka producers can send multiple small messages in a batch to optimize for throughput rather than increasing individual message sizes.

    • Error Handling: If a message exceeds the configured limits, the send fails with an error such as RecordTooLargeException, and the producing application can decide whether to shrink, split, skip, or re-route the offending message.
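
The sketch below shows a producer configured for compression and batching, with a send callback that reacts to size-related failures. The topic name, broker address, batch values, and codec choice are illustrative assumptions, not tuned recommendations.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;

public class CompressedBatchingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("compression.type", "gzip");  // alternatives: snappy, lz4, zstd
        props.put("batch.size", "65536");       // batch up to 64 KB of records per partition
        props.put("linger.ms", "20");           // wait briefly so batches can fill before sending

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key-1", "payload"), (metadata, exception) -> {
                if (exception instanceof RecordTooLargeException) {
                    // The record exceeded max.request.size or the broker's limit;
                    // decide here whether to shrink, split, or skip it.
                    System.err.println("Record too large: " + exception.getMessage());
                } else if (exception != null) {
                    System.err.println("Send failed: " + exception.getMessage());
                }
            });
        }
    }
}
```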

Increasing Kafka’s Message Size Limit

Increasing Kafka’s message size limit involves modifying the broker, producer, and consumer configuration parameters so that larger messages are accepted end to end. Here’s a step-by-step guide (a per-topic alternative is sketched after the steps):

  1. Adjust Broker Settings: Update message.max.bytes to a value greater than the default 1 MB. This should reflect the largest message size anticipated in your Kafka cluster. Ensure replica.fetch.max.bytes is equal to or larger than message.max.bytes to allow brokers to replicate larger messages.

  2. Configure Producer Settings: Set max.request.size to a size larger than the anticipated message payload. This allows producers to send larger messages without rejection.

  3. Update Consumer Settings: Adjust fetch.max.bytes and max.partition.fetch.bytes on the consumer side to handle larger messages during data consumption.

  4. Restart Kafka Services: After updating configuration parameters, restart the Kafka brokers, producers, and consumers for changes to take effect.
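
The steps above raise the limit cluster-wide. When only a few topics need larger messages, a lighter-weight option is the topic-level max.message.bytes override mentioned earlier, which can be applied with Kafka’s AdminClient. A minimal sketch, assuming a hypothetical topic named large-events and an illustrative 5 MB limit (producer max.request.size and consumer fetch settings still need to match):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RaiseTopicMessageLimit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder

        try (Admin admin = Admin.create(props)) {
            // Target the topic-level config for the hypothetical topic "large-events".
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "large-events");
            AlterConfigOp raiseLimit = new AlterConfigOp(
                new ConfigEntry("max.message.bytes", "5242880"),  // 5 MB, illustrative value
                AlterConfigOp.OpType.SET);

            admin.incrementalAlterConfigs(
                    Collections.singletonMap(topic, Collections.singletonList(raiseLimit)))
                 .all().get();  // block until the config change is applied
        }
    }
}
```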

Message Size Limits in Confluent Cloud

Confluent Cloud, a managed Kafka service, has specific message size limits to support various use cases without extensive configuration. The default limit in Confluent Cloud is typically 2 MB per message across cluster types. The maximum for Basic and Standard clusters is 8 MB, and for enterprise-level clusters it is 20 MB. For clients managing large messages in Confluent Cloud, adjusting application logic and partitioning strategy, and applying compression where possible, are recommended to optimize usage.

To increase message size limits in Confluent Cloud, users must contact Confluent support, as custom configurations require verification of compatibility with the managed environment’s setup.
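
Regardless of how the limit is raised, the consuming application still needs fetch settings sized for the largest expected message. Below is a hedged sketch of a consumer pointed at a Confluent Cloud cluster; the bootstrap endpoint, API key and secret, group ID, topic name, and the 8 MB fetch values are placeholders to replace with your own cluster’s details and limits.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CloudLargeMessageConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholders: substitute your cluster's bootstrap endpoint and API credentials.
        props.put("bootstrap.servers", "pkc-xxxxx.region.provider.confluent.cloud:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
        props.put("group.id", "large-message-consumers");  // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // Size fetches to accommodate the largest message the topic is configured to accept.
        props.put("fetch.max.bytes", "8388608");            // 8 MB, illustrative
        props.put("max.partition.fetch.bytes", "8388608");

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(java.util.Collections.singletonList("large-events"));  // placeholder topic
            // ... poll and process records as usual ...
        }
    }
}
```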

Best Practices for Handling Large Messages in Kafka

  1. Optimize Message Payloads: Keep messages small by breaking large messages into smaller chunks or compressing payloads to fit within configured limits.

  2. Use Compression: Enable compression at the producer level to reduce payload sizes. Kafka supports several compression types, including gzip, snappy, and lz4.

  3. Implement Message Splitting: For especially large messages, consider splitting them at the application layer and reassembling them on the consumer side (see the chunking sketch after this list).

  4. Monitor Resource Usage: Larger messages can increase memory usage and network load, so monitor resources to prevent bottlenecks.

  5. Leverage Topic Partitioning: Distribute large messages across multiple partitions to balance load, improving performance and processing speed.
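
Building on points 1 and 3, the sketch below splits a payload into fixed-size chunks, tags each chunk with headers so consumers can reassemble it, and keys every chunk with the same ID so the chunks stay in order on one partition. The chunk size, topic name, and payload are illustrative placeholders.

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChunkingProducer {
    static final int CHUNK_SIZE = 512 * 1024;  // 512 KB per chunk, illustrative

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        byte[] payload = new byte[3 * 1024 * 1024];         // stand-in for a 3 MB payload
        String messageId = UUID.randomUUID().toString();    // links the chunks together
        int totalChunks = (payload.length + CHUNK_SIZE - 1) / CHUNK_SIZE;

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < totalChunks; i++) {
                int from = i * CHUNK_SIZE;
                int to = Math.min(from + CHUNK_SIZE, payload.length);
                byte[] chunk = java.util.Arrays.copyOfRange(payload, from, to);

                // Same key for every chunk so all chunks land on the same partition, in order.
                ProducerRecord<String, byte[]> record =
                    new ProducerRecord<>("large-events", messageId, chunk);
                record.headers().add("message-id", messageId.getBytes(StandardCharsets.UTF_8));
                record.headers().add("chunk-index", Integer.toString(i).getBytes(StandardCharsets.UTF_8));
                record.headers().add("chunk-count", Integer.toString(totalChunks).getBytes(StandardCharsets.UTF_8));
                producer.send(record);
            }
        }
    }
}
```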

Monitoring and Troubleshooting Kafka Message Size Issues

Monitoring is essential to proactively identify and troubleshoot message size issues. Kafka provides several tools and metrics to help:

  1. Metrics:

    • RecordsPerRequestAvg: Tracks average records per request, indicating if large messages are creating bottlenecks.

    • RequestSizeAvg and RequestSizeMax: Monitor request sizes to ensure they remain within acceptable ranges (these are readable from the producer client, as sketched after this list).

    • FetchSizeAvg and FetchSizeMax: Measure consumer fetch sizes to identify potential issues when consuming large messages.

  2. Logging and Alerting:

    • Set up alerts for failed or rejected messages due to size limits.

    • Enable detailed logging to capture errors related to message size.

  3. Third-Party Monitoring Tools:

    • Use tools like Prometheus, Grafana, or Confluent Control Center to monitor message sizes and resource usage across the Kafka cluster.
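
Besides external dashboards, the client metrics listed above can also be read directly from a producer instance. The sketch below pulls request-size-avg and request-size-max (the client-metric names corresponding to RequestSizeAvg and RequestSizeMax) from a KafkaProducer; the broker address is a placeholder.

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class RequestSizeMonitor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... produce as usual, then periodically inspect the client metrics ...
            for (Map.Entry<MetricName, ? extends Metric> entry : producer.metrics().entrySet()) {
                String name = entry.getKey().name();
                if (name.equals("request-size-avg") || name.equals("request-size-max")) {
                    System.out.printf("%s = %s%n", name, entry.getValue().metricValue());
                }
            }
        }
    }
}
```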

Use Cases for Kafka Message Size Limits

  1. Log Aggregation: Logs often have varying sizes, so message size limits help ensure that Kafka can handle diverse log formats efficiently.

  2. Event Streaming for Large Payloads: For applications where events have large payloads, such as video or audio data, message size configurations allow Kafka to accommodate these data streams.

  3. IoT and Sensor Data: IoT applications might produce high-frequency, variable-size messages, and setting appropriate size limits helps Kafka manage this data flow without loss.

  4. E-Commerce Transactions: Order and transaction data may contain sizable records (e.g., JSON objects with product details), and configuration adjustments ensure reliable message processing.

  5. Real-Time Analytics: Analytics pipelines that process real-time data (e.g., user behavior or clickstream data) can produce varying message sizes. Setting message size limits ensures that Kafka can efficiently handle these high-throughput streams, optimizing latency and allowing for seamless data processing.

  6. Social Media and User-Generated Content: Platforms that capture user-generated content, like posts, images, or metadata, can have variable-sized messages depending on the content type. Configuring message limits in Kafka helps manage this data variety effectively, especially in high-traffic applications.

  7. Financial Data Streams: Financial institutions handle transaction records, market data, and updates with diverse message sizes. Configuring Kafka to accommodate specific message sizes ensures reliable and efficient data flow, especially for critical transactions requiring guaranteed delivery.

  8. Healthcare and Medical Records: Healthcare data, including patient records or diagnostic images, can vary significantly in size. Adjusting Kafka message limits allows for secure, compliant handling of larger payloads, which is essential in healthcare data streaming where data integrity is critical.

  9. Machine Learning Model Serving and Predictions: In AI-driven applications, Kafka is often used to stream both input data and predictions for machine learning models. Predictions and features can sometimes produce sizable messages; configuring Kafka with optimal size limits supports scalable model serving without compromising data throughput.

Alternatives to Handling Large Messages in Kafka

When Kafka’s message size limits or architecture are not ideal for very large payloads, there are several alternative approaches:

  1. External Storage with Kafka Pointers: Store large payloads in an external system (e.g., S3, HDFS) and use Kafka messages to carry pointers to the data. This keeps Kafka’s messaging lightweight and suited to high throughput (a sketch follows this list).

  2. Data Serialization Formats: Use efficient serialization formats like Apache Avro or Protocol Buffers to reduce payload size. These formats are compact and can handle schema evolution effectively.

  3. Hybrid Approach with Stream Processing: Use stream processing platforms like Apache Flink or Kafka Streams to pre-process and split large messages into smaller, manageable parts before ingesting into Kafka.

  4. Data Chaining: Split a large message into parts, send each part as a separate Kafka message, and link them with metadata (such as unique message IDs) for reassembly on the consumer side.

  5. Chunking Large Files: When dealing with large files, consider chunking them into smaller segments before uploading to external storage. Each chunk can then be sent as a separate message in Kafka, facilitating easier management and ensuring compliance with message size limits. This approach also enhances parallel processing during the consumption phase. 

  6. Metadata Management: Along with pointers to external storage, include comprehensive metadata in Kafka messages. This metadata can include details about the data's origin, size, content type, and timestamps. Such information aids consumers in efficiently retrieving and processing the associated large payloads.

  7. Event Sourcing: In event-driven architectures, consider adopting an event sourcing pattern where state changes are logged as a series of events. For large data payloads, only state change events can be sent through Kafka, while the complete data can reside in external storage. This keeps Kafka's message sizes minimal while still providing a full history of changes.
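
The first alternative above, external storage with Kafka pointers, is often described as a claim-check pattern. The sketch below assumes the large payload has already been uploaded elsewhere by other code and publishes only a small JSON pointer with metadata; the storage URI, topic name, and field names are illustrative assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ClaimCheckProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Assume the large payload was already uploaded to object storage elsewhere;
        // the URI below is a placeholder, not a real bucket.
        String pointer = "{"
            + "\"uri\":\"s3://example-bucket/videos/2024/clip-84213.mp4\","
            + "\"contentType\":\"video/mp4\","
            + "\"sizeBytes\":52428800,"
            + "\"uploadedAt\":\"2024-06-01T12:00:00Z\""
            + "}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The Kafka message stays small: just the pointer plus metadata for the consumer.
            producer.send(new ProducerRecord<>("media-events", "clip-84213", pointer));
        }
    }
}
```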

Conclusion

Understanding and managing Kafka message size limits is crucial for building a scalable and efficient Kafka-based streaming architecture. By adjusting configuration parameters, applying best practices, and monitoring message flows, organizations can optimize Kafka for various data processing needs, including handling larger payloads.

To get started with Kafka or to learn more about advanced Kafka configuration techniques, sign up for free and explore our resources on Kafka streaming solutions and data management.