When (Not) to Use Queues for Kafka?

Apache Kafka has long been the backbone for real-time data streaming at scale. But for years, it lacked native support for traditional queuing patterns that power backend processing and system integration in many enterprises. That limitation has now been addressed. With the general availability of Queues for Kafka (QfK) in Apache Kafka 4.2, developers and architects can use a single platform for both stream processing and message queuing. This brings new flexibility without sacrificing performance, scalability, or governance.

This blog explains what a message queue is, how Kafka differs from traditional MQ systems, and how QfK expands Kafka’s role in modern integration architectures. Whether you are modernizing legacy middleware, scaling backend services, or building real-time data products, this guide helps you understand when to use QfK and when to choose a different approach.

Join the data streaming community and stay informed about new blog posts by subscribing to my newsletter, and follow me on LinkedIn or X (formerly Twitter) to stay in touch. And make sure to download my free book about data streaming use cases, including various Kafka patterns and scenarios for IT modernization, data sharing, microservices, and more.

1. What Is a Message Queue?

Message queues (MQ) are the foundation of many integration and backend systems. They enable asynchronous communication between systems and services using a point-to-point model.

Key characteristics of MQ:

  • One producer, one consumer per message
  • Strong delivery guarantees (at-least-once or exactly-once)
  • Destructive consumption removes messages from the queue after delivery to a consumer
  • Ideal for load distribution across worker pools or transactional backend services
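
To make these characteristics concrete, here is a minimal sketch of point-to-point, destructive consumption using the classic JMS 1.1 API. ActiveMQ, the queue name, and the endpoint are illustrative assumptions; any JMS-compliant broker works the same way:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class OrderWorker {
    public static void main(String[] args) throws JMSException {
        // ActiveMQ is used purely as an example provider (hypothetical endpoint).
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("ORDERS"));

            // receive() blocks until a message arrives. With AUTO_ACKNOWLEDGE, the broker
            // removes the message from the queue on delivery: destructive consumption,
            // one consumer per message.
            TextMessage message = (TextMessage) consumer.receive();
            System.out.println("Processed: " + message.getText());
        } finally {
            connection.close();
        }
    }
}
```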

Overview of Traditional MQ Solutions (Cloud vs. Self-managed)

Each traditional messaging system offers different trade-offs in terms of operational complexity, cloud support, and use cases.

  • IBM MQ: Enterprise-grade, deeply embedded in industries like banking. Strong support for XA transactions and mainframes. Fully self-managed or cloud service.

  • TIBCO EMS: Common in large enterprise SOA (service-oriented architecture) environments. Focuses on integration and transaction handling. Self-managed or hybrid setups. Often tied to legacy architectures.

  • RabbitMQ: Lightweight and widely adopted in microservice environments. Limited enterprise features. Self-managed by default, but also available as a managed service via cloud providers.

  • Amazon SQS: Fully managed, simple queuing for AWS environments. No infrastructure management required. Ideal for basic task queues, but lacks pub/sub and advanced enterprise features.

  • Solace: Supports multiple protocols such as JMS, MQTT, and AMQP. Available as a cloud PaaS or a self-managed deployment. In the cloud offering, customers deploy a dedicated event broker, while version upgrades and scaling to higher service tiers remain a customer responsibility.

MQTT Brokers Are Not Part of This Discussion

MQTT brokers like Eclipse Mosquitto, HiveMQ, and EMQX are also called message brokers. However, their role is very different from systems like RabbitMQ or Kafka.

MQTT is a lightweight pub/sub protocol built for:

  • Poor or unstable network conditions
  • Tens of thousands of device connections
  • Low-power sensors and embedded devices (common in IoT)

These brokers are built for last-mile delivery in edge and OT environments. They are intentionally lightweight and do not provide:

  • Event log for replayability or historical access, which limits support for patterns like microservices, data mesh, and auditability
  • Built-in data processing, such as stateless filtering and routing, stateful stream processing for pattern detection, or AI-based inference for real-time decision-making
  • Native integration with enterprise systems and cloud platforms, which is often required for business workflows
  • Compliance, security, and governance features, which are critical for IT-managed workloads and regulated environments

That’s why MQTT solutions and data streaming platforms are complementary.

MQTT handles telemetry collection at the edge. Kafka-powered event streaming becomes the central nervous system in the backend, powering integration, processing, and analytics across the IT landscape.

For real-world examples and architecture patterns, see the Kafka and MQTT blog series.

What Is Apache Kafka — and How Is It Different from a Message Queue?

Kafka is a distributed platform for building real-time data pipelines and streaming applications. Unlike traditional queues, Kafka is built around a publish/subscribe model and durable logs.

A key difference between Kafka and traditional message queues lies in how they handle message consumption. In most MQ systems, messages are consumed destructively. Once a consumer processes a message, it is removed from the queue. Kafka uses a durable event log where messages remain available even after consumption. Consumers move along partitions of a Kafka Topic by tracking their own offset, which determines where to resume reading. This allows multiple consumers to independently read the same data at different times and provides a stronger form of decoupling across systems and teams.

How Kafka differs from MQ:

  • Messages can be consumed by many consumers
  • Data is stored for a defined retention period, not deleted on consumption
  • Horizontal scalability through partitioning and replication
  • Supports stream processing through Kafka Streams and Apache Flink
  • Provides long-term storage, data replay, and event sourcing capabilities
  • Uses a pull-based consumption model, giving consumers full control over how and when they read messages
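
A standard consumer makes the offset-based, pull model visible. This is a minimal sketch using the regular Java consumer API; broker address, topic, and group name are placeholders:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class StreamReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "analytics-service"); // each group tracks its own offsets
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // Pull-based: the consumer decides when to read. Records stay in the
                // log after consumption, so other groups can read the same data.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```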

Kafka was not originally designed for queuing. That changed with Queues for Kafka (QfK), which became generally available in the open source Apache Kafka 4.2 release.

Vendors providing Kafka products and cloud services: Confluent, WarpStream, Amazon MSK, Cloudera, Aiven. For more details and vendors, look at the Data Streaming Landscape 2026.

Queues for Kafka adds a new capability to the Kafka ecosystem: a single platform for both queuing and streaming.

Introducing Queues for Kafka (QfK)

Kafka traditionally handled stream processing and publish/subscribe workloads. With QfK, Kafka can now natively support queue-like semantics, opening it up to classic integration scenarios that relied on MQs.

What is “Queues for Kafka”?

Queues for Kafka (QfK) is an enhancement to Apache Kafka, standardized via Kafka Improvement Proposal KIP-932. It introduces Share Groups: a new type of group (an alternative to traditional Consumer Groups) where each message is delivered to exactly one consumer, independent of partitions.

With Share Groups, Kafka Topics can act as task queues while retaining the Kafka ecosystem, durability, and scalability.

Kafka Topics support a combination of queueing and publish/subscribe patterns. Multiple consumer groups and share groups can coexist on the same topic, allowing different consumers to process the same data in parallel or independently, depending on the use case.
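
In the Java client, a share group is consumed via the KafkaShareConsumer introduced by KIP-932. The following is a minimal sketch based on that API; broker address, topic, and group name are placeholders:

```java
import org.apache.kafka.clients.consumer.AcknowledgeType;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class QueueWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-workers"); // interpreted as the share group name
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // Each record is delivered to exactly one member of the share group,
                // independent of how the topic's partitions are assigned.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Handled: " + record.value());
                    consumer.acknowledge(record, AcknowledgeType.ACCEPT);
                }
                consumer.commitSync(); // flush acknowledgements to the broker
            }
        }
    }
}
```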

Open Source Framework and Product Enhancements

Queues for Kafka (QfK) is part of open source Apache Kafka, adding native queue semantics to the core platform.

As a strong example of building on top of QfK, Confluent Platform and Confluent Cloud enhance the open-source foundation with enterprise features, including:

  • Centralized governance and security
  • RBAC, data lineage, and audit logging
  • Share Group metrics for visibility and monitoring
  • Integration with Stream Catalog and Control Center for platform-wide management

This gives teams the best of both worlds: native queuing capabilities within Kafka, and the enterprise-grade tooling required to operate it at scale in production.

Confluent Cloud provides QfK as a fully managed, serverless service. It offers consumption-based pricing, automatic scaling, and built-in operational SLAs. This lets teams focus on solving business problems instead of running infrastructure.

When to Use Queues for Kafka?

Queues for Kafka (QfK) brings native queuing capabilities to Kafka. This functionality enables classic messaging patterns within a modern data streaming platform. It is especially well-suited for operational workloads that require reliable, distributed task handling without the complexity of partition-based scaling:

  • Task queues / Worker pool processing: Ideal for backend jobs where messages are distributed across many workers. One message, one consumer.
  • Parallel processing with dynamic scaling: Add or remove consumers without rebalancing partitions. This removes a long-standing Kafka limitation: multiple consumers can now process the same partition in parallel, whereas a traditional Consumer Group assigns each partition to at most one consumer at a time.
  • Event bus for lightweight point-to-point delivery: When you don’t need fan-out or replay, but still want Kafka’s durability and ecosystem.

It’s worth noting that the use case must allow messages to be consumed independently, meaning ordering guarantees are not required.
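
For worker pools, per-record acknowledgements determine what happens on failure. This sketch (again based on the KIP-932 API, with hypothetical business logic and a contrived exception mapping) shows the three outcomes:

```java
import org.apache.kafka.clients.consumer.AcknowledgeType;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;

public class TaskHandler {
    // Hypothetical stand-in for the actual business logic.
    static void runTask(ConsumerRecord<String, String> record) {
        System.out.println("Running task: " + record.value());
    }

    static void handle(KafkaShareConsumer<String, String> consumer,
                       ConsumerRecord<String, String> record) {
        try {
            runTask(record);
            consumer.acknowledge(record, AcknowledgeType.ACCEPT);  // success: done with this record
        } catch (IllegalStateException transientFailure) {
            consumer.acknowledge(record, AcknowledgeType.RELEASE); // retry: redeliver to another worker
        } catch (Exception permanentFailure) {
            consumer.acknowledge(record, AcknowledgeType.REJECT);  // poison message: do not redeliver
        }
    }
}
```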

When NOT to Use Queues for Kafka?

Queues for Kafka addresses a wide range of operational queueing needs, especially around task distribution and parallel processing. The initial GA release focuses on these high-impact use cases. Other workloads may still be better handled by different Kafka patterns or external tools, depending on the specific requirements.

The roadmap for QfK is active and evolving, with many of today’s limitations already targeted for future enhancements.

With the current Kafka release 4.2, don’t use QfK when:

  • Strict message ordering is required: QfK doesn’t support ordering guarantees. Use Kafka’s Consumer Groups where order is critical.
  • Exactly-once semantics (EOS) are required: QfK doesn’t support transactions or EOS. Use Kafka’s transaction APIs for that (see the sketch after this list) or a dedicated message broker like IBM MQ with its two-phase commit (XA) transaction capabilities.
  • Request/reply communication is needed: Kafka can support this with some design effort, but MQ tools do it more naturally and efficiently. With Kafka, you can either use its REST/HTTP interfaces or build the communication yourself with correlation ID and reply-topic metadata (similar to what MQ systems do under the hood).
  • You need to use legacy protocols and APIs (JMS, AMQP, MQTT): QfK doesn’t support these directly. Use Kafka Connect connectors to bridge systems if necessary, or choose a message broker that supports the required protocols.
  • The use case is analytical (windowing, enrichment, transformation): QfK is for operational workloads. Use Kafka Streams, Apache Flink, or any other engine like Databricks with Apache Spark for analytics.
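
For comparison, this is the minimal shape of Kafka's own exactly-once pattern using the transactional producer API; topic names and the transactional.id are placeholders:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ExactlyOnceWriter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", "payments-writer-1"); // enables idempotence + transactions
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            try {
                producer.send(new ProducerRecord<>("debits", "acct-1", "-100"));
                producer.send(new ProducerRecord<>("credits", "acct-2", "+100"));
                producer.commitTransaction(); // both records become visible atomically
            } catch (Exception e) {
                producer.abortTransaction(); // read_committed consumers never see either record
                throw new RuntimeException(e);
            }
        }
    }
}
```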

In short, QfK is a strong fit for operational workloads like task distribution and background processing. For use cases that require complex transactions, advanced analytics, or tight system coupling, other patterns within the broader Kafka ecosystem may be a better fit. And if not, choose the right tool for the job – whether it’s a traditional message queue, an API gateway, or an integration platform.

Kafka as the Next-Generation Middleware for Integration Scenarios

Kafka has evolved into a cloud-native integration backbone, supporting not only real-time streaming but also a broad set of middleware-style use cases across modern and legacy environments.

A complete data streaming platform powered by Kafka and Flink enables:

  • Real-time event broker, combining high-throughput messaging with persistent storage for true decoupling
  • Unified stream and batch processing through Apache Flink’s single API for both workloads
  • Connectivity to any system, including message queues, databases with change data capture, SaaS platforms like SAP, Salesforce, and ServiceNow, as well as ESBs, iPaaS, ETL tools, or custom applications using Kafka connectors or HTTP interfaces
  • APIs and webhooks, powered by the Kafka REST Proxy and connector ecosystem
  • Queuing workloads, supported natively through Queues for Kafka
  • Integration into data lakes and AI platforms, enabled by native Apache Iceberg and Delta Lake support for managing large-scale analytical datasets

This is all backed by enterprise features such as governance, security, compliance, and operational SLAs to make the platform ready for production-grade integration at scale.

A key enabler for modern integration architectures is the ability to expose data as a product. With Schema Registry and data contracts, teams can apply schema validation, routing policies, encryption, dead letter queues (DLQ), and access controls. This lays the foundation for scalable microservices, secure data sharing, and domain-driven data mesh architectures. Event-driven data products provide consistency, traceability, and quality across all producers and consumers.
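
As a sketch of how a data contract is enforced at produce time, the following uses Confluent's Avro serializer with Schema Registry. The registry URL, topic, and schema are illustrative assumptions:

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ContractProducer {
    public static void main(String[] args) {
        // The schema is the data contract; the serializer validates every record
        // against the version registered in Schema Registry before producing.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}");

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", KafkaAvroSerializer.class.getName());
        props.put("schema.registry.url", "http://localhost:8081"); // hypothetical registry endpoint
        props.put("auto.register.schemas", "false"); // fail fast if the contract is not registered

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            GenericRecord order = new GenericData.Record(schema);
            order.put("id", "o-42");
            producer.send(new ProducerRecord<>("orders", order)); // rejected if it violates the contract
        }
    }
}
```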

Kafka’s Role in the Integration Ecosystem

Kafka is becoming the central nervous system of enterprise IT. It takes on many responsibilities that were traditionally handled by different middleware tools and services, unifying them into one scalable platform.

Instead of relying on fragile point-to-point connections between systems, organizations can use Kafka to create an event-driven architecture. Applications publish and subscribe to data streams, enabling decoupled communication that is easier to scale, evolve, and monitor. Back in 2018, I explored how Apache Kafka differs from traditional middleware like an Enterprise Service Bus (ESB). The article is still accurate today, even if you think more about cloud-based Integration Platform as a Service (iPaaS) offerings.

Kafka does NOT aim to replace every integration tool. Many remain complementary for specific use cases.

For example:

  • API gateways still handle authentication, rate limiting, and developer portal management
  • iPaaS platforms are useful for complex SaaS integrations, domain-specific protocols like EDIFACT, and no-code or low-code development
  • Workflow and orchestration (BPM) engines remain essential for long-running process orchestration and human-in-the-loop scenarios

Kafka integrates with all of these. It connects to ESBs, iPaaS platforms, and traditional MQ systems, making it possible to modernize integration without disruption. Over time, many workloads can be shifted onto Kafka to reduce complexity, unify operations, and build a foundation for event-driven business. This is a perfect scenario for the Strangler Fig design pattern: migrating workloads off legacy middleware incrementally over time.

Kafka is not just another tool in the stack. It is the backbone of a modern, cloud-native, real-time integration strategy.

Kafka with QfK as the Foundation for Modern Integration Architecture

Queues for Kafka expands Kafka’s role in the enterprise. It adds native support for queuing patterns, in addition to Kafka’s proven capabilities for real-time streaming, event processing, and analytics. This makes Kafka suitable for an even wider range of integration scenarios that were traditionally handled by separate messaging systems.

For architects and platform teams, this enables a simpler and more unified stack:

  • Use Kafka for stream processing, event routing, and task queue workloads
  • Consolidate infrastructure by standardizing on one platform for both streaming and messaging
  • Reuse built-in governance, security, and observability across all data flows

This leads to measurable benefits for enterprise architecture:

  • Lower operational complexity and improved cost efficiency by consolidating legacy MQ systems where appropriate
  • Greater agility with dynamic scaling and self-service integration patterns
  • A consistent platform designed for modern integration needs, across cloud, hybrid, and on-prem environments

Kafka is no longer just a streaming engine. It is the backbone of cloud-native, event-driven, and data-centric architectures. Queues for Kafka plays a strategic role in this shift, enabling teams to simplify the middleware landscape and move toward a unified platform for all critical data flows.

