
The Data Streaming Landscape 2026

Data streaming is now a core software category in modern data architecture. It powers real-time use cases like fraud prevention, personalization, supply chain optimization, and AI automation. What started with open-source Apache Kafka and Flink has grown into a critical layer for business operations. The 2026 Data Streaming Landscape maps the most relevant data streaming platforms and how they are evolving. These platforms connect systems, process data in motion, enforce governance, and support mission-critical workloads at scale. Kafka is the standard protocol, but protocol support alone is not enough: enterprises need full feature compatibility, 24/7 support, and expert guidance on security, resilience, and cloud strategy.
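To make the point about Kafka as the standard protocol more concrete, here is a minimal Java producer sketch. The same client code works against any Kafka-protocol-compatible endpoint; the broker address, topic name, and payload below are illustrative placeholders rather than details from the post.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Minimal sketch: the same producer code talks to any Kafka-protocol-compatible service.
// Broker address, topic, and payload are placeholders, not details from the post.
public class PaymentEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092"); // any Kafka-compatible endpoint
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all"); // durability matters for fraud prevention and similar critical use cases

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("payments", "customer-42", "{\"amount\": 99.95, \"currency\": \"EUR\"}");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Published to %s-%d@%d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```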

In-Place Kafka Cluster Upgrades from ZooKeeper to KRaft are Not Possible with Amazon MSK

The Apache Kafka community introduced KIP-500 to remove ZooKeeper and replace it with KRaft, a new built-in consensus layer. This was a major milestone: it simplified operations, improved scalability, and reduced complexity. Importantly, Kafka supports smooth, zero-downtime migrations from ZooKeeper to KRaft, even for large, business-critical clusters. But NOT with Amazon MSK…

Mainframe Integration with Data Streaming: Architecture, Business Value, Real-World Success

The mainframe is evolving, not fading. With cloud-native features, AI acceleration, and quantum-safe encryption, platforms like IBM z16 and z17 remain central to critical industries. But modern demands require real-time data access and system agility. Apache Kafka and Flink make this possible by streaming data bidirectionally between mainframe systems such as DB2, IMS, and MQ and cloud analytics platforms. This enables event-driven architectures without disrupting core systems. The post outlines three proven strategies (offloading, integration, and replacement) and includes real-world examples across industries. The result: lower costs, faster innovation, and smarter use of legacy systems.
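As a rough sketch of the offloading strategy mentioned above, the following Java consumer assumes a CDC tool already publishes DB2 change events to a Kafka topic, so downstream consumers never touch the mainframe directly; the topic name, consumer group, and broker address are hypothetical.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hedged sketch of the offloading pattern: a CDC tool publishes DB2 changes to Kafka,
// and cloud-side consumers read the change feed instead of querying the mainframe.
// Topic name, group id, and broker address are illustrative assumptions.
public class Db2ChangeFeedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092");
        props.put("group.id", "cloud-analytics-offload");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("mainframe.db2.orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> change : records) {
                    // In a real pipeline this would feed a cloud analytics platform;
                    // here the change event is simply printed.
                    System.out.printf("key=%s change=%s%n", change.key(), change.value());
                }
            }
        }
    }
}
```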

Data Streaming Meets the SAP Ecosystem and Databricks – Insights from SAP Sapphire Madrid

SAP Sapphire 2025 in Madrid brought together global SAP users, partners, and technology leaders to showcase the future of enterprise data strategy. Key themes included SAP’s Business Data Cloud (BDC) vision, Joule for Agentic AI, and the deepening SAP-Databricks partnership. A major topic throughout the event was the increasing need for real-time integration across SAP and non-SAP systems—highlighting the critical role of event-driven architectures and data streaming platforms like Confluent. This blog shares insights on how data streaming enhances SAP ecosystems, supports AI initiatives, and enables industry-specific use cases across transactional and analytical domains.

Databricks and Confluent in the World of Enterprise Software (with SAP as Example)

Enterprise data lives in complex ecosystems—SAP, Oracle, Salesforce, ServiceNow, IBM Mainframes, and more. This article explores how Confluent and Databricks integrate with SAP to bridge operational and analytical workloads in real time. It outlines architectural patterns, trade-offs, and use cases like supply chain optimization, predictive maintenance, and financial reporting, showing how modern data streaming unlocks agility, reuse, and AI-readiness across even the most SAP-centric environments.

Shift Left Architecture for AI and Analytics with Confluent and Databricks

Confluent and Databricks enable a modern data architecture that unifies real-time streaming and lakehouse analytics. By combining shift-left principles with the structured layers of the Medallion Architecture, teams can improve data quality, reduce pipeline complexity, and accelerate insights for both operational and analytical workloads. Technologies like Apache Kafka, Flink, and Delta Lake form the backbone of scalable, AI-ready pipelines across cloud and hybrid environments.
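A minimal sketch of the shift-left idea with Kafka Streams, assuming a simple quality rule is applied in the stream so that only validated events reach the downstream lakehouse layers; the topic names and the validation rule are illustrative assumptions, not taken from the post.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

// Hedged sketch of shift-left: validate and route events in the stream, before they land
// in bronze/silver tables, instead of cleaning them in later batch layers.
// Topic names ("orders.raw", "orders.validated", "orders.dead-letter") are assumptions.
public class ShiftLeftValidation {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "shift-left-validation");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> raw = builder.stream("orders.raw");

        // A trivial rule stands in for real schema and business validation.
        KStream<String, String> valid =
            raw.filter((key, value) -> value != null && value.contains("\"orderId\""));
        KStream<String, String> invalid =
            raw.filterNot((key, value) -> value != null && value.contains("\"orderId\""));

        valid.to("orders.validated");     // clean events for downstream consumers and lakehouse tables
        invalid.to("orders.dead-letter"); // quarantined records for inspection

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The same pattern could be expressed with Flink; the design point is that validation happens once, close to the source, rather than in every downstream batch job.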

Confluent Data Streaming Platform vs. Databricks Data Intelligence Platform for Data Integration and Processing

This blog explores how Confluent and Databricks address data integration and processing in modern architectures. Confluent provides real-time, event-driven pipelines connecting operational systems, APIs, and batch sources with consistent, governed data flows. Databricks specializes in large-scale batch processing, data enrichment, and AI model development. Together, they offer a unified approach that bridges operational and analytical workloads. Key topics include ingestion patterns, the role of Tableflow, the shift-left architecture for earlier data validation, and real-world examples like Uniper’s energy trading platform powered by Confluent and Databricks.

The Past, Present, and Future of Confluent (The Kafka Company) and Databricks (The Spark Company)

Confluent and Databricks have redefined modern data architectures, growing beyond their Kafka and Spark roots. Confluent drives real-time operational workloads; Databricks powers analytical and AI-driven applications. As operational and analytical boundaries blur, native integrations like Tableflow and Delta Lake unify streaming and batch processing across hybrid and multi-cloud environments. This blog explores the platforms’ evolution and how, together, they enable enterprises to build scalable, data-driven architectures. The Michelin success story shows how combining real-time data and AI unlocks innovation and resilience.

Shift Left Architecture at Siemens: Real-Time Innovation in Manufacturing and Logistics with Data Streaming

Industrial enterprises face increasing pressure to move faster, automate more, and adapt to constant change—without compromising reliability. Siemens Digital Industries addresses this challenge by combining real-time data streaming, modular design, and Shift Left principles to modernize manufacturing and logistics. This blog outlines how technologies like Apache Kafka, Apache Flink, and Confluent Cloud support scalable, event-driven architectures. A real-world example from Siemens’ Modular Intralogistics Platform illustrates how this approach improves data quality, system responsiveness, and operational agility.

The Importance of Focus: Why Software Vendors Should Specialize Instead of Doing Everything (Example: Data Streaming)

As real-time technologies reshape IT architectures, software vendors face a critical decision: specialize deeply in one domain or build a broad, general-purpose stack. This blog examines why a focused approach—particularly in the world of data streaming—delivers greater innovation, scalability, and reliability. It compares leading platforms and strategies, from specialized providers like Confluent to generalist cloud ecosystems, and highlights the operational risks of fragmented tools. With data streaming emerging as its own software category, enterprises need clarity, consistency, and deep expertise. In this post, we argue that specialization—not breadth—is what powers mission-critical, real-time applications at global scale.