The Data Streaming Landscape 2025

Data streaming is a new software category. It has grown from niche adoption to a fundamental part of modern data architecture, built on open source technologies like Apache Kafka and Flink. With real-time data processing transforming industries, the ecosystem of tools, platforms, and cloud services has evolved significantly. This blog post explores the data streaming landscape of 2025, analyzing key players, trends, and market dynamics shaping this space.

Top Trends for Data Streaming with Apache Kafka and Flink in 2025

Apache Kafka and Apache Flink are leading open-source frameworks for data streaming that serve as the foundation for cloud services, enabling organizations to unlock the potential of real-time data. Over recent years, trends have shifted from batch-based data processing to real-time analytics, scalable cloud-native architectures, and improved data governance powered by these technologies. Looking ahead to 2025, the data streaming ecosystem is set to undergo even greater changes. Here are the top trends shaping the future of data streaming for businesses.

Data Streaming in Healthcare and Pharma: Use Cases and Insights from Cardinal Health

This blog delves into Cardinal Health’s journey, exploring how its event-driven architecture and data streaming power use cases like supply chain optimization and medical device and equipment management. By integrating Apache Kafka with platforms like Apigee, Dell Boomi, and SAP, Cardinal Health sets a benchmark for IT modernization and innovation in the healthcare and pharma sectors.

A New Era in Dynamic Pricing: Real-Time Data Streaming with Apache Kafka and Flink

In the age of digitization, pricing is no longer fixed or manual. Instead, companies increasingly use dynamic pricing: a flexible model that adjusts prices based on real-time market changes, giving them the tools to respond instantly to demand, competitor prices, and customer behavior. This blog post explores the fundamentals of dynamic pricing, its link to data streaming, and real-world examples across industries such as retail, logistics, gaming, and energy.
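
To make the link between dynamic pricing and data streaming concrete, here is a minimal Kafka Streams sketch of a demand-driven pricing rule. The topic names (demand-events, price-updates), the base price, and the surcharge factor are illustrative assumptions, not details from the post:

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.*;

public class DynamicPricingSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Demand signals keyed by product ID, e.g. one record per view or order.
        KStream<String, String> demand = builder.stream("demand-events",
                Consumed.with(Serdes.String(), Serdes.String()));

        demand.groupByKey()
              // Count demand per product over a 1-minute tumbling window.
              .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
              .count()
              .toStream()
              // Naive illustrative rule: base price plus a demand-driven surcharge.
              .map((windowedKey, count) -> KeyValue.pair(
                      windowedKey.key(),
                      String.valueOf(10.0 + 0.05 * count)))
              .to("price-updates", Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dynamic-pricing-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        new KafkaStreams(builder.build(), props).start();
    }
}
```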

Fraud Prevention in Under 60 Seconds with Apache Kafka: How A Bank in Thailand is Leading the Charge

In the fast-paced world of finance, the ability to prevent fraud in real time is not just a competitive advantage; it is a necessity. For Krungsri (Bank of Ayudhya), one of the largest banks in Thailand with vast assets, loans, and deposits, fraud prevention has taken center stage. This blog post explores how the bank leverages data streaming with Apache Kafka to detect and block fraudulent transactions in under 60 seconds, ensuring the safety and trust of its customers.
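
The post covers Krungsri's actual pipeline; purely as an illustration of the underlying pattern, a Kafka Streams topology can count transactions per account within a short window and raise an alert when a threshold is exceeded. The topic names and threshold here are invented:

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.*;
import org.apache.kafka.streams.kstream.*;

public class FraudAlertSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Transactions keyed by account ID; payloads simplified to String.
        builder.stream("transactions", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               // Count transactions per account in a tumbling 60-second window.
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofSeconds(60)))
               .count()
               .toStream()
               // Arbitrary rule for illustration: more than 5 transactions per minute.
               .filter((windowedAccount, count) -> count > 5)
               .map((windowedAccount, count) -> KeyValue.pair(
                       windowedAccount.key(),
                       "suspicious: " + count + " tx in 60s"))
               .to("fraud-alerts", Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fraud-alert-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        new KafkaStreams(builder.build(), props).start();
    }
}
```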

When to Choose Apache Kafka vs. Azure Event Hubs vs. Confluent Cloud for a Microsoft Fabric Lakehouse

Choosing between Apache Kafka, Azure Event Hubs, and Confluent Cloud for data streaming is critical when building a Microsoft Fabric Lakehouse. Each option caters to different needs, and this blog post will guide you in selecting the right data streaming solution for your use case.
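
One fact that shapes this comparison: Azure Event Hubs exposes a Kafka-compatible endpoint on port 9093, so a standard Kafka client can usually be repointed through configuration alone. A minimal producer sketch, with the namespace, topic, and connection string left as placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventHubsKafkaProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Event Hubs exposes its Kafka-compatible endpoint on port 9093.
        props.put("bootstrap.servers", "<namespace>.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // The username is the literal string "$ConnectionString"; the password
        // is the Event Hubs connection string for the namespace.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" password=\"<connection-string>\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // An Event Hub behaves like a Kafka topic from the client's view.
            producer.send(new ProducerRecord<>("<my-event-hub>", "key", "value"));
        }
    }
}
```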

Apache Iceberg – The Open Table Format for Lakehouse AND Data Streaming

An open table format framework like Apache Iceberg is essential in enterprise architecture to ensure reliable data management and sharing, seamless schema evolution, efficient handling of large-scale datasets, and cost-efficient storage. This blog post explores market trends, adoption of table format frameworks like Iceberg, Hudi, Paimon, Delta Lake, and XTable, and the product strategy of leading data platform vendors such as Snowflake, Databricks (Apache Spark), Confluent (Apache Kafka / Flink), Amazon Athena, and Google BigQuery.
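
To show what an open table format means in practice, the sketch below creates an Iceberg table through its Java API with a file-system (Hadoop) catalog. The warehouse path, database, table, and schema are made up for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.types.Types;

public class IcebergTableSketch {
    public static void main(String[] args) {
        // File-system catalog pointing at a warehouse path (local for simplicity).
        HadoopCatalog catalog = new HadoopCatalog(new Configuration(), "/tmp/warehouse");

        // Schema and partitioning live in table metadata; this is what enables
        // schema evolution and lets multiple engines share the same table.
        Schema schema = new Schema(
                Types.NestedField.required(1, "event_id", Types.LongType.get()),
                Types.NestedField.required(2, "ts", Types.TimestampType.withZone()),
                Types.NestedField.optional(3, "payload", Types.StringType.get()));

        PartitionSpec spec = PartitionSpec.builderFor(schema).day("ts").build();

        Table table = catalog.createTable(
                TableIdentifier.of("events_db", "clickstream"), schema, spec);
        System.out.println("Created " + table.name());
    }
}
```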

The Shift Left Architecture – From Batch and Lakehouse to Real-Time Data Products with Data Streaming

Data integration is a hard challenge in every enterprise. Batch processing and Reverse ETL are common practices in a data warehouse, data lake, or lakehouse; data inconsistency, high compute cost, and stale information are the consequences. This blog post introduces a new design pattern to solve these problems: the Shift Left Architecture enables a data mesh with real-time data products that unify transactional and analytical workloads with Apache Kafka, Flink, and Iceberg. Consistent information is handled with stream processing or ingested into Snowflake, Databricks, Google BigQuery, or any other analytics / AI platform to increase flexibility, reduce cost, and enable a data-driven company culture with faster time-to-market for innovative software applications.
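
A minimal sketch of the pattern with Flink's Table API: raw Kafka events are curated once, as they arrive, and republished as a reusable data product topic. The topic and field names are invented; the post itself describes the architecture, not this exact code:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ShiftLeftSketch {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(
                EnvironmentSettings.inStreamingMode());

        // Raw events land in Kafka once and are cleaned as they arrive,
        // instead of being re-processed later in every downstream warehouse.
        env.executeSql(
                "CREATE TABLE orders_raw (order_id STRING, amount DOUBLE, ts TIMESTAMP(3)) " +
                "WITH ('connector' = 'kafka', 'topic' = 'orders.raw', " +
                "'properties.bootstrap.servers' = 'localhost:9092', " +
                "'scan.startup.mode' = 'latest-offset', 'format' = 'json')");

        env.executeSql(
                "CREATE TABLE orders_curated (order_id STRING, amount DOUBLE, ts TIMESTAMP(3)) " +
                "WITH ('connector' = 'kafka', 'topic' = 'orders.curated', " +
                "'properties.bootstrap.servers' = 'localhost:9092', 'format' = 'json')");

        // The curated topic is the reusable data product; analytics platforms
        // consume it directly or via Iceberg instead of running their own ETL.
        env.executeSql(
                "INSERT INTO orders_curated " +
                "SELECT order_id, amount, ts FROM orders_raw WHERE amount > 0");
    }
}
```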

Apache Kafka in Manufacturing at Automotive Supplier Brose for Industrial IoT Use Cases

Data streaming unifies OT/IT workloads by connecting information from sensors, PLCs, robotics, and other manufacturing systems at the edge with business applications and the big data analytics world in the cloud. This blog post explores how the global automotive supplier Brose deploys a hybrid industrial IoT architecture using Apache Kafka in combination with Eclipse Kura, OPC-UA, MuleSoft, and SAP.
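
Brose's setup is gateway-based (Eclipse Kura, OPC-UA). Purely as a generic illustration of the edge-to-Kafka step, a plain Java producer could publish machine readings keyed by machine ID; the broker address, topic, and payload below are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EdgeSensorProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "edge-broker:9092"); // local edge cluster
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // One reading per machine, keyed by machine ID so all readings from
            // the same machine stay ordered within one partition.
            String reading = "{\"machine\":\"press-07\",\"temp_c\":71.4,\"ts\":1718000000}";
            producer.send(new ProducerRecord<>("plant.sensor.readings", "press-07", reading));
        }
    }
}
```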

Snowflake Data Integration Options for Apache Kafka (including Iceberg)

The integration between Apache Kafka and Snowflake is often cumbersome. Options include near real-time ingestion with a Kafka Connect connector, batch ingestion from large files, or leveraging a standard table format like Apache Iceberg. This blog post explores the alternatives and discusses their trade-offs. It concludes by showing how data streaming helps with hybrid architectures where data needs to be ingested from the private data center into Snowflake in the public cloud.
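
For the Kafka Connect route, Snowflake provides a sink connector. A minimal standalone configuration sketch follows; the account, user, and private key are placeholders, and the topic name and buffer settings are arbitrary examples:

```properties
name=snowflake-sink
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
tasks.max=1
topics=orders.curated
snowflake.url.name=<account>.snowflakecomputing.com:443
snowflake.user.name=<user>
snowflake.private.key=<private-key>
snowflake.database.name=ANALYTICS
snowflake.schema.name=PUBLIC
# Flush to Snowflake after 10k records or 60 seconds, whichever comes first.
buffer.count.records=10000
buffer.flush.time=60
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=com.snowflake.kafka.connector.records.SnowflakeJsonConverter
```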