IoT and Data Streaming with Kafka for a Tolling Traffic System with Dynamic Pricing

In the rapidly evolving landscape of intelligent traffic systems, innovative software provides real-time processing capabilities, dynamic pricing, and new customer experiences, particularly in the domains of tolling, payments, and safety inspection. This blog post explores success stories from Quarterhill and DKV Mobility, which provide traffic and payment systems for tolling. Data streaming powered by Apache Kafka has been pivotal in their journey towards building intelligent traffic systems in the cloud.
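
As a hypothetical sketch of the underlying pattern (not taken from the Quarterhill or DKV Mobility implementations), the following Kafka Streams application aggregates toll passage events per gantry in short time windows and publishes a congestion-based price to a pricing topic. Topic names, the event format, and the surcharge rule are illustrative assumptions.

```java
// Hypothetical example: dynamic toll pricing with Kafka Streams.
// Topic names, event format (key = gantry ID, one record per vehicle passage)
// and the surcharge rule are assumptions for illustration only.
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

public class DynamicTollPricing {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    builder.stream("toll-passages", Consumed.with(Serdes.String(), Serdes.String()))
        .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
        // Count passages per gantry in 5-minute windows as a simple congestion signal.
        .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
        .count()
        .toStream()
        // Illustrative rule: charge a congestion surcharge once traffic exceeds a threshold.
        .map((windowedGantry, count) -> KeyValue.pair(
            windowedGantry.key(),
            String.valueOf(count > 1000 ? 4.00 : 2.50)))
        .to("toll-prices", Produced.with(Serdes.String(), Serdes.String()));

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dynamic-toll-pricing");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    new KafkaStreams(builder.build(), props).start();
  }
}
```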

When to Choose Apache Kafka vs. Azure Event Hubs vs. Confluent Cloud for a Microsoft Fabric Lakehouse

Choosing between Apache Kafka, Azure Event Hubs, and Confluent Cloud for data streaming is critical when building a Microsoft Fabric Lakehouse. Each option caters to different needs, and this blog post will guide you in selecting the right data streaming solution for your use case.

What is Microsoft Fabric for Azure Cloud (Beyond the Buzz) and how it Competes with Snowflake and Databricks

If you ask your favorite large language model, Microsoft Fabric appears to be the ultimate solution for any data challenge you can imagine. That’s also the impression many people get from Microsoft’s sales teams. But is it really the silver bullet it’s made out to be? This article takes a closer look, first exploring the glossy marketing and sales definition of the platform and then deconstructing it from a more practical perspective. Learn what Microsoft Fabric is truly built for and how it fits into the wider data landscape, especially in comparison to other major players in the data analytics market like Databricks and Snowflake.

Real-Time Model Inference with Apache Kafka and Flink for Predictive AI and GenAI

Artificial Intelligence (AI) and Machine Learning (ML) are transforming business operations by enabling systems to learn from data and make intelligent decisions for predictive and generative AI use cases. Two essential components of AI/ML are model training and inference. This blog post explores how data streaming with Apache Kafka and Flink enhances the performance and reliability of model predictions. Whether for real-time fraud detection, smart customer service applications or predictive maintenance, understanding the value of data streaming for model inference is crucial for leveraging AI/ML effectively.
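
To make the inference side more concrete, here is a minimal Kafka Streams sketch of the embedded-model approach: each payment event is scored by a locally loaded model inside the stream processor. The topic names and the FraudModel class are hypothetical placeholders rather than code from the article; a real deployment would load the model from a runtime such as ONNX, TensorFlow, or a model registry.

```java
// Illustrative example of embedded model inference in a Kafka Streams application.
// Topics and the FraudModel class are hypothetical placeholders.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class FraudScoringApp {

  /** Hypothetical stand-in for a model loaded into the application at startup. */
  static class FraudModel {
    double score(String paymentJson) {
      // A real implementation would extract features and call the model runtime here.
      return paymentJson.contains("high-risk") ? 0.95 : 0.05;
    }
  }

  public static void main(String[] args) {
    FraudModel model = new FraudModel();
    StreamsBuilder builder = new StreamsBuilder();

    builder.stream("payments", Consumed.with(Serdes.String(), Serdes.String()))
        // Inference happens inside the stream processor, record by record, so a
        // prediction is available within milliseconds of the payment event arriving.
        .mapValues(payment -> String.valueOf(model.score(payment)))
        .to("payment-fraud-scores", Produced.with(Serdes.String(), Serdes.String()));

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fraud-scoring");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    new KafkaStreams(builder.build(), props).start();
  }
}
```

Embedding the model in the streaming application is only one option; the same topology could instead call out to a remote model server, trading lower coupling for higher latency.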

Deployment Options for Apache Kafka: Self-Managed, Fully-Managed / Serverless and BYOC (Bring Your Own Cloud)

BYOC (Bring Your Own Cloud) is an emerging deployment model for organizations looking to maintain greater control over their cloud environments. Unlike traditional SaaS models, BYOC allows businesses to host applications within their own VPCs to provide enhanced data privacy, security, and compliance. This approach leverages existing cloud infrastructure and offers more flexibility for custom configurations, particularly for companies with stringent security needs. In the data streaming sector around Apache Kafka, BYOC is changing how platforms are deployed, giving organizations more control and adaptability for various use cases. But it is clearly NOT the right choice for everyone!

Unified Commerce in Retail and eCommerce with Apache Kafka and Flink for Real-Time Customer 360

Delivering a seamless and personalized customer experience across all touchpoints is essential for staying competitive in today’s rapidly evolving retail and eCommerce landscape. Unified commerce integrates all sales channels and backend systems into a single platform to ensure real-time consistency in customer interactions, inventory management, and order fulfillment. This blog post explores how Apache Kafka and Flink can be pivotal in achieving real-time Customer 360 in the unified commerce ecosystem and how it differs from traditional omnichannel approaches.
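
As an illustration of the real-time Customer 360 idea, the following minimal Kafka Streams sketch joins a stream of customer interactions from all channels with a continuously updated customer profile table. Topic names and the plain String payloads are assumptions for readability, not the architecture from the referenced post.

```java
// Illustrative Customer 360 enrichment: stream-table join in Kafka Streams.
// Topic names and String payloads (keyed by customer ID) are assumptions.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class Customer360App {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    // Orders and clickstream events from all channels, keyed by customer ID.
    KStream<String, String> interactions =
        builder.stream("customer-interactions", Consumed.with(Serdes.String(), Serdes.String()));

    // The latest profile per customer (CRM, loyalty, preferences) as a changelog-backed table.
    KTable<String, String> profiles =
        builder.table("customer-profiles", Consumed.with(Serdes.String(), Serdes.String()));

    // Enrich every interaction with the current profile to build a unified 360 view.
    // leftJoin keeps events even when no profile exists yet (profile is then null).
    interactions
        .leftJoin(profiles,
            (event, profile) -> "{\"event\":" + event + ",\"profile\":" + profile + "}",
            Joined.with(Serdes.String(), Serdes.String(), Serdes.String()))
        .to("customer-360", Produced.with(Serdes.String(), Serdes.String()));

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "customer-360");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    new KafkaStreams(builder.build(), props).start();
  }
}
```

Keying both topics by customer ID keeps all data for one customer in the same partition, which is what makes the continuous stream-table join straightforward.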

Apache Kafka Cluster Type Deployment Strategies

Organizations start their data streaming adoption with a single Apache Kafka cluster to deploy the first use cases. The need for group-wide data governance and security, combined with different SLA, latency, and infrastructure requirements, introduces additional Kafka clusters. Multiple Kafka clusters are the norm, not an exception. Use cases include hybrid integration, aggregation, migration, and disaster recovery. This blog post explores real-world success stories and cluster strategies for different Kafka deployments across industries.

Hello, K.AI – How I Trained a Chatbot of Myself Without Coding (Evaluating OpenAI Custom GPT, Chatbase, Botsonic, LiveChatAI)

Generative AI (GenAI) enables many new use cases for enterprises and private citizens. While I work on real-time, enterprise-scale AI/ML deployments with data streaming, big data analytics, and cloud-native software applications in my daily business life, I also wanted to train a conversational chatbot for myself. This blog post introduces my journey of training K.AI without coding: a personal chatbot that can be used to learn about data streaming and the most successful use cases in this area in a conversational format. Yes, this is also based on my expertise, domain knowledge, and opinions, which are available as public internet data, like my hundreds of blog articles, LinkedIn shares, and YouTube videos.

The Shift Left Architecture – From Batch and Lakehouse to Real-Time Data Products with Data Streaming

Data integration is a hard challenge in every enterprise. Batch processing and Reverse ETL are common practices in a data warehouse, data lake, or lakehouse. Data inconsistency, high compute cost, and stale information are the consequences. This blog post introduces a new design pattern to solve these problems: The Shift Left Architecture enables a data mesh with real-time data products to unify transactional and analytical workloads with Apache Kafka, Flink, and Iceberg. Consistent information is handled with stream processing or ingested into Snowflake, Databricks, Google BigQuery, or any other analytics / AI platform to increase flexibility, reduce cost, and enable a data-driven company culture with faster time-to-market for building innovative software applications.
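
The following minimal sketch illustrates the shift-left idea under assumed topic names and a trivial quality rule: raw events are validated and curated once in the stream, and the resulting data product topic can feed both operational consumers and the lakehouse (for example via an Iceberg or Snowflake sink connector) instead of every destination re-processing raw data in batch.

```java
// Illustrative shift-left data product: curate raw events once, in motion.
// Topic names and the quality/normalization rules are assumptions.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class ShiftLeftOrdersDataProduct {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    builder.stream("orders-raw", Consumed.with(Serdes.String(), Serdes.String()))
        // Data quality is enforced at ingestion time, not later in the warehouse.
        .filter((orderId, payload) -> payload != null && payload.contains("\"customerId\""))
        // Normalize the payload once so every downstream consumer sees the same shape.
        .mapValues(String::trim)
        .to("orders-curated", Produced.with(Serdes.String(), Serdes.String()));

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-data-product");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    new KafkaStreams(builder.build(), props).start();
  }
}
```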

Apache Kafka in Manufacturing at Automotive Supplier Brose for Industrial IoT Use Cases

Data streaming unifies OT/IT workloads by connecting information from sensors, PLCs, robotics and other manufacturing systems at the edge with business applications and the big data analytics world in the cloud. This blog post explores how the global automotive supplier Brose deploys a hybrid industrial IoT architecture using Apache Kafka in combination with Eclipse Kura, OPC-UA, MuleSoft and SAP.
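
To make the edge-to-cloud flow tangible, here is a small, illustrative Kafka producer sketch: machine readings collected at the edge are published to a Kafka topic that cloud applications consume. The readSensorValue() method stands in for a real OPC-UA or Eclipse Kura integration, and the topic name and message format are assumptions, not Brose's actual implementation.

```java
// Illustrative edge publisher: sensor readings produced to Kafka for cloud consumers.
// readSensorValue(), the topic name, and the JSON payload are hypothetical placeholders.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EdgeSensorPublisher {

  /** Placeholder for an OPC-UA client call (e.g., via Eclipse Kura or Milo). */
  static double readSensorValue(String nodeId) {
    return Math.random() * 100.0; // hypothetical temperature reading
  }

  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "edge-broker:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      while (true) {
        double value = readSensorValue("ns=2;s=Line1.Press.Temperature");
        String payload = "{\"machine\":\"line1-press\",\"temperature\":" + value + "}";
        // Key by machine ID so all readings of one machine stay ordered in one partition.
        producer.send(new ProducerRecord<>("plant.sensor-readings", "line1-press", payload));
        Thread.sleep(1000); // one reading per second
      }
    }
  }
}
```

In a hybrid setup like the one described above, this topic would typically live on an edge cluster and be replicated to the cloud for analytics.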