
JavaScript, Node.js and Apache Kafka for Full-Stack Data Streaming

JavaScript is a pivotal technology for web applications. With the emergence of Node.js, JavaScript became relevant for both client-side and server-side development, enabling a full-stack development approach with a single programming language. Both Node.js and Apache Kafka are built around event-driven architectures, making them naturally compatible for real-time data streaming. This blog post explores open-source JavaScript clients for Apache Kafka and discusses the trade-offs and limitations of JavaScript Kafka producers and consumers compared to stream processing technologies such as Kafka Streams or Apache Flink.
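As a taste of what such a client looks like in practice, here is a minimal Node.js producer and consumer sketch using the open-source kafkajs package; the broker address, topic name, and consumer group are placeholder assumptions rather than anything taken from the post.

```typescript
// Minimal Node.js producer/consumer sketch using the open-source kafkajs client.
// Broker address, topic name, and consumer group are placeholder assumptions.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "demo-app", brokers: ["localhost:9092"] });

async function run() {
  // Produce a single event to a hypothetical "orders" topic.
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "orders",
    messages: [{ key: "order-1", value: JSON.stringify({ amount: 42 }) }],
  });
  await producer.disconnect();

  // Consume events from the same topic as part of a consumer group.
  const consumer = kafka.consumer({ groupId: "demo-group" });
  await consumer.connect();
  await consumer.subscribe({ topic: "orders", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log(`${message.key?.toString()} => ${message.value?.toString()}`);
    },
  });
}

run().catch(console.error);
```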
Read More

ARM CPU for Cost-Effective Apache Kafka at the Edge and Cloud

ARM CPUs often outperform x86 CPUs in scenarios requiring high energy efficiency and low power consumption. These characteristics make ARM a preferred choice for edge and cloud environments. This blog post discusses the benefits of using Apache Kafka alongside ARM CPUs for real-time data processing in edge and hybrid cloud setups, highlighting energy efficiency, cost-effectiveness, and versatility. A wide range of use cases is explored across industries, including manufacturing, retail, smart cities, and telco.
Read More

Green Data, Clean Insights: How Kafka and Flink Power ESG Transformations

This blog post explores the synergy between Environmental, Social, and Governance (ESG) principles and the real-time data processing capabilities of Kafka and Flink, unveiling a powerful alliance that transforms intentions into impactful change. Beyond just buzzwords, real-world deployment architectures across industries show the value of data streaming for better ESG ratings.
Read More

GenAI Demo with Kafka, Flink, LangChain and OpenAI

Generative AI (GenAI) enables automation and innovation across industries. This blog post explores a simple but powerful architecture and demo that combines Python and LangChain with the OpenAI LLM, Apache Kafka for event streaming and data integration, and Apache Flink for stream processing. The use case shows how data streaming and GenAI help to correlate data from Salesforce CRM, search for lead information in public sources like Google and LinkedIn, and recommend ice-breaker conversations for sales reps.
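The demo itself is written in Python; as a rough sketch of the same pattern in TypeScript (using the kafkajs and LangChain.js packages), the snippet below consumes a CRM event from Kafka and asks an LLM for an ice-breaker. The topic name, model, and prompt are illustrative assumptions, not the demo's actual code.

```typescript
// Sketch of the pattern: consume a CRM event from Kafka and ask an LLM for an
// ice-breaker recommendation. Topic name, model, and prompt are assumptions.
import { Kafka } from "kafkajs";
import { ChatOpenAI } from "@langchain/openai";

const kafka = new Kafka({ clientId: "genai-demo", brokers: ["localhost:9092"] });
const llm = new ChatOpenAI({ model: "gpt-4o-mini" }); // expects OPENAI_API_KEY

async function run() {
  const consumer = kafka.consumer({ groupId: "icebreaker-recommender" });
  await consumer.connect();
  await consumer.subscribe({ topic: "salesforce-leads" }); // hypothetical topic
  await consumer.run({
    eachMessage: async ({ message }) => {
      const lead = JSON.parse(message.value!.toString());
      const reply = await llm.invoke(
        `Suggest an ice-breaker for a sales call with ${lead.name}, ` +
          `who works at ${lead.company}.`
      );
      console.log(reply.content);
    },
  });
}

run().catch(console.error);
```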
Read More

How the Retailer Intersport uses Apache Kafka as Database with Compacted Topic

A compacted topic is an Apache Kafka feature to persist and query the latest event per key of a Kafka topic. Log compaction and key/value lookups are simple, cost-efficient, and scalable. This blog post uses the success story of Intersport to show how some use cases store data long term in Kafka with no other database. The retailer requires accurate stock information across the supply chain, including the point of sale (POS) in all international stores.
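For illustration, here is a minimal sketch of creating such a compacted topic with the kafkajs admin client; the broker address, topic name, and partition counts are assumptions. Producing events keyed by article number then turns the topic into a key/value view of the latest stock level.

```typescript
// Sketch: create a compacted topic via the kafkajs admin client so Kafka keeps
// only the latest value per key (e.g., the stock level per article).
// Broker address and topic name are placeholder assumptions.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "admin-demo", brokers: ["localhost:9092"] });

async function createCompactedTopic() {
  const admin = kafka.admin();
  await admin.connect();
  await admin.createTopics({
    topics: [
      {
        topic: "stock-levels", // hypothetical topic name
        numPartitions: 6,
        replicationFactor: 3,
        configEntries: [
          // Compaction retains the most recent event per key instead of
          // deleting data purely by retention time.
          { name: "cleanup.policy", value: "compact" },
        ],
      },
    ],
  });
  await admin.disconnect();
}

createCompactedTopic().catch(console.error);
```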
Read More

Customer Loyalty and Rewards Platform with Apache Kafka

Loyalty and rewards platforms are crucial for customer retention and revenue growth for many enterprises across industries. Apache Kafka provides context-specific real-time data and consistency across all applications and databases for a modern and flexible enterprise architecture. This blog post looks at case studies from Albertsons (retail), Globe Telecom (telco), Virgin Australia (aviation), Disney+ Hotstar (sports and gaming), and Porsche (automotive) to explain the value of data streaming for improving customer loyalty.
Read More

SAP Datasphere and Apache Kafka as Data Fabric for S/4HANA ERP Integration

SAP is the leading ERP solution across industries around the world. Data integration with other data platforms, applications, databases, and APIs is one of the hardest challenges in the IT and software landscape. This blog post explores how SAP Datasphere, in conjunction with the data streaming platform Apache Kafka, enables a reliable, scalable, and open data fabric for connecting SAP business objects from ECC and S/4HANA ERP with other real-time, batch, or request-response interfaces.
Read More

The Data Streaming Landscape 2024

The research company Forrester defines data streaming platforms as a new software category in a new Forrester Wave. Apache Kafka is the de facto standard, used by over 100,000 organizations. Plenty of vendors offer Kafka platforms and cloud services. Many complementary open-source stream processing frameworks like Apache Flink and related cloud offerings have emerged. Competing technologies like Pulsar, Redpanda, and WarpStream try to gain market share by leveraging the Kafka protocol. This blog post explores the data streaming landscape of 2024 to summarize existing solutions and market trends. The end of the article gives an outlook on potential new entrants in 2025.
Read More

MQTT Market Trends for 2024: Cloud, Unified Namespace, Sparkplug, Kafka Integration

The lightweight and open IoT messaging protocol MQTT is being adopted more widely across industries. This blog post explores relevant market trends for MQTT: cloud deployments and fully managed services, data governance with unified namespace and Sparkplug B, the MQTT vs. OPC-UA debate, and integration with Apache Kafka for real-time OT/IT data processing.
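To illustrate the OT/IT flow, here is a toy TypeScript sketch that forwards MQTT messages into Kafka using the mqtt and kafkajs packages. Production setups typically rely on an MQTT broker's native Kafka bridge or Kafka Connect instead, and all broker URLs and topic names below are assumptions.

```typescript
// Toy sketch of the OT/IT flow: subscribe to an MQTT topic and forward each
// message into a Kafka topic. Broker URLs and topic names are assumptions.
import mqtt from "mqtt";
import { Kafka } from "kafkajs";

const mqttClient = mqtt.connect("mqtt://localhost:1883");
const kafka = new Kafka({ clientId: "mqtt-bridge", brokers: ["localhost:9092"] });
const producer = kafka.producer();

async function run() {
  await producer.connect();
  mqttClient.on("connect", () => {
    // Unified-namespace style wildcard subscription, e.g. plant/line/sensor.
    mqttClient.subscribe("plant1/+/temperature");
  });
  mqttClient.on("message", async (topic, payload) => {
    await producer.send({
      topic: "iot-sensor-data", // hypothetical Kafka topic
      messages: [{ key: topic, value: payload.toString() }],
    });
  });
}

run().catch(console.error);
```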
Read More

Why Tiered Storage for Apache Kafka is a BIG THING…

Apache Kafka added Tiered Storage to separate compute and storage. The capability enables more scalable, reliable, and cost-efficient enterprise architectures. This blog post explores the architecture, use cases, benefits, and a case study for storing petabytes of data in the Kafka commit log. The post concludes by discussing why Tiered Storage does NOT replace other databases and how Apache Iceberg might change future Kafka architectures even more.
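As a hedged sketch of what enabling Tiered Storage can look like at the topic level (assuming a Kafka 3.6+ cluster where the brokers already have a remote storage plugin configured and enabled), the kafkajs admin call below offloads older segments to remote storage while keeping only recent data on local broker disks. The topic name and retention values are illustrative assumptions.

```typescript
// Sketch: create a topic with tiered storage enabled via the kafkajs admin
// client. Assumes the brokers already run with remote log storage enabled and
// a storage plugin configured; topic name and retention values are examples.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "admin-demo", brokers: ["localhost:9092"] });

async function createTieredTopic() {
  const admin = kafka.admin();
  await admin.connect();
  await admin.createTopics({
    topics: [
      {
        topic: "clickstream", // hypothetical topic name
        numPartitions: 12,
        replicationFactor: 3,
        configEntries: [
          { name: "remote.storage.enable", value: "true" },  // offload old segments
          { name: "local.retention.ms", value: "86400000" }, // 1 day on local disks
          { name: "retention.ms", value: "31536000000" },    // 1 year overall
        ],
      },
    ],
  });
  await admin.disconnect();
}

createTieredTopic().catch(console.error);
```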
Read More