JavaScript, Node.js and Apache Kafka for Full-Stack Data Streaming

JavaScript is a pivotal technology for web applications. With the emergence of Node.js, JavaScript became relevant for both client-side and server-side development, enabling a full-stack development approach with a single programming language. Both Node.js and Apache Kafka are built around event-driven architectures, making them naturally compatible for real-time data streaming. This blog post explores open-source JavaScript Clients for Apache Kafka and discusses the trade-offs and limitations of JavaScript Kafka producers and consumers compared to stream processing technologies such as Kafka Streams or Apache Flink.
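
As a minimal illustration of such a client, the sketch below shows a Node.js producer and consumer written in TypeScript with the community kafkajs library; the broker address, topic name, and payload are illustrative assumptions, not taken from the original post.

```typescript
// Minimal sketch, assuming a local broker and the community "kafkajs" client.
// npm install kafkajs
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "demo-app", brokers: ["localhost:9092"] });

async function run(): Promise<void> {
  // Produce a single event to an illustrative "orders" topic.
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "orders",
    messages: [{ key: "order-1", value: JSON.stringify({ amount: 42 }) }],
  });
  await producer.disconnect();

  // Consume the same topic and log each event as it arrives.
  const consumer = kafka.consumer({ groupId: "demo-group" });
  await consumer.connect();
  await consumer.subscribe({ topic: "orders", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log(message.key?.toString(), message.value?.toString());
    },
  });
}

run().catch(console.error);
```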

GenAI Demo with Kafka, Flink, LangChain and OpenAI

Generative AI (GenAI) enables automation and innovation across industries. This blog post explores a simple but powerful architecture and demo that combines Python and LangChain with OpenAI LLMs, Apache Kafka for event streaming and data integration, and Apache Flink for stream processing. The use case shows how data streaming and GenAI help to correlate data from Salesforce CRM, search for lead information in public datasets like Google and LinkedIn, and recommend ice-breaker conversations for sales reps.
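
The original demo uses Python; purely as a hedged sketch of the same idea in this page's Node.js context, the snippet below consumes a CRM lead event from Kafka with kafkajs and asks an OpenAI model through LangChain.js for an ice-breaker suggestion. The topic name, event schema, model name, and package choice are assumptions for illustration.

```typescript
// Hedged sketch: consume a CRM lead event and ask an LLM for an ice breaker.
// Assumes "kafkajs" and "@langchain/openai" are installed and OPENAI_API_KEY is set.
import { Kafka } from "kafkajs";
import { ChatOpenAI } from "@langchain/openai";

const kafka = new Kafka({ clientId: "genai-demo", brokers: ["localhost:9092"] });
const llm = new ChatOpenAI({ model: "gpt-4o-mini" }); // model name is an assumption

async function run(): Promise<void> {
  const consumer = kafka.consumer({ groupId: "icebreaker-recommender" });
  await consumer.connect();
  await consumer.subscribe({ topic: "salesforce.leads", fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      // Illustrative event shape: { name, company, publicInfo }
      const lead = JSON.parse(message.value?.toString() ?? "{}");
      const reply = await llm.invoke(
        `Suggest a short ice-breaker for a sales call with ${lead.name} at ${lead.company}. ` +
          `Public info: ${lead.publicInfo}`
      );
      console.log("Recommended ice breaker:", reply.content);
    },
  });
}

run().catch(console.error);
```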

How the Retailer Intersport uses Apache Kafka as Database with Compacted Topic

A Compacted Topic is a feature of Apache Kafka to persist and query the latest event per key of a Kafka Topic. Log compaction and key/value lookups are simple, cost-efficient, and scalable. This blog post uses a success story from Intersport to show how some use cases store data long-term in Kafka with no other database. The retailer requires accurate stock information across the supply chain, including the point of sale (POS) in all international stores.
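
As a minimal sketch of how such a topic is configured (not taken from the Intersport post), the snippet below creates a log-compacted topic with the kafkajs admin client; the broker address, topic name, and sizing are illustrative assumptions.

```typescript
// Minimal sketch: create a log-compacted topic with the kafkajs admin client.
// Kafka then retains at least the latest value per key, e.g. per article number.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "admin-demo", brokers: ["localhost:9092"] });

async function createCompactedTopic(): Promise<void> {
  const admin = kafka.admin();
  await admin.connect();
  await admin.createTopics({
    topics: [
      {
        topic: "stock-levels",            // illustrative topic name
        numPartitions: 6,
        replicationFactor: 3,
        configEntries: [
          { name: "cleanup.policy", value: "compact" }, // enable log compaction
        ],
      },
    ],
  });
  await admin.disconnect();
}

createCompactedTopic().catch(console.error);
```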

The Data Streaming Landscape 2024

The research company Forrester defines data streaming platforms as a new software category in a new Forrester Wave. Apache Kafka is the de facto standard used by over 100,000 organizations. Plenty of vendors offer Kafka platforms and cloud services. Many complementary open-source stream processing frameworks like Apache Flink and related cloud offerings have emerged. And competitive technologies like Pulsar, Redpanda, or WarpStream try to win market share by leveraging the Kafka protocol. This blog post explores the data streaming landscape of 2024 to summarize existing solutions and market trends. The end of the article gives an outlook on potential new entrants in 2025.

Apache Kafka + Vector Database + LLM = Real-Time GenAI

Generative AI (GenAI) enables advanced AI use cases and innovation but also changes what enterprise architectures look like. Large Language Models (LLM), Vector Databases, and Retrieval Augmented Generation (RAG) require new data integration patterns. Data streaming with Apache Kafka and Apache Flink processes incoming data sets in real-time at scale, connects various platforms, and enables decoupled data products.
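
As a hedged sketch of one such integration pattern, the snippet below consumes events from Kafka, creates embeddings with the OpenAI API, and hands them to a vector database via a hypothetical upsertToVectorDb helper standing in for whichever vector store is used; topic and model names are assumptions.

```typescript
// Hedged sketch: embed incoming Kafka events and hand them to a vector database,
// so an LLM can later retrieve them via RAG. The vector-store call is hypothetical.
import { Kafka } from "kafkajs";
import OpenAI from "openai";

const kafka = new Kafka({ clientId: "rag-ingest", brokers: ["localhost:9092"] });
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Placeholder for a real vector database client (Pinecone, Weaviate, etc.).
async function upsertToVectorDb(id: string, vector: number[], text: string): Promise<void> {
  console.log(`would upsert ${id} (${vector.length} dimensions): ${text.slice(0, 40)}...`);
}

async function run(): Promise<void> {
  const consumer = kafka.consumer({ groupId: "rag-ingest" });
  await consumer.connect();
  await consumer.subscribe({ topic: "documents", fromBeginning: true }); // illustrative topic

  await consumer.run({
    eachMessage: async ({ message }) => {
      const text = message.value?.toString() ?? "";
      const response = await openai.embeddings.create({
        model: "text-embedding-3-small", // model name is an assumption
        input: text,
      });
      await upsertToVectorDb(
        message.key?.toString() ?? "unknown",
        response.data[0].embedding,
        text
      );
    },
  });
}

run().catch(console.error);
```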

Request-Response with REST/HTTP vs. Data Streaming with Apache Kafka – Friends, Enemies, Frenemies?

Request-response communication with REST / HTTP is simple, well understood, and supported by most technologies, products, and SaaS cloud services. In contrast, data streaming with Apache Kafka is a fundamental change toward processing data continuously. HTTP and Kafka complement each other in various ways. This post explores the architectures and use cases to leverage request-response together with data streaming in the control plane for management or in the data plane for producing and consuming events.
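
One common way the two styles meet is an HTTP endpoint at the edge that hands each request to Kafka for asynchronous processing. The sketch below shows that shape with Node's built-in http module and kafkajs; the port, route, topic, and payload are illustrative assumptions.

```typescript
// Hedged sketch: a request-response HTTP endpoint that hands each request
// to Kafka, combining both styles. Port, route, topic, and payload are assumptions.
import http from "node:http";
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "http-bridge", brokers: ["localhost:9092"] });
const producer = kafka.producer();

async function main(): Promise<void> {
  await producer.connect();

  const server = http.createServer((req, res) => {
    if (req.method === "POST" && req.url === "/orders") {
      let body = "";
      req.on("data", (chunk) => (body += chunk));
      req.on("end", async () => {
        // Synchronous acknowledgement to the caller, asynchronous processing downstream.
        await producer.send({ topic: "orders", messages: [{ value: body }] });
        res.writeHead(202, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ status: "accepted" }));
      });
    } else {
      res.writeHead(404);
      res.end();
    }
  });

  server.listen(8080, () => console.log("HTTP bridge listening on :8080"));
}

main().catch(console.error);
```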

Comparison: JMS Message Queue vs. Apache Kafka

Comparing JMS-based message queue (MQ) infrastructures and Apache Kafka-based data streaming is a widespread topic. Unfortunately, the battle is an apples-to-oranges comparison that often includes misinformation and FUD from vendors. This blog post explores the differences, trade-offs, and architectures of JMS message brokers and Kafka deployments. Learn how to choose between JMS brokers like IBM MQ or RabbitMQ and open-source Kafka or serverless cloud services like Confluent Cloud.

Analytics vs. Transactions in Data Streaming with Apache Kafka

Workloads for analytics and transactions have very different characteristics and requirements. Many people think that Apache Kafka is not built for transactions and should only be used for big data analytics. This blog post explores when and how to use Kafka in resilient, mission-critical architectures and when to use the built-in Transaction API.
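
As a minimal sketch of the Transaction API in this page's Node.js context, the snippet below writes two events atomically with kafkajs; the transactional id, topics, and payloads are illustrative assumptions, not the post's own example.

```typescript
// Hedged sketch: produce two events atomically with Kafka's Transaction API,
// as exposed by kafkajs. Either both messages become visible or neither does.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "tx-demo", brokers: ["localhost:9092"] });

async function transferAtomically(): Promise<void> {
  // An idempotent, transactional producer; the transactionalId is an assumption.
  const producer = kafka.producer({
    transactionalId: "payment-transfer-tx",
    idempotent: true,
    maxInFlightRequests: 1,
  });
  await producer.connect();

  const transaction = await producer.transaction();
  try {
    await transaction.send({ topic: "debits", messages: [{ key: "acc-1", value: "-100" }] });
    await transaction.send({ topic: "credits", messages: [{ key: "acc-2", value: "+100" }] });
    await transaction.commit(); // both events are committed together
  } catch (err) {
    await transaction.abort();  // or neither is visible to read-committed consumers
    throw err;
  }

  await producer.disconnect();
}

transferAtomically().catch(console.error);
```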

Kafka API is the De Facto Standard API for Event Streaming like Amazon S3 for Object Storage

Real-time beats slow data in most use cases across industries. The rise of event-driven architectures and data in motion powered by Apache Kafka enables enterprises to build real-time infrastructure and applications. This blog post explores why the Kafka API became the de facto standard API for event streaming, like Amazon S3 for object storage, and the trade-offs of these standards and corresponding frameworks, products, and cloud services.

Apache Kafka in the Financial Services Industry

Event streaming in financial services is growing like crazy. Continuous real-time data integration and processing are mandatory for many use cases. Apache Kafka is deployed across financial services business departments for mission-critical transactional workloads and big data analytics. High scalability, high reliability, and an elastic open infrastructure are the key reasons for the success of Kafka. This blog post explores different use cases, architectures, and real-world examples in the FinServ sector.