ARM CPUs often outperform x86 CPUs in scenarios that demand high energy efficiency and low power consumption. These characteristics make ARM the preferred choice for edge and cloud environments. This blog post discusses the benefits of using Apache Kafka alongside ARM CPUs for real-time data processing in edge and hybrid cloud setups, highlighting energy efficiency, cost-effectiveness, and versatility. A wide range of use cases is explored across industries, including manufacturing, retail, smart cities, and telco.
This blog post explores the synergy between Environmental, Social, and Governance (ESG) principles and Kafka and Flink’s real-time data processing capabilities, unveiling a powerful alliance that transforms intentions into impactful change. Beyond just buzzwords, real-world deployment architectures across industries show the value of data streaming for better ESG ratings.
Generative AI (GenAI) enables automation and innovation across industries. This blog post explores a simple but powerful architecture and demo that combines Python and LangChain with an OpenAI LLM, Apache Kafka for event streaming and data integration, and Apache Flink for stream processing. The use case shows how data streaming and GenAI help to correlate data from Salesforce CRM, search for lead information in public sources like Google and LinkedIn, and recommend ice-breaker conversations for sales reps.
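To give a taste of the demo's building blocks, here is a minimal sketch; the topic name, message schema, broker address, and model choice are all assumptions, not the demo's actual code. It consumes CRM lead events from Kafka and asks an OpenAI model via LangChain for an ice-breaker suggestion:

```python
# Minimal sketch, not the demo's actual code: topic name, message schema,
# broker address, and model choice are all assumptions.
import json

from confluent_kafka import Consumer     # pip install confluent-kafka
from langchain_openai import ChatOpenAI  # pip install langchain-openai

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed local broker
    "group.id": "icebreaker-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["salesforce.leads"])  # hypothetical topic with CRM lead events

llm = ChatOpenAI(model="gpt-4o-mini")     # requires OPENAI_API_KEY in the environment

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    lead = json.loads(msg.value())        # e.g. {"name": "...", "company": "...", "notes": "..."}
    prompt = (
        "Suggest a short, friendly ice-breaker for a sales call with "
        f"{lead['name']} from {lead['company']}. Context: {lead.get('notes', 'none')}."
    )
    print(llm.invoke(prompt).content)     # the LLM's ice-breaker suggestion
```

The point of the architecture is that the LLM call is just another event consumer: the enrichment from Google and LinkedIn and the stream processing with Flink plug into the same Kafka topics without changing this code.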
Loyalty and rewards platforms are crucial for customer retention and revenue growth for many enterprises across industries. Apache Kafka provides context-specific real-time data and consistency across all applications and databases for a modern and flexible enterprise architecture. This blog post looks at case studies from Albertsons (retail), Globe Telecom (telco), Virgin Australia (aviation), Disney+ Hotstar (sports and gaming), and Porsche (automotive) to explain the value of data streaming for improving customer loyalty.
SAP is the leading ERP solution across industries around the world. Data integration with other data platforms, applications, databases, and APIs is one of the hardest challenges in the IT and software landscape. This blog post explores how SAP Datasphere in conjunction with the data streaming platform Apache Kafka enables a reliable, scalable and open data fabric for connecting SAP business objects of ECC and S/4HANA ERP with other real-time, batch, or request-response interfaces.
The research company Forrester defines data streaming platforms as a new software category in a new Forrester Wave. Apache Kafka is the de facto standard, used by over 100,000 organizations. Plenty of vendors offer Kafka platforms and cloud services. Many complementary open-source stream processing frameworks like Apache Flink and related cloud offerings have emerged. And competitive technologies like Pulsar, Redpanda, or WarpStream try to win market share by leveraging the Kafka protocol. This blog post explores the data streaming landscape of 2024 to summarize existing solutions and market trends. The end of the article gives an outlook on potential new entrants in 2025.
Do you wonder about my predicted TOP 5 data streaming trends with Apache Kafka and Flink in 2024 to set data in motion? Discover new technology trends and best practices for event-driven architectures, including data sharing, data contracts, serverless stream processing, multi-cloud architectures, and GenAI.
This blog post explores the state of data streaming for the healthcare industry in 2023, powered by Apache Kafka and Apache Flink. It covers IT modernization and innovation with pioneering technologies like sensors, telemedicine, and AI/machine learning. I look at enterprise architectures and customer stories from Humana, Recursion, BHG (formerly Bankers Healthcare Group), and more. A complete slide deck and on-demand video recording are included.
This blog post explores the state of data streaming for the gaming industry in 2023, including customer stories from Kakao Games, Mobile Premier League (MPL), Demonware / Blizzard, and more. A complete slide deck and on-demand video recording are included.
Good data quality is one of the most critical requirements in decoupled architectures like microservices or data mesh. Apache Kafka became the de facto standard for these architectures. But Kafka is a dumb broker that only stores byte arrays; the Schema Registry enforces message structures. This blog post looks at enhancements that leverage data contracts with policies and rules to enforce good data quality at the field level, and at advanced use cases like routing malicious messages to a dead letter queue.
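To make the dead letter queue pattern concrete, here is a minimal sketch with assumed topic names, broker address, and a hand-rolled field check; Schema Registry data contracts can express such rules declaratively instead of in application code:

```python
# Minimal sketch with assumed topic names and a hand-rolled field check;
# Schema Registry data contracts can express such rules declaratively instead.
import json

from confluent_kafka import Consumer, Producer  # pip install confluent-kafka

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed local broker
    "group.id": "quality-check",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments"])             # hypothetical input topic
producer = Producer({"bootstrap.servers": "localhost:9092"})

def is_valid(record: dict) -> bool:
    # Field-level rule: the amount must be present and non-negative.
    return isinstance(record.get("amount"), (int, float)) and record["amount"] >= 0

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    record = json.loads(msg.value())
    if is_valid(record):
        pass  # hand off to normal downstream processing
    else:
        # Route the violating message to the dead letter queue topic.
        producer.produce("payments.dlq", msg.value())
        producer.flush()
```

Routing violations to a separate topic instead of dropping them keeps the main pipeline clean while preserving the bad messages for inspection, reprocessing, or alerting.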