
Stream Processing on the Mainframe with Apache Flink: Genius or a Glitch in the Matrix?

Running Apache Flink on a mainframe may sound surprising, but it is already happening, and for good reason. As modern mainframes like the IBM z17 evolve to support Linux, Kubernetes, and AI workloads, they are becoming a powerful platform for real-time stream processing. This blog explores why enterprises are deploying Apache Flink on IBM LinuxONE, how it works in practice, and what business value it brings. With Kafka providing the data backbone, Flink enables intelligent processing close to where business-critical data lives. The result is a modern hybrid architecture that connects core systems with cloud-based innovation without requiring a full migration off the mainframe.
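To make the "Kafka as data backbone, Flink close to the data" idea concrete, here is a minimal sketch of a Flink job that consumes events from a Kafka topic fed by the core systems. The broker address, topic name, and filter logic are illustrative assumptions, not details taken from the referenced blog.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MainframeTransactionJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka topic fed by change data capture from the core banking system
        // (broker address and topic name are placeholders)
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka.linuxone.internal:9092")
                .setTopics("mainframe.transactions")
                .setGroupId("flink-transaction-check")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> transactions =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "mainframe-transactions");

        // Simple stateless filter as a stand-in for real-time processing close to the data
        transactions
                .filter(json -> json.contains("\"amount\""))
                .print();

        env.execute("Flink on LinuxONE - transaction stream");
    }
}
```

Because the job runs on the same LinuxONE platform as the source systems, the processing stays close to the business-critical data while the results can still feed cloud-based consumers via Kafka.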

Replacing Legacy Systems, One Step at a Time with Data Streaming: The Strangler Fig Approach

Modernizing legacy systems doesn’t have to mean a risky big-bang rewrite. This blog explores how the Strangler Fig Pattern, combined with data streaming, enables gradual, low-risk transformation: unlocking real-time capabilities, reducing complexity, and supporting scalable, cloud-native architectures. Discover how leading organizations use this approach to migrate at their own pace, stay compliant, and enable new business models, and why Reverse ETL falls short while streaming is the future of IT modernization.
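As an illustration of how data streaming supports this incremental migration, the following sketch routes events between the legacy application and a new service with Kafka Streams: functionality that has already been "strangled" out of the legacy system is served by the new service, everything else continues to flow to the old one. Topic names and the routing condition are hypothetical, not taken from the referenced blog.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StranglerFigRouter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "strangler-fig-router");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");

        // Already-migrated functionality is handled by the new microservice...
        orders.filter((key, value) -> value.contains("\"type\":\"subscription\""))
              .to("orders.new-service");

        // ...everything else still flows to the legacy application, until it is strangled away.
        orders.filterNot((key, value) -> value.contains("\"type\":\"subscription\""))
              .to("orders.legacy");

        new KafkaStreams(builder.build(), props).start();
    }
}
```

Shifting the routing condition over time moves traffic to the new service step by step, which is exactly the gradual replacement the Strangler Fig Pattern describes.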