Closed Big Data Loop: 1) Finding Insights with R, H2O, Apache Spark MLlib, PMML and TIBCO Spotfire. 2) Putting Analytic Models into Action via Event Processing and Streaming Analytics.
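To make the closed loop concrete, here is a minimal sketch (in Java, just one of several possible languages) of the first half: an analytic model is trained offline with Apache Spark MLlib and exported as PMML, so that an event processing or streaming analytics engine can later score live events against it. The class name, file path and the tiny in-memory data set are illustrative assumptions, not taken from the slide deck.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

import java.util.Arrays;

public class FindInsightsWithSparkMLlib {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("kmeans-pmml-sketch").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Tiny in-memory training set standing in for historical data, e.g. loaded from HDFS.
        JavaRDD<Vector> trainingData = sc.parallelize(Arrays.asList(
                Vectors.dense(0.0, 0.0),
                Vectors.dense(0.1, 0.1),
                Vectors.dense(9.0, 9.0),
                Vectors.dense(9.1, 9.1)));

        // Step 1: find insights offline - cluster the historical data with k-means.
        KMeansModel model = KMeans.train(trainingData.rdd(), 2, 20);

        // Step 2: export the analytic model as PMML so that an event processing /
        // streaming analytics engine can load it and score incoming events.
        model.toPMML("/tmp/kmeans-model.pmml");

        sc.stop();
    }
}
```

The second half of the loop, putting the model into action, then consists of loading the exported PMML file in a streaming engine and applying it to every incoming event in real time.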
Slide deck from OOP 2016: Comparison of Frameworks and Products for Big Data Log Analytics and ITOA, e.g. the open source ELK stack, TIBCO LogLogic / Unity, Splunk and Papertrail; the relation to Hadoop is also discussed.
See how stream processing / streaming analytics frameworks (e.g. Apache Spark, Apache Flink, Amazon Kinesis) and products (e.g. TIBCO StreamBase, Software AG’s Apama, IBM InfoSphere Streams) are categorized and compared. In addition, understand how stream processing relates to Big Data platforms such as Apache Hadoop and to machine learning (e.g. R, SAS, MATLAB).
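As a rough illustration of what a job on such a framework looks like, here is a minimal sketch of a micro-batch streaming word count using Apache Spark's Java API (Spark 2.x style); the socket source and the 5-second batch interval are placeholder assumptions, and a real deployment would typically consume from Kafka, MQTT or a similar broker.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import scala.Tuple2;

import java.util.Arrays;

public class StreamingWordCount {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("streaming-sketch").setMaster("local[2]");
        // Process the stream in micro-batches of 5 seconds.
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // Events arrive on a TCP socket (e.g. started with `nc -lk 9999`).
        JavaReceiverInputDStream<String> lines = ssc.socketTextStream("localhost", 9999);

        // Continuous transformation: split each line into words and count them per batch.
        JavaDStream<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
        JavaPairDStream<String, Integer> counts = words
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey((a, b) -> a + b);

        counts.print();

        ssc.start();
        ssc.awaitTermination();
    }
}
```

Products such as TIBCO StreamBase or IBM InfoSphere Streams address the same kind of continuous processing, typically with additional tooling and operational features on top; this framework-versus-product trade-off is what the slide deck categorizes.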
Data Warehouses have existed for many years in almost every company. While they are still as good and relevant for the same use cases as they were 20 years ago, they cannot solve the new challenges that have already emerged, nor those sure to come, in an ever-changing digital world. The upcoming sections clarify when to still use a Data Warehouse and when to use a modern Live Datamart instead.
In 2015, the middleware world focuses on two buzzwords: Docker and Microservices. Software vendors still sell products such as an Enterprise Service Bus (ESB) or Complex Event Processing (CEP) engines. How are these related? This session discusses the requirements, best practices and challenges for creating a good Microservices architecture, and whether this spells the end of the ESB.
The following slide deck shows many different technologies (e.g. REST, WebSockets), frameworks (e.g. Apache CXF, Apache Camel, Puppet, Docker) and tools (e.g. TIBCO BusinessWorks, API Exchange) for realizing Microservices; a small Apache Camel sketch follows below.
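As one small, hedged example of the framework option, the following sketch exposes a tiny REST endpoint with Apache Camel's REST DSL (Camel 2.x style); the service name, port and response payload are illustrative assumptions only.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class OrderMicroservice extends RouteBuilder {

    public static void main(String[] args) throws Exception {
        // Run the route standalone; in practice the service could also be packaged into a Docker image.
        Main main = new Main();
        main.addRouteBuilder(new OrderMicroservice());
        main.run();
    }

    @Override
    public void configure() {
        // REST DSL configuration: which HTTP component to use, plus host and port.
        restConfiguration().component("jetty").host("0.0.0.0").port(8080);

        // GET /orders/{id} is routed to an internal Camel route.
        rest("/orders")
            .get("/{id}")
            .to("direct:getOrder");

        from("direct:getOrder")
            .log("Looking up order ${header.id}")
            .setBody(simple("{ \"orderId\": \"${header.id}\", \"status\": \"SHIPPED\" }"));
    }
}
```

The same kind of service could equally be built with Apache CXF (JAX-RS) or TIBCO BusinessWorks; the slide deck compares such alternatives.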