Panel Discussion about Kafka, Edge, Networking and 5G in Oil and Gas and Mining Industry

The oil & gas and mining industries require edge computing for low latency and zero trust use cases. Most IT architectures are hybrid with big data analytics in the cloud and safety-critical data processing in disconnected and often air-gapped environments. This blog post shares a panel discussion that explores the challenges, use cases, and hardware/software/network technologies to reduce cost and innovate. A key focus is on the open-source framework Apache Kafka, the de facto standard for processing data in motion at the edge and in the cloud.


Apache Kafka and Edge Networks in Oil and Gas and Mining

Apache Kafka at the Edge and in Hybrid Cloud

I have explored the use of event streaming at the edge and in hybrid cloud scenarios in great detail in the past. Hence, instead of repeating that here, check out the following posts to learn about use cases and architectures:

Panel Discussion: Kafka, Network Infrastructure, Edge, and Hybrid Cloud in Oil and Gas

Here is the panel discussion. The conversation covers both the software perspective and the hardware/infrastructure/networking perspective. It is a great mix of use cases from the oil & gas and mining industries for processing data in motion and technical facts about communication, radio, and telco infrastructures. These topics are closely related and depend on each other for a project to be deployed successfully.

Speakers:

  • Andrew Duong (Confluent): Moderator
  • Kai Waehner (Confluent): Expert on hybrid software architectures and data in motion
  • Dion Stevenson (Tait Communications): Expert on hardware and network infrastructure
  • Sohan Domingo (Tait Communications): Expert on hardware and network infrastructure

Now enjoy the discussion and feel free to share any thoughts or feedback:


Kafka in the Energy Sector including Oil and Gas, Mining, Smart Grids

An example architecture for hybrid event streaming in the oil and gas industry can look like the following:

Data in Motion for Energy Production - Upstream Midstream Downstream - at the Edge with Kafka in Oil and Gas and Mining

 

If you want to learn more about event streaming with Apache Kafka in the energy industry (including oil and gas, mining, smart grids), check out the following blog post:
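To make the edge-to-cloud flow in such an architecture a bit more concrete, here is a minimal sketch of how a sensor reading from an upstream asset could be keyed and serialized before being produced to a local Kafka cluster (and later replicated to the cloud). All names here are hypothetical, and the hash-based partitioning only mimics what a real Kafka client's partitioner does for keyed records: readings for the same well always land in the same partition, so their order is preserved.

```python
import hashlib
import json
import time

NUM_PARTITIONS = 6  # partition count of the hypothetical edge topic


def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Stable hash of the record key, mimicking Kafka's keyed partitioning:
    all readings of one well land in the same partition, preserving order."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions


def serialize_reading(well_id: str, metric: str, value: float) -> tuple[str, bytes]:
    """Build the (key, value) pair for one sensor-reading event."""
    event = {
        "well_id": well_id,
        "metric": metric,
        "value": value,
        "ts": int(time.time() * 1000),  # epoch millis, like Kafka record timestamps
    }
    return well_id, json.dumps(event).encode("utf-8")


if __name__ == "__main__":
    key, value = serialize_reading("well-042", "wellhead_pressure_bar", 187.5)
    print(f"key={key} partition={partition_for(key)} payload_bytes={len(value)}")
```

In a real deployment you would hand the key/value pair to a Kafka producer client and let a schema registry govern the payload format instead of ad-hoc JSON; the sketch only illustrates why keying by asset ID matters for ordering.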

Notes about the Kafka, Edge, Oil & Gas, and Mining Conversation

If you prefer reading or just listening to a few of the sections, here are some notes about the flow of the panel discussion:

  • 0:00–4:20: Introduction to Tait Communications.
  • 4:45–7:20: Introduction to Confluent and a high-level definition of edge and IoT.
  • 7:30–10:10: Voice communication and connectivity; the importance of having context at the right point in time through data, so the right response can be determined sooner. No matter where workers are or what they are doing, communications at the edge serve the needs of a modern workforce.
  • 10:15–12:10: ML/AI at the edge. Continuous monitoring of all infrastructure and sensors for safety purposes. Event streaming helps send alerts in real time and also supports post-event analysis. There is a process to get into AI: infrastructure first, then pipelines, then AI, not the other way around.
  • 12:15–14:42: 5G cannot solve all problems. Security, privacy, and compliance considerations determine where to process the data, and beyond this, cost is also a factor. Considerations for cloud and on-premise deployments.
  • 14:50–16:03: 5G discussion. There are real-world limitations like cell towers. You also need contextual awareness at the edge to make decisions locally, e.g., gas detection on a vehicle that is disconnected from the backend.
  • 16:15–20:10: Manufacturing and supply chain; radios and communications today and what is possible in the future. IoT at the edge enables manufacturing optimizations with low-latency requirements where cloud-first does not make sense. On the flip side, workloads that are not safety-critical, such as an ERP system, can be pushed into the cloud.
  • 20:10–23:35: The mining side of things: lacking connectivity and a preference for edge-based processing. Autonomous trucks make decisions at the edge rather than accepting delays of even milliseconds by going to the cloud. Doing it locally at the edge is more efficient in some cases. Sensor data from the trucks, such as temperatures, is collected even while disconnected; once the connection is re-established at the base, that data can be uploaded. 'Last mile' analytics. Confluent is IT, not OT: it integrates with IT systems, while the OT world stays separated.
  • 23:38–26:25: Digital mobile radios and voice communications; with autonomous trucks, you do not have a driver on the radio. This is where Tait's Unified Vehicle comes in: a combination of Digital Mobile Radio (DMR) and LTE, where intelligent algorithms handle failover from DMR to LTE if there are connectivity issues. Voice is still important despite the amount of technology in use and the focus on data exploration.
  • 27:03–31:15: Where to start with data exploration: start with your requirements. Does the problem really need computing at the edge, or can the cloud work? Event streaming at the edge where the use case makes sense. How customers get started: solve simple use cases first before the more advanced ones (build the foundations, data pipelines, and simple rules, then test). Collaboration with the AWS Wavelength team; the edge makes sense with low-latency and security requirements.
  • 31:15–32:54: Consider your bandwidth and latency when deciding whether edge computing makes sense. Driverless cars.
  • 33:15–37:49: Where to go from here with existing customers. How do they upgrade, what do customers come to Tait for, and how is video used as part of all this for public safety? Health and safety: monitoring driver alertness in New Zealand. Truck performance, driver performance, and when to take a break. That decision needs to be made as a combination of edge and cloud.
  • 37:50–40:55: Connected vehicles and cars: it is not as hard as it looks. Gas stations with edge computing, loyalty systems, and the importance of after-sales for connected vehicles. GDPR and compliance through aggregation of data, as some countries have strict privacy requirements.
  • 41:00–44:10: A joint project with Tait in the law enforcement space. Voice-to-text, use of metadata, and combining voice and video with event streaming.
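The mining segment of the discussion describes a store-and-forward pattern: trucks collect sensor data while disconnected and upload it once the connection is re-established at the base. The sketch below illustrates that idea in plain Python. It is not how Confluent or Tait implement it; in practice a Kafka producer on the vehicle plus replication at the base (e.g., MirrorMaker 2 or Cluster Linking) plays this role, and the `send` callable here is just a stand-in for the upload step.

```python
from collections import deque


class StoreAndForwardBuffer:
    """Sketch of 'collect while disconnected, upload at the base':
    readings accumulate locally and are flushed in order once a link exists."""

    def __init__(self, send):
        self._send = send        # callable that uploads one reading; raises if offline
        self._pending = deque()  # would be a local disk buffer or log in a real system

    def record(self, reading: dict) -> None:
        """Always succeeds locally, whether or not the truck is connected."""
        self._pending.append(reading)

    def flush(self) -> int:
        """Upload pending readings in order; stop at the first failure so
        nothing is lost or reordered. Returns the number uploaded."""
        sent = 0
        while self._pending:
            try:
                self._send(self._pending[0])
            except ConnectionError:
                break  # still offline; keep the remaining readings buffered
            self._pending.popleft()
            sent += 1
        return sent
```

A caller would invoke `record()` for every sensor reading and `flush()` whenever connectivity might be available; readings captured offline simply stay buffered until a later flush succeeds.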

Kafka for Next-Generation Edge Computing

The energy industry, including oil & gas and mining, is super interesting from a technical perspective. It requires edge and cloud computing. Upstream, midstream, and downstream form a complex and safety-critical supply chain. Processing data in motion with Apache Kafka, leveraging various network infrastructures, is a great opportunity to innovate and reduce costs across various use cases.

Do you already leverage Apache Kafka for processing data in motion in the oil and gas, mining, or any other industry? What does your (future) edge or hybrid architecture look like? Let’s connect on LinkedIn and discuss it! Stay informed about new blog posts by subscribing to my newsletter.

 
