Streaming Innovation in Manufacturing: Building the Future
“With this remote world now, the guys that built it and designed it can, in real time,” provide feedback, says Magyari, such as “‘Okay, you’ve got to nudge that over, shim that, speed that up a little bit’.”
This level of remote expertise is only available via video streaming, as the remote subject matter expert needs to see what the die-stamping machine is doing during an actual production run in order to make the best recommendations for equipment optimization.
Looking Beyond Audio and Video
Here at Streaming Media, we tend to think only about the audio- and video-streaming aspects, but if we step back a bit and think about all the other elements that go into the delivery of even just a simple audio stream, it’s clear that we’re streaming more than just the content itself. For instance, metadata about the album name, track length, and even the album cover are all part of an Apple Music or Spotify stream.
The same is true in manufacturing: It’s not just the finished product that’s critical (the “content” in our streaming-media speak) but also the information about the health of the overall assembly process that generates the content (the “metadata” in streaming-media speak).
When it comes to manufacturing, there are a number of machine types on the shop floor: assembly, extrusion, packaging, etc. Each of these devices is key to the overall product assembly workflow, and each generates its own version of “metadata” to indicate its own relative health. As such, there’s a need to capture that disparate information and make sense of it in real time.
The terminology for manufacturing, and the way the industry defines the real-time delivery of this metadata, is “streaming analytics” as a part of the larger “intelligent maintenance” approach being integrated via application platforms such as SAS. While at first blush the manufacturing term “streaming analytics” sounds a bit backward from our way of analyzing streams, it’s actually a category worth exploring when it comes to maintenance of the video production and delivery pipeline.
SAS is a well-known enterprise analytics toolset, and its company (also called SAS) has an initiative around event-stream processing (ESP).
“Usually in an operation there are a variety of machines connected together, an entire system or process, where, when one important component goes down, the whole entire system fails, causing a very expensive downtime,” says Evan Guarnaccia, solutions architect at SAS, in a video published by the company (which, unfortunately, is no longer available).
“With ESP,” says Guarnaccia, “you can stream sensor data in, perform a root-cause analysis on it offline, and then discover which pieces of the operation are most critical in preventing a failure later.”
It’s not enough just to do the offline analysis, Guarnaccia notes, because the goal is to “determine a problem downstream well in advance,” which requires real-time monitoring of event information for critical components.
“[They] may be big; they may be small,” Guarnaccia says of the components that ESP would monitor. “It could be a small electric motor which, when it has a problem, will trigger a greater failure down the line, or it could be a very expensive component.”
The terminology of “live streaming data” is also used by SAS and other companies in the ESP space. Why is this topic important to those of us in the streaming audio and video space? It all boils down to the Internet of Things (IoT) and its corollary of Industrial IoT. What the manufacturing sector has done is lay a groundwork of cross-device communication and health monitoring, which all other industries will eventually mimic.
One approach being used is a “pattern of interest” that tracks multiple patterns across multiple devices, retaining data and analyzing it to verify whether a multi-pattern anomaly has occurred.
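The article doesn’t specify how such a detector is built, but the idea can be sketched in a few lines of Python: retain recent anomalous readings from each device in a sliding window, and flag a “pattern of interest” only when several distinct devices misbehave together. The device names, thresholds, and window size here are purely illustrative, not taken from any vendor’s implementation.

```python
from collections import deque

class PatternOfInterest:
    """Retain recent anomalous readings per device and flag when
    anomalies co-occur across multiple devices within a time window."""

    def __init__(self, thresholds, window_seconds=5.0):
        self.thresholds = thresholds   # device -> max "normal" value
        self.window = window_seconds
        self.events = deque()          # (timestamp, device) anomalies

    def ingest(self, device, value, timestamp):
        # Record only readings that exceed the device's normal range.
        if value > self.thresholds[device]:
            self.events.append((timestamp, device))
        # Drop anomalies that have aged out of the window.
        while self.events and timestamp - self.events[0][0] > self.window:
            self.events.popleft()
        # The pattern of interest: two or more distinct devices
        # misbehaving within the same window.
        return len({d for _, d in self.events}) >= 2

detector = PatternOfInterest({"motor": 80.0, "press": 120.0})
detector.ingest("motor", 85.0, 0.0)   # one device alone: no pattern
detector.ingest("press", 130.0, 2.0)  # two devices in-window: pattern
```

A single hot motor is just noise; the same motor plus an out-of-range press within a few seconds is a multi-pattern anomaly worth escalating.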
In many ways, this ESP approach is similar to the “artificial intelligence” or “machine learning” approach that Sangeeta Ramakrishnan, a distinguished engineer in the office of the CTO at Cisco, presented in my interview with her at Streaming Media West 2018. I asked about the difference between computer vision and machine learning.
“Computer vision is about replicating [the] sort of vision that humans have, but from a computer perspective,” said Ramakrishnan. “Can a computer detect a cat versus a dog? Can it detect a soccer ball versus a basketball, those kinds of things.
“Machine learning is broader,” said Ramakrishnan. “You can use machine learning to identify anomalies. If there is some problem with your traffic, problem with users abandoning content. Any kind of data.”
While ESP monitors machines using a rules-based approach—i.e., send an alert when unused capacity reaches 10%—machine learning for streaming video delivery looks at patterns to both predict those thresholds before they are crossed and to request corrective action before problems become critical.
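The contrast can be made concrete with a toy sketch. The rule fires only once the threshold is actually crossed; a simple linear extrapolation (standing in here for a real machine learning model, which neither the article nor SAS specifies) warns while there is still headroom. All numbers are invented for illustration.

```python
def rule_alert(unused_capacity_pct, threshold=10.0):
    """ESP-style rule: fire once the threshold is actually crossed."""
    return unused_capacity_pct <= threshold

def predictive_alert(history, threshold=10.0, horizon=3):
    """Trend-based stand-in for machine learning: extrapolate recent
    capacity readings and warn *before* the threshold is crossed."""
    if len(history) < 2:
        return False
    slope = (history[-1] - history[0]) / (len(history) - 1)
    projected = history[-1] + slope * horizon
    return projected <= threshold

capacity = [40.0, 32.0, 25.0, 18.0]        # shrinking headroom, in %
assert rule_alert(capacity[-1]) is False    # 18% hasn't crossed 10% yet
assert predictive_alert(capacity) is True   # but the trend says it will
```

The rules-based alert and the predictive one aren’t rivals: in practice the prediction buys time to schedule corrective action before the hard rule ever trips.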
Combining Computer Vision and Machine Learning
There’s even a combination of traditional streaming with computer vision. Picture and video data used as the basis of computer vision may be transmitted infrequently, but each transmission carries a large payload; a continuous sensor-data stream, by contrast, delivers readings far more often but at a much lower data rate per reading. The overall data volume, though, could be very large if all sensor data is streamed and captured.
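Some back-of-the-envelope arithmetic shows why the low-rate sensors can still dominate total volume. The figures below (image size, sampling rate, sensor count) are illustrative assumptions, not measurements from any real plant.

```python
# Illustrative figures only: one inspection camera vs. a bank of sensors.
image_bytes = 5 * 1024 * 1024        # one 5 MB inspection image
images_per_hour = 360                # one image every 10 seconds
camera_rate = image_bytes * images_per_hour / 3600   # bytes per second

sensor_bytes = 16                    # one small reading
readings_per_sec = 1000              # 1 kHz sampling
sensor_count = 1000
sensor_rate = sensor_bytes * readings_per_sec * sensor_count

# The camera is "low frequency, large payload"; the sensors are
# "high frequency, small payload" -- yet the sensors dominate.
assert camera_rate == 524288.0       # roughly 0.5 MB/s
assert sensor_rate == 16_000_000     # 16 MB/s, on the order of 1.4 TB/day
```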
Xiaochang Wu, part of the Intel Big Data engineering team in Shanghai, spoke on this topic at an October 2017 Spark Summit, in a presentation titled “Apache Spark Structured Streaming Helps Smart Manufacturing.”
Wu said the idea is to use an Apache Spark “structured streaming framework to intelligently transform the production lines in the manufacturing industry” employing Big Data that may share somewhat disparate properties: bursty, disordered (noncontiguous), real-time, and volatile.
The motivation for Intel, according to Wu, is to build an Apache Spark-centric IoT data platform for printed circuit board (PCB) manufacturers, whose needs in assembly and manufacturing are both high volume and complex, especially given the nature of having to attach a number of components to a PCB using a variety of soldering and attachment techniques from ball-grid array to wave-flow soldering.
Given the streaming nature of Wu’s approach, there is stateless and stateful processing in much the same way as the streaming media industry deals with state when it comes to setting up a streaming delivery session. There’s even discussion of buffers, but the requirement in manufacturing isn’t to deliver enough bits into the buffer to keep the video playing smoothly, but rather to keep the component hopper full so that the automated PCB assembly line does not falter.
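The hopper-as-buffer analogy maps neatly onto stateful stream processing, where a fold over the event stream maintains running state. Here is a minimal Python sketch of that idea; the event names, capacities, and water marks are invented for illustration, and a real deployment would express this in a framework such as Spark Structured Streaming.

```python
class HopperMonitor:
    """Stateful stream processing: fold a stream of pick/refill events
    into a running part count, much as a video player's buffer tracks
    how many seconds of playback remain."""

    def __init__(self, capacity, low_water_mark):
        self.level = capacity
        self.low_water_mark = low_water_mark

    def on_event(self, event):
        # Each event mutates the retained state.
        if event == "pick":
            self.level -= 1
        elif event.startswith("refill:"):
            self.level += int(event.split(":")[1])
        # Signal upstream before the line starves, the way a player
        # requests more segments before its buffer runs dry.
        return self.level <= self.low_water_mark

hopper = HopperMonitor(capacity=100, low_water_mark=20)
events = ["pick"] * 80 + ["refill:50"] + ["pick"] * 30
alerts = [hopper.on_event(e) for e in events]
```

In this run the monitor raises exactly one low-level alert, at the moment the hopper first touches its low-water mark, and the refill event restores headroom before the line falters.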
The computer vision portion of the Intel solution is automated optical inspection (AOI): if a PCB fails the AOI, it is reflowed or desoldered, and a post-reflow AOI then runs “to check if the reflow is correct,” according to Wu.
It feels like we’ve only just reached the first station in the assembly line of streaming video’s use in manufacturing. There’s quite a bit more to consider, from video format choices in industrial gear that is expected to last several decades to the basics of delivering training in acoustically harsh environments.
The bottom line is that streaming—whether it’s video for training, just-in-time expertise, computer vision, or even repair and maintenance—is firmly entrenched in the new reality of manufacturing processes. And that’s good news for both the streaming industry and the up-and-coming Industrial IoT that will require computer vision, machine learning, and a healthy dose of streaming media delivery to enhance a smart and lean approach to building and shipping products.
[This article appears in the Autumn 2019 issue of Streaming Media Europe Magazine as "Build It: Streaming in Manufacturing."]