Adapted from the writer's talk at the Connected Plant Conference on real-world applications of AI in the process industry.
AI is good at solving problems at scale, but you can't always trust it.
Take ChatGPT, for instance. If you're planning a trip, it can work well, and there are no real consequences if it gets something wrong. But when you need hard facts, or the stakes are high, trusting it can be a bit…perilous.
ChatGPT may be dominating the headlines, but the question of whether you can trust your AI applies wherever it's used. If you don't trust your solutions, how are you supposed to realize value from them? A majority of executives aren't seeing value from their AI investments, so clearly something is amiss in how AI gets implemented and adopted.
Let's talk about an alternative to AI: engineers. They're good at solving problems, but not at scale. They're limited by the tools they have available.
Most folks at a plant can identify one or two places in the process where optimization would help the bottom line, but Excel and visualization tools can't handle the complexity. Maybe a separation unit is running below its expected efficiency and you're losing expensive raw material, but you don't know how to stop it, or even how to figure out how to stop it. Maybe you have process steps in your batch reaction that require a lot of idle time. You know that if you sped them up, you could increase capacity, but you don't know how to do that without risking the quality of the final product.
These are problems that require both subject matter expertise and cutting-edge, custom machine learning. Often, we'll see data scientists working with engineers to try to combine the two. But while data scientists are very good at building the optimal model for a given problem, they typically aren't trained in how a factory actually operates. The result is months of back-and-forth between the data scientist and the subject matter expert to make sure everyone is solving the right problem and can trust the solution. Then you still have to figure out how to make that solution actionable.
When the data scientist and engineer partnership works well, it can produce real solutions, but at a considerable cost in time and effort. In practice, we see it pay off for extremely high-ROI use cases at the corporate level, or for use cases that represent a true competitive advantage. If the ROI or complexity is too low, the time and computing you throw at the problem can actually lose you money, which is why many lower-ROI use cases end up best handled locally with tools like Excel or visualization software. The same story holds true for many consultant-led efforts.
The truth is that you don't have to be an expert in machine learning to get value out of it. There's no need for the convoluted back-and-forth between data scientists and subject matter experts, a customary step in a traditional machine learning effort that can add weeks to a project even when done efficiently.
Our approach to these challenges is to scale the fundamental limiting step in realizing value from industrial use cases: the engineer. We think it makes the most sense to empower the people who understand the problem best to solve it themselves. The large corporations where we roll out Fero open it up to the engineers who have those one or two use cases that can make a big impact at the plant level. They are the people who see and know the problems, and they're the ones who can tackle them in the moment.
It's easy to get carried away with machine learning and fixate on how accurate your model is. But you don't realize any value from a perfectly accurate model if it has no way of interacting with the world. Know what actions your analysis enables you to take; that will ultimately determine the value you realize.