By Chad Kymal
Artificial intelligence, or AI, can be found almost everywhere these days. College students use tools like ChatGPT to write term papers. Law enforcement takes advantage of automated facial recognition systems to fight crime. Investment firms rely on algorithmic trading programs to make buy, sell, or hold market decisions. The list of applications seems endless, and it’s only getting longer every day.
Although many of these uses for AI are well known, you may not be aware that the technology is also being used to improve management system compliance and the auditing function.
Last year, before they hired us, one of our current clients was fined more than a million dollars for failing to demonstrate conformance to product-development standards on time while delivering critical components. Making matters worse, a new industry-wide quality framework added complexity by requiring complete process assessments that were more stringent than before.
Given the complexity of the processes, this company had many inputs and outputs to inspect and bring into conformance. The vast amount of data involved, the multiple software tools that did not integrate well, and the need for ongoing human intervention to track everything created an unwieldy system that was primed for a quality issue. And so it happened. The company was concerned that all its future projects would share the same fate.
This is where an AI-powered, enterprise-wide quality and integrated management system, like the one we offer via Omnex Systems, can help. What is required first and foremost is a clear understanding of the possibilities and limitations of AI for a complex application such as this. Unlike more general, open forms of AI, this is a closed-loop system focused on retrieving and assessing data from existing software tools and comparing it to process frameworks, including the client organization's documented procedures.
To better understand how this works in the real world, consider the following example.
The company's performance data is uploaded into the system every few minutes, in near real time. AI components then evaluate it against established standards. For example, suppose the system requirements are poorly written, or the system requirements do not link to customer requirements. Using machine learning, the AI catches every such instance and writes it up as a finding. It also evaluates whether problems are closed out on time, whether each process has been audited at the right frequency, and so on.
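To make the idea concrete, here is a minimal sketch of one such check: flagging system requirements that do not trace back to any customer requirement. The record fields and rule here are illustrative assumptions for this column, not the actual Omnex Systems implementation.

```python
def check_requirement_links(system_reqs, customer_reqs):
    """Flag system requirements that do not link to any customer requirement."""
    customer_ids = {req["id"] for req in customer_reqs}
    findings = []
    for req in system_reqs:
        # A requirement with no valid link to a customer requirement
        # becomes a finding, just as an auditor would write it up.
        if not any(link in customer_ids for link in req.get("links", [])):
            findings.append(f"{req['id']}: no link to a customer requirement")
    return findings

customer_reqs = [{"id": "CR-1"}, {"id": "CR-2"}]
system_reqs = [
    {"id": "SR-1", "links": ["CR-1"]},
    {"id": "SR-2", "links": []},  # orphaned requirement
]
print(check_requirement_links(system_reqs, customer_reqs))
# → ['SR-2: no link to a customer requirement']
```

A production system would run dozens of such checks continuously against the uploaded data, rather than waiting for a periodic audit.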
AI can also power what's called a recommendation system. As data is entered, the AI provides simultaneous feedback on whether the data is correct. At the end of the process, it can verify the data and report what is wrong. In this way, the system helps people do their jobs better.
To be precise, AI is continuously learning from these broad frameworks and standards. Through the recommendation and verification systems, AI can help steer the organization to a higher level of performance that is better aligned with requirements.
Those in the auditing space are using what can loosely be considered software tools to do their work—whether they are managing requirements, using a functional safety tool, or doing a failure modes and effects analysis (FMEA). AI uses machine learning rules to analyze the data entered into the software tool, and then it considers the data through the prism of several questions. For example, for corrective action or problem solving, these questions may include:
- Are problems described well?
- Are short-term corrective actions taken?
- Are root causes being identified?
- Are there three different types of root causes?
- Has corrective action been taken?
- Does the corrective action have responsibility and a due date?
- Has it been implemented?
- Is it delivering benefits?
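The questions above amount to a rule set applied to each corrective-action record. A hypothetical sketch follows; the field names and thresholds are assumptions made for illustration, and a real system would map its rules to the organization's own record structure.

```python
# Each rule pairs one of the questions above with a simple check
# against a corrective-action record (a plain dict here).
RULES = [
    ("Problem described well", lambda ca: len(ca.get("description", "")) >= 30),
    ("Short-term corrective action taken", lambda ca: bool(ca.get("containment"))),
    ("Root causes identified", lambda ca: bool(ca.get("root_causes"))),
    ("Three types of root causes", lambda ca: len(ca.get("root_causes", {})) == 3),
    ("Corrective action taken", lambda ca: bool(ca.get("corrective_action"))),
    ("Responsibility and due date set",
     lambda ca: bool(ca.get("owner")) and bool(ca.get("due_date"))),
    ("Implemented", lambda ca: ca.get("status") == "implemented"),
]

def evaluate(ca):
    """Return the questions the record fails, written up as findings."""
    return [name for name, rule in RULES if not rule(ca)]

ca = {
    "description": "Leak at connector X found during end-of-line test.",
    "containment": "100% inspection at station 4",
    "root_causes": {"occurrence": "seal spec", "detection": "gauge plan",
                    "systemic": "FMEA gap"},
    "corrective_action": "Revise seal specification",
    "owner": "J. Smith",
    "due_date": "2024-06-01",
    "status": "open",
}
print(evaluate(ca))
# → ['Implemented']
```

The point is not the code itself but the shape of the logic: whatever an experienced auditor checks by hand can be encoded as rules and applied to every record, every time.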
Whatever we can express logically, we can train AI to do for us. Note that this system does not search externally for answers the way ChatGPT does. We give it the parameters for what is good and what is bad, and the system then tells us whether our data meets the requirements. It boils down to more efficiently evaluating how well the organization executes critical functions.
It’s great to see how these developments are possible in today’s environment, and even more exciting to realize that we are just beginning to scratch the surface on AI applications for conformity assessment. These tools will only get better and better in the years to come.
Will work change? Absolutely, because it always has. But AI tools like the ones I describe in this column won't put management system standard auditors and other professionals out of work. Instead, they will change the work those professionals are doing, moving it from relatively mindless tasks to higher-level, higher-value processes. Humans can do many things that AI cannot, like make value judgments guided by data. Experienced people will need to look at the information produced by AI and make decisions to help take the organization to the next stage of its development. It's an exciting time to be working in industry and learning about these new methods of doing the best possible work. Personally, I can't wait to see what the future holds!
And speaking of the future, the good people at Exemplar Global have given me this monthly column to talk about many developments that will touch on our professional and personal lives: things like changes to standards to accommodate carbon neutrality and the coming changes to the automotive industry as we move from combustion engines to electric motors. What topics would you like me to explore in this space in future months? Please comment below.
About the author