by Peter Holtmann
Auditor knowledge is fundamental. Auditors use knowledge of industry, process, standards, and theory every day. But how do we determine if an auditor is knowledgeable and if he or she truly understands an industry?
Working recently with a key client of ours from Japan has brought me to an understanding that people have different understandings of the word “understanding.” Sound cryptic? Let me explain.
We were preparing a set of exams for a round of public assessment for a group of engineers. The feedback from our Japanese colleagues was that the test questions were becoming too easy, especially for senior engineers. They asked us to make the test much harder to ensure only the best passed it.
It was at this point that our team of experts almost leapt from their seats in their rush to explain that such subjective and qualitative terms have different meanings and applications around the world. Bewildered but interested looks appeared on many faces around the table. This caused me to consider that the concepts of knowledge and understanding could do with some further explanation in our industry. In this way we can all work together to build a level playing field of knowledgeable auditors.
The first thing we must do is remove subjectivity and replace it with a series of measurements of cognitive processes. For this I am borrowing some well-written text from RABQSA International’s resident psychometrician, Mary Rehm.
Mary writes here on Bloom’s taxonomy:
Here is the typical view of Bloom’s taxonomy for the cognitive processes. It is presented as an inverted pyramid. The pyramid is used to represent the fact that each cognitive process builds on the others. Remembering is when memory is used to recall facts, develop lists, or recite material. This type of knowledge is often tested with simple multiple-choice or fill-in-the-blank items that seek definitions or terms.
The “understand” level builds on memory: facts are recalled and their meaning described. Another example is when a list is developed and then summarized. The summarization process is an example of the “understand” level of Bloom’s taxonomy.
Applying knowledge entails carrying out procedures, or taking learned material and using it. As an example, one might understand the process of audit interviews intellectually, but it’s critical that this knowledge also be applied appropriately.
Analyzing information entails breaking concepts into parts or determining the interrelationships of a whole. To analyze, one needs a foundation of knowledge and some experience applying it. A good example of the analyze level of Bloom’s taxonomy is a control chart. An individual must understand the purpose of a specific control chart, the data displayed on it, and perhaps even how those data are collected and how the chart is constructed, in order to analyze the result and determine whether any of the data points are out of control.
Evaluating entails making judgments based on criteria and standards. If we were to take the control chart example one step further, not only would an individual be required to read the resulting control chart to determine whether any data points are out of control, but the individual would also need to be able to tell you whether the process is out of control even when no data points fall outside the upper and lower control limits. This application of knowledge tends toward the evaluation level of Bloom’s taxonomy because the analysis requires comparing the result to other criteria, such as whether all data points fall above or below the center line.
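The two control-chart checks described above can be sketched in code. This is a minimal illustration, not an implementation from the article: the limits and data are invented, and the run length of eight consecutive points on one side of the center line follows the common Western Electric convention, an assumption not stated in the text.

```python
def beyond_limits(points, lcl, ucl):
    """Analyze level: flag indices of points outside the control limits."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

def run_on_one_side(points, center, run=8):
    """Evaluate level: True if `run` consecutive points fall on one side
    of the center line, a signal even when no point breaches the limits.
    The run length of 8 is the Western Electric convention (an assumption)."""
    streak = 0
    last_side = 0
    for x in points:
        side = 1 if x > center else -1 if x < center else 0
        streak = streak + 1 if side == last_side and side != 0 else (1 if side else 0)
        last_side = side
        if streak >= run:
            return True
    return False

# Illustrative data: center = 5.0, limits 4.0 and 6.0.
data = [5.2, 5.3, 5.1, 5.4, 5.2, 5.3, 5.1, 5.2, 6.5]
print(beyond_limits(data, 4.0, 6.0))   # only the last point breaches a limit
print(run_on_one_side(data, 5.0))      # eight points above center: a run signal
```

The first check is pure reading of the chart (analysis); the second compares the result against an additional criterion (evaluation), mirroring the distinction drawn above.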
The create level of the taxonomy is the most difficult level and also the most difficult to measure. To create is to put elements together to form a whole. An example might be finding a nonconforming product, determining the root cause, and devising a correction for that product so it can be reworked.
If we apply this to the field of auditing and the typical knowledge competencies auditors must possess, we should begin to break down these competencies against a cognitive hierarchy. For example, when we look at ISO 19011, how much of the standard must be remembered vs. utilizing evaluation skills? I would say that most of the standard would align with the application cognitive process as it sets out how to perform the task.
What about industry standards such as ISO 14001? Is this moving toward the “evaluate” and “create” end of the scale? I would venture a guess to say yes. Then comes the industry knowledge, and this is where it gets interesting. I would say that industry-specific knowledge falls high on the pyramid, with creative cognitive processes being applied at much higher rates than evaluation or even analyzing skills. I say this because much of what auditors bring to the field is work experience in a particular sector. This is why they’re engaged to perform an audit against a particular standard, utilizing the audit process standard for a unique customer process.
They must be able to apply the “taught” processes to the “experienced” processes and make rational decisions on what is correct or compliant. Interestingly, more of this happens subconsciously than you might give it credit for.
The brain is hardwired to seek out logic and ties it to strong emotions of safety, comfort, happiness, and recognition. When we balance the knowledge of process against the experience of work-related tasks, the brain compares what’s been taught to what’s been experienced and weeds out the undesirable. In this way we conclude that a decision is the correct one.
This can cause trouble when someone else’s logic arrives at a very different answer from your own. This is called an argument. It may not cause a stand-up yelling contest, but it will definitely involve sets of constructed parameters that differ greatly in their origins and end points.
How do we know which is the right set of parameters? How do we resolve an argument of logic offered by one or more parties? We return again to retained knowledge. We measure how much knowledge has been retained and what type of knowledge can be accessed by that person to win his or her point, quo vadis.
We have all experienced arguments that seemingly have no end, when one person’s path to logic (quo vadis) does not prevail over another’s. We must then rely on a third-party point of reference. Enter the examination. Such a tool can quickly isolate the point of logic and determine the amount of knowledge that can be produced in response to a set of questions.
Writing an exam is much harder than it first appears. Sure, “Let’s write a series of questions on a topic and see how many are answered correctly.” But what question style do you use? Multiple choice, true or false, short answer, long answer, open questions, closed questions? How many questions should the exam include on the topic? How should you group a series of questions into a final result of knowledgeable or not? Should you set the passing mark at 100 percent, or is a lower passing grade acceptable?
Finally, how do you ensure that people globally have access to the same exam conditions and arrive at the same result regardless of their experiential backgrounds? The answer is, by using tools such as Bloom’s taxonomy.
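One way to picture grouping questions into a final result is a scoring sketch in which each question is tagged with its Bloom level. This is purely hypothetical: the 70 percent cut score, the tags, and the function names are illustrative assumptions, not anything prescribed here or by RABQSA.

```python
# Hypothetical exam-scoring sketch. Questions are tagged with a
# Bloom's-taxonomy level; the 70 percent cut score is an assumed value.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def score_exam(responses, cut_score=0.70):
    """responses: list of (bloom_level, answered_correctly) pairs.
    Returns (passed, per-level breakdown as 'correct/total' strings)."""
    by_level = {lvl: [0, 0] for lvl in BLOOM_LEVELS}  # [correct, total]
    for level, correct in responses:
        by_level[level][1] += 1
        by_level[level][0] += int(correct)
    total = sum(t for _, t in by_level.values())
    right = sum(c for c, _ in by_level.values())
    passed = total > 0 and right / total >= cut_score
    breakdown = {lvl: f"{c}/{t}" for lvl, (c, t) in by_level.items() if t}
    return passed, breakdown

# A candidate strong on recall but weak at evaluation fails the cut:
print(score_exam([("remember", True), ("apply", True), ("evaluate", False)]))
```

The per-level breakdown is the point of the sketch: tagging items by cognitive level is what lets an exam body check that candidates worldwide are being measured against the same hierarchy, not just the same raw score.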
Ask yourself, “When I know something, how do I know this is right in the situation to which I am applying it?” “How do I know that another person will do the same as me?” “How do I know that I am accessing the same knowledge as another?”
About the author
Peter Holtmann is president and CEO of Exemplar Global (formerly RABQSA International Inc.) and has more than 10 years of experience in the service and manufacturing industries. He received his bachelor’s degree in chemistry from the University of Western Sydney in Australia and has worked in industrial chemicals, surface products, environmental testing, pharmaceutical, and nutritional products. Holtmann has served on various international committees for the National Food Processors Association in the United States and on the Safe Quality Foods auditor certification review board.