by Denise Robitaille
Clients are often surprised when I inquire about their quality objectives. They expect to be asked whether they have established objectives, but they usually aren’t prepared for any further discussion about them.
They tend to ignore the whole pack of other “shalls” that relate to organizational objectives. And they are reluctant to discuss the relationship among the objectives, continual improvement, the gathering and analysis of data, and management review.
Auditors are tasked not only with ensuring that an organization conforms to defined requirements but also with ensuring that the quality management system is effectively implemented.
Most companies have the two ubiquitous objectives: 100 percent on-time delivery and zero defects. There’s nothing wrong with these objectives. On the contrary, they are often the most visible manifestations of a customer-focused organization. But how does an organization measure its success in achieving these goals?
According to clause 5.4.1 of ISO 9001:2008, quality objectives must be measurable. What are you measuring? And what are the data telling you? Clause 8.4 of ISO 9001 specifies requirements for the gathering and analysis of data; clause 8.2.3 relates to the monitoring of processes. To effectively fulfill the requirements relating to quality objectives, the organization must not only establish the objectives but also define the metrics, establish the method of monitoring and gathering information about those metrics, and analyze the information to identify opportunities for improvement, per clause 8.5.1 of the standard, which actually mentions the use of quality objectives.
The requirements, when viewed sequentially, reflect the plan-do-check-act model found in ISO 9001:2008: Plan the objectives, implement monitoring and data-gathering activities, check the results, and act on the results. Here’s an example: An organization has established an objective for 100 percent on-time delivery. In conducting the surveillance audit, I begin by assessing the management review records. The company has multiple colorful charts displaying delivery stats by region, product line, and customer. I ask, “How do you define on-time delivery?” Invariably, I get at least two different answers. Sometimes I’ll get three or four. On-time delivery can mean:
- Product is shipped to meet the customer’s initial delivery request
- Product is shipped to meet the agreed-to promised delivery date, based on the order acknowledgment
- Product is shipped to meet a revised promised date, based on delays communicated to the customer
- Product arrives on the customer’s dock on the date requested
- Product is received by the customer on the promised date
If you haven’t defined what “on-time delivery” means, you can’t know what the data are telling you. Various individuals in the organization gather data in different ways, based on their own interpretations of “on-time delivery.” The result is that the data gathered are inconsistent, and the resulting analyses are flawed by the skewed data. Additionally, if your customers’ receiving processes have any delays, their analyses of your on-time performance may differ significantly from your own data. So, if on-time delivery to your customers is an objective, it might be beneficial to consider your customers’ definitions of that objective. Because you have multiple customers, the process could become complicated. You might find it necessary to analyze your delivery performance against more than one of the criteria mentioned above, as the sketch below illustrates.
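To make the point concrete, here is a minimal sketch, in Python, that scores the same three orders against three of the definitions above. The field names, dates, and orders are all hypothetical; nothing in ISO 9001 prescribes any particular implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Order:
    # Hypothetical fields; real order records will differ.
    requested: date  # customer's initial requested delivery date
    promised: date   # date promised on the order acknowledgment
    shipped: date    # date the product left the dock
    received: date   # date the product arrived at the customer

# Invented sample data for illustration.
orders = [
    Order(date(2014, 3, 3), date(2014, 3, 10), date(2014, 3, 10), date(2014, 3, 14)),
    Order(date(2014, 3, 5), date(2014, 3, 5),  date(2014, 3, 4),  date(2014, 3, 6)),
    Order(date(2014, 3, 7), date(2014, 3, 7),  date(2014, 3, 7),  date(2014, 3, 7)),
]

# Three competing definitions of "on time," applied to the same records.
definitions = {
    "shipped by initial request": lambda o: o.shipped <= o.requested,
    "shipped by promised date":   lambda o: o.shipped <= o.promised,
    "received by promised date":  lambda o: o.received <= o.promised,
}

for name, rule in definitions.items():
    score = 100.0 * sum(rule(o) for o in orders) / len(orders)
    print(f"{name}: {score:.0f}% on time")
```

The same records yield 67 percent, 100 percent, and 33 percent on-time delivery, depending solely on which definition is applied. That is exactly the inconsistency an auditor hears when two or three people answer the same question differently.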
This is only important if you’re going to use the data to identify opportunities to improve, which is the intent. Returning to my example from the management review records, I’ve often seen cases where data relating to the delivery objective are supposedly reviewed. And I note that the data show a decline from 93 percent down to 81 percent over two reviews. But there is no evidence in the output of the meetings that any actions have been identified. I ask about the outcome of their review of the delivery data. And I hear: “Yes, we know it’s a problem, but we can’t do anything about it. Our customers complain about it all the time.”
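Catching that kind of slide doesn’t require anything elaborate. Here is a minimal sketch, again in Python, that flags an objective whose results are trending away from the target; the function name, the five-point drop threshold, and the history values mirroring the 93-to-81 example are all invented for illustration.

```python
def needs_action(history, target=100.0, drop_threshold=5.0):
    """Flag an objective whose measured results are moving away from the target.

    history: chronological list of review results, in percent.
    """
    if len(history) < 2:
        return False  # not enough reviews to see a trend
    previous, latest = history[-2], history[-1]
    declined = previous - latest >= drop_threshold
    below_target = latest < target
    return declined and below_target

# One value per management review, oldest first.
on_time_history = [93.0, 81.0]

if needs_action(on_time_history):
    print("Negative trend: the management review output should record an action.")
```

Whatever the mechanism, the point is the same: if the review detects a decline, the review output should show what the organization decided to do about it.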
At this point, since achieving established objectives is not a specific requirement of ISO 9001, I generally resort to raising a concern, an opportunity for improvement (OFI). I point out that the output of the management review provides no evidence of action addressing a negative trend in an established quality objective, and that anecdotal evidence suggests the varying definitions make measurement of the objective inconsistent, undermining any chance of continual improvement.
If the objective is 100 percent and your organization improves from 85 percent to 92 percent, there is evidence of improvement even if the ultimate objective has not been reached. If the trend is negative and no action is taken, the question an auditor should ask is, “Why is this an objective if you have no intention of doing anything about it?”
About the author
Denise E. Robitaille is an active member of the U.S. TAG to ISO/TC 176, the committee responsible for updating the ISO 9000 family of standards. She is also principal of Robitaille Associates, committed to making your quality system meaningful. Through training, Robitaille helps you turn audits, corrective actions, management reviews, and the process of implementing ISO 9001 into value-added features of your company. She’s an Exemplar Global-certified lead assessor, an ASQ-certified quality auditor, and an ASQ Fellow. She’s the author of numerous articles and many books, including The Corrective Action Handbook and The Preventive Action Handbook, and a co-author of The Insider’s Guide to ISO 9001:2008, all published by Paton Professional.