Supplier Quality and the Total Cost of Poor Quality: Understand the Relationship and Avoid the Shortcuts!

During my two-decade career, including a decade of consulting, I have integrated with a multitude of organizations, a wide variety of cultures, and a broad spectrum of projects, each with unique objectives, team members, and needs.  This diversity of experience has been its own reward, and it continues to incentivize me to forgo the obvious advantages of a traditional organization and remain a consultant.

 

However, even in the face of such diversity, there are particular areas in which I continue to recognize unfortunate similarities.  One of those is the common misapplication of the components that, together, define audit programs.

 

Industries regulated by Title 21 are federally obligated to audit their own programs (commonly referred to as internal audits, compliance audits, or first-party audits; 21 CFR 820.22).  They are also federally obligated to institute programs that control the purchasing of goods and services (21 CFR 820.50).

 

While it is clear that industries regulated by Title 21 universally recognize the requirement for, and the value of, auditing functions, it is equally clear that many of the systems currently in use were not designed with the fundamental differences between an internal audit and a vendor audit in mind.

 

While both involve critical assessment, there are fundamental differences:
  • 820.22 [1] specifically uses the word “audit” when defining requirements for evaluating the internal (consumer-centric) effectiveness of internal systems, while
  • 820.50 [2] instead uses the word “evaluate” to define requirements for instituting control over the quality (with regard to fitness for use, reliability, and sustainability) of vended services or purchased product

The difference between the words is both purposeful and relevant.  Each of these assessment tools is a component of a larger program, and the objectives of those programs should fundamentally differ, as follows:

Internal Audit versus Vendor Evaluation
  • An internal audit measures the effectiveness of our own internal systems against the regulations and internal standards that govern them.
  • A vendor evaluation institutes control over the quality of vended services and purchased product, with regard to fitness for use, reliability, and sustainability.
While these concepts appear simple when summarized this briefly, programs currently in use throughout our industry often do not differentiate between them and have not been designed with these distinct objectives in mind.

 

Throughout my consulting career I have frequently seen:
  • Organizations that combine the responsibilities for conducting internal audits and evaluating vendors into a single role
  • Programs that utilize the same tools and techniques for both activities
  • Programs that diminish the act of vendor evaluation by reducing it to a verification of liability insurance
  • Resources conducting either or both of these functions who are unable to communicate the objective of either function
  • Programs that exist to comply with the laws that require them, but that have not been developed to be of significant use
Based on these experiences, legitimate debates can be had over which misapplication of the fundamentals is most common, and which produces the most significant challenges.  From my perspective, the situation whose resolution would realize the greatest return with the smallest investment is, without doubt:

 

Programs that utilize the same tools and techniques for both activities
We’ve all seen this model: compliance checklists prepared to measure controls against the requirements of the applicable regulations and/or the internal standards that require them.  The questions on the checklist are taken directly from the associated regulations and require the auditor to document whether or not these controls exist and, in a few cases, to supplement that affirmation with a few comments.

 

For instance, checklists of this nature may contain steps that resemble the following:

Q1: 820.20 Management Responsibility
Has executive management:
  • Communicated the importance of meeting customer, statutory, and regulatory requirements?
  • Ensured that quality objectives are established?

Q2: 820.20(a) Quality Policy
  • Is there a quality policy?
  • Is it communicated and understood at all levels of the organization?
  • Is it reviewed and updated regularly?

Q3: 820.20(d) Quality Planning
  • Are quality planning steps taken to determine and document how quality requirements will be met?

 

It is clear, when encountering a checklist of this type, that it was developed to measure controls in place within a regulated organization, as the areas of verification map directly to the requirements of the governing regulations.  When I see such checklists applied to vendor audits, I generally assume that it is the same checklist the organization developed to audit its own programs internally.

 

It is evident that the tool was developed to measure the controls put in place to comply with specific Title 21 regulations, regulations that do not apply to non-regulated suppliers.  The examples above may seem innocuous; after all, if vendors have pursued ISO certification, they may indeed have quality policies, they may communicate them, and those policies may have been designed to hold management accountable.

 

However, what happens when we take this a little further, into other areas of Title 21?  Will the vendor have every SOP that companies regulated under Title 21 are required to have?  Will every process control that is required of us be implemented by non-regulated vendors?

 

  • Most likely not
Does that mean that, if we have implemented this type of checklist, our self-imposed procedure requires us to disposition this vendor with a failing status?

 

  • Yes

 

And here is where we do ourselves damage.
When we use this type of tool, built from an internal auditing checklist, to evaluate a non-regulated, external, and perhaps critical supplier, and we find that they do not comply with the totality of Title 21, do we actually fail them?

 

  • NO
Why?
Because deep down we all realize that, although qualifying our vendors is our obligation under Title 21, nothing forces our non-regulated vendors to construct their systems to meet the requirements of these federal laws.
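
To make the mechanics concrete, here is a minimal sketch, in Python, of how this failure mode plays out.  The clause list, checklist wording, and the all-or-nothing pass rule are illustrative assumptions rather than any particular organization’s procedure; the point is simply that a checklist written against Title 21 requirements, reused verbatim, can only disposition a non-regulated vendor as failing.

```python
# Illustrative sketch only: a Title 21 style internal-audit checklist reused as a
# vendor evaluation.  The clause list and the all-or-nothing rule are assumptions.

TITLE_21_CHECKLIST = [
    ("820.20",    "Executive management communicates customer, statutory, and regulatory requirements"),
    ("820.20(a)", "A quality policy exists and is understood at all levels"),
    ("820.20(d)", "Quality planning documents how quality requirements will be met"),
    ("820.40",    "Document controls cover approval, distribution, and changes"),
]

def disposition(responses):
    """Disposition a vendor using the internal-audit checklist verbatim.

    `responses` maps a clause number to True/False.  Any missing or failed
    clause fails the vendor, because the checklist was written for
    organizations that are themselves obligated to Title 21.
    """
    gaps = [clause for clause, _ in TITLE_21_CHECKLIST if not responses.get(clause, False)]
    return ("PASS" if not gaps else "FAIL", gaps)

# A capable, ISO-certified but non-regulated supplier: it has a quality policy
# and engaged management, but no reason to mirror every Title 21 clause.
iso_vendor_responses = {"820.20": True, "820.20(a)": True}
print(disposition(iso_vendor_responses))
# -> ('FAIL', ['820.20(d)', '820.40'])  ...yet in practice we do not fail them.
```

The disposition above fails the vendor on clauses it was never obligated to meet, which is exactly the gap the reporting process then has to argue its way around.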

 

As a result, our decision-making process regarding the suitability of a non-regulated vendor naturally remains based on logical and practical supply-side metrics, not on their ability to meet these regulations.  In this situation, the majority of regulated companies using these methods will also use the leeway provided by their reporting steps to bypass their own programs, effectively negating the acceptance criteria implied by the measurements taken during the audit.

 

The reporting process is how companies attempt to justify the use of the non-regulated vendor, even though the vendor did not meet the bar imposed by the process used to audit them.  These reports provide summaries dedicated to highlighting that this is not a vendor that holds a license under Title 21 and that it therefore has no obligation to that set of regulations.

 

To support this position, we cite controls that are in place that give us confidence (in addition to our history with the vendor and their delivery solutions) that the goods/services:

 

  • Are consistently supplied
  • Continually meet quality and release specifications
  • Are inspected upon receipt as part of our incoming inspection programs
We also include in the report evidence that:
  • The workforce is well trained
  • The supplier is responsive to our needs
  • The business is financially sustainable
  • The programs in place are suitable for the nature of their non-regulated situation
It is clear that, no matter how our program has been designed to evaluate and measure acceptability, the decision-making/dispositioning process continues to come down to what is relevant.  When we use the reporting tools in this way, we continue to point out that we believe our own vendor auditing program is not relevant.

 

So if we realize this, why don’t we adapt our tools proactively?  Why don’t we adapt the program to serve relevant interests, to evaluate the supply-chain metrics that our decision-making process applied as criteria to disposition vendors?

 

That is the $64,000 question!

 

There are serious consequences to allowing a situation like this to continue.  They include, but are definitely not limited to:
  • Investment of time and money in a program that is not designed to evaluate relevant areas of risk and that therefore will not systematically identify relevant risk factors
  • Demonstration of a willingness to fail to comply with our own internal and self-imposed standards
    • the very act of choosing which metrics to evaluate during the audit implies that we feel these are the critical concerns
    • ignoring these metrics during the dispositioning process implies that we know our vendor evaluation program is deficient
  • Demonstration of an inability to measure the effectiveness of our own internal systems
    • continuing to issue reports that highlight the inappropriateness of our own measurement techniques, without also modifying those techniques, implies either that we don’t care to make improvements or that we don’t understand the need to standardize our own processes
  • Demonstration of a willingness to create a system only to meet a federal requirement, instead of developing a program that adds value
In addition to the consequences above, which may have an impact on our relationship with regulators, situations of this type also result in lost business-management opportunities.  Investing time and resources in executing this type of process to evaluate the acceptability of non-regulated vendors will predictably result in a process that:

 

  • Is not able to routinely measure meaningful aspects of particular vendors’ processes
  • Is not responsive to the criticality of the vended product to the supply chain
  • Is not responsive to the risk factors presented to our own products, by the externally supplied product
  • Does not provide a method of discovering errors or of trending errors and performance (for contrast, see the trending sketch after this list)
  • Does not provide a meaningful method of measuring audit results
  • Does not result in a meaningful listing of potentially valuable improvements the vendor could make
  • Does not provide any meaningful control of the impact of supplied product
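
For contrast, the sketch below illustrates the kind of performance trending such a process fails to provide.  It is a minimal example; the rolling six-period window, the 2% alert threshold, and the lot counts are hypothetical assumptions chosen only for illustration.

```python
# A minimal sketch, for contrast, of the kind of performance trending the
# checklist-driven process above cannot provide.  Window size, lot counts, and
# the 2% alert threshold are illustrative assumptions.
from collections import deque

class SupplierTrend:
    """Rolling defect-rate trend for one supplier, updated at each receipt or audit."""

    def __init__(self, window: int = 6, alert_rate: float = 0.02):
        self.history = deque(maxlen=window)  # most recent observation periods
        self.alert_rate = alert_rate

    def record(self, lots_received: int, lots_rejected: int) -> None:
        self.history.append((lots_received, lots_rejected))

    def defect_rate(self) -> float:
        received = sum(r for r, _ in self.history)
        rejected = sum(x for _, x in self.history)
        return rejected / received if received else 0.0

    def adverse_trend(self) -> bool:
        return self.defect_rate() > self.alert_rate

resin_supplier = SupplierTrend()
for received, rejected in [(40, 0), (38, 1), (42, 2), (41, 3)]:
    resin_supplier.record(received, rejected)

print(f"Rolling defect rate: {resin_supplier.defect_rate():.1%}")  # ~3.7%
print("Adverse trend" if resin_supplier.adverse_trend() else "Within expectations")
```

A tool of this kind, fed from incoming inspection results, gives the dispositioning process exactly the supply-side evidence that a compliance checklist cannot.
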
So where does the solution lie?

 

Implementing a program capable of minimizing the risk that a non-regulated supplier may pose to a regulated supply chain is not a complicated endeavor.  The solution lies in constructing a program that focuses on the metrics relevant to the product supplied, and that utilizes the program’s tools in the appropriate order.

 

Effective supplier quality programs should include planning steps that take into account the type of service being provided, the nature of the vendor, and the criticality of the product vended.  While developing a program of this type, consider incorporating tools that speak to each logical phase of a comprehensive process, including the following (a brief illustrative sketch follows the outline):

 

Planning Phase/System Review:
  • Proactively define risk criteria:
    • first, assess the risk that a poor-quality product would pose to the final product
    • then, assess the risk that a shortage of the vended product would pose to your supply chain
  • Define acceptance criteria that are meaningful to the risk factors identified, the nature of the supplied product, and the nature of the vendor
Execution Phase:
  • Include a method of inspection that allows the inspector to discover and measure the vendor’s ability to comply with their own internal quality systems
  • Include an inspection plan that has considered the results of previous audits, states trends of interest, and provides inspectors with methods to track trend movement and to recognize variations in performance
Measurement Phase/Input Evaluation:
  • Include a method of comparatively analyzing one vendor against another
    • the program should provide a vendor-based priority schema to establish supply redundancy for critical components
  • Allow the results to be measured within the context of the severity of the risk factors defined during the planning phase
  • Include measurements of trend movement
Reporting Phase:
  • Include a description of the vendor’s quality system, with respect to their operational processes, highlighting the relevant controls provided
  • Include a final summary of the results as they pertain to the defined risk factors
  • Include a clear and concise final disposition, with limitations of use if applicable
  • Include a listing of relevant corrective actions assigned to the vendor, with clear and measurable expectations such that follow-up audits can adequately measure progress and success
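
As a brief illustration of the planning and measurement phases above, the sketch below scores vendors against proactively defined risk criteria and ranks them for audit priority and redundancy decisions.  The 1-5 scales, the scoring formula, and the threshold of 15 are assumptions chosen for the example, not prescribed values; a real program should derive them from its own risk analysis.

```python
# A minimal sketch of risk-based supplier evaluation, assuming a simple 1-5
# scoring scale; the criteria names, scoring formula, and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class SupplierRisk:
    name: str
    product_quality_impact: int  # risk a poor-quality lot poses to the finished product (1 low - 5 high)
    supply_shortage_impact: int  # risk a shortage poses to the supply chain (1 low - 5 high)
    open_findings: int           # open findings against the vendor's own quality system

    def priority(self) -> int:
        """Higher scores mean audit sooner and consider qualifying a redundant source."""
        return self.product_quality_impact * self.supply_shortage_impact + self.open_findings

suppliers = [
    SupplierRisk("Resin supplier A", product_quality_impact=5, supply_shortage_impact=4, open_findings=2),
    SupplierRisk("Label printer B", product_quality_impact=2, supply_shortage_impact=1, open_findings=0),
]

# Measurement phase: rank vendors against one another and flag candidates for
# redundant sourcing of critical components.
for s in sorted(suppliers, key=SupplierRisk.priority, reverse=True):
    action = "qualify a second source" if s.priority() >= 15 else "routine monitoring"
    print(f"{s.name}: priority {s.priority()} -> {action}")
```

Trend movement can be layered on by storing each audit cycle’s priority score and comparing cycles, and the reporting phase then summarizes the same scores against the risk factors defined during planning.
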
The Total Cost of Poor Quality

 

It is clear to me that many companies regulated by Title 21 use the same tools to internally audit themselves and to assess their non-regulated vendors.

 

In addition to the regulatory risk associated with this practice, and the missed business opportunities, these organizations also fail to take advantage of an opportunity to accurately measure the total cost of poor supplier quality (CoPQ).

 

The cost of poor quality is not insignificant; industry studies reveal that the CoPQ is often equal to 10% of an organization’s gross revenue.  Coda previously published a blog dedicated to the criticality of measuring The Total Cost of Quality (CoQ).  Equally important, and intrinsically related, is the need to measure the total Cost of Poor Quality.

 

Unfortunately, our experience indicates that tracking supplier-driven CoPQ is frequently limited to measuring scrap, waste, and returned inventory.  However, current economic profiles make it clear that direct material costs account for less than 50% of the total CoPQ.

 

The following presents a contemporary listing of factors that also contribute significantly to the total CoPQ (a simple aggregation sketch follows the list):
  • Scrap, rework, sorting, and inventory management overages
  • Inspection failure investigations
  • Line shutdowns
  • Capacity constraints: dedicating equipment to rework reduces the optimal utilization of equipment and FTEs that was critical to the success of profit planning
  • Elevated freight costs: expediting shipments to customers or downstream plants
  • Recall expenses
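
To see why limiting the measurement to scrap and returns understates the problem, here is a simple aggregation sketch.  The dollar figures are hypothetical, chosen only to illustrate how the categories above combine and how small the direct material share of the total can be.

```python
# Hypothetical numbers for illustration only: aggregating the categories above
# shows how direct material losses can end up well under half of the total CoPQ.
supplier_copq = {
    "scrap_rework_sorting":       120_000,  # direct material and labor losses
    "inspection_failure_invest":   45_000,
    "line_shutdowns":              90_000,
    "capacity_lost_to_rework":     70_000,
    "expedited_freight":           35_000,
    "recall_expenses":             60_000,
}

total = sum(supplier_copq.values())
direct_material_share = supplier_copq["scrap_rework_sorting"] / total

print(f"Total supplier-driven CoPQ: ${total:,}")                    # $420,000
print(f"Direct material share:      {direct_material_share:.0%}")   # ~29%
```

In this hypothetical, tracking only scrap, rework, and sorting would have reported less than a third of the supplier-driven CoPQ actually being incurred.
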
Title 21 requires that we establish and maintain control of our supplied products.  If we perceive this requirement as a regulatory burden, we might choose to invest minimally in the development of our supplier quality programs.  Choices of that nature often result in using our internal auditing tools to execute supplier quality programs, because those tools already exist and reusing them seems easier and less expensive than developing new ones.

 

Alternatively, we could embrace this requirement as an opportunity, proactively investing in the development of a program that is directly relevant to our own production realities and to the role that each vendor plays in those processes.

 

We at Coda encourage all of our clients to embrace the opportunity to develop a meaningful supplier quality program that can be used effectively to:
  • Assess and mitigate the risks associated with each vendor
  • Trend and track performance of each supplier
  • Establish redundancy in our supply chain
  • Reduce internal CoPQ
  • Work with our suppliers to improve their quality and reduce their CoPQ
Developed and implemented correctly, supplier quality programs can enable us to minimize our costs and foster collaborative, productive relationships with our vendors.  Taking advantage of this opportunity will serve our organizations, our vendors, and our customers.  This, in any industry and by any standard, would be considered a WIN-WIN-WIN…

 

…Now, go forth, evaluate, measure, control and excel!

 


[1] Harmonized with 211.22
[2] Harmonized with 211.22(a) and 211.34
© Coda Corp USA 2014.  All rights reserved.

 

Authored By:

Gina Guido-Redden
Chief Operating Officer
Coda Corp USA
(p) 716.638.4180
[email protected]
https://www.codacorpusa.com.previewdns.com/
“Quality is never an accident; it is the result of high intention, sincere effort, intelligent direction and skillful execution. It represents the wisest of many alternatives.”