Design and Validation – Drawing the Fine Line

Recently, several readers submitted questions seeking clarity on the fine line between the design and validation stages.  While preparing individual answers, we noted that addressing the questions as a group presented an opportunity to cover the fundamental concepts comprehensively.

This month’s blog is dedicated to highlighting the very real differences between these product life cycle stages, as well as the critically necessary interaction between them.

Where to start?  The beginning…

The Primary Directive – Operate in a State of Control

The obvious end game of the product life cycle is to achieve, and operate within, a state of constant control.  The underlying assumption?  If we are always in control of our process, we are always in fact in direct control of the quality of our product.


Control: Get It, Keep It, Prove It 

Validation programs provide us the tools we need to demonstrate that we have, in fact, established control.  The product life cycle, however, begins with design and does not end once control is initially demonstrated.  Control can be lost; our challenge is to ensure it isn’t, and to ensure that we can prove it wasn’t.

The Product Life Cycle: Three Simple Stages

Note that the diagram is presented as a circle for two reasons:


  • Stages have hand-off points – a subsequent stage cannot begin until the previous stage ends
  • A circle is infinite – stages can and should occur more than once; if assessment of data shows room for, or a need for, improvement, design begins again.



Stage 1: Design
Stage 1 is all about design, and design is a big space: from early development (post research/feasibility) straight through to scale-up for commercial manufacturing.  During this stage, we define the commercial manufacturing processes we intend to routinely use.  We define these processes based on knowledge gained throughout process development, facility planning, and scale-up.  During this stage we also generate output that will be critical input to subsequent stages.

Stage 2: Qualification and Process Validation
Stage 2 is all about confirmation.  During this stage, we define and qualify the facilities, equipment and processes that we will use to routinely manufacture product.  This is where we execute all types of qualification and validation activities.

Stage 3: Continued Monitoring and Assessment 
Stage 3 is about monitoring routine manufacturing.  Throughout this stage we implement and utilize process monitoring procedures and technologies to continually verify that we are operating in a state of control.  This phase begins with marketing authorization and uses input from all of the supporting Quality Systems to constantly assess the state of control and the performance of the equipment and processes.  The assessment activities increase the knowledge base and seek opportunities for improvement.  Systems that provide input to the Continual Improvement assessments include, but are not limited to, in-process technologies, complaint and investigation systems, trending and monitoring, change control, deviation investigation, and internal auditing systems.

Many of the questions we received related to the finer points of the design stage and, more precisely, to differentiating between the design and validation stages.  Organizations, systems, and programs often have a difficult time drawing this line clearly because, while validation is not the focus of stage 1, Validation and Quality groups must be deeply involved during stage 1 in order to appropriately plan for (and execute) stages 2 and 3.

In the illustration that follows, we use the build of a new facility as an example – although design of a new process would follow the same path.

Stage 1: DESIGN


 Design: Facilities, Utilities, and Equipment

The design stage must be executed for any process or product that is being developed for commercial use.  When developing a product or a process, specifically designed experiments are generally conducted first on a lab scale, then on a pilot scale, and eventually on a large scale, leading to a deepened and significant understanding of the process.

Moving through the process of developing an advanced understanding will in itself show us the limitations of the process, and the potential sources of process variability.  These experiences will likely be iterative; each repetition will include modifications to the process engineered to reduce the variability identified in the previous iteration.

The act of defining a robust process that can be considered a suitable target for validation must include consideration of the following:

  • Proposed process steps (unit operations) and process variables (operating parameters)
  • Identification of the sources of variability each unit operation is likely to encounter or may contribute
  • The possible range of variability for each input into the operation (e.g., tolerance and range)
  • Evaluation of process steps and variables for potential criticality
  • Selection of process steps and variables for evaluation during representative runs
  • Identification of product characteristics that will be directly relevant to processing parameters (i.e., Critical Quality Attributes (CQAs)/Critical Processing Parameters (CPPs)), and/or a direct measure of the quality of the finished product (i.e., release specifications)
  • Characteristics of facility and equipment design required to execute the proposed process
  • Upper and lower limits of process capability
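The last item above, the upper and lower limits of process capability, is often expressed numerically using capability indices.  As a minimal sketch (not part of the original discussion), assuming approximately normally distributed measurements and hypothetical fill-weight data and specification limits:

```python
import statistics

def process_capability(measurements, lsl, usl):
    """Compute Cp and Cpk for in-process measurements against
    lower/upper specification limits (LSL/USL)."""
    mean = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)           # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)                   # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # actual, centering-aware
    return cp, cpk

# Hypothetical fill-weight data (grams) with specification limits of 98-102 g
weights = [99.8, 100.1, 100.4, 99.9, 100.2, 100.0, 99.7, 100.3]
cp, cpk = process_capability(weights, lsl=98.0, usl=102.0)
```

A Cpk comfortably above a chosen threshold (1.33 is a common rule of thumb) would suggest the process, as designed, stays well within its limits with minimal variability.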

Development output may include (but is not limited to):

  • Facility requirements
  • Equipment types and usage/operating parameters
  • Required materials and components
  • Process capabilities and limits
  • Processing parameters and instructions (sequence of operations)
  • Product characteristics (in-process and final)
  • Definition of critical information and documents designed to control routine manufacturing, including master batch records, release specifications, CPPs, CQAs, in process monitoring methods, output of production control records, and all associated Standard Operating Procedures (SOPs)

At the end of the day, design will not conclude until the outcome is a robust process, with acceptable limits and minimal variability.

Stage 2: CONFIRM

The design activities conducted in stage 1 will establish the basis of control for routine use of a process and the systems used for process control.  The output of stage 1 will be used as the input for all activities conducted in stage 2 (e.g., the acceptance criteria applied to confirmatory testing must come from approved definitions contained within requirements/specifications, batch records, or product specifications).

Stage 2 activities effectively confirm what we think we have come to know by the end of stage 1.  The output of stage 2 (qualification and validation data) is used to demonstrate the effectiveness of that control, and thereby justify commercial release of a process.

Additionally, the testing and measurement methods utilized during qualification/validation become the baseline standard used to measure ongoing control throughout routine execution of the process.


Validation proves (directly measures) quality in a time and space, while the preservation of the control demonstrated in that time and space assures the continued achievement of the same level of quality.

Given that fact, it follows that conducting Stage 2 qualification/validation efforts before design is complete (before the process is well understood) will dramatically compromise future efforts to preserve control and assure quality.

Simply put: moving targets cannot be reliably confirmed.

The confirmation stage has two qualification elements:


  • Design of the facility and qualification of the equipment and utilities (IQ, OQ)
  • Performance qualification (PQ) of the equipment and/or process (process validation)


Installation and operational qualification activities must be conducted in a logical, step-wise manner, noting that the operational abilities of a piece of equipment cannot be adequately qualified until the acceptability of the build and installation have first been confirmed.

Confirming build and installation first ensures that the results we obtain during an operational qualification are valid, and eliminates installation issues as a potential source of operational variability before operational testing is conducted.

It then goes without saying that we need to resolve and remediate any deviations encountered during testing before moving on to the next level of testing.  Any deviation presents the potential for corrective change, and as such, we cannot consider the target static until there are no outstanding deviations or pending corrective actions.

Once we have confirmed appropriate build/installation and operation, we then move into Performance Qualification (PQ) and Process Validation (PV) in order to demonstrate the reliability of the combination of the facilities, utilities, equipment, operators, raw materials, components, and procedures.  This stage confirms that the equipment and ancillary systems, as connected together (i.e., the process), can perform effectively and reproducibly based on the approved process method and specifications.

As the objective of the PQ/PV is the demonstration of consistency, PQ/PV testing requires the evaluation of multiple commercial scale process runs.
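The consistency evaluation across multiple runs can be sketched in a few lines.  This is an illustrative example only, with hypothetical batch data and a hypothetical between-batch acceptance criterion:

```python
import statistics

# Hypothetical assay results (% label claim) from three commercial-scale PQ runs
pq_runs = {
    "Batch-001": [99.1, 99.5, 99.3, 99.4],
    "Batch-002": [99.0, 99.2, 99.4, 99.1],
    "Batch-003": [99.3, 99.6, 99.2, 99.5],
}

batch_means = {batch: statistics.mean(results) for batch, results in pq_runs.items()}

# A simple consistency check: every batch mean within specification, and the
# between-batch spread tighter than a predefined acceptance criterion (0.5 here).
within_spec = all(98.0 <= m <= 102.0 for m in batch_means.values())
spread_ok = max(batch_means.values()) - min(batch_means.values()) <= 0.5
consistent = within_spec and spread_ok
```

A real protocol would predefine the number of runs, the attributes evaluated, and the statistical criteria; the point is simply that consistency is a property of the set of runs, not of any single run.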

As mentioned earlier, stage 2 activities could not be conducted successfully without first having developed a thorough understanding of the process during stage 1.

As I noted, one of the tasks we dealt with during stage 1 was the identification and minimization of potential sources of risk.  When planning a performance qualification or validation, our experimental design should seek to combine the remaining sources of risk into a worst-case scenario.  It is important to note that individual risks can be small, but when all the sources are combined in a single environment, the overall risk can be greatly increased.

It is with the definition of validation in mind that we do this – recognizing that to provide the “highest degree of assurance that a process is reliable and consistent” we must evaluate its performance under the worst possible conditions.

It is also critical to note that ‘worst possible’ actually means the worst possible under controlled conditions.  We need to construct the validation studies such that the outer limits of the process are tested by stressing the routine conditions to the greatest extent possible, but stop short of creating non-routine conditions.
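One common way to construct such worst-case challenges is to bracket the routine operating range of each critical parameter at its extremes.  A minimal sketch, with entirely hypothetical parameters and ranges:

```python
from itertools import product

# Hypothetical critical process parameters with their proposed routine
# operating ranges (low, high) -- all still within controlled conditions.
parameters = {
    "temperature_C": (60.0, 70.0),
    "mixing_speed_rpm": (200, 300),
    "hold_time_min": (30, 45),
}

# Worst-case bracketing: every combination of the range extremes
# becomes a candidate challenge condition for the validation runs.
worst_case_runs = [
    dict(zip(parameters, combo))
    for combo in product(*parameters.values())
]
# 3 parameters at 2 extremes each yields 2**3 = 8 challenge conditions
```

In practice a risk assessment would usually prune this full factorial to the combinations that genuinely stress the process, but every condition generated this way stays inside the defined routine ranges, stressing the process without creating non-routine conditions.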

At this point we also need to recognize that once we have constructed and run the validation studies, we have also formalized the definition of the process that can be routinely executed.

The data set that is generated during the qualification and validation activities will live to support the process that was run while it was collected.  It can support no other process, as it does not reflect the state of any other process.    


Accordingly, implementing a modification to any portion of the validated process must be done under an effective change control system, designed to manage planned change.  Through this system, we propose a change, evaluate its impact, and determine what level of verification or re-validation testing will be required to modify and/or supplement our initial evidentiary data set.

The following is a simple illustration of the relationship between the design space, the validated process (the process whose commercial use our validation data set supports), and the need for change control:


 Change Control

The red box in the center represents the process that was executed during the validation runs (and whose routine use our data set supports), while the bold red line represents the outer limits of that process as used during validation.

Everything within the red solid line represents the allowable space for routine production, which includes equipment adjustments, calibrations, material supply changes, and/or every other variable that was planned into the validation studies.

The exterior dotted line represents everything that was studied and documented during the design stage, and marks the outer limits of our documented process knowledge base.  Everything between the bold red line and the dotted line is referred to as design space.

The development data that lives in this space, in support of use of those parameters, can be exceptionally useful to the organization.  For instance, the documentation that supports the space outside of the bold red line, but within the dotted line, can be used to support release of a product if a production run deviated outside of the red space but stayed within the design space.  Appropriately documenting development activity can, in cases like these, literally save the market life of finished lots.  Without such documentation, a data-driven argument for release of lots produced within that space can never be supported.

If we determine a need to modify our commercial process by incorporating any value within the design space, we must do this through change control.  This may result in a need to re-run the validation in part or in whole (supplementing or replacing the validation data set), effectively re-drawing the red line.

If we intend to incorporate, for routine use, a process parameter that is outside of the dotted line (outside the outer limits of our design space), we must then go back into the design stage, effectively re-drawing the dotted line.
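The decision logic of the last two paragraphs can be sketched as a simple classifier.  The ranges below are hypothetical, chosen only to illustrate the three zones (validated space, design space, and beyond):

```python
# Hypothetical ranges for a single process parameter (e.g., drying temperature, C)
VALIDATED_RANGE = (55.0, 65.0)   # the bold red line: supported by validation data
DESIGN_SPACE = (50.0, 75.0)      # the dotted line: supported by development data

def classify_change(proposed_value):
    """Route a proposed parameter value to the appropriate action."""
    lo_v, hi_v = VALIDATED_RANGE
    lo_d, hi_d = DESIGN_SPACE
    if lo_v <= proposed_value <= hi_v:
        return "routine: within validated limits"
    if lo_d <= proposed_value <= hi_d:
        return "change control: re-validation may be needed (re-draw the red line)"
    return "return to design stage (re-draw the dotted line)"
```

A real change control evaluation considers every affected parameter and their interactions, not one value in isolation, but the zones and the resulting actions are exactly those described above.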

Stage 3: MONITOR

Once we have successfully completed stages 1 and 2, we proceed with our process into licensing activities for new processes, or straight into routine manufacturing if the validation was in response to a change or improvement to a licensed process.  Once we begin routine processing, we also begin stage 3, regularly monitoring the process and all available data sources capable of providing feedback to the control state and the quality of the product.

The monitoring systems we put into place must be proceduralized and must be functioning at all times.  They must also be capable of generating data that can be compared to the data sets collected during validation, such that process drift can be detected.  This is important to note because we are expected not only to monitor for process deviations, or failures in control, but also to know if we drift from the performance we documented during validation.  In visual terms, using our previous figure, we need to be able to recognize any movement within the bounded space, not just movement that takes us out of that space.
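A minimal sketch of that idea, using control limits derived from a hypothetical validation baseline: points outside the limits are out-of-control alarms, while a sustained run of points on one side of the validated center line signals drift inside the bounded space.

```python
import statistics

# Hypothetical in-process measurements collected during the PQ/PV runs
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # 3-sigma control limits

def check_run(values, window=5):
    """Flag out-of-limit points (alarms) and drift (a run of `window`
    consecutive points on one side of the validated center line)."""
    alarms = [v for v in values if not lcl <= v <= ucl]
    drift = any(
        all(v > center for v in values[i:i + window]) or
        all(v < center for v in values[i:i + window])
        for i in range(len(values) - window + 1)
    )
    return alarms, drift
```

Note that in this sketch a sequence can raise a drift signal while every point remains inside the control limits, which is precisely the "movement within the bounded space" the monitoring program must be able to see.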

The very fact that we conduct monitoring activities implies that we are receiving feedback upon which we may act.  We may identify problems that require solving, or recognize obvious opportunities for improvement.  Like the design stage, the monitoring stage is iterative; each iteration increases our knowledge base and provides us opportunities to improve our process.

In fact, finding opportunities for improvement is so probable that the modern Quality System Regulations (QSRs) have been written with the expectation that we will continue to improve our systems, processes, and products throughout the life of the process.  It is critical to note that these regulatory expectations mean that meeting release specifications is no longer enough.  We do not have to encounter failing runs or products to fall below the expected standard.


It is also worth noting that we should not consider change control systems to be routine monitoring systems.  Monitoring tools must be in place at all times; process feedback should be generated with each run, not only when change is being implemented and evaluated.


Earlier in the blog we saw a very simple circular diagram of the product life cycle stages, designed to illustrate their relationship with each other.  Here we see a version of that circular representation with added detail, now that we have a broader understanding of the programs involved.


Here we see:


  • Research: which involves the earliest conceptual version of the process; ideas are proposed and feasibility is determined. 
  • Development: where we iteratively perform laboratory and pilot scale experiments in order to define the process and identify variables and limits.


  • Qualification and Validation activities: engineered to confirm approved definitions generated during stage 1 (e.g., for types and usage of equipment, software, and processes) on a commercial scale.  This confirmation is used to support the accuracy and efficacy of those definitions, establishing the allowable limits for routine use and proving our state of control when we operate within those limits. 

Assess & Monitor

  • Routine operation: where we continually monitor and assess in order to preserve the state of control.  As we receive feedback from the monitoring programs, we then have the ability to further refine and improve the process.  And when we implement changes and improvements, we utilize our change control systems and move back around the circle again.

This is cradle to grave validation.

We are Coda, and we hope that this answers the questions submitted!

 © Coda Corp USA 2012. All rights reserved.




Gina Guido-Redden

Chief Operating Officer


Coda Corp USA


“Quality is never an accident; it is the result of high intention, sincere effort, intelligent direction and skillful execution. It is the wisest of many alternatives.”
