By Wai Wong and Bikash Chatterjee, Pharmatech Associates
In January 2011, the FDA issued its new guidance regarding Process Validation. Based upon experience gathered by the agency since 1987, the new guidance reflects the principles of the 2004 FDA initiative, Pharmaceutical cGMPs for the 21st Century – A Risk-Based Approach.
This new definition of process validation is a significant paradigm shift from the original concept, embracing the basic principles of scientific understanding put forth in ICH Q8 and Q9 as a foundation for controlling process variability.
The challenge most organizations will have with this new guidance is assuming responsibility for defining what is scientifically acceptable when characterizing the sources of process variability. This article presents a roadmap, both practical and scientifically sound, for deploying a process validation program consistent with the new guidance.
In our experience, the biggest challenge facing organizations attempting to move beyond the classical paradigm of “three batches and we’re done” is understanding how the new process validation stages work together to build the argument for process predictability.
The new guidance divides process validation into three stages:
- Stage 1 (Process Design): The commercial manufacturing process is defined during this stage based on knowledge gained through development and scale-up activities.
- Stage 2 (Process Qualification): During this stage, the process design is evaluated to determine if the process is capable of reproducible commercial manufacturing.
- Stage 3 (Continued Process Verification): Ongoing assurance is gained during routine production that the process remains in a state of control.
A proposed approach for connecting the activities within each stage and implementing a manageable program is given in Figure 1.
Stage 1: Process Design
This initial stage is the most significant departure from the classical definition of what constitutes process validation. Stage 1 focuses on process characterization studies to identify the Key Process Input Variables (KPIVs) that affect the Critical-to-Quality (CTQ) measurements for the product, i.e. the attributes that determine the product’s form, fit or function. This characterization is typically performed on small- or intermediate-scale equipment.
Product Design
One might ask, why go all the way back to product design? Process predictability relies on understanding what drives product performance, and a solid grasp of the formulation and product design rationale is essential to achieving that level of understanding. The formulation provides an early glimpse of which processing steps may become critical downstream and hence become sources of variation in the process. The product design rationale defines how the formulation, raw materials and processing steps relate to the desired product performance. Without this understanding, it is difficult to know where the emphasis should be for the initial characterization studies in Stage 1 and the confirmatory studies in Stage 2.
Process Risk Assessment
As the process is developed at small scale, a process risk management tool such as a Process Failure Modes and Effects Analysis (pFMEA) can be powerful in identifying which processing steps could affect process stability in Stage 2. Before conducting any risk assessment it is a good idea to create a process map that captures all inputs, outputs and control variables. This map can be used to discuss which CTQs will be measured, and it provides a risk-based foundation for developing a sampling and testing strategy. Small-scale and scale-up data may be captured here if a comparability argument is part of the downstream scale-up exercise, to ensure there is parity between the critical output parameters as they relate to the identified CTQs.
At this point, the pFMEA can be used to prioritize which key process steps and KPIVs represent areas of risk to process predictability. These areas will become the focus of characterization studies in Stage 1 and later in Stage 2.
Equipment/Process Characterization Studies
Before beginning any characterization study, it is essential to be sure the equipment performance is stable and reproducible. Characterization studies performed on unstable equipment will introduce variability that will not be indicative of the final process. While a formal qualification process is not required, fundamental engineering characterization studies should be performed on the equipment before beginning the process studies.
When looking at the basic principles behind ICH Q8, the guidance describes a tiered exercise in which process understanding and key parameter variability is methodically narrowed as the process definition moves from the knowledge space through the design space to the control space used for manufacturing. Characterization studies need to be balanced in their experimental design. This means that early one-factor-at-a-time (OFAT) studies can serve as supportive data for the design of these experiments, but that characterization studies should be balanced, or “orthogonal,” when it comes to determining the contribution to process stability from critical input parameters. While the number of lots will increase during this phase, these smaller-scale studies provide the opportunity for larger sampling plans and greater process characterization than would be practical with full-scale batches. Effective Stage 1 characterization studies are based on several factors:
a) Sampling Plans
Designing a sampling plan with the appropriate resolution to describe the process variability is important to building confidence as the process scales up and moves to validation. The FDA does not mandate a specific approach to establishing a sampling plan; whatever approach is selected, however, must have a clearly defined rationale behind it. Possible sources and approaches include the PQRI recommendations for powder processes, ANSI Z1.4-2008, Acceptable Quality Level (AQL) and Lot Tolerance Percent Defective (LTPD) criteria, and the Operating Characteristic (OC) curve. There is no right or wrong answer, but whatever sampling plan is developed must be defensible based upon the level of resolution necessary to see variation in the process.
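To make the resolution argument concrete, the operating characteristic of a single-sampling attribute plan can be computed directly from the binomial distribution. The sketch below is illustrative only; the plan sizes and defect rates are hypothetical, not recommendations:

```python
from math import comb

def oc_curve_point(n: int, c: int, p: float) -> float:
    """Probability of accepting a lot with true defect rate p under a
    single-sampling plan: inspect n units, accept if at most c defectives
    are found (binomial model)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Compare two hypothetical plans at a few true defect rates
for p in (0.01, 0.04, 0.10):
    print(f"p={p:.2f}  n=50,c=1 -> Pa={oc_curve_point(50, 1, p):.3f}  "
          f"n=125,c=3 -> Pa={oc_curve_point(125, 3, p):.3f}")
```

Evaluating Pa across a range of defect rates traces the full OC curve; the larger plan discriminates more sharply between acceptable and unacceptable lots, which is exactly the resolution trade-off a sampling rationale must defend.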
b) Sampling Technique
Although the equipment may not reflect the sampling challenges at full scale, demonstrating that the sampling and storage methodology does not introduce variability into the process is a precursor to performing characterization studies. A Gage Repeatability and Reproducibility (GR&R) study is an effective way of demonstrating that the sampling technique is robust.
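As an illustration of the idea (the study layout and numbers below are hypothetical, and a full ANOVA-based GR&R would refine these estimates), a minimal variance-components sketch can separate sampling-technique noise from true location-to-location variation:

```python
from statistics import mean, pvariance

# Hypothetical study: 2 analysts each sample the same 3 blend locations
# twice. Layout: data[analyst][location] = [trial1, trial2] (% label claim)
data = [
    [[99.8, 100.1], [98.7, 98.9], [101.2, 101.0]],   # analyst A
    [[100.0, 99.9], [98.5, 98.8], [101.3, 101.1]],   # analyst B
]

# Repeatability: pooled variance of replicate pairs (technique/equipment noise)
cells = [trials for analyst in data for trials in analyst]
repeatability = mean(pvariance(t) for t in cells)

# Reproducibility: variance between analyst averages
analyst_means = [mean(t for loc in analyst for t in loc) for analyst in data]
reproducibility = pvariance(analyst_means)

# %GRR: share of total observed variation attributable to the measurement/
# sampling system rather than the process itself
total = pvariance([x for c in cells for x in c])
grr_pct = 100 * ((repeatability + reproducibility) / total) ** 0.5
print(f"%GRR = {grr_pct:.1f}%")
```

A %GRR of roughly 10% or less is conventionally treated as an acceptable measurement system; above about 30%, the technique itself is obscuring the process variation one is trying to characterize.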
c) Method Robustness
Typically, analytical and in-process methods are validated at this stage, but it is equally important to understand the accuracy and precision of each method itself. Making sure the measurement tool is capable of resolving the differences in process performance being evaluated is fundamental to knowing you are characterizing process variability and not measuring noise.
Design Space Establishment
To identify the boundaries and variables that drive process stability, it is possible to focus only on the parameters that steer the process and the corresponding Key Process Output Variables (KPOVs) that affect the product CTQs. The design space studies will explore the boundary limits of the parameters that are critical to process stability. Identifying the KPIVs of interest can be achieved using a combination of a balanced Design of Experiments (DOE) approach and statistical analysis, such as Analysis of Variance (ANOVA), to summarize the contribution of each variable to the variation seen in the data. A high coefficient of determination (r²) means that most of the variation seen in the data can be explained by the variables evaluated.
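A minimal sketch of this analysis, using a hypothetical 2² factorial on two blending parameters, shows how a balanced (orthogonal) design lets main effects be estimated as simple contrasts, and how r² summarizes the variation explained:

```python
# Hypothetical 2^2 full factorial, duplicated: blend speed (A) and blend
# time (B) in coded -1/+1 units; response = blend uniformity %RSD
runs = [(-1, -1, 4.1), (-1, -1, 4.3), (1, -1, 2.9), (1, -1, 3.1),
        (-1,  1, 3.6), (-1,  1, 3.4), (1,  1, 2.2), (1,  1, 2.4)]

n = len(runs)
ybar = sum(y for _, _, y in runs) / n

# In a balanced (orthogonal) design, each main effect is a simple contrast:
# (mean response at +1) minus (mean response at -1)
effect_a = sum(a * y for a, _, y in runs) / (n / 2)
effect_b = sum(b * y for _, b, y in runs) / (n / 2)

# r^2: fraction of total variation explained by the two main effects
predicted = [ybar + effect_a * a / 2 + effect_b * b / 2 for a, b, _ in runs]
ss_total = sum((y - ybar) ** 2 for _, _, y in runs)
ss_resid = sum((y - p) ** 2 for (_, _, y), p in zip(runs, predicted))
r2 = 1 - ss_resid / ss_total
print(f"A effect={effect_a:+.2f}, B effect={effect_b:+.2f}, r^2={r2:.3f}")
```

In this fabricated data set both factors reduce %RSD, and the two main effects explain nearly all of the observed variation; a low r², by contrast, would signal unmodeled interactions or uncontrolled noise that the design space work has not yet captured.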
Validation Master Plan
The end of Stage 1 should provide sufficient detail to develop the validation master plan that will describe the approach, justification and rationale for moving to Process Performance Qualification.
Stage 2: Process Qualification
The demonstration phase of the process validation lifecycle occurs in Stage 2. Before moving to this phase there are several critical precursors. First, the facility and its supporting critical utilities must be in a state of control. Secondly, the equipment must be qualified—meaning the installation qualification, operational qualification and performance qualification are all complete. Finally, the in-process and release methods used for testing must be validated, and their accuracy and precision well understood, in terms of the final control space being evaluated. These steps are essential to ensure that the unknown variability we are evaluating is attributable to the process alone.
The new guidance introduces a new term, Process Performance Qualification (PPQ), in lieu of process validation for the process demonstration exercise. The PPQ is intended to subsume all of the known variability of the manufacturing process and demonstrate that process predictability is sufficient to ensure the product performs as claimed. The big departure from past process validation approaches is that the cumulative understanding from Stages 1 and 2 drives the decision that the process is predictable. The rigor applied in Stage 1 will dictate the level of characterization, sampling and testing required in Stage 2; dedicated focus in Stage 1 will reduce Stage 2 cost and timeline impact.
The PPQ exercise focuses on demonstrating process control. Data from platform formulations and unit operations can be used to manage the risk moving forward and establish the level of characterization required in the PPQ protocol. Consequently, the old rule of “three lots and we are done” goes out the window. For simple processes with a low risk of process excursion, e.g. high loaded dose, direct blend formulations, the PPQ may be three lots or less. For complex processes, e.g. low dose controlled release spray drying processes or mammalian cell processing, the number of demonstration lots will likely be higher. Old paradigms, supplied by FDA Guidance for such things as media fills for aseptic validation, will now require a risk-based statistical justification based upon lot size and risk tolerance. The PPQ will challenge the process control space. The control space represents the recommended manufacturing limits for the process. The control limits are typically established by moving away from the boundary limits of the design space, and selecting parameter limits in a process design space that will ensure process predictability away from the edge of failure for each KPIV.
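One common way to build such a risk-based statistical justification is the success-run (zero-failure) formula, which ties the number of units or runs demonstrated to a stated risk tolerance. A minimal sketch, with illustrative risk levels (the specific rates and confidence targets below are hypothetical, not regulatory requirements):

```python
from math import ceil, log

def zero_failure_n(p_max: float, confidence: float) -> int:
    """Smallest n such that observing zero failures in n independent units
    demonstrates, at the given confidence, that the true failure rate is
    below p_max. Derived from (1 - p_max)^n <= 1 - confidence."""
    return ceil(log(1 - confidence) / log(1 - p_max))

# e.g. demonstrate a failure rate below 1% with 95% confidence
print(zero_failure_n(0.01, 0.95))   # the classic 299-unit success run
print(zero_failure_n(0.05, 0.95))   # a looser 5% risk tolerance needs far fewer
```

The point of the exercise is that sample size follows from lot size, risk tolerance, and confidence rather than from a fixed "three lots" rule; the formula assumes independent, identically distributed outcomes, so it is a starting point for the justification, not the whole argument.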
There are no sacrosanct evaluation parameters for demonstrating a successful PPQ. Process capability is a fundamental metric that can be used to compare process variability and process centering against the allowable specifications. It can also be used to justify AQL or LTPD sampling levels at the commercial stage, which can yield substantial ongoing cost savings. If desired, this information will support any PAT strategy the site may have for the process downstream.
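A minimal process-capability calculation (the assay values and specification limits below are hypothetical) illustrates the Cp and Cpk metrics referred to here:

```python
from statistics import mean, stdev

def process_capability(data, lsl, usl):
    """Cp compares the specification width to the process spread (6 sigma);
    Cpk additionally penalizes a process that is off-center. A value of
    1.33 is a commonly cited minimum target."""
    mu, sigma = mean(data), stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical assay results (% label claim) against 95.0-105.0 specs
assay = [99.2, 100.1, 99.8, 100.5, 99.6, 100.3, 99.9, 100.0, 100.4, 99.7]
cp, cpk = process_capability(assay, 95.0, 105.0)
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}")
```

When Cpk approaches Cp, the process is well centered; a high, stable Cpk is the kind of quantitative evidence that can justify reduced commercial sampling levels.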
A good practice at the end of the PPQ is to return to the risk management evaluation and demonstrate that the process risk elements identified at the outset of Stage 1 have been mitigated. These data will form the basis for managing continuous improvement of the process via the change control system.
Stage 3: Continued Process Verification
The goal of the third validation stage is continual assurance that the process remains in a state of control (the validated state) during commercial manufacture. The FDA is looking for a monitoring program capable of detecting gradual or unplanned departures from the process as designed. Historically, we have used the product stability program, the change control process and the Annual Product Review as vehicles for monitoring and assessing process stability. The challenge with this approach has always been the limited resolution of these systems, which makes proactive intervention difficult when dealing with process drift. For this stage the agency is looking for a program that builds upon the process understanding acquired in Stages 1 and 2.
Stage 3 will require a monitoring program that balances sampling and testing costs against process understanding. A matrix approach to sampling, focused on intra- and inter-batch variation of the KPIVs and CTQs, is one way to cost-effectively monitor commercial process stability. Employing Statistical Process Control tools such as individuals/moving range charts and XBar-R charts is a simple way to evaluate whether the process is wandering unacceptably. It is important to apply a data-gathering phase before establishing alert and action limits, since the commercial process will subsume the totality of variation from the raw materials, process, and testing methods. These data should drive a statistical comparison against the process characterization and PV lot performance; understanding the intent behind each analysis is essential to reaching the right conclusion. Statistical software packages such as Minitab and JMP can make the analysis simple and reproducible, and can apply, as required, data evaluation criteria such as the Western Electric rules (sometimes called the Westinghouse rules), which discriminate aberrant data from true process variability and indicate whether further action is required. As areas of further study are identified, the risk management tools should be revisited to ensure the impact of the process variation is evaluated consistently.
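The chart mechanics are straightforward to sketch. The example below (the monitoring data is hypothetical) computes individuals/moving-range control limits and applies one of the run rules, the two-of-three-points-beyond-two-sigma test, on the high side only for brevity:

```python
from statistics import mean

def imr_limits(values):
    """Individuals / moving-range control limits. The average moving range
    estimates short-term sigma via the d2 constant for n=2 (1.128)."""
    mr = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = mean(mr) / 1.128
    center = mean(values)
    return center, center - 3 * sigma, center + 3 * sigma

def two_of_three_beyond_2sigma(values, center, ucl):
    """One Western Electric run rule: flag windows where 2 of 3 consecutive
    points fall above the +2 sigma line (the low side would be checked
    symmetrically). Returns the starting index of each flagged window."""
    two_sigma = center + 2 * (ucl - center) / 3
    return [i for i in range(len(values) - 2)
            if sum(v > two_sigma for v in values[i:i + 3]) >= 2]

# Hypothetical monthly assay results for a commercial product
data = [99.9, 100.2, 99.8, 100.1, 100.0, 99.7, 100.3, 101.1, 101.2, 100.1]
center, lcl, ucl = imr_limits(data)
flags = two_of_three_beyond_2sigma(data, center, ucl)
print(f"center={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}, flags={flags}")
```

Note that no single point here exceeds the 3-sigma limit; it is the run rule that surfaces the drift, which is precisely why such criteria add sensitivity beyond simple limit checks, provided a data-gathering phase has first established realistic limits.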
Knowledge Management
Underlying the new guidance is the need for proactively establishing a system for knowledge management. This means ensuring all parties involved in the development, analysis and evaluation of the data and process have a solid understanding of past performance and its implications on process stability and product performance. Consolidating the information in a central document or repository will ensure continuity of learning and will allow continuous improvement or CAPA activities to build upon best practices of the past.
Quality Management System (QMS)
The largest paradigm shift within the new guidance is in the Quality function. Moving away from a product-centric QMS requires that Quality be intimately involved in the evaluation and decision-making criteria as the process moves through each stage. It will require a heightened level of scrutiny to make sure all supportive elements are in place. For example, ensuring critical monitoring systems are calibrated will raise the question: “Is it single-point or three-point calibration?” Method capability will focus on accuracy, precision and interference points. Ensuring that controlling and measurement tools are capable will become the foundation for managing the QMS, rather than the QMS procedures and documentation audit trail. To facilitate both the knowledge management and QMS paradigm shifts, a milestone or stage-gate approach to process validation is an effective way to ensure all key stakeholders and decision makers remain on board with the new process-centric philosophy. An example of one possible approach is shown in Figure 2.
Figure 2: New Process Validation Stage Gate Approach
The new Process Validation guidance represents a dramatic shift from the 1987 FDA guidance issued to industry. While less prescriptive, it provides a sufficiently descriptive framework for industry to create a scientifically driven approach to demonstrating process predictability. There is no single answer to this guidance, but a structured roadmap, with clearly defined deliverables at each milestone, will ensure that the philosophical and technical components required to demonstrate process predictability are applied uniformly across the organization. This uniform approach will also allow the organization to reap the benefits of a more focused validation effort, potentially reducing the cost of the Stage 2 PPQ and resulting in products and processes that are both stable and predictable.
References
1. FDA Guidance for Industry, Process Validation: General Principles and Practices, January 2011
2. The Westinghouse Rules for Identifying Aberrant Observations, Statistical Quality Control Handbook, 1984, Section 1B