By Ali Afnan, Contributing Editor
The mission of the pharmaceutical regulator the world over is to represent the consumer, assess the safety and efficacy of medicines and assure the availability of quality products.
In the United States, pharmaceutical products are regulated under Title 21 of the Code of Federal Regulations. The CFR is not prescriptive. Its goal is to specify “what” needs to be done, not “how” it should be done.
However, the regulator has often been driven to prescriptive oversight, pressured by public outcry over recent drug failures and by the challenges posed by new technology, increased drug potency, and the risk of adverse drug reactions in patients.
It’s no surprise, then, that regulators around the world have become increasingly risk averse. They oversee an industry that, outside of drug discovery, is measurably less innovative than others, at a time when consumer expectations are rapidly increasing.
At the turn of this century, FDA held advisory committee meetings to better understand why the drug industry wasn’t more innovative. The results were some of the Agency’s boldest steps ever: the Process Analytical Technology (PAT) and Drug Product Quality initiatives.
To stimulate innovation, FDA argued, manufacturers needed more flexibility to vary their processes and continuously improve them.
The dilemma was the absence of scientific and engineering knowledge in drug applications and submissions. Measurement of the material attributes critical to quality was often neglected. Firms did not adjust and improve their processes, because any change required FDA approval, a lengthy course of action.
Now, FDA called for process understanding, measurement and the real-time control of attributes critical to product quality. At the same time, it emphasized that consistent product quality depends on risk assessment grounded in process understanding.
This was a major challenge to an industry whose processes are fixed in time, despite inherent process and material variability.
The pharmaceutical industry was about to catch up with other manufacturing industries, and the end appeared to be in sight for the old adversarial relationship between the regulator and the regulated.
A season of change had arrived.
But first, the industry and regulators had to come to a mutual understanding of what was critical to quality and what needed to be controlled.
The consensus standards process and ASTM E55 activities opened a dialogue, allowing the parties to reach a common understanding of the terms and practices needed.
Then FDA launched the Quality by Design (QbD) initiative, the basics of which had been defined within ICH Q8. While outlining the suggested content for the pharmaceutical development section of a regulatory submission, Q8 described how information and knowledge gained from pharmaceutical development studies and manufacturing experience provide the scientific understanding required to establish a design space, specifications, and manufacturing controls.
By definition, QbD focused the industry on drug development. Eager for innovation and regulatory flexibility, a number of pharmaceutical manufacturers supported the QbD pilot. Gaining flexibility required defining critical quality attributes (CQAs) as soon as possible.
However, all the parties involved had overlooked the potential for confusion. How could R&D define attributes, outside of its domain and area of practice, that were critical to quality?
In response, some pharmaceutical manufacturers turned, not to their own prior knowledge and manufacturing histories, but to the pharmacopeias. CQAs were defined as assay, hardness, dissolution, content uniformity, appearance, description, and impurities.
Had we forgotten the advice of Socrates, that the definition of terms is the beginning of wisdom?
In the old days of business, we considered the tests and specifications of monographs as attributes critical to quality. Today, many of us still do.
These tests address the quality of the individual samples tested; according to USP, their results cannot be extrapolated to other dosage units in the batch. They yield no information that would make it easier to vary a process in real time, to improve it.
At the turn of this century, the pharmaceutical industry supported change; a decade later, are we merely using new jargon while practicing just as we did before? Is it that we cannot define what is critical about a manufacturing process, its raw materials and its in-process materials?
As a veteran of this industry, I find this hard to believe.
The underlying challenge still remains: the real-time control of attributes critical to product quality. Is pharma up to that challenge?
About the Author
Ali Afnan, formerly a scientist at CDER’s Office of Pharmaceutical Sciences, and a member of FDA’s original PAT Team, is now principal of the consulting firm, StepChangePharma. He can be reached at email@example.com