
Sampling at IFPAC: Will Pharma Move to Large N?


    by Agnes Shanley

    The GMP code requires that manufacturers run a “statistically significant” number of tests on product, and assumes that they maintain control over their processes. In practice, however, batch testing currently involves between 10 and 30 samples. As more drug manufacturers implement at-line, online, or inline process analyzers, the number of tests run could increase dramatically.

    A PQRI committee has been discussing sampling and product testing for quality control, and held a meeting on the topic last fall. Its goal is to develop a white paper and open it up to public discussion.

    At IFPAC this week, committee members Sonja Sekulic of Pfizer and Merck’s Lori Pfahler and Gert Thurau shared results of that meeting, discussed QC sampling and the direction PQRI is taking, and opened the floor to some lively debate. Currently, EC regulators use content uniformity for release, but USP does not offer specific standards for testing larger samples of tablets for content uniformity.

    The question now, Sekulic noted, is: “How much tighter should the standard be?” There should be no disincentives for companies to use larger data sets for testing, she said.

    Merck’s Pfahler discussed her company’s experience and some of the statistical issues involved. At PQRI’s fall conference, she and her colleague Thurau had presented results based on five years of real-time release work. Merck used tablet weight and automated sampling, testing 240 to 690 tablets per batch, collecting NIR data on average concentration by batch (mg/g), and plotting operating characteristic curves.

    She then summarized PQRI’s efforts. “We spent a lot of time looking at OC curves” (with the y axis indicating the probability of passing the batch, and the x axis the % relative standard deviation of the product). As RSD goes up, she said, p, the probability of passing the batch, goes down. The steepness of the curve indicates the test’s discrimination capability. Several statistical methods have been proposed so far, she noted. Compendial tests are often multistage, making them difficult to run, so the tests that PQRI has proposed so far are single-stage tests.
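    The OC-curve behavior Pfahler describes can be sketched numerically. The simulation below is illustrative only: the acceptance limits (sample mean within 95–105% of label claim, sample %RSD below 6%) and the sample sizes are assumptions for demonstration, not any compendial or PQRI criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def pass_probability(true_rsd, n=30, trials=5000,
                     mean_limits=(95.0, 105.0), rsd_limit=6.0):
    """Estimate P(pass) for a hypothetical single-stage test: accept the
    batch if the sample mean falls within mean_limits (% label claim) and
    the sample %RSD is below rsd_limit. Limits are illustrative only."""
    passes = 0
    for _ in range(trials):
        # Simulate n content-uniformity results (% of label claim).
        x = rng.normal(100.0, true_rsd, n)
        sample_rsd = 100.0 * x.std(ddof=1) / x.mean()
        if mean_limits[0] <= x.mean() <= mean_limits[1] and sample_rsd < rsd_limit:
            passes += 1
    return passes / trials

# As true %RSD rises, P(pass) falls. Comparing n=30 with n=240 shows the
# larger sample steepening the OC curve, i.e., sharpening the test's
# ability to discriminate good batches from bad ones.
for rsd in (2.0, 4.0, 6.0, 8.0):
    print(rsd, pass_probability(rsd, n=30), pass_probability(rsd, n=240))
```

    Plotting P(pass) against true %RSD for each n traces exactly the kind of OC curve discussed above: near 1 for low-variability batches, falling toward 0 as variability grows, with the larger-n curve dropping more sharply.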

    Avoiding Zero Tolerance
    In addition, she noted, the committee is trying to avoid zero-tolerance criteria and is working with both large N and modified large N approaches. Europe currently has a chapter, she said, with versions of large N and tolerance-interval tests, but both have a zero-tolerance component.
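    The zero-tolerance concern scales directly with sample size: if every individual unit must fall inside a hard limit, the chance that at least one of n units lands outside it grows with n. A back-of-envelope sketch, assuming independent units and an illustrative 0.1% per-unit out-of-limit rate (both assumptions, not figures from the talk):

```python
def zero_tolerance_fail_prob(p_unit_out: float, n: int) -> float:
    """P(at least one of n independent units falls outside a zero-tolerance
    limit), given each unit is out of limit with probability p_unit_out."""
    return 1.0 - (1.0 - p_unit_out) ** n

# A batch with only 0.1% of units outside the limit still fails more and
# more often as n grows toward the 240-690 tablets Merck tested:
for n in (30, 240, 690):
    print(n, round(zero_tolerance_fail_prob(0.001, n), 3))
# prints roughly: 30 0.03, 240 0.213, 690 0.499
```

    This is why pairing large N with a zero-tolerance criterion penalizes good batches: at n = 690, even this notionally good batch would fail about half the time.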

    Ultimately, PQRI’s committee opted for a quadrant approach. Pfahler emphasized the limitations of the testing and the statistics. “Release tests are just an audit of our system,” she noted. “We ensure quality by assuring that processes and products are well designed from the start.”

    Debate began, initiated by consultant and former FDA PAT team member Ali Afnan, who asked whether there was a control target, whether they planned to control to a normal distribution, and whether they had considered using military sampling standards, specifically MIL 1916, as a model. “In a couple of these tests, tolerance intervals can be nonparametric,” Pfizer’s Sekulic noted. “Normal distribution is just a theoretical thing. You can’t have content uniformity follow a normal distribution. Some tests assume it but others don’t.” She also countered that military standards were meant as an audit for incoming and outgoing supplies, not intended to guarantee a given quality standard.

    During the question-and-answer session, consultant and NIR expert Emil Ciurczak asked, tongue in cheek, whether the figures reflected the need to make regulators happy, by taking the maximum number of tests, or management happy, by taking the minimum.

    “We don’t think it’s about taking the highest number of samples possible,” Sekulic said. “Depending on how process and facilities are operating… if you’re operating at 5% RSD, then, depending on whether you use one curve or the other, it will help you decide whether product is good enough.”

    Right-Sizing the Sample
    “It’s about right-sizing the sample to understand the variability that’s happening,” said Pfahler. The ICH EU standard was set based on practice, not patients, she said, and excluded processes with too much variability.

    One attendee distinguished between variability and interval criteria. “The whole problem is that we have a mixture of variability criteria and interval criteria,” he said. “If you think about variability criteria, large N is good because it increases the chance that you will find a bad batch, but it also increases the potential of rejecting a good batch.”

    The real problem, Pfahler said, is the zero-tolerance interval. “There is a producer risk of failing a good product vs. a consumer risk of releasing a bad one,” said FDA’s Friedman. There is a tension between the two, he said, that can essentially be resolved through more upstream testing and dynamic control. “So the question really is,” he said, “why can’t we get business and quality interests to converge through industrial modernization?”

    During the Q&A session, Friedman emphasized the importance of measuring critical quality attributes, which are meant to link to safety and efficacy. “CQAs never change in their criticality,” he said. “As you increase detectability, process parameters can become more or less critical, but attributes stay the same.” He added, “A dissolution or assay test is considered critical, since these are our in vitro quality measures used to try to detect a potential change in bioavailability or efficacy.”

    “The goal should be for every batch, every day, and every unit to be equivalent to and as safe as clinical product. To the extent that one can use dynamic process control and other control for better assurance, this goal should be attainable,” he said.

    He noted that some assay limits in USP, such as 93–107% or 95–105%, are tighter for certain drugs. These make a clinical connection based on a narrow therapeutic range, but a strong link between specifications and clinical performance is not always made. “It is well known that, in some cases, this link could be better explored during product development. We have not always forced ourselves to form strong clinical connections with dissolution, impurities, and content uniformity,” he said.

    “We won’t have a clear understanding of the clinical relevance of manufacturing for quite a few years,” noted Fernando Muzzio, professor at Rutgers University. “But,” he noted later, “large N is going to happen because of PAT and continuous manufacturing.”




