Pharmaceutical pipelines struggle to identify and exploit promising new drug candidates. Pharmacologist Silvio Garattini comments on the PLoS article on predictive models for R&D
Juicy fruits are often hard to reach, as they hang from the higher branches of the tree: a similar situation applies to many new drug candidates emerging from pharmaceutical pipelines. The provocation comes from a paper published in PLoS by analysts Jack Scannell and Jim Bosley of the University of Oxford. Too few new medicines have been approved in recent years, they write, compared to the high investments in R&D and to the sophistication of the science and technology available. According to the paper, development costs doubled roughly every nine years over the period 1950-2010, even though an enormous amount of “brute force” is available to pharma companies for R&D activities. Nevertheless, only a few drug candidates successfully pass the clinical phases of development. The two experts of the Centre for the Advancement of Sustainable Medical Innovation call this phenomenon “Eroom’s Law”: the pipeline crisis is perpetuated, they argue, because the current R&D model is hardly sustainable.
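A back-of-the-envelope check puts the doubling figure quoted above in perspective (this is our own arithmetic, not a calculation taken from the paper): a cost that doubles every nine years grows roughly a hundredfold over the 1950-2010 period.

```python
# Illustrative arithmetic only: compound growth implied by the claim that
# development cost per approved drug doubles every nine years.
years = 2010 - 1950          # the 60-year span discussed in the paper
doubling_period = 9          # years per doubling
multiplier = 2 ** (years / doubling_period)
print(f"Implied cost multiplier over {years} years: ~{multiplier:.0f}x")
```

The roughly hundredfold increase is what makes the authors describe the current R&D model as hardly sustainable.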
Predictive validity needs improvement
The low efficiency of R&D would be the result of the predictive models used to explore the pharmacological research space and of their actual capacity to predict the final clinical efficacy of a drug candidate in human trials (Box “The research space”). The models currently in use, the authors say, refer to therapeutic areas where many medicinal products are already available: these are the fruits hanging from the lower branches of the tree. But many areas of therapeutic need are still waiting for solutions, and the predictive validity of current models is too low to address them.
According to Scannell and Bosley, a complete rethinking of the basis of predictive models is needed, so that an increasing number of drug candidates may reach the final phases of development. One possible approach would be to lower the validity and predictivity requirements for models used to evaluate rare or orphan diseases. The new modalities would also help to fully exploit the “brute force” of the available technology. The focus should be on “how” experiments predict the true pharmacological activity of the candidate, with less attention paid to the industrial scale-up of the method or to its academic “elegance”.
«It should also be considered that new products are often authorised without relevant clinical efficacy. The first question I would ask is whether the easier things to approve have already been found, so that only the complex ones remain. This is why it is difficult to find predictive models», says Silvio Garattini, director of the Milan-based Istituto di Ricerche Farmacologiche Mario Negri.
When a sufficient number of medicinal products is available on the market for a certain therapeutic area, research in that area becomes less interesting from the industrial point of view, and there is less room to study new approaches based on more complex models, suggests the PLoS article. «The authors consider the issue from a theoretical and economic point of view. From the research point of view, we need to look at where the difficulties are located, i.e. at the more complex problems. There are no medicines available to treat stroke, Alzheimer’s disease or certain tumours. These areas need to be addressed», Garattini tells Pharma World.
The scientific method typically requires that each hypothesis be experimentally verified: something that may prove difficult, the authors explain, for in vitro or in vivo models that are quite far from the real situation found in human beings (Box “A change of paradigm”). «Relevant findings should always be verified in humans. It is not possible to establish the predictivity of a certain test in advance; this is possible only after the clinical trials», adds the director of the Mario Negri Institute.
Medicines approved fifty or sixty years ago suffered from the same bias, but the pathological reference framework was far less complex. «We knew that histamine plays an important role in allergic reactions, for example: in many instances developing an anti-histaminic agent was enough. Alzheimer’s disease involves an entire organ, the one that regulates the whole organism. It is not so easy to address that problem», explains Silvio Garattini.
How to find good models
Pre-clinical models are very important, as they provide many useful indications for the clinical phases, and they should therefore help increase predictivity. A drug candidate is usually evaluated in humans only when a relevant effect is present. «Predictivity always depends on the findings of clinical trials. As a pharmacologist, I see an issue with research: first of all I need to find the most predictive model. In my opinion there are no rules that allow clinical evaluation to be skipped – adds Garattini. – Another issue is that we always look at the benefits of a medicinal product, but it often fails due to toxicity».
Personalised medicine adds another bias to this already complex picture, as the predictive model should address very small populations of patients carrying a specific biomarker or mutated protein. This causes an exponential increase in the number of models for each disease. «This is the big issue: it is not easy to set up such models, or to verify efficacy and toxicity for the single case», comments the pharmacologist.
The change in perspective proposed by the two authors is based on a statistical analysis typical of decision theory, grounded in the quality of data rather than in their sheer quantity. «The problem is deciding what “quality” means. In my opinion, the authors mean that the effect should be consistent, reproducible over several animal tests, and validated through tests measuring the same type of data. We also need to ask whether, by lowering the threshold and thus accepting lower efficacy, we would still find an effect in humans», says the expert.
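The decision-theory argument can be illustrated with a hypothetical simulation (our own sketch, not taken from the paper; the function name, thresholds and parameters are all assumptions): candidates are screened by a model whose score correlates with the true clinical effect at a level rho, the model's predictive validity, and only the top scorers are advanced to the clinic.

```python
import random

def hit_rate(rho, n_candidates=10_000, n_advanced=100, seed=0):
    """Fraction of advanced candidates that are truly effective,
    for a screening model with predictive validity rho (assumed setup)."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_candidates):
        true_effect = rng.gauss(0, 1)
        # Model score = signal correlated at rho + independent noise.
        score = rho * true_effect + (1 - rho**2) ** 0.5 * rng.gauss(0, 1)
        candidates.append((score, true_effect))
    # Advance only the candidates the model likes best.
    candidates.sort(reverse=True)
    advanced = candidates[:n_advanced]
    # Call a candidate a "hit" if its true effect is in the top ~1%
    # of the population (z > 2.33) -- an arbitrary success threshold.
    return sum(1 for _, eff in advanced if eff > 2.33) / n_advanced

for rho in (0.2, 0.5, 0.8):
    print(f"validity {rho:.1f}: hit rate {hit_rate(rho):.2f}")
```

In this toy setup the hit rate among advanced candidates rises sharply with rho, which is the qualitative point the authors make: improving the validity of the screening model matters more than adding “brute force” to the screen.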
The debate about the need to optimise the R&D chain through the full exploitation of technology and the reduction of costs is not new. With their article, Scannell and Bosley try to propose a new and “consistent” way of choosing which products to develop. «Research accounts for only 8% of pharmaceutical revenues. Ascribing costs to research is a false issue: they depend on the amount of profit the industry is looking for and the time needed to reach it. This is the basis of the high price of medicines: a high price for a truly curative medicinal product is something different from products that do not offer this certainty. Antitumorals, for example, often extend life expectancy by just two months, perhaps with a poor quality of life, because these medicines are highly toxic in combination with chemotherapy. It is here that greater attention is needed», says Silvio Garattini.
How to choose investment goals
One could argue that pharma companies might continue to prefer traditional “solid” models for the allocation of R&D investments over more innovative models characterised by a more “intuition-based” predictive validity. According to the PLoS article, the motivations behind the final decisions include the arguments to be used during regulatory filing, unit costs, and the possibility of integration with high-throughput technologies. The risk is that orphan diseases will continue to lack therapeutic solutions. «From this point of view, if industry does not invest, it is the State that should do so. There have been no new psychiatric drugs for thirty years, because industry thinks it would be difficult to find something new. The same occurs for Alzheimer’s disease: we would need an animal model for such a complex condition, but this is somewhat “presumptuous”. In research, many failures will always occur before we find the right way», says Garattini.
The need for a new “lingua franca”
According to Bosley and Scannell, a new “lingua franca” is needed to help identify and circulate more easily the information about the predictive validity of R&D models. This language should overcome the limits of traditional scientific language, the common language of research. «The authors ask for a more consistent use of terms. Speaking about reliability, for example, one should specify whether a repeated measurement is consistent. This depends on the conditions in which the variable is observed, and it may vary greatly between lab A and lab B. One model might have low variability and yet be hard to verify or transfer; another might have high variability and a high capacity to be transferred», says the director of the Mario Negri Institute.
The validity domain of each model should be better characterised, much as physics does, where classical mechanics cannot be used to explain the behaviour of sub-atomic particles. The correct representation of the reference system for decisions is one of the main requirements for applying decision theory. Translational medicine, say the authors, and the commercial exploitation of medicinal products often involve aspects that are related to each other and that are discussed only at a qualitative level. «The article is a theoretical discussion that tries to make explicit the elements needed to verify a model in order to increase the success rate. The authors use mathematical terms, but it is also possible to use terms coming from experience and from knowledge of the development space for new medicines. It is difficult in any case», comments Garattini.
The research space
According to Scannell and Bosley, reductionist predictive models have a low predictive value and can prove redundant, as they offer obvious answers for pathological conditions that are already characterised. The development of new and reliable models is the main bottleneck of research. The few non-obvious models available are not yet characterised well enough to give solid results. Their high noise may introduce bias, due to the variable professional skills of the operators or to the “chance” of finding a positive result.
Current predictive models would be inadequate because of their poor reproducibility; reproducibility, rather than the quality of the model itself, is the primary goal of the current regulatory framework. Qualitative evidence that is hard to measure exactly is more difficult to evaluate, but it may prove far more effective in assessing the real utility of a candidate medicinal product in humans. On-field observations made during clinical practice may have greater predictive validity than “throughput” evaluation methods. The mechanism of action of the active ingredient, or the design of the trial, might lose their central position in favour of the observation of the “true” clinical effect of the product.
A change of paradigm
«Everything is useful, but nothing is decisive. A model that puts together human cells [i.e. organs-on-a-chip, Ed.] is further from the human being than the simplest living organism, which is, in any case, a complex organism», says Silvio Garattini. Studying in vitro the pharmacological action of a drug candidate on a receptor is different from addressing the issues that emerge in vivo, i.e. the relevance of that receptor to the pathology. «The organism is highly redundant; the receptor might be just one of the factors that cause the disease», adds the expert.
The research paradigm has changed dramatically over the past decades, and Silvio Garattini remembers the pioneering times when he studied the adrenergic receptor using the hearts of rats or rabbits. «We could observe the direct functional consequence of the drug and its toxicity at the same time. Now we talk of mechanisms, with high heterogeneity among different diseases. Lung tumours, for example, are characterised by a wide range of genetic profiles. The medicinal product should be targeted to a specific profile, and the experimental model should address the specific genomic profile of the tumour’s subtype», concludes Silvio Garattini.