Developing cures for today’s complex diseases under ever-decreasing budgets and ever-stricter regulatory requirements demands drastic optimisation of the discovery process to avoid late-stage failures. Accelrys looks at how predictive science tools have evolved and are now delivered directly to drug discovery teams.
The past decade has seen profound and far-reaching changes in the pharmaceutical industry. In part, these changes have been driven by concerted attempts to accelerate successful new medicines to market by applying critical lessons learned during late-stage clinical trial failures to early-stage discovery – often referred to as the ‘fail fast, fail early’ paradigm. The looming financial ‘patent cliffs’ associated with multibillion-dollar blockbuster drugs coming off patent are also driving change and forcing pharmaceutical companies to adopt leaner, more efficient approaches to drug development.
As a result of these industry trends, many pharmaceutical organisations are fundamentally changing the way they do research. For example, many are looking to reduce costs by externalising some or all of their discovery processes. Similarly, many are restructuring research teams into smaller, more agile discovery groups. A significant number of organisations are also looking beyond small molecule drugs and considering new drug markets such as biologics that have improved patent protection. Collectively, drug design teams today must do more, with less, faster.
In early drug discovery, optimising a molecular lead series is a critical step in transitioning initial hit compounds from screening into potent, selective and bioavailable agents suitable for progression to preclinical development and eventual consideration as candidate drugs. With the advent of the ‘fail fast, fail early’ paradigm, this process is now genuinely a multi-objective optimisation: scientists must not only achieve biochemical potency, they must simultaneously optimise the other characteristics of a drug-like compound, such as selectivity and reasonable in vivo pharmacokinetics.
As part of a co-ordinated strategy to achieve these objectives, services such as predictive science are now being routinely delivered directly to drug discovery teams (see Figure 1). Advanced modelling and simulation solutions for both small molecule and macromolecule-based drug design can help these teams investigate and test hypotheses in silico prior to costly experimentation, thus reducing the time and expense involved in bringing new drugs to market.
Figure 1: A typical desktop Structure Activity Relationship (SAR) dashboard, as used by many medicinal chemists today to optimise drug series efficiently. This example includes predicted molecular properties, colour-coded for quick identification of favourable or unfavourable values
Today, the race to do more with less, faster is significantly increasing the use and positive impact of predictive science in three key areas: ADMET studies, automated model building/validation and biotherapeutics.
Accelerating ADMET studies
The ‘fail fast, fail early’ paradigm owes much to lessons learned from late-stage failures in clinical trials. Sometimes these failures stem from a change in commercial direction, but in all too many cases they are the result of scientific shortcomings, e.g. failing to show significant efficacy or not demonstrating a large enough safety margin. Some of these issues can be detected ahead of time using bioavailability or toxicity studies, but such studies are themselves costly and time-consuming. Any approach that can flag potential undesirable outcomes in a new chemical entity before expensive clinical trials are initiated can reduce clinical attrition and prove highly advantageous to an organisation.
In vitro methods associated with absorption, distribution, metabolism, excretion and toxicity (ADMET) studies have been a key strategy in reducing costs and increasing compound throughput earlier in the drug development cycle. However, while the later stages of drug discovery typically focus on optimising one or two series of compounds, the earlier stages typically consider multiple series of compounds for testing. Hence, the throughput of many in vitro ADMET end-points is often not sufficient to enable testing of all compounds.
Fortunately, computational models (often referred to as in silico models) provide a fast and relatively low-cost ADMET prediction service alongside both in vitro and in vivo methods. These models range from very high throughput permeability, microsomal stability and blood-brain barrier models to pharmacokinetics/pharmacodynamics (PK/PD) models to predict the concentration and the effect of a drug dose over time.
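To make the last of these concrete: in the standard one-compartment model with first-order absorption and elimination, the plasma concentration after a single oral dose D is C(t) = F·D·ka / (V·(ka − ke)) · (e^(−ke·t) − e^(−ka·t)), where F is bioavailability, V the volume of distribution and ka, ke the absorption and elimination rate constants. The short Python sketch below illustrates the calculation; the dose and parameter values are illustrative placeholders, not measurements for any particular compound.

```python
import numpy as np

def concentration(t, dose_mg, F, ka, ke, V_L):
    """Plasma concentration (mg/L) at time t (h) after a single oral dose:
    one-compartment model, first-order absorption and elimination (ka != ke)."""
    return (F * dose_mg * ka) / (V_L * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Illustrative parameters only: 100 mg dose, 80% bioavailability,
# ka = 1.0/h (absorption), ke = 0.1/h (elimination), V = 40 L
t = np.linspace(0, 24, 49)  # sample every 30 min over 24 h
c = concentration(t, 100, 0.8, 1.0, 0.1, 40.0)
print(f"Cmax ~ {c.max():.2f} mg/L at t ~ {t[c.argmax()]:.1f} h")
```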
The predictive accuracy of such models is the most critical element of any in silico end-point. Research groups need confidence in a model before they can take its calculated outcomes and base design decisions on them.
Until recently, many of these ADMET and PK/PD models could be developed only by larger pharmaceutical companies with access to high-quality, validated in vitro and in vivo data. Fortunately, this is now rapidly changing with the advent of publicly available databases such as ChEMBL, PubChem, DrugBank, ChemBank and the PDB, which have collected much of the data available in the public domain. In addition, the Innovative Medicines Initiative (IMI) has sponsored several projects, such as eTOX and OpenPHACTS, to encourage the innovative development of in silico models, in terms of both underlying tools and algorithms.
In silico ADMET models are garnering broad interest and are increasingly being evaluated and applied by regulators. It is now reasonable to suggest that in silico methods might soon complement, and even strengthen, the evidence associated with regulatory filings.
Automating and validating predictive models
As well as supporting the in silico prediction of ADMET end-points, statistical models such as Quantitative Structure-Activity Relationship (QSAR) models have long been used to predict biological activity, helping to guide and optimise the design of experiments by eliminating unpromising compound proposals prior to synthesis. This has sometimes been referred to as ‘quality at the point of design’.1 In the past, the development and validation of these models was the preserve of highly specialised QSAR experts within the computational chemistry community. However, with increasing cost pressure and workforce reductions, a large community of such experts is often no longer available, and organisations today often struggle to build and maintain all of the QSAR models that projects typically require.
Two notable efforts have been made to address this resource gap: the QSAR Workbench from GlaxoSmithKline2 and AutoQSAR1,3 at AstraZeneca. The first enables a wider computational community to develop and validate models in a guided fashion. The second addresses the automated maintenance of models, managing their lifecycle and applicability. Both are direct responses to the shortage of skilled QSAR artisans who would previously have performed these tasks.
QSAR model development and validation is a complex workflow involving choices of training set, molecular descriptors, statistical modelling methods, validation methods, and model comparison and selection.
The QSAR Workbench provides a guided workflow, designed by one or two QSAR experts, in which non-expert computational chemists can walk through these choices in a directed fashion, building, validating and comparing a large number of QSAR models and deciding which are best deployed to the project.
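By way of illustration, the sketch below compresses that workflow (training set, descriptors, model building, cross-validated comparison) into a few lines using the open-source RDKit and scikit-learn libraries. It is a generic, minimal example rather than the QSAR Workbench itself, and the compounds and pIC50 values are illustrative placeholders.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Illustrative placeholder training set: (SMILES, pIC50)
training_data = [
    ("CCO", 4.2), ("c1ccccc1O", 4.8),
    ("CC(=O)Nc1ccc(O)cc1", 5.1), ("CC(=O)Oc1ccccc1C(=O)O", 5.5),
    ("CN1C=NC2=C1C(=O)N(C)C(=O)N2C", 5.9), ("CC(C)Cc1ccc(cc1)C(C)C(=O)O", 6.3),
]

def featurise(smiles):
    """A deliberately tiny descriptor set; real workflows use hundreds."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

X = np.array([featurise(smi) for smi, _ in training_data])
y = np.array([act for _, act in training_data])

# Model building and validation: cross-validated R^2 gives a first basis
# for comparing candidate models before deploying the best to the project.
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=3).mean())
model.fit(X, y)  # final model trained on all available data
```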
As a therapeutic project develops, the chemistry being used often changes and, as a consequence, activity models drift. That is, the compound training set (from the start of a project), from which a model is built, becomes less similar to the molecules being predicted (later in the project), and the accuracy of the model declines over time.4 The traditional remedy would be to have an expert extract new activity results from the corporate database and rebuild or update the model. AutoQSAR automates this process by retrieving new data and re-running the model building and validation process, keeping the models relevant to the current project data.
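In outline, that maintenance cycle can be expressed as a small retraining routine. The sketch below is a generic illustration, not the published AutoQSAR implementation; fetch_latest_activity_data, build_descriptors and the R^2 acceptance threshold are hypothetical placeholders for site-specific components.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

R2_THRESHOLD = 0.6  # hypothetical acceptance criterion for redeployment

def refresh_model(fetch_latest_activity_data, build_descriptors):
    """Rebuild a project QSAR model from the newest corporate data.

    fetch_latest_activity_data() -> list of (smiles, activity) pairs
    build_descriptors(smiles)    -> feature vector
    Both arguments are placeholders for site-specific data access and
    featurisation; this routine would typically run on a schedule.
    """
    records = fetch_latest_activity_data()
    X = [build_descriptors(smiles) for smiles, _ in records]
    y = [activity for _, activity in records]

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    cv_r2 = cross_val_score(model, X, y, cv=5).mean()

    if cv_r2 >= R2_THRESHOLD:
        model.fit(X, y)      # retrain on all current data
        return model, cv_r2  # caller deploys the refreshed model
    return None, cv_r2       # below threshold: flag for expert review
```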
Speeding predictive methods for improved biotherapeutics
Alongside small molecule-based drug discovery, the pharmaceutical industry is dramatically expanding its exploration of novel biologic molecules as therapeutic medicines. One of the fastest growing areas of research has been the development of novel therapeutics based on second- and third-generation antibody technologies.
Bolstered by increasing drug approvals from the FDA in recent years5 and improved patent protection rights,6 biological therapies are some of the most valuable in the pharmaceutical industry and are expected to account for 50% of the top 100 drugs in 2018.5 However, the development of biologics is non-trivial and includes a number of unique challenges. For example, storage and administration typically involve high-concentration solutions requiring specific biophysical profiles, including solubility and thermal, chemical and aggregation stability.
Additionally, biologics can potentially elicit unfavourable responses from a patient’s own immune system, so scientists must identify and mitigate potential immunogenic characteristics prior to development. It is impossible to test a biologic against all of these development requirements outside human subjects, and the testing that is feasible is time-consuming, expensive and dependent on both experimental test methods and cell lines to express the biologic material.
As the pharmaceutical industry looks to speed up and rationalise the development process for novel biologics, it is increasingly turning to predictive methods to help identify and optimise the best biologic leads early on, as well as to reduce the number of expression cycles. When applied alongside experimental procedures, predictive methods not only help to prioritise the selection of biologics, they can also identify potential undesirable properties long before any material has been expressed. For example, it is now possible to accurately predict potential chemical liability motifs, pH and thermal stability7 and even aspects of aggregation propensity8,9 using model data alone.
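The first of these lends itself to a simple illustration: well-known chemical liability motifs, such as asparagine deamidation (NG/NS), aspartate isomerisation (DG) and methionine oxidation sites, can be flagged directly from sequence. The sketch below shows the idea; the motif list is deliberately minimal rather than a validated rule set, and the CDR-like input sequence is a hypothetical example.

```python
import re

# Common sequence liability motifs (illustrative, not exhaustive):
#   N[GS] - asparagine deamidation hotspots
#   DG    - aspartate isomerisation hotspot
#   M     - methionine oxidation (context-dependent)
LIABILITY_MOTIFS = {
    "deamidation": r"N[GS]",
    "isomerisation": r"DG",
    "oxidation": r"M",
}

def scan_liabilities(sequence):
    """Return (liability, 1-based position, motif) hits in a protein sequence."""
    hits = []
    for name, pattern in LIABILITY_MOTIFS.items():
        for match in re.finditer(pattern, sequence):
            hits.append((name, match.start() + 1, match.group()))
    return sorted(hits, key=lambda hit: hit[1])

# Hypothetical CDR-like fragment for illustration
for hit in scan_liabilities("ARDNGYSSGWMDYW"):
    print(hit)
```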
Today the development and validation of novel biophysical property prediction methods in the field of biotherapeutics is arguably one of the most exciting and active areas of innovation in predictive science.
Predictive science offers a wealth of possibilities for scientists to do more, with less, faster – in designing in silico ADMET models that help teams apply ‘quality at the point of design’ or ‘fail fast, fail early’ with unproductive chemical series; in building powerful predictive QSAR models to accelerate hit identification; and in exploring novel biologic molecules as therapeutic medicines. Predictive science has not only come of age at a challenging time for a pharmaceutical industry beset by sweeping change, it is also proactively leading the charge towards a new age of better, faster, more cost-effective and innovative drug discovery and development.
References
1. Davis A. M. and Wood D. J., Mol. Pharm., 2013, DOI: 10.1021/mp300466n.
2. Luscombe C., ‘QSAR Workbench: Guided QSAR Model Building for non-Experts’, The UKQSAR and ChemoInformatics Group, Cambridge, UK, 2011. (http://www.ukqsar.org/slides/Nov2011_Luscombe.pdf)
3. Rodgers S. L., Davis A. M., Tomkinson N. P., van de Waterbeemd H., Mol. Inform., 2011, 30(2-3), 256-266, DOI: 10.1002/minf.201000160.
4. Rodgers S. L., Davis A. M., van de Waterbeemd H., QSAR Comb. Sci., 2007, 26(4), 511-527, DOI: 10.1002/qsar.200630114.
5. ‘EvaluatePharma: World Preview 2018 Embracing the Patent Cliff’, EvaluatePharma, 2012.
6. ‘Patient Protection and Affordable Care Act (PPACA)’, Public Law 111-148, 124 Stat. 119, 111th Congress, 2010.
7. Spassov V. Z., Yan L., ‘pH-selective mutagenesis of protein-protein interfaces: in silico design of therapeutic antibodies with prolonged half-life’, Proteins, 2013, DOI: 10.1002/prot.24230.
8. Chennamsetty N., Helk B., Trout B. L., Voynov V., Kayser V., PCT/US2009/047954, filed 19 June 2009.
9. Lauer T. M., et al., J. Pharm. Sci., 2012, 101(1), 102-115, DOI: 10.1002/jps.22758.
The authors
Adrian Stevens, Senior Product Marketing Manager for Life Sciences; Sabine Schefzick, Product Marketing Manager for Pre-clinical, ADMET; Clifford Baron, Director of Biology Product Marketing; and Rob Brown, Senior Director – Informatics Marketing