Wednesday, August 31, 2011

Biomarker of depression: Doctor I feel blue or maybe just over-methylated

In the August 30th issue of PLoS One, Fuchikami and colleagues report their findings about a new biomarker of severe depression based on Brain-Derived Neurotrophic Factor (BDNF) gene methylation profiles (reference).  Briefly, the authors analyzed the methylation profile of the BDNF gene in blood samples collected from 20 patients clinically diagnosed with severe depression and 18 healthy volunteers (see figure 1).  Their analysis covered 81 CpG units upstream of exon 1 (CpG I) and 28 CpG units upstream of exon 4 (CpG IV) of the BDNF gene.  The methylation status of CpG I differed markedly between patients and controls, with an overall trend toward hypo-methylation in patients with major depression.  The biological implication of this methylation profile is currently unknown. 

Fig.1

Considering the small sample size used in this study, these findings should be viewed as an initial screen for potential biomarker candidates that will require substantially more work to be confirmed.  First, because the individuals enrolled in this first study were exclusively of Japanese origin, the relevance of BDNF gene methylation status as a biomarker of depression remains to be established in a more ethnically diverse population.  Second, as I have mentioned in an earlier post (link), the reductionist sample selection process used in this study probably yielded over-optimistic statistical association values that may not translate well to the more complex real world.  Indeed, the diagnosis of major depression is almost never a simple binary determination of “healthy” vs. “depressed”.  Rather, diagnosing depression is a process of eliminating other conditions that manifest with similar symptoms.  Hence, analysis of the BDNF gene methylation profile in clinically related psychiatric conditions should constitute an important follow-up to this initial study.  Finally, assuming that these biomarker candidates are confirmed, it would be particularly interesting to determine whether current treatments for depression affect the methylation profile of the BDNF gene.

Of note, it seems that the field of biomarker discovery in the area of depression is picking up speed lately. This paper comes one day after the announcement by Lundbeck Canada of a $2.7 million donation in support of biomarker discovery in the area of major depression and bipolar disorder (announcement), and a few weeks after the cover story on Ridge Diagnostics’ depression blood test in the August issue of Psychiatric Times (see earlier post). 



Thierry Sornasse for Integrated Biomarker Strategy

Monday, August 29, 2011

Genome Wide Association Studies: Beyond the Sample Size Barrier


Genome Wide Association (GWA) studies (GWAS) are hypothesis-free experiments aimed at identifying possible associations between subtle genetic variations and disease risk and/or disease state (see overview in Nature Reviews Drug Discovery 7, 221).  Over the last few years, the number of GWAS has exploded thanks to the shrinking cost and improved performance of whole-genome analysis tools.  Despite their power to decipher the genetic susceptibility of many diseases, GWAS suffer from a major limitation: the sample size required to identify credible associations.  Because GWAS are hypothesis-free, all possible genetic variations are tested for association with the phenotype of interest, requiring much more stringent thresholds for statistical significance: depending on the penetrance of the genetic variation, the type I error threshold (aka alpha) is usually set between 10⁻⁷ and 10⁻⁵.  Thus, sample sizes in GWAS tend to be well above 1,000 cases and often exceed the 10,000 mark.  When considering a relatively rare disease or condition, these numbers can become a limiting factor.  Furthermore, these large sample sizes only permit testing of direct association hypotheses and not more complex hypotheses such as interactions between variants; the latter would require even larger sample sizes.  So, if sample size constitutes an inherent limit in GWAS, how can the field progress beyond this barrier?
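To make the arithmetic concrete, here is a minimal Python sketch of why the multiple-testing burden drives sample sizes up. The numbers are illustrative (a simple two-group normal approximation with a standardized effect size), not taken from any study discussed here:

```python
import math
from statistics import NormalDist

def per_test_alpha(family_alpha: float, n_tests: int) -> float:
    """Bonferroni-style per-test threshold: testing ~1e6 variants at a
    family-wise alpha of 0.05 pushes the per-test alpha down to ~5e-8."""
    return family_alpha / n_tests

def n_per_group(effect_size: float, alpha: float, power: float = 0.80) -> int:
    """Approximate n per group to detect a standardized effect size d in a
    two-group comparison: n ~ 2 * ((z_{alpha/2} + z_{power}) / d)^2."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

alpha_gwas = per_test_alpha(0.05, 1_000_000)   # ~5e-8, a common GWAS cutoff
print(n_per_group(0.1, 0.05))        # single-hypothesis study: ~1,600 per group
print(n_per_group(0.1, alpha_gwas))  # genome-wide threshold: ~8,000 per group
```

Same small effect, same power: tightening alpha from 0.05 to a genome-wide threshold roughly quintuples the required sample, which is the barrier the post describes.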

A recent paper published by Hicks and colleagues in Cancer Informatics (Cancer Inform 2011; 10: 285-204) offers a possible solution.  Focusing on breast cancer, the authors integrated GWAS information with gene expression data to estimate the combined contribution of multiple genetic variants acting within genes and putative biological pathways.  In addition, thanks to this approach, the authors were able to identify novel genes and biological pathways that could not be identified using traditional GWAS.



Thierry Sornasse for Integrated Biomarker Strategy

The computing power supporting modern genomics: a peek at the Broad Institute

In the August 12th issue of the Broad Institute’s “Five Questions”, Martin Leach – the Broad Institute’s Chief Information Officer – shed some light on the computing challenges of modern genomics at the institute.

Beyond the astonishing information management infrastructure required by modern genomics (i.e. systems able to function at the petabyte scale), Martin Leach also emphasizes the power of collaboration between the institute and the biopharma world to solve the ever evolving challenges of biological information management.  



Thierry Sornasse for Integrated Biomarker Strategy

First approved companion diagnostics for a lung cancer drug: XALKORI (crizotinib) and ALK FISH Test

On August 26th 2011, Pfizer announced FDA approval of XALKORI (crizotinib) – an ALK-specific kinase inhibitor – for the treatment of patients with ALK-positive, locally advanced or metastatic non-small cell lung cancer (NSCLC).  XALKORI was developed and approved in parallel with a molecular companion diagnostic, developed by Abbott Molecular, aimed at detecting rearrangements of the ALK gene on chromosome 2p23 by Fluorescence In Situ Hybridization (FISH). 

The parallel approval of this new drug and its companion diagnostic, the ALK FISH test, marks the first example of personalized therapy for lung cancer and reinforces the growing trend toward personalized medicine in oncology.  Indeed, earlier this month, the FDA approved Zelboraf (vemurafenib) and its companion diagnostic for BRAF-mutation-positive metastatic melanoma (See: A biomarker finds its drug), adding to the list of cancer treatments that depend on companion diagnostics (See: Personalized cancer medicine review).



Thierry Sornasse for Integrated Biomarker Strategy

Friday, August 26, 2011

Personalized cancer medicine review: predictive biomarkers

In the advance online issue of Nature Reviews Clinical Oncology of August 23rd, Nicholas La Thangue and David Kerr published a detailed analysis of the state and future of personalized cancer medicine (reference).  In this review, the authors offer a thorough analysis of the history, significance, and evolution of current predictive biomarkers of drug response in cancer (see table 1).

Table 1


First, a note about terminology. The authors use a biomarker nomenclature popular in the oncology field where the term “predictive” biomarker describes two types of disease biomarkers (see previous post for details): 

  1. Trait biomarkers: a stable predictor of disease risk or response to treatment (e.g. genotype, liver CYP expression)
  2. State biomarkers: evolving predictor of a disease stage (e.g. most medical diagnostics).


Second, beyond their analysis of predictive biomarkers, the authors also point to a potential unintended negative consequence of personalized cancer medicine.  While novel targeted cancer treatments offer the prospect of greater efficacy and lower risk to patients, these new personalized treatments tend to displace older, more affordable “untargeted” drugs, potentially reducing patients’ access to treatment.  Ideally, the use of these older broad-spectrum “untargeted” treatments could be optimized through the identification of novel predictive biomarkers of response to treatment.  However, considering the limited incentives associated with such efforts, it is unlikely that the industry will actively pursue this option.

Finally, this review highlights the growing need to revisit some of the initial assumptions associated with the development of predictive biomarkers and established companion diagnostics such as HER2 testing.  As I wrote in a previous post (link), the apparent direct connection between these early biomarkers (i.e. presence of target) and the biological process of interest (i.e. response to treatment) resulted in a development process that put little emphasis on biological qualification.  Now, it has become clear that this apparent connectivity is much more complex than initially assumed, forcing the cancer biomarker community to dedicate substantial efforts to clarify the biological significance of these early cancer biomarkers.



Thierry Sornasse for Integrated Biomarker Strategy

Thursday, August 25, 2011

Biomarker Studies: samples, hypothesis, and statistics


In a recent post on BiomarkerBlog, David Mosedale highlights a common problem with the design of biomarker discovery studies: reductionist clinical sample selection.  While it is tempting to initially explore for potential new biomarkers in highly contrasted clinical samples (e.g. healthy vs. diseased, benign early cancer vs. advanced metastatic cancer), this approach is almost guaranteed to yield over-optimistic results that do not translate easily to the real, complex world.  As a solution to this common problem, the author proposes that the design of the initial biomarker discovery study should more accurately reflect the intended application of the biomarker by including a spectrum of cases representative of the true complexity of the target patient population.  While I fully agree with David’s point, I would like to suggest an alternative view on this issue.

I would argue that the root cause of the disconnect between biomarker discovery and translation to medical use is the application of the right statistics to the wrong questions (i.e. the wrong statistical hypotheses).  Based on this premise, is there a fundamental issue with biomarker exploration using highly contrasted clinical samples?  I would argue that this approach can be useful as long as it is recognized for what it is: an initial screening step designed to test the minimalist hypothesis of whether a distinguishing factor (or factors) can be detected under artificially contrasted conditions.  Thus, the strength of the statistical association between the distinguishing factor and the selected sample phenotypes only reflects the pre-defined sample choice, not the true nature of the factor’s statistical association in the real-world population.  Hence, the use of this approach should be limited to the selection of potential biomarker candidates intended to be studied in a representative clinical sample.

Another case of inappropriate hypothesis definition is often encountered in the so-called validation of candidate biomarkers, where a subset of the clinical sample used for discovery is used to determine the predictive value of the candidate biomarker using techniques such as Receiver Operating Characteristic (ROC) curve analysis.  Here again, the statistical predictive values (positive and negative predictive values) derived from this approach are skewed by the initial sample selection, offering limited information about the predictive value of candidate biomarkers in the real world.
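The prevalence dependence of predictive values is easy to demonstrate. Below is a minimal Python sketch, with made-up sensitivity and specificity figures (not taken from any study discussed here), showing how a biomarker that looks excellent on a 50/50 case-control set degrades at a realistic disease prevalence:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value from Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value from Bayes' rule."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

sens, spec = 0.90, 0.90   # hypothetical assay performance
# Contrived 50/50 case-control set: PPV matches sensitivity (0.90)
print(f"PPV at 50% prevalence: {ppv(sens, spec, 0.50):.2f}")
# Same assay at a 5% real-world prevalence: PPV collapses to ~0.32
print(f"PPV at  5% prevalence: {ppv(sens, spec, 0.05):.2f}")
```

The assay itself is unchanged; only the case mix differs. This is exactly why predictive values estimated on the discovery sample say little about real-world performance.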

So what is the solution to this somewhat frustrating trend in biomarker research?  I would argue that biomarker scientists should learn to ask the right questions to statisticians, and that statisticians should learn to challenge biomarker scientists about the actual hypothesis they wish to test.



Thierry Sornasse for Integrated Biomarker Strategy

Wednesday, August 24, 2011

Biomarker Research: The Pre-Analytical Puzzle


In the August 17th issue of PLoS One, Mueller et al. describe the performance of a one-step tissue preservation method that not only maintains tissue morphology but also preserves phosphoproteins and optimizes biomolecule recovery (full paper).  Anybody who has dealt with the sometimes inextricable problem of choosing a tissue collection and preservation method will appreciate the potential of this new technique.  Indeed, currently available tissue preservation techniques are essentially exclusive, enabling either morphology analysis (e.g. formalin fixing) or proteomic analysis (e.g. snap freezing in liquid nitrogen) but not both.  Being able to use the same tissue sample to perform both types of analysis constitutes, in my mind, a major step forward.


This article reminded me how critical the steps of collecting, processing, and storing bio-samples (aka pre-analytical processes) are to biomarker discovery and development.  Take the following examples I have had to deal with in the past:

  • In the Alzheimer’s disease biomarker field, the performance of the amyloid-beta 42 fragment assay is critically impacted by the type of container used for the storage of cerebrospinal fluid samples (the type of plastic used in the manufacturing of the storage container appears to differentially affect the recovery of different amyloid-beta fragments; see ADNI Procedure Manual).
  • In the immunophenotyping field, the survival and hence recovery of certain leukocyte subsets (e.g. NK cells) is critically affected by the type of collection anticoagulant and the storage temperature of the samples.
  • In the cellular immunoassay field, the sample shipment conditions can dramatically affect the outcome of the assay.  Something as simple as shipping blood samples from clinical sites to the testing facility can turn into a nightmare during winter simply because the material freezes in transit.

These anecdotes give just a glimpse of the number of pre-analytical factors that need to be accounted for during biomarker development.  Hence, understanding and defining as early as possible the key pre-analytical parameters associated with the conduct of biomarker research is, in my mind, an essential component of a successful biomarker development plan.



Thierry Sornasse for Integrated Biomarker Strategy

Impatience along the way in biomarkers' long march from lab to clinic


Earlier this summer, Howard Lovy (Fierce Biomarkers) echoed the frustration of the medical community with the often over-hyped publicity surrounding reports of new biomarkers whose true medical impact remains elusive (read the full post here).


I believe that bridging the gap between the early excitement of discovery and the true satisfaction of positively affecting patient health requires a new mindset focused on bringing to the biomarker field the same development processes used in drug development (see my earlier post: “Translational Biomarker Development: mind the gap”).



Thierry Sornasse for Integrated Biomarker Strategy

Friday, August 19, 2011

The first potential personalized medicine in respiratory diseases: Lebrikzumab and periostin


In the August 3rd issue of the New England Journal of Medicine, Corren et al. from Genentech report the results from a phase 2 study of lebrikizumab – a humanized antibody specific for interleukin 13 – in adult patients with asthma.  While Genentech is not the only biopharmaceutical company developing antibodies specific for interleukin 13 (IL13) for the treatment of respiratory diseases (e.g. MedImmune CAT-354 and Novartis QAX576), it is the only company so far with a viable patient selection biomarker associated with its drug.  Indeed, Corren et al. showed that the positive effect of lebrikizumab treatment on lung function (FEV1) was more pronounced in patients with high serum periostin levels at baseline than in patients with low serum levels of this biomarker. 



Since periostin is a cellular matrix protein known to be modulated by IL13 and to be upregulated during tissue remodeling in the lung, it is tempting to speculate that serum periostin is indicative of a specific IL13-dependent asthma pathobiology.  Formal testing of this hypothesis in future clinical studies should yield the evidence necessary to advance periostin as the first patient selection companion diagnostic for a respiratory disease therapy.



Thierry Sornasse for Integrated Biomarker Strategy

A biomarker finds its drug: the BRAF V600E-specific Zelboraf (vemurafenib, PLX4032) story


On August 17th 2011, Roche and Daiichi Sankyo announced that the FDA approved Zelboraf and its companion diagnostic for BRAF-mutation-positive metastatic melanoma.  Beyond the fact that this is probably one of the fastest FDA drug reviews and approvals in recent history (3 months from submission to approval!), it is also a remarkable example of a drug being developed specifically in response to disease biomarker evidence.

The mutant form of the BRAF protein, V600E, which shows increased signaling activity, has been identified as a potential prognostic or therapeutic response biomarker in multiple forms of advanced cancer (colorectal, thyroid, melanoma).  In particular, the mutated form of BRAF is found in 30% to 60% of melanomas, in which it is thought to play a critical role in the malignancy process.  Based on the thorough validation of this target in this type of cancer, Plexxikon Pharmaceuticals initiated a drug development program specifically aimed at the mutant form of BRAF.  Early in 2011, the company released the results of their phase 3 study showing a conclusive therapeutic effect of the PLX4032 molecule in previously untreated melanoma patients positive for the BRAF V600E mutation.  Of note, because of the specificity of PLX4032, the study only enrolled patients with tumors positive for the mutated form of BRAF.  Considering the high mortality rate of patients suffering from advanced melanoma, the approval of this new personalized treatment is expected to fill a critical unmet medical need.

Notes: In January 2011, Plexxikon entered an agreement with Roche – Genentech to co-promote PLX4032 in the US. In late February 2011, Plexxikon was acquired by Daiichi Sankyo.



Thierry Sornasse for Integrated Biomarker Strategy

Tuesday, August 16, 2011

Biomarker Classification: what are we talking about?

The lack of consistent nomenclature can make the field of biomarkers confusing at times.  Here, I would like to propose a classification that reflects an emerging consensus among my peers.

First, any attempt to classify biomarkers based on detection technology is, in my mind, inappropriate, since the technology component of biomarkers should be seen as an enabling factor and not a defining one.  In addition, some biomarkers can be measured using different methods.  For example, the biomarker HER2 associated with breast cancer was first measured using RNA expression and was then translated to immunohistochemistry.  Similarly, in Alzheimer’s disease, the levels of β-amyloid fragments in cerebrospinal fluid are routinely measured using both mass spectrometry and immunoassays.  Actually, the classification of biomarkers based on technology is more a reflection of the historical tendency to organize biomarker discovery and development groups by technical specialty (e.g. genomics, proteomics, genetics) than a conscious attempt to classify biomarkers in a logical manner.

Fundamentally, biomarkers can be subdivided into two main categories based on the origin of the core stimulus producing the biological effect evaluated by the biomarker.
Figure 1

Figure 2

  1. Drug-Target centric biomarkers, for which the stimulus is extrinsic to the system (fig. 1), enabling experimental control of the dose and duration of the stimulus.  These biomarkers can be further classified based on the biological “distance” between the observation and the core stimulus:

a.    Target engagement: drug binding to its target
b.    Proximal pharmacodynamic: direct biological effect of a drug binding its target
c.    Distal pharmacodynamic: indirect or secondary effect of a drug binding its target
d.    Activity: integrated effect of a drug on tissues, organs, and/or the entire system
Note: this classification also applies to negative or undesired drug effects translating into: off-target engagement, proximal and distal toxicodynamic, and toxicity.

  2. Patient-Disease centric biomarkers, for which the stimulus is intrinsic to the system (fig. 2), only allowing observation of the duration of the stimulus and, in some cases, of its dose/magnitude (in many cases, the nature of the core stimulus of a disease remains unknown).  These biomarkers can be further classified based on the nature of the information produced:

a.    Trait: stable predictor of disease risk or response to treatment (e.g. genotype, liver CYP expression)
b.    State (diagnostic): evolving predictor of a disease stage (e.g. most medical diagnostics).
c.    Rate (prognostic): evolving predictor of a disease course (e.g. Oncotype Dx [breast cancer recurrence], CSF Ab42 [transition from mild cognitive impairment to Alzheimer’s disease])

Beyond the fundamental scientific differences between these two main biomarker classes, the general development processes applicable to them are quite different as well.  The development of Drug-Target centric biomarkers is essentially an internal translational process from discovery biology to nonclinical development and ultimately to clinical development.  In contrast, the development of Patient-Disease centric biomarkers takes place in a limited translational space.  Indeed, until a project reaches the stage of clinical development, internal access to patients is limited or non-existent, implying that early development of Patient-Disease biomarkers must occur through external collaborations such as pre-competitive initiatives.
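Purely as an illustration, the two-branch classification above can be written down as a small data structure (the enum names are mine, not an established nomenclature):

```python
from enum import Enum

class DrugTargetBiomarker(Enum):
    """Stimulus extrinsic to the system (fig. 1): dose and duration
    are under experimental control."""
    TARGET_ENGAGEMENT = "drug binding to its target"
    PROXIMAL_PD = "direct biological effect of the drug binding its target"
    DISTAL_PD = "indirect or secondary effect of the drug binding its target"
    ACTIVITY = "integrated effect on tissues, organs, and/or the entire system"

class PatientDiseaseBiomarker(Enum):
    """Stimulus intrinsic to the system (fig. 2): observable but
    not controllable."""
    TRAIT = "stable predictor of disease risk or response to treatment"
    STATE = "diagnostic: evolving predictor of a disease stage"
    RATE = "prognostic: evolving predictor of a disease course"

# Example: CSF Ab42 (MCI-to-AD conversion) would fall under the Rate class
print(PatientDiseaseBiomarker.RATE.value)
```

Such a structure makes the two development paths explicit in code as well: anything under DrugTargetBiomarker follows the internal translational process, anything under PatientDiseaseBiomarker the collaborative one.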



Thierry Sornasse for Integrated Biomarker Strategy

Harmonization of Biomarker Qualification Regulatory Submissions: follow the CTD


In August 2011, the FDA released its guidance on “Biomarkers Related to Drug or Biotechnology Product Development: Context, Structure, and Format of Qualification Submissions”.  This document was developed under the auspices of the International Conference on Harmonization (ICH) and is therefore intended to apply across all ICH regulatory regions (US, Europe, & Japan).  Although this guidance primarily focuses on the qualification of genomic biomarkers associated with drug or biotechnology product development, the principles it articulates are applicable to the qualification of a broader spectrum of biomarkers (e.g. imaging, proteomic).

The main purpose of this document is to establish consistent technical standards for the submission of nonclinical and clinical biomarker qualification information.  In keeping with the structure of the Common Technical Document for the Registration of Pharmaceuticals for Human Use (CTD), the guidance proposes a submission format consisting of 5 parts (or modules):

  • Section 1 (Regional Administrative Information) → CTD Module 1
  • Section 2 (Summaries) → CTD Module 2
      - Biomarker qualification overview
      - Analytical Assay Data Summary
      - Nonclinical Biomarker Data Summary
      - Clinical Biomarker Data Summary
  • Section 3 (Quality Reports) → CTD Module 3
  • Section 4 (Nonclinical Study Reports) → CTD Module 4
      - Analytical assay development reports
      - Analytical assay validation reports
      - Nonclinical study reports (in vitro)
      - Nonclinical study reports (in vivo, specify species)
  • Section 5 (Clinical Study Reports) → CTD Module 5
      - Analytical assay development reports
      - Analytical assay validation reports
      - Clinical pharmacology study reports
      - Clinical efficacy and/or safety study reports

Since biomarker qualification can occur at any time during the drug development process and may involve several qualification stages (e.g. nonclinical, clinical), the structure of the submission is intended to be flexible enough to accommodate different contexts while remaining consistent regardless of the specific context proposed.

With the release in July 2011 of the draft guidance on “In Vitro Companion Diagnostic Devices” (see my earlier post Companion In Vitro Diagnostics (IVD) Development: some clarity at last), and the release in October 2010 of the draft guidance “Qualification Process for Drug Development Tools”, this latest guidance demonstrates that the FDA has fully embraced the active use of biomarkers in support of drug development and personalized medicine.



Thierry Sornasse for Integrated Biomarker Strategy

Friday, August 12, 2011

The Predictive Safety Testing Consortium: I feel safer already

In my post of August 11th (Biomarker Qualification Consortia: The ADNI Success Story) I alluded to the fact that independent, isolated pharmaceutical companies have few incentives to invest the resources necessary to qualify safety biomarkers intended to be broadly applied by the industry: the competitive nature of the drug development industry tends to suppress any effort that would benefit competitors at the expense of the company investing the resources.  Fortunately, the past decade has seen the emergence of non-competitive consortia that have created the necessary conditions for the industry, academia, and non-profit organizations to collaborate efficiently.
The Predictive Safety Testing Consortium (PSTC), led by the non-profit Critical Path Institute (C-Path), is one of these non-competitive partnerships.  Its mission is to bring together pharmaceutical companies to share and validate safety testing methods under the guidance of the US (FDA) and European (EMA) regulatory agencies.  The PSTC has been working on six major toxicology areas: carcinogenicity, kidney, liver, muscle, vascular, and cardiac.  Its major achievement to date is, in my mind, the successful qualification of a set of new biomarkers of acute kidney injury. 

The PSTC recognized a few years ago that the standard blood tests (i.e. blood urea nitrogen [BUN] and serum creatinine) accepted by the regulatory agencies for the monitoring of kidney toxicity lacked sensitivity.  Indeed, these two factors reflect the overall functional performance of the kidney with minimal sensitivity to potential underlying pathologies.  Significant changes in BUN and/or serum creatinine can only be detected after major injuries to the kidney have occurred, at a stage where the kidney has lost a substantial part of its filtering capacity.  Therefore, the PSTC decided to identify and qualify new biomarkers of early kidney toxicity that would be sensitive to injuries to specific segments of the nephron (the basic kidney filtering unit), would reflect the degree of toxicity to the nephron, and would be translatable across multiple species including humans.  To do so, the PSTC screened a large array of factors in the urine of test animals that had been exposed to kidney toxicants known to cause injuries to specific segments of the nephron.  Histopathology was used to determine the magnitude and extent of the kidney injuries, and factor-specific histological assays were used to identify the origin of selected factors detected in the urine (fig. 1).

Figure 1


In 2008, the PSTC successfully filed the first EMEA-FDA joint biomarker qualification review under the VxDS process, which resulted in seven novel biomarkers of acute kidney toxicity (see table 1) being granted qualified-biomarker status by the FDA and the EMEA.  Of note, these new biomarkers were deemed qualified for application to nonclinical studies on a voluntary basis, but the agencies considered that more exploratory clinical data was necessary to qualify these biomarkers for clinical application.  In 2010, the Japanese Pharmaceuticals and Medical Devices Agency announced the first ever biomarker qualification decision under the PMDA's new consultation process on pharmacogenomics/biomarkers for use in Japan.


Biomarker          Segment specificity
KIM-1              Proximal tubules
Albumin            Glomerulus & proximal tubules
Total Protein      Glomerulus
β2-Microglobulin   Glomerulus & proximal tubules
Cystatin C         Glomerulus & proximal tubules
Clusterin          Distal tubules
Trefoil factor-3   Proximal tubules
Table 1

Scientifically, this collection of new biomarkers is expected to provide an early indication of possible kidney toxicity during drug development and to greatly improve the understanding of the mechanisms underlying the toxicity of certain compounds to the kidney.  Beyond the scientific achievements of the PSTC, this effort has also paved the way for similar joint biomarker qualification submissions with the major regulatory agencies: a process that is still in its infancy.

Ridge Diagnostics Depression Blood Test Biomarker


The cover of the August 2011 issue of Psychiatric Times features Ridge Diagnostics Depression Blood Test Biomarker Technology.

As I wrote in an earlier post (System biology derived biomarker: fight complexity with complexity), I am convinced that the development of diagnostic biomarkers has a lot to gain from a systems biology approach (i.e. biomarkers based on multiple independent factors).  In many cases, relying on a single factor to diagnose or predict complex diseases has turned out to be an intractable problem.  Here, I would like to draw your attention to another example of a systems-biology-based disease diagnosis biomarker, this one for Major Depressive Disorder (MDD).  Ridge Diagnostics, based in San Diego, has developed a new blood-based, multi-factor diagnostic for MDD (MDDScore) that is intended to assist the clinician in diagnosing and monitoring patients.  Currently, the diagnosis of MDD is mostly based on subjective criteria (i.e. interview and observation by the clinician) that represent the distal output of a complex pathobiology.  By providing an objective means to diagnose patients, MDDScore should shed new light on the fundamental biological imbalances that cause MDD.



Thierry Sornasse for Integrated Biomarker Strategy


Thursday, August 11, 2011

Biomarker Qualification Consortia: The ADNI Success Story

Over the past decade, the rate of biomarker discovery has accelerated considerably thanks to the development of new methods such as advanced mass spectrometry techniques.  However, the effort dedicated to the qualification of these newly discovered biomarkers (i.e. the process of confirming the predictive value of a biomarker candidate) has remained insufficient.  Disease biomarker qualification most often requires access to a large population of patients representing the full spectrum of disease stages encountered in everyday medical practice.  While a few large organizations can muster the resources necessary to assemble such large patient samples, most biomarker qualification efforts require close collaboration between multiple stakeholders.  Similarly, the qualification of safety biomarkers, intended to be broadly applicable across the industry, represents an investment that few companies are willing to take on their own.  Hence, the gap in biomarker qualification can only be eliminated by promoting collaboration among all drug development stakeholders.

The past ten years have seen the emergence of a series of non-competitive biomarker qualification efforts driven by consortia of academic, industrial, and non-profit organizations (see table 1 for examples).

Table 1. Examples of non-competitive biomarker qualification consortia (name, main sponsor, and focus), spanning cancer, immunity/inflammation, metabolic disorders, neurosciences, predictive drug safety, Alzheimer’s disease, and Parkinson’s disease.

Among these examples, one effort stands out due to its ground-breaking impact on the field of biomarker development.  The Alzheimer’s Disease Neuroimaging Initiative (ADNI) is a vast longitudinal study focused on monitoring the evolution of Alzheimer’s disease (AD) patients, Mild Cognitive Impairment (MCI; a common precursor of AD) patients, and age-matched healthy volunteers (see also the article in Lancet of March 2011 about ADNI).  Since its inception in 2004, ADNI has enrolled 200 AD patients, 400 MCI patients, and 200 age-matched controls, followed every 6 months for 2, 4, and 4 years, respectively.  At the onset of the project, the consortium established a set of standardized procedures aimed at monitoring cognition, cerebrospinal fluid (CSF) biomarkers, and brain structure biomarkers.  These standardized procedures have been used uniformly across 57 North American clinical sites that committed to sharing their results through a common data repository (see also the public presentation by Dr. Michael W. Weiner for more detail about the science).  Over the years, these procedures have been adopted by additional ADNI participants in Europe, Japan, Australia, China, and Korea.  

This treasure trove of inter-connected standardized data has resulted in an impressive number of publications (212 accepted publications as of August 2011; source: ADNI.org), which have already had a fundamental impact on the field of AD science.  For example, the National Institute on Aging and the Alzheimer’s Association proposed in July 2010 to amend the official diagnostic criteria for AD to reflect the advances in biomarkers of AD.  Also, the work on CSF biomarkers has revealed that the level of the amyloid-beta fragment Aβ42 is a reliable predictor of conversion from MCI to AD.  Finally, the work on brain structure biomarkers using standardized MRI has demonstrated that changes in key structures of the brain (cortex, ventricles, hippocampus) are much more sensitive measures of disease progression than conventional cognitive assessment tools, potentially enabling clinical studies that require significantly fewer patients.  In 2010, ADNI received additional funding to extend the monitoring of patients already enrolled in the program and to enroll additional volunteers to explore more advanced biomarkers such as high resolution MRI, functional imaging, and amyloid aggregate imaging.
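The link between endpoint sensitivity and trial size can be made concrete with a standard power calculation.  The sketch below is illustrative only: the effect sizes are hypothetical placeholders, not taken from ADNI data, and the formula is the usual normal-approximation sample size for comparing two arms.

```python
import math
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients per arm for a two-arm comparison, given a standardized
    effect size (Cohen's d), using the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{1-power}) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical effect sizes: a less sensitive cognitive endpoint (d = 0.35)
# versus a more sensitive imaging endpoint (d = 0.70).
print(n_per_arm(0.35))  # -> 129 patients per arm
print(n_per_arm(0.70))  # -> 33 patients per arm
```

Because the required sample size scales with 1/d², doubling the sensitivity of the endpoint cuts the number of patients needed roughly fourfold, which is why more sensitive MRI measures could substantially shrink AD trials.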


Beyond these remarkable scientific achievements, ADNI has demonstrated beyond doubt that academia, the industry, and non-profit organizations can collaborate efficiently in a non-competitive environment.  Convincing all stakeholders to share their data with each other was not an easy task, but doing so proved to be one of the key elements of this project’s success.

Wednesday, August 10, 2011

Blood-derived disease biomarker of Alzheimer’s disease: tell me what you react to and I’ll tell you who you are

In the August 3 online issue of PLoS One (PLoS One. 2011; 6(8): e23112), Eric Nagele and colleagues report the discovery of a potential new diagnostic biomarker for Alzheimer’s disease (AD) based on the detection of serum autoantibodies specific for a unique set of self-proteins.  This work marks a major step toward realizing the promise of preventive medicine for this devastating disease.  As I wrote in a previous post, the ability to intervene early in the course of AD is considered critical to the success of the disease-modifying therapies currently under development.  By providing an easily deployable means to screen for individuals at risk prior to the onset of clinical manifestations of the disease, this simple blood test has the potential to radically change disease management practices in AD.

The association of autoantibodies with certain diseases is not new.  Antibodies specific for self-proteins are characteristic of rheumatoid diseases and most cases of multiple sclerosis, and these autoantibodies are generally thought to contribute to the pathobiology of these diseases.  In contrast, the association of autoantibodies with non-immunological diseases is rather intriguing.  It is tempting to speculate that this process could be generalized to other degenerative diseases:  disease states characterized by the release into the body of rare and/or usually sequestered proteins that are recognized by the immune system as potentially foreign (see principles of immune tolerance).  If this generalization turns out to be valid, one could easily imagine a broad application of autoantibody-based disease diagnostics.  While this hypothesis could potentially open the door to new applications, it could also constitute a potential obstacle: if autoantibodies are broadly associated with degenerative diseases, it might be difficult to distinguish between closely related conditions.  Nagele et al. already addressed part of this potential issue by showing that the subset of autoantibodies specific for AD could distinguish, with reasonable accuracy, the sera of AD patients from the sera of patients diagnosed with Parkinson’s disease.  Further analysis of this autoantibody-based biomarker in samples derived from patients at different stages of AD and from patients with different forms of dementia (e.g. vascular dementia, dementia with Lewy bodies) should provide the evidence necessary to qualify this biomarker for practical application.



Thierry Sornasse for Integrated Biomarker Strategy

Monday, August 8, 2011

New Biomarker of Disease Modification in Multiple Sclerosis: expect the unexpected


In the August 1st issue of Multiple Sclerosis, Sheridan et al. reported their findings about a novel biomarker of the activity of daclizumab, a humanized anti-CD25 therapeutic mAb, in multiple sclerosis (MS) patients.  My interest in this article is threefold.  First, the work presented in this report is an excellent example of in-depth biomarker qualification, combining clinical observations and laboratory experiments.  Second, this article concludes a remarkable journey in translational research during which the authors showed an exceptional readiness to embrace unexpected observations. Finally, although I had nothing to do with the actual work presented in the current article, I was involved with this project at its very beginning, making me a somewhat privileged observer of this journey.

Let me set the stage.  In 2004, Bibiana Bielekova and colleagues published exciting observations from a small exploratory phase 2 study of daclizumab in IFN-β-refractory MS patients.  The remarkable aspect of this study was that treatment with daclizumab did not appear to have a profound effect on the patients’ T cells; instead, it appeared to stimulate the expansion of a subset of regulatory Natural Killer cells (CD56bright NK cells; see reference).  At the time, daclizumab was assumed to be primarily an inhibitor of T cell activation thanks to its ability to block the high-affinity IL-2 receptor CD25.  This T-cell-centric mechanism of action was the basis for daclizumab’s approval for the prevention of transplant rejection (marketed as Zenapax).  At the same time, the biology of IL-2 was undergoing a major update: the observation that IL-2 deficiency in mice resulted in a massive autoimmune syndrome forced the immunology community to consider the immuno-regulatory properties of IL-2.  Finally, my colleagues and I at Protein Design Labs (now Abbott Biotherapeutics) had observed that daclizumab does not simply block the binding of IL-2 to the high-affinity IL-2R (CD25); it also induces a rapid and profound down-regulation of this receptor without affecting the low-affinity sub-units of the IL-2R (CD122 & CD132).  The beauty of the paper by Sheridan et al. is that they embraced these seemingly conflicting observations to show that, by blocking CD25, daclizumab redirects IL-2 signaling toward the low-affinity IL-2R, which in turn provides the stimuli necessary to expand regulatory CD56bright NK cells.  This expansion of CD56bright NK cells is a reliable predictor of lower disease activity (as assessed by newly formed Gd-enhancing and T2 MRI lesions) in treated patients, providing an early indicator of disease modification.
Furthermore, the baseline number of CD56bright NK cells, which express the low-affinity IL-2R subunit CD122, appears to be a predictor of a positive response to treatment with daclizumab.

Beyond the value of this paper as a model of effective translational research and biomarker development, this work by Sheridan et al. also identifies a potential new mode of intervention in multiple sclerosis that had been neglected until now.



Thierry Sornasse for Integrated Biomarker Strategy

Thursday, August 4, 2011

Systems biology-derived biomarkers: fight complexity with complexity

The biomarker development community suffers from a strange disease: “single-factor biomarker compulsive focus”.  This condition manifests itself as a strong propensity to identify and develop biomarkers based on a single analyte or factor.  While dealing with one factor at a time seems neat and effective, the approach suffers from a major flaw: biological processes are complex events that can rarely be described by a single biological factor.  First, a single biological process can usually involve a great number of biological factors with varying degrees of connectivity.  Second, many biological factors can be involved in multiple, apparently unrelated biological processes.  Thus, a biomarker based on a single biological factor may show a significant statistical association with a specific biological process under controlled conditions but, in an uncontrolled real-world situation, its predictive value may vanish due to the confounding effects of co-morbidities and/or concomitant drugs.  Although it is theoretically possible to circumvent this limitation by systematically identifying and controlling for confounding factors, I would argue that developing biomarkers based on multiple factors (i.e. derived from a systems biology approach) could prove more effective.

First, multi-factor biomarkers provide a means to capture the true complexity of biological processes.  As discussed above, most pathobiological processes affect multiple systems through time and space.  Capturing this complexity through the identification of multiple independent biological factors is more likely to yield a specific and unique signature of pathobiology.  Second, there is a statistical benefit to multi-factor biomarkers. 
Imagine that you want to predict a certain phenotype (e.g. disease, response to treatment) based on a single-factor biomarker.  While this biomarker shows a significant statistical association with the phenotype, its intrinsic distribution leaves a zone of predictive uncertainty between the high end of the control group (blue) and the low end of the target group (green) [see fig. 1].

Fig. 1
Now, imagine that you have identified a second, independent factor with the same properties (i.e. the same degree of overlap).  By combining these two factors into a multi-factor biomarker, the overlap between the control and target groups is abolished, removing the predictive uncertainty of each factor taken alone (see fig. 2).

Fig.2

This statistical advantage of combining 2 factors scales to combinations of greater numbers of factors, as long as the combined factors are independent (i.e. have minimal covariance).
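For intuition, here is a minimal numerical sketch of that statistical benefit, assuming two equally informative, independent, unit-variance Gaussian factors and a simple midpoint decision threshold (all numbers are illustrative):

```python
import math

def misclassification_rate(separation: float) -> float:
    """Error rate of a midpoint threshold between two unit-variance Gaussian
    groups whose means differ by `separation` (in SD units): Phi(-separation/2)."""
    return 0.5 * (1 + math.erf((-separation / 2) / math.sqrt(2)))

# Single factor: control and target group means differ by 2 SD.
single = misclassification_rate(2.0)

# Two independent factors, each with a 2 SD separation, combined by summing:
# the mean difference doubles (2 + 2) while the SD grows only by sqrt(2),
# so the effective separation becomes 4 / sqrt(2) ~= 2.83 SD.
combined = misclassification_rate(2 * 2.0 / math.sqrt(2))

print(f"single-factor error rate: {single:.3f}")   # ~0.159
print(f"combined error rate:      {combined:.3f}") # ~0.079
```

Each independent factor multiplies the effective separation by sqrt(2), so the zone of uncertainty shrinks rapidly as factors are added; this is exactly why the combined factors must be independent, since correlated factors add less new separation than they add variance.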


This approach has been used successfully by Genomic Health in the development of its Oncotype DX diagnostics, which estimate the risk of recurrence in breast and colon cancer patients and thereby help determine whether chemotherapy is justified.  These two diagnostics are based on the quantitative analysis of multiple RNA molecules, which yields an integrated prediction of the risk of recurrence (see the Genomic Health Laboratory Videos for an overview of the process).  Similarly, Dr. David Eidelberg has used multi-factor functional brain imaging to derive specific patterns of brain activity associated with different stages of neurodegenerative diseases such as Parkinson’s disease (Reference: Metabolic Brain Networks in Neurodegenerative Disorders: A Functional Imaging Approach).



Thierry Sornasse for Integrated Biomarker Strategy






Tuesday, August 2, 2011

Prognostic biomarkers, preventive medicine, and the aging population challenge

The US Census Bureau estimates that, by 2050, the number of US residents aged 65 or older will double to reach almost 87 million, representing about 21% of the US population (2010 estimate: 40.2 million, or 13%).



In keeping with this aging trend of the US population, the Alzheimer’s Association predicts that, based on current estimates of Alzheimer’s disease (AD) prevalence (5.4 million, or 13% of people 65 or older), between 11 and 16 million individuals will suffer from AD by 2050.  Similarly, the number of individuals suffering from Parkinson’s disease (PD) is also expected to increase dramatically, because the incidence of PD rises sharply in people 50 and older (see Incidence of Parkinson’s Disease: Variation by Age, Gender, and Race/Ethnicity for reference).  Considering these figures, efficacious treatments for AD and PD are urgently needed (as of today, there is no preventive or disease-modifying treatment for either AD or PD).



Treatment of AD and PD is particularly challenging because, by the time an individual starts to present the clinical signs of the disease, a great deal of damage has already been done to critical structures of the brain.  Therefore, most specialists in AD and PD believe that early intervention will be critical to successful therapies for these diseases.  Ideally, individuals at risk would be treated prior to the onset of any clinical sign of the disease, in a truly preventive manner. But how do you identify individuals at risk?  Genetic predispositions that could be detected by genetic tests only account for a minor fraction of the individuals at risk of AD or PD.  New brain imaging biomarkers, and possibly cerebrospinal fluid biomarkers, currently under development could turn out to be valid prognostic biomarkers. However, in this new era of healthcare cost control, it is difficult to imagine that such sophisticated and/or complex techniques could be practically deployed to assess all individuals aged 50 - 60 and older.  Therefore, I would argue that one of the major challenges in addressing the rise of neurodegenerative diseases is to develop low-cost, easily deployable screening prognostic biomarkers that could be used as a preliminary step before the more complex definitive prognostic tests.  Already, the PD community (see the PARS study) has explored the possibility of using a low-cost olfaction test (UPSIT) to screen for individuals at potential risk of PD.  Briefly, in Parkinson’s disease, a decrease in the sense of smell frequently occurs prior to the onset of motor symptoms.  Although the loss of smell (hyposmia and anosmia) is not specific for PD, identifying individuals with an abnormally low sense of smell represents a valid first screen to enrich for individuals at potential risk for PD.
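The enrichment logic of such a two-stage screen follows directly from Bayes’ rule.  The numbers below are hypothetical placeholders (not from the PARS study), chosen only to illustrate how a modestly accurate, low-cost first screen concentrates risk before an expensive definitive test is applied:

```python
def post_screen_prevalence(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Prevalence of disease among screen-positive individuals (Bayes' rule):
    true positives divided by all positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: 2% baseline risk in the screened age group, and a
# smell test with 80% sensitivity and 85% specificity.
enriched = post_screen_prevalence(0.02, 0.80, 0.85)
print(f"{enriched:.1%}")  # ~9.8%, roughly a five-fold enrichment over 2%
```

Even with these modest assumed accuracies, the expensive second-stage test would only need to be run on the screen-positive subgroup, in which the condition is several times more prevalent than in the general population.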



Thierry Sornasse for Integrated Biomarker Strategy