The genome is “the blueprint of life,” a long string written in a four-letter alphabet, whose order and configuration lay the plan for each individual’s growth and development. Genomics is the study of the structure, function, and evolution of genomes at a variety of scales: from the single cells of a cancer tumor to the genomes of an entire population of individuals. Scientists use “sequencers” to look at the molecular structure of the genome much the same way that astronomers use telescopes to examine the composition of stars, and what they see with these molecular telescopes holds the potential to yield new drugs, diagnose patients, uncover the genealogy of entire populations, and reveal the genetic bases of human disease.
Genomics is also in the middle of a massive technological revolution; over the past decade, the sequencers used by scientists have improved in cost, quality, and speed at exponential rates. Fifteen years ago, it took billions of dollars and years of work for an international consortium of researchers to produce a single human genome; today a single sequencing center can sequence a human genome in a single day for roughly $1,000. Thousands of human genomes have been sequenced, and projects to sequence hundreds of thousands or millions of genomes are already underway.
Even as the experimental machinery of genomics has advanced, however, its computational support — the tools and methods that convert raw data into clinical findings and research discoveries — has not kept pace. Genomics software today runs much the way it did ten years ago: discrete tools, scripting for workflow, files instead of databases, file formats in place of data models, and little-to-no parallelism.
Spark is an ideal platform for organizing large genomics analysis pipelines and workflows. Its compatibility with the Hadoop platform makes it easy to deploy and support within existing bioinformatics IT infrastructures, and its support for languages such as R, Python, and SQL eases the learning curve for practicing bioinformaticians. Widespread use of Spark for genomics, however, will require adapting and rewriting many of the common methods, tools, and algorithms that are in regular use today.
This talk will present ADAM, an open-source library for bioinformatics analysis, written for Spark and hosted by the AMPLab. We will discuss both the places where Spark’s ability to parallelize an analysis pipeline is a natural fit for genomics methods and some methods that have proven more difficult to adapt. We will also cover ADAM’s use of technologies like Avro, for schema specification, and Parquet, for compressed columnar storage, in conjunction with its Spark-based workflows.
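To make the Avro/Parquet pairing concrete: Avro schemas are declared in JSON, and records conforming to a schema can be serialized into Parquet files that Spark jobs read efficiently. As a rough illustration only, a simplified, hypothetical record for an aligned sequencing read might be declared like this (field names are invented for this sketch and are not ADAM’s actual bdg-formats schema):

```json
{
  "type": "record",
  "name": "AlignedRead",
  "namespace": "example.genomics",
  "doc": "Simplified sketch of an aligned-read record; illustrative only.",
  "fields": [
    {"name": "referenceName",  "type": ["null", "string"], "default": null},
    {"name": "start",          "type": ["null", "long"],   "default": null},
    {"name": "sequence",       "type": ["null", "string"], "default": null},
    {"name": "qualityScores",  "type": ["null", "string"], "default": null},
    {"name": "mappingQuality", "type": ["null", "int"],    "default": null}
  ]
}
```

Because Parquet stores each field in its own compressed column, a Spark query that touches only, say, `referenceName` and `start` can skip reading the sequence and quality data entirely, which is one reason this storage layout suits large genomic datasets.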
Timothy Danford is a computer scientist working on advanced automation approaches to big data variety in the pharmaceutical and healthcare industries. Previously, Timothy was a software architect, engineer, and founding team member for Genome Bridge LLC, a Broad Institute subsidiary organized to develop cloud-based SaaS genomic analysis pipelines. He has experience in developing data-management services, applications, and ontologies for bioinformatics and genomics systems at Novartis and Massachusetts General Hospital. As a PhD student in computer science at MIT CSAIL, he focused on computational functional genomics. He is a contributor to ADAM, an open source project for bioinformatics on Spark.
©2015, O'Reilly Media, Inc. • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.