Computational Biology Workshop
Biology is drowning in data. An almost overwhelming volume of information is a defining trend in modern life science research, as genetic analyses and simulated models force biologists to develop new ways to wrangle their data. This onslaught of material has forged new relationships between the fields of mathematics, computational science and biology. For many aspiring researchers, embracing this multidisciplinary approach will be a requirement for progress and success. A recent week-long workshop of practical sessions and seminars gave students at the CSC an introduction to computational analysis, and a set of skills to apply to their research.
“The students were actively engaging with many and very diverse topics and questions related to computational biology,” says Enrico Petretto (Integrative Genomics and Medicine), who, along with Boris Lenhard (Computational Regulatory Genomics), organised the workshop. “Even the ‘difficult’ topics such as Bayesian modelling in genomics, or imaging and modelling of mechanosensitive processes generated lots of discussions between the students and speakers. Interactions and exchanges between different branches, from biology to advanced computational modelling, is what we envisage these students will become familiar with.”
The programme included practical sessions with the statistics software ‘R’, and data analysis training using microarrays and ChIP-seq – two techniques used to analyse gene expression and the interactions of DNA with other molecules. These were complemented by presentations from senior scientists from the CSC and beyond. Centring his talk on non-Hodgkin lymphoma – the fifth most common form of cancer – Anthony Uren (Cancer Genomics) highlighted the importance of the computational element of genetic approaches to biomedical research. Genetically, he warned, cancer is a very heterogeneous disease; any number of mutations may be involved in tumour development. As genetic approaches make their way towards clinical application, it will be important to know the precise genotype of a tumour, to predict whether a particular treatment will help or harm a patient.
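At its simplest, the kind of expression analysis taught in the microarray sessions comes down to a statistical comparison of measured expression levels between sample groups. As a minimal illustrative sketch (in Python rather than the workshop's R, with made-up expression values for a single hypothetical gene), a Welch's t-statistic can flag a gene whose expression differs between tumour and normal samples:

```python
import math

def t_statistic(group_a, group_b):
    """Welch's t-statistic comparing mean expression between two sample groups."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Sample variances (unbiased, dividing by n - 1)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Hypothetical log-scale expression values for one gene
tumour = [8.1, 7.9, 8.4, 8.2]
normal = [6.0, 6.3, 5.8, 6.1]
print(round(t_statistic(tumour, normal), 2))  # large positive value: higher in tumour
```

In practice this per-gene test is repeated across thousands of probes, which is why multiple-testing correction and dedicated R packages dominate real microarray workflows.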
Other talks discussed the challenges of integrating dissected genome-wide information with phenotypic data for complex traits such as obesity, and the potential of synthetic biology; Rob Kraus from Imperial College’s Department of Bioengineering suggested that a synthetic implanted ‘mechanosensor’ might give more information about a system than the conventional approach of ‘knocking down’ a gene and watching the result.
In his keynote address, Michael Stumpf, Chair of Theoretical Systems Biology in Imperial College’s Division of Molecular Biosciences, discussed what makes a ‘good model’ in statistical analysis. Models must find a balance between the simplicity that we are drawn to and the detail that is required. The degree of abstraction – the process of representation that Stumpf illustrated by comparing the London and Melbourne transport maps – can dictate the practical use of a model, while its predictive and explanatory properties determine its worth. He emphasised that “useful models require biological expertise as well as mathematical and statistical knowledge.” With the growing Integrative Biology section, and a research community increasingly well versed in these analytical methods, the CSC is poised to make the most of computational biology.
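The trade-off between simplicity and detail is often made concrete with information criteria, which score a model by its goodness of fit minus a penalty for every extra parameter. The sketch below is an invented illustration in Python, not material from the talk: it compares a one-parameter constant model against a two-parameter straight line on hypothetical measurements, using the Akaike information criterion (lower is better).

```python
import math

def aic(rss, n, k):
    """Akaike information criterion (Gaussian errors): rewards low residual
    sum of squares (rss) over n points, penalises k parameters."""
    return n * math.log(rss / n) + 2 * k

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical, roughly linear measurements
xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0]

# Model 1: constant mean (1 parameter)
mean_y = sum(ys) / len(ys)
rss1 = sum((y - mean_y) ** 2 for y in ys)

# Model 2: straight line (2 parameters)
slope, intercept = fit_line(xs, ys)
rss2 = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

print(aic(rss1, len(ys), 1) > aic(rss2, len(ys), 2))  # True: line wins despite extra parameter
```

Had the data been flat noise, the parameter penalty would have favoured the simpler constant model instead, which is exactly the simplicity-versus-detail balance a ‘good model’ must strike.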