
Yuya Hayashi

Yuya Hayashi says: “From one organ to another, cells send signals to coordinate the physiology of the entire body. A well-known example is signalling by hormones, but what if cells wish to deliver more complex messages than simple signals? A striking discovery of recent years is the packaged delivery of small RNA in nano-sized vesicles called exosomes, which “stream” the RNA language over long distances. Much remains unknown, however, about the precise context of the messages exchanged between cells of a living animal. This project aims to decipher the secret RNA codes delivered by exosomes, starring zebrafish embryos as a research model that allows genetic manipulation to capture target exosomes and live imaging of exosome transport through the bloodstream. Furthermore, the uncovered role of exosomes will be tested using advanced nanotechnology. A deeper understanding of the exosome-powered RNA communication between distant cells will identify novel targets for non-viral gene therapy.”

Niklas Pfister

The CausalBiome project will develop a new unified framework for statistical analysis and causal inference on human microbiome data.

Microorganisms such as bacteria, fungi, and viruses interact in diverse ways with their surroundings. The human body is estimated to be a habitat for more than 10,000 different microbial species, which have been associated with various health outcomes such as cardiovascular disease, metabolic diseases, obesity, mental illness, and autoimmune disorders. Thanks to recent advances in gene sequencing technology, scientists are now able to directly measure these microbes. However, to understand how they interact with their human host, sophisticated statistical tools are needed to analyze the highly complex data. Unfortunately, current techniques do not offer a unified approach that incorporates all available knowledge into the analysis.

The CausalBiome project will fill this gap by developing novel statistical and data science analysis methods, which will lead to a better understanding of how the microbiome interacts with its host. All results will be made publicly available to help other scientists gain new insights into how microbes affect our health.

Shilpa Garg

The project aims to develop new computational methods for analysing and integrating data from both short-read and long-read DNA/RNA sequencing experiments.

The field of DNA and RNA sequencing has for many years been dominated by the so-called next-generation or second-generation sequencing technologies, which rely on massively parallel sequencing of short reads, and methods for analyzing such data are well developed. Recently, however, new third-generation technologies have emerged which produce much longer reads, enabling scientists to fill gaps and study phenomena such as repetitive sequences and structural variants.

However, computational methods to process and integrate these data types are missing. This project therefore aims to develop efficient, high-quality computational methods and open-source software packages for processing massive datasets for integrative sequencing analysis of complex diseases. Such new methods will significantly improve the understanding of genetic variation in novel megabase-sized repetitive regions and make it possible to study the cell heterogeneity underlying complex diseases.

The computational tools will be useful to large-scale initiatives such as the Human Pangenome Reference Consortium and the Danish National Genome Center, and may yield new insights into complex diseases, such as cancer and Type 2 diabetes.


Jesper Madsen

This project will shed new light on the regulatory and functional architecture of the human pancreatic islets of Langerhans, with the aim of furthering the understanding of how they contribute to health and disease, most notably to diabetes.

In recent years, hundreds of human islet samples have been analysed with new high-throughput technologies, providing insight into single-cell transcriptomes, spatial organisation, chromatin accessibility, etc. This project aims to collect, integrate, and jointly analyse these data sets with new data science methods to build a comprehensive map of the human islet.

The resulting web resource will be made publicly available for other researchers to pave the way for new insight that could lead to prevention, diagnosis and treatment of diabetes and other pancreas-related diseases.

Signe M. Jensen

This project aims to improve the analysis methods for high-throughput plant phenotyping to provide an efficient and accurate method for evaluating ecotoxicological hazard of new and existing pesticides.

Crop production needs to increase dramatically to feed the growing world population, particularly if meat consumption and production are reduced to lower CO2 emissions. Agrochemicals, such as herbicides, insecticides, fungicides, and chemical growth regulators, are widely used to achieve the goal of higher, more stable yields, but to avoid harm to the environment, such products must undergo rigorous environmental toxicological risk assessment.

The project seeks to expand and improve upon the so-called benchmark dose methodology – a statistical methodology used for ecotoxicological risk assessment – to use large-scale, high-throughput and high-dimensional dose-response data to assess the efficacy and ecotoxicological hazards of chemicals used in agriculture or in private gardens. The methodology should be equally useful for non-chemical stressors, e.g., evaluating the effect of climate-related stress on plants in the context of climate change.
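As a rough illustration of the kind of analysis the benchmark dose methodology builds on (a generic textbook-style sketch, not code from the project itself), the example below fits a simple log-logistic dose-response curve and reads off the dose giving a 10 % decrease from the fitted control response; all data and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Three-parameter log-logistic dose-response model for a decreasing response:
# f(x) = d / (1 + (x / e) ** b), where d is the control response,
# e is the dose giving a 50 % reduction, and b controls the slope.
def loglogistic(x, b, d, e):
    return d / (1.0 + (x / e) ** b)

# Hypothetical data: dose in mg/L, response e.g. plant biomass in g
dose = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
resp = np.array([9.8, 9.6, 9.1, 8.0, 6.2, 3.9, 2.1, 1.0])

(b, d, e), _ = curve_fit(loglogistic, dose, resp, p0=[1.0, 10.0, 4.0])

# Benchmark dose (BMD10): the dose giving a 10 % decrease from the
# fitted control response, found by inverting the fitted curve.
target = 0.9 * d
bmd10 = brentq(lambda x: loglogistic(x, b, d, e) - target, 1e-6, dose.max())
print(f"Estimated BMD10 ~ {bmd10:.2f} mg/L")
```

In practice the project works with far larger, high-dimensional dose-response data sets and with uncertainty quantification around such estimates, but the inversion of a fitted dose-response curve is the core idea.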


Morten Arendt Rasmussen

The project aims to understand individualized human responses to food intake, which could potentially lead to individually tailored dietary advice.

People digest and metabolize nutrients differently and such differences have been linked to disease. Better understanding of individualized dietary responses and their links to health outcomes thus holds great potential for predicting, preventing and treating certain diseases. To do so requires integration of many different types of large, complex data sets to get the full picture and understand cause and effect relationships, and new methods will be required to fully achieve this.

This project will develop new computational methods for analysing and integrating different types of data available from an existing cohort of young adults, including continuous glucose measurements, images of meals, metabolomics data, gut microbiome samples, etc.

The methods will be shared with the scientific community and could later be used to guide and execute clinical studies on personalized nutrition for symptom relief in diseases such as asthma, allergy, obesity, and metabolic syndrome.

Søren Besenbacher

The project will develop new and improved mathematical models of the mutational process to be used in the analysis and understanding of both germline and cancer mutations.

Mutation of the DNA is a truly fundamental process in biology. It occurs in all species and is the ultimate source of all genetic variation. Germline mutations – i.e., mutations that occur during the formation of egg and sperm cells – are ultimately responsible for all evolutionary adaptations and heritable diseases. Mutations occurring later in life may on the other hand turn normal cells into dangerous cancer cells. Good models of the mutational processes are therefore essential in both cancer research and studies of germline mutations and evolution.

This project will develop new methods that address specific shortcomings in how we currently model mutation processes and demonstrate the usefulness of these methods. The new methods will improve the ability to detect the activity of a specific mutational process in a tumor and make it easier to find cancer genes. Furthermore, the new germline mutation rate models created as a part of this project will help researchers find genes where new mutations cause severe diseases.


Rasmus Pagh

How do we ensure that we can trust systems that use data to make decisions? Lawmakers across the world are grappling with the question of how to properly regulate systems that collect, analyze, and use data. Getting the balance right is crucial. Too little regulation could increase the risk of compromising basic societal values and privacy. Too strict regulation will limit our possibilities for realizing the value and societal benefit of AI and big data analytics.

The Providentia project will advance algorithms for integrating and analyzing sensitive data – such as health data or medical records – in a secure way that preserves privacy and does not require all data to be transferred to a central location.

During the last decade, differential privacy has emerged as the gold standard for protecting private information, based on firm mathematical guarantees on how much private information can be deduced from data sets, analyses, or predictions that are released. Extending recent developments, the project will establish a research group focusing on differentially private algorithms in distributed settings. The goal is to enable data science when no entity holds all relevant data, and where privacy considerations make it impossible or undesirable to consolidate all data for central analysis. This is relevant, for instance, when using healthcare data to improve health outcomes via better prevention, diagnosis and treatment of disease.
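As a minimal, self-contained illustration of the kind of guarantee differential privacy gives (a generic textbook example, not the project's own algorithms), the sketch below releases an average using the classic Laplace mechanism; the data and parameter values are hypothetical.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon):
    """Release the mean of `values` with epsilon-differential privacy.

    Each value is clipped to [lower, upper], so changing any single
    individual's record can shift the mean by at most (upper - lower) / n.
    Laplace noise calibrated to that sensitivity hides the contribution
    of any one individual.
    """
    values = np.clip(values, lower, upper)
    n = len(values)
    sensitivity = (upper - lower) / n
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(np.mean(values)) + noise

# Hypothetical example: a privacy-preserving average age from a small cohort
ages = [34, 51, 29, 62, 45]
print(dp_mean(ages, lower=0, upper=100, epsilon=1.0))
```

A smaller epsilon means more noise and stronger privacy; in the distributed settings the project targets, the same principle is applied without ever pooling the raw records in one place.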

The project name is inspired by the ancient Roman goddess Providentia, who personifies the ability to foresee events and make suitable provision. In this spirit, the Providentia project seeks to provide the forethought needed to ensure that data scientists can make use of valuable sources of insight, even when data contains sensitive information.

Clarissa Schwab

Clarissa Schwab says: “About thirty percent of our food becomes waste. One big problem is food spoilage caused by bacteria and fungi, which can also make the food unsafe to eat. This is a serious problem for the food industry, which needs to guarantee food safety and quality, and also wants to reduce waste. Organic acids are natural preservatives from plants and bacteria that inhibit spoilage microbes. Many different organic acids exist, but it is still not completely clear why and how these organic acids inhibit microbes, and which organic acids work best in a specific food product. BIOFUNC will investigate which organic acids are most active, and under which conditions. We will use food bacteria and develop biotechnological processes in bioreactors to produce organic acids. BIOFUNC will test whether we can prevent food spoilage in different food products, for example yogurt, bread, and plant-based meat analogues. Our results will help to make food products safer in a natural way and to reduce food waste.”


Sebastian Marquardt

Sebastian Marquardt says: “Temperature fluctuations stunt plant growth and development, thus threatening yields. My proposal offers biotechnological solutions to enhance the resilience of crops to changing environments. I aim to exploit the natural ability of plants to respond to temperature fluctuations at the molecular level to promote resilience to changing climates, with an emphasis on untimely cold. My research group identified RNA molecules at the center of plant gene regulation: short promoter-proximal RNAs (sppRNAs). I propose to use sppRNAs to activate cold-tolerance genes in tomato. My proposal will exploit RNA biotechnology as a new principle of gene expression control to promote climate resilience. Engineering increased cold tolerance through the sppRNA-mediated gene regulation pathway in tomato has a high potential to translate into a new strategy for “climate-smart agriculture”, with both greater resilience to climate change and more sustainable crop production.”