Peter Jepsen

Emil Fosbøl

Gregers Rom Andersen

Anders Nykjær

Claus Løland

Jan Eriksson

Kim Rewitz

Stefan Sommer

The project aims to develop the data science and AI methodology needed to analyze the shape and form of animals, organs, and plants – their morphology. Morphological features have traditionally been important in studies in fields ranging from biology and health science to plant science, but it is hard to quantify morphological differences, to perform well-defined statistical analyses, and to integrate morphological data into data science and AI analysis pipelines. The project aims to solve this by developing the necessary statistical, machine learning, and mathematical foundations for such analyses, together with software implementations that are directly useful to researchers in these fields. In doing so, we will make large classes of high-resolution data available for research: to make new discoveries, to improve our understanding of animal evolution, to develop more resilient crops, and to increase our understanding of the relation between human organ shape and evolving diseases.
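To make the quantification challenge concrete, the following is a minimal sketch of one classical approach, Procrustes alignment of 2D landmark configurations. It is an illustration, not the project's method, and the landmark data is purely synthetic; real morphological analyses use far richer shape representations.

```python
# Illustrative sketch only: quantify the morphological difference between two
# shapes given as (n_landmarks, 2) landmark arrays, after removing
# translation, scale, and rotation (Procrustes alignment). Synthetic data.
import numpy as np

def procrustes_distance(X, Y):
    # Remove translation and scale: center, then normalize to unit Frobenius norm.
    X = (X - X.mean(axis=0)) / np.linalg.norm(X - X.mean(axis=0))
    Y = (Y - Y.mean(axis=0)) / np.linalg.norm(Y - Y.mean(axis=0))
    # Optimal rotation of X onto Y via SVD of the cross-covariance matrix
    # (reflections are not treated separately in this sketch).
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return np.linalg.norm(X @ (U @ Vt) - Y)

rng = np.random.default_rng(0)
shape = rng.normal(size=(20, 2))                    # a synthetic "organ outline"
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
same = 3.0 * shape @ rot + 5.0                      # same shape, new pose and size
other = shape + 0.3 * rng.normal(size=shape.shape)  # genuinely deformed shape

print(procrustes_distance(shape, same))   # ~0: no morphological difference
print(procrustes_distance(shape, other))  # > 0: real shape change
```

The first distance is near zero because pose and size are factored out before comparison; the second reflects genuine shape change, which is the quantity morphological statistics needs to model.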

Jakob Skou Pedersen

Solid tumors release circulating tumor DNA (ctDNA) into the blood, where it can be recovered from the plasma. Detection and analysis of ctDNA may transform cancer care. However, in many clinically relevant settings, it comprises only a minute fraction of the cell-free DNA (cfDNA), most of which comes from healthy cells. The cfDNA can be cataloged in vast data sets using DNA sequencing techniques. We will use generative AI and statistical modeling techniques to detect subtle ctDNA signals and characterize the underlying cancer biology. Predictive methods will be trained and evaluated on comprehensive public cancer genomics data and local cfDNA data sets. The goal is to contribute insights into cancer biology and to help advance cancer care with methods for early diagnosis, disease surveillance, and cancer characterization.
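As a toy illustration of how faint the signal can be (a simplified stand-in, not the project's generative models), the sketch below simulates plasma sequencing read counts at known tumor mutation sites and recovers a 0.2% ctDNA fraction by maximum likelihood. The depth, error rate, and site count are invented for the example.

```python
# Illustrative sketch only: maximum-likelihood estimation of a minute ctDNA
# fraction from simulated read counts at known tumor mutation sites. Real
# cfDNA analyses involve error modeling, fragment features, and richer
# generative models; all numbers here are invented.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

rng = np.random.default_rng(1)
n_sites, depth = 200, 2000     # mutation sites and sequencing depth
true_fraction = 0.002          # 0.2% ctDNA: a "minute fraction"
error_rate = 1e-4              # assumed per-base sequencing error

# Simulate alt-read counts: tumor fragments are heterozygous at each site
# (allele fraction = ctDNA fraction / 2), plus background sequencing error.
alt_reads = rng.binomial(depth, true_fraction / 2 + error_rate, size=n_sites)

def neg_log_likelihood(f):
    return -binom.logpmf(alt_reads, depth, f / 2 + error_rate).sum()

fit = minimize_scalar(neg_log_likelihood, bounds=(0.0, 0.5), method="bounded")
print(f"estimated ctDNA fraction: {fit.x:.4f}")  # close to 0.002
```

At a 0.2% tumor fraction, each site contributes only a couple of alt reads above the error background, which is why pooling evidence across many sites, and in practice far richer models, is essential.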

Aasa Feragen

The upcoming AI Act will likely lead to widespread use of trustworthy AI methods such as explainability and algorithmic fairness. Such methods are crucial to ensure that AI is integrated responsibly and safely into increasingly critical societal functions, such as medical imaging. Nevertheless, the reliability of explanations and algorithmic fairness models is rarely addressed in state-of-the-art responsible AI research. In this project, we will showcase how trustworthy AI algorithms can fail, and develop theoretical and practical links between uncertainty in AI models and the failure modes of their trustworthy counterparts.

This has several advantages. First, it gives us potential tools to assess whether trustworthy AI algorithms are likely to fail, so that we can use them safely when they are not. Second, these methods generalize straightforwardly to modern generative AI models, for which trustworthy AI tools are currently largely unavailable.
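As a toy illustration of the link between model uncertainty and explanation failure (a simplified stand-in, not the project's methodology), the sketch below trains a bootstrap ensemble of logistic regression models and measures how strongly their gradient-based feature attributions disagree at a test point; high disagreement flags the explanation itself as unreliable.

```python
# Illustrative sketch only, assuming a bootstrap-ensemble setup: when an
# ensemble of models disagrees (high model uncertainty), their gradient-based
# feature attributions also tend to disagree, flagging the explanation as
# unreliable. All data is synthetic.
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0])
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def fit_logistic(X, y, steps=500, lr=0.1):
    # Plain gradient descent on the mean logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Bootstrap ensemble: each member is trained on a resampled training set.
ensemble = [fit_logistic(X[idx], y[idx])
            for idx in (rng.integers(0, n, n) for _ in range(20))]

# For logistic regression, the input gradient (saliency) of the predicted
# probability at x is p * (1 - p) * w.
x_test = rng.normal(size=d)
saliencies = []
for w in ensemble:
    p = 1 / (1 + np.exp(-x_test @ w))
    saliencies.append(p * (1 - p) * w)
saliencies = np.array(saliencies)

# Disagreement of attributions across the ensemble = explanation uncertainty.
print("mean attribution:", saliencies.mean(axis=0).round(3))
print("attribution std: ", saliencies.std(axis=0).round(3))
```

The attribution standard deviation across the ensemble acts as a per-feature reliability estimate for the explanation, mirroring the proposed link between uncertainty in AI models and the failure modes of their trustworthy counterparts.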