XAI-graph: Explainable AI and Improved Measurements of Uncertainty for Machine Learning on (Biomolecular) Structure Graphs

Explainability and the quantification of uncertainty are crucial for increasing the credibility of deep learning (DL) methods. This project covers the integration of explainable artificial intelligence (XAI) into our existing deep learning approach (dfpl). In particular, we extend the graph neural network implementation of dfpl with suitable explanation algorithms (a sketch of this idea follows below). Further, we aim to address the issue of side or unwanted chemical effects using artificial intelligence. A potential research question is: Can we generate molecular structures that interfere with a particular receptor (which is to be defined) but do not interact with receptors that induce endocrine disruption or neuronal activity? We aim to create a prototypic tool that generates safe-by-design molecular structures using generative adversarial network (GAN) architectures, naturally including XAI.
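As an illustration of what such an extension could look like, the following is a minimal sketch of attributing a graph-level toxicity prediction to individual atoms and bonds with GNNExplainer via PyTorch Geometric. The `ToxGNN` model, the feature layout, and the toy graph are hypothetical stand-ins; the actual dfpl graph neural network and its interface may differ.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.explain import Explainer, GNNExplainer
from torch_geometric.nn import GCNConv, global_mean_pool


class ToxGNN(torch.nn.Module):
    """Hypothetical stand-in for the dfpl graph neural network."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, 1)  # one toxicity endpoint

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return self.lin(global_mean_pool(h, batch))  # graph-level logit


model = ToxGNN(in_dim=3)  # must match the node feature dimension

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type="model",
    node_mask_type="attributes",  # importance of atom features
    edge_mask_type="object",      # importance of individual bonds
    model_config=dict(
        mode="binary_classification",
        task_level="graph",
        return_type="raw",
    ),
)

# Toy 3-atom graph; node features here are [atomic_num, degree, is_aromatic].
data = Data(
    x=torch.tensor([[6.0, 2.0, 0.0], [8.0, 1.0, 0.0], [6.0, 1.0, 0.0]]),
    edge_index=torch.tensor([[0, 1, 0, 2], [1, 0, 2, 0]]),
    batch=torch.zeros(3, dtype=torch.long),
)
explanation = explainer(data.x, data.edge_index, batch=data.batch)
print(explanation.edge_mask)  # learned per-bond importance scores
```

The learned edge mask highlights the substructures that drive a prediction, which is the kind of atom- and bond-level attribution a chemist can inspect directly on the molecular structure.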

We aim to develop a toolbox for assessing toxicity at the scale of the chemical universe, containing ready-to-use applications that can be applied to any compound, or set of compounds, from that universe (see the sketch below).
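For such a toolbox to accept arbitrary compounds, it needs a uniform route from a chemical identifier to a structure graph. The following is a minimal sketch, assuming SMILES input, RDKit for parsing, and PyTorch Geometric's Data object as the graph container; the featurization actually used in dfpl may differ.

```python
import torch
from rdkit import Chem
from torch_geometric.data import Data


def smiles_to_graph(smiles: str) -> Data:
    """Turn a SMILES string into a minimal PyTorch Geometric structure graph."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")

    # Toy atom features: atomic number, degree, aromaticity flag.
    x = torch.tensor(
        [
            [atom.GetAtomicNum(), atom.GetDegree(), int(atom.GetIsAromatic())]
            for atom in mol.GetAtoms()
        ],
        dtype=torch.float,
    )

    # Every bond becomes two directed edges, making the graph undirected.
    edges = []
    for bond in mol.GetBonds():
        i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        edges += [(i, j), (j, i)]
    edge_index = (
        torch.tensor(edges, dtype=torch.long).t().contiguous()
        if edges
        else torch.empty((2, 0), dtype=torch.long)
    )
    return Data(x=x, edge_index=edge_index)


print(smiles_to_graph("CC(=O)Oc1ccccc1C(=O)O"))  # e.g., aspirin
```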

Further, we aim to apply our tools to the chemical inventory, thoroughly investigate their predictions, and use them for hypothesis generation.

HAICU
This is a Helmholtz AI Cooperation Unit (HAICU) project.