School of Computer Science

Faculty of Science Doctoral Training Centre in Artificial Intelligence

AI DTC 2022

The Faculty of Science AI DTC is a new initiative by the University of Nottingham to train future researchers and leaders to address the most pressing challenges of the 21st Century through foundational and applied AI research on a cohort basis. The training and supervision will be delivered by a team of outstanding scholars from different disciplines cutting across Biosciences, Chemistry, Computer Science, Mathematical Sciences, Pharmacy, Physics and Astronomy, and Psychology.

The Faculty of Science invites applications from Home students for up to 6 fully-funded PhD studentships to carry out multidisciplinary research in the world-transforming field of artificial intelligence. The PhD students will have the opportunity to:

 
  • Choose from a wide range of AI-related multidisciplinary research projects, working with world-class academic experts in their fields;
  • Benefit from a fully-funded PhD with an attractive annual tax-free stipend;
  • Join a multidisciplinary cohort to benefit from peer-to-peer learning and transferable skills development.
Studentship information
Entry requirements: Minimum of a 2:1 bachelor's degree in a discipline relevant to the research topic (please consult the potential supervisors), and a strong enthusiasm for artificial intelligence research. Studentships are open to Home students only.
Start date: 1st October 2022
Funding: Annual tax-free stipend based on the UKRI rate (currently £15,609) plus fully-funded PhD tuition fees for the four years
Duration: 4 years

 

The deadline for completing and submitting your application via NottinghamHub is 8th April 2022.

For information on how to apply, click here. 

Research Topics

Rooted in the exceptional research environments of the Faculty of Science Schools at the University of Nottingham, the first cohort of the AI DTC will be organised around 15 multidisciplinary research topics. It is important that you identify a research topic aligned with the expected skill set, your background and your particular areas of interest. You will need to obtain support from the supervisors associated with your chosen research topic before submitting your official application. You can do this by exploring the research projects below and directly contacting the main supervisor of the project that interests you, to discuss further details and arrange an interview as appropriate. In your PhD studentship application, you will be asked to provide a research topic from the following list and state the names of the supervisors whose support you have obtained.

Explainable Generative Models for Biomaterials Discovery

This project aims to develop interpretable machine learning and artificial intelligence (AI) approaches to the design of novel biomaterials to be used in medical devices.

Advanced biomaterials are urgently needed to address the healthcare challenges faced by societies with ageing populations. High-throughput technologies generate substantial volumes of data on large numbers of biomaterials with diverse chemical and topographical properties. Machine learning methods have proven highly successful at predicting the useful properties that drive biological phenomena within complex materials. This project explores AI for the design of new biomaterials. It involves three stages:

  1. Extracting materials’ chemical and topographical properties using deep learning and few-shot learning;
  2. Designing new improved materials using generative methods;
  3. Machine learning decision interpretation to further inform biomaterials researchers about the design decisions made by the machines.

Applicants are expected to have experience of machine learning, Python, and software engineering. Knowledge of evolutionary algorithms and agile methodology is desirable, but not essential.

Supervisors: Dr Grazziela Figueredo (School of Computer Science), Prof Morgan Alexander (School of Pharmacy).

For further details and to arrange an interview please contact Dr Grazziela Figueredo.

 

Explainable Transfer Learning in Understanding Mechanisms in Cancer

AI promises to assist in developing improved understanding of disease risk and precision medicine strategies in cancer and other diseases. However, two confounding issues are frequently encountered:

  1. Insufficient training data: Often, there are too few labelled samples to enable effective AI model creation. One feasible solution is to adopt transfer learning methods by transferring knowledge extracted from other domain(s), but there are many technical challenges. For example, when transferring between different types of cancer, some features may be constant, but others may be domain specific. Novel AI techniques are required to allow transfer learning to create effective models.
  2. Concept drift: In fast-moving fields in which new knowledge is constantly emerging, it is important to explore novel methods for coping with concept drift in decision support systems. That is, how to allow systems to 'drift' to accommodate new knowledge, while doing so in a controlled and understandable manner.

This project will explore novel mechanisms for incorporating transfer learning and concept drift into decision support models in the context of understanding mechanisms of cancer biology.

Supervisors: Prof Jon Garibaldi (School of Computer Science), Prof Christian Wagner (School of Computer Science), Prof DM Heery (School of Pharmacy).

For further details and to arrange an interview please contact Prof Jon Garibaldi.

 

Artificial Synapses with Dual Opto-Electronic control for Ultra-Fast Neuromorphic Computer Vision

Memristors (or resistive memory) are a new generation of electronic devices that directly emulate the chemical and electrical switching of biological synapses, i.e., the key learning and memory components of the human brain. Memristors also have the advantages of ultra-fast switching, low power consumption, and nanoscale size, and therefore have the potential to usher in a whole new era of artificial intelligence, devices, and applications. The aim of this project is to develop new state-of-the-art memristor devices that can switch optically as well as electronically, thereby enabling these “optically switching synapses” to be used as “in-memory” computing elements in neuromorphic circuits for computer vision applications. This PhD project will develop new optically active materials, based on semiconducting nanowires/nanotubes coupled with metal nanoclusters and/or photoactive molecules, with enhanced light-sensing capabilities that are suitable for integration with memristor materials and devices. You will learn materials synthesis and deposition techniques, nanoscale device fabrication as well as advanced electrical and optical characterization methods.

Supervisors: Dr Neil Kemp (School of Physics and Astronomy), Professor Andrei Khlobystov (School of Chemistry), Dr Jesum Alves Fernandes (School of Chemistry).

For further details and to arrange an interview please contact Dr Neil Kemp.

 

Artificial gene regulatory networks as a new AI paradigm

Gene regulatory networks (GRNs) are the primary means by which living cells are programmed to respond to their environment in real time. They allow a population of genetically identical cells to behave differently; for example, the cells in our eyes behave differently from the cells in our skin. GRNs evolve in a specific way, allowing them to learn new responses or behaviours from previous patterns without losing existing knowledge. Artificial GRNs (aGRNs), that is, computer implementations of GRNs, have been used to help understand the biology of GRNs. However, they have not been considered as a computational paradigm in their own right.

The aim of this project is to establish aGRNs as a computational AI paradigm. It will involve the implementation of aGRNs using both deterministic and stochastic formulations, and the identification and testing of problem types for which this paradigm is likely to be especially valuable. These include systems that need to switch rapidly between different contexts, and systems that need to transfer learning from one domain to another. These are both important challenges for improving the generalisation of AI systems.
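As a rough illustration of the deterministic formulation (an assumption made here for concreteness, not the project's prescribed model), the sketch below implements a tiny aGRN: two mutually repressing genes described by coupled ODEs with Hill-function regulation. Genetically identical "cells" started from different initial conditions settle into different stable behaviours, the kind of context-dependent response the paradigm aims to exploit.

```python
# Minimal deterministic aGRN sketch: a two-gene toggle switch.
# The motif, parameter values and gene count are illustrative choices only.
import numpy as np
from scipy.integrate import solve_ivp

def toggle_switch(t, x, alpha=10.0, n=2.0, gamma=1.0):
    """dx_i/dt = Hill-repressed production by the other gene minus linear decay."""
    a, b = x
    da = alpha / (1.0 + b**n) - gamma * a
    db = alpha / (1.0 + a**n) - gamma * b
    return [da, db]

# Two initial conditions, two distinct stable expression states.
for x0 in ([5.0, 0.1], [0.1, 5.0]):
    sol = solve_ivp(toggle_switch, (0.0, 50.0), x0)
    print(x0, "->", np.round(sol.y[:, -1], 2))
```

A stochastic formulation of the same network could, for example, replace the ODEs with a Gillespie-style simulation of discrete production and decay events.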

Applicants are expected to have strong computer programming skills and a broad knowledge of artificial intelligence. Some biosciences background would also be beneficial to help understand the concepts, but this could be learned as part of the project if needed.

Supervisors: Dr Colin Johnson (School of Computer Science), Prof Dov Stekel (School of Biosciences).

For further details and to arrange an interview please contact Dr Colin Johnson.

 

Developing a robust methodology for consumers to analyse taste buds (fungiform papillae) using smartphones

Fungiform papillae (FP) are ‘mushroom-like’ papillae that appear as pinkish spots on the anterior part of the tongue and contain taste buds. Research has suggested that the anatomical structure of FP varies greatly across individuals and could be a marker for taste sensitivity, which may in turn be linked to food preference and choice. Until now, manual FP counting from digital photographs has been the most popular method of quantification. This is extremely time-consuming and error-prone. Automated methods have started to be developed in recent years; however, these require high-quality images taken under very strict conditions using professional cameras.

This project aims to:

  1. Fully automate the quantification of FP using cutting edge computer vision methods, such that we are able to provide reliable counts from lower quality images.
  2. Develop an interactive imaging app that can be used to guide the self-photography of FP at home. We will use the app platform to additionally explore automated capture of food choices and nutritional information from plated food (which can add new dimensions to future studies).
  3. Integrate this new technology in a food sensory study investigating the relationship between FP, taste sensitivity and taste preference.

The successful applicant will develop deep learning algorithms to quantify FP on the tongue from images collected via smartphones with an interactive app, so that consumers can easily capture this information themselves. The app itself will use computer vision techniques to interactively help the user take a high-quality photograph. Together, these two novelties contribute to a new platform for conducting taste research in the general public.

Although applicants are expected to have a computer science background, they will be integrated into a team of food scientists to help create a new image dataset to train the machine learning models, and to co-develop the app with domain input to ensure the system delivers the best quality data it can.

Supervisors: Prof Andrew French (School of Computer Science), Dr Qian Yang (School of Biosciences).

For further details and to arrange an interview please contact Prof Andrew French.

 

Machine Learning for complex 3D data structures

Plant canopy architecture, the arrangement of plant structural material in three dimensions (3D), determines plant function, resource capture and performance. The ability to measure and apply architecture is of great importance. Architecture is, however, governed by a complex set of traits, and whilst the tools for its study have advanced, numerous limitations still prevent key breakthroughs. Generating accurate 3D digital models of plant structures is difficult, with many challenges relating to the complexity of plants. An efficient and accurate method for obtaining such traits is urgently required.

This project combines computer vision and machine learning in a biological setting, coupling accurate plant model generation with an automatic phenotyping pipeline. While the current state of the art shows that machine learning can be applied to 3D models of simple structures, its application to biological objects is in its infancy. Expanding on existing machine learning techniques, the successful candidate will develop novel neural networks and apply them to complex objects consisting of mesh surfaces, to automatically extract plant traits relating to architecture and to repair errors in the underlying mesh representation. The methodology will be evaluated using the µX-ray CT-scanning facilities available at the University’s Hounsfield Facility.

Applicants are expected to have knowledge of programming (Python or C++) plus a strong interest in machine learning and computer vision. Experience of deep learning and an interest in biological systems are desirable, but not essential. 

Supervisors: Dr Alexandra Burgess (School of Biosciences), Prof Erik Murchie (School of Biosciences), Prof Tony Pridmore (School of Computer Science).

For further details and to arrange an interview please contact Dr Alexandra Burgess.

 

Machine learning for first-principles calculation of physical properties

The physical properties of all substances are determined by the interactions between the molecules that make up the substance. The energy surface corresponding to these interactions can be calculated from first principles, in theory allowing physical properties to be derived ab-initio from a molecular simulation; that is, by theory alone and without the need for any experiments. Recently we have focussed on applying these techniques to model carbon dioxide properties, such as density and phase separation, for applications in Carbon Capture and Storage. However, there is enormous potential to exploit this approach in a huge range of applications. A significant barrier is the computational cost of calculating the energy surface quickly and repeatedly, as a simulation requires. We have recently developed a machine-learning technique that, by using a small number of precomputed ab-initio calculations as training data, can efficiently calculate the entire energy surface. This project will involve extending the approach to more complicated molecules and testing its ability to predict macroscopic physical properties.
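As a rough illustration of the general idea of a learned surrogate (the specific model below is an assumption, not the group's technique), the sketch fits a Gaussian process to a handful of "expensive" interaction energies and then queries the cheap surrogate at new separations; a Lennard-Jones-style toy potential stands in for real ab-initio data.

```python
# Illustrative surrogate-model sketch: learn an interaction-energy curve from a
# few precomputed points, then evaluate it cheaply wherever a simulation asks.
# The Lennard-Jones placeholder and the Gaussian-process choice are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def placeholder_energy(r):
    """Toy pair potential standing in for expensive ab-initio calculations."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

r_train = np.linspace(0.95, 2.5, 12).reshape(-1, 1)   # a few "expensive" points
e_train = placeholder_energy(r_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                              normalize_y=True)
gp.fit(r_train, e_train)

r_query = np.array([[1.12], [1.5], [2.0]])            # cheap surrogate queries
e_pred, e_std = gp.predict(r_query, return_std=True)
for r, e, s in zip(r_query.ravel(), e_pred, e_std):
    print(f"r = {r:.2f}: E ~ {e:+.3f} +/- {s:.3f}")
```

The predictive standard deviation indicates where additional training points would be most informative, which is one way such a surrogate can keep the number of expensive ab-initio calculations small.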

Applicants will be expected to have a numerate background from a first degree in Maths, Chemistry, Physics or similar, and an interest in learning about applied machine learning and in developing their coding and data science skills.

Supervisors: Prof Richard Graham (School of Mathematical Sciences), Dr Richard Wheatley (School of Chemistry).

For further details and to arrange an interview please contact Prof Richard Graham.

 

A Generic Image Segmentation Platform for Novel Feature Exploration in Multimodal MR Imaging using Minimally Supervised Machine Learning

This PhD will develop novel AI approaches to image segmentation to explore meaningful clinical features in multimodal Magnetic Resonance Imaging (MRI).  We will drive this by tackling several important imaging scenarios characterised by small datasets and challenging image quality. 

Often in advanced MRI we can visually identify a feature of interest and segment it manually with a-priori knowledge about anatomical features and expected MR contrast. However, this can be extremely difficult and time-consuming, particularly when we are interested in features that are ‘difficult’ to segment, for instance very small objects with low contrast, or objects that have ill-defined anatomy or variable image contrast. For example, layer 4 of the cortex is a very fine continuous layer that runs through much of the grey matter of the brain and is pivotal in furthering our understanding of brain function and dysfunction. However, it is hard to distinguish reliably in all subjects, and there are currently no methods to automatically identify it from MR images.

The aim of this PhD project is to:

  • Develop machine learning methods to segment the tissue of interest with minimal high-level supervision (e.g. shape, topology, connectivity etc.).
  • Supplement the machine learning results with model-based approaches where appropriate, e.g. to ensure the results maintain a complete surface rather than broken patches.
  • Optimise the multimodal MRI acquisition, informed by initial AI results, to maximise the efficiency of the automatic segmentation in terms of image quality and data harmonisation.

Supervisors: Dr Xin Chen (School of Computer Science), Dr Karen Mullinger (School of Physics and Astronomy), Dr Caroline Hoad (School of Physics and Astronomy), Prof Andrew French (School of Computer Science), Prof Penny Gowland (School of Physics and Astronomy).

For further details and to arrange an interview please contact Dr Xin Chen.

 

AI for Optimisation of Chemical Reactions

This is an interdisciplinary project cutting across Chemistry and Computer Science, which will look into the practical goals of maximising the efficiencies and selectivities of chemical reactions using AI optimisation and machine learning techniques. Traditionally, chemists have done this by empirical experimental procedures, which have three critical failings: they are slow, they explore chemical data space poorly, and, critically, ‘fail’ results are often ignored completely (because they are frequently beyond human interpretation).

An initial collaborative study, carried out using a limited data set, describes the featurisation of seven components of a chemical catalyst producing chiral pharmaceutical building blocks. Interpretable featurisation allows Quantitative Structure-Property Relationships (QSPR) to be established between the catalyst structure and the chiral purity of the pro-pharmaceutical products in these rhodium-catalysed asymmetric Michael additions (RhCASA). Our overall machine learning approach favours easier human interpretation and the realisation of improved catalysts, some of which are currently on preliminary trial within GlaxoSmithKline.

This PhD project will extend the previous study, investigating recent advances in optimisation and machine learning methods focusing on active learning, explainable machine learning and generative methods, and how they can be applied effectively to the challenging data at hand. 

Applicants will be expected to have a Chemistry or Chemical Science relevant background. Knowledge of machine learning and/or optimisation methods is desirable, but not essential. 

Supervisors: Prof Simon Woodward (School of Chemistry), Prof Ender Ozcan (School of Computer Science), Dr Grazziela Figueredo (School of Computer Science). Industrial External Advisors/Mentors: GlaxoSmithKline (Dr Katherine Wheelhouse), molecule.one (Piotr Byrski).

For further details and to arrange an interview please contact Prof Simon Woodward.

 

Age-related changes in brain connectivity and cognition studied through machine learning

This PhD project aims to study how the brain/cognition relationship changes across the lifespan, using an open data archive containing neuroimaging data from 600+ participants (Cambridge Centre for Ageing Neuroscience). The project will combine information obtained from multiple brain imaging methods:

  1. Functional MRI (fMRI), which measures brain activity indirectly through its effects on blood vessels;
  2. Magnetoencephalography (MEG), which measures the magnetic field around the head generated by brain activity as a direct measure of neural activation;
  3. Diffusion-weighted MRI (DWI), which measures the anatomical “wiring” of the brain; and
  4. Structural MRI (sMRI), which measures shape-related properties of cortical folding and subcortical structure.

To combine information from the various neuroimaging modalities, several approaches will be explored, among them methods based on recent theoretical advances in complex networks that employ multilayer networks to describe multiple interacting network structures simultaneously. Machine learning methods, including neural networks and relevance vector machines, will be used to determine which aspects of brain networks (across all imaging modalities studied) best predict individual cognitive ability. Finally, this approach will be used to test existing theories of how brain networks reorganize with age, with hypotheses about age-related changes of brain lateralization and about shifts between anterior and posterior brain activation.

Applicants are expected to have basic Matlab or Python programming skills, and a quantitative background (physics, mathematics, computer science or engineering) is desirable.

Supervisors: Dr Christopher Madan (School of Psychology), Dr Reuben O'Dea (School of Mathematical Sciences), Dr Andrew Reid (School of Psychology), Dr Martin Schürmann (School of Psychology).

For further details and to arrange an interview please contact Dr Martin Schürmann.

 

Life of a sperm whale: AI support for its preservation

The sperm whale is a long-lived pelagic mammal with a worldwide range, and it has been listed as endangered in the Mediterranean Sea. Knowledge of sperm whale social organisation and movement in the basin is still scarce. Non-invasive techniques to study their lives, habits and migration patterns include geotagged photographic data. Single-subject identity is reconstructed through a visual investigation of unique marks and pigmentation patterns on the dorsal fin, tail and other body parts. However, this process is still primarily done manually, which inhibits the ability of researchers to track individuals across large geographic areas and time scales.

This project will develop advanced deep learning approaches to automate the identification of single subjects (individual whales), combining this with picture geolocation data to track interactions between subjects and the evolution of whale pods over time. The project will use a combination of image recognition, machine learning and statistical inference methodologies to address questions about the social structure, habits and movements of the sperm whales that populate the Mediterranean Sea. The outcome of this project will support the activity of the NGO OceanoMare Delphis in education and in influencing national and international policies for the conservation of whales' critical habitats and migration corridors.

Supervisors: Dr Silvia Maggi (School of Psychology), Dr Michael Pound (School of Computer Science), Prof Theodore Kypraios (School of Mathematical Sciences). External Advisor: Barbara Mussi, President of OceanoMare Delphis Onlus.

For further details and to arrange an interview please contact Dr Silvia Maggi.

 

Seeking a better view: Guiding cameras for optimal imaging via reinforcement learning

In Biosciences, image capture is often used as the first step in obtaining reliable scientific measurements of plants. Good image capture is fundamental to many of these experiments: by measuring plants you can determine which are healthier, more robust, or producing more food. However, you can’t measure what you can’t see, and capturing a better image could be crucial in detecting the subtle differences which indicate a higher-yielding crop or resistance to pests. When humans take photos they adjust their position and angle to better see the subjects being captured, particularly when imaging physically complex and highly variable objects. Photographers spend years learning to take better pictures, yet in science most image capture is performed using static cameras and a “one size fits all” approach chosen for convenience. This introduces a significant risk that crucial information will not be captured.

Advances in machine learning and robotics mean we can now train machines to look for better views before taking pictures; this is active vision. This PhD will explore reinforcement learning techniques to move robotically controlled cameras to new positions, viewpoints and zoom levels in order to better capture the subject of interest.

You will explore approaches to train robotic systems to capture better images with no human interaction. You will work across disciplines, imaging a variety of plant subjects ranging in size and shape, with a view to improving performance on a range of tasks including 3D reconstruction, feature detection and counting. Access will be provided to automated imaging systems with movement capabilities ranging from linear actuators to 6-DoF robotic manipulators, and you will work closely with bioengineers in designing new systems. Key to this work will be an exploration of different reward systems: how do we determine that one view is better than another, or which view is best? We will then explore the most appropriate and powerful reinforcement learning regimes, including those guided by human examples. Though the PhD will focus on plants, the results of your work will ultimately be used to drive imaging systems across the University, and will be applicable to many other areas of image capture, such as physics, astronomy and medical imaging.

Supervisors: Dr Michael Pound (School of Computer Science), Dr Darren Wells (School of Biosciences), Dr Jonathan Atkinson (School of Biosciences),  Prof Tony Pridmore (School of Computer Science).

For further details and to arrange an interview please contact Dr Michael Pound.

 

Training a Nanorobot to Build a Molecule

The invention of the scanning probe microscope (SPM) in the early 1980s revolutionised the science of the ultrasmall. Atoms, molecules, and nanostructures are now routinely probed with a spatial resolution all the way down to the single chemical bond limit. Measurements of quantum phenomena that, as recently as a few decades ago, were thought to be so far beyond our capabilities that they would forever be only gedankenexperiments have been made possible, in spectacular fashion, by the probe microscope.

Since the groundbreaking work of IBM Research Labs thirty years ago, a key focus of scanning probe microscopy has been the controlled manipulation of single atoms and molecules to form what have been described as designer states of matter. In this context, an SPM is better thought of as a robot capable of targeting and positioning single atoms, rather than a microscope alone.

But despite this unique ability to manipulate matter on the smallest scales, SPM has a big problem: it’s painfully slow to position single atoms, not least because the human operator represents a major bottleneck in the process. This PhD project represents the next stage in atom manipulation: the integration of machine learning with probe microscopy to automate the assembly of matter from its constituent atomic and molecular building blocks. Building on recent work in the Nanoscience Group in the School of Physics & Astronomy, and in collaboration with the School of Computer Science, you will develop algorithms, architectures, and protocols to automate atomic manipulation, with the ultimate objective of building a molecule, an atom at a time, without human intervention.

Supervisors: Prof Philip Moriarty (School of Physics & Astronomy), Dr Michael Pound (School of Computer Science), Dr Brian Kiraly (School of Physics & Astronomy). Industry partners: Unisoku.

For further details and to arrange an interview please contact Prof Philip Moriarty.

 

Modelling Human-Robot Interaction in Social Spaces

Robotics and related AI technologies are rapidly gaining presence in different areas of our everyday life, e.g. cleaning robots vacuuming floors, warehouse robots carrying pallets, robotic vehicles with cruise control. An exciting use of robotics is social and telepresence robots, which are intended to work in public and social contexts, including educational and museum settings, and to provide support for older adults and populations with accessibility issues.

This PhD project will study and quantify human interactions with commercially available robots in different contexts (participants/robots/places/functions) with a view to creating models of human-robot interaction (HRI) in these contexts. These models will help to improve the design of spaces to optimise human-robot interaction, and will also inform the development of best-practice guidelines for robot embodiment, interaction strategies and autonomous behaviour.

In line with this goal, this PhD project aims to model sustainable human-robot interaction strategies for socially capable robots designed to function in public spaces. The project will target technological and psycho-sociological challenges related to AI to investigate the following overarching research questions:

  1. How can social and telepresence robots be used to connect groups of remote humans and mediate the interaction between them?
  2. What kind of personalisation methods and input/output modalities are useful to improve the interaction between humans and robots and enable long term sustainability of the communications?
  3. How do the attitudes and perceptions toward robots change in children and adults over time?
  4. Are these attitudes and perceptions affected by cultures, communities and the interaction environments? 

This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science, Robotics and Psychology. Applicants are expected to develop technological advancements in AI and Interaction Design, including using machine learning to generate personalised user models for children and adults, adaptive motion planning in social environments, and feedback generation. In addition, the successful student will design, conduct and analyse experiments to investigate the socio-psychological effects of the technologies.

Supervisors: Dr Ayse Kucukyilmaz (School of Computer Science), Dr Emily Burdett (School of Psychology), Prof. Praminda Caleb-Solly (School of Computer Science).

For further details and to arrange an interview please contact Dr Ayse Kucukyilmaz.

 

Machine Learning for predicting yeast phenotype from genotype for biotech applications

Yeast is an ideal platform for the manufacture of biomedically important protein products, including life-saving medicines (e.g. vaccines, therapies for cancer and infectious diseases). The diversity of yeast genotypes and of the proteins to be produced means that the best yeast strain for maximum yield and quality of a given product is typically unknown. One approach is to screen a panel of yeast strains for each desired protein product, but this is labour-intensive and inefficient, making no use of past experience, of knowledge about specific gene loci involved in determining yield, or of previous related proteins that might be expected to behave similarly. More generally, being able to predict phenotype (observable traits) from genotype is a very important challenge in the biosciences with broad applications. This project will apply artificial intelligence approaches to address this challenge: given data on yeast genotypes, growth conditions and phenotypes (traits), can we develop predictive models for the phenotype of novel strains and hence ultimately predict strains that could outperform any of those in the training data? Such novel strains could be produced using synthetic biology approaches and the model predictions tested.

This project would work with published data from the group of Ed Louis (Chief Scientist, Phenotypeca, industrial partner on this project), to develop the AI approaches and understanding of the context. This data includes large panels of hundreds of genotypes with quantitative measurements of traits such as growth and response to various treatments. The developed approaches could then be applied in the context of Phenotypeca, which has the world's largest collection of yeast strains for recombinant protein production.
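As a rough sketch of how the prediction task might be framed (a generic supervised-learning baseline, not the project's actual method), the example below trains a random forest to predict a quantitative trait from a binary genotype matrix; the data are synthetic stand-ins for the strain panels described above.

```python
# Illustrative phenotype-from-genotype sketch with synthetic data.
# Strain counts, locus counts and the random-forest model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_strains, n_loci = 300, 500
genotypes = rng.integers(0, 2, size=(n_strains, n_loci))   # 0/1 allele calls
causal = rng.choice(n_loci, size=10, replace=False)        # hidden causal loci
phenotype = genotypes[:, causal].sum(axis=1) + rng.normal(0, 1, n_strains)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, genotypes, phenotype, cv=5, scoring="r2")
print("cross-validated R^2:", round(scores.mean(), 2))

# Feature importances give a first hint at which loci drive the prediction,
# a starting point for relating the model back to specific genes.
model.fit(genotypes, phenotype)
top_loci = np.argsort(model.feature_importances_)[-10:]
print("top loci by importance:", sorted(top_loci.tolist()))
```

In practice the genotype matrix would come from the published strain panels, growth conditions would enter as additional features, and model choice, including approaches that extrapolate beyond the training strains, would be a central research question.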

Applicants will be expected to have strong programming skills, e.g. in R and/or Python, and a background in Statistics, Mathematics, Computer Science, Computational Biology or a relevant discipline with a significant data analysis component.

Supervisors: Prof Markus Owen (School of Mathematical Sciences), Dr Simon Preston (School of Mathematical Sciences), Prof Jonathan Hirst (School of Chemistry). Industry partner: Ed Louis, Chief Scientist, Phenotypeca.

For further details and to arrange an interview please contact Prof Markus Owen.

 

Further information

For further enquiries, please contact Professor Ender Özcan - School of Computer Science

School of Computer Science

University of Nottingham
Jubilee Campus
Wollaton Road
Nottingham, NG8 1BB

For all enquiries please visit:
www.nottingham.ac.uk/enquire