School of Computer Science

Faculty of Science Doctoral Training Centre in Artificial Intelligence: Cohorts and Projects





Artificial gene regulatory networks as a new AI paradigm

Gene regulatory networks (GRNs) are the primary means by which living cells are programmed to respond to their environment in real time. They allow a population of genetically identical cells to behave differently, for example the way cells in our eyes behave differently from cells in our skin. GRNs evolve in a specific way, allowing them to learn new responses or behaviours from previous patterns without losing existing knowledge. Artificial GRNs (aGRNs), that is, computer implementations of GRNs, have been used to help understand the biology of GRNs. However, they have not been considered as a computational paradigm in their own right. 

The aim of this project is to establish aGRNs as a computational AI paradigm. It will involve the implementation of aGRNs using both deterministic and stochastic formulations, and the identification and testing of problem types for which this paradigm is likely to be especially valuable. These include systems that need to switch rapidly between different contexts, and systems that need to transfer learning from one domain to another. These are both important challenges for improving the generalisation of AI systems. 
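By way of illustration (and not part of the project specification), a deterministic aGRN can be simulated as a system of coupled differential equations with Hill-function regulation. The sketch below integrates a toy two-gene mutual-repression network (a "toggle switch"); the gene count, Hill parameters and decay rates are all invented for illustration:

```python
def hill_repression(x, k=1.0, n=2):
    """Repressive Hill function: output falls as regulator level x rises."""
    return k**n / (k**n + x**n)

def simulate(a0, b0, steps=5000, dt=0.01, decay=0.2):
    """Euler-integrate a toy two-gene mutual-repression network (toggle switch)."""
    a, b = a0, b0
    for _ in range(steps):
        da = hill_repression(b) - decay * a   # synthesis repressed by b, minus decay
        db = hill_repression(a) - decay * b   # synthesis repressed by a, minus decay
        a += dt * da
        b += dt * db
    return a, b

# Whichever gene starts higher wins and stably silences the other,
# giving two distinct "cell types" from identical machinery:
high_a, low_b = simulate(1.5, 0.1)
```

The bistability of this tiny system is the mechanism behind the point made above: genetically identical cells settling into different stable behaviours. A stochastic formulation would replace the deterministic updates with, for example, Gillespie-style discrete reaction events.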

Applicants are expected to have strong computer programming skills and a broad knowledge of artificial intelligence. Some biosciences background would also be beneficial to help understand the concepts, but this could be learned as part of the project if needed. 

Supervisors: Dr Colin Johnson (School of Computer Science), Prof Dov Stekel (School of Biosciences).

For further details and to arrange an interview please contact Dr Colin Johnson

Artificial Synapses with Dual Opto-Electronic control for Ultra-Fast Neuromorphic Computer Vision

Memristors (or resistive memory) are a new generation of electronic devices that directly emulate the chemical and electrical switching of biological synapses, i.e., the key learning and memory components of the human brain. Memristors also have the advantages of ultra-fast switching, low power consumption, and nanoscale size, and therefore have the potential to usher in a whole new era of artificial intelligence, devices, and applications. The aim of this project is to develop new state-of-the-art memristor devices that can switch optically as well as electronically, thereby enabling these “optically switching synapses” to be used as “in-memory” computing elements in neuromorphic circuits for computer vision applications. This PhD project will develop new optically active materials, based on semiconducting nanowires/nanotubes coupled with metal nanoclusters and/or photoactive molecules, with enhanced light-sensing capabilities that are suitable for integration with memristor materials and devices. You will learn materials synthesis and deposition techniques, nanoscale device fabrication, as well as advanced electrical and optical characterization methods. 

Supervisors: Dr Neil Kemp (School of Physics and Astronomy), Professor Andrei Khlobystov (School of Chemistry), Dr Jesum Alves Fernandes (School of Chemistry).

For further details and to arrange an interview please contact Dr Neil Kemp

Enhanced artificial intelligence for retrosynthesis planning

In this PhD project, we will develop innovative enhancements of the Monte Carlo tree search (MCTS) algorithm for the problem of retrosynthesis. Retrosynthesis is the process of repeatedly breaking down a ‘target’ molecule using valid chemical reactions to obtain a series of simpler starting molecules, together with the reaction routes that lead back to the initial target molecule. MCTS is an efficient search algorithm, most notably known for its use in Google DeepMind’s AlphaGo. The algorithms developed in the project will be implemented in our ai4green electronic lab notebook, which is available as a web-based application and is the focus of a major ongoing project supported by the Royal Academy of Engineering. Improvements to the MCTS algorithm in the context of retrosynthesis will help chemists to make molecules in a greener and more sustainable fashion, by identifying routes with fewer steps or routes involving more benign reagents. 
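For orientation, the four classic MCTS phases (selection, expansion, simulation, backpropagation) can be sketched generically as below. This is a textbook UCT skeleton, not the project's implementation; the integer "states" and reward function stand in for molecule sets and route-quality scores, which in real retrosynthesis would come from a chemistry model:

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state = state        # e.g. the set of molecules still to be made
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0          # accumulated reward (e.g. route quality)

def uct_select(node, c=1.4):
    """Choose the child maximising the UCT score (exploitation + exploration)."""
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, expand, rollout, iterations=100):
    """Generic MCTS loop: selection, expansion, simulation, backpropagation."""
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        while node.children and all(ch.visits > 0 for ch in node.children):
            node = uct_select(node)                                # 1. selection
        if not node.children:
            node.children = [Node(s, node) for s in expand(node.state)]  # 2. expansion
        unvisited = [ch for ch in node.children if ch.visits == 0]
        if unvisited:
            node = random.choice(unvisited)
        reward = rollout(node.state)                               # 3. simulation
        while node is not None:                                    # 4. backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).state

# Toy search: integer states standing in for molecule sets, target "5":
random.seed(0)
expand = lambda s: [s + 1, s + 2] if s < 5 else []
rollout = lambda s: -abs(5 - s)
first_move = mcts(0, expand, rollout, iterations=200)
```

The project's research questions sit in how `expand` (valid retrosynthetic disconnections), `rollout` (route scoring) and the selection policy are designed, which is where enhancements beyond this vanilla skeleton come in.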

Applicants should have, or expect to achieve, at least a 2:1 Honours degree (or equivalent if from another country) in Chemistry, Computer Science or a related subject. An MChem/MSc 4-year integrated Masters, a BSc + MSc, or a BSc with substantial research experience will be highly advantageous. Experience in computer programming will also be beneficial. 

Supervisors: Prof Jonathan Hirst (School of Chemistry), Dr Kristian Spoerer (School of Computer Science).   

For further details and to arrange an interview please contact Prof Jonathan Hirst.

Digital twins for quantum microscopy

Superresolution microscopy is a rapidly developing field that provides the means to study biological and nanoscale structures with unprecedented detail. One of the most promising techniques for superresolution microscopy is spatial mode demultiplexing (SpaDe), which involves collecting information about the structure of the sample encoded in a suitable basis of spatial modes of light. This has been shown to enable unprecedented resolution enhancements compared to conventional direct imaging and has the potential to push microscopy towards the ultimate precision limits established by quantum mechanics. However, optimising the measurement setup and image reconstruction for SpaDe microscopy and surface analysis on real samples can be challenging and time-consuming. 

The objective of this project is to develop a software framework for comprehensive simulation of a quantum superresolution microscope -- a digital twin -- to benchmark different experimental approaches and investigate the resolution improvements enabled by SpaDe in practical settings. Digital twins have been used successfully for task-specific uncertainty evaluation in surface and dimensional metrology, but their application in optical imaging remains largely unexplored. 

The digital twin will be powered by physical models derived from first principles, including surface-scattering models, three-dimensional imaging theory, spatial mode demultiplexing, photon counting, and error-generation models. By incorporating the influences of various error sources (both intrinsic and environmental) via appropriate stochastic modelling before reconstructing the image, the virtual instrument will simulate the response of the real instrument under tunable conditions. The virtual instrument will also be used for uncertainty evaluation. This process will include the determination of relevant ISO metrological characteristics, such as noise, resolution and fidelity, which will be important for validating the emerging SpaDe imaging technology. 
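To make one of these stochastic error sources concrete: photon shot noise in the counting stage can be modelled by drawing Poisson-distributed counts around the ideal per-mode intensities. The sketch below is purely illustrative; the intensity profile and photon budget are invented, not taken from a real SpaDe measurement:

```python
import math, random

def poisson_sample(lam, rng):
    """Draw a Poisson-distributed photon count (Knuth's method, fine for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def noisy_measurement(intensities, photons, rng):
    """Expected counts per spatial mode, corrupted by photon shot noise."""
    total = sum(intensities)
    return [poisson_sample(photons * i / total, rng) for i in intensities]

rng = random.Random(42)
ideal = [0.05, 0.30, 0.30, 0.05]   # toy relative intensities in four demultiplexed modes
counts = noisy_measurement(ideal, photons=200, rng=rng)
```

In the digital twin, layers like this would be composed with the surface-scattering and imaging models, so that reconstruction and uncertainty evaluation see data with realistic statistics.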

This project will involve a combination of theoretical and computational work, as well as interdisciplinary collaboration with experts in the fields of quantum physics, material science, and engineering. The successful candidate will have the opportunity to work with cutting-edge technology and contribute to the advancement of both digital twin frameworks and superresolution microscopy. 

Supervisors: Prof Gerardo Adesso (School of Mathematical Sciences), Dr Katherine Inzani (School of Chemistry). 

For further details and to arrange an interview please contact Prof Gerardo Adesso.

Intelligent sensing and data fusion in a smart environment for human activity recognition to support self-management of long-term conditions

Given the pressure on health and social care resources, there is a growing incentive to explore methods for self-management of long-term conditions. Smart environments, realised through a range of ambient integrated sensors and service robotics, could help people with long-term conditions improve their quality of life. There is emerging research on intelligent data fusion to combine a range of ambient and wearable sensors for modelling and analysing physiological and behavioural data collected over time. This can be used to provide early warning or guidance for the patients themselves, or for their healthcare professionals.  

The research challenges lie in developing person-specific machine learning models, which are verifiable and robust in the face of noisy real-world sensor data that will change over time, as the person’s condition changes. There is also a gap in knowledge on how best to select and integrate multiple types of sensor data, in a way that preserves the integrity of the different streams of information, while also providing a meaningful representation of the person’s activity.  

This research will address the challenges noted, and also explore the design of interactive systems that can incorporate user input for semantic labelling and modelling, using an active learning approach. Keeping the user in the loop can improve engagement, while offering improved reasoning and confidence in sensor selection and fusion techniques. This research will explore multi-modal approaches for eliciting and integrating user input for semantic labelling, using a combination of supervised, unsupervised and self-learning techniques to address the challenges of noisy data and reliably tracking changes in long-term conditions over time.  

This research will be informed by, and related to, ongoing preclinical work being conducted by members of the interdisciplinary supervisory team, exploring behavioural and physiological changes in response to pregnancy, the ageing process and age-related diseases such as stroke, diabetes and cardiovascular dysfunction. 

Prospective PhD applicants are expected to have a degree in Computer Science or Maths with knowledge of Data Science, Machine Learning and AI. This project will require excellent programming skills with evidence of proficient working knowledge in one or more of the following: C++, C, Java, Python, ROS. 

Supervisors: Prof Praminda Caleb-Solly (School of Computer Science), Dr Matthew Elmes (School of Biosciences), Prof Claire Gibson (School of Psychology).  

Clinical partners: Alison Wildt (National Rehabilitation Centre Clinical Support Manager), Chrishanti Thornton (Extracare Charitable Trust)  

For further details and to arrange an interview please contact Prof Praminda Caleb-Solly.

Machine learning for gravitational wave astronomy: beyond vanilla black holes

Gravitational waves are propagating fluctuations of space and time created by accelerating objects in Einstein's theory of general relativity. For strongly gravitating objects undergoing highly dynamical motion---like the merger of two black holes---the emitted radiation is strong enough to propagate across the universe to Earth, where it is detected by the LIGO-Virgo-KAGRA (LVK) network of gravitational-wave observatories. These signals encode the properties of the source, which we can decipher by comparing to theoretical models. Gravitational waves were first detected in 2015, and since then nearly 100 such events have been observed. Together these have informed our understanding of astrophysics, cosmology, and fundamental physics---ushering in the new era of gravitational wave astronomy. 

As detectors are improved, analysis of observational data becomes more challenging: this is due to the complexity of the signal and noise models, the growing rate of detections, and a constant desire for rapid results. To address these challenges, new approaches including machine learning are being explored. In particular, probabilistic deep learning architectures such as normalising flows have demonstrated orders-of-magnitude speed-ups. This opens an opportunity to perform new types of analyses that were previously far too expensive. These include searching for gravitational waves from topological defects and phase transitions in the early universe, as well as black holes in alternative theories of gravity. Such searches are currently limited by the number of models that can be investigated, even though there is large uncertainty in how gravitational waves are produced in beyond-standard-model physics. This project will develop the relevant machine learning algorithms and use them to analyse real gravitational wave data and probe theories of gravity and cosmology. 
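The core idea behind normalising flows is the change-of-variables formula: a simple base density is pushed through an invertible transform, and the log-density picks up the log-Jacobian of the inverse. The single affine transform below is a deliberately minimal illustration of that identity; real analyses use deep, learnable flows (e.g. in PyTorch) conditioned on the detector data:

```python
import math

def standard_normal_logpdf(z):
    """log-density of the standard normal base distribution."""
    return -0.5 * (z * z + math.log(2 * math.pi))

def affine_flow_logpdf(x, scale, shift):
    """log p_x(x) where x = scale * z + shift and z ~ N(0, 1).

    Change of variables: log p_x(x) = log p_z(f^{-1}(x)) + log |d f^{-1}/dx|.
    """
    z = (x - shift) / scale          # inverse transform
    log_det = -math.log(abs(scale))  # log-Jacobian of the inverse
    return standard_normal_logpdf(z) + log_det
```

Stacking many such invertible layers, each with a tractable Jacobian, is what lets a flow represent complex posteriors over source parameters while remaining fast to evaluate and sample, hence the speed-ups over traditional stochastic sampling.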

Applicants are expected to have a strong background in either physics, astronomy, mathematics, or computer science, as well as experience with Python. Experience with deep learning, PyTorch, and gravitational waves is desirable, but not essential. 

Supervisors: Dr Stephen Green (School of Mathematical Sciences), Dr Adam Moss (School of Physics and Astronomy), Prof Thomas Sotiriou (School of Mathematical Sciences).

For further details and to arrange an interview please contact Dr Stephen Green.

Modelling Human-Robot Interaction in Social Spaces

Robotics and related AI technologies are rapidly gaining presence in different areas of our everyday life, e.g. cleaning robots vacuuming floors, warehouse robots carrying pallets, robotic vehicles with cruise control. An exciting use of robotics is social and telepresence robots, which are intended to work in public and social contexts, including educational and museum settings, and to provide support for older adults and populations with accessibility issues. 

This PhD project will study and quantify human interactions with commercially available robots in different contexts (participants/robots/places/functions) with a view to creating models of human-robot interaction (HRI) in these contexts. These models will help improve the design of spaces to optimise human-robot interaction, and will also inform the development of best-practice guidelines for robot embodiment, interaction strategies and autonomous behaviour. 

In line with this goal, this PhD project aims to model sustainable human-robot interaction strategies for socially capable robots designed to function in public spaces. The project will target technological and psycho-sociological challenges related to AI to investigate the following overarching research questions: 

  1. How can social and telepresence robots be used to connect groups of remote humans and mediate the interaction between them? 

  2. What kind of personalisation methods and input/output modalities are useful to improve the interaction between humans and robots and enable long-term sustainability of the communications? 

  3. How do the attitudes and perceptions toward robots change in children and adults over time? 

  4. Are these attitudes and perceptions affected by cultures, communities and the interaction environments?  

This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science, Robotics, and Psychology. Applicants are expected to develop technological advancements in AI and Interaction Design, including using machine learning to generate personalised user models for children and adults, adaptive motion planning in social environments, and feedback generation. In addition, the successful student will design, conduct and analyse experiments to investigate the socio-psychological effects of the technologies. 

Supervisors: Prof. Praminda Caleb-Solly (School of Computer Science), Dr Emily Burdett (School of Psychology), Dr Ayse Kucukyilmaz (School of Computer Science).

For further details and to arrange an interview please contact Prof. Praminda Caleb-Solly



A Generic Image Segmentation Platform for Novel Feature Exploration in Multimodal MR Imaging using Minimally Supervised Machine Learning

This PhD will develop novel AI approaches to image segmentation to explore meaningful clinical features in multimodal Magnetic Resonance Imaging (MRI).  We will drive this by tackling several important imaging scenarios characterised by small datasets and challenging image quality. 

Often in advanced MRI we can visually identify a feature of interest and segment it manually with a priori knowledge about anatomical features and expected MR contrast. However, this can be extremely difficult and time consuming, particularly when we are interested in difficult-to-segment features, for instance very small objects with low contrast, or objects that have ill-defined anatomy or variable image contrast. For example, layer 4 of the cortex is a very fine continuous layer that runs through much of the grey matter of the brain and is pivotal to furthering our understanding of brain function and dysfunction. However, it is hard to distinguish reliably in all subjects, and there are currently no methods to automatically identify it from MR images. 

The aim of this PhD project is to:

  • Develop machine learning methods to segment the tissue of interest with minimal high-level supervision (e.g. shape, topology, connectivity etc.).
  • Supplement the machine learning results with model-based approach development where appropriate, e.g. to ensure the results maintain a complete surface rather than broken patches.
  • Optimise the multimodal MRI acquisition, informed by initial AI results, to maximise the efficiency of the automatic segmentation in terms of image quality and data harmonisation. 
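One common model-based clean-up of the kind described in the second bullet is to enforce connectivity, e.g. keeping only the largest connected component of a binary segmentation so that isolated broken patches are discarded. A 2D sketch (real pipelines would work on 3D volumes with library tooling; this is illustrative only):

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected component of a binary 2D mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first search to collect one connected component
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    cleaned = [[0] * cols for _ in range(rows)]
    for y, x in best:
        cleaned[y][x] = 1
    return cleaned

mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 1]]       # one 4-pixel region plus an isolated "broken patch"
cleaned = largest_component(mask)
```

For a structure like cortical layer 4, stronger topological constraints (e.g. requiring a single continuous surface) would play the same role as this simple connectivity filter.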

Supervisors: Dr Xin Chen (School of Computer Science), Dr Karen Mullinger (School of Physics and Astronomy), Dr Caroline Hoad (School of Physics and Astronomy), Prof Andrew French (School of Computer Science), Prof Penny Gowland (Physics and Astronomy). 

Postgraduate Researcher: Stephen Lloyd-Brown (School of Computer Science)

Age-related changes in brain connectivity and cognition studied through machine learning

This PhD project aims to study how the brain/cognition relationship changes across the lifespan, using an open data archive containing neuroimaging data from 600+ participants (Cambridge Centre for Ageing Neuroscience). The project will combine information obtained from multiple brain imaging methods:

1.     Functional MRI (fMRI), which measures brain activity indirectly through its effects on blood vessels;

2.     Magnetoencephalography (MEG), which measures the magnetic field around the head generated by brain activity as a direct measure of neural activation;

3.     Diffusion-weighted MRI (DWI), which measures the anatomical “wiring” of the brain; and

4.     Structural MRI (sMRI), which measures shape-related properties of cortical folding and subcortical structure.

To combine information from the various neuroimaging modalities, several approaches will be explored, among them methods based on recent theoretical advances in complex networks that employ multilayer networks to describe multiple interacting network structures simultaneously. Machine learning methods, including neural networks and relevance vector machines, will be used to determine which aspects of brain networks (across all imaging modalities studied) best predict individual cognitive ability. Finally, this approach will be used to test existing theories of how brain networks reorganise with age, with hypotheses about age-related changes of brain lateralisation and about shifts between anterior and posterior brain activation.
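The multilayer-network idea can be made concrete through a supra-adjacency matrix: the adjacency matrix of each modality forms a diagonal block, and copies of the same brain region in different layers are linked by interlayer coupling edges. The toy construction below is illustrative (two tiny made-up layers, not CamCAN data):

```python
def supra_adjacency(layers, coupling=1.0):
    """Build the supra-adjacency matrix of a node-aligned multilayer network.

    layers: list of n x n adjacency matrices (one per imaging modality).
    Copies of node i in different layers are linked with weight `coupling`.
    """
    m, n = len(layers), len(layers[0])
    size = m * n
    supra = [[0.0] * size for _ in range(size)]
    for l, adj in enumerate(layers):
        for i in range(n):
            for j in range(n):
                supra[l * n + i][l * n + j] = adj[i][j]       # intralayer block
    for l1 in range(m):
        for l2 in range(m):
            if l1 != l2:
                for i in range(n):
                    supra[l1 * n + i][l2 * n + i] = coupling  # interlayer edges
    return supra

layers = [[[0, 1], [1, 0]],        # toy "fMRI" layer: regions 0 and 1 connected
          [[0, 0], [0, 0]]]        # toy "MEG" layer: no intralayer edges
S = supra_adjacency(layers, coupling=0.5)
```

Spectral and community-detection analyses of this single supra-matrix then see all modalities and their interactions at once, which is what makes the multilayer description more powerful than analysing each modality separately.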

Applicants are expected to have basic Matlab or Python programming skills, and a quantitative background (physics, mathematics, computer science or engineering) is desirable.

Supervisors: Dr Christopher Madan (School of Psychology), Dr Reuben O'Dea (School of Mathematics), Dr Andrew Reid (School of Psychology), Dr Martin Schürmann (School of Psychology).

Postgraduate Researcher: Kinga Korek (School of Psychology)

AI for Optimisation of Chemical Reactions

This is an interdisciplinary project cutting across Chemistry and Computer Science which will pursue the practical goals of maximising the efficiencies and selectivities of chemical reactions using AI optimisation and machine learning techniques. Traditionally, chemists have done this through empirical experimental procedures, which have three critical failings: they are slow, they explore chemical data space poorly, and, critically, ‘fail’ results are often ignored entirely (because they are frequently beyond human interpretation).

An initial collaborative study, carried out using a limited data set, describes featurisation of seven components of a chemical catalyst producing chiral pharmaceutical building blocks. Interpretable featurisation allows Quantitative Structure-Property Relationships (QSPR) to be established between the catalyst structure and the chiral purity of the pro-pharmaceutical products in these rhodium-catalysed asymmetric Michael additions (RhCASA). Our overall approach, based on machine learning, favours easier human interpretation and the realisation of improved catalysts, some of which are currently on preliminary trial within GlaxoSmithKline. 

This PhD project will extend the previous study, investigating recent advances in optimisation and machine learning methods focusing on active learning, explainable machine learning and generative methods, and how they can be applied effectively to the challenging data at hand. 

Applicants will be expected to have a Chemistry or Chemical Science relevant background. Knowledge of machine learning and/or optimisation methods is desirable, but not essential. 

Supervisors: Prof Simon Woodward (School of Chemistry), Prof Ender Ozcan (School of Computer Science), Dr Grazziela Figueredo (School of Computer Science). Industrial External Advisors/Mentors: GlaxoSmithKline (Dr Katherine Wheelhouse), (Piotr Byrski).

Postgraduate Researcher: Eduardo Aguilar Bejarano (School of Chemistry)

Life of a sperm whale: an AI support for its preservation

The sperm whale is a long-lived pelagic mammal with a worldwide range, but it has been listed as endangered in the Mediterranean Sea. Knowledge of sperm whale social organisation and movement in the basin is still scarce. Non-invasive techniques to study their lives, habits, and migration patterns include geotagged photographic data. Single-subject identity is reconstructed through visual investigation of unique marks and pigmentation patterns on the dorsal fin, tail and other body parts. However, this process is still primarily done manually, which inhibits the ability of researchers to track individuals across large geographic areas and time scales.

This project will develop advanced deep learning approaches to automate the identification of single subjects (individual whales), combining this with picture geolocation data to track interactions between subjects and whale pods’ evolution over time. The project will use a combination of image recognition, machine learning, and statistical inference methodologies to address questions about the social structure, habits and movements of sperm whales that populate the Mediterranean Sea. The outcome of this project will contribute to supporting the activity of the NGO OceanoMare Delphis in the education and influence of national/international policies for the conservation of whales' critical habitats and migration corridors.

Supervisors: Dr Silvia Maggi (School of Psychology), Dr Michael Pound (School of Computer Science), Prof Theodore Kypraios (School of Mathematical Sciences). External Advisor: Barbara Mussi, President of OceanoMare Delphis Onlus.

Postgraduate Researcher: Sam Fuller (School of Psychology)

Seeking a better view: Guiding cameras for optimal imaging via reinforcement learning

In Biosciences, image capture is often the first step in obtaining reliable scientific measurements of plants. Good image capture is fundamental to many of these experiments: by measuring plants you can determine which are healthier, more robust, or producing more food. However, you can’t measure what you can’t see, and capturing a better image could be crucial in measuring the subtle differences which indicate a higher-yielding crop, or a resistance to pests. When humans take photos, they adjust their position and angle to better see the subjects being captured, particularly when imaging physically complex and highly variable objects. Photographers spend years learning to take better pictures, and yet in science most image capture is performed using static cameras that don’t move, using a “one size fits all” approach for convenience. This introduces significant risk that crucial information will not be captured.

Advances in machine learning and robotics mean we can now train machines to look for better views before taking pictures, an approach known as active vision. This PhD will explore reinforcement learning techniques to move robotically controlled cameras to new positions, views and zoom levels in order to better capture the subject of interest.

You will explore approaches to train robotic systems to capture better images with no human interaction. You will work across disciplines, imaging a variety of plant subjects ranging in size and shape, with a view to improving performance on a range of tasks including 3D reconstruction, feature detection and counting. Access will be provided to automated imaging systems equipped with varied movement capability, ranging from linear actuators to 6-DoF robotic manipulators, and you will work closely with bioengineers in designing new systems. Key to this work will be an exploration of different reward systems: how do we determine that one view is better than another, or which view is best? We will then explore the most appropriate and powerful reinforcement learning regimes, including those guided by human examples. Though the PhD will focus on plants, the results of your work will ultimately be used to drive imaging systems across the University, and will be applicable to many other areas of image capture, such as physics, astronomy and medical imaging.
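At its very simplest, choosing among a discrete set of candidate viewpoints under an image-quality reward is a bandit problem, and the exploration/exploitation trade-off can be sketched with an ε-greedy strategy. Everything below is a toy: the fixed `quality` scores stand in for a real, measured reward such as a sharpness or task-performance metric:

```python
import random

def select_view(reward, n_views, trials=500, epsilon=0.1, rng=None):
    """Epsilon-greedy bandit over candidate camera viewpoints."""
    rng = rng or random.Random(0)
    counts = [0] * n_views
    totals = [0.0] * n_views
    for _ in range(trials):
        if rng.random() < epsilon or not any(counts):
            view = rng.randrange(n_views)          # explore a random viewpoint
        else:
            view = max(range(n_views),             # exploit the best mean so far
                       key=lambda v: totals[v] / counts[v] if counts[v] else 0.0)
        r = reward(view)
        counts[view] += 1
        totals[view] += r
    return max(range(n_views), key=lambda v: counts[v])

# Hypothetical stand-in for an image-quality score per viewpoint:
quality = [0.2, 0.9, 0.5]
best = select_view(lambda v: quality[v], n_views=3)
```

A full active-vision system replaces the discrete arms with a continuous pose space and the lookup reward with scores computed from captured images, which is where the deeper reinforcement learning regimes mentioned above come in.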

Supervisors: Dr Michael Pound (School of Computer Science), Dr Darren Wells (School of Biosciences), Dr Jonathan Atkinson (School of Biosciences),  Prof Tony Pridmore (School of Computer Science).

Postgraduate Researcher: Lewis Stuart (School of Computer Science)

Training a Nanorobot to Build a Molecule

The invention of the scanning probe microscope (SPM) in the early 1980s revolutionised the science of the ultrasmall. Atoms, molecules, and nanostructures are now routinely probed with a spatial resolution all the way down to the single chemical bond limit. Measurements of quantum phenomena that, as recently as a few decades ago, were thought to be so far beyond our capabilities that they would forever be only gedankenexperiments have been made possible, in spectacular fashion, by the probe microscope.

Since the groundbreaking work of IBM Research Labs thirty years ago, a key focus of scanning probe microscopy has been the controlled manipulation of single atoms and molecules to form what have been described as designer states of matter. In this context, an SPM is better thought of as a robot capable of targeting and positioning single atoms, rather than a microscope alone.

But despite this unique ability to manipulate matter on the smallest scales, SPM has a big problem: it’s painfully slow to position single atoms, not least because the human operator represents a major bottleneck in the process. This PhD project represents the next stage in atom manipulation: the integration of machine learning with probe microscopy to automate the assembly of matter from its constituent atomic and molecular building blocks. Building on recent work in the Nanoscience Group in the School of Physics & Astronomy, and in collaboration with the School of Computer Science, you will develop algorithms, architectures, and protocols to automate atomic manipulation, with the ultimate objective of building a molecule, an atom at a time, without human intervention.

Supervisors: Prof Philip Moriarty (School of Physics & Astronomy), Dr Michael Pound (School of Computer Science), Dr. Brian Kiraly (School of Physics & Astronomy). Industry partners: Unisoku.

Postgraduate Researcher: Martin James Benedict (School of Physics & Astronomy)

Improving 3D Small Scale Medical Image Segmentation using AI

Image segmentation is a common task in computer vision and has many useful applications in life science imaging, particularly in medical research. The process of segmenting images has moved towards a more automated approach since the introduction of artificial intelligence (AI) and deep learning (DL), because manual annotation of images is tedious and, in the case of life science imaging, often requires expert knowledge. These AI methods require some level of annotation in order to learn, so the burden of manual annotation is not removed entirely, but it is significantly reduced.

The objective of this PhD project is to implement novel AI methods to improve the segmentation of cell-scale images by enhancing the segmentation quality, while reducing the time and effort required from experts to manually pre-annotate data. The project will focus on tackling some of the difficulties associated with 3D medical image segmentation, such as limited training data, loss of global image context, and large computational resource requirements. We will also look at modality-specific data augmentation and at expanding datasets with synthetic images. The aim is for solutions resulting from this project to be adopted by members of the Rosalind Franklin Institute and applied to new datasets. 
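To illustrate the augmentation idea, one of the simplest geometric augmentations for volumetric data is a random axis flip applied consistently to the image volume and its label map, so the pair stays aligned. The sketch below uses nested lists for clarity; a real pipeline would use NumPy or PyTorch tensors:

```python
import random

def flip3d(volume, axis):
    """Flip a nested-list 3D volume along axis 0 (z), 1 (y) or 2 (x)."""
    if axis == 0:
        return volume[::-1]
    if axis == 1:
        return [plane[::-1] for plane in volume]
    return [[row[::-1] for row in plane] for plane in volume]

def augment(volume, label, rng):
    """Apply the same random flips to an image volume and its segmentation."""
    for axis in range(3):
        if rng.random() < 0.5:
            volume, label = flip3d(volume, axis), flip3d(label, axis)
    return volume, label
```

Modality-specific variants would add transforms that respect the physics of the scanner, e.g. intensity or contrast perturbations matched to the imaging modality, alongside purely geometric ones like these flips.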

Supervisors: Dr Michael Pound (School of Computer Science), Prof Andrew French (School of Computer Science) and Mark Basham (Rosalind Franklin Institute).

Postgraduate Researcher: Victoria Hann 


Further information

For further enquiries, please contact Professor Ender Özcan - School of Computer Science

School of Computer Science

University of Nottingham
Jubilee Campus
Wollaton Road
Nottingham, NG8 1BB

For all enquiries please visit: