School of Computer Science

Faculty of Science Doctoral Training Centre in Artificial Intelligence


AI DTC 2022

The Faculty of Science AI DTC is a new initiative by the University of Nottingham to train future researchers and leaders to address the most pressing challenges of the 21st century through foundational and applied AI research on a cohort basis. The training and supervision will be delivered by a team of outstanding scholars from different disciplines cutting across Biosciences, Chemistry, Computer Science, Mathematical Sciences, Pharmacy, Physics and Astronomy, and Psychology.

The Faculty of Science will invite applications from Home students for fully-funded PhD studentships to carry out multidisciplinary research in the world-transforming field of artificial intelligence. The PhD students will have the opportunity to:

  • Choose from a wide range of AI-related multidisciplinary research projects, working with world-class academic experts in their fields;
  • Benefit from a fully-funded PhD with an attractive annual tax-free stipend;
  • Join a multidisciplinary cohort to benefit from peer-to-peer learning and transferable skills development.
Studentship information
Entry requirements Minimum of a 2:1 bachelor's degree in a discipline relevant to the research topic (please consult the potential supervisors), and a strong enthusiasm for artificial intelligence research. Studentships are open to Home students only.
Start date 1st October 2024
Funding Annual tax-free stipend based on the UKRI rate, plus fully funded PhD tuition fees for the four years
Duration 4 years



Research Topics

The following research ideas are from 2023. New project ideas will appear here when the applications open.



AI insight into the bionic (wo)man

The central role of electricity in biological systems is gaining prominence. The cell is increasingly accepted as a mass of interconnected bioelectrical circuits, and in disease these circuits malfunction. However, our ability to communicate electrically with such systems is limited by a mismatch in materials, gaps in technological understanding, and the difficulty of selectively targeting electrical reporting systems at the bio-interface. This PhD will focus on formulating hundreds of new conducting biomaterials and interfacing them with hundreds of cell types in vitro to elucidate the material-electrical interactions that allow seamless integration of electronics with biology. Artificial intelligence will be utilised to elucidate material-directed cell communication.

Several data challenges in this area require AI to facilitate bioelectronic integration. The first is integrating data from various sources (such as genomics, transcriptomics, proteomics and imaging), captured in response to stimulation of cell-bioelectronic interfaces and the subsequent bioelectrical alterations, into a single platform that can be analysed; this requires AI algorithms to process, integrate, and make sense of large and complex data sets. The second is signal processing: bioelectrical signals, such as electrophysiology and imaging data, are noisy and high-dimensional, and require AI algorithms for signal processing, noise reduction, and feature extraction. Lastly, a challenge remains in predictive modelling, which is needed to accurately predict the behaviour of cells, tissues, or organisms in response to bioelectronic interfaces; machine learning will aid our predictive ability, allowing us to program biology. These challenges highlight the need for AI in bioelectronic research, as it will enable us to process, analyse, and make predictions about complex biological systems. This will facilitate a technological revolution in bionics, with applications in diagnostics, bioelectronic medicine and healthcare more broadly.

Supervisors: Dr Frankie Rawson (School of Pharmacy), Prof Juan P. Garrahan (School of Physics), Prof Morgan Alexander (School of Pharmacy).  

For further details and to arrange an interview please contact Dr Frankie Rawson


AI-based quantitative methods to investigate spinal cord regeneration in the axolotl

In this interdisciplinary PhD project, the student will investigate how the axolotl regenerates the spinal cord after injury by combining AI-based image analysis with computational modelling, using experimental data generated by our collaborators. In contrast to humans, salamanders like the axolotl can resolve severe and extreme spinal cord injuries through complete and faithful regeneration. Although more than 250 years have passed since Spallanzani's original discovery of salamander tail regeneration after amputation, the governing mechanisms underlying these unparalleled regeneration capabilities are not yet understood.

This project is part of an international collaboration between the lab of Elly Tanaka, a world leader in regeneration in the axolotl at the Institute of Molecular Pathology in Vienna, and the Chara lab at the UoN, which is the only modelling lab in the UK and possibly in the world investigating regeneration of this salamander. Recently, our two labs demonstrated that tail amputation leads to a particular spatiotemporal distribution of cycling cells in the axolotl. By combining a new transgenic axolotl using FUCCI technology (AxFUCCI) with the first cell-based computational model of the regenerative spinal cord, we found that regeneration is orchestrated by a particular spatiotemporal pattern of neural stem cell recruitment along the anterior-posterior (AP) axis. The goal of this PhD project is to build on these results to quantitatively and mechanistically investigate the axolotl regenerative response. The student will quantitatively analyse confocal images of AxFUCCI to develop AI methods to accurately estimate, for the first time, the architecture of the axolotl spinal cord during regeneration, building on image-analysis software developed by co-supervisor and computer scientist Prof Andrew French. Then, the student will use this dynamical tissue architecture to develop a cell-based computational model of the axolotl spinal cord during regeneration. The image analysis will generate detailed knowledge of cell geometries and growth that will be embedded within a computational multicellular model, making use of the multicellular modelling framework developed by co-supervisor Dr Leah Band, thus enabling accurate simulations of transport and signalling mechanisms.

Applicants are expected to have experience of Python and machine learning / deep learning. Knowledge of modelling is desirable, but not essential.

Supervisors: Dr Osvaldo Chara (School of Biosciences), Dr Leah Band (School of Mathematical Sciences and School of Biosciences) Prof Andy French (School of Computer Science and School of Biosciences).   

For further details and to arrange an interview please contact Dr Osvaldo Chara


AI-Enabled Reaction Design and Discovery

This is an interdisciplinary project involving both Chemistry and Computer Science which will work towards the challenge of computational chemical reaction discovery. Discovery of new reactions is critical for access to new, improved pharmaceuticals and agrochemicals. This is generally achieved through experimental trial and error, and is therefore slow and wasteful. Traditional computations can be used to understand known reaction mechanisms, but are too slow to rapidly search the vast chemical space for novel, feasible reactions. This project will develop AI methods for molecular energy prediction, which can be up to a million times faster.

A particular focus will be on modelling of transition states between stable molecules, as they are critical to understanding the speed, and therefore the feasibility, of a chemical transformation. This is a little-explored area, because generating the training data from traditional slow computations is more challenging for transition states than for stable molecules. We have developed workflows for automated transition state computation and AI methods for stable molecule energy prediction. This project will leverage the experience in both and tackle the challenge of transition state prediction, first assembling a large training dataset using automated computational tools and then exploring a wide variety of AI methods for energy prediction. The new AI methods will then be applied to the rapid exploration and discovery of new pericyclic reactions, with a focus on applications in synthesis planning for drug and agrochemical candidates, and new bioorthogonal reaction discovery. Pericyclic reactions include click chemistry, which earned its inventors the 2022 Nobel Prize in Chemistry.

Candidates are expected to have a minimum of a 2:1 bachelor's degree in Chemistry or a related discipline, and a strong enthusiasm for artificial intelligence research.

Supervisors: Dr Kristaps Ermanis (School of Chemistry), Dr Grazziela Figueredo (School of Computer Science). 

For further details and to arrange an interview please contact Dr Kristaps Ermanis.  


Developing a robust methodology for consumers to analyse taste buds (fungiform papillae) using smartphones

Fungiform papillae (FP) are ‘mushroom-like’ papillae that appear as pinkish spots, located on the anterior part of the tongue, containing taste buds. Research has suggested that the anatomical structure of FP varies greatly across individuals and could be a marker for taste sensitivity, further linked to food preference and choice. Until now, manual FP counting from digital photographs has been the most popular method of quantification; this is extremely time-consuming and error-prone. Automated methods have started to be developed in recent years; however, they require high-quality images taken under very strict conditions using professional cameras.

This project aims to: 

  1. Fully automate the quantification of FP using cutting-edge computer vision methods, such that we are able to provide reliable counts from lower-quality images. 

  2. Develop an interactive imaging app that can be used to guide the self-photography of FP at home. We will use the app platform to additionally explore automated capture of food choices and nutritional information from plated food (which can add new dimensions to future studies). 

  3. Integrate this new technology in a food sensory study investigating the relationship between FP, taste sensitivity and taste preference.

The successful applicant will develop deep learning algorithms to quantify FP on the tongue from images collected via smartphones with an interactive app, so that consumers can easily capture this information themselves. The app itself will use computer vision techniques to interactively help the user take a high-quality photograph. Together, these two novelties contribute to a new platform for conducting taste research in the general public.
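To give a flavour of the counting step at the heart of such a pipeline, here is a minimal, self-contained sketch that counts distinct spots in a toy binary image using connected-component labelling. This is only an illustration of the principle; an actual system would first segment papillae from smartphone photographs with a trained deep network before any counting takes place.

```python
# Toy illustration: count "papillae-like" blobs in a binary image using
# 4-connected component labelling via an explicit flood-fill stack.

def count_blobs(image):
    """Count 4-connected regions of 1s in a 2D list of 0/1 values."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and not seen[r][c]:
                count += 1                      # found a new component
                stack = [(r, c)]                # flood-fill it
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and image[y][x] == 1 and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Synthetic "tongue patch" with three separate spots.
patch = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
print(count_blobs(patch))  # 3
```

In practice libraries such as scikit-image provide this labelling step directly; the research challenge lies in producing a reliable binary segmentation from variable-quality consumer photographs.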

Although applicants are expected to have a computer science background, they will be integrated into a team of food scientists to help create a new image dataset to train the machine learning models, and to co-develop the app with domain input to ensure the system delivers the best quality data it can. 

Supervisors: Prof Andrew French (School of Computer Science), Dr Qian Yang (School of Biosciences). 

For further details and to arrange an interview please contact Prof Andrew French.


Energy requirements of neuromorphic learning systems

Very large neural networks are rapidly invading many parts of science, and have yielded some very exciting results. However, training large networks in particular requires a lot of energy. This energy is needed to compute, but also to store information in the synaptic connections between neurons. Interestingly, biological systems also require substantial amounts of energy to learn. Under metabolically challenging conditions, these requirements can be so large that in small animals learning reduces the lifespan. Based on these findings we have started to design algorithms that reduce the energy needed to train neural networks.

This project will explore energy requirements for learning in neuromorphic systems with memristors. Neuromorphic systems mimic the biological nervous system in their design principles and are currently being explored to create highly energy-efficient neural networks; memristors are a key technology in such devices. Specifically, we will 1) develop models that describe the energy needs for learning in neuromorphic networks, 2) use inspiration from biology to design more energy-efficient algorithms and test them in simulations, and 3) contrast these energy requirements with the energy needs of biology as well as conventional hardware.
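As a minimal sketch of objective 1, the snippet below attaches a simple energy model to learning: it trains a perceptron on a toy task while accumulating the total magnitude of synaptic weight changes. The assumption that energy cost is proportional to |Δw| is one simple modelling choice made here for illustration; real memristor write costs depend on device physics.

```python
# Toy model of learning energy: assume each synaptic weight change costs
# energy proportional to its magnitude (an illustrative assumption only).

def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    energy = 0.0                          # accumulated |delta_w| over training
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            if err != 0:                  # only mistakes trigger (costly) updates
                for i in range(2):
                    dw = lr * err * x[i]
                    w[i] += dw
                    energy += abs(dw)     # pay for each synaptic write
                b += lr * err
                energy += abs(lr * err)
    return w, b, energy

# Linearly separable toy data: logical AND of two inputs.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1)]
w, b, energy = train_perceptron(data)
print(energy)   # total energy spent on synaptic updates
```

Comparing this energy ledger across learning rules (e.g. ones that commit fewer or smaller weight changes) is the kind of analysis the project would carry out at scale, in simulation and against memristor device models.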

The ideal applicant will have a strong background in physics, mathematics, computer science or engineering with both analytical and programming skills. Interest in biology and/or engineering will be beneficial. 

Supervisors: Prof Mark van Rossum (School of Psychology and School of Mathematical Sciences), Dr Neil Kemp (School of Physics and Astronomy).  

For further details and to arrange an interview please contact Prof Mark van Rossum.


Explainable Generative Models for Biomaterials Discovery

This project aims to develop interpretable machine learning and artificial intelligence (AI) approaches to the design of novel biomaterials to be used in medical devices. 

Advanced biomaterials are urgently needed to address the healthcare challenges associated with ageing populations. High-throughput technologies generate substantial volumes of data on large numbers of biomaterials with diverse chemical and topographical properties. Machine learning methods are highly successful at predicting the useful properties that drive biological phenomena within complex materials. This project aims to explore AI for new biomaterials design. It involves three stages: 

  • Extracting materials’ chemical and topographical properties using deep learning and few-shot learning; 

  • Designing new, improved materials using generative methods; 

  • Interpreting machine learning decisions to further inform biomaterials researchers about the design decisions made by the machines. 

Applicants are expected to have experience of machine learning, Python, and software engineering. Knowledge of evolutionary algorithms and agile methodology is desirable, but not essential.

Supervisors: Dr Grazziela Figueredo (School of Computer Science), Prof Morgan Alexander (School of Pharmacy). 

For further details and to arrange an interview please contact Dr Grazziela Figueredo.


Guided Image Generation for Artists (GIGA) – Making Deep Learning-Based Image Generators Accessible to Artists

Deep learning-based image generators such as Stable Diffusion and DALL-E 2 promise to revolutionise the artistic process, allowing the creation of breathtaking images from simple text prompts. However, the black-box nature of such AI systems and the technical expertise required to steer such models create significant obstacles to their adoption in the arts community. Working with a group of artists, this project will develop a human-AI teaming tool that makes image generators more accessible to non-experts. The aim of this tool will be to guide artists through the creative process, expose lesser-known features, provide interactive visualisations to help manage the process, and adapt to the different intentions and preferences of individual artists.

An important part of this project will be to study how artists adapt to, and use this tool. To capture (and quantify) how non-expert observers interpret the outputs of machine learning models prompted with different input parameter choices, we will use psychophysical and behavioural techniques. These methods are widely used in cognitive neuroscience and the collaboration with colleagues in the School of Psychology will allow us to leverage the best experimental paradigms for this aspect of the work. 

Your role in this project will be to develop new techniques and interfaces to state-of-the-art generative models. You will work with artists to explore the use of these models, capturing and analysing the steps artists take in using the tools, and their results. Working with colleagues in psychology, you will gain an understanding of psychophysical and behavioural techniques, providing important insights into the role of AI in art. 

Supervisors: Dr Kai Xu (School of Computer Science), Dr Michael Pound (School of Computer Science), Dr Jan Derrfuss (School of Psychology), Dr Denis Schluppeck (School of Psychology).  

External Partner (Artist): Richard Ramchurn (AlbinoMosquito) 

For further details and to arrange an interview please contact Dr Kai Xu.


Human-Robot Teamwork for Adaptive Motor Rehabilitation

This PhD project aims to develop innovative robot-assisted stroke rehabilitation methodologies. Motor rehabilitation requires patients and therapists to coordinate and adapt their motions, and as such relies heavily on social and personalised touch interactions. However, modelling the constantly evolving nature of human sensorimotor actions and interactive behaviours is a challenging research problem; hence, state-of-the-art therapies provided by robots lack the proactivity and personalisation needed to handle changing human needs. A robot equipped with intelligent mechanisms to instantaneously infer how well it interacts with a human would better complement human movements during rehabilitation exercises.

This PhD project will study and quantify human interactions using an immersive haptic setup, with a view to modelling physical human-human coordination and teamwork paradigms for rehabilitation. The aim of the project is to develop novel proactive embodied intelligence mechanisms whereby a robotic device will appropriately and safely work with a patient undergoing stroke therapy, while being aware of and responding to human-robot interaction states.

The project will target technological and psycho-sociological challenges related to AI to investigate the following objectives: 

  • Develop indicators of user characteristics relevant to stroke (e.g. coordination, dexterity, range of arm motion), as well as general human states (e.g. effort, workload, fatigue) through modelling individual and interactive behaviours. A number of multimodal features will be considered (e.g. body-worn sensors, facial feature tracking, forces, kinematics, gaze estimation).  

  • Implement teamwork and role allocation strategies for human-AI collaboration, based on agreement on movement patterns in trajectory following and pulling/pushing tasks, which capture the continuous nature of interactive behaviours. Coordinated motion intentions, goals and human states will be estimated in real-time and will be used to support performance gains. 

  • Evaluate the outcomes of the proactive AI methodologies in controlled user studies. 

The student will be given full access to Cobot Maker Space facilities to work with commercial robots and develop software to implement (semi-)autonomous robotic behaviours. The project will require working with real robots interacting with real humans in challenging environments. The student must have good programming experience (C++; machine learning and robotics knowledge is desirable) and a strong interest in conducting user studies.

Supervisors: Dr Ayse Kucukyilmaz (School of Computer Science), Dr Deborah Serrien (School of Psychology).  

For further details and to arrange an interview please contact Dr Ayse Kucukyilmaz.


Learning Heuristics for Computer Algebra Systems

Computer Algebra Systems (CASs) play an increasingly important role in pure mathematics research. These are immensely complicated pieces of software that allow the user to represent and handle abstract mathematical objects within a computer.  Handling these objects requires expensive computations and involves heuristics that choose the most appropriate computational methods. 

Each heuristic attempts to predict which of the available computational methods will be most effective in the current circumstances. The choice does not affect the correctness of the solution (it is essential that the output remains correct), but it does affect running times. Because of the exponential nature of many of the algorithms used by a CAS, a poor choice can make the difference between a computation finishing in under a second and one requiring years to complete.

Traditionally, such heuristics are designed by humans who study at most a few hundred examples and produce common-sense-based algorithms; the results are strongly influenced by the use cases those designers are familiar with. It has been shown that computers usually outperform humans in such predictions, although it is often a non-trivial task to employ machine learning within computational software due to the complexity of the patterns and the huge variability in use cases.

This project will develop machine learning tools for algorithm selection, which can be embedded in an existing CAS. These tools will replace the existing hard-coded heuristics, and a life-long-learning mechanism will ensure that they develop over time in response to real-world use cases. 
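The idea can be illustrated with a deliberately tiny sketch: two interchangeable methods that always return the same correct answer, and a learned selector (here a 1-nearest-neighbour rule over hypothetical past timing data) that chooses the one predicted to be faster. In a real CAS the features might be the degree, sparsity or number of variables of the input object; everything below is an invented toy.

```python
# Toy algorithm-selection sketch. Both methods compute the same correct
# answer; a learned selector chooses the one predicted to be faster.

def method_a(n):        # iterative summation (stand-in for one CAS algorithm)
    return sum(range(n + 1))

def method_b(n):        # identical answer via closed form; faster for large n
    return n * (n + 1) // 2

# Hypothetical training data: (feature vector, label of the faster method),
# as might be gathered by timing both methods on past real-world inputs.
training = [((5,), "a"), ((10,), "a"), ((10**4,), "b"), ((10**6,), "b")]

def select(features):
    """1-nearest-neighbour selector over the recorded examples."""
    dist = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    _, label = min(training, key=lambda tv: dist(tv[0], features))
    return label

def solve(n):
    # Whichever method is chosen, the *result* is identical; only the
    # running time differs -- the property the project relies on.
    return method_a(n) if select((n,)) == "a" else method_b(n)

print(solve(7), select((7,)), select((10**5,)))
```

The life-long-learning mechanism described above would correspond to appending new (features, winner) pairs to the training set as the CAS is used, so the selector keeps adapting to real workloads.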

This pioneering project will place the student at the forefront of an exciting new area of research: the application of machine learning techniques to exact symbolic computations. This work has enormous potential for collaborative projects with CAS research groups worldwide: for example, the hugely successful SAGE open source project; the DFG-funded Singular and OSCAR groups in Germany; and the Magma group in Australia. 

An ideal candidate will have a computer science or engineering background, with an interest in mathematics and artificial intelligence. Good programming skills are essential.

Supervisors: Dr Daniel Karapetyan (School of Computer Science), Dr Alexander Kasprzyk (School of Mathematical Sciences).

For further details and to arrange an interview please contact Dr Daniel Karapetyan.


Long-term Autonomy and Mobile Inspection with Spot

Although autonomous mobile robots and related AI technologies are increasingly being adopted by the service sector, their use in extreme environments under time constraints is still a grand challenge in robotics research. In addition to difficulties in mapping, perception, exploration and navigation in such domains, there are mobility challenges due to environmental features, such as stairs, uneven terrains and risky, obstacle-prone zones. This PhD project will focus on legged locomotion using a quadruped mobile robot (Boston Dynamics Spot) to alleviate such traversal challenges. 

Even with increased locomotion and payload capabilities, facilitating long-term robot deployment in naturalistic settings is challenging. Long-term autonomy and recovery from errors is an essential capability that is still missing in modern robotic applications. To a large extent, this is due to the inability of existing AI solutions to detect errors and adjust their performance in changing, uncontrolled environments. This project will develop novel formalisms for identifying and alleviating errors during navigation in the prolonged use of quadruped mobile robots across a wide range of intended use scenarios.

The objectives of this PhD project are: 

  • To enable incremental learning methodologies that develop context-based policies, not only for navigation but also for error recovery in long-term autonomy. Causal graphical models will be used to encode situation-dependent adjustment formulas based on a range of simulated failure scenarios. 

  • To develop effective human-robot interaction methodologies for efficient management of day-to-day operation of the mobile inspection robot. Human-in-the-loop and teleoperated control methods will be used as the backbone strategy to ensure increasing levels of autonomy during inspection. Novel AI paradigms based on Reinforcement Learning, Learning from Demonstration and Prototypical neural networks will be combined to develop effective human-AI interaction. 

  • To test the developed outputs on a complex industrial use case scenario. 

This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science, Robotics, and Mathematical Sciences as well as industrial involvement with the sponsor company, RACE. Applicants are expected to have strong programming skills and be interested in working with embodied intelligent systems, i.e. robots. They will implement technological advancements in AI on robots, including using machine-learning for generating probabilistic models for life-long learning of different situations. 

Supervisors: Dr Ayse Kucukyilmaz (School of Computer Science), Dr Yordan Raykov (School of Mathematical Sciences), Dr Wasiur Khuda Bukhsh (School of Mathematical Sciences).

For further details and to arrange an interview please contact Dr Ayse Kucukyilmaz.


Machine Learning for complex 3D data structures

Plant canopy architecture, the arrangement of plant structural material in three dimensions (3D), determines plant function, resource capture and performance. The ability to measure and apply architectural information is of great importance. Architecture is, however, governed by a complex set of traits and, whilst the tools for its study have advanced, there are still numerous limitations preventing key breakthroughs. Generation of accurate 3D digital models of plant structures is difficult, with many challenges relating to the complexity of plants. An efficient and accurate method for obtaining such traits is urgently required.

This project seeks to combine computer vision and machine learning in a biological setting, pairing accurate plant model generation with an automatic phenotyping pipeline. While the current state of the art shows that machine learning can be applied to 3D models of simple structures, its application to biological objects is in its infancy. Expanding on existing machine learning techniques, the successful candidate will develop novel neural networks and apply them to complex objects consisting of mesh surfaces, to automatically extract plant traits relating to architecture and to repair errors in the underlying mesh representation. The methodology will be evaluated using the µX-ray CT-scanning facilities available at the University's Hounsfield Facility.

Applicants are expected to have knowledge of programming (Python or C++) plus a strong interest in machine learning and computer vision. Experience of deep learning and an interest in biological systems are desirable, but not essential.  

Supervisors: Dr Alexandra Burgess (School of Biosciences), Prof Erik Murchie (School of Biosciences), Prof Tony Pridmore (School of Computer Science). 

For further details and to arrange an interview please contact Dr Alexandra Burgess.


Machine learning for first-principles calculation of physical properties

The physical properties of all substances are determined by the interactions between the molecules that make up the substance. The energy surface corresponding to these interactions can be calculated from first principles, in theory allowing physical properties to be derived ab initio from a molecular simulation; that is, by theory alone and without the need for any experiments. Recently we have focussed on applying these techniques to model carbon dioxide properties, such as density and phase separation, for applications in Carbon Capture and Storage. However, there is enormous potential to exploit this approach in a huge range of applications. A significant barrier is the computational cost of calculating the energy surface quickly and repeatedly, as a simulation requires. We have recently developed a machine-learning technique that, by using a small number of precomputed ab initio calculations as training data, can efficiently calculate the entire energy surface. This project will involve extending the approach to more complicated molecules and testing its ability to predict macroscopic physical properties.
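The surrogate-model idea can be sketched in miniature: fit a radial-basis-function interpolant to a handful of precomputed energy points, then evaluate the fitted surface cheaply anywhere. Here a Lennard-Jones toy potential stands in for the expensive first-principles calculation, and the basis choice and width are illustrative assumptions, not the group's actual method.

```python
import math

# Minimal surrogate sketch: interpolate a few "ab initio" energies with
# Gaussian radial basis functions, then predict off the training grid.

def true_energy(r):                 # toy stand-in for an expensive calculation
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def solve(A, b):                    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

centers = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]            # training geometries
energies = [true_energy(r) for r in centers]        # "precomputed" data
eps = 5.0                                           # RBF width (assumed)
phi = lambda r, c: math.exp(-(eps * (r - c)) ** 2)  # Gaussian basis function
A = [[phi(ri, cj) for cj in centers] for ri in centers]
w = solve(A, energies)              # weights so the fit matches the data

def predict(r):                     # fast surrogate for the energy surface
    return sum(wi * phi(r, ci) for wi, ci in zip(w, centers))

print(abs(predict(1.15) - true_energy(1.15)))       # small interpolation error
```

A molecular simulation would then call `predict` millions of times in place of the first-principles calculation, which is where the computational saving arises.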

Applicants will be expected to have a numerate background from a first degree in Maths, Chemistry, Physics or similar, and an interest in learning about applied machine learning and in developing their coding and data science skills.

Supervisors: Prof Richard Graham (School of Mathematical Sciences), Dr Richard Wheatley (School of Chemistry).

For further details and to arrange an interview please contact Prof Richard Graham.


Multimodal integration during parent-child interactions as a predictor of later child executive functions

The first five years of a child's life play a critical role in cementing cognitive functions. Several disparate strands of work have shown that the quality and quantity of parent-child interactions are predictive of later cognitive development. From an attentional perspective, periods of joint attention between caregivers and children during interactions are integral to sustained attention, and the nature of these dyadic interactions has important implications for developing a rich vocabulary. More recent neuroimaging evidence has shown that some brain regions might also be synchronized between parents and children during interactions and, critically, that the extent of this synchrony might be associated with home environmental factors such as life stress. Despite the association between the multitude of modalities and processes engaged in parent-child interactions and child cognitive functions, the extent to which these findings can be integrated to inform individual differences in, and critically predict, later cognitive success is unknown. If we can better understand these mechanisms, we can guide caregivers in interacting with young children, and transform their development in the crucial early stages of life.

In this project you will work to better understand the interactions between young children and caregivers during exploratory play. You will apply state-of-the-art machine learning techniques to analyse videos of interactions, detecting poses, activities and key events. You will explore novel deep learning methods for integrating multi-modal information sources, combining video events and audio data to extract information on perception, verbalization, affect and brain function (to name a few) during parent-child interactions and predict cognitive functions in children. Video, questionnaire, experimental and neuroimaging data are already available from an ongoing longitudinal project assessing neurocognition in children in the School of Psychology at the University of Nottingham. Separately, it might be possible to design and collect more data in the future.

Applicants will be expected to have a good working experience with current machine learning and image processing tools and techniques. Prior knowledge of biomedical signal processing is desirable but not essential. 

Supervisors: Dr Sobana Wijeakumar (School of Psychology), Dr Joy Egede (School of Computer Science), Dr Michael Pound (School of Computer Science).

For further details and to arrange an interview please contact Dr Sobana Wijeakumar.


Physics-informed Machine Learning for Climate Wind Change

Machine learning simulation strategies for fluid flows have been extensively developed in recent years. Particular attention has been paid to physics-informed deep neural networks in a statistical learning context. Such models combine measurements with physical properties to improve reconstruction quality, especially when there are not enough velocity measurements. In this project, we will develop novel methods to reconstruct and predict the velocity field of incompressible flows given a finite set of measurements. Specifically, using wind data from the Met Office, we aim to reconstruct the wind in the UK over the last 50 years, predict the main features of the wind in the UK in the upcoming decades, and compare these against climate change models (CMIP6 and ERA5) based on classical data assimilation. For the spatiotemporal approximation, we will further develop the Physics-informed Spectral Learning (PiSL) framework, which has controllable accuracy. Our computational framework thus combines supervised (wind data) and unsupervised (physical conservation laws) learning techniques. From a mathematical standpoint, we will study the stability and robustness of the method, whereas, from a computer science standpoint, we will develop efficient algorithms for the adaptive construction of the sparse spectral approximation.
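One way physical conservation laws enter such a framework is through the representation itself. The sketch below shows the classic 2D device: derive the velocity from a streamfunction, so incompressibility (div u = 0) holds by construction for any fitted model. The analytic streamfunction here is a made-up stand-in for a learned spectral expansion; it is not the PiSL method itself, only an illustration of the constraint.

```python
import math

# Represent a 2D velocity field via a streamfunction psi:
#   u = d(psi)/dy,  v = -d(psi)/dx  =>  du/dx + dv/dy = 0 by construction.

def psi(x, y):                       # stand-in for a learned spectral model
    return math.sin(x) * math.cos(y) + 0.3 * math.sin(2 * x) * math.sin(y)

h = 1e-4                             # step for central finite differences

def velocity(x, y):
    u = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    v = -(psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return u, v

def divergence(x, y):
    du_dx = (velocity(x + h, y)[0] - velocity(x - h, y)[0]) / (2 * h)
    dv_dy = (velocity(x, y + h)[1] - velocity(x, y - h)[1]) / (2 * h)
    return du_dx + dv_dy

print(abs(divergence(0.7, -0.4)))    # ~0, up to finite-difference rounding
```

Sparse wind measurements would then constrain the coefficients of the learned streamfunction (the supervised part), while mass conservation is satisfied automatically rather than penalised (the physics-informed part).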

We are looking for master's degree holders who are interested in interdisciplinary research projects that revolve around computational methods such as mathematical models, simulation methods, and data science techniques. Applicants are expected to have a numerate background from a first degree in Maths, Physics, Computer Science or Engineering, and an interest in developing novel physics-informed machine learning approaches and in developing their coding and data science skills. It is essential that the applicant is proficient in Python or a similar high-level programming language.

Supervisors: Dr Luis Espath (School of Mathematical Sciences), Dr Xin Chen (School of Computer Science)  

For further details and to arrange an interview please contact Dr Luis Espath.


Probing the Probe: Classifying Single Atom Spectra using Unsupervised Machine Learning

In the forty or so years since its invention, scanning probe microscopy [1] has revolutionised almost every aspect of condensed matter physics, solid state chemistry, materials science, and, of course, nanoscience. Probe microscopists can now routinely not only image individual atoms and molecules (with single chemical bond resolution in many cases) but also position these building blocks of matter with exquisite precision.

There is, however, a frustratingly persistent problem with probe microscopy: the probe itself. Interpretation of SPM data and next-generation atomic/molecular manipulation experiments increasingly necessitate fine control and detailed understanding of the atomistic structure of the scanning probe's apex. Although some of this understanding can be gleaned from a consideration of atomic resolution images, probe spectroscopy represents a much richer information source. Spectroscopic signals acquired with probe microscopes span a variety of channels arising from the electronic, vibrational, and chemical structure of not just the sample, but the probe itself, and are thus a powerful diagnostic and analysis tool [2].

In this project, you will develop unsupervised machine learning (ML) methods (involving protocols based on, for example, principal component analysis, k-means clustering [3], deep learning feature extraction, and/or Voronoi segmentation techniques) to automatically classify spectroscopic data from scanning tunnelling microscopy and atomic force microscopy experiments. The project will involve a combination of computational and experimental work; in addition to developing ML approaches based on the extensive datasets previously acquired by the Nottingham Nanoscience Group, you will also be trained in state-of-the-art ultrahigh-vacuum and low-temperature SPM so as to carry out atomic resolution imaging, spectroscopy, and manipulation for yourself.
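
By way of illustration, the unsupervised pipeline sketched below applies PCA (via SVD) followed by plain k-means to synthetic point spectra from two hypothetical tip states. All data and parameters here are invented for the sketch, not real STM/AFM spectra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for point spectra: two hypothetical "tip states",
# each a Gaussian resonance at a different bias, plus measurement noise.
energies = np.linspace(-1.0, 1.0, 200)
def spectrum(center):
    return np.exp(-((energies - center) ** 2) / 0.02) + 0.05 * rng.standard_normal(200)

X = np.array([spectrum(-0.4) for _ in range(50)] +
             [spectrum(0.5) for _ in range(50)])

# PCA via SVD: project spectra onto their two leading components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T          # (100, 2) low-dimensional representation

# Plain k-means (k = 2) on the PCA scores.
centroids = Z[rng.choice(len(Z), 2, replace=False)]
for _ in range(20):
    labels = np.argmin(((Z[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([Z[labels == k].mean(axis=0) for k in range(2)])

# Per-class label spread: 0 means each spectral class fell into one cluster.
print(labels[:50].std(), labels[50:].std())
```

Real spectra are far messier than this toy example, which is exactly why the project explores richer feature extraction (deep learning, Voronoi segmentation) on top of these classical building blocks.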

[1] See O. Gordon and P. Moriarty, Mach. Learn. Sci. Tech. 1, 023001 (2020) for a brief introduction to scanning probe microscopy in the context of machine learning and artificial intelligence.

[2] See, for example, S. Kalinin et al., ACS Nano 10, 9068 (2016).

[3] As a recent (and rare) example of this type of unsupervised approach to SPM spectral classification, see P. Wahl et al., Phys. Rev. B 101, 115112 (2020).

Supervisors: Prof Philip Moriarty (School of Physics and Astronomy), Dr Michael Pound (School of Computer Science), Dr Brian Kiraly (School of Physics and Astronomy). 

For further details and to arrange an interview please contact Prof Philip Moriarty.


Quantifying the risk of serious harms among people prescribed opioids for chronic pain using federated analytics with big healthcare data

Although opioids are beneficial for acute pain and in end-of-life care, their use for chronic pain remains controversial. Opioid prescribing in the UK has greatly increased in the past twenty years. Studies show that opioids have been prescribed too frequently for many patients with chronic pain, incurring substantial healthcare costs. It has also become apparent that long-term use of opioids can be associated with serious harms, including addiction, overdose and death. Electronic health records, big data from CPRD, the UK Biobank, and federated infrastructures that collect big data sources on pain management have created opportunities to understand opioid use nationally and to develop advanced intelligent big data approaches for predicting opioid usage risks.

Our aim is therefore to develop intelligent federated analytics approaches to assess whether these serious harms can be predicted using routinely collected data from UK primary care electronic health records and pain management data. The project will determine which information about the person, their medicine-taking behaviours, and their prescribing patterns is associated with serious harms.
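
The core principle of federated analytics is that model updates travel while patient-level data stays with each data holder. A minimal numpy sketch of that idea, fitting a shared logistic risk model across three hypothetical sites, is shown below; the site data, predictors and learning rate are all invented for the example and do not reflect the project's actual infrastructure.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_gradient(w, X, y):
    """Gradient of the logistic loss computed *inside* one data holder;
    only this gradient, never the patient-level data, leaves the site."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Three hypothetical sites, each holding its own (X, y): two illustrative
# predictors (e.g. daily dose, duration of use) plus an intercept, and a
# binary harm outcome simulated from known coefficients w_true.
w_true = np.array([1.5, -0.8, 0.3])
sites = []
for _ in range(3):
    X = np.column_stack([rng.standard_normal((400, 2)), np.ones(400)])
    y = (rng.random(400) < 1 / (1 + np.exp(-X @ w_true))).astype(float)
    sites.append((X, y))

# Federated gradient descent: the coordinator only sees averaged gradients.
w = np.zeros(3)
for _ in range(500):
    g = np.mean([local_gradient(w, X, y) for X, y in sites], axis=0)
    w -= 0.5 * g
print(w)  # should approach w_true up to sampling noise
```

Production federated systems add secure aggregation and differential-privacy safeguards on top of this basic exchange, but the data-stays-local structure is the same.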

Applicants will be expected to have a background in Statistics, Mathematics, Computer Science, Epidemiology or a relevant discipline with a significant data analysis component. Previous experience of analysing large data sources, ideally in a healthcare setting, would be an advantage.

Supervisors: Dr Grazziela Figueredo (School of Computer Science), Dr Roger Knaggs (School of Pharmacy).

For further details and to arrange an interview please contact Dr Grazziela Figueredo.


“What the Cell!?” – Building interactive AI approaches to identify cellular features in microscopic images of plants and crops

Characterising plant anatomy (the structure and organisation of cells and tissues) is essential to understanding fundamental processes such as water and nutrient transport, biomechanics and photosynthesis. Anatomical features have been traditionally studied using sectioned material, and imaged in 2D. Recent advances in tomography allow the high-throughput acquisition of 3D datasets, shifting the bottleneck to the extraction of meaningful information from these complex images. The School of Biosciences is generating anatomical images from a range of plant species and at multiple scales (from inside cells up to whole tissues and organs) to answer fundamental and applied questions in plant biology. 

This project will use existing data and new datasets, annotated by plant science experts, to develop an interactive segmentation and classification pipeline that enables us to train and use new machine learning methods to extract 3D anatomical information from plant samples. The interactive aspect of this is key: the images are very challenging, and plant species look very different from each other. This PhD will develop new approaches enabling us to quickly build and retrain AI models with new datasets of plant cells. Images will be generated by the novel technique of Laser Ablation Tomography (LAT) and by more traditional confocal laser scanning microscopy. The project will be strongly transdisciplinary, with the successful applicant based in both the Computer Vision Laboratory (CVL) in the School of Computer Science and the Hounsfield Facility in the School of Biosciences.
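
The human-in-the-loop idea at the heart of this pipeline can be sketched very simply: a lightweight pixel classifier that is re-fitted every time the expert supplies corrected labels. The toy below uses a nearest-centroid classifier on a single synthetic intensity feature; all names and data are hypothetical, and the project's actual models would be far richer (deep networks over 3D volumes).

```python
import numpy as np

rng = np.random.default_rng(2)

class InteractiveSegmenter:
    """Toy nearest-centroid pixel classifier, cheap enough to re-fit
    interactively whenever the expert corrects the current labels."""
    def fit(self, features, labels):
        self.classes = np.unique(labels)
        self.centroids = np.array([features[labels == c].mean(axis=0)
                                   for c in self.classes])
        return self
    def predict(self, features):
        d = ((features[:, None] - self.centroids[None]) ** 2).sum(-1)
        return self.classes[np.argmin(d, axis=1)]

# Synthetic "image": brighter cell-wall pixels vs darker background.
walls = rng.normal(0.8, 0.05, (200, 1))
background = rng.normal(0.3, 0.05, (200, 1))
X = np.vstack([walls, background])
y = np.array([1] * 200 + [0] * 200)

seg = InteractiveSegmenter().fit(X[:50], y[:50])   # initial sparse annotation
corrections = slice(200, 260)                      # expert labels more pixels
seg.fit(np.vstack([X[:50], X[corrections]]),       # quick retrain on the
        np.concatenate([y[:50], y[corrections]]))  # corrected label set
acc = (seg.predict(X) == y).mean()
print(acc)
```

The design point is the retraining loop, not the classifier: any model cheap enough to re-fit in seconds can sit inside an interactive annotation tool.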

Skill set: Applicants should have strong programming skills; there is no strict programming language requirement, although experience with Python would be a particular advantage. Some experience in computer vision (e.g. via an undergraduate module or project) is desirable but not essential. An interest in mathematics is helpful, as many computer vision approaches are mathematical in nature. Applicants would benefit from an interest in biology, but do not need biological experience.

Supervisors: Dr Darren Wells (School of Biosciences), Prof Andy French (School of Computer Science), Dr Jonathan Atkinson (School of Biosciences), Dr Valerio Giuffrida (School of Computer Science).

For further details and to arrange an interview please contact Dr Darren Wells.


Further information

For further enquiries, please contact Professor Ender Özcan - School of Computer Science

School of Computer Science

University of Nottingham
Jubilee Campus
Wollaton Road
Nottingham, NG8 1BB

For all enquiries please visit: