School of Computer Science

Faculty of Science Doctoral Training Centre in Artificial Intelligence


AI DTC 2022

The Faculty of Science AI DTC is a new initiative by the University of Nottingham to train future researchers and leaders to address the most pressing challenges of the 21st century through foundational and applied AI research, delivered on a cohort basis. Training and supervision will be provided by a team of outstanding scholars from disciplines cutting across Arts, Engineering, Medicine and Health Sciences, Science and Social Sciences.

The Faculty of Science will invite applications from Home students for fully-funded PhD studentships to carry out multidisciplinary research in the world-transforming field of artificial intelligence. The PhD students will have the opportunity to:

  • Choose from a wide range of AI-related multidisciplinary research projects, working with world-class academic experts in their fields;
  • Benefit from a fully-funded PhD with an attractive annual tax-free stipend;
  • Join a multidisciplinary cohort to benefit from peer-to-peer learning and transferable skills development.
Studentship information
Entry requirements Minimum of a 2:1 bachelor's degree in a discipline relevant to the research topic (please consult the potential supervisors), and a strong enthusiasm for artificial intelligence research. Studentships are open to home students only.
Start date 1st October 2024
Funding Annual tax-free stipend based on the UKRI rate (currently £19,237), plus full PhD tuition fees for the four years
Duration 4 years


The deadline for completing and submitting your application via NottinghamHub has now passed.

Research Topics

Rooted in the exceptional research environments of our Schools and Faculties at the University of Nottingham, the third cohort of the AI DTC will be organised around seventeen multidisciplinary research topics. It is important that you identify a research topic aligned with your background, skill set and particular areas of interest. You will need to obtain support from the supervisors associated with your chosen research topic before submitting your official application: explore the research projects below and contact the main supervisor of the project that interests you directly, to discuss further details and to arrange an interview as appropriate. In your PhD studentship application, you will be asked to provide your CV and a personal statement that names a research/project topic from the list below, explains why you are interested in that topic and your motivation for doing a PhD, and gives the names of the supervisors whose support you have obtained. We encourage applicants to write the personal statement in their own words, based on their background and experience. Please follow the instructions above on how to apply.



AI for additive manufacture of complex flow devices

AI-based generative methods allow us to generate designs of structures with optimal or new functionality. These structures are often complex, and therefore difficult to manufacture, but with additive manufacturing and its design freedoms such constraints do not apply. Consequently, the combination of AI, numerical methods and additive manufacturing can lead to a new paradigm in functionality, in which any device whose function depends on its shape becomes a candidate for augmented behaviour.

Our approach is to use recent developments in generative design, such as AI diffusion models, to identify designs that match the requirements of the user, allowing us to ‘dial up’ or select a function and have a design provided to us. The model will be built and trained using numerical and computational models, validated by experiment, which will supply the training data and allow us to augment it with further data over the course of the PhD. As the final piece of the jigsaw, additively manufactured parts will be used to validate both the numerical models and the AI model.

Our initial focus will be on the automatic generation of mixing devices, commonly used for process intensification and downstream processing in multiple high-value manufacturing operations such as pharmaceutical and chemical synthesis, but a general methodology is sought that can be applied across multiple applications. 

This project will in part be supported by a Programme Grant hosted by the University of Nottingham, ‘Dialling up performance for on demand management’, with over 10 industrial partners supporting the project. The studentship will have access to resources (physical, chemical and financial) to support the research, and also access to an exchange fund that enables extended research visits to collaborating institutions, including UC Berkeley, ETH Zurich and CSIRO.

Supervisors: Prof Ricky Wildman (Faculty of Engineering), Dr Mirco Magnini (Faculty of Engineering), Prof Ender Ozcan (School of Computer Science).

For further details and to arrange an interview please contact Prof Ricky Wildman (Faculty of Engineering).


AI-based decoding of evoked neural activity to study bilingual language processing

There are more people in the world who speak two or more languages fluently than people who speak only one. How are the languages of these bilinguals represented and processed in the brain? Traditionally, neuroimaging and behavioural techniques have been used to develop theories about language storage and processing. Recent studies have shown that insight into how multiple languages are represented in the brain can be gained by decoding language from neural activity. A particularly interesting approach is cross-language decoding, which involves training a decoder on brain activity evoked by words in one language and using it to decode words in another language. Whereas most studies in the literature have used functional magnetic resonance imaging (fMRI) data for decoding language, recent studies have shown that decoding language from evoked non-invasive electroencephalography (EEG) data is also possible. However, the decoding accuracies obtained have generally been low. 

This project aims to study the time course of the activation of within- and between-language representations in the bilingual brain by decoding evoked EEG activity across multiple modalities (visual and auditory). In particular, the project will focus on semantic representations. The project will involve designing and conducting EEG experiments with bilinguals to obtain data for decoding. Furthermore, the project will make use of state-of-the-art machine learning techniques and advanced large language models to improve decoding accuracy. 
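As a toy illustration of the cross-language decoding idea described above (invented data and dimensions, not the project's actual pipeline), the sketch below trains a nearest-centroid decoder on simulated "EEG" responses to words in one language and applies it to responses in a second language that shares the same latent semantic representation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_concepts, n_channels, n_trials = 5, 32, 40
semantic = rng.normal(size=(n_concepts, 8))   # latent semantic codes, shared across languages
mixing = rng.normal(size=(8, n_channels))     # shared "brain" projection onto EEG channels

def simulate(noise=1.0):
    """Simulate noisy EEG trials for each concept (purely synthetic)."""
    X = np.repeat(semantic @ mixing, n_trials, axis=0)
    X += noise * rng.normal(size=X.shape)
    y = np.repeat(np.arange(n_concepts), n_trials)
    return X, y

X_l1, y_l1 = simulate()   # e.g. words in the first language
X_l2, y_l2 = simulate()   # their translations in the second language

# Nearest-centroid decoder trained on language 1 only, tested on language 2
centroids = np.stack([X_l1[y_l1 == c].mean(0) for c in range(n_concepts)])
pred = np.argmin(((X_l2[:, None] - centroids) ** 2).sum(-1), axis=1)
acc = (pred == y_l2).mean()
print(f"cross-language decoding accuracy: {acc:.2f} (chance = 0.20)")
```

Because both languages are generated from the same latent semantic codes, the decoder transfers across languages; real EEG is far noisier, which is why reported accuracies are low.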

Applicants are expected to have a degree in Psychology, Mathematics, Computer Science, Physics or a related area, and to have knowledge of programming and a strong interest in machine learning, cognitive neuroscience and psycholinguistics. Experience with EEG and experimental techniques is desirable, but not essential. 

Supervisors: Dr Walter van Heuven (School of Psychology), Dr Matias Ison (School of Psychology), Dr Ruediger Thul (School of Mathematical Sciences)

For further details and to arrange an interview please contact Dr Walter van Heuven (School of Psychology). 


AI-based digital twins and active control in composites manufacturing

Applications are invited for a PhD studentship to conduct interdisciplinary research in the rapidly evolving area of AI for the manufacturing of fibre-reinforced polymer composites. A student working on this project will develop and test novel AI algorithms for digital twins of manufactured composite parts and for active control of composites manufacturing processes. The project will include developing effective surrogate models (e.g., via physics-informed neural networks (PINNs)) and active learning of the underlying physical processes. The algorithms will be used for on-the-fly estimation of the properties of composite parts and for real-time active control to produce parts to specification while avoiding defects. The project will include working with real data from experiments conducted in our Composites Research lab at the Faculty of Engineering and with a range of our industrial partners. 
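As a minimal illustration of the physics-informed idea (a toy, not the project's surrogate models), the snippet below evaluates a PINN-style loss for the ODE du/dx = -u with u(0) = 1: a residual penalty at collocation points plus a boundary penalty. In a real PINN a neural network plays the role of the candidate function u; here we just compare two hand-written candidates.

```python
import numpy as np

# Collocation points for the toy ODE du/dx = -u, u(0) = 1
# (true solution u(x) = exp(-x)).
x = np.linspace(0.0, 1.0, 101)

def pinn_loss(u):
    """Physics residual + boundary penalty, the basic PINN training loss."""
    du = np.gradient(u(x), x)               # finite-difference stand-in for autodiff
    residual = du + u(x)                    # du/dx + u should vanish everywhere
    boundary = u(np.array([0.0]))[0] - 1.0  # enforce u(0) = 1
    return np.mean(residual ** 2) + boundary ** 2

good = pinn_loss(lambda t: np.exp(-t))      # the true solution
bad = pinn_loss(lambda t: 1.0 - t)          # a poor linear guess
print(good, bad)
```

The true solution gives a loss near zero (only finite-difference error remains), while the linear guess is heavily penalised by the residual term.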

Supervisors: Prof Michael Tretyakov (School of Mathematical Sciences), Dr Mikhail Matveev (Faculty of Engineering), Dr Marco Iglesias (School of Mathematical Sciences). 

For further details and to arrange an interview please contact Prof Michael Tretyakov (School of Mathematical Sciences).


Artificial-Intelligence-Aided Energy Management for Future Electric Aircraft

COP26 set a clear goal of securing global net zero by mid-century and keeping 1.5 °C within reach. To deliver this target, it is critical that the transportation sector speeds up its electrification. For aviation, which is responsible for about 20% of the transport sector's overall emissions, the Intergovernmental Panel on Climate Change has set a target of at least a 50% reduction in CO2 emissions by 2050. Hybrid/electric aircraft, which are powered by batteries, fuel cells and generators, have great potential to reduce CO2 emissions in the aviation sector. 

This PhD project aims to optimise energy management on board future electric aircraft using recent advances in machine learning and artificial intelligence (AI). The student will work within two world-leading research groups at the University of Nottingham. The applicant should have an electrical engineering background with strong mathematical skills; basic AI knowledge is also expected. 

Supervisors: Dr Tao Yang (Electrical Engineering), Dr Grazziela Figueredo (Faculty of Medicine & Health Sciences) 

For further details and to arrange an interview please contact Dr Tao Yang (Electrical Engineering).


Camera-based movement analysis to provide real-time optimisation of therapeutic non-invasive neuromodulation for neurological diseases: A Human-in-the-Loop and Machine Learning Approach

Involuntary movements, such as tremor, tics, sudden jerks (chorea) and muscle spasms (dystonia), occur in many neurological disorders. These involuntary movements are difficult to treat, requiring medications (which often have unpleasant side effects) or invasive deep brain stimulation (requiring electrodes to be implanted in the brain). Recent innovative research by Prof Stephen Jackson (UoN Psychology) found that non-invasive median nerve stimulation (MNS), which involves stimulating a nerve in the wrist using a wearable electronic device, suppresses involuntary movements in Tourette syndrome by entraining oscillations in relevant brain circuits when the device is active. Clinical trials are now underway, or being developed, to test the therapeutic effect in a number of neurological disorders, including Parkinson's disease (PD), ataxia telangiectasia (A-T) and restless legs syndrome (RLS). 

However, involuntary movements are intermittent and highly variable, meaning that continuous stimulation by the MNS device is neither required nor desirable. Real-time detection of involuntary movements could allow personally tailored therapeutic stimulation strategies, optimised to detect the different types of involuntary movement seen in diseases such as PD, A-T and RLS, and used to optimise the stimulation regime on an individual basis. 

In this project, we will analyse videos of trial participants to understand how MNS affects the patients' symptoms, and to optimise the device in real time to provide the maximum benefit for each individual. To achieve this, we will utilise state-of-the-art marker-less pose estimation in combination with multi-modal machine learning to better understand the complex movements of those with movement disorders. This will enable us to tailor the treatment to each individual's specific needs in real time and maximise its effectiveness by tuning the MNS device. Demonstrating the feasibility of the human-in-the-loop approach will directly enable clinical trials of the effectiveness of personalised home-administered MNS stimulation in reducing the unwanted movements, and allow exploration of the potential benefits of this technology in improving quality of life for individuals with movement disorders. 
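To make the detection step concrete, here is a toy sketch (invented data, not the project's method) that flags a movement burst from the frame-to-frame speed of a single tracked wrist keypoint; in the real system, learned multi-modal models would replace the simple threshold used here.

```python
import numpy as np

# Simulated 2-D wrist keypoint track from a (hypothetical) marker-less
# pose estimator: slow postural drift plus one injected involuntary jerk.
rng = np.random.default_rng(1)
wrist = np.cumsum(0.01 * rng.normal(size=(500, 2)), axis=0)  # drift over 500 frames
wrist[200:210] += np.linspace(0.0, 3.0, 10)[:, None]         # the injected jerk

# Frame-to-frame speed; a burst is any frame exceeding a fixed threshold.
speed = np.linalg.norm(np.diff(wrist, axis=0), axis=1)
burst_frames = np.where(speed > 0.15)[0]
print(burst_frames)
```

A detector like this, running in real time, is what would trigger (or tune) the MNS device only when involuntary movement is actually present.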

This PhD project will benefit from a strong multidisciplinary approach, combining computer science, psychology and neuroscience. Applicants are expected to have a combination of programming experience (Python) and an interest/background in neuroscience. 

Supervisors: Dr Alexander Turner (School of Computer Science), Prof Stephen Jackson (School of Psychology), Prof Robert Dineen (Faculty of Medicine & Health Sciences) 

For further details and to arrange an interview please contact Dr Alexander Turner (School of Computer Science).


Computer vision for dynamic materials analysis

Electronic materials development is reliant upon the use of computational chemistry tools for elucidating structure-property relationships on the atomic scale. Molecular dynamics simulations can capture macroscopic material changes involving over 100 million atoms, such as catastrophic electrical breakdown, whilst retaining information down to the level of individual atoms. As electronic device miniaturisation has reached the nanoscale, using dynamic simulations as a theoretical microscope will be vital for overcoming the materials bottleneck we are facing at the end of Moore’s Law. 

One of the challenges in calculations of this scale is the analysis of structural changes including crystallinity, grain boundaries and nanodomain formation. These features are critical in determining properties from electronic and thermal conductivity to optical processes and response to electric fields. However, their complex dynamical behaviour involves thousands of atoms moving over hundreds of thousands of timesteps, making them unsuited to traditional modes of analysis developed for regular crystalline materials. 

This project will take advantage of AI tools in computer vision as a new approach to analyse structural features in dynamical materials simulations. These have advanced to enable analysis of images of self-similar repeating patterns, which will be highly applicable to subtle changes in atomic configurations. Deep learning methods will be used to develop classification, segmentation, and regression models to identify structural transitions, anomaly detection to identify point defects, and edge detection for revealing grain boundaries. These will be applied to the most pressing materials chemistry problems hindering electronic device miniaturisation. 
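As a toy illustration of the anomaly-detection idea for point defects (invented data, far simpler than the deep learning models the project will develop), the snippet below finds a vacancy in a thermally jittered square lattice by checking each ideal site against the nearest observed atom:

```python
import numpy as np

# Build a perfect 10x10 square lattice of atom positions, then knock out
# one atom to create a vacancy and add small "thermal" jitter.
n = 10
ideal = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
atoms = np.delete(ideal, 37, axis=0)  # remove atom 37 -> point defect
atoms += 0.05 * np.random.default_rng(2).normal(size=atoms.shape)

# Anomaly score: distance from each ideal site to its closest observed atom.
# The vacancy is the site whose nearest atom is unusually far away.
d = np.sqrt(((ideal[:, None] - atoms[None]) ** 2).sum(-1)).min(axis=1)
defect = int(np.argmax(d))
print(defect)  # prints 37
```

The same "compare against the expected local structure" logic, learned rather than hand-coded, scales to the complex grain boundaries and nanodomains described above.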

Potential applicants are expected to have some background in machine learning and/or computational materials science/chemistry, in addition to some experience in (and a desire to learn) scientific programming with tools such as Python, scikit-learn, and relevant deep learning libraries such as PyTorch. 

Supervisors: Dr Katherine Inzani (School of Chemistry), Dr Valerio Giuffrida (School of Computer Science), Dr Julie Greensmith (School of Computer Science)

For further details and to arrange an interview please contact Dr Katherine Inzani (School of Chemistry).


Computer Vision-Based Monitoring of Equine Welfare and Wellness

We are seeking a dynamic and passionate PhD student to join our team and help drive enhancements to equine health and welfare through AI-based monitoring. Through this PhD, you will have the unique opportunity to gain an exceptional skill set through interdisciplinary collaboration between the UoN School of Veterinary Medicine and Science, the School of Computer Science and industry partner Vet Vision AI, helping to shape the future of animal health and welfare. 

Numerous welfare issues can affect horses when stabled; however, many of these are preventable. Current processes to improve welfare are extremely limited, often relying on owner “know-how” rather than objective and continuous monitoring. The use of AI to automatically monitor welfare outcomes will provide a step change in animal health and welfare monitoring, empowering vets and horse owners to revolutionise equine welfare. 

This PhD aims to develop and deploy cutting-edge computer vision algorithms alongside veterinary insights to monitor and improve equine welfare outcomes accurately and automatically. Successful applicants will be based in the School of Veterinary Medicine and Science, in a close-knit team environment within which mentoring, collaboration and idea sharing are strongly promoted, with input from world-leading computer scientists from the School of Computer Science. They will also work directly with experienced computer vision developers at partner company Vet Vision AI, a spinout from the School of Veterinary Medicine and Science on a mission to revolutionise animal health and welfare by combining veterinary insights with computer vision technology. 

This PhD will combine world-leading veterinary expertise and equine domain knowledge with advanced skills in computer vision and machine learning. The aim is to translate this knowledge into cutting-edge solutions that will help owners and veterinary surgeons improve the lives of millions of animals worldwide. 

Supervisors: Dr Robert Hyde (School of Veterinary Medicine and Science), Prof Sarah Freeman (School of Veterinary Medicine and Science), Dr Katie Burrel (School of Veterinary Medicine and Science), Dr Zhun Zhong (School of Computer Science), Prof Andrew French (School of Computer Science) 

For further details and to arrange an interview please contact Dr Robert Hyde.


Energy requirements of neuromorphic learning systems

Large neural networks are rapidly invading many parts of science and have yielded some very exciting results. However, training large networks in particular requires a great deal of energy. This energy is needed to compute, but also to store information in the synaptic connections between neurons. Interestingly, biological systems also require substantial amounts of energy to learn. Under metabolically challenging conditions, these requirements can be so large that, in small animals, learning reduces lifespan. Based on these findings, we have started to design algorithms that reduce the energy needed to train neural networks. 

This project will explore energy requirements for learning in neuromorphic systems with memristors. Neuromorphic systems mimic the biological nervous system in their design principles and are currently being explored to create highly energy-efficient neural networks; memristors are a key technology in such devices. Specifically, we will 1) develop models that describe the energy needs for learning in neuromorphic networks, 2) use inspiration from biology to design algorithms that are more energy efficient and test them in simulations, and 3) contrast these energy requirements with the energy needs of biology as well as of conventional hardware. 
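As a back-of-the-envelope sketch of point 1 (a toy model, under the invented assumption that the energy of a memristor write scales with the size of the conductance change |Δw|), the snippet below compares the "write energy" of dense weight updates against a biology-inspired sparse rule that commits only the largest updates each step:

```python
import numpy as np

# Synthetic per-step weight gradients for a 100-synapse network over 1000 steps.
rng = np.random.default_rng(3)
grads = rng.normal(size=(1000, 100))
lr = 0.01

# Assumed energy model: each write costs |dw| (hypothetical scaling).
dense_energy = np.abs(lr * grads).sum()

# Sparse rule: commit only the largest 10% of updates at each step.
k = 10
idx = np.argsort(np.abs(grads), axis=1)[:, -k:]
sparse = np.zeros_like(grads)
np.put_along_axis(sparse, idx, np.take_along_axis(grads, idx, axis=1), axis=1)
sparse_energy = np.abs(lr * sparse).sum()

print(f"sparse/dense write-energy ratio: {sparse_energy / dense_energy:.2f}")
```

Because gradient magnitudes are heavy on a few synapses, most of the update mass survives while the total write energy drops substantially; whether such sparsification preserves learning performance is exactly the kind of question the project would study.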

The ideal applicant will have a strong background in physics, mathematics, computer science or engineering with both analytical and programming skills. Interest in biology and/or engineering will be beneficial. 

Supervisors: Prof Mark van Rossum (School of Psychology), Dr Neil Kemp (School of Physics and Astronomy)  

For further details and to arrange an interview please contact Prof Mark van Rossum (School of Psychology).


Imaging Molecules in Action: A Journey from Atoms to Energy Materials

For over 300 years, scientists have devised many ways to illustrate atoms and molecules and their behaviour, using everything from elemental symbols to sophisticated 3D models. It is easy to think these representations are reality, but how we picture molecules is usually based on bulk measurements in which information is averaged over billions of billions of molecules (spectroscopy) or over reciprocal space (diffractometry). Recently, electron microscopy began revolutionising the way we see matter, allowing us to glimpse atomically resolved images of individual molecules. These early breakthroughs are exciting, but they raise three key questions: 

• What does it mean to ‘see a molecule’? 

• How can we reconstruct the 3D shape of a molecule from 2D micrographs? 

• How can we achieve resolution in space and time for tracking the movement of individual molecules? 

In this project, we aim to answer fundamental questions related to electron microscopy by advancing image analysis methods. We plan to make a significant step change by applying advanced image analysis methods (Xin Chen) to problems related to single-molecule imaging (Andrei Khlobystov). While sub-Angstrom resolution can be achieved routinely, there is a pressing need for robust techniques to denoise electron microscopy images and allow comparison between experimental and theoretically simulated images. For example, image denoising using a diffusion probabilistic model has the potential to identify atom positions while simultaneously enhancing the temporal resolution of electron microscopy. This breakthrough in the spatiotemporal continuity of atomic imaging could allow us to watch molecules in action and film chemical reactions with atomic resolution in real time. 

The Nanoscale & Microscale Research Centre at Nottingham, along with leading European centres in Ulm, Germany and Diamond Light Source in the UK, will provide training for the student to image molecules with atomic resolution. In addition, the student will learn to develop modern artificial intelligence algorithms for image processing and analysis to enhance the data to the highest level of spatiotemporal resolution. 

The project aims to revolutionise the imaging of molecules, turning it into a valuable tool for discovering new chemical processes. This will be particularly useful in catalysis and net-zero technologies, which are crucial for 70% of the UK chemical industry. The University of Nottingham holds the largest EPSRC grant in this area (EP/V000055/1). 

Supervisors: Prof Andrei Khlobystov (School of Chemistry), Dr Xin Chen (School of Computer Science).  

For further details and to arrange an interview please contact Prof Andrei Khlobystov (School of Chemistry).


Intelligent Modelling of RNA Nanomedicines using Cryo-OrbiSIMS, Machine Learning and Molecular Simulations

Motivation: Imagine a new virus emerges, causing a global disease outbreak. Scientists quickly sequence the virus's genome, but developing a traditional vaccine or treatment can take months to years, and a faster solution is needed to control the outbreak. Our project addresses this critical challenge by developing an innovative approach to aid rapid and intelligent design of RNA nanomedicines. 

Challenges to be addressed: Designing RNA-based nanomedicines, such as mRNA vaccines, presents a unique challenge for computational scientists. RNA molecules adopt complex 3D folds that significantly impact their function within a cell. For instance, the way the mRNA folds within the vaccine can affect how well the body can read its instructions and elicit effective immune protection. Thus, it is important to determine the 3D structure of the mRNA within the vaccine. However, current experimental and computational techniques struggle to resolve its 3D architecture and flexibility. 

To address this technology gap, we have pioneered the integration of cryo-OrbiSIMS, a unique mass spectrometry imaging capability at the University of Nottingham, with molecular modelling and computer simulations to model the three-dimensional structures of RNAs at atomic resolution. 

However, studying RNA nanomedicines, such as mRNA vaccines and viral vectors, using our method presents two specific challenges: 

1. The cryo-OrbiSIMS data becomes highly complex, hindering efficient data analysis with traditional computer programming. 

2. The RNA molecules adopt increasingly complex folds and interactions, which are challenging to model using 3D structure prediction algorithms. 

Proposed solution: Through this AI DTC project, we aim to use AI/ML techniques to learn the complex relationships between RNA structures and their corresponding cryo-OrbiSIMS data, enhancing our data analysis and interpretation pipelines. 

Skillsets, Training and Development: The project sits at an interdisciplinary interface of structural biophysics and computational sciences. Thus, a strong background in data handling, statistical analysis or computer programming is essential for the DTC candidate. In addition, experience in computational structural biology would be desirable. 

The core supervisory team for this project comprises experts in RNA biology, structural biology, mass spectrometry and computer science, and is therefore very well placed to provide a cross-institutional, dynamic, intellectually stimulating and highly resourceful research environment. The doctoral student will be based within the Wolfson Centre for Global Virus Research at the University of Nottingham, with supervisory links to the School of Pharmacy, School of Computer Science and the Nanoscale and Microscale Research Centre, where a number of PhD students, postdocs and technical staff will provide day-to-day assistance in state-of-the-art, interdisciplinary techniques. 

Supervisors: Dr Aditi Borkar (School of Veterinary Medicine and Science), Dr Grazziela Figueredo (Faculty of Medicine & Health Sciences), Dr David Scurr (School of Pharmacy), Prof Morgan Alexander (School of Pharmacy), Dr Anna Kotowska (School of Pharmacy).

For further details and to arrange an interview please contact Dr Aditi Borkar (School of Veterinary Medicine and Science). 


"Learning by Doing": A Cross-Disciplinary Exploration of Human Motor Learning, AI, and Robot Learning

AI and robotics technologies are becoming pervasive in everyday human life, e.g. telepresence robots, robot vacuums and humanoids in factories. To operate in and interact with humans and made-for-human environments, robot learning plays a vital role, allowing robots to be extended and adapted more easily to novel situations. An example of applying AI and machine learning to robotics is learning-by-demonstration, a supervised learning technique in which robots acquire new skills by learning to imitate an expert. 

The research challenge lies in developing robot learning models by modelling the different stages of motor learning in humans, i.e., modelling humans as naïve learners rather than experts. Fundamental research in human motor learning (HML), the process by which humans acquire and refine motor skills through practice, feedback and adaptation, proves critical in this regard. This can allow the robot to mimic human motor control mechanisms, such as joint flexibility and compliance, more naturally, with the benefit of refining the acquired skill over time with input from humans. Example applications include an assistive robotic arm helping stroke patients guide their movements and providing corrective cues, or therapy patients learning to adjust their motor patterns for better outcomes based on the AI models. 

This research will address the challenge noted above, focusing on three measurable outcomes: (i) novel research into AI to model HML in subjects, including adults and children, with the particular goal of mapping the HML model to robot motion; (ii) implementation of the developed HML models in a robot learning architecture (in robotic manipulation or navigation), evaluated against novel benchmarking metrics (to be investigated) for the applicability and utility of the research in the real world; and (iii) an annotated and labelled dataset (videos, sensor data, etc.) of human as well as robot motions, made publicly available for the benefit of the scientific communities in psychology, AI and robotics. 
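The learning-by-demonstration baseline mentioned above can be sketched in a few lines of behaviour cloning (all quantities synthetic; a real system would use richer policies, sensor data and the HML modelling this project proposes):

```python
import numpy as np

# A hidden "expert" policy maps 4-D joint states to 2-D actions.
rng = np.random.default_rng(5)
W_expert = rng.normal(size=(4, 2))

# Demonstrations: states the expert visited and the (slightly noisy)
# actions it took.
states = rng.normal(size=(300, 4))
actions = states @ W_expert + 0.01 * rng.normal(size=(300, 2))

# Behaviour cloning = supervised regression from state to action.
W_learned, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy closely reproduces the expert on an unseen state.
test_state = rng.normal(size=(1, 4))
err = np.abs(test_state @ (W_learned - W_expert)).max()
print(f"max action error on a new state: {err:.3f}")
```

The project's contribution lies precisely where this sketch is weakest: a cloned expert cannot model the staged, practice-driven refinement of a naïve human learner.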

Prospective PhD applicants should have a degree in Computer Science or Robotics (or Psychology with an experimental focus), with knowledge of machine learning, deep learning, AI and, preferably, robotics. This project will require excellent programming skills, with evidence of proficient working knowledge of one or more of the following: C++, Python, ROS. 

Supervisors: Dr Nikhil Deshpande (School of Computer Science), Dr Deborah Serrien (School of Psychology)

For further details and to arrange an interview please contact Dr Nikhil Deshpande (School of Computer Science). 


Learning Heuristics for Computer Algebra Software

Computer Algebra Systems (CASs) play an increasingly important role in pure mathematics research. These are immensely complicated pieces of software that allow the user to represent and handle abstract mathematical objects within a computer. Handling these objects requires expensive computations and involves heuristics that choose the most appropriate computational methods. 

Each heuristic attempts to predict which of the available computational methods will be most effective in the current circumstances. The choice does not affect the correctness of the solution – it is essential that the output remains correct – but it does affect running times. Because of the exponential nature of many of the algorithms used by a CAS, a poor choice can make the difference between a computation finishing in under a second and one requiring years to complete. 

Traditionally, such heuristics are designed by humans who study at most a few hundred examples and produce common-sense-based algorithms, strongly influenced by the use cases they are familiar with. It has been shown that computers usually outperform humans in such predictions, although employing machine learning within computational software is often a non-trivial task due to the complexity of the patterns and the huge variability in use cases. 

This project will develop machine learning tools for algorithm selection, which can be embedded in an existing CAS. These tools will replace the existing hard-coded heuristics, and a life-long-learning mechanism will ensure that they develop over time in response to real-world use cases. 
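A minimal sketch of the algorithm-selection idea (synthetic features and a 1-nearest-neighbour rule standing in for the learned heuristic; the real project would use richer models trained on actual CAS timings):

```python
import numpy as np

# Each problem instance is summarised by a few cheap-to-compute features
# (e.g. degree, sparsity, number of variables -- all invented here).
rng = np.random.default_rng(4)
feats = rng.uniform(size=(200, 3))

# Synthetic ground truth: routine 0 is fastest iff the instance is "sparse".
fastest = (feats[:, 1] > 0.5).astype(int)

train_X, test_X = feats[:150], feats[150:]
train_y, test_y = fastest[:150], fastest[150:]

def select(x):
    """Predict the fastest routine for feature vector x (1-nearest neighbour)."""
    nn = np.argmin(((train_X - x) ** 2).sum(axis=1))
    return train_y[nn]

acc = np.mean([select(x) == y for x, y in zip(test_X, test_y)])
print(f"selector accuracy: {acc:.2f}")
```

Crucially, a wrong prediction here only costs time, never correctness, which is what makes CAS heuristics such a safe target for machine learning.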

This pioneering project will place the student at the forefront of an exciting new area of research: the application of machine learning techniques to exact symbolic computations. This work has enormous potential for collaborative projects with CAS research groups worldwide: for example, the hugely successful SAGE open source project; the DFG-funded Singular and OSCAR groups in Germany; and the Magma group in Australia. 

An ideal candidate will have a computer science or engineering background, with an interest in mathematics and artificial intelligence. Good programming skills are essential. 

Supervisors: Dr Daniel Karapetyan (School of Computer Science), Dr Alexander Kasprzyk (School of Mathematical Sciences). 

For further details and to arrange an interview please contact Dr Daniel Karapetyan (School of Computer Science).


Machine learning-assisted growth of atomically thin semiconductors

Development of quantum systems and understanding of their complex behaviour - from quantum tunnelling to entanglement - have led to revolutionary discoveries in science. Quantum science has great potential, but future progress requires a shift towards transformative materials and advanced fabrication methods. This project will use a bespoke cluster (EPI2SEM) for EPitaxial growth and In-situ analysis of two-dimensional SEMiconductors (2DSEM) to create the high-purity materials and research tools required to advance the field beyond the present state of the art. By using computational modelling and machine learning (ML), we aim to realise an artificial-intelligence semiconductor-synthesis control system. The successful growth of 2DSEM demands strict control of many conditions, including temperature, pressure, atom fluxes and their ratio, growth rate, etc. 

We will explore the thermodynamics and growth kinetics including reaction pathways, surface migration, and reaction rate of 2DSEM on specific substrate surfaces. The proposed approach of “cooking” (i.e. define recipes for the growth) and “tasting” (i.e. growth and measurement) will be applied to the fabrication of atomically thin-semiconductors with ultra-high electron mobilities for nanoelectronics. Advanced computational and ML simulations (with Prof. Elena Besley, School of Chemistry), combined with complementary experimental tests (with Prof. Amalia Patanè, School of Physics), will provide a powerful toolkit for unveiling the real-time growth mechanism. 

In this project, machine learning (ML) methods will be utilised to predict thin-film growth of semiconductor materials. The ML predictions will be tested in a bespoke facility, EPI2SEM (School of Physics), unique in the world, and by our industrial partner Paragraf (Greater Cambridge Area). Synthesis-by-design (with Prof. Amalia Patanè, School of Physics) guided by computation (with Prof. Elena Besley, School of Chemistry) will reduce the need for cost- and time-consuming trial-and-error experiments. 
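As a loose illustration of the “cook and taste” loop described above, the sketch below runs a closed-loop search over growth conditions against a toy quality function. The quality function, its optimum (750 K, flux ratio 1.5) and the parameter ranges are all invented for illustration and are not taken from the project.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate for film quality as a function of growth conditions.
# In the real project this role is played by a trained ML model (or by the
# EPI2SEM experiment itself); the optimum used here is invented.
def film_quality(temp_K, flux_ratio):
    return np.exp(-((temp_K - 750.0) / 60.0) ** 2
                  - ((flux_ratio - 1.5) / 0.4) ** 2)

# Closed-loop "cook and taste": propose a recipe, measure, keep the best.
best_recipe, best_q = None, -np.inf
for _ in range(200):
    recipe = (rng.uniform(500, 1000),   # growth temperature (K)
              rng.uniform(0.5, 3.0))    # atom flux ratio
    q = film_quality(*recipe)
    if q > best_q:
        best_recipe, best_q = recipe, q
```

In practice the random proposals would be replaced by a model-guided strategy (e.g. Bayesian optimisation), which is what makes synthesis-by-design cheaper than blind trial and error.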

Applicants will be expected to have a numerate background from a first degree in Maths, Chemistry, Physics or similar, a strong interest in applied machine learning, and enthusiasm for developing new coding and data science skills. 

Supervisors: Prof Elena Besley (School of Chemistry), Prof Amalia Patanè (School of Physics). 

For further details and to arrange an interview please contact Prof Amalia Patanè (School of Physics).


Multimodal machine learning of parent-child interactions as a predictor of child cognitive functions

The first five years of a child’s life play a critical role in cementing cognitive functions. Several studies have shown that the quantity and quality of parent-child interactions can affect children’s cognitive development later on. Periods of shared attention between caregivers and children have important implications for developing the child’s attention span and language skills. Recent neuroimaging research has also found that some parts of the brain may be activated in similar ways for both parents and children during these interactions, and interestingly, the extent of this analogous brain activation might be influenced by factors like how stressful the home environment is. Despite what is known about the association between child cognitive functions and the modalities (i.e., audio-visual and brain activity) involved in parent-child interactions, the extent to which these can be combined to inform better cognitive developmental outcomes for infants is unknown. If we can better understand these associations, we can help caregivers interact with young children in more effective ways that could potentially transform their development in the crucial early stages of life. 

In this project you will work to better understand how young children and caregivers interact during exploratory play. You will apply state-of-the-art machine learning techniques to analyse videos of interactions, detecting poses, activities and key events. You will explore novel deep learning methods for integrating multi-modal information sources, combining video events, audio data and fNIRS data to extract perceptual, verbalisation, affect and brain-function information (among others) during parent-child interactions and predict cognitive functions in children. Video, questionnaire and neuroimaging data are already available from an ongoing, longitudinal project assessing neurocognition in children in the School of Psychology at the University of Nottingham. Separately, it may be possible to design and collect more data in the future. 
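A minimal sketch of the multimodal late-fusion idea described above, using invented feature shapes: in the real pipeline the video, audio and fNIRS features would come from learned encoders and the read-out weights would be trained, whereas here everything is random for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-session features (shapes are illustrative assumptions):
video_feat = rng.normal(size=(8, 128))  # e.g. pooled pose/activity embeddings
audio_feat = rng.normal(size=(8, 64))   # e.g. speech/verbalisation embeddings
fnirs_feat = rng.normal(size=(8, 32))   # e.g. channel-wise haemodynamic stats

def zscore(f):
    # Normalise each modality so no single one dominates the fused vector
    return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-8)

# Late fusion: normalise each modality, then concatenate per session
fused = np.concatenate(
    [zscore(video_feat), zscore(audio_feat), zscore(fnirs_feat)], axis=1)

# Linear read-out to a cognitive score (weights would be learned in practice)
w = rng.normal(size=fused.shape[1]) / np.sqrt(fused.shape[1])
predicted_score = fused @ w
```

Late fusion is only one of several integration strategies; the project would also explore deeper cross-modal architectures.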

Applicants will be expected to have a good working experience with current machine learning and image processing tools and techniques. Prior knowledge of biomedical signal processing and natural language processing is desirable but not essential. 

Supervisors: Dr Joy Egede (School of Computer Science), Dr Sobana Wijeakumar (School of Psychology), Dr Aly Magassouba (School of Computer Science).

For further details and to arrange an interview please contact Dr Joy Egede (School of Computer Science).


Physics-informed Machine Learning for Climate Wind Change

Machine learning simulation strategies for fluid flows have been extensively developed in recent years. Particular attention has been paid to physics-informed deep neural networks in a statistical learning context. Such models combine measurements with physical properties to improve the reconstruction quality, especially when there are not enough velocity measurements. In this project, we will develop novel methods to reconstruct and predict the velocity field of incompressible flows given a finite set of measurements. Specifically, using wind data from the Met Office, we aim to reconstruct the wind field over the UK for the last 50 years, to predict the main features of UK wind in the upcoming decades, and to compare the results against climate model projections (CMIP6) and reanalysis data (ERA5) using classical data assimilation. For the spatiotemporal approximation, we will further develop the Physics-informed Spectral Learning (PiSL) framework [1,2,3], which has controllable accuracy. Our computational framework thus combines supervised (wind data) and unsupervised (physical conservation laws) learning techniques. From a mathematical standpoint, we will study the stability and robustness of the method, whereas, from a computer science standpoint, we will develop efficient algorithms for the adaptive construction of the sparse spectral approximation. 
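As a toy version of the divergence-free spectral idea behind PiSL, the sketch below fits a 2D incompressible velocity field to sparse measurements using a streamfunction Fourier dictionary, so the physical constraint (zero divergence) holds by construction rather than being penalised. The flow field, mode set and measurement count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: streamfunction psi(x, y) = sin(x) sin(y), giving
# u = d(psi)/dy = sin(x) cos(y), v = -d(psi)/dx = -cos(x) sin(y)  (div-free)
def true_uv(x, y):
    return np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)

# Sparse velocity measurements at random locations
pts = rng.uniform(0.0, 2.0 * np.pi, size=(40, 2))
u_obs, v_obs = true_uv(pts[:, 0], pts[:, 1])

# Dictionary of streamfunction modes psi_k = sin(kx x) sin(ky y);
# differentiating analytically yields a divergence-free velocity basis.
modes = [(kx, ky) for kx in range(1, 4) for ky in range(1, 4)]

def design(pts):
    x, y = pts[:, 0], pts[:, 1]
    cols_u, cols_v = [], []
    for kx, ky in modes:
        cols_u.append(ky * np.sin(kx * x) * np.cos(ky * y))   # u = dpsi/dy
        cols_v.append(-kx * np.cos(kx * x) * np.sin(ky * y))  # v = -dpsi/dx
    return np.column_stack(cols_u), np.column_stack(cols_v)

# Least-squares fit of spectral coefficients to the measurements
Au, Av = design(pts)
A = np.vstack([Au, Av])
b = np.concatenate([u_obs, v_obs])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because the truth is the (1,1) mode, the fit recovers a coefficient of one for that mode and (numerically) zero for the rest; the full framework adds adaptive, sparse mode selection and statistical error control.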

We are looking for master's degree holders who are interested in interdisciplinary research projects that revolve around computational methods such as mathematical models, simulation methods, and data science techniques. Applicants are expected to have a numerate background from a first degree in Maths, Physics, Computer Science or Engineering, and an interest in developing novel physics-informed machine learning approaches and in developing their coding and data science skills. It is essential that the applicant is proficient in Python or a similar high-level programming language. 

[1] Espath, L., Kabanov, D., Kiessling, J. and Tempone, R., 2021. Statistical learning for fluid flows: Sparse Fourier divergence-free approximations. Physics of Fluids, 33(9), p.097108.

[2] Kabanov, D.I., Espath, L., Kiessling, J. and Tempone, R.F., 2021. Estimating divergence‐free flows via neural networks. PAMM, 21(1), p.e202100173.

[3] Saidaoui, H., Espath, L. and Tempone, R., 2022. Deep NURBS–admissible neural networks. arXiv preprint arXiv:2210.13900.

Supervisors: Dr Luis Espath (School of Mathematical Sciences), Dr Xin Chen (School of Computer Science).  

For further details and to arrange an interview please contact Dr Luis Espath (School of Mathematical Sciences). 


Uncertainty quantification for machine learning models of chemical reactivity

In this PhD project, we will develop and implement approaches for estimating the uncertainty in AI predictions of chemical reactivity, to help strengthen the interaction between human chemists and machine learning algorithms, and to assess when AI predictions are likely to be correct and when, for example, first-principles quantum chemical calculations might be helpful. 

Predicting chemical reactivity is, in general, a challenging problem and one for which there is relatively little data, because experimental chemistry takes time and is expensive. Within our research group, we have a highly automated workflow for high-level quantum chemical calculations and we have generated thousands of examples relating to the reactivity of molecules for a specific chemical reaction. This project will evaluate a variety of machine learning algorithms trained on these data and, most crucially, will develop and implement techniques for computing the uncertainty in the prediction. 
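One common, simple route to such uncertainty estimates is a bootstrap ensemble: train several models on resampled data and use their spread as the error bar. The sketch below illustrates this on invented one-dimensional "reactivity" data; the descriptor, target and polynomial models are placeholders, not the project's actual methods or data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented toy data: molecular descriptor x -> reaction barrier y
x = rng.uniform(0.0, 1.0, 60)
y = 2.0 * x + rng.normal(0.0, 0.1, 60)

def ensemble_predict(x_new, n_models=50, degree=3):
    """Bootstrap ensemble: mean prediction plus spread as an uncertainty."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), len(x))       # resample with replacement
        coefs = np.polyfit(x[idx], y[idx], degree)  # fit one ensemble member
        preds.append(np.polyval(coefs, x_new))
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean_in, std_in = ensemble_predict(0.5)    # inside the training range
mean_out, std_out = ensemble_predict(3.0)  # far outside: spread should grow
```

The ensemble disagrees far more outside the training range than inside it, which is exactly the signal a chemist could use to decide when to fall back on quantum chemical calculations.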

The algorithms developed in the project will be implemented in our ai4green electronic lab notebook, which is available as a web-based application and which is the focus of a major ongoing project supported by the Royal Academy of Engineering. The results of the project will help chemists to make molecules in a greener and more sustainable fashion, by identifying routes with fewer steps or routes involving more benign reagents. 

Applicants should have, or be expected to achieve, at least a 2:1 Honours degree (or equivalent if from other countries) in Chemistry, Mathematics or a related subject. An MChem/MSc 4-year integrated Masters, a BSc + MSc, or a BSc with substantial research experience will be highly advantageous. Experience in computer programming is essential. 

Supervisors: Prof Jonathan Hirst (School of Chemistry), Prof Simon Preston (School of Mathematical Sciences).

For further details and to arrange an interview please contact Prof Jonathan Hirst (School of Chemistry).


Using AI to create ‘super-glasses’ to see cells deep inside living tissue for biomedicine

Imaging with cellular resolution (~1 micron) is a critically important tool for understanding biological processes that underpin diseases from Alzheimer’s to cancer. Images are usually captured with light (e.g. with microscopes), but light can penetrate less than 1 mm into tissue. This limitation prevents imaging deep inside large organs of the body, such as the brain, without greatly reducing resolution. The main reasons for the limited penetration depth of light into tissue are aberrations and scattering, which cause blurry images, akin to what a glasses-wearer might see without corrective lenses, or diffuse images, akin to trying to see through fog. 

Recent work has shown that special ‘glasses’, in the form of adaptive optical systems, can be created for microscopes that reverse the distortion caused by aberrations and scattering to produce clear images. Scattering and aberrations result from light travelling through inhomogeneous tissue, whose properties vary not only in space but also in time for living samples. To become a truly transformative technology, these glasses must in future be able to correct in real time, accounting for the sample, and hence the image distortions, changing with time. To do this, they must incorporate real-time measurements that help with estimating changes to the sample. In other words, they must be upgraded to ‘super-glasses’. Such technology has only recently become feasible due to advances in generative image AI. 

In this PhD you will apply and extend one of the most successful generative image reconstruction techniques, diffusion models, to correct for scattering and aberrations in two important real-world case studies: imaging deep in biological tissue models (e.g. for personalized medicine), and imaging through ultra-thin optical fibers to make medical endoscopes for disease diagnosis. You will implement an autoregressive learning model to incorporate recent measurement data into your diffusion models via a conditioning approach. Working with collaborators in the lab you will also explore options to control microscope wavefront shaping using reinforcement learning. The final system will maintain a real-time physical approximation of the sample state in a dynamically changing environment. 
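For intuition, the sketch below shows the diffusion forward (noising) process on toy one-dimensional data, together with the closed-form ideal denoiser that exists when the clean data happen to be Gaussian; in a real diffusion model a trained network plays the role of this denoiser for images. The noise schedule and all numbers are illustrative, not from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward diffusion: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
T = 100
betas = np.linspace(1e-4, 0.02, T)      # toy linear noise schedule
abar = np.cumprod(1.0 - betas)          # cumulative signal-retention factor

mu, s2 = 2.0, 0.1 ** 2                  # clean data: x_0 ~ N(mu, s2)
x0 = rng.normal(mu, np.sqrt(s2), size=1000)
eps = rng.normal(size=1000)

t = 80                                  # a fairly noisy timestep
a = np.sqrt(abar[t])
xt = a * x0 + np.sqrt(1.0 - abar[t]) * eps

# For Gaussian x_0 the minimum-MSE denoiser E[x_0 | x_t] is closed-form
# (joint-Gaussian conditioning); a trained network approximates this map
# for real image distributions.
denoised = mu + (a * s2 / (a ** 2 * s2 + 1.0 - abar[t])) * (xt - a * mu)
```

The denoised estimate is far closer to the clean signal than the noisy observation, which is the one reverse step a diffusion model repeats many times; conditioning on recent measurements modifies this denoiser at each step.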

Applicants are expected to have a strong background in computing and programming, and some knowledge of machine learning and deep learning would be ideal. Any knowledge of the relevant physics (optics/electromagnetism) would be useful but is not essential and can be learned on the job. The candidate will be supported in the different areas of knowledge required by a multidisciplinary team from Computer Science and Engineering, and will work with datasets measured in-house. 

Supervisors: Dr Michael Pound (School of Computer Science), Prof Amanda Wright (Faculty of Engineering), Dr George Gordon (Faculty of Engineering).

For further details and to arrange an interview please contact Dr Michael Pound (School of Computer Science).


Further information

For further enquiries, please contact Professor Ender Özcan - School of Computer Science

School of Computer Science

University of Nottingham
Jubilee Campus
Wollaton Road
Nottingham, NG8 1BB

For all enquiries please visit: