Faculty of Science Doctoral Training Centre in Artificial Intelligence
The Faculty of Science AI DTC is a new initiative by the University of Nottingham to train, on a cohort basis, future researchers and leaders who will address the most pressing challenges of the 21st century through foundational and applied AI research. Training and supervision will be delivered by a team of outstanding scholars from disciplines spanning Biosciences, Chemistry, Computer Science, Mathematical Sciences, Pharmacy, Physics and Astronomy, and Psychology.
The Faculty of Science invites applications from Home students for up to 6 fully-funded PhD studentships to carry out multidisciplinary research in the world-transforming field of artificial intelligence. The PhD students will have the opportunity to:
- Choose from a wide choice of AI-related multidisciplinary research projects available, working with world-class academic experts in their fields;
- Benefit from a fully-funded PhD with an attractive annual tax-free stipend;
- Join a multidisciplinary cohort to benefit from peer-to-peer learning and transferable skills development.
- Entry requirements: Minimum of a 2:1 bachelor's degree in a discipline relevant to the research topic (please consult the potential supervisors), and a strong enthusiasm for artificial intelligence research. Studentships are open to Home students only.
- Start date: 1st October 2023
- Funding: Annual tax-free stipend based on the UKRI rate (currently £17,668), plus fully-funded PhD tuition fees for the four years.
The deadline to have completed and submitted your application to NottinghamHub is 31st March 2023.
For information on how to apply, click here
Rooted in the exceptional research environments of the Faculty of Science Schools at the University of Nottingham, the second cohort of the AI DTC will be organised around 24 multidisciplinary research topics. It is important that you identify a research topic aligned with your background, skill set and particular areas of interest. You will need to obtain support from the supervisors associated with your chosen research topic before submitting your official application: explore the research projects below and contact the main supervisor of the project that interests you directly, to discuss further details and to arrange an interview as appropriate. In your PhD studentship application, you will be asked to provide your CV and a personal statement that names a research/project topic from the list below and the supervisors whose support you have obtained, and that explains why you are interested in that topic and your motivation for doing a PhD.
AI insight into the bionic (wo)man
The central role of electricity in biological systems is gaining prominence. The cell is increasingly understood as a mass of interconnected bioelectrical circuits, and in disease these circuits malfunction. However, our ability to communicate electrically with such systems is limited by a mismatch in materials, gaps in technological understanding, and the difficulty of selectively targeting bio-interfaced electrical reporting systems. This PhD will focus on formulating hundreds of new conducting biomaterials and interfacing them with hundreds of cell types in vitro to elucidate the material-electrical interactions that allow seamless integration of electronics with biology. Artificial intelligence will be utilised to elucidate material-cell communication.
Several data challenges in this area require AI to facilitate bioelectronic integration. The first is integrating data from various sources (such as genomics, transcriptomics, proteomics and imaging), recorded in response to stimulation of cell-bioelectronic interfaces and the subsequent bioelectrical alterations, into a single platform that can be analysed; this requires AI algorithms to process, integrate and make sense of large and complex data sets. The second is signal processing: bioelectrical signals, such as electrophysiology and imaging data, are noisy and high-dimensional, requiring AI algorithms for signal processing, noise reduction and feature extraction. The last challenge is predictive modelling: developing models that can accurately predict the behaviour of cells, tissues or organisms in response to bioelectronic interfaces requires machine learning to improve our predictive ability and allow us to program biology. These challenges highlight the need for AI in bioelectronic research, as it will enable us to process, analyse, and make predictions about complex biological systems, facilitating a technological revolution in bionics with applications in diagnostics, bioelectronic medicine and healthcare more broadly.
Supervisors: Dr Frankie Rawson (School of Pharmacy), Prof Juan P. Garrahan (School of Physics), Prof Morgan Alexander (School of Pharmacy).
For further details and to arrange an interview please contact Dr Frankie Rawson.
AI-based quantitative methods to investigate spinal cord regeneration in the axolotl
In this interdisciplinary PhD project, the student will investigate how the axolotl regenerates the spinal cord after injury by combining AI-based image analysis with computational modelling, using experimental data generated by our collaborators. In contrast to humans, salamanders such as the axolotl can resolve severe and extreme injuries of the spinal cord through complete and faithful regeneration. Although more than 250 years have passed since Spallanzani's original discovery of salamander tail regeneration after amputation, the governing mechanisms underlying these unparalleled regenerative capabilities are not yet understood.
This project is part of an international collaboration between the lab of Elly Tanaka, a world leader in regeneration in the axolotl at the Institute of Molecular Pathology in Vienna and the Chara lab at the UoN, which is the only modelling lab in the UK and possibly in the world investigating regeneration of this salamander. Recently, our two labs demonstrated that tail amputation leads to a particular spatiotemporal distribution of cycling cells in the axolotl. By combining a new transgenic axolotl using FUCCI technology (AxFUCCI) with the first cell-based computational model of the regenerative spinal cord, we found that regeneration is orchestrated by a particular spatiotemporal pattern of neural stem cell recruitment along the anterior-posterior (AP) axis. The goal of this PhD project is to build on these results to quantitatively and mechanistically investigate the axolotl regenerative response. The student will quantitatively analyse confocal images of AxFUCCI to develop AI methods to accurately estimate for the first time the architecture of the axolotl spinal cord during regeneration, building on image-analysis software developed by co-supervisor and computer scientist, Prof Andrew French. Then, the student will use this dynamical tissue architecture to develop a cell-based computational model of the axolotl spinal cord during regeneration. The image analysis will generate detailed knowledge of cell geometries and growth that will be embedded within a computational multicellular model, making use of the multicellular modelling framework developed by co-supervisor Dr Leah Band, thus enabling accurate simulations of transport and signalling mechanisms.
Applicants are expected to have experience of Python and machine learning/deep learning. Knowledge of modelling is desirable, but not essential.
Supervisors: Dr Osvaldo Chara (School of Biosciences), Dr Leah Band (School of Mathematical Sciences and School of Biosciences), Prof Andy French (School of Computer Science and School of Biosciences).
For further details and to arrange an interview please contact Dr Osvaldo Chara.
AI-Enabled Reaction Design and Discovery
This interdisciplinary project, involving both Chemistry and Computer Science, will work towards the challenge of computational chemical reaction discovery. Discovering new reactions is critical for access to new, improved pharmaceuticals and agrochemicals. It is generally achieved through experimental trial and error, and is therefore slow and wasteful. Traditional computations can be used to understand known reaction mechanisms, but are too slow to rapidly search the vast chemical space for novel, feasible reactions. This project will develop AI methods for molecular energy prediction, which can be up to a million times faster.
A particular focus will be on modelling transition states between stable molecules, as they are critical to understanding the speed, and therefore the feasibility, of a chemical transformation. This is a little-explored area, because generating training data from traditional slow computations is more challenging for transition states than for stable molecules. We have developed workflows for automated transition-state computation and AI methods for stable-molecule energy prediction. This project will leverage experience in both to tackle the challenge of transition-state prediction, by first assembling a large training dataset using automated computational tools and then exploring a wide variety of AI methods for energy prediction. The new AI methods will then be applied to the rapid exploration and discovery of new pericyclic reactions, with a focus on applications in synthesis planning for drug and agrochemical candidates and the discovery of new bioorthogonal reactions. Pericyclic reactions include click chemistry, which earned its inventors the 2022 Nobel Prize in Chemistry.
Candidates are expected to have a minimum of a 2:1 bachelor's degree in a Chemistry or related discipline, and a strong enthusiasm for artificial intelligence research.
Supervisors: Dr Kristaps Ermanis (School of Chemistry), Dr Grazziela Figueredo (School of Computer Science).
For further details and to arrange an interview please contact Dr Kristaps Ermanis.
Artificial gene regulatory networks as a new AI paradigm
Gene regulatory networks (GRNs) are the primary means by which living cells are programmed to respond to their environment in real time. They allow a population of genetically identical cells to behave differently; for example, the cells in our eyes behave differently from the cells in our skin. GRNs evolve in a specific way, allowing them to learn new responses or behaviours from previous patterns without losing existing knowledge. Artificial GRNs (aGRNs), that is, computer implementations of GRNs, have been used to help understand the biology of GRNs. However, they have not been considered as a computational paradigm in their own right.
The aim of this project is to establish aGRNs as a computational AI paradigm. It will involve the implementation of aGRNs using both deterministic and stochastic formulations, and the identification and testing of problem types for which this paradigm is likely to be especially valuable. These include systems that need to switch rapidly between different contexts, and systems that need to transfer learning from one domain to another. These are both important challenges for improving the generalisation of AI systems.
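As a minimal sketch of what a deterministic aGRN formulation might look like (all genes, parameters and values here are illustrative, not taken from the project), consider a two-gene toggle switch in which each gene's product represses the other:

```python
def repression(x, beta=4.0, n=2):
    """Hill-type repression: production falls as repressor level x rises.
    beta and n are illustrative parameters, not project values."""
    return beta / (1.0 + x ** n)

def simulate_toggle(a=1.5, b=0.5, dt=0.01, steps=5000):
    """Euler integration of a two-gene toggle switch: each gene's
    product represses the other, and both decay at unit rate.
    A stochastic formulation would replace these rate equations with
    discrete reaction events (e.g. a Gillespie-style simulation)."""
    for _ in range(steps):
        da = repression(b) - a
        db = repression(a) - b
        a, b = a + dt * da, b + dt * db
    return a, b

a, b = simulate_toggle()
# Starting with gene A slightly ahead, the network settles into a
# stable state in which A stays high and B stays low.
```

The bistability of this tiny network is the kind of context-switching behaviour the project would scale up and study as a computational primitive.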
Applicants are expected to have strong computer programming skills and a broad knowledge of artificial intelligence. Some biosciences background would also be beneficial to help understand the concepts, but this could be learned as part of the project if needed.
Supervisors: Dr Colin Johnson (School of Computer Science), Prof Dov Stekel (School of Biosciences).
For further details and to arrange an interview please contact Dr Colin Johnson.
Artificial Synapses with Dual Opto-Electronic control for Ultra-Fast Neuromorphic Computer Vision
Memristors (or resistive memory) are a new generation of electronic devices that directly emulate the chemical and electrical switching of biological synapses, i.e., the key learning and memory components of the human brain. Memristors also have the advantage of ultra-fast switching, low-power consumption, and nanoscale size, and therefore have the potential to usher in a whole new era of artificial intelligence, devices, and applications. The aim of this project is to develop new state-of-the-art memristor devices that can switch optically as well as electronically, thereby enabling these “optically switching synapses” to be used as “in-memory” computing elements in neuromorphic circuits for computer vision applications. This PhD project will develop new optically active materials, based on semiconducting nanowires/nanotubes coupled with metal nanoclusters and/or photoactive molecules, with enhanced light sensing capabilities that are suitable for integrating with memristor materials and devices. You will learn materials synthesis and deposition techniques, nanoscale device fabrication as well as advanced electrical and optical characterization methods.
Supervisors: Dr Neil Kemp (School of Physics and Astronomy), Professor Andrei Khlobystov (School of Chemistry), Dr Jesum Alves Fernandes (School of Chemistry).
For further details and to arrange an interview please contact Dr Neil Kemp.
Developing a robust methodology for consumers to analyse taste buds (fungiform papillae) using smartphones
Fungiform papillae (FP) are ‘mushroom-like’ papillae that appear as pinkish spots, located on the anterior part of the tongue, containing taste buds. Research has suggested that the anatomical structure of FP varies greatly across individuals and could be a marker for taste sensitivity, and further linked to food preference and choice. Until now, manual FP counting from digital photographs has been the most popular method of quantification. This is extremely time-consuming and error-prone. Automated methods have started to be developed in recent years; however, these require high-quality images taken under very strict conditions using professional cameras.
This project aims to:
- Fully automate the quantification of FP using cutting-edge computer vision methods, such that we are able to provide reliable counts from lower-quality images.
- Develop an interactive imaging app that can be used to guide the self-photography of FP at home. We will use the app platform to additionally explore automated capture of food choices and nutritional information from plated food (which can add new dimensions to future studies).
- Integrate this new technology into a food sensory study investigating the relationship between FP, taste sensitivity and taste preference.
The successful applicant will develop deep learning algorithms to quantify FP on the tongue from images collected via smartphones with an interactive app, so that consumers can easily capture this information themselves. The app itself will use computer vision techniques to interactively help the user take a high-quality photograph. Together, these two novelties contribute to a new platform for conducting taste research with the general public.
Although applicants are expected to have a computer science background, they will be integrated into a team of food scientists to help create a new image dataset to train the machine learning models, and to co-develop the app with domain input to ensure the system delivers the best quality data it can.
Supervisors: Prof Andrew French (School of Computer Science), Dr Qian Yang (School of Biosciences).
For further details and to arrange an interview please contact Prof Andrew French.
Digital twins for quantum microscopy
Superresolution microscopy is a rapidly developing field that provides the means to study biological and nanoscale structures with unprecedented detail. One of the most promising techniques for superresolution microscopy is spatial mode demultiplexing (SpaDe), which involves collecting information about the structure of the sample encoded in a suitable basis of spatial modes of light. This has been shown to enable unprecedented resolution enhancements compared to conventional direct imaging and has the potential to push microscopy towards the ultimate precision limits established by quantum mechanics. However, optimising the measurement setup and image reconstruction for SpaDe microscopy and surface analysis on real samples can be challenging and time-consuming.
The objective of this project is to develop a software framework for comprehensive simulation of a quantum superresolution microscope -- a digital twin -- to benchmark different experimental approaches and investigate the resolution improvements enabled by SpaDe in practical settings. Digital twins have been used successfully for task-specific uncertainty evaluation in surface and dimensional metrology, but their application in optical imaging remains largely unexplored.
The digital twin will be powered by physical models derived from first principles, including surface-scattering models, three-dimensional imaging theory, spatial mode demultiplexing, photon counting, and error-generation models. By incorporating the influences of various error sources (both intrinsic and environmental) via appropriate stochastic modelling before reconstructing the image, the virtual instrument will simulate the response of the real instrument under tunable conditions. The virtual instrument will also be used for uncertainty evaluation. This process will include the determination of relevant ISO metrological characteristics, such as noise, resolution and fidelity, which will be important for validating the emerging SpaDe imaging technology.
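As an illustration of the stochastic-modelling idea (a toy sketch, not the project's actual physical models), a virtual instrument can inject simple noise terms into each simulated reading and evaluate uncertainty by Monte-Carlo repetition:

```python
import math
import random

def measure(true_value, rng, gain=1.0, read_noise=0.05, shot_scale=0.02):
    """One simulated detector reading: the true signal plus a
    signal-dependent 'shot' term and a fixed 'read' term.
    All noise magnitudes here are illustrative assumptions."""
    shot = rng.gauss(0.0, shot_scale * math.sqrt(max(true_value, 0.0)))
    read = rng.gauss(0.0, read_noise)
    return gain * true_value + shot + read

def evaluate_uncertainty(true_value, repeats=10000, seed=0):
    """Monte-Carlo uncertainty evaluation: repeat the virtual
    measurement and summarise the spread of the readings, as a
    digital twin would for error budgeting."""
    rng = random.Random(seed)
    readings = [measure(true_value, rng) for _ in range(repeats)]
    mean = sum(readings) / repeats
    var = sum((r - mean) ** 2 for r in readings) / (repeats - 1)
    return mean, math.sqrt(var)

mean, sd = evaluate_uncertainty(1.0)
```

A full digital twin would replace these ad hoc Gaussian terms with the first-principles scattering, demultiplexing and photon-counting models described above, but the repeat-and-summarise loop is the same.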
This project will involve a combination of theoretical and computational work, as well as interdisciplinary collaboration with experts in the fields of quantum physics, material science, and engineering. The successful candidate will have the opportunity to work with cutting-edge technology and contribute to the advancement of both digital twin frameworks and superresolution microscopy.
Supervisors: Prof Gerardo Adesso (School of Mathematical Sciences), Dr Katherine Inzani (School of Chemistry).
For further details and to arrange an interview please contact Prof Gerardo Adesso.
Energy requirements of neuromorphic learning systems
Very large neural networks are rapidly invading many parts of science and have yielded some very exciting results. However, training large networks in particular requires a lot of energy. This energy is needed to compute, but also to store information in the synaptic connections between neurons. Interestingly, biological systems also require substantial amounts of energy to learn. Under metabolically challenging conditions, these requirements can be so large that, in small animals, learning reduces the lifespan. Based on these findings, we have started to design algorithms that reduce the energy needed to train neural networks.
This project will explore energy requirements for learning in neuromorphic systems with memristors. Neuromorphic systems mimic the biological nervous system in their design principles and are currently being explored to create highly energy-efficient neural networks; memristors are a key technology in such devices. Specifically, we will 1) develop models that describe the energy needs for learning in neuromorphic networks, 2) take inspiration from biology to design more energy-efficient algorithms and test them in simulations, and 3) contrast these energy requirements with the energy needs of biology as well as of conventional hardware.
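To make the idea of an energy model concrete, here is a toy sketch (our own illustration, not the project's model) that tallies a cost proportional to each synaptic weight change while a perceptron learns the AND function; the "energy per unit weight change" assumption is purely illustrative:

```python
def train_perceptron(data, lr=0.1, epochs=10):
    """Toy perceptron trained on binary examples, with a simple
    synaptic-energy model: assume every weight or bias update costs
    energy proportional to |change| (an illustrative assumption)."""
    w = [0.0, 0.0]
    b = 0.0
    energy = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            if err:
                for i in range(2):
                    dw = lr * err * x[i]
                    w[i] += dw
                    energy += abs(dw)       # tally synaptic-change cost
                b += lr * err
                energy += abs(lr * err)     # bias updates cost energy too
    return w, b, energy

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, energy = train_perceptron(AND_DATA)
```

In this framing, a "more energy-efficient algorithm" is one that reaches the same accuracy with a smaller accumulated `energy`; the project would replace this crude tally with device-level memristor and biological cost models.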
The ideal applicant will have a strong background in physics, mathematics, computer science or engineering with both analytical and programming skills. Interest in biology and/or engineering will be beneficial.
Supervisors: Prof Mark van Rossum (School of Psychology and School of Mathematical Sciences), Dr Neil Kemp (School of Physics and Astronomy).
For further details and to arrange an interview please contact Prof Mark van Rossum.
Enhanced artificial intelligence for retrosynthesis planning
In this PhD project, we will develop innovative enhancements of the Monte Carlo tree search (MCTS) algorithm for the problem of retrosynthesis. Retrosynthesis is the process of repeatedly breaking down a ‘target’ molecule via valid chemical reactions to obtain a set of simpler starting molecules and several reaction routes that lead back to the initial target. MCTS is an efficient search algorithm, most notably known for its use in Google DeepMind's AlphaGo. The algorithms developed in the project will be implemented in our ai4green electronic lab notebook, which is available as a web-based application (http://ai4green.app) and is the focus of a major ongoing project supported by the Royal Academy of Engineering. Improvements to the MCTS algorithm in the context of retrosynthesis will help chemists make molecules in a greener and more sustainable fashion, by identifying routes with fewer steps or routes involving more benign reagents.
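The core of MCTS is the selection rule that balances exploiting promising moves against exploring untried ones. A minimal sketch of the standard UCB1 (UCT) rule, shown here with abstract toy statistics rather than real reaction data:

```python
import math

def ucb1_select(children, c=1.4):
    """UCT selection: among child nodes (e.g. candidate disconnections
    of a target molecule), pick the index maximising mean reward plus
    an exploration bonus. `children` is a list of (visits, total_reward)
    pairs; c is the usual exploration constant (conventional, not tuned)."""
    parent_visits = sum(v for v, _ in children)

    def score(i):
        visits, total = children[i]
        if visits == 0:
            return float("inf")   # always try unvisited moves first
        return total / visits + c * math.sqrt(math.log(parent_visits) / visits)

    return max(range(len(children)), key=score)
```

In a retrosynthesis tree the reward might reflect route length or reagent greenness; here it is just an abstract number, and the project's enhancements would refine exactly this kind of selection and rollout machinery.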
Applicants should have, or expect to achieve, at least a 2:1 Honours degree (or equivalent if from another country) in Chemistry, Computer Science or a related subject. An MChem/MSc 4-year integrated Masters, a BSc + MSc, or a BSc with substantial research experience will be highly advantageous. Experience in computer programming will also be beneficial.
Supervisors: Prof Jonathan Hirst (School of Chemistry), Dr Kristian Spoerer (School of Computer Science).
For further details and to arrange an interview please contact Prof Jonathan Hirst.
Explainable Generative Models for Biomaterials Discovery
This project aims to develop interpretable machine learning and artificial intelligence (AI) approaches to the design of novel biomaterials to be used in medical devices.
Advanced biomaterials are urgently needed to address the healthcare challenges associated with ageing populations. High-throughput technologies generate substantial volumes of data on large numbers of biomaterials with diverse chemical and topographical properties. Machine learning methods are highly successful at predicting the useful properties that drive biological phenomena within complex materials. This project aims to explore AI for the design of new biomaterials. It involves three stages:
1. Extracting materials’ chemical and topographical properties using deep learning and few-shot learning;
2. Designing new, improved materials using generative methods;
3. Interpreting the machine learning models’ decisions to further inform biomaterials researchers about the design choices made by the machines.
Applicants are expected to have experience of machine learning, python, and software engineering. Knowledge of evolutionary algorithms and agile methodology is desirable, but not essential.
Supervisors: Dr Grazziela Figueredo (School of Computer Science), Prof Morgan Alexander (School of Pharmacy).
For further details and to arrange an interview please contact Dr Grazziela Figueredo.
Guided Image Generation for Artists (GIGA) – Making Deep Learning-Based Image Generators Accessible to Artists
Deep learning-based image generators such as Stable Diffusion and DALL-E 2 promise to revolutionise the artistic process, allowing the creation of breath-taking images from simple text prompts. However, the black-box nature of such AI systems and the technical expertise required to steer such models create significant obstacles to their adoption in the arts community. Working with a group of artists, this project will develop a human-AI teaming tool that makes image generators more accessible to non-experts. The aim of this tool will be to guide artists through the creative process, expose lesser-known features, provide interactive visualisations to help manage the process, and adapt to the different intentions and preferences of individual artists.
An important part of this project will be to study how artists adapt to, and use this tool. To capture (and quantify) how non-expert observers interpret the outputs of machine learning models prompted with different input parameter choices, we will use psychophysical and behavioural techniques. These methods are widely used in cognitive neuroscience and the collaboration with colleagues in the School of Psychology will allow us to leverage the best experimental paradigms for this aspect of the work.
Your role in this project will be to develop new techniques and interfaces to state-of-the-art generative models. You will work with artists to explore the use of these models, capturing and analysing the steps artists take in using the tools, and their results. Working with colleagues in psychology, you will gain an understanding of psychophysical and behavioural techniques, providing important insights into the role of AI in art.
Supervisors: Dr Kai Xu (School of Computer Science), Dr Michael Pound (School of Computer Science), Dr Jan Derrfuss (School of Psychology), Dr Denis Schluppeck (School of Psychology).
External Partner (Artist): Richard Ramchurn (AlbinoMosquito)
For further details and to arrange an interview please contact Dr Kai Xu.
Human-Robot Teamwork for Adaptive Motor Rehabilitation
This PhD project aims to develop innovative stroke rehabilitation methodologies with robots. Motor rehabilitation requires patients and therapists to coordinate and adapt their motions and, as such, relies heavily on social and personalised touch interactions. However, modelling the constantly evolving nature of human sensorimotor actions and interactive behaviours is a challenging research problem; hence, state-of-the-art therapies provided by robots lack the proactivity and personalisation needed to handle changing human needs. A robot equipped with intelligent mechanisms to instantaneously infer how well it is interacting with a human would better complement human movements during rehabilitation exercises.
This PhD project will study and quantify human interactions using an immersive haptic setup, with a view to modelling physical human-human coordination and teamwork paradigms for rehabilitation. The aim of the project is to develop novel proactive embodied intelligence mechanisms whereby a robotic device will appropriately and safely work with a patient undergoing stroke therapy, while being aware of and responding to human-robot interaction states.
The project will target technological and psycho-sociological challenges related to AI to investigate the following objectives:
- Develop indicators of user characteristics relevant to stroke (e.g. coordination, dexterity, range of arm motion), as well as general human states (e.g. effort, workload, fatigue), through modelling individual and interactive behaviours. A number of multimodal features will be considered (e.g. body-worn sensors, facial feature tracking, forces, kinematics, gaze estimation).
- Implement teamwork and role allocation strategies for human-AI collaboration, based on agreement on movement patterns in trajectory-following and pulling/pushing tasks, which capture the continuous nature of interactive behaviours. Coordinated motion intentions, goals and human states will be estimated in real time and used to support performance gains.
- Evaluate the outcomes of the proactive AI methodologies in controlled user studies.
The student will be given full access to Cobot Maker Space facilities (https://cobotmakerspace.org/) to work with commercial robots and develop software to implement (semi-) autonomous robotic behaviours. Projects will require working with real robots interacting with real humans in challenging environments. The student must have good experience with programming (C++, machine learning and robotics knowledge is desirable) and a strong interest in conducting user studies.
Supervisors: Dr Ayse Kucukyilmaz (School of Computer Science), Dr Deborah Serrien (School of Psychology).
For further details and to arrange an interview please contact Dr Ayse Kucukyilmaz.
Intelligent sensing and data fusion in a smart environment for human activity recognition to support self-management of long-term conditions
Given the pressure on health and social care resources, there is a growing incentive to explore methods of self-management for long-term conditions. Smart environments, realised through a range of ambient integrated sensors and service robotics, could help people with long-term conditions improve their quality of life. There is emerging research on intelligent data fusion that combines a range of ambient and wearable sensors for modelling and analysing physiological and behavioural data collected over time. This can be used to provide early warning or guidance for patients themselves, or for their healthcare professionals.
The research challenges lie in developing person-specific machine learning models, which are verifiable and robust in the face of noisy real-world sensor data that will change over time, as the person’s condition changes. There is also a gap in knowledge on how best to select and integrate multiple types of sensor data, in a way that preserves the integrity of the different streams of information, while also providing a meaningful representation of the person’s activity.
This research will address the challenges noted above, and also explore the design of interactive systems that can incorporate user input for semantic labelling and modelling, using an active learning approach. Keeping the user in the loop can improve engagement, while offering improved reasoning and confidence in sensor selection and fusion techniques. This research will explore multi-modal approaches for eliciting and integrating user input for semantic labelling, using a combination of supervised, unsupervised and self-learning techniques to address the challenges of noisy data and of reliably tracking changes in long-term conditions over time.
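As a minimal illustration of the active-learning idea (uncertainty sampling, one of several possible query strategies, and not necessarily the one this project would adopt), the system might ask the user to label the sensor windows about which the current model is least confident:

```python
def select_for_labelling(probabilities, k=1):
    """Uncertainty sampling: given the model's predicted probability of
    an activity for each unlabelled sensor window, return the indices
    of the k windows closest to 0.5, i.e. where a user-supplied
    semantic label would be most informative."""
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: abs(probabilities[i] - 0.5))
    return ranked[:k]

# e.g. windows with predictions 0.9, 0.52, 0.1, 0.45 -> ask about 0.52 first
queries = select_for_labelling([0.9, 0.52, 0.1, 0.45], k=2)
```

Querying only the most ambiguous windows keeps the labelling burden on the user low while steering the person-specific model towards the cases it currently handles worst.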
This research will be informed by, and related to, ongoing preclinical work being conducted by members of the interdisciplinary supervisory team, exploring behavioural and physiological changes in response to pregnancy, the ageing process and age-related diseases such as stroke, diabetes and cardiovascular dysfunction.
Prospective PhD applicants are expected to have a degree in Computer Science or Maths with knowledge of Data Science, Machine Learning and AI. This project will require excellent programming skills with evidence of proficient working knowledge in one or more of the following: C++, C, Java, Python, ROS.
Supervisors: Prof Praminda Caleb-Solly (School of Computer Science), Dr Matthew Elmes (School of Biosciences), Prof Claire Gibson (School of Psychology).
Clinical partners: Alison Wildt (National Rehabilitation Centre Clinical Support Manager), Chrishanti Thornton (Extracare Charitable Trust)
For further details and to arrange an interview please contact Prof Praminda Caleb-Solly.
Learning Heuristics for Computer Algebra Systems
Computer Algebra Systems (CASs) play an increasingly important role in pure mathematics research. These are immensely complicated pieces of software that allow the user to represent and handle abstract mathematical objects within a computer. Handling these objects requires expensive computations and involves heuristics that choose the most appropriate computational methods.
Each heuristic attempts to predict which of the available computational methods will be most effective in the current circumstances. The choice does not affect the correctness of the solution – it is essential that the output remains correct – but affects running times. Because of the exponential nature of many of the algorithms used by a CAS, a poor choice can make the difference between a computation finishing in under a second, to requiring years to complete.
Traditionally, such heuristics are designed by humans who study at most a few hundred examples and produce common-sense-based algorithms. They are strongly influenced by the use cases they are familiar with. It has been shown that computers usually outperform humans in such predictions, although it is often a non-trivial task to employ machine learning within computational software due to the complexity of the patterns and the huge variability in use cases.
This project will develop machine learning tools for algorithm selection, which can be embedded in an existing CAS. These tools will replace the existing hard-coded heuristics, and a life-long-learning mechanism will ensure that they develop over time in response to real-world use cases.
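The algorithm-selection idea can be sketched in a few lines of Python. This is a toy illustration only: the two competing "methods", their runtime models, and the feature pair (degree, number of terms) are invented for the example, and a simple nearest-neighbour rule stands in for a real learned model.

```python
# Toy sketch of ML-based algorithm selection for a CAS: learn to pick the
# faster of two hypothetical methods from simple features of the input.
import random

random.seed(0)

def runtime_a(degree, terms):
    # Pretend method A is fast for low-degree inputs.
    return degree ** 2 + 0.1 * terms

def runtime_b(degree, terms):
    # Pretend method B is fast for sparse (few-term) inputs.
    return 0.5 * degree + terms ** 2

# Training data: (features, label of the faster method), gathered offline
# by timing both methods on sample inputs.
data = []
for _ in range(200):
    d, t = random.randint(1, 20), random.randint(1, 20)
    label = "A" if runtime_a(d, t) < runtime_b(d, t) else "B"
    data.append(((d, t), label))

def predict(d, t):
    """A trivial 1-nearest-neighbour 'heuristic' standing in for a real model."""
    nearest = min(data, key=lambda x: (x[0][0] - d) ** 2 + (x[0][1] - t) ** 2)
    return nearest[1]
```

In a real CAS the features would be cheap structural properties of the mathematical objects, the labels would come from timing the competing methods offline, and the life-long-learning mechanism would refresh the training set with real-world use cases.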
This pioneering project will place the student at the forefront of an exciting new area of research: the application of machine learning techniques to exact symbolic computations. This work has enormous potential for collaborative projects with CAS research groups worldwide: for example, the hugely successful SAGE open source project; the DFG-funded Singular and OSCAR groups in Germany; and the Magma group in Australia.
An ideal candidate will have a computer science or engineering background, with an interest in mathematics and artificial intelligence. Good programming skills are essential.
Supervisors: Dr Daniel Karapetyan (School of Computer Science), Dr Alexander Kasprzyk (School of Mathematical Sciences).
For further details and to arrange an interview please contact Dr Daniel Karapetyan.
Long-term Autonomy and Mobile Inspection with Spot
Although autonomous mobile robots and related AI technologies are increasingly being adopted by the service sector, their use in extreme environments under time constraints is still a grand challenge in robotics research. In addition to difficulties in mapping, perception, exploration and navigation in such domains, there are mobility challenges due to environmental features, such as stairs, uneven terrains and risky, obstacle-prone zones. This PhD project will focus on legged locomotion using a quadruped mobile robot (Boston Dynamics Spot) to alleviate such traversal challenges.
Even with increased locomotion and payload capabilities, facilitating long-term robot deployment in naturalistic settings is challenging. Long-term autonomy and recovery from errors are essential capabilities that are still missing in modern robotic applications. To a large extent this is due to the inability of existing AI solutions to detect and adjust their performance in changing, uncontrolled environments. This project will develop novel formalisms for identifying and alleviating errors during navigation in the prolonged use of quadruped mobile robots in a wide range of intended use scenarios.
The objectives of this PhD project are:
To enable incremental learning methodologies to develop context-based policies, not only for navigation but also for error recovery in long-term autonomy. Causal graphical models will be used to encode situation-dependent adjustment formulas based on a range of simulated failure scenarios.
To develop effective human-robot interaction methodologies for efficient management of day-to-day operation of the mobile inspection robot. Human-in-the-loop and teleoperated control methods will be used as the backbone strategy to ensure increasing levels of autonomy during inspection. Novel AI paradigms based on Reinforcement Learning, Learning from Demonstration and Prototypical neural networks will be combined to develop effective human-AI interaction.
To test the developed outputs on a complex industrial use case scenario.
This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science, Robotics, and Mathematical Sciences as well as industrial involvement with the sponsor company, RACE. Applicants are expected to have strong programming skills and be interested in working with embodied intelligent systems, i.e. robots. They will implement technological advancements in AI on robots, including using machine-learning for generating probabilistic models for life-long learning of different situations.
Supervisors: Dr Ayse Kucukyilmaz (School of Computer Science), Dr Yordan Raykov (School of Mathematical Sciences), Dr Wasiur Khuda Bukhsh (School of Mathematical Sciences).
For further details and to arrange an interview please contact Dr Ayse Kucukyilmaz.
Machine Learning for complex 3D data structures
Plant canopy architecture, the arrangement of plant structural material in three dimensions (3D), determines plant function, resource capture and performance. The ability to measure and apply architecture is of great importance. It is, however, governed by a complex set of interacting traits, and whilst the tools for its study have advanced, numerous limitations still prevent key breakthroughs. Generation of accurate, 3D digital models of plant structures is difficult, with many challenges relating to the complexity of plants. An efficient and accurate method for obtaining such traits is urgently required.
This project seeks to bring together computer vision and machine learning in a biological setting, coupling accurate plant model generation with an automatic phenotyping pipeline. While the current state of the art shows that machine learning can be applied to 3D models of simple structures, its application to biological objects is in its infancy. Expanding on existing machine learning techniques, the successful candidate will develop novel neural networks and apply them to complex objects consisting of mesh surfaces to automatically extract plant traits relating to architecture and to repair errors in the underlying mesh representation. The methodology will be evaluated using the µX-ray CT-scanning facilities available at the University’s Hounsfield Facility.
Applicants are expected to have knowledge of programming (Python or C++) plus a strong interest in machine learning and computer vision. Experience of deep learning and an interest in biological systems are desirable, but not essential.
Supervisors: Dr Alexandra Burgess (School of Biosciences), Prof Erik Murchie (School of Biosciences), Prof Tony Pridmore (School of Computer Science).
For further details and to arrange an interview please contact Dr Alexandra Burgess.
Machine learning for first-principles calculation of physical properties
The physical properties of all substances are determined by the interactions between the molecules that make up the substance. The energy surface corresponding to these interactions can be calculated from first principles, in theory allowing physical properties to be derived ab initio from a molecular simulation; that is, by theory alone and without the need for any experiments. Recently we have focussed on applying these techniques to model carbon dioxide properties, such as density and phase separation, for applications in Carbon Capture and Storage. However, there is enormous potential to exploit this approach in a huge range of applications. A significant barrier is the computational cost of calculating the energy surface quickly and repeatedly, as a simulation requires. We have recently developed a machine-learning technique that, by using a small number of precomputed ab-initio calculations as training data, can efficiently calculate the entire energy surface. This project will involve extending the approach to more complicated molecules and testing its ability to predict macroscopic physical properties.
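The surrogate-modelling idea can be illustrated with a small sketch, assuming NumPy is available. A Lennard-Jones potential stands in for the expensive ab-initio energy surface, and a plain radial-basis-function (Gaussian-process-style) interpolant stands in for the actual learning technique; all names and parameter values are illustrative, not the group's method.

```python
# Sketch: fit a cheap surrogate to a handful of "precomputed" energies so
# the full curve can be evaluated quickly inside a simulation loop.
import numpy as np

def lj_energy(r):
    """Stand-in 'expensive' energy calculation (Lennard-Jones, reduced units)."""
    return 4.0 * (r ** -12 - r ** -6)

# A few "precomputed ab-initio" training points along the separation axis.
r_train = np.linspace(0.95, 2.5, 12)
e_train = lj_energy(r_train)

def rbf_kernel(a, b, length=0.2):
    # Squared-exponential kernel between two sets of 1D points.
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))

# Kernel interpolation with a small jitter for numerical stability.
K = rbf_kernel(r_train, r_train) + 1e-8 * np.eye(len(r_train))
alpha = np.linalg.solve(K, e_train)

def predict_energy(r):
    """Cheap surrogate evaluation at arbitrary separations r."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    return rbf_kernel(r, r_train) @ alpha
```

The same structure carries over to higher-dimensional energy surfaces: the expensive calculation is run only at the training points, and every subsequent evaluation inside the simulation uses the surrogate.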
Applicants will be expected to have a numerate background from a first degree in Maths, Chemistry, Physics or similar; and interest in learning about applied machine learning and in developing their coding and data science skills.
Supervisors: Prof Richard Graham (School of Mathematical Sciences), Dr Richard Wheatley (School of Chemistry).
For further details and to arrange an interview please contact Prof Richard Graham.
Machine learning for gravitational wave astronomy: beyond vanilla black holes
Gravitational waves are propagating fluctuations of space and time created by accelerating objects in Einstein's theory of general relativity. For strongly gravitating objects undergoing highly dynamical motion---like the merger of two black holes---the emitted radiation is strong enough to propagate across the universe to Earth, where it is detected by the LIGO-Virgo-KAGRA (LVK) network of gravitational-wave observatories. These signals encode the properties of the source, which we can decipher by comparing to theoretical models. Gravitational waves were first detected in 2015, and since then nearly 100 such events have been observed. Together these have informed our understanding of astrophysics, cosmology, and fundamental physics---ushering in the new era of gravitational wave astronomy.
As detectors are improved, analysis of observational data becomes more challenging: this is due to the complexity of the signal and noise models, the growing rate of detections, and a constant desire for rapid results. To address these challenges, new approaches including machine learning are being explored. In particular, probabilistic deep learning architectures such as normalising flows have demonstrated orders-of-magnitude speed-ups. This opens an opportunity to perform new types of analyses that were previously far too expensive, including searches for gravitational waves from topological defects and phase transitions in the early universe, as well as from black holes in alternative theories of gravity. Such searches are currently limited by the number of models that can be investigated, even though there is large uncertainty in the production of gravitational waves in beyond-Standard-Model physics. This project will develop the relevant machine learning algorithms and use them to analyse real gravitational wave data and probe theories of gravity and cosmology.
Applicants are expected to have a strong background in either physics, astronomy, mathematics, or computer science, as well as experience with Python. Experience with deep learning, PyTorch, and gravitational waves is desirable, but not essential.
Supervisors: Dr Stephen Green (School of Mathematical Sciences), Dr Adam Moss (School of Physics and Astronomy), Prof Thomas Sotiriou (School of Mathematical Sciences).
For further details and to arrange an interview please contact Dr Stephen Green.
Modelling Human-Robot Interaction in Social Spaces
Robotics and related AI technologies are rapidly gaining presence in different areas of our everyday life, e.g. cleaning robots vacuuming floors, warehouse robots carrying pallets, robotic vehicles with cruise control. An exciting use of robotics is social and telepresence robots, which are intended to work in public and social contexts, including educational and museum settings, and to provide support for older adults and populations with accessibility issues.
This PhD project will study and quantify human interactions with commercially available robots in different contexts (participants/robots/places/functions) with a view to creating models of human-robot interaction (HRI) in these contexts. These models will help to improve the design of spaces to optimise human-robot interaction, and will also inform the development of best practice guidelines for robot embodiment, interaction strategies and autonomous behaviour.
In line with this goal, this PhD project aims to model sustainable human-robot interaction strategies for socially capable robots designed to function in public spaces. The project will target technological and psycho-sociological challenges related to AI to investigate the following overarching research questions:
How can social and telepresence robots be used to connect groups of remote humans and mediate the interaction between them?
What kind of personalisation methods and input/output modalities are useful to improve the interaction between humans and robots and enable long term sustainability of the communications?
How do the attitudes and perceptions toward robots change in children and adults over time?
Are these attitudes and perceptions affected by cultures, communities and the interaction environments?
This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science, Robotics, and Psychology. Applicants are expected to develop technological advancements in AI and Interaction Design, including using machine learning to generate personalised user models for children and adults, adaptive motion planning in social environments, and feedback generation. In addition, the successful student will design, conduct and analyse experiments to investigate the socio-psychological effects of these technologies.
Supervisors: Prof. Praminda Caleb-Solly (School of Computer Science), Dr Emily Burdett (School of Psychology), Dr Ayse Kucukyilmaz (School of Computer Science).
For further details and to arrange an interview please contact Prof. Praminda Caleb-Solly.
Multimodal integration during parent-child interactions as a predictor of later child executive functions
The first five years of a child’s life play a critical role in cementing cognitive functions. Several disparate strands of work, however, have shown that the quality and quantity of parent-child interactions are predictive of later cognitive development. From an attentional perspective, periods of joint attention between caregivers and children during interactions are integral to sustained attention, and the nature of these dyadic interactions has important implications for developing a rich vocabulary. More recent neuroimaging evidence has shown that some brain regions might also be synchronised between parents and children during interactions and, critically, that the extent of this synchrony might be associated with home environmental factors such as life stress. Despite the association between the multitude of modalities and processes engaged in parent-child interactions and child cognitive functions, the extent to which these findings can be integrated to inform individual differences in, and critically predict, later cognitive success is unknown. If we can better understand these mechanisms, we can guide caregivers in interacting with young children, and transform their development in the crucial early stages of life.
In this project you will work to better understand how young children and caregivers interact during exploratory play. You will apply state-of-the-art machine learning techniques to analyse videos of interactions, detecting poses, activities and key events. You will explore novel deep learning methods for integrating multi-modal information sources, combining video events with audio data to extract perceptual, verbalisation, affect and brain function information (to name a few) during parent-child interactions and predict cognitive functions in children. Video, questionnaire, experimental and neuroimaging data are already available from an ongoing, longitudinal project assessing neurocognition in children in the School of Psychology at the University of Nottingham. Separately, it might be possible to design and collect more data in the future.
Applicants will be expected to have good working experience with current machine learning and image processing tools and techniques. Prior knowledge of biomedical signal processing is desirable but not essential.
Supervisors: Dr Sobana Wijeakumar (School of Psychology), Dr Joy Egede (School of Computer Science), Dr Michael Pound (School of Computer Science).
For further details and to arrange an interview please contact Dr Sobana Wijeakumar.
Physics-informed Machine Learning for Climate Wind Change
Machine learning simulation strategies for fluid flows have been extensively developed in recent years. Particular attention has been paid to physics-informed deep neural networks in a statistical learning context. Such models combine measurements with physical properties to improve the reconstruction quality, especially when there are not enough velocity measurements. In this project, we will develop novel methods to reconstruct and predict the velocity field of incompressible flows given a finite set of measurements. Specifically, using wind data from the Met Office, we aim to reconstruct the wind over the UK for the last 50 years, predict the main features of UK wind in the upcoming decades, and compare the results against climate change models (CMIP6 and ERA5) based on classical data assimilation. For the spatiotemporal approximation, we will further develop the Physics-informed Spectral Learning (PiSL) framework, which has controllable accuracy. Our computational framework thus combines supervised (wind data) and unsupervised (physical conservation laws) learning techniques. From a mathematical standpoint, we will study the stability and robustness of the method, whereas, from a computer science standpoint, we will develop efficient algorithms for the adaptive construction of the sparse spectral approximation.
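The combination of supervised (data) and unsupervised (physics) terms can be illustrated with a schematic loss function. This is a sketch assuming NumPy; the grid, weights and finite-difference discretisation are illustrative and are not the PiSL framework itself.

```python
# Schematic physics-informed loss for incompressible 2D flow: a data-misfit
# term at sparse measurement points plus a penalty on the divergence of the
# reconstructed velocity field (central differences on a regular grid).
import numpy as np

def physics_informed_loss(u, v, measurements, weight=1.0, h=1.0):
    """u, v: 2D arrays of velocity components on a grid with spacing h.
    measurements: list of (i, j, u_obs, v_obs) sparse observations."""
    # Supervised term: squared misfit at the measured grid points.
    data = sum((u[i, j] - uo) ** 2 + (v[i, j] - vo) ** 2
               for i, j, uo, vo in measurements)
    # Unsupervised term: du/dx + dv/dy should vanish for incompressible flow.
    div = ((u[1:-1, 2:] - u[1:-1, :-2]) / (2 * h)
           + (v[2:, 1:-1] - v[:-2, 1:-1]) / (2 * h))
    return data + weight * np.mean(div ** 2)
```

Minimising such a loss over a parameterised velocity field (a spectral expansion in the PiSL case) drives the reconstruction toward both the measurements and the conservation law.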
We are looking for master's degree holders who are interested in interdisciplinary research projects that revolve around computational methods such as mathematical models, simulation methods, and data science techniques. Applicants are expected to have a numerate background from a first degree in Maths, Physics, Computer Science or Engineering; and an interest in developing novel physics-informed machine learning approaches and in developing their coding and data science skills. It is essential that the applicant is proficient in using either Python or similar high-level programming languages.
Supervisors: Dr Luis Espath (School of Mathematical Sciences), Dr Xin Chen (School of Computer Science)
For further details and to arrange an interview please contact Dr Luis Espath.
Probing the Probe: Classifying Single Atom Spectra using Unsupervised Machine Learning
In the forty or so years since its invention, scanning probe microscopy [1] has revolutionised almost every aspect of condensed matter physics, solid state chemistry, materials science, and, of course, nanoscience. Probe microscopists can now routinely not only image individual atoms and molecules -- with single chemical bond resolution in many cases -- but can position these building blocks of matter with exquisite precision.
There is, however, a frustratingly persistent problem with probe microscopy: the probe itself. Interpretation of SPM data and next-generation atomic/molecular manipulation experiments increasingly necessitate fine control and detailed understanding of the atomistic structure of the scanning probe’s apex. Although some of this understanding can be gleaned from a consideration of atomic resolution images, probe spectroscopy represents a much richer information source. Spectroscopic signals acquired with probe microscopes span a variety of channels arising from the electronic, vibrational, and chemical structure of not just the sample, but the probe itself, and are thus a powerful diagnostic and analysis tool [2].
In this project, you will develop unsupervised machine learning (ML) methods -- involving protocols based on, for example, principal component analysis, k-means clustering [3], deep learning feature extraction, and/or Voronoi segmentation techniques -- to automatically classify spectroscopic data from scanning tunnelling microscopy and atomic force microscopy experiments. The project will involve a combination of computational and experimental work; in addition to developing ML approaches based on the extensive datasets previously acquired by the Nottingham Nanoscience Group, you will also be trained in state-of-the-art ultrahigh vacuum and low temperature SPM so as to carry out atomic resolution imaging, spectroscopy, and manipulation for yourself.
[1] See O. Gordon and P. Moriarty, Mach. Learn.: Sci. Technol. 1 023001 (2020) for a brief introduction to scanning probe microscopy in the context of machine learning and artificial intelligence.
[2] See, for example, S. Kalinin et al., ACS Nano 10, 9068 (2016).
[3] As a recent (and rare) example of this type of unsupervised approach to SPM spectral classification, see P. Wahl et al., Phys. Rev. B 101, 115112 (2020).
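As a flavour of the unsupervised classification step, the sketch below clusters synthetic 1D "spectra" with a small k-means implementation (assuming NumPy). The two Gaussian peak positions stand in for two hypothetical probe states; real data would be STM/AFM point spectra, and a production pipeline would use dimensionality reduction (e.g. PCA) before clustering.

```python
# Toy unsupervised classification of spectra: k-means on noisy 1D curves.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64)

def spectrum(centre):
    """Synthetic spectrum: a Gaussian peak at `centre` plus noise."""
    return np.exp(-((x - centre) ** 2) / 0.02) + 0.05 * rng.standard_normal(x.size)

# 20 spectra from each of two hypothetical tip states.
spectra = np.array([spectrum(-0.4) for _ in range(20)] +
                   [spectrum(0.4) for _ in range(20)])

def kmeans(data, iters=20):
    """Two-cluster Lloyd's algorithm, seeded with the first and last spectra."""
    centroids = data[[0, -1]].copy()
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return labels

labels = kmeans(spectra)
```

With well-separated peaks, the clustering recovers the two underlying probe states without any labelled training data -- the essence of the unsupervised approach this project will develop.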
Supervisors: Prof Philip Moriarty (School of Physics and Astronomy), Dr Michael Pound (School of Computer Science), Dr Brian Kiraly (School of Physics and Astronomy).
For further details and to arrange an interview please contact Prof Philip Moriarty.
Quantifying the risk of serious harms among people prescribed opioids for chronic pain using federated analytics with big healthcare data
Although opioids are beneficial for acute pain and in end-of-life care, their use for chronic pain remains controversial. Opioid prescribing in the UK has greatly increased in the past twenty years. Studies show that opioids have been prescribed too frequently for many patients with chronic pain, incurring substantial healthcare costs. It has also become apparent that long-term use of opioids can be associated with serious harms, including addiction, overdose and death. Electronic health records, big data from the CPRD and UK Biobank, and federated infrastructures that collect big data sources on pain management have created opportunities to understand opioid use nationally and to develop advanced intelligent big data approaches for predicting the risks of opioid use.
Our aim is therefore to develop intelligent federated analytics approaches to assess whether these serious harms can be predicted using routinely collected data from UK primary care electronic health records and pain management data. The project will determine which characteristics of the person, which medicine-taking behaviours and which prescribing patterns are associated with serious harms.
Applicants will be expected to have a background in Statistics, Mathematics, Computer Science, Epidemiology or a relevant discipline with a significant data analysis component. Previous experience of analysis of large data sources ideally in a healthcare background would be an advantage.
Supervisors: Dr Grazziela Figueredo (School of Computer Science), Dr Roger Knaggs (School of Pharmacy).
For further details and to arrange an interview please contact Dr Grazziela Figueredo.
“What the Cell!?” – Building interactive AI approaches to identify cellular features in microscopic images of plants and crops
Characterising plant anatomy (the structure and organisation of cells and tissues) is essential to understanding fundamental processes such as water and nutrient transport, biomechanics and photosynthesis. Anatomical features have been traditionally studied using sectioned material, and imaged in 2D. Recent advances in tomography allow the high-throughput acquisition of 3D datasets, shifting the bottleneck to the extraction of meaningful information from these complex images. The School of Biosciences is generating anatomical images from a range of plant species and at multiple scales (from inside cells up to whole tissues and organs) to answer fundamental and applied questions in plant biology.
This project will use existing data and new datasets, annotated by plant science experts, to develop an interactive segmentation and classification pipeline to enable us to train and use new machine learning methods to extract 3D anatomical information from plant samples. The interactive aspect of this is key – the images are very challenging, and plant species look very different from each other. This PhD will develop new approaches enabling us to quickly build and retrain AI models with new datasets of plant cells. Images will be generated by the novel technique of Laser Ablation Tomography (LAT) and by more traditional confocal laser scanning microscopy. The project will be strongly transdisciplinary, with the successful applicant based in both the Computer Vision Laboratory (CVL) in the School of Computer Science and the Hounsfield Facility in the School of Biosciences.
Skill set: Applicants should have strong programming skills; there is no strict programming language requirement, although Python would be a particular advantage. Some experience in computer vision (e.g. via an undergraduate module or project) is desirable but not essential. An interest in mathematics is helpful, as many computer vision approaches are mathematical in nature. Applicants would benefit from an interest in biology, but do not need biological experience.
Supervisors: Dr Darren Wells (School of Biosciences), Prof Andy French (School of Computer Science), Dr Jonathan Atkinson (School of Biosciences), Dr Valerio Giuffrida (School of Computer Science).
For further details and to arrange an interview please contact Dr Darren Wells.
For further enquiries, please contact Professor Ender Özcan - School of Computer Science