School of Computer Science

Faculty of Science Doctoral Training Centre in Artificial Intelligence Cohorts and Projects

 

UoN-cohort-720

 

2024

Computer Vision-Based Monitoring of Equine Welfare and Wellness

We are seeking a dynamic and passionate PhD student to join our team and help drive enhancements to equine health and welfare through AI-based monitoring. Through this PhD, you will have the unique opportunity to gain an exceptional skillset through interdisciplinary collaboration between the UoN School of Veterinary Medicine and Science, the School of Computer Science, and industry partner Vet Vision AI, helping to shape the future of animal health and welfare. 

Numerous welfare issues can affect horses when stabled, yet many of these are preventable. Current processes to improve welfare are extremely limited, often relying on owner “know-how” rather than objective and continuous monitoring. Using AI to monitor welfare outcomes automatically will provide a step change in animal health and welfare monitoring, empowering vets and horse owners to revolutionise equine welfare. 

This PhD aims to develop and deploy cutting-edge computer vision algorithms alongside veterinary insights to monitor and improve equine welfare outcomes accurately and automatically. Successful applicants will be based in the School of Veterinary Medicine and Science, within a close-knit team environment in which mentoring, collaboration and idea sharing are strongly promoted, with input from world-leading computer scientists from the School of Computer Science. They will also work directly with experienced computer vision developers at partner company Vet Vision AI, a spinout from the School of Veterinary Medicine and Science on a mission to revolutionise animal health and welfare by combining veterinary insights with computer vision technology. 

This PhD will combine world-leading veterinary expertise and equine domain knowledge with advanced skills in computer vision and machine learning, translating this knowledge into cutting-edge solutions that will help owners and veterinary surgeons improve the lives of millions of animals worldwide. 

Supervisors: Dr Robert Hyde (School of Veterinary Medicine and Science), Prof Sarah Freeman (School of Veterinary Medicine and Science), Dr Katie Burrel (School of Veterinary Medicine and Science), Dr Zhun Zhong (School of Computer Science), Prof Andrew French (School of Computer Science) 

Postgraduate Researcher: Martyna Jankowska (School of Veterinary Medicine and Science)

Computer vision for dynamic materials analysis

Electronic materials development is reliant upon the use of computational chemistry tools for elucidating structure-property relationships on the atomic scale. Molecular dynamics simulations can capture macroscopic material changes involving over 100 million atoms, such as catastrophic electrical breakdown, whilst retaining information down to the level of individual atoms. As electronic device miniaturisation has reached the nanoscale, using dynamic simulations as a theoretical microscope will be vital for overcoming the materials bottleneck we are facing at the end of Moore’s Law. 

One of the challenges in calculations of this scale is the analysis of structural changes including crystallinity, grain boundaries and nanodomain formation. These features are critical in determining properties from electronic and thermal conductivity to optical processes and response to electric fields. However, their complex dynamical behaviour involves thousands of atoms moving over hundreds of thousands of timesteps, making them unsuited to traditional modes of analysis developed for regular crystalline materials. 

This project will take advantage of AI tools in computer vision as a new approach to analyse structural features in dynamical materials simulations. These have advanced to enable analysis of images of self-similar repeating patterns, which will be highly applicable to subtle changes in atomic configurations. Deep learning methods will be used to develop classification, segmentation, and regression models to identify structural transitions, anomaly detection to identify point defects, and edge detection for revealing grain boundaries. These will be applied to the most pressing materials chemistry problems hindering electronic device miniaturisation. 
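As a toy illustration of the edge-detection idea, even a simple gradient filter can locate the boundary between two synthetic "domains" in an idealised density map; the deep learning models described above generalise this to noisy, self-similar atomic configurations. Everything below (the image, the threshold) is illustrative, not project code.

```python
import numpy as np

# Hypothetical sketch: find a grain boundary in a simulated atomic-density
# map with a gradient (Sobel-style) filter. A full pipeline would use a
# learned segmentation or edge-detection model instead.

def gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(float))  # per-axis finite differences
    return np.hypot(gx, gy)

# Synthetic "micrograph": two crystalline domains with different densities,
# meeting at a vertical boundary at column 8.
img = np.zeros((16, 16))
img[:, 8:] = 1.0

edges = gradient_magnitude(img)
boundary_cols = np.unique(np.argwhere(edges > 0.4)[:, 1])
print(boundary_cols)  # the columns adjacent to the domain boundary
```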

Potential applicants are expected to have some background in machine learning and/or computational materials science/chemistry, in addition to some experience in (and a desire to learn) scientific programming in Python with tools such as scikit-learn and relevant deep learning libraries such as PyTorch. 

Supervisors: Dr Katherine Inzani (School of Chemistry), Dr Valerio Giuffrida (School of Computer Science), Dr Julie Greensmith (School of Computer Science)

Postgraduate Researcher: Saffron Luxford (School of Chemistry)

AI-based decoding of evoked neural activity to study bilingual language processing

There are more people in the world who speak two or more languages fluently than people who speak only one. How are the languages of these bilinguals represented and processed in the brain? Traditionally, neuroimaging and behavioural techniques have been used to develop theories about language storage and processing. Recent studies have shown that insight into how multiple languages are represented in the brain can be gained by decoding language from neural activity. A particularly interesting approach is cross-language decoding, in which a decoder trained on brain activity evoked by words in one language is used to decode words in another language. Whereas most studies in the literature have used functional magnetic resonance imaging (fMRI) data for decoding language, recent studies have shown that decoding language from evoked non-invasive electroencephalography (EEG) data is also possible. However, the decoding accuracies obtained have generally been low. 
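To make the cross-language decoding idea concrete, here is a minimal sketch with entirely synthetic data: a nearest-centroid decoder is trained on feature vectors standing in for EEG responses to words in language A, then evaluated on trials standing in for their translation equivalents in language B. Above-chance transfer would suggest a shared semantic representation; all numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_features = 40, 16
concepts = np.arange(4)  # e.g. "dog", "house", "run", "red"

# Shared semantic patterns plus language-specific trial noise.
patterns = rng.normal(size=(4, n_features))

def simulate(noise):
    labels = np.repeat(concepts, n_trials // 4)
    X = patterns[labels] + noise * rng.normal(size=(n_trials, n_features))
    return X, labels

X_a, y_a = simulate(0.5)   # language A (training)
X_b, y_b = simulate(0.5)   # language B (testing)

# Nearest-centroid "decoder": assign each language-B trial to the closest
# class centroid learned from language-A trials.
centroids = np.stack([X_a[y_a == c].mean(axis=0) for c in concepts])
pred = np.argmin(((X_b[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y_b).mean()
print(f"cross-language decoding accuracy: {accuracy:.2f}")  # chance = 0.25
```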

This project aims to study the time course of the activation of within- and between-language representations in the bilingual brain by decoding evoked EEG activity across multiple modalities (visual and auditory). In particular, the project will focus on semantic representations. The project will involve designing and conducting EEG experiments with bilinguals to obtain data for decoding. Furthermore, the project will make use of state-of-the-art machine learning techniques and advanced large language models to improve decoding accuracy. 

Applicants are expected to have a degree in Psychology, Mathematics, Computer Science, Physics, or related areas, and have knowledge of programming and a strong interest in machine learning, cognitive neuroscience, and psycholinguistics. Experience with EEG and experimental techniques is desirable, but not essential. 

Supervisors: Dr Walter van Heuven (School of Psychology), Dr Matias Ison (School of Psychology), Dr Ruediger Thul (School of Mathematical Sciences) ruediger.thul@nottingham.ac.uk.

Postgraduate Researcher: Tobias Meeks (School of Psychology)

Machine learning-assisted growth of atomically thin semiconductors

Development of quantum systems and understanding of their complex behaviour - from quantum tunnelling to entanglement - have led to revolutionary discoveries in science. Quantum science has great potential, but future progress requires a shift towards transformative materials and advanced fabrication methods. This project will use a bespoke cluster (EPI2SEM) for EPitaxial growth and In-situ analysis of two-dimensional semiconductors (2DSEM) to create the high-purity materials and research tools required to advance the field beyond the present state-of-the-art. By using computational modelling and machine learning (ML), we aim to realise an artificial intelligence semiconductor-synthesis control system. Successful growth of 2DSEM demands strict control of many conditions, including temperature, pressure, atom fluxes and their ratio, and growth rate. 

We will explore the thermodynamics and growth kinetics, including reaction pathways, surface migration, and reaction rates, of 2DSEM on specific substrate surfaces. The proposed approach of “cooking” (i.e. defining recipes for the growth) and “tasting” (i.e. growth and measurement) will be applied to the fabrication of atomically thin semiconductors with ultra-high electron mobilities for nanoelectronics. Advanced computational and ML simulations (with Prof. Elena Besley, School of Chemistry), combined with complementary experimental tests (with Prof. Amalia Patanè, School of Physics), will provide a powerful toolkit for unveiling the real-time growth mechanism. 

In this project, machine learning (ML) methods will be utilised to predict thin-film growth of semiconductor materials. The ML predictions will be tested in EPI2SEM (School of Physics), a bespoke facility unique in the world, and by our industrial partner Paragraf (Greater Cambridge Area). Synthesis-by-design (with Prof. Amalia Patanè, School of Physics) guided by computation (with Prof. Elena Besley, School of Chemistry) will reduce the need for cost- and time-consuming trial-and-error experiments. 

Applicants will be expected to have a numerate background from a first degree in Maths, Chemistry, Physics or similar, a strong interest in applied machine learning, and a desire to develop new coding and data science skills. 

Supervisors: Prof Elena Besley (School of Chemistry), Prof Amalia Patanè (School of Physics). 

Postgraduate Researcher: Rayen Ben Ismail (School of Physics)

"Learning by Doing": A Cross-Disciplinary Exploration of Human Motor Learning, AI, and Robot Learning

AI and Robotics technologies are becoming pervasive in everyday human life, e.g. telepresence robots, robot vacuums, humanoids in factories, etc. To operate in and interact with humans and made-for-human environments, robot learning plays a vital role, allowing robots to be more easily extended and adapted to novel situations. An example of applying AI and machine learning to robotics is learning-by-demonstration, a supervised learning technique in which robots acquire new skills by learning to imitate an expert. 
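In its simplest form, learning-by-demonstration reduces to behavioural cloning: fit a policy that maps observed states to the expert's actions. The sketch below (entirely synthetic and hypothetical) recovers a simple linear expert controller u = K s from demonstration pairs by least squares; real systems use richer policies and state representations.

```python
import numpy as np

rng = np.random.default_rng(3)

K_expert = np.array([[0.8, -0.2], [0.1, 0.5]])   # unknown expert gains
states = rng.normal(size=(100, 2))               # demonstrated states
# Expert actions with a little observation noise.
actions = states @ K_expert.T + 0.01 * rng.normal(size=(100, 2))

# Behavioural cloning as a least-squares fit of the policy matrix.
K_learned, *_ = np.linalg.lstsq(states, actions, rcond=None)
print(np.round(K_learned.T, 2))   # approximately K_expert
```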

The research challenge lies in developing robot learning models by modelling the different stages of motor learning in humans, i.e., modelling humans as naïve learners rather than experts. Fundamental research in human motor learning (HML), the process by which humans acquire and refine motor skills through practice, feedback, and adaptation, proves critical in this regard. This can allow the robot to mimic human motor control mechanisms, such as joint flexibility and compliance, more naturally, with the benefit of refining the acquired skill over time with input from humans. Example applications include an assistive robotic arm helping stroke patients guide their movements and providing corrective cues, or therapy patients learning to adjust their motor patterns for better outcomes based on the AI models. 

This research will address the challenge noted, focussing on three measurable outcomes: (i) novel research into AI to model HML in subjects, including adults and children, especially with the goal of mapping the HML model to robot motion; (ii) implementation of the developed HML models in a robot learning architecture (in robotic manipulation or navigation), evaluated against novel benchmarking metrics (to be investigated) for the applicability and utility of the developed research in the real world; and (iii) an annotated and labelled dataset (videos, sensor data, etc.) of human as well as robot motions, made publicly available for the benefit of the scientific communities in psychology, AI, and robotics. 

Prospective PhD applicants should have a degree in Computer Science or Robotics (or Psychology with an experimental focus), with knowledge of Machine Learning, Deep Learning, AI, and (preferably) Robotics. This project will require excellent programming skills, with evidence of proficient working knowledge in one or more of the following: C++, Python, ROS. 

Supervisors: Dr Nikhil Deshpande (School of Computer Science), Dr Deborah Serrien (School of Psychology) deborah.serrien@nottingham.ac.uk.

Postgraduate Researcher: Zakaria Taghi (School of Computer Science)

Training a Nanorobot to Build a Molecule

The invention of the scanning probe microscope (SPM) in the early 1980s revolutionised the science of the ultrasmall. Atoms, molecules, and nanostructures are now routinely probed with a spatial resolution all the way down to the single chemical bond limit. Measurements of quantum phenomena that, as recently as a few decades ago, were thought to be so far beyond our capabilities that they would forever be only gedankenexperiments have been made possible, in spectacular fashion, by the probe microscope.

Since the groundbreaking work of IBM Research Labs thirty years ago, a key focus of scanning probe microscopy has been the controlled manipulation of single atoms and molecules to form what have been described as designer states of matter. In this context, an SPM is better thought of as a robot capable of targeting and positioning single atoms, rather than a microscope alone.

But despite this unique ability to manipulate matter on the smallest scales, SPM has a big problem: it’s painfully slow to position single atoms, not least because the human operator represents a major bottleneck in the process. This PhD project represents the next stage in atom manipulation: the integration of machine learning with probe microscopy to automate the assembly of matter from its constituent atomic and molecular building blocks. Building on recent work in the Nanoscience Group in the School of Physics & Astronomy, and in collaboration with the School of Computer Science, you will develop algorithms, architectures, and protocols to automate atomic manipulation, with the ultimate objective of building a molecule, an atom at a time, without human intervention.

Supervisors: Prof Philip Moriarty (School of Physics & Astronomy), Dr Michael Pound (School of Computer Science), Dr. Brian Kiraly (School of Physics & Astronomy). Industry partners: Unisoku.

Postgraduate Researcher: Georgina Locke (Physics)

 

2023

Artificial gene regulatory networks as a new AI paradigm

Gene regulatory networks (GRNs) are the primary means by which living cells are programmed to respond to their environment in real time. They allow a population of genetically identical cells to behave differently, for example the way the cells in our eyes behave differently from the cells in our skin. GRNs evolve in a specific way, allowing them to learn new responses or behaviours from previous patterns without losing existing knowledge. Artificial GRNs (aGRNs), that is, computer implementations of GRNs, have been used to help understand the biology of GRNs. However, they have not been considered as a computational paradigm in their own right. 
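A minimal example of the switch-like behaviour GRNs exhibit is the classic two-gene toggle switch, sketched here as a Boolean network. This is an illustrative simplification only; the project's aGRN formulations may be deterministic or stochastic continuous models.

```python
# Two genes that each repress the other: the same "genome" settles into
# different stable states depending on its starting context.

def step(state):
    a, b = state
    return (not b, not a)     # each gene is on only if its repressor is off

def run(state, n=10):
    for _ in range(n):
        state = step(state)
    return state

print(run((True, False)))   # remains (True, False): gene A "wins"
print(run((False, True)))   # remains (False, True): gene B "wins"
```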

The aim of this project is to establish aGRNs as a computational AI paradigm. It will involve the implementation of aGRNs using both deterministic and stochastic formulations, and the identification and testing of problem types for which this paradigm is likely to be especially valuable. These include systems that need to switch rapidly between different contexts, and systems that need to transfer learning from one domain to another. These are both important challenges for improving the generalisation of AI systems. 

Applicants are expected to have strong computer programming skills and a broad knowledge of artificial intelligence. Some biosciences background would also be beneficial to help understand the concepts, but this could be learned as part of the project if needed. 

Supervisors: Dr Colin Johnson (School of Computer Science), Prof Dov Stekel (School of Biosciences).

Postgraduate Researcher: Catarina Gomes da Costa (School of Computer Science)

Artificial Synapses with Dual Opto-Electronic control for Ultra-Fast Neuromorphic Computer Vision

Memristors (or resistive memory) are a new generation of electronic devices that directly emulate the chemical and electrical switching of biological synapses, i.e., the key learning and memory components of the human brain. Memristors also have the advantage of ultra-fast switching, low-power consumption, and nanoscale size, and therefore have the potential to usher in a whole new era of artificial intelligence, devices, and applications. The aim of this project is to develop new state-of-the-art memristor devices that can switch optically as well as electronically, thereby enabling these “optically switching synapses” to be used as “in-memory” computing elements in neuromorphic circuits for computer vision applications. This PhD project will develop new optically active materials, based on semiconducting nanowires/nanotubes coupled with metal nanoclusters and/or photoactive molecules, with enhanced light sensing capabilities that are suitable for integrating with memristor materials and devices.  You will learn materials synthesis and deposition techniques, nanoscale device fabrication as well as advanced electrical and optical characterization methods. 

Supervisors: Dr Neil Kemp (School of Physics and Astronomy), Professor Andrei Khlobystov (School of Chemistry), Dr Jesum Alves Fernandes (School of Chemistry).

Postgraduate Researcher: Thomas Braben (School of Physics and Astronomy)

Enhanced artificial intelligence for retrosynthesis planning

In this PhD project, we will develop innovative enhancements of the Monte Carlo tree search (MCTS) algorithm for the problem of retrosynthesis. Retrosynthesis is the process of repeatedly breaking down a ‘target’ molecule using valid chemical reactions to obtain a set of simpler starting molecules, together with the reaction routes that lead back to the initial target. MCTS is an efficient search algorithm, most notably known for its use in Google DeepMind’s AlphaGo. The algorithms developed in the project will be implemented in our ai4green electronic lab notebook, which is available as a web-based application (http://ai4green.app) and is the focus of a major ongoing project supported by the Royal Academy of Engineering. Improvements to the MCTS algorithm in the context of retrosynthesis will help chemists to make molecules in a greener and more sustainable fashion, by identifying routes with fewer steps or involving more benign reagents. 
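For readers unfamiliar with MCTS, the sketch below shows its core selection rule (UCB1) on a toy one-step "disconnection" choice. The moves and rewards are invented stand-ins; a real retrosynthesis search would expand multi-step routes scored by chemical knowledge (reaction templates, learned policies, reagent cost).

```python
import math, random

class Node:
    def __init__(self, move=None):
        self.move, self.children = move, []
        self.visits, self.value = 0, 0.0

def ucb1(parent, child, c=1.4):
    # Balance exploitation (mean reward) against exploration.
    if child.visits == 0:
        return float("inf")
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def rollout(move):
    # Stand-in reward: pretend disconnection "b" leads to cheaper precursors.
    base = {"a": 0.2, "b": 0.8, "c": 0.4}[move]
    return min(1.0, max(0.0, base + random.gauss(0, 0.05)))

random.seed(1)
root = Node()
root.children = [Node(m) for m in "abc"]

for _ in range(1000):
    root.visits += 1
    child = max(root.children, key=lambda ch: ucb1(root, ch))
    reward = rollout(child.move)   # simulate one synthesis route
    child.visits += 1              # back-propagate the statistics
    child.value += reward

best = max(root.children, key=lambda ch: ch.visits)
print("most promising disconnection:", best.move)
```

The visit counts concentrate on the disconnection with the best average reward while still occasionally sampling the alternatives, which is what makes MCTS effective in large search spaces.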

Applicants should have, or expect to achieve, at least a 2:1 Honours degree (or equivalent from other countries) in Chemistry, Computer Science, or a related subject. An MChem/MSc 4-year integrated Masters, a BSc + MSc, or a BSc with substantial research experience will be highly advantageous. Experience in computer programming will also be beneficial. 

Supervisors: Prof Jonathan Hirst (School of Chemistry), Dr Kristian Spoerer (School of Computer Science).   

Postgraduate Researcher: Ton Blackshaw (School of Chemistry)

Digital twins for quantum microscopy

Superresolution microscopy is a rapidly developing field that provides the means to study biological and nanoscale structures with unprecedented detail. One of the most promising techniques for superresolution microscopy is spatial mode demultiplexing (SpaDe), which involves collecting information about the structure of the sample encoded in a suitable basis of spatial modes of light. This has been shown to enable unprecedented resolution enhancements compared to conventional direct imaging and has the potential to push microscopy towards the ultimate precision limits established by quantum mechanics. However, optimising the measurement setup and image reconstruction for SpaDe microscopy and surface analysis on real samples can be challenging and time-consuming. 

The objective of this project is to develop a software framework for comprehensive simulation of a quantum superresolution microscope -- a digital twin -- to benchmark different experimental approaches and investigate the resolution improvements enabled by SpaDe in practical settings. Digital twins have been used successfully for task-specific uncertainty evaluation in surface and dimensional metrology, but their application in optical imaging remains largely unexplored. 

The digital twin will be powered by physical models derived from first principles, including surface-scattering models, three-dimensional imaging theory, spatial mode demultiplexing, photon counting, and error-generation models. By incorporating the influences of various error sources (both intrinsic and environmental) via appropriate stochastic modelling before reconstructing the image, the virtual instrument will simulate the response of the real instrument under tunable conditions. The virtual instrument will also be used for uncertainty evaluation. This process will include the determination of relevant ISO metrological characteristics, such as noise, resolution, and fidelity, which will be important for validating the emerging SpaDe imaging technology. 
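The stochastic error modelling can be pictured as simple Monte Carlo propagation: sample the error sources, push each sample through the instrument model, and read the uncertainty off the distribution of simulated measurements. The "instrument" below is a deliberately trivial, hypothetical stand-in for the full digital twin.

```python
import numpy as np

rng = np.random.default_rng(7)

def instrument(true_separation, jitter):
    # Stand-in "measurement": the true value corrupted by alignment jitter.
    return true_separation + jitter * rng.normal()

# Propagate the error source through the model many times and summarise.
readings = np.array([instrument(1.0, 0.05) for _ in range(20_000)])
print(readings.mean(), readings.std())  # close to 1.0 and 0.05
```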

This project will involve a combination of theoretical and computational work, as well as interdisciplinary collaboration with experts in the fields of quantum physics, material science, and engineering. The successful candidate will have the opportunity to work with cutting-edge technology and contribute to the advancement of both digital twin frameworks and superresolution microscopy. 

Supervisors: Prof Gerardo Adesso (School of Mathematical Sciences), Dr Katherine Inzani (School of Chemistry). 

Postgraduate Researcher: Quentin Muller (School of Mathematical Sciences)

Intelligent sensing and data fusion in a smart environment for human activity recognition to support self-management of long-term conditions

Given the pressure on health and social care resources, there is a growing incentive to explore methods of self-management for long-term conditions. Smart environments, realised through a range of ambient integrated sensors and service robotics, could help people with long-term conditions improve their quality of life. There is emerging research on intelligent data fusion that combines a range of ambient and wearable sensors for modelling and analysing physiological and behavioural data collected over time. This can be used to provide early warning or guidance for the patients themselves, or for their healthcare professionals.  

The research challenges lie in developing person-specific machine learning models, which are verifiable and robust in the face of noisy real-world sensor data that will change over time, as the person’s condition changes. There is also a gap in knowledge on how best to select and integrate multiple types of sensor data, in a way that preserves the integrity of the different streams of information, while also providing a meaningful representation of the person’s activity.  

This research will address the challenges noted, and also explore the design of interactive systems that can incorporate user input for semantic labelling and modelling, using an active learning approach. Keeping the user in the loop can improve engagement, while offering improved reasoning and confidence in sensor selection and fusion techniques. This research will explore multi-modal approaches for eliciting and integrating user input for semantic labelling, using a combination of supervised, unsupervised and self-learning techniques to address the challenges of noisy data and of reliably tracking changes in long-term conditions over time.  

This research will be informed by, and related to, ongoing preclinical work being conducted by members of the interdisciplinary supervisory team, exploring behavioural and physiological changes in response to pregnancy, the ageing process and age-related diseases such as stroke, diabetes and cardiovascular dysfunction. 

Prospective PhD applicants are expected to have a degree in Computer Science or Maths with knowledge of Data Science, Machine Learning and AI. This project will require excellent programming skills with evidence of proficient working knowledge in one or more of the following: C++, C, Java, Python, ROS. 

Supervisors: Prof Praminda Caleb-Solly (School of Computer Science), Dr Matthew Elmes (School of Biosciences), Prof Claire Gibson (School of Psychology).  

Clinical partners: Alison Wildt (National Rehabilitation Centre Clinical Support Manager), Chrishanti Thornton (Extracare Charitable Trust)  

Postgraduate Researcher: Gabriel Leach (School of Computer Science)

Machine learning for gravitational wave astronomy: beyond vanilla black holes

Gravitational waves are propagating fluctuations of space and time created by accelerating objects in Einstein's theory of general relativity. For strongly gravitating objects undergoing highly dynamical motion---like the merger of two black holes---the emitted radiation is strong enough to propagate across the universe to Earth, where it is detected by the LIGO-Virgo-KAGRA (LVK) network of gravitational-wave observatories. These signals encode the properties of the source, which we can decipher by comparing to theoretical models. Gravitational waves were first detected in 2015, and since then nearly 100 such events have been observed. Together these have informed our understanding of astrophysics, cosmology, and fundamental physics---ushering in the new era of gravitational wave astronomy. 

As detectors are improved, analysis of observational data becomes more challenging: this is due to the complexity of the signal and noise models, the growing rate of detections, and a constant desire for rapid results. To address these challenges, new approaches including machine learning are being explored. In particular, probabilistic deep learning architectures such as normalising flows have demonstrated orders-of-magnitude speed-ups. This opens an opportunity to perform new types of analyses that were previously far too expensive, including searches for gravitational waves from topological defects and phase transitions in the early universe, as well as from black holes in alternative theories of gravity. Such searches are currently limited by the number of models that can be investigated, while there is large uncertainty in how gravitational waves are produced in beyond-standard-model physics. This project will develop the relevant machine learning algorithms and use them to analyse real gravitational wave data and probe theories of gravity and cosmology. 
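At the heart of normalising flows is the change-of-variables formula: an invertible map turns a simple base density into the model density, with the log-density corrected by the log-determinant of the Jacobian of the inverse map, and sampling is a single forward pass (the source of the speed-up over methods that must integrate numerically). The one-dimensional affine "flow" below is a minimal illustration of that formula, not a trained model; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 2.0, 0.5          # parameters of an affine "flow" x = mu + sigma*z

def log_prob(x):
    z = (x - mu) / sigma                          # inverse transform
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))  # standard-normal base density
    return log_base - np.log(sigma)               # + log|det dz/dx|

# Sampling is one forward pass through the map.
samples = mu + sigma * rng.normal(size=100_000)
print(samples.mean(), samples.std())  # close to mu and sigma
```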

Applicants are expected to have a strong background in either physics, astronomy, mathematics, or computer science, as well as experience with Python. Experience with deep learning, PyTorch, and gravitational waves is desirable, but not essential. 

Supervisors: Dr Stephen Green (School of Mathematical Sciences), Dr Adam Moss (School of Physics and Astronomy), Prof Thomas Sotiriou (School of Mathematical Sciences).

Postgraduate Researcher: Alexander Roussopoulos (School of Mathematical Sciences)

Modelling Human-Robot Interaction in Social Spaces

Robotics and related AI technologies are rapidly gaining presence in different areas of our everyday life, e.g. cleaning robots vacuuming floors, warehouse robots carrying pallets, robotic vehicles with cruise control. An exciting use of robotics is social and telepresence robots, which are intended to work in public and social contexts, including educational and museum settings, and to provide support for older adults and populations with accessibility issues. 

This PhD project will study and quantify human interactions with commercially available robots in different contexts (participants/robots/places/functions) with a view to creating models of human-robot interaction (HRI) in these contexts. These models will help to improve design of spaces optimising human-robot interaction and also inform the development of best practice guidelines for robot embodiment, interaction strategies and autonomous behaviour. 

In line with this goal, this PhD project aims to model sustainable human-robot interaction strategies for socially capable robots designed to function in public spaces. The project will target technological and psycho-sociological challenges related to AI to investigate the following overarching research questions: 

  1. How can social and telepresence robots be used to connect groups of remote humans and mediate the interaction between them? 

  2. What kind of personalisation methods and input/output modalities are useful to improve the interaction between humans and robots and enable long-term sustainability of the communications? 

  3. How do the attitudes and perceptions toward robots change in children and adults over time? 

  4. Are these attitudes and perceptions affected by cultures, communities and the interaction environments?  

This PhD project will benefit from a strong multidisciplinary approach at the interface of Computer Science, Robotics, and Psychology. Applicants are expected to develop technological advancements in AI and Interaction Design, including using machine learning to generate personalised user models for children and adults, adaptive motion planning in social environments, and feedback generation. In addition, the successful student will design, conduct and analyse experiments to investigate the socio-psychological effects of the technologies. 

Supervisors: Prof. Praminda Caleb-Solly (School of Computer Science), Dr Emily Burdett (School of Psychology), Dr Ayse Kucukyilmaz (School of Computer Science).

Postgraduate Researcher: Adam Biggs (School of Computer Science)

 

2022

A Generic Image Segmentation Platform for Novel Feature Exploration in Multimodal MR Imaging using Minimally Supervised Machine Learning

This PhD will develop novel AI approaches to image segmentation to explore meaningful clinical features in multimodal Magnetic Resonance Imaging (MRI).  We will drive this by tackling several important imaging scenarios characterised by small datasets and challenging image quality. 

Often in advanced MRI we can visually identify a feature of interest and segment it manually using a-priori knowledge about anatomical features and expected MR contrast. However, this can be extremely difficult and time consuming, particularly when we are interested in difficult-to-segment features, for instance very small objects with low contrast, or objects with ill-defined anatomy or variable image contrast. For example, layer 4 of the cortex is a very fine continuous layer that runs through much of the grey matter of the brain and is pivotal to furthering our understanding of brain function and dysfunction. However, it is hard to distinguish reliably in all subjects, and there are currently no methods to identify it automatically from MR images. 

The aim of this PhD project is to:

  • Develop machine learning methods to segment the tissue of interest with minimal high-level supervision (e.g. shape, topology, connectivity).
  • Supplement the machine learning results with model-based approaches where appropriate, e.g. to ensure the results maintain a complete surface rather than broken patches.
  • Optimise the multimodal MRI acquisition, informed by initial AI results, to maximise the efficiency of the automatic segmentation in terms of image quality and data harmonisation. 
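One simple model-based way to enforce a connectivity prior is post-processing: keep only the largest connected component of a binary segmentation mask, discarding isolated broken patches. The sketch below (a hypothetical illustration; a real pipeline might instead build such priors into the model or a topology-aware loss) uses an iterative flood fill.

```python
import numpy as np

def largest_component(mask):
    """Return a boolean mask keeping only the largest 4-connected component."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    sizes, current = {}, 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        stack = [seed]
        while stack:                      # iterative flood fill
            r, c = stack.pop()
            sizes[current] = sizes.get(current, 0) + 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = current
                    stack.append((rr, cc))
    keep = max(sizes, key=sizes.get)
    return labels == keep

# A toy "layer" with a spurious disconnected blob in the corner.
mask = np.zeros((6, 6), dtype=bool)
mask[2, :] = True          # the continuous layer
mask[5, 5] = True          # broken patch to remove
cleaned = largest_component(mask)
print(int(cleaned.sum()))  # 6: only the continuous layer survives
```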

Supervisors: Dr Xin Chen (School of Computer Science), Dr Karen Mullinger (School of Physics and Astronomy), Dr Caroline Hoad (School of Physics and Astronomy), Prof Andrew French (School of Computer Science), Prof Penny Gowland (Physics and Astronomy). 

Postgraduate Researcher: Stephen Lloyd-Brown (School of Computer Science)

Age-related changes in brain connectivity and cognition studied through machine learning

This PhD project aims to study how the brain/cognition relationship changes across lifespan, using an open data archive containing neuroimaging data from 600+ participants (Cambridge Centre for Ageing Neuroscience). The project will combine information obtained from multiple brain imaging methods:

1. Functional MRI (fMRI), which measures brain activity indirectly through its effects on blood vessels;

2. Magnetoencephalography (MEG), which measures the magnetic field around the head generated by brain activity, as a direct measure of neural activation;

3. Diffusion-weighted MRI (DWI), which measures the anatomical “wiring” of the brain; and

4. Structural MRI (sMRI), which measures shape-related properties of cortical folding and subcortical structure.

To combine information from the various neuroimaging modalities, several approaches will be explored, among them methods based on recent theoretical advances in complex networks that employ multilayer networks to describe multiple interacting network structures simultaneously. Machine learning methods, including neural networks and relevance vector machines, will be used to determine which aspects of brain networks (across all imaging modalities studied) best predict individual cognitive ability. Finally, this approach will be used to test existing theories of how brain networks reorganise with age, with hypotheses about age-related changes in brain lateralisation and about shifts between anterior and posterior brain activation.
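
To give a flavour of the "which network aspects best predict cognition" question, the sketch below builds a fully synthetic cohort (all data and numbers are made up, not Cam-CAN data) with two network "layers" per subject, computes a simple graph metric from each layer, and checks which layer's metric tracks a cognitive score.

```python
# Illustrative sketch with synthetic data: per subject, two network "layers"
# (standing in for fMRI- and DWI-derived graphs); we ask which layer's
# mean degree best correlates with a cognitive score.
import random

def random_layer(p, n=10):
    """Random undirected graph over n nodes: each edge present with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def mean_degree(adj):
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Toy construction: the first layer's connection density tracks the cognitive
# score by design; the second layer is pure noise.
cohort = []
for _ in range(50):
    score = random.uniform(0.0, 1.0)
    cohort.append((random_layer(0.2 + 0.6 * score), random_layer(0.5), score))

scores = [c[2] for c in cohort]
fmri_r = pearson([mean_degree(c[0]) for c in cohort], scores)
dwi_r = pearson([mean_degree(c[1]) for c in cohort], scores)
print(f"layer-1 correlation r={fmri_r:.2f}, layer-2 correlation r={dwi_r:.2f}")
```

The project itself would use richer multilayer-network descriptors and proper predictive models (neural networks, relevance vector machines) rather than a single correlation, but the comparison-of-features logic is the same.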

Applicants are expected to have basic Matlab or Python programming skills, and a quantitative background (physics, mathematics, computer science or engineering) is desirable.

Supervisors: Dr Christopher Madan (School of Psychology), Dr Reuben O'Dea (School of Mathematics), Dr Andrew Reid (School of Psychology), Dr Martin Schürmann (School of Psychology).

Postgraduate Researcher: Kinga Korek (School of Psychology)

AI for Optimisation of Chemical Reactions

This is an interdisciplinary project cutting across Chemistry and Computer Science, with the practical goal of maximising the efficiencies and selectivities of chemical reactions using AI optimisation and machine learning techniques. Traditionally, chemists have done this through empirical experimental procedures, which have three critical failings: they are slow, they explore chemical data space poorly, and, critically, ‘failed’ results are often ignored entirely (because they are frequently beyond human interpretation).

An initial collaborative study, carried out using a limited data set, describes featurisation of seven components of a chemical catalyst that produces chiral pharmaceutical building blocks. Interpretable featurisation allows Quantitative Structure-Property Relationships (QSPR) to be established between the catalyst structure and the chiral purity of the pro-pharmaceutical products in these rhodium-catalysed asymmetric Michael additions (RhCASA). Our overall machine learning approach favours easier human interpretation and the realisation of improved catalysts, some of which are currently on preliminary trial within GlaxoSmithKline.

This PhD project will extend the previous study, investigating recent advances in optimisation and machine learning methods focusing on active learning, explainable machine learning and generative methods, and how they can be applied effectively to the challenging data at hand. 

Applicants will be expected to have a relevant background in Chemistry or the chemical sciences. Knowledge of machine learning and/or optimisation methods is desirable, but not essential.

Supervisors: Prof Simon Woodward (School of Chemistry), Prof Ender Ozcan (School of Computer Science), Dr Grazziela Figueredo (School of Computer Science). Industrial External Advisors/Mentors: GlaxoSmithKline (Dr Katherine Wheelhouse), molecule.one (Piotr Byrski).

Postgraduate Researcher: Eduardo Aguilar Bejarano (School of Chemistry)

Life of a sperm whale: an AI support for its preservation

The sperm whale is a long-lived pelagic mammal with a worldwide range that has been listed as endangered in the Mediterranean Sea. Knowledge of sperm whale social organisation and movement in the basin is still scarce. Non-invasive techniques to study their lives, habits, and migration patterns include geotagged photographic data. Individual identity is reconstructed through a visual investigation of unique marks and pigmentation patterns on the dorsal fin, tail and other body parts. However, this process is still primarily done manually, which inhibits the ability of researchers to track individuals across large geographic areas and time scales.

This project will develop advanced deep learning approaches to automate the identification of single subjects (individual whales), combining this with picture geolocation data to track interactions between subjects and whale pods’ evolution over time. The project will use a combination of image recognition, machine learning, and statistical inference methodologies to address questions about the social structure, habits and movements of sperm whales that populate the Mediterranean Sea. The outcome of this project will contribute to supporting the activity of the NGO OceanoMare Delphis in the education and influence of national/international policies for the conservation of whales' critical habitats and migration corridors.
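The individual-identification step typically reduces to matching a feature embedding of a new photograph against a catalogue of known individuals. The sketch below is a minimal illustration with made-up three-dimensional embeddings and invented whale IDs (in practice the embeddings would come from a deep network trained with a metric-learning objective).

```python
# Illustrative sketch: re-identify an individual by nearest-neighbour search
# over embeddings, with a similarity threshold for "unknown individual".
# All vectors and IDs below are fabricated for demonstration.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

catalogue = {
    "whale_A": [0.9, 0.1, 0.3],
    "whale_B": [0.1, 0.8, 0.2],
    "whale_C": [0.2, 0.2, 0.9],
}

def identify(query, threshold=0.9):
    """Return the best-matching individual, or None if no match is confident."""
    best_id, best_sim = max(
        ((wid, cosine(query, emb)) for wid, emb in catalogue.items()),
        key=lambda item: item[1],
    )
    return best_id if best_sim >= threshold else None

print(identify([0.85, 0.15, 0.35]))  # close to whale_A's embedding
```

Pairing each confident match with the photograph's geolocation and timestamp is what then allows movements and social interactions to be reconstructed over time.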

Supervisors: Dr Silvia Maggi (School of Psychology), Dr Michael Pound (School of Computer Science), Prof Theodore Kypraios (School of Mathematical Sciences). External Advisor: Barbara Mussi, President of OceanoMare Delphis Onlus.

Postgraduate Researcher: Sam Fuller (School of Psychology)

Seeking a better view: Guiding cameras for optimal imaging via reinforcement learning

In the Biosciences, image capture is often the first step in obtaining reliable scientific measurements of plants. Good image capture is fundamental to many of these experiments: by measuring plants you can determine which are healthier, more robust, or producing more food. However, you can’t measure what you can’t see, and capturing a better image could be crucial in detecting the subtle differences that indicate a higher-yielding crop or resistance to pests. When humans take photos they adjust their position and angle to better see the subjects being captured, particularly when imaging physically complex and highly variable objects. Photographers spend years learning to take better pictures, yet in science most image capture is performed using static cameras, with a “one size fits all” approach chosen for convenience. This introduces a significant risk that crucial information will not be captured.

Advances in machine learning and robotics mean we can now train machines to look for better views before taking pictures: this is active vision. This PhD will explore reinforcement learning techniques to move robotically controlled cameras to new positions and viewpoints, and to adjust zoom, in order to better capture the subject of interest.

You will explore approaches to train robotic systems to capture better images with no human interaction. You will work across disciplines, imaging a variety of plant subjects ranging in size and shape, with a view to improving performance on a range of tasks including 3D reconstruction, feature detection and counting. Access will be provided to automated imaging systems equipped with varied movement capability, ranging from linear actuators to 6-DoF robotic manipulators, and you will work closely with bioengineers in designing new systems. Key to this work will be an exploration of different reward systems: how do we determine which view is better than another, or which view is best? We will then explore the most appropriate and powerful reinforcement learning regimes, including those guided by human examples. Though the PhD will focus on plants, the results of your work will ultimately be used to drive imaging systems across the University, and will be applicable to many other areas of image capture, such as physics, astronomy and medical imaging.
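
In its simplest form, viewpoint selection can be framed as a multi-armed bandit: each candidate camera pose is an action, and the reward is an image-quality score. The sketch below uses an epsilon-greedy strategy over a handful of invented poses with a simulated, noisy quality score (a stand-in for a real reward such as sharpness or downstream task performance; nothing here reflects the project's actual setup).

```python
# Illustrative sketch: epsilon-greedy bandit over candidate camera poses.
# Pose names, quality values and noise model are all fabricated.
import random

random.seed(1)
POSES = ["top_down", "oblique_45", "side_on", "close_up"]
TRUE_QUALITY = {"top_down": 0.4, "oblique_45": 0.9, "side_on": 0.5, "close_up": 0.6}

def capture_and_score(pose):
    """Simulated capture: true view quality plus measurement noise."""
    return TRUE_QUALITY[pose] + random.gauss(0.0, 0.05)

counts = {p: 0 for p in POSES}
values = {p: 0.0 for p in POSES}   # running estimate of each pose's reward
for step in range(500):
    if random.random() < 0.1:      # explore: try a random pose
        pose = random.choice(POSES)
    else:                          # exploit: use the current best estimate
        pose = max(POSES, key=lambda p: values[p])
    reward = capture_and_score(pose)
    counts[pose] += 1
    values[pose] += (reward - values[pose]) / counts[pose]  # incremental mean

best = max(POSES, key=lambda p: values[p])
print("learned best pose:", best)
```

Full reinforcement learning generalises this to continuous camera motions and states (what the camera currently sees), but the core question is the same one raised above: defining a reward that says which view is better.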

Supervisors: Dr Michael Pound (School of Computer Science), Dr Darren Wells (School of Biosciences), Dr Jonathan Atkinson (School of Biosciences),  Prof Tony Pridmore (School of Computer Science).

Postgraduate Researcher: Lewis Stuart (School of Computer Science)

 

Improving 3D Small Scale Medical Image Segmentation using AI

Image segmentation is a common task in computer vision and has many useful applications in life science imaging, particularly in medical research. The process of segmenting images has moved towards a more automated approach since the introduction of artificial intelligence (AI) and deep learning (DL), because manual annotation of images is tedious and, in the case of life science imaging, often requires expert knowledge. These AI methods require some level of annotation in order to learn, so the burden of manual annotation is not removed entirely, but it is significantly reduced.

The objective of this PhD project is to implement novel AI methods that improve the segmentation of cell-scale images, enhancing segmentation quality while reducing the time and effort experts must spend on manual annotation. The project will focus on tackling some of the difficulties associated with 3D medical image segmentation, such as limited training data, loss of global image context, and large computational resource requirements. We will also look at modality-specific data augmentation and at expanding datasets with synthetic images. The aim is for solutions resulting from this project to be adopted by researchers at the Rosalind Franklin Institute and applied to new datasets.
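
Two of the tactics mentioned above, augmenting limited training data and working around memory limits, can be sketched on a toy 3D volume. The example below uses nested Python lists purely for illustration (real pipelines would use numpy or torch tensors and richer transforms): a mirror-flip augmentation, and cutting a large volume into overlapping patches so each fits in GPU memory.

```python
# Illustrative sketch on a toy D x H x W volume stored as nested lists.

def flip_volume(vol, axis):
    """Flip-style augmentation: mirror the volume along one axis (0, 1 or 2)."""
    if axis == 0:
        return vol[::-1]
    if axis == 1:
        return [plane[::-1] for plane in vol]
    return [[row[::-1] for row in plane] for plane in vol]

def extract_patches(vol, size, stride):
    """Cut a volume into cubic sub-volumes so each fits in limited memory."""
    d, h, w = len(vol), len(vol[0]), len(vol[0][0])
    patches = []
    for z in range(0, d - size + 1, stride):
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                patches.append([[row[x:x + size] for row in plane[y:y + size]]
                                for plane in vol[z:z + size]])
    return patches

# 4x4x4 toy volume with a distinct value per voxel.
vol = [[[z * 16 + y * 4 + x for x in range(4)] for y in range(4)] for z in range(4)]
print(len(extract_patches(vol, size=2, stride=2)))  # 8 non-overlapping 2^3 patches
```

Patch-based training trades away global image context for memory, which is exactly one of the tensions this project sets out to address.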

Supervisors: Dr Michael Pound (School of Computer Science), Prof Andrew French (School of Computer Science) and Mark Basham (Rosalind Franklin Institute).

Postgraduate Researcher: Victoria Hann 


Further information

For further enquiries, please contact Professor Ender Özcan - School of Computer Science

School of Computer Science

University of Nottingham
Jubilee Campus
Wollaton Road
Nottingham, NG8 1BB

For all enquiries please visit:
www.nottingham.ac.uk/enquire