School of Computer Science
  

Nottingham Doctoral Training Centre in Artificial Intelligence

PhD studentships: Nottingham Doctoral Training Centre in Artificial Intelligence

  • Choose a multidisciplinary fully-funded PhD in the world-transforming field of artificial intelligence
  • A wide choice of AI research topics available, working with world-class academic experts in the field
  • Benefit from a fully-funded PhD with an attractive annual tax-free stipend and an individual research training grant
  • Studentships incorporate an internship placement within the AI industry
  • Join a multidisciplinary cohort to benefit from peer-to-peer learning and transferable skills development

The application round is currently on hold. Please check this webpage for announcements of when recruitment for the AI DTC PhD studentships will recommence.

Studentship information
Entry requirements: Minimum of a 2:1 bachelor's degree in computer science or a related discipline, and a strong enthusiasm for artificial intelligence research. Studentships are open to home and EU students only.
Start date: TBC
Funding: Annual tax-free stipend of £15,285 (2020/21 rate) plus fully-funded PhD tuition fees for the four years
Duration: 4 years

 

Research topics

In your application, you will be asked to rank the three research topics from the list below that interest you most.

Explainable AI

Supervisor: Professor Christian Wagner, Computer Science

The development of increasingly Explainable AI (XAI) is one of the most important research frontiers in AI and science generally.

As part of an ongoing body of work on combining statistical machine learning with white-box AI techniques, such as linguistic rules and interpretable aggregation operators, we are looking for PhD candidates with a quantitative background, for example in computer science or statistics.

You should be excited about focussing on the development of next-generation AI techniques which not only provide strong performance (eg in classification), but can also teach us how and why their decisions came to be.
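As a toy illustration of the white-box side of this combination (the variables, thresholds and rules below are hypothetical, invented purely for this sketch), a linguistic rule base can return not just a decision but a trace of the rules that produced it:

```python
# Toy sketch of a white-box classifier built from linguistic rules: every
# prediction can be traced back to the rules that fired. The variables,
# thresholds and rules below are hypothetical.

def membership_high(value, low, high):
    """Degree (0..1) to which `value` counts as 'high' on a linear ramp."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

# Hypothetical rule base: (description, firing-strength function, class).
rules = [
    ("IF temperature is high AND humidity is high THEN risk",
     lambda s: min(membership_high(s["temp"], 20, 35),
                   membership_high(s["humidity"], 50, 90)), "risk"),
    ("IF temperature is NOT high THEN safe",
     lambda s: 1.0 - membership_high(s["temp"], 20, 35), "safe"),
]

def classify(sample):
    strengths, trace = {}, []
    for text, strength_fn, label in rules:
        s = strength_fn(sample)
        # Aggregate firing strengths per class (here: simple maximum).
        strengths[label] = max(strengths.get(label, 0.0), s)
        trace.append((text, s))
    decision = max(strengths, key=strengths.get)
    return decision, trace       # the trace explains the decision

decision, explanation = classify({"temp": 30, "humidity": 80})
print("decision:", decision)
for rule_text, strength in explanation:
    print(f"  fired {strength:.2f}: {rule_text}")
```

The point of the sketch is the returned trace: every prediction comes with the human-readable rules behind it, which is precisely the property black-box statistical models lack.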

 

Verifiable AI

Supervisor: Professor Graham Hutton, Computer Science

We are recruiting one PhD student to carry out research with a focus on 'Verifiable AI'.

The aim of this project is to investigate the kinds of questions below and to make progress towards answering them.

  • What does it mean for a machine-learned algorithm to be correct?
  • How can we go about providing some kind of formal guarantee that such an algorithm is correct?
  • More generally, are these questions even meaningful, or do we need to rethink the issue of verifiability in this setting?

This project seeks to combine two areas that have traditionally been quite separate: machine learning and formal verification.

In the last few years, progress in the area of machine-learned algorithms has been dramatic, and there are many impressive examples of such algorithms in common use. At the same time, progress in formally verified software has reached the point where it is feasible to give formal guarantees about the correctness of real-world software systems, and there are again many impressive examples of this being achieved.
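As a hedged sketch of how one such correctness question might be made concrete (the model and the robustness property below are illustrative assumptions, not the project's definitions), consider testing whether a learned classifier is locally robust around an input:

```python
# Hypothetical sketch: one candidate notion of "correctness" for a
# learned model is local robustness, i.e. small input perturbations must
# not change the prediction. Here the property is only *tested* by
# random sampling; formal verification would aim to *prove* it for all
# perturbations. The model is a stand-in, not a real learned algorithm.
import random

def model(x):
    """Stand-in for a machine-learned binary classifier."""
    return 1 if 2.0 * x[0] - 1.5 * x[1] > 0 else 0

def locally_robust(model, x, epsilon, trials=1000):
    """Does every sampled perturbation within epsilon keep the label?"""
    label = model(x)
    for _ in range(trials):
        perturbed = [xi + random.uniform(-epsilon, epsilon) for xi in x]
        if model(perturbed) != label:
            return False, perturbed      # counterexample found
    return True, None                    # no counterexample (not a proof!)

ok, counterexample = locally_robust(model, x=[1.0, 0.5], epsilon=0.1)
print("robust on sampled ball" if ok else f"counterexample: {counterexample}")
```

Random testing of this kind can only refute the property; turning it into a guarantee over all perturbations is where formal verification, for example with a proof assistant, comes in.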

You are typically expected to have:

  • first-class master's and bachelor's degrees in computer science, mathematics, or a closely related discipline
  • experience in subjects such as mathematical logic, formal semantics, functional programming, type theory, proof assistants, and machine learning
 

AI methods and advanced biomaterials

Supervisors: Professor Morgan Alexander, Pharmacy and Dr Grazziela Figueredo, Computer Science

Advanced biomaterials are urgently needed to address the healthcare challenges associated with an ageing population.

High-throughput technologies generate substantial volumes of data for large numbers of biomaterials with diverse chemical and topographical properties. Machine learning methods have been highly successful in predicting the useful properties that drive biological phenomena in complex materials.

However, these models for materials design are limited by the lack of efficient descriptors (mathematical objects capturing relevant structural, physicochemical and topographical properties of materials) for training the models.

Next-generation screening platforms have been designed to screen topographies, chemistries and their combinations. However, effective use of the large volumes of data generated by these platforms is yet to be fully realised.

New, efficient and interpretable descriptors are needed to help deliver next-generation performance.

This project aims to use AI (mathematical algorithms, deep learning, and sparse feature selection) to generate descriptors and designs of biomaterial structures and surface topography images that map to desired biological responses. The objective is to understand the important properties of materials and to use this understanding to develop AI methods that create new, improved materials affecting immune responses and bacterial infection.
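As a minimal sketch of the sparse feature selection ingredient (the data below is synthetic and the descriptor indices arbitrary), an L1-penalised regression can reduce a large pool of candidate descriptors to a small, interpretable subset:

```python
# Minimal sketch of sparse feature selection for materials descriptors
# using Lasso regression (scikit-learn). The data is synthetic: rows
# stand in for biomaterials, columns for candidate descriptors.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_materials, n_descriptors = 200, 50
X = rng.normal(size=(n_materials, n_descriptors))

# Synthetic "biological response" that depends on only 3 descriptors.
true_coef = np.zeros(n_descriptors)
true_coef[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ true_coef + 0.1 * rng.normal(size=n_materials)

# The L1 penalty drives most coefficients exactly to zero, leaving a
# small, interpretable set of descriptors.
model = Lasso(alpha=0.05).fit(X, y)
print("selected descriptors:", np.flatnonzero(model.coef_))
```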

 

Normative generative models of brain connections for personalised disease mapping in mental illness

Supervisors: Dr Stam Sotiropoulos, Medicine and Health Sciences and Dr Xin Chen, Computer Science

The diagnosis and treatment of mental health disorders remain particularly challenging. Magnetic resonance imaging provides unique possibilities towards these challenges, allowing us to map the brain’s connectional architecture and explore pathology-induced network disruptions and dysfunction.

However, the lack of objective, quantitative measures in brain connectivity mapping restricts the potential for clinical applications.

In the absence of reference standards, case-control studies are commonly used. These compare a group of healthy controls with a group of patients, but inferences are limited to the group level. Their applicability also becomes problematic when studying mental health, where patient groups are highly heterogeneous and symptoms can overlap between diseases.

In this project, a new approach is introduced to tackle these problems. To untangle the complexity of mental illness, a clearer understanding of the brain’s normal variation is key. To achieve this, we will devise novel normative frameworks for brain connectivity, providing reference standards to characterise disorders in individual patients.                                                           

We will use population-level neuroimaging and clinical data (UK-Biobank and Human Connectome Project) and artificial intelligence generative modelling methods such as generative adversarial networks to:

  • develop multidimensional normative models that quantitatively characterise healthy variation of brain connections against clinical covariates
  • make personalised predictions on disease severity and variability and map pathology-affected brain networks, by determining deviations from the normative ranges
  • validate using independent datasets and demonstrate feasibility in mood disorders, by characterising the normative range of brain connections given clinical measures
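As a minimal illustration of the deviation-mapping idea in the second objective above (all data below is synthetic, and the project would use far richer generative models than this linear fit), an individual can be scored against the normative range predicted from clinical covariates:

```python
# Minimal sketch of normative deviation mapping on synthetic data: a
# normative model predicts the healthy mean and spread of a connectivity
# measure given a covariate (here, age); an individual is then scored as
# a z-score against that prediction.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic healthy population: connectivity declines slowly with age.
age = rng.uniform(20, 80, size=500)
connectivity = 1.0 - 0.005 * age + 0.05 * rng.normal(size=500)

# Simple normative model: linear mean, constant residual spread.
coeffs = np.polyfit(age, connectivity, deg=1)
residual_sd = np.std(connectivity - np.polyval(coeffs, age))

def deviation_z(age_i, connectivity_i):
    """Deviation of an individual from the normative range, in SD units."""
    expected = np.polyval(coeffs, age_i)
    return (connectivity_i - expected) / residual_sd

# A hypothetical patient whose connectivity sits far below the norm.
print(f"patient z-score: {deviation_z(55, 0.45):+.1f}")
```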
 

Automated algorithm design for transport logistics scheduling supported by machine learning

Supervisor: Dr Rong Qu, Computer Science

Recent advances in machine learning and optimisation have produced highly effective solutions for real-world transport scheduling, an area of increasing demand in the logistics supply chain.

Considerable knowledge about the design of intelligent optimisation algorithms has been accumulated; however, it remains scattered across the literature.

In transport scheduling, large amounts of data have also been collected with the development of the Internet of Things (IoT) at companies including Microlise Ltd.

This research theme proposes to integrate machine learning into optimisation in two ways:

  • enhancing intelligent algorithm design
  • producing cost-effective solutions based on transport big data
 

Neural networks with frugal learning

Supervisor: Professor Mark Van Rossum, Psychology

Both biological and artificial neural networks learn by changing the strength of connections between their neurons. Recently, it has emerged that in biology this modification of connections is an energy-costly process (Mery & Kawecki, 2005).

As energy constraints are believed to have shaped many aspects of the brain's design (see the suggested reading below), this raises the question: how can the brain learn while being frugal with these modifications?

In this project, we will use analytical calculations and simulations to see how different network architectures require different amounts of energy to learn to perform well on standard tasks. Next, we will examine how different modification rules (learning rules) affect energy consumption. We hope that the outcomes of this research will lead to deeper insight into how the brain implements learning, and at the same time inspire new approaches in machine learning.
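To make the question concrete, here is a minimal sketch in which the "energy" of learning is approximated by the summed magnitude of weight changes; this proxy is an assumption for illustration only, not the measure the project would necessarily adopt:

```python
# Minimal sketch: track a crude "metabolic cost" of learning as the
# summed magnitude of weight updates while a linear model is trained by
# gradient descent. The energy proxy is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = X @ true_w

w = np.zeros(10)
learning_rate = 0.05
energy = 0.0   # accumulated |delta w|: our stand-in for synaptic cost

for epoch in range(200):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    delta_w = -learning_rate * grad
    energy += np.abs(delta_w).sum()     # every modification costs energy
    w += delta_w

print(f"final loss {np.mean((X @ w - y) ** 2):.5f}, energy spent {energy:.2f}")
```

Comparing architectures or learning rules then amounts to swapping the update rule and recording how much energy each needs to reach the same performance.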

Requirements for this project:

  • A strong quantitative background
  • An affinity to biology and machine learning/AI
  • Python/MATLAB skills are desirable
  • No prior biology knowledge is required

Suggested reading:

  • Lennie, P. (2003). The cost of cortical computation. Current Biology, 13(6), 493-497.
  • Laughlin, S. B. (2001). Energy as a constraint on the coding and processing of sensory information. Current Opinion in Neurobiology, 11(4), 475-480.
  • Mery, F., & Kawecki, T. J. (2005). A cost of long-term memory in Drosophila. Science, 308(5725), 1148.
 

Inhibition stabilised networks with synaptic depression

Supervisor: Professor Mark Van Rossum, Psychology

It is still not very clear how excitation and inhibition are coordinated in the cortex.

Recently there has been a revival of an older idea: that excitation by itself is unstable, and only inhibition limits it. This 'inhibition-stabilised' regime has some interesting properties that match experimental data (Hennequin et al 2018).

Here, we extend this network with short-term synaptic depression, which synapses typically exhibit (van Rossum et al 2008). Depression by itself also explains a number of phenomena, such as the contrast and flash-duration independence of visual responses in higher visual areas, and the contrast-latency relation. We will undertake simulations of this hybrid circuit to examine its properties.
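A minimal sketch of the kind of simulation involved (all parameters are illustrative, not fitted to data): a two-population rate model in the inhibition-stabilised regime, with Tsodyks-Markram-style depression on the excitatory synapses:

```python
# Minimal sketch of a two-population (E/I) rate model in which the
# excitatory synapses undergo short-term depression. All parameters are
# illustrative, not fitted to data.
dt, T = 1e-3, 2.0                    # time step and duration (s)
tau_e, tau_i = 0.02, 0.01            # population time constants (s)
tau_d, U = 0.2, 0.5                  # depression recovery time, release fraction
w_ee, w_ei, w_ie, w_ii = 2.0, 2.5, 3.0, 2.0   # connection strengths

# Note w_ee > 1: the excitatory subnetwork is unstable on its own and is
# held in check by inhibition, i.e. the inhibition-stabilised regime.
r_e, r_i, x = 0.0, 0.0, 1.0          # firing rates and synaptic resource
trace = []
for step in range(int(T / dt)):
    stim = 1.0 if step * dt > 0.5 else 0.0        # step input to E cells
    drive_e = w_ee * x * r_e - w_ei * r_i + stim
    drive_i = w_ie * x * r_e - w_ii * r_i
    r_e += dt / tau_e * (-r_e + max(drive_e, 0.0))
    r_i += dt / tau_i * (-r_i + max(drive_i, 0.0))
    # Resource is consumed by presynaptic firing and recovers with tau_d.
    x += dt * ((1.0 - x) / tau_d - U * x * r_e)
    trace.append(r_e)

print(f"peak excitatory rate {max(trace):.3f}, adapted rate {trace[-1]:.3f}")
```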

Suggested reading:

  • van Rossum, M. C. W., van der Meer, M. A. A., Xiao, D., & Oram, M. W. (2008). Adaptive integration in the visual cortex by depressing recurrent cortical circuits. Neural Computation, 20(7), 1847-1872.
 

Smart 3D

Supervisors: Professor Richard Leach and Dr Samanta Piano, Engineering

The 3D scanning of objects is fundamental in a range of human activities, from heritage to construction to manufacturing. Current optical scanning techniques require measurement and computational steps that can be time-consuming and can compromise the accuracy of the data. Artificial intelligence can address both of these limitations, using machine learning to reduce the brute-force computation and the vast number of measurements that conventional approaches require.

Using existing 3D scanning technology, the project will tackle a number of measurement and computational challenges including how to determine the minimum number and the optimum position of views, how to optimise the number of points for a given feature and how to estimate the confidence in the data.

These challenges will be tackled both with and without a prior model of the object being measured, in some cases requiring the development of reinforcement learning methods.
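As a toy sketch of the view-planning challenge above (the spherical test object and the visibility rule are simplifying assumptions), choosing views can be framed as greedy coverage maximisation:

```python
# Toy sketch of view planning: greedily pick the next scanner view that
# covers the most not-yet-seen surface points. The spherical object and
# visibility model below are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_candidate_views = 500, 20

surface = rng.normal(size=(n_points, 3))
surface /= np.linalg.norm(surface, axis=1, keepdims=True)  # unit sphere
views = rng.normal(size=(n_candidate_views, 3))
views /= np.linalg.norm(views, axis=1, keepdims=True)

# A point counts as "visible" from a view if it roughly faces it.
visible = surface @ views.T > 0.3        # (n_points, n_views) boolean

covered = np.zeros(n_points, dtype=bool)
chosen = []
while covered.mean() < 0.95:             # target 95% surface coverage
    gains = (visible & ~covered[:, None]).sum(axis=0)
    best = int(gains.argmax())
    if gains[best] == 0:
        break                            # remaining points not coverable
    chosen.append(best)
    covered |= visible[:, best]

print(f"{len(chosen)} views cover {covered.mean():.0%} of the surface")
```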

The project will be supported by the Manufacturing Technology Centre and Taraz Metrology.

 

High accuracy robotic system for precise object manipulation

Supervisors: Professor David Branson III and Professor Richard Leach, Engineering

This project will combine state-of-the-art metrology and computer science to increase by an order of magnitude the positional accuracy of industrial robots operating within working volumes exceeding 1 m³.

This will enable precise object manipulation across many application areas including precision manufacturing and assembly systems.

You will develop an intelligent framework for accurate real-time object tracking, derived from a machine learning algorithm and based on the integration of advanced measurement data from a new interferometer and non-contact 3D coordinate measuring instruments, to compensate for systematic and random errors.

Results from this research will take industrial robots to the next level of high-accuracy manufacturing to reduce costs, increase production efficiency and improve product quality.

The project will be supported by the Manufacturing Technology Centre and Taraz Metrology.

 

Machine learning for first-principles calculation of physical properties

Supervisors: Dr Richard Graham, Mathematical Sciences and Dr Richard Wheatley, Chemistry

The physical properties of all substances are determined by the interactions between the molecules that make up the substance. The energy surface corresponding to these interactions can be calculated from first principles, in theory allowing physical properties to be derived ab initio from a molecular simulation; that is, by theory alone and without the need for any experiments.

Recently, we have focussed on applying these techniques to model carbon dioxide properties, such as density and phase separation, for applications in carbon capture and storage.

However, there is enormous potential to exploit this approach in a huge range of applications. A significant barrier is the computational cost of calculating the energy surface quickly and repeatedly, as a simulation requires.

We have recently developed a machine-learning technique that, by using a small number of precomputed ab-initio calculations as training data, can efficiently calculate the entire energy surface.

This project will involve extending the approach to more complicated molecules and testing its ability to predict macroscopic physical properties.
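As a generic sketch of the surrogate idea (the toy pair potential and the kernel choice below are assumptions for illustration; the group's actual technique is described in its publications), a Gaussian process trained on a few expensive calculations can stand in for the full energy surface:

```python
# Generic sketch of the surrogate idea: fit a cheap model to a potential
# energy curve from a handful of "expensive" points, then evaluate it
# everywhere. The Lennard-Jones-style toy potential and kernel choice
# are assumptions for illustration only. Requires scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_energy(r):
    """Stand-in for an ab-initio calculation (toy Lennard-Jones form)."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# A small number of precomputed training calculations...
r_train = np.linspace(0.95, 2.5, 12).reshape(-1, 1)
E_train = expensive_energy(r_train).ravel()

# ...interpolated by a Gaussian process surrogate.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                              normalize_y=True).fit(r_train, E_train)

# The surrogate is now cheap enough to call inside a molecular simulation.
r_test = np.linspace(1.0, 2.4, 5).reshape(-1, 1)
E_pred, E_std = gp.predict(r_test, return_std=True)
for r, e, s in zip(r_test.ravel(), E_pred, E_std):
    print(f"r={r:.2f}  predicted {e:+.3f} +/- {s:.3f}  "
          f"true {expensive_energy(r):+.3f}")
```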

The project will be based at the University of Nottingham in the School of Mathematical Sciences and the School of Chemistry.

 

Learning collective behaviour

Supervisor: Dr Dante Kalise, Mathematical Sciences

This project is at the interface between data-driven modelling, dynamical systems, and control theory. In recent years, the study of multi-agent systems has become a topic of increasing interest in mathematics, biology, sociology, and engineering, among many other disciplines.

Multi-agent systems are usually modelled as a large-scale set of particles interacting under simple binary rules, such as attraction, repulsion, and alignment forces. The wide applicability of this setting ranges from modelling the collective behaviour of bird flocks, to the study of data transmission over communication networks, including the description of opinion dynamics in human societies, and the formation control of platoon systems.
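As a minimal sketch of such binary interaction rules (parameters are illustrative), a Cucker-Smale-type model in which each agent aligns its velocity with a distance-weighted average of the others':

```python
# Minimal sketch of a Cucker-Smale-type alignment model: each agent
# steers its velocity toward a distance-weighted average of the others'.
# Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 50, 0.05, 400
pos = rng.uniform(-1, 1, size=(n, 2))
vel = rng.normal(size=(n, 2))

def comm_weight(dist, K=1.0, beta=0.5):
    """Communication rate decaying with distance (Cucker-Smale kernel)."""
    return K / (1.0 + dist ** 2) ** beta

for _ in range(steps):
    diff = pos[None, :, :] - pos[:, None, :]      # pairwise displacements
    dist = np.linalg.norm(diff, axis=-1)          # pairwise distances
    w = comm_weight(dist)
    np.fill_diagonal(w, 0.0)
    # Alignment force: weighted average of velocity differences.
    dvel = (w[:, :, None] * (vel[None, :, :] - vel[:, None, :])).sum(axis=1) / n
    vel += dt * dvel
    pos += dt * vel

spread = np.std(vel, axis=0).sum()
print(f"velocity spread after {steps} steps: {spread:.4f}")  # flocking -> ~0
```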

Borrowing ideas from statistical mechanics, many of these applications admit a multiscale description: the system can be described in terms of its microscopic (particle) dynamics, or through the evolution of a meso-/macroscopic state such as the density of agents.

In this project, we will develop a statistical machine learning framework for the construction of multi-agent dynamical systems from real data. Large datasets originating from collective behaviour experiments with animals, pedestrians, and social networks, will be the basis for calibrating a suite of multiscale interacting particle systems. This will allow us to generate predictive models to be used in model-driven policy making and control.                                                           

References:

  • [1] Y.P. Choi, D. Kalise, A. Peters and J. Peszek. A collisionless singular Cucker-Smale model with decentralized forcing and applications to formation control for UAVs. SIAM Journal on Applied Dynamical Systems, 18(4):1954-1981, 2019.
  • [2] G. Albi, M. Bongini, E. Cristiani, and D. Kalise. Invisible control of self-organizing agents leaving unknown environments. SIAM Journal on Applied Mathematics, 76(4):1683-1710, 2016.
  • [3] D. Strömbom, R. P. Mann, A. M. Wilson, S. Hailes, A. J. Morton, D. J. T. Sumpter, and A. J. King. Solving the shepherding problem: heuristics for herding autonomous, interacting agents. Journal of The Royal Society Interface, 11(100):20140719, 2014.
  • [4] S. Rudy, A. Alla, S. L. Brunton, and J. N. Kutz. Data-driven identification of parametric partial differential equations. SIAM Journal on Applied Dynamical Systems, 18(2):643-660, 2019.
 

Efficient computational methods in high dimensional optimal transport

Supervisor: Dr Dante Kalise, Mathematical Sciences

Optimal transport (OT) is a topic dating back to the seminal work of Gaspard Monge in 1781 [1] with modern applications including:

  • image processing
  • neural networks
  • resource allocation
  • weather forecast
  • statistical inference

among many other areas.

The objective of this project is to develop new computational techniques for the efficient numerical realisation of solutions to the OT problem in high dimensions, and its applications in data science.

For this, we will study two different numerical approaches:

  • the realisation of the OT problem as a fluid flow control problem [2]
  • a method based on the entropic regularisation of the OT problem, the so-called Sinkhorn algorithm [3]

We will study efficient numerical techniques for the approximation of the underlying computational optimisation problems based on proximal splitting methods [4].                                                                    
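As a minimal sketch of the second approach above (problem sizes and the regularisation strength epsilon are illustrative), the Sinkhorn algorithm alternates row and column rescalings of a Gibbs kernel until both marginals are matched:

```python
# Minimal sketch of the Sinkhorn algorithm [3]: entropic regularisation
# turns the OT problem into alternating row/column rescalings of a Gibbs
# kernel. Problem sizes and epsilon below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, epsilon = 60, 80, 0.05

# Two discrete distributions supported on random points in the unit square.
x, y = rng.uniform(size=(n, 2)), rng.uniform(size=(m, 2))
mu, nu = np.full(n, 1.0 / n), np.full(m, 1.0 / m)

# Squared Euclidean cost matrix and its Gibbs kernel.
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-C / epsilon)

u, v = np.ones(n), np.ones(m)
for _ in range(500):          # Sinkhorn iterations
    u = mu / (K @ v)          # rescale to match the row marginals
    v = nu / (K.T @ u)        # rescale to match the column marginals

plan = u[:, None] * K * v[None, :]   # entropy-regularised transport plan
print(f"regularised OT cost: {(plan * C).sum():.4f}")
print(f"row-marginal error:  {np.abs(plan.sum(axis=1) - mu).max():.2e}")
```

In high dimensions the same iterations are usually carried out in the log domain for numerical stability; that refinement is omitted here.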

References:

  1. G. Monge. Mémoire sur la théorie des déblais et des remblais. Histoire de l'Académie Royale des Sciences de Paris, avec les Mémoires de Mathématique et de Physique pour la même année, pages 666-704, 1781.
  2. J.D. Benamou and Y. Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3):375-393, 2000.
  3. M. Cuturi. Sinkhorn distances: lightspeed computation of optimal transport. NIPS'13, pages 2292-2300, 2013.
  4. L.M. Briceño-Arias, D. Kalise, and F.J. Silva. Proximal methods for stationary Mean Field Games with local couplings. SIAM Journal on Control and Optimization, 56(2):801-836, 2018.
  5. G. Peyré and M. Cuturi. Computational Optimal Transport. arXiv:1803.00567, 2018.
 

Plant phenotyping: Guiding deep networks for expert image labelling using human gaze

Supervisor: Dr Michael Pound, Biosciences/Computer Science

Deep learning has proven to be extremely effective in supervised tasks, surpassing the state of the art in most areas, including segmentation, classification and object localisation. This success, however, has built up a reliance on high-quality annotated data.

In some domains such as general object classification, manual annotation of images is still cost-effective, even when transferring to completely new domains. In biological imaging, this is often not the case.

Experts examining images will consider numerous separate metrics, weighing them together before arriving at an image-level decision. These complex and often ambiguous problems pose a real challenge to deep networks, where there are many possible ways to interpret an image and weigh up its features. This inevitably places an extremely heavy burden on annotators, who must label even more features in each image in order to provide effective supervision.

This PhD will explore techniques to leverage human gaze and fixation information, captured while annotation takes place, to more effectively guide the training of deep neural networks.

Tools will be developed to allow experts to quickly analyse large sets of images, while information on where and when they look is recorded.

A core part of the PhD will be the development of deep networks able to exploit this information through novel attention-driven techniques. A key measure of success will be the general nature of the approaches; the datasets and images used during this project will be widely varied. These will range from new large-scale datasets of plants under heat and drought stress, through to generalised problems over widely-used public datasets. This work will have wide impact in a variety of fields.
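One plausible way to use the recorded gaze, shown here only as a hedged sketch (this loss combination is an assumption, not the project's method), is to penalise disagreement between the network's spatial attention map and the expert's gaze heatmap:

```python
# Hedged sketch (not the project's method): a training loss in which the
# network's spatial attention map is encouraged to match the gaze
# heatmap recorded while the expert annotated the image.
import numpy as np

def normalise(heatmap):
    """Turn a non-negative map into a spatial probability distribution."""
    h = np.clip(heatmap, 0.0, None) + 1e-8
    return h / h.sum()

def gaze_guidance_loss(attention_map, gaze_map):
    """KL divergence from the gaze distribution to the attention map."""
    p, q = normalise(gaze_map), normalise(attention_map)
    return float((p * np.log(p / q)).sum())

def total_loss(task_loss, attention_map, gaze_map, weight=0.1):
    """Task loss (e.g. classification) plus the gaze-guidance term."""
    return task_loss + weight * gaze_guidance_loss(attention_map, gaze_map)

# Toy example: attention concentrated away from where the expert looked.
attention = np.zeros((8, 8)); attention[1, 1] = 1.0
gaze = np.zeros((8, 8)); gaze[6, 6] = 1.0
print(f"guidance penalty: {gaze_guidance_loss(attention, gaze):.2f}")
```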

 

Creating privacy-preserving data for clinically relevant behaviour analysis

Supervisor: Professor Michel Valstar, Computer Science

External party: BlueSkeye AI Ltd

Recent advances in AI have shown that it is possible to use objective expressive behaviour as a cue to recognise poor mental health. However, the audio-visual data required to train such mental health analysis models is highly sensitive, often portraying vulnerable people talking about their personal lives. In addition, such data is very scarce and difficult to obtain.

Recent methods in automatic data synthesis, for example using Generative Adversarial Networks (GANs), offer the opportunity to create more data, which could at the same time be synthesised in such a way as not to look like the original person. How to do this is still an open research question, as is how to do so for audiovisual data.
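As a minimal sketch of the GAN idea (the 1-D synthetic signals below stand in for sensitive audio-visual features; the project's architecture would be far richer), a generator is trained to produce samples that a discriminator cannot tell apart from real ones:

```python
# Hedged sketch of the GAN idea on toy 1-D signals (synthetic sine waves
# stand in for sensitive audio-visual features; the project's models
# would be far richer). Requires PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
signal_len, latent_dim = 32, 8

def real_batch(n=64):
    """Synthetic stand-in for real (sensitive) data: noisy sine waves."""
    t = torch.linspace(0, 6.28, signal_len)
    phase = torch.rand(n, 1) * 6.28
    return torch.sin(t + phase) + 0.05 * torch.randn(n, signal_len)

# Generator maps noise to signals; discriminator scores realness.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                  nn.Linear(64, signal_len))
D = nn.Sequential(nn.Linear(signal_len, 64), nn.ReLU(),
                  nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    real = real_batch()
    fake = G(torch.randn(len(real), latent_dim))
    # Discriminator step: label real as 1, synthesised as 0.
    d_loss = (bce(D(real), torch.ones(len(real), 1)) +
              bce(D(fake.detach()), torch.zeros(len(real), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator step: try to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(len(real), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

sample = G(torch.randn(1, latent_dim)).detach()
print("synthetic sample (first five values):", sample[0, :5])
```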

This research would be in partnership with BlueSkeye AI Ltd, a spin-out company from the University of Nottingham that is building and commercialising mental health assessment technology.

 

Developing AI methods for 3D cell analysis

Supervisor: Dr Andrew French, Computer Science

External party: The Rosalind Franklin Institute

High-resolution 3D imaging of cells is becoming more mainstream in the biological community, due to the increased availability of high-resolution electron microscopes and a growing number of nano-focus X-ray sources becoming accessible around the world.

The Rosalind Franklin Institute is one such group driving forward these developments to increase the size and resolution of the resulting datasets. With larger datasets, though, comes the exacerbation of an already significant problem: that of data analysis. Currently it can take months for a research scientist to manually analyse one dataset collected by one of these machines, while the actual image capture usually takes less than an hour. With next-generation machines planned to create an order of magnitude more data, this problem will intensify.

The University of Nottingham's Computer Vision Lab has worked with Diamond Light Source in the past to address some of these problems, with the very effective SuRVoS [1] software application, which reduces the time for a researcher to analyse their data from weeks to days.

This project offers a unique opportunity to try to reduce this time further, investigating the advances in deep-learning methods and how they can be applied as effectively as possible to this challenging data.

 

Further information

For further enquiries, please contact Professor Michel Valstar - School of Computer Science

School of Computer Science

University of Nottingham
Jubilee Campus
Wollaton Road
Nottingham, NG8 1BB

For all enquiries please visit:
www.nottingham.ac.uk/enquire