The following is a sample of the typical modules that we offer as at the date of publication but is not intended to be construed and/or relied upon as a definitive list of the modules that will be available in any given year. Due to the passage of time between commencement of the course and subsequent years of the course, modules may change due to developments in the curriculum and the module information in this prospectus is provided for indicative purposes only.
This module provides an introduction to probability by developing a mathematical framework for the logic of uncertainty. The language of set theory is used to describe random events. An axiomatic definition of probability is introduced and used as a basis for developing material such as conditional probability, independence and Bayes' Theorem. Random variables are introduced, including definitions and manipulations involving mass, density and distribution functions, and expectation and variance. Standard discrete and continuous random variables are presented. A number of additional topics are considered, such as sums of random variables, simple transformations of random variables, bivariate discrete random variables, and an introduction to the central limit theorem.
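By way of illustration (this is not module material), the role of Bayes' Theorem in reasoning about conditional probability can be sketched in a few lines of Python, using made-up numbers for a diagnostic-test example:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical example: a condition with 1% prevalence, a test with
# 95% sensitivity and a 5% false-positive rate.
p_a = 0.01                     # prior P(condition)
p_b_given_a = 0.95             # P(positive | condition)
p_b_given_not_a = 0.05         # P(positive | no condition)

# Law of total probability: overall chance of a positive test
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior probability of the condition given a positive test
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))   # about 0.161: still unlikely, despite the positive test
```

The perhaps surprising conclusion, that a positive result still leaves the condition unlikely, is exactly the kind of reasoning the axiomatic framework makes precise.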
Analytical and Computational Foundations
This module is one of three linked 20-credit year-long modules introducing students to a broad range of core mathematical concepts and techniques that underpin all the School of Mathematical Sciences' degree programmes. It has three components:
- Mathematical reasoning (the language of mathematics, the need for rigour, and methods of proof)
- The computer package MATLAB and its applications
- Elementary analysis.
In this module a range of statistical ideas and skills are developed, building on the foundations of probability covered in the Probability module. It describes mathematical concepts and techniques for modelling situations involving uncertainty and for analysing and interpreting data. In particular, exploratory data analysis, point estimation, confidence intervals, hypothesis testing, linear regression and analysis of categorical data are covered. Use is made of an appropriate statistical package to apply the principles and methods described in the lectures.
Calculus and Linear Algebra
The module consolidates core GCE mathematical topics in the differential and integral calculus of a function of a single variable, and applies them to solving some classes of differential equations. Basic theory is extended to more advanced topics in the calculus of several variables. In addition, the basic concepts of complex numbers, vector and matrix algebra are established and extended to provide an introduction to vector spaces. An emphasis of the module is to develop general skills and confidence in applying the methods of calculus, and to develop techniques and ideas that are widely applicable and used in subsequent modules. Major topics are:
- differential and integral calculus of a single variable;
- differential equations;
- differential calculus of several variables;
- multiple integrals;
- complex numbers;
- matrix algebra;
- vector algebra and vector spaces.
Databases and Interfaces
Databases are everywhere and we interact with many different databases every day, using the web, using electronic calendars, diaries or timetables, making appointments, searching for contact details, shopping online, looking up directions, and many more things. These databases need to be both easy to use and fast. This module considers both the structure of databases, including how to make them fast, efficient and reliable, and the appropriate user interfaces which will make them easy for users to interact with. You will start by looking at how to design a database, gaining an understanding of the standard features that management systems provide and how you can best utilise them, and then develop an interactive application to access your database.
Throughout the lectures and computing sessions you will learn how to design and implement systems using a standard database management system, web technologies and GUI interfaces through practical programming/system examples. You will spend around three hours per week in lectures and two hours per week in organised computer labs studying for this module, and will be expected to spend additional time practising and completing your coursework.
Programming and Algorithms
You will be introduced to principles of programming and algorithms. It covers fundamental programming constructs, such as types and variables, expressions, control structures and functions. The module also teaches how to design and analyse simple algorithms and data structures that allow efficient storage and manipulation of data. Finally, it familiarises students with software development methodology, including documentation, testing, debugging, and the use of software development tools, such as integrated development environments (IDEs) and version control systems. You will spend around six hours per week in lectures, computer classes and tutorials.
This module gives a basic understanding of the fundamental architecture of computers and computer networks. This module will introduce how the simple building blocks of digital logic can be put together in different ways to build an entire computer. It will also show how modern computer systems and networks are constructed of hierarchical layers of functionality which build on and abstract the layers below. You will spend five hours per week in tutorials, lectures and computer classes for this module.
Fundamentals of Artificial Intelligence
You will gain a broad overview of the fundamental theories and techniques of Artificial Intelligence (AI). You’ll explore how computers can produce intelligent behaviour, and will consider topics such as the history of AI, search techniques, data mining, machine learning, game playing techniques, neural networks, philosophical issues, and knowledge representation and reasoning.
Algorithms Correctness and Efficiency
This module covers important aspects of algorithms, namely their correctness and efficiency. To address correctness we use a mathematically rigorous approach to formal verification using an interactive proof system. You’ll study topics such as:
- proofs in propositional logic and predicate logic
- classical vs. intuitionistic reasoning
- basic operations on types
- verification of list based programs
- an introduction to program specification and program correctness.
To address the issue of efficiency we cover the use of mathematical descriptions of the computational resources needed to support algorithm design decisions. You’ll study topics such as: sorting algorithms, heaps, binary search trees, hashmaps, and graph algorithms. The emphasis is upon understanding data structures and algorithms so as to be able to design and select them appropriately for solving a given problem.
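As a small, purely illustrative taste of the data structures listed above (not part of the module materials), a heap can be exercised with Python's standard heapq module:

```python
import heapq

# A binary min-heap supports insertion and removal of the minimum in
# O(log n) time; Python's heapq module implements one on top of a list.
data = [5, 1, 9, 3, 7]
heap = []
for x in data:
    heapq.heappush(heap, x)      # O(log n) per insertion

# Repeatedly extracting the minimum yields the elements in sorted
# order: this is heapsort, O(n log n) overall.
result = [heapq.heappop(heap) for _ in range(len(data))]
print(result)                    # [1, 3, 5, 7, 9]
```

Choosing a heap here, rather than re-sorting a list after every insertion, is exactly the kind of design decision the efficiency analysis in the module supports.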
Introduction to Scientific Computation
This module introduces basic techniques in numerical methods and numerical analysis which can be used to generate approximate solutions to problems that may not be amenable to analysis. Specific topics include:
- implementing algorithms in MATLAB
- discussion of errors (including rounding errors)
- iterative methods for nonlinear equations (simple iteration, bisection, Newton, convergence)
- Gaussian elimination, matrix factorisation, and pivoting
- iterative methods for linear systems, matrix norms, convergence, Jacobi, Gauss-Seidel.
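As an indicative sketch (not module code, which uses MATLAB), the Newton iteration named above can be written in a few lines of Python for the example f(x) = x² − 2, whose root is √2:

```python
# Newton's method: iterate x_{n+1} = x_n - f(x_n) / f'(x_n) until the
# step size falls below a tolerance. Illustrative sketch only.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x^2 - 2 has root sqrt(2); f'(x) = 2x
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(abs(root - 2 ** 0.5))   # error is essentially zero after a few iterations
```

The rapid (quadratic) convergence seen here, compared with the slower bisection method, is one of the convergence questions the module analyses.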
Probability Models and Methods
In the first part of this module, the ideas of probability introduced in G11PRB are extended to provide a more formal introduction to the theory of probability and random variables, with particular attention being paid to continuous random variables. Fundamental concepts, such as independence, conditioning, moments, joint distributions, transformations and generating functions are discussed in detail. This part concludes with an introduction to some famous limit theorems and the multivariate normal distribution.
The second part of the module gives an introduction to stochastic processes, i.e. random processes that evolve with time. The focus is on discrete-time Markov chains, which are fundamental to the wider study of stochastic processes. Topics covered include transition matrices, recurrence and transience, irreducibility, periodicity, equilibrium distributions, ergodic theorems, absorption probabilities, mean passage times and reversibility. Discrete-time renewal processes and branching processes are considered as applications. The module finishes with an introduction to simple one-dimensional random walks, including sample path diagrams, the reflection principle, recurrence and transience, first passage probabilities and arcsine laws.
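The convergence of an irreducible, aperiodic chain to its equilibrium distribution can be illustrated with a toy two-state example in Python (an illustration only, not part of the module materials):

```python
# A two-state discrete-time Markov chain with transition matrix P.
# The equilibrium distribution pi satisfies pi = pi P; for an
# irreducible, aperiodic chain, iterating forward from any initial
# distribution converges to pi.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist, P):
    """One step of the chain: new_j = sum_i dist_i * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]          # start with certainty in state 0
for _ in range(100):
    dist = step(dist, P)

# Solving pi = pi P exactly gives pi = (5/6, 1/6)
print([round(p, 4) for p in dist])
```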
Statistical Models and Methods
The first part of this module provides an introduction to statistical concepts and methods. A wide range of statistical models will be introduced to provide an appreciation of the scope of the subject and to demonstrate the central role of parametric statistical models. The key concepts of inference including estimation and hypothesis testing will be described. Special emphasis will be placed on maximum likelihood estimation and likelihood ratio tests. While numerical examples will be used to motivate and illustrate, the content will emphasise the mathematical basis of statistics. Topics include maximum likelihood estimation, confidence intervals, likelihood ratio tests, categorical data analysis and non-parametric procedures.
The second part of the module introduces a wide class of techniques such as regression, analysis of variance, analysis of covariance and experimental design which are used in a variety of quantitative subjects. Topics covered include the general linear model, least squares estimation, normal linear models, simple and multiple regression, practical data analysis, and assessment of model adequacy.
As well as developing the theory, practical experience will be obtained by the use of a statistical computer package.
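To give a flavour of the least squares estimation covered in the second part, simple linear regression can be computed directly from the standard formulae in Python (illustrative only, with made-up data; the module itself uses a statistical package):

```python
# Simple linear regression y = a + b*x by least squares:
# b_hat = Sxy / Sxx and a_hat = ybar - b_hat * xbar.
x = [1, 2, 3, 4, 5]                  # hypothetical data
y = [2.1, 4.0, 6.2, 7.9, 10.1]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b_hat = sxy / sxx                    # estimated slope
a_hat = ybar - b_hat * xbar          # estimated intercept
print(round(b_hat, 2), round(a_hat, 2))
```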
Operating Systems and Concurrency
This module covers the fundamental principles that underpin operating systems and concurrency. Topics in operating systems that are covered include the architecture of operating systems, process and memory management, storage, I/O, and virtualisation. The principles of concurrency will be introduced from both the perspective of an operating system and user applications. Specific topics on concurrency include: hardware support for concurrency; mutual exclusion and condition synchronisation; monitors; safety and liveness properties of concurrent algorithms; and the use of threads and synchronisation.
This module builds on your basic Java programming and software engineering skills developed in year one, extending them to working with larger third-party software systems and the challenges associated with this. Topic examples include: design diagrams and modelling; GUI programming; testing; and software engineering methodologies (including agile development and tools), all in the context of understanding and refactoring third-party code. You will spend around two hours per week in lectures, two hours per week in computer classes, and one hour per week in workshops studying for this module.
Artificial Intelligence Methods
This module builds on the first year Fundamentals of AI module, which covers the ACM learning outcomes, and introduces new areas. The emphasis is on building on the AI research strengths in the school. It gives brief introductions to topics including AI techniques, fuzzy logic and intelligent agents, and modern search techniques such as Genetic Algorithms, Tabu Search, Simulated Annealing and Genetic Programming. Students will also explore the implementation of some AI techniques.
Advanced Functional Programming
Building upon the introductory Functional Programming module in year one, you’ll focus on a number of more advanced topics such as: programming with effects; reasoning about programs; control flow; advanced libraries; improving efficiency; type systems; and functional pearls. You’ll spend around four hours per week in lectures and computer classes for this module.
Introduction to Human Computer Interaction
This module is part of the Human Computer Interaction theme in the School of Computer Science. It aims to teach an understanding of people's interactions with technology and how to apply this knowledge in the design of usable interactive computer systems. The module will introduce the concept of usability and will examine different design approaches and evaluation methods. Specifically, this module will cover an understanding of different styles of interaction with technology, an analysis of user needs, design standards, low fidelity prototyping techniques and a comparison of evaluation techniques.
Introduction to Image Processing
This module introduces the field of digital image processing, a fundamental component of digital photography, television, computer graphics and computer vision. You’ll cover topics including: image processing and its applications; fundamentals of digital images; digital image processing theory and practice; and applications of image processing. You’ll spend around three hours in lectures and computer classes each week for this module.
You will cover the programming material and concepts necessary to obtain an understanding of the C++ programming language. You will spend around four hours per week in lectures and computer classes for this module and will be expected to take additional time to practise and to produce your coursework.
Building on the material presented in the Foundations of Software Engineering module, you will cover two main aspects of the software engineering process in depth: requirements and design. This will cover modern approaches to large-scale requirements engineering and specification, and approaches to systems and architectural design. You will spend around two hours per week in lectures and one hour in labs for practical experience for this module.
Languages and Computation
You'll investigate classes of formal language and the practical uses of this theory, applying this to a series of abstract machines ultimately leading to a discussion of what computation is, what can and cannot be computed, and computational complexity. You'll focus in particular on language recognition, but will study a range of topics including: finite state machines, regular expressions, context-free grammars, Turing machines, and Lambda calculus. Practical applications include parsing. This module builds on parts of G52ACE and introduces concepts which are important to understand the analysis of algorithms in terms of their complexity.
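As a small taste of language recognition (an illustrative sketch, not module material), a finite state machine can be simulated in a few lines of Python. This deterministic automaton accepts exactly the binary strings containing an even number of 1s, a classic regular language:

```python
# A DFA with two states: "even" (accepting) and "odd", reading one
# symbol at a time from the alphabet {'0', '1'}.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(string, start="even", accepting=("even",)):
    """Run the DFA over the input and test the final state."""
    state = start
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

print(accepts("0110"))   # True: two 1s
print(accepts("1000"))   # False: one 1
```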
This module is part of the Operating Systems and Networks theme. This module covers the following topics: overview of parallel and distributed computing; applications of distributed systems; fundamental concepts of distributed systems (processes and message passing, naming and discovery, fault tolerance and partial failure, consistency and caching, security); reliable network communication; distributed system design approaches (direct vs indirect communication, client-server vs peer-to-peer, stateful vs stateless interfaces); introduction to distributed data management (3-tier(+) architecture, distributed RDBMSs and no-SQL DBs); introduction to distributed algorithms.
In this module a variety of techniques and areas of mathematical optimization will be covered. The module will not only tackle the problems from a rigorous mathematical standpoint, but will also develop the techniques for application through computational examples. The module will contain sections covering the following topics:
- Lagrangian methods for optimization, overview of uses in a variety of applications – some of which appear in later sections;
- linear programming including the Simplex Algorithm;
- dynamic programming, covering both deterministic control problems and stochastic problems;
- network and graph algorithms.
Providing you with an introduction to machine learning, pattern recognition, and data mining techniques, this module will enable you to consider both systems which are able to develop their own rules from trial-and-error experience to solve problems, as well as systems that find patterns in data without any supervision. In the latter case, data mining techniques will make the generation of new knowledge possible, including from very big data sets. This is now fashionably termed 'big data' science. You'll cover a range of topics including:
- machine learning foundations;
- pattern recognition foundations;
- artificial neural networks;
- deep learning;
- applications of machine learning;
- data mining techniques and evaluating hypotheses.
You'll spend around six hours each week in lectures and computer classes for this module.
Optional computer science modules
Students undertake a project in Computer Science that is relevant to their programme of study; in particular, projects undertaken by Artificial Intelligence (AI) students must have a strong AI focus, and projects undertaken by Software Engineering (SE) students must have a strong SE focus. Each project is supervised by an academic member of staff. A project may be based on theoretical or empirical research or software development. Students must relate their project work to current research and/or professional practice, and a suitable evaluation must be included.
Further, relevant professional and ethical aspects must always be considered. It is normally expected that projects will involve software development, but the extent of this depends on the nature of the project. If no or very little software is being developed, then the project must encompass other aspects of similar rigour and intellectual challenge (e.g. mathematical proofs, rigorously conducted research, proper statistical analysis of empirical results, etc.).
Guidelines on the word length of dissertations are flexible to accommodate differing types of project work undertaken. All projects must be agreed with the relevant Supervisor and Course Director.
Designing Intelligent Agents
You’ll be given a basic introduction to the analysis and design of intelligent agents, software systems which perceive their environment and act in that environment in pursuit of their goals. Spending around four hours each week in lectures and tutorials, you’ll cover topics including task environments, reactive, deliberative and hybrid architectures for individual agents, and architectures and coordination mechanisms for multi-agent systems.
Spending four hours a week in lectures and computer classes, you’ll cover the following topics:
- security of the computer;
- network security;
- internet security;
- software and hardware security;
- mobile security;
- and cryptography.
You will gain familiarity with the most common attacks on modern computer systems, and defences against these.
You’ll examine current techniques for the extraction of useful information about a physical situation from individual and sets of images. You’ll cover a range of methods and applications, with particular emphasis being placed on the detection and identification of objects, recovery of three-dimensional shape and analysis of motion. You’ll learn how to implement some of these methods in the industry-standard programming environment MATLAB. You’ll spend around four hours a week in lectures, tutorial and laboratory sessions.
Knowledge Representation and Reasoning
You will examine how knowledge can be represented symbolically and how it can be manipulated in an automated way by reasoning programs. Some of the topics you’ll cover include: first order logic; resolution; description logic; default reasoning; rule-based systems; belief networks. You’ll have two hours of lectures each week for this module.
You’ll learn the principles of 3D computer graphics, focusing on modelling and viewing objects and scenes in a three-dimensional (3D) world on the computer, rendering them to give realism, and projecting them onto a 2D display, in analogy to taking a photo of a 3D world with a camera. Through weekly lectures and laboratory sessions you’ll explore various computer graphics techniques and develop the OpenGL programming skills required for 3D computer graphics. The module demonstrates the benefits of linking theory and practice.
Software Quality Metrics
Through a two hour lecture each week, you’ll be introduced to concepts and techniques for software testing and will be given an insight into the use of artificial and computational intelligence for automated software testing. You’ll also review recent industry trends on software quality assurance and testing.
Fundamentals of Information Visualisation
Information Visualisation is the process of extracting knowledge from complex data, and presenting it to a user in a manner that is appropriate to their needs. This module provides a foundational understanding of some important issues in information visualisation design. You will learn about the differences between scientific and creative approaches to constructing visualisations, and consider some important challenges such as the representation of ambiguous or time-based data. You will also learn about psychological theories that help explain how humans process information, and consider their relevance to the design of effective visualisations.
Optional mathematics modules
Game theory draws on many branches of mathematics (and computing); the emphasis here is primarily algorithmic. The module starts with an investigation into normal-form games, including strategic dominance, Nash equilibria, and the Prisoner’s Dilemma. It then looks at tree-searching, including alpha-beta pruning, the ‘killer’ heuristic and its relatives, before turning to the mathematical theory of games, exploring the connection between numbers and games, including Sprague-Grundy theory and the reduction of impartial games to Nim.
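As a small taste of Sprague-Grundy theory (an illustrative sketch, not part of the module materials): the value of a Nim position is the XOR of its heap sizes, and a position is a loss for the player to move exactly when that value is zero:

```python
# Sprague-Grundy theory: a sum of impartial games has Grundy number
# equal to the XOR of the components' Grundy numbers. For Nim itself,
# a heap of size n has Grundy number n.
from functools import reduce
from operator import xor

def nim_value(heaps):
    """XOR of heap sizes: zero iff the player to move is losing."""
    return reduce(xor, heaps, 0)

print(nim_value([1, 2, 3]))   # 0: losing for the player to move
print(nim_value([2, 4, 5]))   # 3: winning; move so the XOR becomes zero
```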
Coding and Cryptography
This module consists of two main topics of coding theory: error-correction codes and cryptography.
In digital transmission (as for mobile phones), noise that corrupts the message can be very harmful. The idea of error-correcting codes is to add redundancy to the message so that the receiver can recover the correct message even from a corrupted transmission. The module will concentrate on linear error-correcting codes (such as Hamming codes), where encoding, decoding and error correction can be done efficiently. We will also discuss cyclic codes, which are the ones most frequently used in practice.
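The idea of adding redundancy can be seen in the simplest possible scheme, a 3-fold repetition code, which is far less efficient than the linear codes studied in the module but makes the principle concrete (illustrative Python only):

```python
# 3-fold repetition code: send each bit three times; majority voting at
# the receiver corrects any single bit-flip per block of three.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    # Majority vote over each consecutive block of three bits
    return [int(sum(received[i:i + 3]) >= 2)
            for i in range(0, len(received), 3)]

codeword = encode([1, 0, 1])     # [1,1,1, 0,0,0, 1,1,1]
corrupted = codeword[:]
corrupted[4] = 1                 # the channel flips one bit
print(decode(corrupted))         # [1, 0, 1]: the error is corrected
```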
In cryptography, the aim is to transmit a message such that an unauthorised person cannot read it. The message is encrypted and decrypted using some method, called a cipher system.
There are two main types of ciphers: private and public key ciphers. We will discuss basic classical monoalphabetic and polyalphabetic ciphers, as well as more modern public key ciphers such as RSA, together with the elementary properties from number theory needed for them. Key exchange protocols and digital signatures (DSA) are included.
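As a flavour of the number theory involved (illustrative only; real RSA uses primes hundreds of digits long), here is a toy RSA computation in Python using the small primes 61 and 53:

```python
# Toy RSA: key generation, encryption and decryption with tiny primes.
p, q = 61, 53
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # Euler's totient of n: 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi)

message = 65                  # a message encoded as a number < n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(ciphertext, decrypted)       # 2790 65
```

Security rests on the difficulty of recovering d from (e, n) without knowing the factorisation of n, which is exactly the number-theoretic material the module develops.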
This module is concerned with the two main theories of statistical inference, namely classical (frequentist) inference and Bayesian inference. The classical inference component of the module builds on the ideas of mathematical statistics introduced in the Statistical Models and Methods module. Topics such as sufficiency, estimating equations, likelihood ratio tests and best-unbiased estimators are explored in detail. There is special emphasis on the exponential family of distributions, which includes many standard distributions such as the normal, Poisson, binomial and gamma. In Bayesian inference, there are three basic ingredients: a prior distribution, a likelihood and a posterior distribution, which are linked by Bayes' theorem. Inference is based on the posterior distribution, and topics including conjugacy, vague prior knowledge, marginal and predictive inference, decision theory, normal inverse gamma inference, and categorical data are pursued. Common concepts, such as likelihood and sufficiency, are used to link and contrast the two approaches to inference. Students will gain experience of the theory and concepts underlying much contemporary research in statistical inference and methodology.
In this module the ideas of discrete-time Markov chains, introduced in the Probability Models and Methods module, are extended to include more general discrete-state space stochastic processes evolving in continuous time and applied to a range of stochastic models for situations occurring in the natural sciences and industry. The module begins with an introduction to Poisson processes and birth-and-death processes. This is followed by more extensive studies of epidemic models and queueing models, and introductions to component and system reliability. The module finishes with a brief introduction to stochastic differential equations. Students will gain experience of classical stochastic models arising in a wide variety of practical situations. In more detail, the module includes:
- homogeneous Poisson processes and their elementary properties;
- birth-and-death processes - forward and backward equations, extinction probability;
- epidemic processes - chain-binomial models, parameter estimation, deterministic and stochastic general epidemic, threshold behaviour, carrier-borne epidemics;
- queueing processes - equilibrium behaviour of single server queues;
- queues with priorities;
- component reliability and replacement schemes;
- system reliability;
- stochastic differential equations and Ito's lemma.
In this module the concepts of discrete time Markov chains studied in the Probability Models and methods module are extended and used to provide an introduction to probabilistic and stochastic modelling for investment strategies, and for the pricing of financial derivatives in risky markets. The probabilistic ideas that underlie the problems of portfolio selection, and of pricing, hedging and exercising options, are introduced. These include stochastic dynamic programming, risk-neutral measures and Brownian motion. The capital asset pricing model is described and two Nobel Prize winning theories are obtained: the Markowitz mean-variance efficient frontier for portfolio selection and the Black-Scholes formula for arbitrage-free prices of European type options on stocks. Students will gain experience of a topic of considerable contemporary importance, both in research and in applications.
Applied Statistical Modelling
This module extends the theoretical aspects of statistical inference (first met in the Statistical Models and Methods and Fundamentals of Statistics modules) by developing the theory of the generalised linear model and its practical implementation. Initially, the design of experiments to explore relationships between factors and a response is viewed within the context of linear models. The module then extends the understanding and application of statistical methodology established in previous modules to the analysis of discrete data and survival data, which frequently occur in diverse applications. In the module students will be trained in the use of an appropriate high-level statistical package.
This module is concerned with the analysis of multivariate data, in which the response is a vector of random variables rather than a single random variable. A theme running through the module is that of dimension reduction. Key topics to be covered include:
- principal components analysis, whose purpose is to identify the main modes of variation in a multivariate dataset;
- modelling and inference for multivariate data, including multivariate regression data, based on the multivariate normal distribution;
- classification of observation vectors into sub-populations using a training sample;
- canonical correlation analysis, whose purpose is to identify dependencies between two or more sets of random variables.
Further topics to be covered include factor analysis, methods of clustering and multidimensional scaling.