Research Areas

Approximate Computing

Computing systems are characterised by several metrics, among which performance, energy consumption and the value of the computation are strongly correlated.

It is well known that there is often a choice in the "correct" values that result from a computation. By changing the computation and the outcome it produces, one also affects performance and energy consumption. In this research, we investigate the relationship between these metrics and aim to develop techniques that exploit this trade-off.

We have an interest in both problem-specific and problem-independent techniques to make this trade-off. Problem-independent techniques enable approximate computing through automated analysis of significance in computation and data and significance-aware mapping of applications to architectures through the hardware/software interface.

Problem-specific techniques explore how a system can exploit algorithmic insight on correctness to achieve energy-efficient program execution.
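One widely cited problem-independent technique of this kind is loop perforation: skipping a fraction of loop iterations to trade result accuracy for reduced work. The sketch below is purely illustrative (the function names and data are ours, not a specific system of this group):

```python
def mean_exact(data):
    """Exact mean over all elements."""
    return sum(data) / len(data)

def mean_perforated(data, skip=2):
    """Approximate mean via loop perforation: process only every
    `skip`-th element, trading accuracy for fewer iterations."""
    sample = data[::skip]
    return sum(sample) / len(sample)

data = list(range(1_000_000))
exact = mean_exact(data)
approx = mean_perforated(data, skip=4)  # ~1/4 of the work
rel_err = abs(approx - exact) / exact   # small here: data is uniform
```

For this uniform input the relative error is tiny while the iteration count drops by 4x; significance analysis is what decides, per computation, whether such a perforation is safe.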

Energy Characterisation and Optimisation

This research explores new methods for measuring and accounting energy on many-core computing systems. The overarching goals are to improve the fidelity of energy measurement via a combination of hardware instrumentation and modelling techniques and achieve accurate accounting of energy between both software abstractions and hardware components.
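A common way to combine instrumentation with modelling is a linear energy model fitted over hardware event counters. The sketch below is a hypothetical illustration (the coefficients and counter names are assumptions, not the group's actual model):

```python
def estimate_energy(elapsed_s, counters, static_power_w, cost_per_event_j):
    """Hypothetical linear energy model: static power integrated over
    time plus a per-event energy cost for each hardware counter
    (e.g. cache misses, DRAM accesses). In practice the coefficients
    would be fitted against measured power traces."""
    dynamic = sum(cost_per_event_j[name] * n for name, n in counters.items())
    return static_power_w * elapsed_s + dynamic

# Illustrative numbers only: a 1 s run with assumed coefficients.
energy_j = estimate_energy(
    elapsed_s=1.0,
    counters={"l2_misses": 2_000_000, "dram_accesses": 500_000},
    static_power_w=10.0,
    cost_per_event_j={"l2_misses": 5e-9, "dram_accesses": 20e-9},
)
```

Per-counter attribution is also what enables accounting: each event can be charged to the software abstraction (thread, task, process) that caused it.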

Research on energy optimisation explores methods for reducing the energy cost of computation and data transfers via intelligent mapping of applications to architectures, data placement, concurrency control and coordinated cross-component power management.

We also explore the integration of energy characterisation and optimisation with
parallel programming models and languages.

Related projects: GEMSCLAIM

High performance computational science

This research area is focused on devising and implementing numerical algorithms and software that can exploit high-performance and novel architectures to enable virtual experiments to be performed in computational science application domains.

A particular focus is on the continued development of 2-D R-matrix techniques, which are important in simulating electron collisions with atoms and ions.

Parallel Programming using Task Dataflow

The goal of this research is to improve productivity of parallel programming by making it more akin to sequential or single-threaded programming. This goal is approached through investigating task dataflow parallel programming languages, in particular the language concepts required, compilation techniques and runtime systems.

Task dataflow has two important properties that are absent in mainstream parallel programming languages: correctness has priority over performance, and the programmer is not burdened with specifying parallelism. Instead, they only describe side effects, such as the data structures that are read from or written to. The runtime system schedules the tasks to extract as much parallelism as possible while respecting correctness and the constraints introduced by the side effects.
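The essence of such a runtime can be sketched in a few lines: tasks declare read/write footprints, dependencies are derived from overlaps between them, and tasks whose dependencies are satisfied are ready to run. This is a minimal illustrative model (our own names and simplifications, not a specific language implementation), and it executes ready tasks sequentially where a real runtime would run them concurrently:

```python
class Task:
    def __init__(self, fn, reads=(), writes=()):
        self.fn, self.reads, self.writes = fn, set(reads), set(writes)

def run_dataflow(tasks):
    """Minimal task-dataflow runtime sketch: a later task depends on
    any earlier task whose writes overlap its reads or writes (RAW,
    WAW), or whose reads overlap its writes (WAR). Execute in
    dependency order; every task in `ready` could run in parallel."""
    deps = {t: set() for t in tasks}
    for i, t in enumerate(tasks):
        for earlier in tasks[:i]:
            if earlier.writes & (t.reads | t.writes) or t.writes & earlier.reads:
                deps[t].add(earlier)
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done and deps[t] <= done]
        for t in ready:  # independent tasks: parallel in a real runtime
            t.fn()
            done.add(t)
            order.append(t)
    return order

# Example: two independent producers and a consumer of both results.
log = []
a = Task(lambda: log.append("a"), writes={"x"})
b = Task(lambda: log.append("b"), writes={"y"})
c = Task(lambda: log.append("c"), reads={"x", "y"}, writes={"z"})
run_dataflow([a, b, c])
```

Tasks `a` and `b` have disjoint footprints and are ready together, so they may execute in either order or simultaneously; `c` must wait for both, which is exactly the correctness guarantee the programmer gets for free.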

This research concerns the development of these languages and their compiler and runtime system support. It also considers how these languages can be used beneficially in the context of some long-standing problems, such as orchestrating data placement and movement in the memory hierarchy.

Non-Volatile Memory (NVRAM) and Hybrid Memory Systems

Byte-addressable non-volatile memory technology is rapidly maturing and promises to offer persistence, increased density and reduced energy consumption at speeds close to DRAM latency.

NVRAM has the potential to make disruptive changes to the design of computing systems and their software. We aim to identify novel uses of NVRAM in the memory hierarchy that exploit its particular benefits.

We are also investigating techniques to manage hybrid memory hierarchies, composed of both NVRAM and DRAM/SRAM components, through system software and methods to integrate hybrid memory hierarchies with programming languages and runtime systems.
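A simple example of such system-software management is hotness-based page placement: keep the most frequently accessed pages in fast DRAM and demote the rest to denser but slower NVRAM. The policy below is an illustrative assumption of ours, not a specific system's algorithm:

```python
def place_pages(access_counts, dram_capacity):
    """Hotness-based placement sketch for a hybrid DRAM/NVRAM
    hierarchy: rank pages by access count, keep the hottest
    `dram_capacity` pages in DRAM, and place the rest in NVRAM."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    dram = set(ranked[:dram_capacity])
    nvram = set(ranked[dram_capacity:])
    return dram, nvram

# Four pages, room for two in DRAM: the hot pages p0 and p2 stay fast.
dram, nvram = place_pages(
    {"p0": 900, "p1": 5, "p2": 300, "p3": 1}, dram_capacity=2
)
```

A real system would gather the access counts from hardware sampling and migrate pages incrementally; integrating such hints with programming languages and runtimes is part of the research question.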

Related projects: NovoSoft

Programming Models for Parallel/Distributed Systems

Historically the domain of parallel programming has largely been equated with High Performance Computing (HPC) and programmers of parallel systems have been relatively few in number.

With the coming of multicore and accelerator technologies, the demand for parallel programming skills is set to increase dramatically. It is unrealistic to think that the vast majority of programmers will find themselves equipped with the extensive business specific and system specific knowledge that HPC programmers have had in the past. Parallel programming must be made less demanding by the creation of suitable high-level abstractions for harnessing the power of parallel architectures.

Ongoing work at QUB in this area is looking at the use of parallel patterns (see, for example, the ParaPhrase project) and at employing ideas from autonomic computing to manage non-functional concerns such as performance, energy usage and security.
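The task-farm is a canonical example of such a pattern: the programmer supplies only a worker function, and the pattern hides all thread creation, distribution and collection. A minimal sketch using Python's standard library (our illustration, not ParaPhrase's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, inputs, nworkers=4):
    """Task-farm pattern sketch: apply `worker` to every input in
    parallel across a pool of workers, returning results in input
    order. The caller never touches threads or synchronisation."""
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(worker, inputs))

squares = farm(lambda x: x * x, range(8))
```

Because the pattern owns the execution strategy, an autonomic layer can retune it (worker count, scheduling) for performance or energy without touching application code.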

Related projects: ParaPhrase

Parallel Satisfiability Solvers

Determining the satisfiability of a boolean proposition is a problem of great theoretical and practical importance.

It is an open question whether polynomial-time algorithms exist, yet great progress has been made in the use of satisfiability solvers to address real-world problems. This progress has been driven and monitored by the SAT competition, at which QUB has demonstrated successful solvers and extremely challenging benchmarks.

Exploitation of parallel computation for SAT solving has so far met with limited success and this is a research area with great potential.
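The sequential search at the heart of such solvers can be sketched as a minimal DPLL procedure: unit propagation followed by branching. The code below is a deliberately simple illustration (CNF as frozensets of signed integer literals is our encoding choice); one parallelisation approach, the "portfolio", runs several such searches with different heuristics and takes the first answer:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL satisfiability check. `clauses` is CNF given as
    frozensets of integer literals (positive = variable, negative =
    its negation). Returns a satisfying {var: bool} dict, or None."""
    assignment = dict(assignment or {})
    changed = True
    while changed:  # unit propagation: assign forced literals
        changed = False
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            lits = [l for l in clause if assignment.get(abs(l)) is None]
            if not lits:
                return None  # conflict: clause falsified
            if len(lits) == 1:
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(clause)
        clauses = simplified
    if not clauses:
        return assignment  # every clause satisfied
    # Branch on the first unassigned variable, trying both values.
    var = abs(next(l for c in clauses for l in c
                   if assignment.get(abs(l)) is None))
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2): any model must set x2 = True.
model = dpll([frozenset({1, 2}), frozenset({-1, 2})])
```

The difficulty with parallelising this search is that modern solvers gain most of their power from learned clauses and tightly coupled heuristics, which are hard to partition across cores without losing the learning.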

High Performance Embedded Systems

As well as making efficient use of very powerful supercomputers, the cluster is also involved at the other end of the computing scale in the exploitation of small embedded systems.

Handhold is an EU-funded project to develop a mobile device that detects banned substances at border crossings and airports. QUB is involved in developing the embedded software that enables the device to work efficiently and consume little power, and is also investigating the use of Feature Models to represent and predict the resources used by such systems.

Related projects: Handhold

CPC Program Library

The Library's function is the storage and dissemination of refereed computer programs in physics and physical chemistry, whose detailed descriptions have been published in the journal Computer Physics Communications.
The Program Library currently holds over 2200 refereed computer programs contributed by scientists from all over the world. As such, it represents a major repository of computational knowledge and technique. These programs range in size from under one thousand to tens of thousands of lines of code. The contributed programs have been coded in a variety of programming languages, including Algol, Ada, C, C++, Lisp, Mathematica, Maple, Occam and Pascal; however, the vast majority of contributions are programmed in Fortran.

Director: N.S. Scott

C. Phillips (Program Librarian) 
J. Lewis (Program Library Assistant)