
Other Research Projects

Non-functional Concern Management in Parallel/Distributed Systems

Supervisor: Dr. Peter Kilpatrick

In modern computing systems it is increasingly the case that the application programmer executes his/her program in an environment which is physically at some remove from the programmer and over which he/she has limited (if any) control. Thus the programmer is less able to adapt the program to changes in the environment than would traditionally have been the case. As a consequence, means must be found by which an application program can manage itself with respect to changes in its environment, or indeed in relation to significant changes to the nature or volume of data being processed. This is particularly the case for long-running programs.

The concept of autonomic management has been introduced, initially by IBM, to address such self-management. The idea is that a program is equipped with an autonomic manager which monitors and adjusts program behaviour with respect to non-functional concerns such as performance, security, energy usage, etc. Such autonomic management presents a huge challenge, even in the case of sequential programs: for parallel/distributed programs the task is even greater.

In an ideal vision, an application programmer would equip his/her program with an autonomic manager and a contract specifying constraints on the non-functional concerns of the program; the manager would then ensure maintenance of the contract throughout the lifetime of the program, with no human interaction involved. This is a very demanding target. Work to date with colleagues in Italy has taken some early steps toward this vision. The concept of a behavioural skeleton has been introduced: a parallel pattern/skeleton (such as a pipeline or farm) together with an autonomic manager. The idea is that a programmer constructs a parallel program as a composition of behavioural skeletons. This separates the programming of the core business code from the handling of parallelism (taken care of by the skeleton structure) and the handling of the non-functional concerns (taken care of by the manager).
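To make the separation concrete, the following is a minimal, hypothetical sketch (in Python, with illustrative names such as Farm and AutonomicManager that belong to no existing framework) of a task-farm skeleton bundled with a manager that monitors throughput and adds workers when a simple performance contract is violated. It illustrates the idea only, not the behavioural skeleton implementations developed with the Italian colleagues.

    # Hypothetical sketch: a task-farm "behavioural skeleton" = farm pattern + autonomic manager.
    # Names (Farm, AutonomicManager, min_throughput) are illustrative assumptions only.
    import threading, queue, time

    class Farm:
        """Task-farm skeleton: a pool of workers draining a shared queue (the business code)."""
        def __init__(self, worker_fn, n_workers=2):
            self.worker_fn = worker_fn
            self.tasks = queue.Queue()
            self.completed = 0
            self.lock = threading.Lock()
            self.running = True
            self.workers = []
            for _ in range(n_workers):
                self.add_worker()

        def add_worker(self):
            t = threading.Thread(target=self._work, daemon=True)
            t.start()
            self.workers.append(t)

        def _work(self):
            while self.running:
                try:
                    item = self.tasks.get(timeout=0.1)
                except queue.Empty:
                    continue
                self.worker_fn(item)
                with self.lock:
                    self.completed += 1

        def submit(self, item):
            self.tasks.put(item)

    class AutonomicManager:
        """Monitors the farm and enforces a simple performance contract (tasks/second)."""
        def __init__(self, farm, min_throughput, max_workers=8):
            self.farm = farm
            self.min_throughput = min_throughput
            self.max_workers = max_workers

        def step(self, interval=1.0):
            before = self.farm.completed
            time.sleep(interval)
            rate = (self.farm.completed - before) / interval
            # Contract violated and headroom available: adapt by adding a worker.
            if rate < self.min_throughput and len(self.farm.workers) < self.max_workers:
                self.farm.add_worker()
            return rate

    if __name__ == "__main__":
        farm = Farm(lambda x: time.sleep(0.05), n_workers=1)   # business code: a dummy task
        manager = AutonomicManager(farm, min_throughput=30)    # contract: at least 30 tasks/s
        for i in range(500):
            farm.submit(i)
        for _ in range(5):
            print("throughput:", manager.step(), "workers:", len(farm.workers))
        farm.running = False

In a full behavioural skeleton framework the manager would also negotiate with the managers of enclosing and sibling skeletons, which is precisely the kind of interaction this project would investigate.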

The aim of the project is to build a framework in which the issues associated with the above vision can be investigated: in particular, interactions among managers across the skeleton hierarchy and among managers of differing (perhaps in some way competing, e.g. performance v. energy efficiency) concerns; and also how contracts may be constructed as a means of specifying constraints on behaviour.

 

Generic sensor interfaces

Supervisors: Dr. I. Spence and Dr. C. Gillan

Description:

With the proliferation of different kinds of sensor attached to low-powered embedded computing devices has come the challenge of writing dependable software to interface with such sensors. There is a lack of agreement on the use of standard protocols, the rates and formats of data can vary widely, and there is the ever-present risk of errors in the hardware. There is a clear opportunity to use Software Product Line techniques to model the differences and develop generic interfaces which can then be targeted to particular sensors while retaining the essential efficiency. The project will build on current work at QUB on feature modelling of embedded systems for analysis of resource usage.
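As a rough illustration of the intent (not of any existing QUB tooling), the sketch below shows, in Python, one generic sensor interface whose variable features (data format, byte order, scaling, fault checking) are selected from a feature-model-style configuration; all feature names and classes are illustrative assumptions.

    # Hypothetical sketch: a single sensor abstraction configured from a
    # feature-model-style dictionary, in the spirit of Software Product Line variability.
    import struct

    class SensorError(Exception):
        pass

    class GenericSensor:
        def __init__(self, read_raw, features):
            """
            read_raw: callable returning raw bytes from the device.
            features: dict of choices from the product line's feature model, e.g.
                      {"format": "int16", "byte_order": "little", "scale": 0.01, "fault_check": True}
            """
            self.read_raw = read_raw
            fmt = {"int16": "h", "uint16": "H", "int32": "i"}[features["format"]]
            order = "<" if features["byte_order"] == "little" else ">"
            self._unpack = struct.Struct(order + fmt)
            self._scale = features.get("scale", 1.0)
            self._fault_check = features.get("fault_check", False)

        def read(self):
            raw = self.read_raw()
            if self._fault_check and len(raw) != self._unpack.size:
                raise SensorError("malformed frame from sensor hardware")
            (value,) = self._unpack.unpack(raw)
            return value * self._scale

    # Example: the same generic interface "targeted" at a hypothetical little-endian
    # 16-bit temperature sensor reporting hundredths of a degree Celsius.
    if __name__ == "__main__":
        fake_device = lambda: (2315).to_bytes(2, "little")   # 23.15 degrees C
        temp = GenericSensor(fake_device, {"format": "int16",
                                           "byte_order": "little",
                                           "scale": 0.01,
                                           "fault_check": True})
        print(temp.read())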

 

Parallel data structures to accelerate SAT solvers

Supervisor: Dr. I. Spence

Description:

Programs which use large data structures with essentially random data access patterns present a significant challenge for the efficient use of high performance computing technology. When this is combined with a search and learning strategy which is difficult to parallelise, the result is a class of problems whose solution has not seen great benefit from recent hardware advances. These are characteristics of solvers for the satisfiability (SAT) problem, which is of great theoretical and practical interest. The best attempts to date which have used parallel computation to solve this problem have merely run a number of different solvers concurrently and independently, waiting for the first to find an answer.

This project will investigate carefully the data structures used by modern SAT solvers and look for opportunities to improve performance by tuning data access patterns and, in particular, by identifying and exploiting possibilities of collaboration amongst parallel solvers. In recent years the regular international competition for SAT solvers (www.satcompetition.org) has placed a growing emphasis on solvers running on parallel machines, but results have not been very good so far. The competition would provide an excellent vehicle for demonstrating progress.
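One well-known form of such collaboration is the sharing of short learnt clauses between solvers, so that a conflict analysed by one solver prunes the search of the others. The sketch below (Python, with a placeholder standing in for a real CDCL solver's conflict analysis) illustrates only the shared-pool machinery that such collaboration needs; it is an assumption about the general shape, not a description of any particular competition solver.

    # Hypothetical sketch: a thread-safe pool through which parallel solver workers
    # exchange short learnt clauses.  The per-worker "learning" is a stand-in for
    # the conflict analysis of a real CDCL SAT solver.
    import threading

    class SharedClausePool:
        """Thread-safe pool of learnt clauses, filtered by clause length."""
        def __init__(self, max_len=2):
            self._clauses = set()
            self._lock = threading.Lock()
            self._max_len = max_len

        def publish(self, clause):
            clause = tuple(sorted(clause))
            if len(clause) <= self._max_len:          # share only short (high-value) clauses
                with self._lock:
                    self._clauses.add(clause)

        def snapshot(self):
            with self._lock:
                return list(self._clauses)

    class SolverWorker(threading.Thread):
        def __init__(self, name, pool, learnt_script):
            super().__init__(daemon=True)
            self.name = name
            self.pool = pool
            self.learnt_script = learnt_script        # placeholder for real conflict analysis
            self.imported = []

        def run(self):
            for clause in self.learnt_script:         # pretend each iteration hits a conflict
                self.pool.publish(clause)             # export what this worker just learnt
                self.imported = self.pool.snapshot()  # import what the others have learnt

    if __name__ == "__main__":
        pool = SharedClausePool(max_len=2)
        # Two hypothetical workers "learning" different clauses (literals as signed ints).
        w1 = SolverWorker("w1", pool, [(-3,), (1, 4), (2, 5, 7)])   # last clause too long to share
        w2 = SolverWorker("w2", pool, [(-1, 2), (6,)])
        w1.start(); w2.start(); w1.join(); w2.join()
        print("shared pool:", sorted(pool.snapshot()))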

 

ERA: Energy Rating for Algorithms

Supervisors: Professor D Nikolopoulos, Dr P Milligan

Project overview:

Green computing, green IT, and ICT sustainability are terms used to reflect the growing concern about the amount of energy consumed in the operation of all computing equipment. The continued growth and uptake of high performance systems, whether traditional supercomputers or the more recently introduced multi- and many-core disruptive architectures, makes them a target for the application of energy reduction strategies.

Traditionally, users of high performance computing (HPC) systems have focused on GFlops per dollar as a measure of success, as reflected in the listing of the top performing HPC systems (the Top500 list). However, a new list has been produced to complement the Top500: the Green500, announced at the well-known Supercomputing (SC) conference in 2007, rates HPC systems by GFlops per watt.

To date, interest in this topic has focused on mechanisms that model and measure the energy efficiency of a target architecture, and considerable progress has been made in this area with the development of a range of tools. However, there is limited work addressing the energy rating of the software that utilizes HPC platforms. One avenue of research within the HPDC cluster has focused on the characterization of an algorithm, namely the identification of repetition, selection and similar constructs, their interaction, and how this, combined with data access patterns, affects the execution of an algorithm on a given HPC system. At present the work uses systems employing NVIDIA GPGPU and Texas Instruments DSP technologies.

The goal of this project would be to extend this characterization strategy to include energy consumption considerations for algorithms and to explore the combination of such a model with the existing rating systems for hardware to produce a novel system that would guide users on balancing the energy consumption of their work against the traditional throughput measures.

Objectives:

  • develop a characterization model that reflects energy consumption,
  • encapsulate this model into a middleware solution that will capture both compile-time and run-time energy 'characteristics' and offer development guidance to the programmer, and
  • deploy the middleware within a test environment, e.g. the LLVM infrastructure, and evaluate the functionality in terms of energy performance against throughput performance for a selected range of well-known and frequently used algorithms (a minimal measurement sketch follows this list).
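As a rough indication of what a run-time energy 'characteristic' might look like, the sketch below times an illustrative workload and reads the package energy counter that Intel's RAPL interface exposes on many Linux systems (the sysfs path, and whether it is readable without privileges, varies by machine), reporting operations per second alongside operations per joule. The workload and the "operations" count are placeholders, not part of the existing characterization work.

    # Hypothetical sketch: energy vs. throughput for one run of an algorithm,
    # assuming a Linux machine exposing Intel RAPL counters at the path below.
    import time

    RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package energy, microjoules

    def read_energy_uj():
        with open(RAPL_FILE) as f:
            return int(f.read())

    def energy_rating(workload, operations):
        """Return (seconds, joules, ops/s, ops/J) for one run of `workload`."""
        e0, t0 = read_energy_uj(), time.time()
        workload()
        e1, t1 = read_energy_uj(), time.time()
        joules = (e1 - e0) / 1e6          # NB: ignores counter wrap-around on long runs
        secs = t1 - t0
        return secs, joules, operations / secs, operations / joules

    if __name__ == "__main__":
        n = 2_000_000
        naive_sum = lambda: sum(i * i for i in range(n))     # stand-in algorithm
        secs, joules, ops_per_s, ops_per_j = energy_rating(naive_sum, operations=n)
        print(f"{secs:.2f} s, {joules:.2f} J, {ops_per_s:.2e} ops/s, {ops_per_j:.2e} ops/J")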

 

Enabling Real-time Data-Analytics in Cloud Computing

Supervisors: Prof. Dimitrios Nikolopoulos and Dr. Mustafa Rafique (IBM Research, Dublin)

Project Description:

Real-time analytics in critical sectors such as the capital markets and ICU healthcare presents computing systems with workloads that simultaneously require low latency, real-time response, high throughput, and in-memory processing of both streaming data and historic (stationary) data.

These problems raise unique challenges for computing systems, even those using bleeding-edge HPC hardware technologies. For example, high-end IBM servers sustain transaction processing latencies of 30 microseconds for high-frequency trading markets, while industry targets are set at sub-microsecond latencies to cope with load spikes, and require trading to be coupled with real-time analytics that detect events triggering spikes (e.g., from news feeds).

Moving real-time analytics workloads to the cloud is appealing because it relieves the customer of the need to maintain expensive IT infrastructure and reduces operational risk, along with providing flexibility and agility. Unfortunately, the cloud imposes latencies which are not tolerable for some of the most important market domains of real-time analytics. Thus, current state-of-the-art clouds can only be used as a platform for non-latency-sensitive workloads. Furthermore, as a virtualized computing environment, the cloud remains ill-suited to co-hosting real-time and data-intensive workloads. The underpinning virtualization technologies lack the mechanisms and policies to provide real-time QoS guarantees and proportional resource sharing for such workloads.

Objectives:

The technical objective is to identify latency bottlenecks in system virtualization and ameliorate some of these bottlenecks, through innovation in three fundamental aspects of virtualization technology: scheduling, memory management, and I/O, along with employing hardware accelerators as on-demand computing resources to accelerate the data processing of latency-sensitive workloads.

This project would explore:

  1. the introduction of real-time execution guarantees in VM schedulers. Current VM schedulers are agnostic to the performance and latency requirements of the VMs. We plan to develop techniques for VM schedulers that incorporate the latency requirements of the hosted applications in making effective scheduling decisions (a minimal sketch of this idea follows the list);
  2. the virtualization of hardware accelerators and their deployment in Cloud environments for latency-sensitive applications. Currently hardware accelerators, such as GPUs and FPGAs, are only provided as dedicated computing resources for cloud users. We plan to develop technologies to use them as on-demand computing resources in cloud setups;
  3. the introduction of techniques for guaranteed memory bandwidth allocation across VMs in VM monitors. Current VM monitors do not provide dedicated memory bandwidth to VMs. We plan to develop technologies for VM monitors to guarantee specific memory bandwidth to the VMs based on the requirements of the hosted applications; 
  4. the integration of the above with state-of-the-art bypassing mechanisms (including complete host processor bypassing) for low-latency I/O when processing high-frequency events.
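As a minimal sketch of the scheduling direction in point 1 (illustrative only; a real implementation would live inside the VM monitor and interact with its credit accounting), the Python fragment below picks the VM whose declared latency requirement yields the earliest deadline, unlike a latency-agnostic credit or round-robin scheduler. The VM class and latency figures are assumptions made for the sketch.

    # Hypothetical sketch: an earliest-deadline-first choice driven by each VM's
    # declared latency requirement, instead of a latency-agnostic scheduling policy.
    class VM:
        def __init__(self, name, latency_req_us):
            self.name = name
            self.latency_req_us = latency_req_us    # declared by the hosted application
            self.last_ran_us = 0

        def deadline(self):
            # Must be scheduled again within its declared latency requirement.
            return self.last_ran_us + self.latency_req_us

    class LatencyAwareScheduler:
        """Earliest-deadline-first over the hosted VMs."""
        def __init__(self, vms):
            self.vms = vms

        def pick_next(self, now_us):
            vm = min(self.vms, key=lambda v: v.deadline())
            vm.last_ran_us = now_us
            return vm

    if __name__ == "__main__":
        vms = [VM("trading", latency_req_us=50),       # latency-critical analytics VM
               VM("batch", latency_req_us=200)]        # throughput-oriented VM
        sched = LatencyAwareScheduler(vms)
        for tick_us in range(0, 500, 50):              # 50-microsecond scheduling quanta
            print(tick_us, "->", sched.pick_next(tick_us).name)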