The Research Computing team at Queen's University provides a number of centrally managed High Performance Computing (HPC) systems (detailed below). Researchers can use these systems to solve compute- and data-intensive problems.
This cluster is the replacement for the Dell Cluster and also provides a number of dedicated resources for the School of Medicine.
Installation date: Nov 2015
- Compute nodes run the CentOS operating system
- HP hardware
- Compute nodes: 16 dedicated to SMDB and 45 general compute nodes (due April 2016)
- High memory nodes ranging from 128GB to 1TB of RAM
- 500TB of lustre parallel file system for scratch
- Each node has 20 cores
- Visualisation node for OpenGL applications
The Windows cluster supports Windows applications and users who want to work in a Microsoft/GUI environment.
Installation year: 2013
- Windows Server 2012 operating system
- 16 x Dell C6220 servers
=> 256 cores
- Dual Intel 8-core E5-2660 processors (64 GB memory)
- 10Gb interconnect
- 30 terabytes of storage
The University's main Linux HPC cluster.
Installation year: 2009
- Compute nodes run Scientific Linux
- Dell hardware
- 112 x R410 compute nodes (24GB memory)
- 4 x R610 compute nodes (48GB memory)
=> 928 cores for computation
- Infiniband interconnect on 40 nodes
- 48 terabyte storage system
Note: this system has a planned decommission date of Nov 201
A wide range of scientific software is available on all the systems. New software can be installed on request, provided a valid software license is available.
Support for researchers
The Research Support team provides help in a number of ways:
- Help with job scripts
- Software installation
- Programming help
- Code optimization
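To illustrate the kind of job-script help on offer, a minimal batch script for one of the Linux clusters might look like the sketch below. This is an assumption-laden example: the SLURM scheduler, the job name, and the resource requests shown are hypothetical and will differ on the actual systems, so treat it as a starting point rather than a working recipe.

```shell
#!/bin/bash
# Minimal example batch script (SLURM assumed; the directives below
# are scheduler comments, and all names/values are hypothetical).
#SBATCH --job-name=example_job
#SBATCH --ntasks=20          # one full node, assuming 20 cores per node
#SBATCH --time=01:00:00      # one-hour wall-clock limit

# Report where the job is running.
echo "Job running on $(hostname)"

# The actual application would be launched here, e.g.:
# srun ./my_application
```

A script along these lines would be submitted with `sbatch script.sh`; the Research Support team can adapt it to a specific code and to whichever scheduler a given cluster runs.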
Accessing the service
For information on using the HPC resources, please email email@example.com.