The Research Computing team at Queen's University provide a number of centrally managed High Performance Computing (HPC) systems, detailed below. Researchers can use these systems to solve compute- and data-intensive problems.
Kelvin is a scalable HPC and Research Data Storage environment; this cluster replaces the previous Dell cluster.
Installation date: November 2015
- Compute nodes run the CentOS operating system
- HP hardware
- 51 HP Apollo compute nodes with Intel Haswell generation processors
- High-memory nodes with between 128GB and 1TB of RAM
- 500TB Lustre parallel file system for scratch storage
- Each node has 20 cores
- All compute nodes and storage are connected by an InfiniBand fabric
- Visualisation node (NVIDIA Quadro K4200) for OpenGL applications; access must be requested.
- Private cloud provision for applications such as Galaxy and CLC Bio.
The Windows cluster provides support for Windows applications and for users who want to work in a Microsoft/GUI environment.
Installation year: 2013
- Windows Server 2012 operating system
- 16 x Dell C6220 servers (256 cores in total)
- Dual 8-core Intel Xeon E5-2660 processors and 64GB of memory per node
- 10Gb/s interconnect
- 30TB of storage
A wide range of scientific software is available on all the systems. New software can be installed on request, provided a valid software licence is available.
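The page does not say how installed software is exposed to users, but clusters of this type commonly use environment modules. Assuming Kelvin follows that convention (the module name below is purely illustrative), a typical session might look like:

```
# List the software installed on the cluster (assuming the common
# environment-modules setup; the module name below is illustrative)
module avail

# Load a package and confirm it is on your PATH
module load apps/gcc
which gcc
```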
Support for researchers
The Research Support team provide help in a number of ways:
- Training: a 90-minute introduction to Kelvin can be organised and delivered on request.
- Help with job scripts (see the example script after this list)
- Software installation
- Programming help
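As a starting point for the job-script help mentioned above, a minimal SGE batch script might look like the sketch below. The parallel environment name (smp), the module name, and the program are assumptions; the correct values for Kelvin are available from the support team.

```
#!/bin/bash
# Minimal SGE job script: a sketch, not Kelvin's exact configuration
#$ -N example_job         # job name
#$ -cwd                   # run from the submission directory
#$ -pe smp 20             # request 20 cores (PE name is site-specific)
#$ -l h_rt=01:00:00       # request one hour of wall-clock time
#$ -o example_job.out     # standard output file
#$ -e example_job.err     # standard error file

module load apps/myapp    # illustrative module name
./my_program input.dat    # illustrative program and input
```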
Accessing the service
Email email@example.com and provide the following information:
- Staff or student number
- Name of supervisor
- What software you intend to run
- The number and size (in cores) of the jobs you plan to run
- Any specific data storage requirements
- In general, support requests that can be dealt with in less than an hour will be completed within 24 hours. Requests requiring more time will be logged and dealt with within 1-2 weeks, depending on the number of outstanding support requests.
- New accounts are usually created within 2 days of a request.
- The cluster runs the SGE job scheduler, which ensures fair use of the resources (see the example commands after this list).
- Users are restricted to using no more than 64 cores at the same time.
- Home folders have a quota of 50GB.
- There is 500GB of scratch space available.
- There is 1TB of local disk on each compute node.
- If you have never used Linux before, please allow time to get up to speed with the command line.
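To illustrate the SGE workflow mentioned above, the standard commands for submitting and monitoring jobs are shown below; Kelvin may use site-specific options, so check with the support team.

```
qsub example_job.sh    # submit the script above; SGE prints a job ID
qstat                  # list your queued and running jobs
qstat -j 12345         # detailed information on one job (ID is illustrative)
qdel 12345             # delete a job from the queue
```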