Members of the Astrophysical Sciences department have access to campus high-performance computing (HPC) clusters maintained by Princeton Research Computing.
Princeton Research Computing operates seven systems for the campus research community, with more than 45,000 cores and over 4 PFLOPS of processing power in total. The small machines, Nobel and Adroit, are the entry point for computational research and are accessible to all members of the University community.
Access to the large clusters requires faculty sponsorship; these systems are designated for computationally demanding research problems.
The large machines most used by Astro department members are Della (224 Intel CPU nodes with 28 or 32 cores per node, plus 20 AMD nodes with 128 cores and 2 NVIDIA A100 GPUs per node), Tiger (408 Intel Skylake nodes with 40 cores per node, plus 80 Intel Broadwell nodes with 28 cores and 4 GPUs per node), and Stellar (296 Intel Cascade Lake CPU nodes with 96 cores per node, 187 AMD Rome nodes with 128 cores per node, and 6 AMD GPU nodes with 128 cores and 2 GPUs per node). Della is best suited to serial jobs, while Tiger and Stellar are intended for parallel jobs.
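Jobs on these clusters are submitted through the Slurm scheduler. As an illustration only, a serial job of the kind Della is suited to might be submitted with a batch script along the following lines; the job name, memory and time requests, module version, and analysis script are all placeholders, and the exact module names should be checked against the cluster's own documentation with `module avail`.

```shell
#!/bin/bash
#SBATCH --job-name=serial-test   # placeholder job name
#SBATCH --nodes=1                # a serial job uses a single node...
#SBATCH --ntasks=1               # ...running a single task...
#SBATCH --cpus-per-task=1        # ...on a single core
#SBATCH --mem-per-cpu=4G         # memory per core; adjust to your job
#SBATCH --time=01:00:00          # walltime limit (HH:MM:SS)

module purge
module load anaconda3/2023.9     # placeholder version; check `module avail`

python my_analysis.py            # placeholder analysis script
```

The script would be submitted with `sbatch job.slurm`, and its queue status checked with `squeue -u $USER`.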
Stellar has two visualization nodes, while files on the Della and Tiger scratch disks can be accessed for analysis on Tigressdata.
The /tigress and /projects file systems provide longer-term data storage.
Students and research staff members seeking to use one of the systems managed by Princeton Research Computing should read the Getting Started guide and contact their faculty mentor about requesting an account. Before using any of these shared resources, it is important to understand their proper use, and users are expected to avoid these mistakes. Specific guidelines for department users of Stellar, intended to keep queues short, are given in this document.
Note that the campus clusters are intended for moderate-scale computational projects; those requiring more substantial computational resources should seek access to national supercomputing facilities.