
College of Arts & Sciences
Computing and Information Technology

Planck User Guide

System Overview

Planck is a hybrid GPU-CPU supercomputer, similar in architecture to the top hybrid GPU systems on the current Top 500 supercomputer list. The cluster was configured and purchased through USC NSF EPSCoR Track I funds.

  • The Planck cluster has a total of 20 HP SL250 compute nodes. Each node has 12 Intel Xeon 2.8 GHz cores and 24 GB of RAM.
  • Planck has 15 nodes with 3 NVIDIA M1060 GPUs each (240 cores per GPU) and 3 nodes with 3 NVIDIA M2070 GPUs each (448 cores per GPU).
  • All nodes are connected by a fast InfiniBand network (40 Gb/s).
  • Data storage is based on 2 MSA 2312 SAS devices.
  • 4 TB of scratch space (not backed up) and 11 TB of backed-up storage.
Accessing Planck
Within the USC network:
  • Log in using any SSH client with the login name and password you received when your compute account was created. Port 22 is used for the connection by default.
  • Example: ssh username@hostname, where hostname is the Planck login node. Your DUO login options will be prompted, and then you will need to enter your Planck password.
  • To request a DUO account, please follow USC's DUO enrollment instructions.
Outside the USC network:
  • In order to access Planck from home (or anywhere outside the USC network) you will need to install a VPN client. To do so you must have Java and a web browser (Firefox, Internet Explorer, Safari, or Google Chrome) installed on your local computer.
  • To get the VPN client, go to the USC website; the download URLs differ for students and faculty.
There are several steps for getting the VPN client installed:
  1. Enter your network username and password, then click Sign In.
  2. Wait while the initial setup begins.
  3. Click Start.
  4. Wait for the VPN client to launch for the first time.
  5. A new window will pop up. Once the status shows connected, you have successfully logged into the VPN client.
  6. If you want to terminate the VPN session, right-click on the Juniper icon in your system tray and select Sign Out.
How to set up password-less SSH access. (Useful for automating and scripting runs.)
A script does this for you automatically on your first login; if you ever need to reconfigure it, follow the steps below.
In your terminal type:
  • ssh-keygen -t rsa
Then the following will appear:
  • Generating public/private rsa key pair.
  • Enter file in which to save the key (/home/user_id/.ssh/id_rsa): press ENTER
  • Enter passphrase (empty for no passphrase): press ENTER
  • Enter same passphrase again: press ENTER
  • Your identification has been saved in /home/user_id/.ssh/id_rsa
  • Your public key has been saved in /home/user_id/.ssh/id_rsa.pub
  • The key fingerprint is:
Go to the .ssh directory by typing
  • cd .ssh
Copy the public key into the authorized keys file by typing
  • cp id_rsa.pub authorized_keys
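The steps above can be sketched as a single non-interactive script. This is only a sketch: it writes into a scratch directory rather than your real ~/.ssh so you can experiment safely; adapt the paths before using it on Planck.

```shell
# Generate an RSA key pair with an empty passphrase, then authorize it.
# A scratch directory stands in for ~/.ssh here.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$tmpdir/id_rsa" -q       # -N "": no passphrase
cat "$tmpdir/id_rsa.pub" >> "$tmpdir/authorized_keys"
chmod 600 "$tmpdir/authorized_keys"                  # SSH requires strict permissions
ls "$tmpdir"
```

On a real system the key pair lives in ~/.ssh, and appending with cat >> (rather than cp) preserves any keys already authorized.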
Setting Up User Environment
Planck, like many HPC clusters, uses modules to set up the user environment. The module system makes software and its related environment settings easily available. Feel free to contact cluster support to add modules covering the software you use and make it available for everyone.
To list the available software, or the modules you currently have loaded, execute:
  • module avail (lists available modules) or module list (lists currently loaded modules)
To unload all the modules (and clean all the settings) execute:
  • module purge
To load (add) a specific module execute:
  • module load MODULE_NAME or module add MODULE_NAME
If you are planning to use a specific module every time you login then put “module load MODULE_NAME” into your ~/.bashrc file.
Example: the following line will add Intel Compiler and Open MPI runtime to your environment
  • module load intel/12.0.4 openmpi/143-intel
Compiling on Planck
Now that you have logged in to the cluster, set up your environment, and loaded the modules you need (e.g., Intel compilers, MPI), the next step is to compile your code.
This section provides an overview of compiling source code for serial and parallel (MPI, OpenMP) execution. Any compiler can be used either for compiling only (using the -c option) or for both compiling and linking the code.
Compiling SERIAL code:
In order to compile code you first need to make sure that a compiler module is loaded; this can be verified with the module list command. Below are several examples of compiling and linking SERIAL code with different compilers.
Using Intel C compiler:
  • module load intel
  • icc -o code.exe code.c
Using Intel Fortran compiler:
  • module load intel
  • ifort -o code.exe code.f
Using GNU C compiler:
  • gcc -o code.exe code.c
Using GNU Fortran compiler:
  • gfortran -o code.exe code.f
Using Portland Group C/Fortran compilers:
  • module load pgi
  • pgcc -o code.exe code.c
  • pgfortran -o code.exe code.f
Compiling MPI code:
In order to compile a PARALLEL (MPI) version of your code you need an MPI module loaded. For example, to load the OpenMPI runtime into your environment, load the module named openmpi/143-intel. Below are several examples of compiling and linking MPI source code.
  • module load intel/12.0.4 openmpi/143-intel
With Intel C compiler:
  • mpicc -o code.exe code.c
With Intel Fortran 90 compiler:
  • mpif90 -o code.exe code.f90
Compiling OpenMP code:
Since each of Planck's nodes has 12 cores in total, applications can use a shared-memory model within a node. The OpenMP compiler options enable SMP support on the nodes. Below are several examples.
Using Intel Fortran compiler with OpenMP:
  • module load intel/12.0.4
  • ifort -o code.exe code.f -openmp
Using MPI Intel Fortran compiler with OpenMP:
  • module load intel/12.0.4
  • mpif77 -o code.exe code.f -openmp
Loading Libraries:
There are several libraries available on Planck, including the Intel MKL libraries. These libraries provide highly optimized mathematical packages and functions. Knowing how to link them is essential for your code to build and run properly. Below are several guidelines for linking libraries.
Use the -l option to link in a library. For example, using the MPI C compiler:
  • mpicc code.c -llibrary_name
Here it is assumed that the library file (liblibrary_name.so or liblibrary_name.a) can be found through the LD_LIBRARY_PATH environment variable. However, if you want to give the library path explicitly, use the -L option:
  • mpicc code.c -L/mydirectory/lib -llibrary_name
Loading MKL libraries:
Before linking the libraries make sure that the corresponding MKL library module was loaded. You can use modules to do this for you.
  • module load intel/12.0.4
This automatically sets the environment variable MKLROOT, which points to the MKL libraries installed on Planck.
Several parameters affect the way your code links against the MKL libraries: the compiler you use, the processor architecture, dynamic versus static linking, whether your code uses the sequential or multi-threaded model, and whether you need any extra linear algebra packages, such as LAPACK, BLAS, or ScaLAPACK.
We suggest using the Intel MKL Link Line Advisor, available on Intel's website.
Following the advisor, you will be able to create a link line that can then be copy-pasted into your Makefile.
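For orientation, a link line the advisor produces for dynamic linking of the sequential MKL libraries on 64-bit Intel architecture looks roughly like the following. This is a hypothetical sketch: the exact library names and paths depend on your MKL version, so always confirm with the advisor.

```shell
# Hypothetical link line for dynamic, sequential MKL on 64-bit Intel;
# verify against the MKL Link Line Advisor for your MKL version.
mpicc -o code.exe code.c -L${MKLROOT}/lib/intel64 \
    -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
```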
Running on Planck
The SGE Queue System
Sun Grid Engine (SGE) is the resource management service used on the Planck cluster. SGE handles basic batch-processing operations such as job submission, job monitoring, and job control, and provides more efficient use of computational time along with fair, optimal sharing of cluster resources.
Job submission:
SGE provides a qsub command to submit job scripts to the cluster.
  • qsub job_script
Job deletion:
  • qdel job_ID
where job_ID is the identifier of a submitted job
Checking status of submitted job:
  • qstat -u user_ID
where user_ID is the user's login name
Running serial job:
Here is an example of a script for a serial job requesting 1 CPU, 2 GB of memory, and 2 hours of runtime.
  • #!/bin/bash
  • #$ -N jobname
  • #$ -l mf=2gb
  • #$ -l h_rt=2:00:00
  • #$ -j y
  • #$ -q verylong.q
  • ./code.exe
Here verylong.q is the name of the queue you submit to, and code.exe is the executable you run (make sure to provide a path to it).
Running parallel job:
Below is an example of a script for a parallel job requesting 16 CPUs (on two nodes), 22 GB of memory, and 5 minutes of runtime.
  • #!/bin/bash
  • #$ -pe 8way 16
  • #$ -l mf=22gb
  • #$ -cwd
  • #$ -S /bin/bash
  • #$ -q normal.q
  • #$ -l h_rt=00:05:00
## Set up your environment
source /share/apps/modules/
module add intel/12.0.4 openmpi/143-intel
## Run your parallel code
mpirun -np $NSLOTS ./code.exe > output