FAQ

Getting Started

How to get an account

To use the cluster you must first create an account with us. Please submit a formal request through the New user creation page.

Please fill out the form and we will create your ID once your department coordinator approves.

How to share account resources

Email us at extreme@uic.edu with your requirements.

We do not allow users to access anybody else’s account (for security purposes), but we can create a shared directory for users to access and exchange data.

Let us know your requirements accordingly. Please do not share your login credentials. Accounts will be disabled if found to violate this policy.

Logging into Extreme

Access to Extreme is through SSH (remote access).

We do not provide local access to Extreme. If you’ve been granted access, you should log into the cluster using your UIC netID and ACCC common password.

Use an SSH client to connect to login-1.extreme.uic.edu.

Access using Unix, Linux, and OS X systems – Run ssh login-1.extreme.uic.edu -l netid (enter your netID when prompted).
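For graphical applications, Unix-like SSH clients can enable X forwarding with the -X flag; for example,

ssh -X netid@login-1.extreme.uic.edu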

Access using Windows-based systems (using X11 forwarding) –

“X forwarding” is a feature of X where a graphical program runs on one computer, but the user interacts with it on another computer.

All you need is an X server that runs on Windows, and an SSH client, both of which are freely available.

Download Xming and Xming fonts to get started with installing an X server.

Install PuTTY

Follow these steps to configure PuTTY:

1. Enter the hostname you want to connect to: login-1.extreme.uic.edu on port 22. Make sure the connection type is SSH.

2. Scroll to Connection > SSH > X11. Check the box next to Enable X11 Forwarding. By default the X Display location is empty. You can enter localhost:0. The remote authentication should be set to MIT-Magic-Cookie-1.

3. Finally, go back to Session. You can also save your session and load it each time you want to connect.

4. Click Open to bring up the terminal and log in using your netID/password.

 

Getting started with your environment

For a UNIX-like environment you can find a quick tutorial at the Unix Guru Universe Beginners’ Pages. Your account is set up with bash as the default shell.

Resetting or lost password

If you cannot log in to any of the UIC websites, follow this link to change your password.

Transferring files to Extreme

For Linux/Mac users, use scp to transfer files between your workstation and Extreme:

scp /path/file netid@login-1.extreme.uic.edu:/path

For transferring files from extreme to your workstation,

scp netid@login-1.extreme.uic.edu:/path/file /path
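To copy an entire directory, add scp’s -r (recursive) flag; the paths below are placeholders,

scp -r /path/dir netid@login-1.extreme.uic.edu:/path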

Downloading files and data sets off the internet

To download files from external FTP and web servers, use wget. Copy the link location of the file to be downloaded, then:

wget <link-location>

Or, to download from a git repository, use:

git clone <github-link-location>
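For example, to download a large dataset over HTTP and resume an interrupted transfer with wget’s -c flag (the URL below is a placeholder),

wget -c http://example.com/dataset.tar.gz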

Setting up a bashrc file

Creating a profile is a convenient way to work on the cluster, or in general in any Linux environment. It lets you load all the frequently used modules and environment variables that you require for daily use.

Follow these quick steps to create a profile. A .bashrc file is created in your home directory with every new account. It is a hidden file, so list it with:

ls -la ~/ | more

There should be a .bashrc on the first page. If not, create it with vi ~/.bashrc and simply write the following line into it:

PATH=$PATH:~/bin

More environment variables can be added as required.
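As a minimal sketch, a ~/.bashrc might look like the following (the module names are examples only; substitute the ones you actually use):

# ~/.bashrc: executed for every new shell
PATH=$PATH:~/bin
export PATH
# load frequently used modules (example names)
module load compilers/intel
module load tools/mpich2-1.5-gcc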

Software

New software installation request

If the software you need is not currently installed on our cluster (please check the list of available software before making a request), submit a formal request for installation through this page. Also provide instructions and download links for the package.

Note that Extreme is not responsible for registering or purchasing licenses for software packages. The user will have to purchase/register and download the package into their home directory, and we will then proceed with installation on Extreme.

Software available on Extreme

Here is the list of software available for use on Extreme – New software

Use module avail in your bash terminal.

Software/modules are grouped into apps, tools, and compilers.

How to load or unload software/module

Use ‘module load <module-name>’ in your bash terminal.

Useful naming conventions (compiler and version) are appended to the name of each software module.

If intel is appended to the name, load the compilers/intel module along with the software package. If a Python version is appended, load that Python from the compilers group. These suffixes tell you which dependencies are required for the proper functioning of the software package.

To unload a package, use ‘module unload <module-name>’ in your bash terminal.

To list the modules currently loaded, use ‘module list’.
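A typical session might look like this (compilers/intel is one example module on this cluster):

$ module avail                    # list all installed modules
$ module load compilers/intel     # load the Intel compiler suite
$ module list                     # show currently loaded modules
$ module unload compilers/intel   # unload it again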

Compiling software in your/shared directories

Compiling software in directories where you have write permission does not require superuser/root privileges; sudo is only needed because packages install into system library and binary directories by default. With that out of the way, let’s look at how to compile software.

For most packages,

  • Load the dependencies from modules provided on Extreme (compilers, tools etc).
  • Configure with ‘--prefix’ set to your directory (you need write permission for that directory).

./configure --prefix=/export/home/netid/package-name/

  • To run programs give the full path to the binaries.

/export/home/netid/package-name/bin/executable

  • To avoid the long path names, prepend the package’s bin directory to your PATH. The best way is to add the export line to your ~/.bashrc file, so it loads every time you log in.

export PATH=/export/home/netid/package-name/bin:$PATH

*Note – The installation of the package is local to the user and cannot be accessed by others.

  • To install in a shared directory instead, give the full path to the executable (or add it to your ~/.bashrc and source it) so everyone with access to the directory can run it. A complete example follows below.
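Putting these steps together, a minimal sketch for a typical autotools package (the package name, version, and paths are hypothetical):

$ module load compilers/intel                           # load required dependencies
$ tar xzf package-1.0.tar.gz && cd package-1.0          # unpack the source
$ ./configure --prefix=/export/home/netid/package-1.0   # install under your home directory
$ make && make install
$ export PATH=/export/home/netid/package-1.0/bin:$PATH  # add the binaries to your PATH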

Compiling Python modules in your/shared directories

  • There are two general ways to install Python modules: with ‘easy_install’ or with ‘pip’.
easy_install --prefix=$HOME/package package_name
pip install --install-option="--prefix=$HOME/package" package_name
  • If the target directory does not exist, the installer will exit and prompt you to create it:
$HOME/package/lib/pythonX.Y/site-packages
  • Also append the above directory to your PYTHONPATH environment variable (a complete sequence is sketched below):
export PYTHONPATH=$PYTHONPATH:$HOME/package/lib/pythonX.Y/site-packages
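As a complete sketch, assuming Python 2.7 and a placeholder package name:

$ mkdir -p $HOME/package/lib/python2.7/site-packages              # create the target directory first
$ export PYTHONPATH=$PYTHONPATH:$HOME/package/lib/python2.7/site-packages
$ pip install --install-option="--prefix=$HOME/package" package_name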

Compiling your program on Extreme

Compiling with Intel compiler

Extreme uses GNU’s gcc by default at login. The Intel Fortran and C++ Composer XE 2013 suite is provided to maximize performance from the Intel architecture. HPCC staff recommends using the Intel compilers whenever possible.

To load the Intel compiler module,

                                         module load compilers/intel

You can invoke the Intel® C++ Compiler on the command line to compile C or C++ source files; Fortran source files are compiled with the Intel Fortran compiler, ifort.

  • When you invoke the compiler with icc, the compiler builds C source files using C libraries and C include files. If you use icc with a C++ source file, it is compiled as a C++ file. Use icc to link C object files.
  • When you invoke the compiler with icpc the compiler builds C++ source files using C++ libraries and C++ include files. If you use icpc with a C source file, it is compiled as a C++ file. Use icpc to link C++ object files.

The icc or icpc command does the following:

  • Compiles and links the input source file(s).
  • Produces one executable file, a.out, in the current directory.

Syntax                                {icc|icpc} [options] file1 [file2 . . .]

where file is any of the following:

  • C or C++ source file (.c, .cc, .cpp, .cxx, .i)
  • assembly file (.s)
  • object file (.o)
  • static library (.a)

Appropriate file name extensions are required for each compiler. By default the executable is named “a.out”, but it may be renamed with the “-o” option. The compiler command performs two operations: it produces a compiled object file (with a .o suffix) for each file listed on the command line, then combines them with system library files in a link step to create an executable. To compile without the link step, use the “-c” option.
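For example, to perform the compile and link steps separately (file names are placeholders):

$ icc -c part1.c part2.c          # compile only; produces part1.o and part2.o
$ icc -o prog part1.o part2.o     # link the object files into an executable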

C Prog Example:             $ icc -xhost -O2 -o flamec.exe prog.c

Fortran Example:             $ ifort -xhost -O2 -o flamef.exe prog.f90

For more information on each of the compiler flags, use

                                           $ icc -help

                                           $ icpc -help

                                           $ ifort -help

Compiling with MPICH2 & MPICH3

MPICH is an open-source implementation of MPI (Message Passing Interface), and an alternative to Intel’s MPI.

To get started, load the MPICH module before working on it.

We have two versions installed on the system for your use, MPICH2 and MPICH3; load them using

                                 $ module load tools/mpich2-1.5-gcc (MPICH2)

                                 $ module load tools/mpich-3.0.4-icc (MPICH3)

Once either of the above commands is executed, it will automatically set the environment variables required to use MPICH, i.e., PATH, MPICH2_HOME (MPICH3_HOME in the case of MPICH3) and LD_LIBRARY_PATH.

To run MPICH programs, use mpiexec; to compile them, use the wrapper scripts below.

The following scripts are available to compile and link your mpi programs:

Script     Language

mpicc      GNU C
mpicxx     GNU C++
mpif77     GNU Fortran 77

Each script will invoke the appropriate compiler.

Make a job script to reserve nodes for your job to run on. Refer to how to create a job script (FAQ)

To run a program with MPICH,

                                 $ mpiexec -n <number> ./a.out

          To run an ’n’ process job on multiple nodes, supply a machinefile (Hydra-based MPICH3 takes it with -f; the older MPD-based MPICH2 uses -machinefile):

                                $ mpiexec -f machinefile -n <number> ./mpi-program > output.log

          The ’machinefile’ is of the form:

                                 host1

                                 host2:2

                                 host3:4

                                 host4:1

         ’host1’, ’host2’, ’host3’ and ’host4’ are the hostnames of the machines you want to run the job on; the number after the colon is how many processes to start on that host.

Example :               $ mpiexec -f machinefile -n 8 ./mpi-program > output.log
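As a complete sketch, compiling and then running a small MPI program (the file and program names are placeholders):

$ module load tools/mpich-3.0.4-icc
$ mpicc -O2 -o mpi-hello mpi-hello.c                     # compile with the MPICH wrapper
$ mpiexec -f machinefile -n 8 ./mpi-hello > output.log   # run 8 processes on the listed hosts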

For more information about MPICH, see the MPICH Manual.

Compiling with OpenMPI

OpenMPI is another open-source MPI implementation; its usage is similar to MPICH and Intel MPI.

To get started with OpenMPI, you do not need to load a module: it is the default MPI implementation on the Rocks OS.

The following scripts are available to compile and link your mpi programs:

Script     Language

mpicc      GNU C
mpiCC      GNU C++
mpif77     GNU Fortran 77

Each script will invoke the appropriate compiler.

Syntax:

                                      mpicc <flags> <filename.c>

                                      mpiCC <flags> <filename.cpp>

                                      mpif77 <flags> <filename.f>

                                      mpif90 <flags> <filename.f90>

To get more information on a specific compiler wrapper in OpenMPI, use -help with that wrapper.

Example :

		login1$ mpicc  -help
		login1$ mpiCC -help
		login1$ mpif90 -help
		login1$ mpirun -help
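For example, to compile and run a small program with OpenMPI (file names are placeholders):

		login1$ mpicc -O2 -o mpi-hello mpi-hello.c
		login1$ mpirun -np 4 ./mpi-hello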

Compiling OpenMP Code with Intel Compilers

Load the intel compiler:

$ module load compilers/intel

To compile a C program with OpenMP support:

                                         $ icc -openmp -o outfile prog.c

To compile a C++ program with OpenMP support:

                                          $ icpc -openmp -o outfile prog.C

To compile a Fortran program with OpenMP support:

$ ifort -openmp -o outfile prog.f90
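At run time, OpenMP programs take their thread count from the OMP_NUM_THREADS environment variable; for example, to use all 16 cores of a G1 node:

$ export OMP_NUM_THREADS=16
$ ./outfile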

Submit and manage jobs on Extreme

How to submit/run a job on Extreme

Extreme employs Moab and Torque to manage and control jobs on the cluster: Cluster Resources’ Moab Workload Manager handles scheduling, while Torque is used as the backend resource manager.

  1. Submit a job script

#!/bin/bash
#PBS -l nodes=1:ppn=1,walltime=5:00:00
#PBS -N job_name
#PBS -q Queue_name
#PBS -m abe
#PBS -M NetID@uic.edu
#PBS -e localhost:/scratch/NetID/${PBS_JOBNAME}.e${PBS_JOBID}
#PBS -o localhost:/scratch/NetID/${PBS_JOBNAME}.o${PBS_JOBID}
#PBS -d /scratch/NetID/jobdirectory/

module load apps/<package>
module load tools/<package>
module load compilers/<package>

./command &> output

Submit the script:

                           $ qsub job.pbs

*Be sure to substitute your own UIC NetID for NetID.

*Please make sure to transfer files from your Lustre (scratch) directory into your home directory; files there are subject to removal after 90 days.

Usage notes:

  • For nodes, request no more than the number of nodes your queue has permission to access. E.g., nodes=10 will reserve 10 nodes for your job. The job may not use all of them, but they will be reserved for it.
  • Specify the number of cores needed per node using ppn. Assigning ppn is not strictly necessary, as the scheduler can decide that for you.
  • Keep in mind that we have different types of nodes in our cluster: G1 nodes have 16 cores, G2 nodes have 20 cores, and highmem nodes have 32 cores. So if your queue has only G1 nodes, you cannot request ppn>16.
  • After you submit your job script, changes to the contents of the script file will have no effect on your job as Torque has already spooled a copy to a separate file system.
  • If your job requests too many resources, showq will classify it as idle until resources become available.
  • We recommend always including your email address in your scripts so you are alerted of any status changes in the job. A sketch of a complete parallel job script follows below.
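Below is a sketch of a parallel job script combining the MPI sections above (the queue, module, and program names are placeholders; Hydra-based MPICH can typically detect the Torque-assigned nodes automatically):

#!/bin/bash
#PBS -l nodes=2:ppn=16,walltime=5:00:00
#PBS -N mpi_job
#PBS -q batch
#PBS -m abe
#PBS -M NetID@uic.edu
#PBS -d /scratch/NetID/jobdirectory/

module load compilers/intel
module load tools/mpich-3.0.4-icc

# 2 nodes x 16 cores = 32 MPI processes
mpiexec -n 32 ./mpi-program &> output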

PBS options

Each option below may be given either in the job script with a #PBS prefix or directly on the qsub command line with the same flag.

#PBS -a date_time : Declares the time after which the job is eligible for execution. Syntax (brackets delimit optional items, with the default being the current date/time): [[[[CC]YY]MM]DD]hhmm[.SS]
#PBS -A account : Defines the account associated with the job.
#PBS -d path : Specifies the directory in which the job should begin executing.
#PBS -e filename : Defines the file name to be used for stderr.
#PBS -h : Puts a user hold on the job at submission time.
#PBS -j oe : Combines stdout and stderr into the same output file. To give the combined file a specific name, include the -o filename flag as well.
#PBS -l string : Defines the resources that are required by the job.
#PBS -m option(s) : Defines the set of conditions (a=abort, b=begin, e=end) under which the server will send a mail message about the job to the user.
#PBS -N name : Gives a user-specified name to the job. Note that job names do not appear in all Moab job info displays, and do not determine how your job’s stdout/stderr files are named.
#PBS -o filename : Defines the file name to be used for stdout.
#PBS -p priority : Assigns a user priority value to the job.
#PBS -q queue (or -q queue@host) : Runs the job in the specified queue. A host may also be specified if it is not the local host.
#PBS -r y : Automatically reruns the job if there is a system failure. The default is NOT to automatically rerun a job in such cases.
#PBS -S path : Specifies the shell which interprets the job script. The default is your login shell.
#PBS -v list : Exports a comma-separated list of specific environment variables to the job.
#PBS -V : Declares that all environment variables in the qsub environment are exported to the batch job.
#PBS -W : This option has been deprecated and should be ignored.

 

  2. Submit an interactive job

HPCC staff recommends that jobs normally be submitted using a script and qsub. However, qsub also allows interactive jobs, which are useful when debugging scripts and applications.

To run an interactive job, you must include the -I (capital i) flag to qsub. Additionally, any job submission parameters that would appear in your script file with #PBS prefixes should be included on the command line.

 

Syntax:

                                                   [login1]$ qsub -I -q batch

                                                   qsub: waiting for job 123.admin.extreme.uic.edu to start

                                                    qsub: job 123.admin.extreme.uic.edu ready

                                                    [compute-0-1]$

This command assigns a compute node to the user to run their jobs on. Please note that if you log out or exit your interactive session, your job will be marked as completed by the scheduler.

To pass resource requests with your interactive job, use the -l (lowercase L) option.

Example:

[login1]$ qsub -I -l nodes=1:ppn=16 -q queue_name -N job_name

Flags used at the command line follow the same syntax as those flags listed in the table above.

Monitor a Job

1. Monitor Job status:

To see the status of all the jobs you have submitted,

[login1]$ showq -u username

The showq command has several options. A few that may prove useful include:

  • -r shows only running jobs plus additional information such as partition, qos, account and start time.
  • -i shows only idle jobs plus additional information such as priority, qos, account and class.
  • -b shows only blocked jobs
  • -p partition shows only those jobs on a specified partition. Can be combined with -r, -i and -b to further narrow the scope of the display.
  • -c shows recently completed jobs.

To check the status of a specific job,

[login1]$ checkjob jobid

  • Displays detailed job state information and diagnostic output for a selected job.
  • The checkjob command is probably the most useful user command for troubleshooting your job, especially if used with the -v flag. Sometimes, additional diagnostic information can be viewed by using multiple “v”s: -vv or -v -v.

2. Cancel a job:

                                                             [login1]$ canceljob jobid

Cancel a running or queued job.

Run an Interactive job using screen

To run an interactive job that keeps running after you disconnect, spawn a screen session first.

1. To start a session, type ‘screen’; it will open a new session from which to run your interactive job.

2. A simple interactive job syntax is

$ qsub -I -l nodes=1:ppn=16 -q queue_name -N job_name

3. To detach the screen, press Ctrl+A then D; your job will keep running in the background. You are now free to log out of Extreme and the job will still be running.

4. To reattach a screen and check on your running job,

type ‘screen -r’

If you have more than one active screen, this will list them. To reattach to a specific one, use

‘screen -r <screen_id_number>’

By this method you can start a job in a screen session and detach from it. Job submission is done in batch mode, so while you wait for resources to be allocated after starting a job, you can detach the screen and let the job wait in the background. This way your job will not quit when you log out of your session on Extreme.

For more information about screen, use ‘man screen’.

Miscellaneous

Why is my job taking a long time to start

Use ‘showq’ to check the status of your job with respect to the active, idle, and blocked queues. You can also check the status of a specific job with the ‘checkjob <jobID>’ command.

When submitting jobs to the batch queue, please have patience. Maximum wall time for a job in batch queue is 10 days. Some groups/departments have their own reserved queues, and rules for their queues are different from batch queue.

For now, just submit your job to the eligible queue; once the resources requested by your job become available, it will be moved into the active job state. There are no reservations in the batch queue; it is shared by all users. Please do not abuse the shared space.

Specific Software (Best practices)

Operating with Gaussian

Gaussian manual states:

“It is always best to use SMP-parallelism within nodes and Linda only between nodes. For example on a cluster of 4 nodes, each with a dual quad-core EM64T, one should use %NProcShared=8 %LindaWorkers=node1,node2,node3,node4”.

If you are part of a queue that is shared by other users, the processors assigned to your job will differ each time it is scheduled; you will not always get the same nodes, so the worker list must be determined for each job.

Follow the steps below to get the names of the nodes you are operating on:

1. Start an interactive job with the number of nodes you require.

Below is the command for starting an interactive job.

‘$ qsub -I -l nodes=1:ppn=16 -q queue_name -N job_name’

To read more about Interactive jobs, see our FAQ page (http://rc.uic.edu/resources/faq/)

2. Once the job has started, use ‘checkjob -v <jobID>’ to check the names of the nodes it is running on.

3. Lastly, enter those node names in your Gaussian input file and run Gaussian.
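For example, if checkjob reports the (hypothetical) nodes compute-0-1 and compute-0-2, the Link 0 section of your Gaussian input would read, per the manual’s advice above:

%NProcShared=16
%LindaWorkers=compute-0-1,compute-0-2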

This might be a tedious task, but it guarantees the performance and utilization of the nodes.