
Accounts needed for HPRC and the different working groups

Welcome to Mitchell Institute Computing! This webpage is designed for all members of the Mitchell Institute and our collaborators. Here you will find all the information you need to start using TAMU’s High Performance Research Computing (HPRC).

As a new user, you will need to apply for several accounts at the beginning. Specifically, you will need a NetID, TAMU VPN access, and an HPRC account.

For the CDMS working group, the link below explains how to set up the various accounts:

  • CDMS: Setting up Confluence, SLAC/SLUO, GitLab and other accounts.

TAMU’s High Performance Research Computing (HPRC)

TAMU’s High Performance Research Computing (HPRC) hosts two clusters, Grace and Terra, located at Texas A&M. We recommend using Grace, since it is newer (in production since spring 2021) and more powerful.

Both clusters have good wiki pages with useful user guides and information. You can access these user guides from the links below:

You can also check their status on the HPRC Cluster Status History webpage, which is useful for finding out whether they are down or busy.

IN CASE YOU NEED HELP WITH ANYTHING, USE THE Contact Us PAGE. TRY TO ALWAYS ASK MITCHCOMP FOR HELP FIRST. We will interface with the HPRC admins if needed.

New Accounts and Account Renewals

There are two types of accounts needed for each MitchComp user:

  1. The campus-wide “NetID”: associated with your A&M e-mail address, student or staff ID, and so on. Remote collaborators will need to apply for a NetID before they can get an HPRC account.
  2. HPRC Computing Account: gives you access to the login and compute nodes, the HPRC OnDemand Portal, home and scratch directories, etc.

In order to use HPRC, you must have both accounts set up and active, and both must be renewed annually before 1 September.

Please see our detailed instructions for requesting and renewing computing accounts.

TAMU VPN

In order to use HPRC off campus, you need to set up a VPN.
This link will help you set up the VPN.

Storing Files on Disk, Disk and File Quotas

Files can be stored on both Grace and Terra, and there are default limits on each (Grace does NOT share directories with Terra).

Directory            Environment Variable   Default Space Limit   Default File Limit   Intended Use
/home/$USER          $HOME                  10 GB                 10,000               Small to modest amounts of processing.
/scratch/user/$USER  $SCRATCH               1 TB                  50,000               Temporary storage of large files for ongoing computations; not intended as a long-term storage area.
  • Any MitchComp group member has access to the MitchComp group area /scratch/group/mitchcomp/ and may write their data there. You can use this area to organize your group’s work.
  • If you are using MitchComp’s group area, make sure to set permissions to group readable and writable with the command “chmod g+ws filename”.
  • On Grace, every file stored in the group directory is charged to the group’s quota. However, HPRC does not support “group shared disk quotas” on Terra: there, every file is owned by a person and is charged to that person’s quota.
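The permission step above can be sketched in a few commands. This runs in a throwaway temporary directory as a stand-in for a subdirectory you own under /scratch/group/mitchcomp/; the chmod flags are the same ones you would use there:

```shell
# Stand-in for a subdirectory you created under /scratch/group/mitchcomp/.
workdir=$(mktemp -d)
mkdir -p "$workdir/shared"
touch "$workdir/shared/results.txt"

# g+w : group members may write the contents.
# g+s : on a directory, new files inherit the directory's group.
chmod -R g+w "$workdir/shared"
chmod g+s "$workdir/shared"

ls -ld "$workdir/shared"   # mode should now include the group 's' bit
```

On the real group area, run “chmod -R g+ws” on your own subdirectory, not on /scratch/group/mitchcomp/ itself.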

Need to check your quotas or request more space or allow more files?

You can check your usage at any time on the command line with the ‘showquota’ command.
If you need more disk space or a higher file-count limit, you can send a request to HPRC help. You should also cc MitchComp Help (mitchcomp_help@physics.tamu.edu) on your request, because approval from the MitchComp/HPRC liaisons is required.

  • When you submit a request, remember that more disk space and a higher file count must be requested separately.
  • Your /home directory quota CANNOT be increased!
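If ‘showquota’ tells you that you are near a limit, standard tools can show what is actually using the space and the file count. This sketch builds a small demo directory; in practice you would point ‘target’ at $HOME or $SCRATCH:

```shell
# Demo directory with a few files; use $HOME or $SCRATCH in practice.
target=$(mktemp -d)
touch "$target/a.log" "$target/b.log" "$target/c.dat"

du -sh "$target"                           # total disk space used
nfiles=$(find "$target" -type f | wc -l)   # file count (quotas limit this too)
echo "files under $target: $nfiles"
```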

Suggestions for when you run out of quota space in your /home directory

Your /home directory quota cannot be increased, which means you need to avoid storing large files there. If you want something to remain accessible from your /home directory, store it in your $SCRATCH directory and then make a symbolic link to it in your home directory with this command:

$ ln -s $SCRATCH/directory1 ~/directory2

There are several common directories that should be symlinked: .local, .viminfo, .cpan. We suggest using the following to symlink each of them (use the same name in place of both occurrences of “choose-a-directory-name”):

$ mv ~/.local $SCRATCH/choose-a-directory-name
$ ln -s $SCRATCH/choose-a-directory-name ~/.local
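The same move-and-link pattern can be applied to each of the directories listed above in one loop. This sketch runs against throwaway stand-ins for $HOME and $SCRATCH so it is safe to try anywhere; on HPRC, delete the two override lines and the loop uses your real directories:

```shell
# Throwaway stand-ins so this demo cannot touch your real home directory.
# On HPRC, remove these two lines to use the real $HOME and $SCRATCH.
HOME=$(mktemp -d)
SCRATCH=$(mktemp -d)

mkdir -p "$HOME/.local"
touch "$HOME/.viminfo"

for item in .local .viminfo .cpan; do
    [ -e "$HOME/$item" ] || continue                 # skip ones that don't exist
    mv "$HOME/$item" "$SCRATCH/home-overflow$item"   # e.g. $SCRATCH/home-overflow.local
    ln -s "$SCRATCH/home-overflow$item" "$HOME/$item"
done

ls -l "$HOME"   # each moved entry is now a symlink into $SCRATCH
```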

Tools and Software

The first major tool to note for the HPRC clusters is the OnDemand portal (see more complete documentation here). Logging in to https://portal.hprc.tamu.edu will allow you to manage files, run jobs, launch software, etc. without using a terminal/command line (though you can also launch a command line from here).
Note that you will need a VPN to access it if you’re off-campus.
Major programs to be aware of include SLURM, ROOT, Jupyter, and Python; notes and links for each are below. Note that you do not need to download or install these; they are already present on the clusters.

  • SLURM: a workload manager for batch scheduling. All large/intensive processing must run through the batch system. Both Terra and Grace use SLURM.
    HPRC sometimes has short courses covering subjects like this. See this page for scheduling info and links to previous courses’ slides.
  • Python: The coding language of many important applications. Several versions are available. See the note below about modules. (Codecademy course)
  • Jupyter Notebook: An open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Note that this is one of the programs that can be launched using the HPRC portal mentioned above (see the ‘Interactive Apps’).
  • Geant4: A major physics toolkit from CERN.
  • ROOT: an object-oriented program and library developed by CERN.
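Since everything large must go through SLURM, it helps to see what a batch script looks like. The sketch below writes a minimal job file; the resource values are arbitrary and the module name is hypothetical (use ‘module spider’ on the cluster to find real names), so treat it as a template, not a recipe:

```shell
# Write a minimal SLURM job script. All values here are illustrative.
cat > myjob.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=demo        # name shown in the queue
#SBATCH --time=01:00:00        # walltime limit (HH:MM:SS)
#SBATCH --ntasks=1             # a single task
#SBATCH --cpus-per-task=1
#SBATCH --mem=2G               # memory for the job
#SBATCH --output=demo.%j.out   # %j expands to the job ID

# Hypothetical module name; find real ones with 'module spider python'.
module load Python/3.8.2
python --version
EOF

echo "wrote myjob.slurm; submit with: sbatch myjob.slurm"
```

After submitting, ‘squeue -u $USER’ shows your job’s state, and the output lands in demo.<jobid>.out.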

HPRC has a lot of other software pre-installed on each cluster. Before trying to download or install anything, check these pages to see if it’s already available:

If you see that the software you need is installed on HPRC already, you will just have to load the relevant module.
For example:
If you need to use ‘matplotlib’ on Terra, you can search for it using the ‘Terra Software List’ link above. Click on the search result and you will be shown a long list of matplotlib ‘modules’. Which one you choose depends on what other software you need it to work with.
If, for example, you want one of the newer builds against Python 3.8.2, you can type in your terminal:

$ module load matplotlib/3.2.1-intel-2020a-Python-3.8.2

Your current session will then have access to matplotlib. If you disconnect and come back later, you’ll have to load it up again.
See the HPRC module wiki page for more information.
If you need entirely new software installed, you may ask HPRC help. If it is of wide enough interest (see their policies here), HPRC may install it themselves as a module. You can also ask them to put already-existing software into a new module (if, for example, you need something built against Python 3 but only a Python 2 module exists).


Cluster Details

More info from HPRC here.

                  Terra                                        Grace
Nodes             320                                          925
Cores             9,632                                        44,656
CPU Architecture  x86_64, Intel 14-core 2.4 GHz Broadwell      x86_64, Intel Xeon 6248R (Cascade Lake), 24-core 3.0 GHz
Interconnect      Intel Omni-Path Fabric 100 Series switches   Mellanox HDR 100 InfiniBand
Accelerators      1 NVIDIA K80, 2 NVIDIA V100 32GB GPUs        2 NVIDIA A100 40GB, 2 NVIDIA RTX 6000 24GB, 4 NVIDIA T4 16GB GPUs
Job Scheduler     Slurm                                        Slurm
File System       GPFS; 5.5 PB raw (Jul 2019)                  Lustre and GPFS
Production Date   Spring 2017                                  Spring 2021

Coding Resources

There are multiple computing languages used in our projects. Below are some of the external resources group members have used for introduction or reference.

  • https://www.w3schools.com: Tutorials for HTML, CSS, PHP, and other web-development topics. Includes some decent reference pages as well.
  • https://www.codecademy.com: Introductory, interactive lessons for Python, Bash/Shell, and other languages. Good if you’re new to coding. Requires an account, but you can get the low-level stuff for free (you may get lots of ‘upgrade to pro’ spam, though).
  • https://ryanstutorials.net: Introductory lessons for Linux, Bash, HTML, and other useful computing topics.
  • https://www.python.org: Everything Python—documentation and introductions and resources.
  • http://www.cplusplus.com: Everything C++—tutorials, reference, forums, etc.
  • search engines: There are many more resources than those listed above; you may find others that make more sense to you. Furthermore, once you start coding, it will be inefficient to search through official documentation, and tutorials will not always have the exact answer you’re looking for. Get used to searching error messages and reading through StackExchange conversations.
  • https://linkedinlearning.tamu.edu/: Not specifically a coding resource, but TAMU gives us access to a lot of training videos here. Topics include coding, but also many other subjects relevant to your professional life.

Check through the webpages of each experiment (linked on the mitchcomp homepage) for more detailed usage information specific to our/your work (and let us know if the page for your experiment needs updating).