The National Institute for Computational Sciences

File Systems

Summary

The table below describes the ACF file systems.
File System Purpose          Path to User's Directory           Quota, Purge Policy
Home Directory               /nics/[a,b,c,d]/home/{username}    10 GB quota, not purged
Lustre Scratch Directory     /lustre/haven/user/{username}      No quota, purged
Lustre Project Directory     /lustre/haven/proj/{project}       By request, not purged
Lustre Medusa Directory      /lustre/medusa/proj/{project}      Now read-only, to be retired
Newton Gamma Directory       /lustre/haven/gamma/{directory}    Newton allocation amount, not purged
ACF file systems are generally very reliable; however, data may still be lost or corrupted. Users are responsible for backing up critical data unless arrangements are made in advance. Backups of critical data can be provided by request for a fee.

Backups are performed on the Home Directory file system. Lustre Haven project directories are only backed up by request for a fee. Lustre Haven scratch directories are NOT backed up.

Network File System (NFS)

NFS space is used for home directories and project space and is mounted across many ACF resources. Approximately 15 terabytes (TB) of space are available in this file system.

NFS Home Directories

User home directories are provided by NFS, are not purged, and have a default quota of 10 gigabytes (GB) of storage space. This is the location to store user files up to the quota limit. The environment variable $HOME points to your home directory path. To request an increase in the home directory quota limit, submit a request to help@nics.utk.edu. Project space on NFS is discussed below.
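
To see how much of the quota your home directory is using, standard Linux commands can be run against $HOME; a quick sketch (the quota command reports figures only if NFS quotas are exposed to the node you are logged in to):

    > du -sh $HOME    # total size of everything under your home directory
    > quota -s        # per-user quota report, where available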

Home directories are regularly backed up.

NFS Project Space

For sharing data among a research group, project directories on NFS can be provided. To request an NFS project directory see the Project Directory Request page. NFS project directories are located at /nics/a/proj/{directory}.

NFS Project space directories are regularly backed up.

Lustre Haven

The ACF global file system was purchased in the summer of 2017 by JICS using JICS funds. This file system resides on a Data Direct Networks (DDN) 14K storage subsystem and is called Lustre Haven, or simply Haven. Haven provides approximately 1.7 petabytes (PB) of usable storage and is available on all ACF login nodes, data transfer nodes (DTNs), and compute nodes, mounted at /lustre/haven. Lustre is a high-performance parallel file system that can achieve up to approximately 24 GB/s of file system performance. Lustre Haven provides global high-performance scratch space for data sets related to running jobs and global project space for the ACF resources. These are described in more detail below.
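
To confirm that Haven is mounted on a node and to see its overall capacity and usage, the standard Lustre df command can be used; a minimal example (output layout varies with the Lustre client version):

    > lfs df -h /lustre/haven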

Scratch Directories on Lustre Haven

The Haven file system provides global high-performance scratch space for data sets related to running jobs on the ACF resources and for transferring data in and out of the DTNs. Every user has their own scratch directory, created at account creation time, located in /lustre/haven/user/{username}. The environment variable $SCRATCHDIR points to each user's scratch directory location. Scratch space on Haven can be purged weekly, but it has no storage space or quota limit associated with it.

Lustre Haven Scratch directories are NOT backed up.
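
A common pattern is to stage data into the scratch directory, run the job there, and copy results back to backed-up storage before they can be purged; a rough sketch only (the file names below are placeholders, not real files):

    > cd $SCRATCHDIR
    > cp $HOME/input.dat .       # stage input into scratch (hypothetical file name)
    > # ... run the job against the copy in scratch ...
    > cp results.out $HOME/      # copy results back to backed-up storage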

Important Points for Users of Lustre Haven Scratch

  • The Lustre Haven Scratch file system is scratch space, intended for work related to job setup, running jobs, and job cleanup and post-processing on ACF resources, not for long-term data storage. Files in scratch directories are not backed up, and data that has not been used for 30 days is subject to being purged. It is the user's responsibility to back up all important data to another storage resource.

    The Lustre find command can be used to determine which files are eligible to be purged:

    > lfs find /lustre/haven/user/$USER -mtime +30 -type f
    
  • This will recursively list all regular files in your Lustre scratch area that are eligible to be purged.

  • Striping is an important concept with Lustre. Striping is the ability to break files into chunks and spread them across multiple storage targets (called OSTs). The striping defaults set up for NICS resources are usually sufficient but may need to be altered in certain use cases, such as when dealing with very large files (see the sketch after this list). Please see our Lustre Striping Guide for details.

  • Beware of using normal Linux commands for inspecting and managing your files and directories in Lustre scratch space. Using ls -l can cause undue load and may hang because it necessitates access to all OSTs holding your files. Make sure that your ls is not aliased to ls -l.

  • Use lfs quota to see your total usage on the Lustre system. You must specify your username and the Lustre path with this command, for example:

    > lfs quota -u <username> /lustre/haven
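
As mentioned in the striping bullet above, stripe settings can be inspected and changed with the lfs command. A brief sketch follows; the paths and the stripe count of 4 are illustrative only, not recommendations:

    > lfs getstripe $SCRATCHDIR/large_output      # show the current stripe layout of a file (hypothetical path)
    > lfs setstripe -c 4 $SCRATCHDIR/wide_dir     # new files created in this directory will be spread across 4 OSTs

Setting a stripe layout on a directory only affects files created afterward; existing files keep their original layout.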
    

For more detailed information regarding Lustre usage, see the related Lustre documentation pages.

Lustre Haven Project Directories

For sharing data among a research group, project directories on Lustre can be provided. To request a Lustre Haven project directory see the Project Directory Request page. Lustre Haven project directories are located at /lustre/haven/proj/{project-name}.
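
Because a project directory is shared by a research group, access is typically managed with standard Unix group permissions. The commands below are a hedged sketch; the group name and project path are placeholders for your actual group and directory:

    > chgrp -R myproj_users /lustre/haven/proj/myproject              # hypothetical group and project names
    > chmod -R g+rwX /lustre/haven/proj/myproject                     # group read/write; execute only on directories
    > find /lustre/haven/proj/myproject -type d -exec chmod g+s {} +  # new files inherit the project group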

Lustre project directories are NOT normally backed up and can be backed up by request for a fee.

Lustre Medusa Scratch and Project Directories

The Lustre Medusa file system is older DDN equipment and is being retired. All project directories under /lustre/medusa will be moved to /lustre/haven/proj. Lustre Medusa scratch data must be moved by the users themselves. To transition storage use to Lustre Haven, the Lustre Medusa file system has been set to read-only, so users can no longer write files to it.
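
Because Medusa is read-only, existing scratch data can still be read and copied off to Haven; a minimal sketch (the Medusa source path shown is an assumed example, substitute your actual directory):

    > rsync -av /lustre/medusa/scratch/$USER/mydata/ $SCRATCHDIR/mydata/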

Lustre Medusa directories are NOT backed up.

Newton Storage Allocation Transition to ACF

i.e., the Newton /data, /lustre/scratch, and /gamma directories

In order to streamline the transition from Newton to the ACF, the ACF will honor all approved Newton project storage allocations that exist in /data, /lustre/projects, and /gamma for one year, until September 20, 2018. A directory will be created in the Lustre Haven file system at /lustre/haven/gamma/{directory} to correspond to the Newton directory. Users and/or workgroups will only be allowed to have one /lustre/haven/gamma directory, and they can create subdirectories under that directory as needed. The NICS User Portal has been updated to list Newton project space allocations and the corresponding directory location on the ACF.

Users must have an ACF account and complete the "associate your NetID with your NICS account" process in the NICS User Portal in order to have a /lustre/haven/gamma directory created. Every Monday, an attempt will be made to create /lustre/haven/gamma directories for users transitioning from Newton. Submit a ticket if you think you should have a Newton project directory transitioned to the ACF but you do not see it listed in the NICS User Portal. Transferring data from Newton to the ACF is the responsibility of the user.
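
Transfers from Newton are typically done with standard tools such as rsync or scp pointed at an ACF data transfer node; a rough sketch with a placeholder DTN hostname and paths (check the current ACF documentation for the actual DTN address and your assigned gamma directory):

    > rsync -av /lustre/projects/mydata/ {username}@<acf-dtn-hostname>:/lustre/haven/gamma/{directory}/mydata/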

Important Notes: Newton home directories are not included in the transfer of storage allocation space to the ACF. JICS is providing 500 TB of project space in Lustre Haven for UTK researchers, which is more than the space that was provided by the Newton /gpfs (/gamma) file system.

Lustre Haven directories are NOT backed up by default; backups can be provided by request for a fee.

NICS will be developing additional storage policies and will notify users about storage policy changes several months prior to the expiration of Newton project directories transitioned to the ACF.