Peloton June 2023: Ubuntu 22.04 Upgrade Notes

  1. The SSH host key has changed, so you will see text like the following when you first SSH to peloton.hpc.ucdavis.edu:

 

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:ydOUR2t/MX3jbd3JIHDXMJyLjdhRV4OBLr9iJfQB8lw.

 

Please verify that the SHA256 fingerprint you see matches the fingerprint above. You can then follow the instructions to clear out the old SSH host key for Peloton.
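The stale entry can be removed with ssh-keygen's -R option. A minimal sketch, assuming the standard per-user known_hosts location (the KNOWN_HOSTS variable is illustrative, and both the new and the legacy hostname may have cached entries):

```shell
# Remove stale Peloton host key entries from your known_hosts file.
# KNOWN_HOSTS is an illustrative variable defaulting to the standard
# per-user OpenSSH location; ssh-keygen -R keeps a .old backup copy.
KNOWN_HOSTS="${KNOWN_HOSTS:-$HOME/.ssh/known_hosts}"
mkdir -p "$(dirname "$KNOWN_HOSTS")" && touch "$KNOWN_HOSTS"
ssh-keygen -R peloton.hpc.ucdavis.edu -f "$KNOWN_HOSTS"
ssh-keygen -R peloton.cse.ucdavis.edu -f "$KNOWN_HOSTS"
```

On your next connection, compare the fingerprint OpenSSH reports against the SHA256 fingerprint above before accepting the new key.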

  2. Direct SSH to compute nodes is no longer permitted. You can get an interactive shell on a node in two different ways:
    1. To get a new job with a shell:
      1. srun --partition=PartitionName --time=5:00:00 --ntasks=1 --cpus-per-task=4 --mem=1G --pty /bin/bash -l
    2. To get a shell within an existing job (use squeue -u $USER to find the job ID):
      1. srun --jobid=your-running-job_ID_here --pty /bin/bash -l
         
  3. Previous modules have been given a "deprecated/" prefix and may or may not work correctly. The most heavily used modules have been reinstalled with a new system, so look for those first with module avail -l. If you need software installed, please email the Peloton Help Desk at hpc-help@ucdavis.edu. Please keep in mind that the ticket queue will be longer than usual as we work to smooth out issues resulting from the upgrade.
     
  4. During job submission with Slurm, modules are now purged on nodes before the sbatch file is executed. All required modules must be loaded in sbatch files or manually re-loaded in srun.
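Because modules are purged, a job script must load everything it needs itself. A minimal sketch of such an sbatch file, where the partition, module names, and program are placeholders to adapt to your own job:

```shell
#!/bin/bash -l
#SBATCH --partition=PartitionName   # replace with your partition
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=1G
#SBATCH --time=5:00:00

# Modules are purged before this script runs, so load them explicitly.
module load gcc       # placeholder module names; check module avail -l
module load openmpi

srun ./my_program     # placeholder executable
```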
     
  5. The use of the /scratch/ partition on the login node has been deprecated. If you need to store data in /scratch/, you should run that activity on the compute nodes, all of which have their own /scratch/ partitions.
     
  6. New accounts are now created via the High Performance Personnel Onboarding (HiPPO) web portal at https://hippo.ucdavis.edu/Peloton/. HiPPO does not currently include accounts created before 7/1/23. If your sponsor is not listed, please open a support ticket with the Peloton Help Desk at hpc-help@ucdavis.edu requesting that your sponsor be added; include their name, email address, and the cluster you are trying to access. A sponsor is a person who owns resources on Peloton, either disk space or nodes through Slurm.
     
  7. As part of the transition to the HPC Core Facility, peloton.cse.ucdavis.edu has been renamed to peloton.hpc.ucdavis.edu. The old DNS entry will remain usable during the transition period.
     
  8. The Peloton head/login node now identifies itself as peloton.peloton.hpc.ucdavis.edu. This is intentional, even though it looks a bit odd.
     
  9. If you do not see your home directory or group/PI directory, the most likely cause is a mis-conversion by our automation system. Please open a ticket.
     
  10. For new accounts, home directories will be provided by HPCCF and have a 20G quota. Data should be stored in a PI storage area.
     
  11. /tmp/ and /scratch/ on the compute nodes are automatically cleaned when they fill up. Unless there is a risk of those file systems filling to 100%, the cleaner attempts to be smart and avoid removing files from jobs currently running on that node.