FAS Research Computing - Notice history

Globus Data Transfer experiencing partial outage

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, switch to light mode at the bottom of this page if the background is dark and colors are muted.

SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational

SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational

Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational

Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational

Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational

Netscratch (Global Scratch) - Operational

Holyscratch01 (Pending Retirement) - Operational

Home Directory Storage - Boston - Operational

HolyLFS06 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

Holystore01 (Tier 0) - Operational

Holylabs - Operational

BosLFS02 (Tier 0) - Operational

Isilon Storage Boston (Tier 1) - Operational

Isilon Storage Holyoke (Tier 1) - Operational

CEPH Storage Boston (Tier 2) - Operational

Tape (Tier 3) - Operational

Boston Specialty Storage - Operational

Holyoke Specialty Storage - Operational

Samba Cluster - Operational

Globus Data Transfer - Partial outage

bosECS - Operational

holECS - Operational

Notice history

Dec 2024

FASRC monthly maintenance - Monday December 2nd, 2024 7am-11am
  • Completed
    December 02, 2024 at 4:00 PM
    Maintenance has completed successfully
  • Update (in progress)
    December 02, 2024 at 12:52 PM

    Due to an urgent network issue that requires a restart of some network hardware, all jobs will need to be paused.

    Interactive jobs and the ability to write to some storage may be interrupted.

  • In progress
    December 02, 2024 at 12:00 PM
    Maintenance is now in progress
  • Planned
    December 02, 2024 at 12:00 PM

    FASRC monthly maintenance will occur Monday December 2nd, 2024 from 7am-11am

    IMPORTANT NOTICES

    • holyscratch01 will be set to read-only during this maintenance and will be decommissioned February 1, 2025. Please move any needed scratch data to netscratch and begin using it instead if you have not already done so. The global $SCRATCH variable will be changed to /n/netscratch. (A migration sketch follows the maintenance tasks below.)

    • FASRC will be switching to the Harvard ServiceNow ticket system on Dec. 2nd. Our email addresses remain the same and no action is required on your part.
      Please do not re-open old/closed tickets after Dec. 2nd and instead create a new ticket.

    • Cannon cluster: serial_requeue and gpu_requeue will be set to allow MPI/multinode jobs. Such jobs need to be able to handle preemption/being requeued (see the sketch after this list).
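
    A requeue-tolerant job script generally traps the warning signal, checkpoints, and resumes on restart. A minimal sketch of one way to do this (the signal timing, checkpoint path, and solver binary are illustrative, not FASRC-prescribed):

        #!/bin/bash
        #SBATCH --partition=serial_requeue   # preemptible partition named above
        #SBATCH --requeue                    # allow Slurm to requeue the job after preemption
        #SBATCH --open-mode=append           # keep appending to the same log file across restarts
        #SBATCH --signal=B:TERM@120          # deliver SIGTERM to the batch shell 120s before the kill

        # Illustrative checkpoint/restart pattern; requeued jobs keep the same job ID.
        CKPT="$SCRATCH/ckpt-$SLURM_JOB_ID"   # $SCRATCH points to /n/netscratch after this maintenance

        trap 'echo "Preempted; latest checkpoint: $CKPT"; exit 143' TERM

        if [[ -e "$CKPT" ]]; then
            echo "Restarted after requeue; resuming from $CKPT"
        fi

        # Run the application in the background and wait, so the TERM trap can fire.
        srun ./my_solver --checkpoint "$CKPT" &   # hypothetical binary that writes/reads $CKPT
        wait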

    Training: Upcoming training from FASRC and other sources can be found on our Training Calendar at https://www.rc.fas.harvard.edu/upcoming-training/

    Status Page: You can subscribe to our status page at https://status.rc.fas.harvard.edu/ to receive notifications of maintenance, incidents, and their resolution (click Get Updates for options).

    Upcoming holidays: Thanksgiving Nov. 28th and 29th; winter break Dec. 23rd through January 1st.

    MAINTENANCE TASKS
    Cannon cluster paused during this maintenance: NO
    FASSE cluster paused during this maintenance: NO

    • Set /n/holyscratch01 scratch filesystem to read-only

      • Audience: All cluster users

      • Impact: Please adopt the new scratch filesystem /n/netscratch prior to Dec. 2nd. The $SCRATCH variable will move to /n/netscratch during this maintenance.
        Data on holyscratch01 will still be readable, but not writable, and the filesystem will be fully decommissioned on Feb. 1, 2025 (a migration sketch follows this list).

    • Switch ticketing system to ServiceNow. Our email addresses remain the same.

      • Audience: All FASRC users

      • Impact: All new tickets will go to Harvard's ServiceNow; our email addresses remain the same. Existing tickets will be moved over whenever someone replies to them.

      • NOTE: From Dec. 2nd on, please do not re-open any old tickets. Create a new one instead by emailing rchelp@rc.fas.harvard.edu

    • Login node reboots

      • Audience: Anyone logged into a FASRC Cannon or FASSE login node

      • Impact: Login nodes will be rebooted during this maintenance window

    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )

      • Audience: Cluster users

      • Impact: Files older than 90 days will be removed. Please note that retention cleanup can and does run at any time, not just during the maintenance window.
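
    For the scratch migration above, copying (rather than moving) from a login node is one option, since holyscratch01 stays readable until decommissioning. A minimal sketch, with <lab> standing in for your group's directory:

        # Copy a lab's tree from the retiring scratch filesystem to netscratch.
        rsync -av --progress /n/holyscratch01/<lab>/ /n/netscratch/<lab>/

        # After the maintenance, the global variable points at the new filesystem:
        echo "$SCRATCH"    # expected: /n/netscratch

        # Retention preview: files untouched for 90+ days, which scratch cleanup
        # may remove at any time (not just during maintenance windows).
        find /n/netscratch/<lab> -type f -mtime +90 -print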

    Thank you,
    FAS Research Computing
    https://docs.rc.fas.harvard.edu/
    https://www.rc.fas.harvard.edu/upcoming-training/

Nov 2024

Oct 2024
