FAS Research Computing - Notice history

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, switch to light mode at the bottom of this page if the background is dark and colors are muted.

SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational

SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational

Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational

Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational

Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational

Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Oct 2023

Major power event at MGHPCC (Holyoke) data center
  • Resolved

    FASSE login and OOD have been returned to service.

  • Monitoring

    Most resources are once again available. The Cannon (including Kempner), FASSE, and Academic clusters are open for jobs. Please note that FASSE login and OpenOnDemand (OOD) nodes are not yet available. ETA Monday morning.

    Thanks for your patience through this unexpected event.

  • Update

    Power-up is progressing with only minor issues so far, which we are addressing in order of their impact on returning the cluster to service.

    Expect some remaining effects on less-essential services into tomorrow.

    Please note that login nodes will remain down until we return the cluster and scheduler to service.

  • Update

    MGHPCC has isolated the cause of the generator failure and will continue to look into the grid failure.

    At this time they will begin re-energizing the facility. Once that is complete and we have confirmed the networking is stable we can begin powering up our resources.

    Please bear with us as this is a long process given the number of systems we maintain and it must be done in stages. Watch this page for updates.

  • Identified

    Out of an abundance of caution, FASRC and the other MGHPCC occupants will not rush restoration, but will wait until the facility has restored primary power and confirmed stable operation before attempting to resume normal operations.

    As such, we expect to begin restoring FASRC services tomorrow (Sunday). Since all Holyoke services and resources are down, this is a lengthy process similar to the startup process after the annual power-down.

    Updates will be posted here. Please consider subscribing to our status page (see 'Get Updates' up top).

  • Investigating

    There has been a major power event at MGHPCC, our Holyoke data center.
    We are awaiting further details.

    This likely affects all Holyoke resources, including the cluster and storage housed in Holyoke.

    More details as we learn them.

FASRC monthly maintenance Monday October 2nd, 2023 7am-11am
  • Completed
    October 02, 2023 at 3:00 PM

    Maintenance has completed successfully

  • In progress
    October 02, 2023 at 11:00 AM

    Maintenance is now in progress

  • Planned
    October 02, 2023 at 11:00 AM

    FASRC monthly maintenance will take place Monday, October 2nd, 2023, from 7am-11am.

    NOTICES

    New training sessions are available. Topics include New User Training, Getting Started on FASRC with CLI, Getting Started on FASRC with OpenOnDemand, GPU Computing, Parallel Job Workflows, and Singularity. To see current and future training sessions, see our calendar at: https://www.rc.fas.harvard.edu/upcoming-training/

    MAINTENANCE TASKS

    Cannon cluster will be paused during this maintenance: Yes
    FASSE cluster will be paused during this maintenance: No

    Cannon UFM updates
    -- Audience: Cluster users
    -- Impact: The cluster will be paused while this update takes place.

    Login node and OOD/VDI reboots
    -- Audience: Anyone logged into a login node or VDI/OOD node
    -- Impact: Login and VDI/OOD nodes will be rebooted during this maintenance window.

    Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
    -- Audience: Cluster users
    -- Impact: Files older than 90 days will be removed. Please note that retention cleanup can run at any time, not just during the maintenance window.
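
    For users who want a rough sense of which scratch files may be at risk before a retention cleanup runs, the sketch below walks a directory tree and prints files not modified in the last 90 days. This is a minimal, unofficial example: the scratch path is a placeholder you would replace with your own lab's directory, and it assumes modification time approximates the retention criterion; see the scratch policy link above for the authoritative rules.

# Minimal sketch (not an official FASRC tool): list files not modified
# in the last 90 days under a scratch directory. The path below is a
# placeholder; substitute your own lab's scratch location.
import os
import time

SCRATCH_DIR = "/n/netscratch/your_lab/Users/your_username"  # assumed path
CUTOFF_SECONDS = 90 * 24 * 60 * 60  # 90 days
now = time.time()

for root, dirs, files in os.walk(SCRATCH_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            mtime = os.path.getmtime(path)
        except OSError:
            continue  # skip files removed or unreadable mid-walk
        if now - mtime > CUTOFF_SECONDS:
            print(path)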

    Thanks,
    FAS Research Computing
    Department and Service Catalog: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/

Sep 2023

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Aug 2023

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

FASRC Monthly maintenance August 7, 2023 7am-1pm *NOTE EXTENDED TIME*
  • Completed
    August 07, 2023 at 1:42 PM

    Due to a vendor error, we were unable to complete the holyscratch01 disk shelf replacement. We will work with the vendor to reschedule.

    All other maintenance tasks have completed.

  • In progress
    August 07, 2023 at 11:00 AM

    Maintenance is now in progress

  • Planned
    August 07, 2023 at 11:00 AM

    August maintenance will run August 7, 2023 from 7am-1pm.

    Please note the extended timeframe.
    See tasks section below for explanation.

    NOTICES

    • CentOS 7 Support EOL: We will be dropping support for CentOS 7 in September. If your machine or VM runs CentOS 7 and connects to Slurm, please contact FASRC to discuss options.

    • Test Partition Changes: We are adjusting the test partitions based on changing needs and increasing the maximum time from 8 hours to 12 hours. A reminder that this partition is not intended for production jobs.

    MAINTENANCE TASKS

    • holyscratch01 Disk Shelf Replacement - All Jobs Will Be Paused
      -- Audience: All cluster and scratch users - Cannon and FASSE
      -- Impact: Hardware issues with holyscratch01 necessitate the replacement of one of the disk shelves. As a result, all jobs and scratch will be paused for the duration.
      -- ETA: This swap is expected to take 3-4 hours, but pausing the cluster, vendor interactions, and allowing a margin for over-run require that we extend maintenance by 2 hours (7am-1pm).

    • Login node and OOD/VDI reboots
      -- Audience: Anyone logged into a login node or VDI/OOD node
      -- Impact: Login and VDI/OOD nodes will be rebooted during this maintenance window.

    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
      -- Audience: Cluster users
      -- Impact: Files older than 90 days will be removed.

    Thanks,
    FAS Research Computing
    Department and Service Catalog: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.
