FAS Research Computing - Notice history

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
Documentation: https://docs.rc.fas.harvard.edu | Account Portal: https://portal.rc.fas.harvard.edu
Email: rchelp@rc.fas.harvard.edu | Support Hours




SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational


SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational


Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational


Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational


Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational


Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape - (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Sep 2025

FASRC monthly maintenance Monday September 8th, 2025 9am-1pm
  • Completed
    September 08, 2025 at 5:00 PM
    Maintenance has completed successfully
  • In progress
    September 08, 2025 at 1:00 PM
    Maintenance is now in progress
  • Planned
    September 08, 2025 at 1:00 PM

    FASRC monthly maintenance will take place Monday September 8th, 2025 from 9am-1pm

    NOTICES

    MAINTENANCE TASKS
    Cannon cluster will be paused during this maintenance?: YES
    FASSE cluster will be paused during this maintenance?: YES

    • Slurm Upgrade to 25.05.2

      • Audience: All cluster users

      • Impact: Jobs and the scheduler will be paused during this upgrade

    • Domain controller work

      • Audience: Internal network 

      • Impact: No impact expected

    • Login node reboots

      • Audience: Anyone logged into a FASRC Cannon or FASSE login node

      • Impact: All login nodes will be rebooted during this maintenance window

    • Netscratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )

      • Audience: Cluster users

      • Impact: Files older than 90 days will be removed. Please note that retention cleanup can and does run at any time, not just during the maintenance window.
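
    As a rough way to spot files that fall outside this retention window, here is a minimal Python sketch. It assumes a placeholder netscratch path (replace it with your own directory) and uses file modification time as a stand-in for the age the policy considers; it illustrates the 90-day rule and is not the cleanup tool FASRC runs.

    #!/usr/bin/env python3
    """List files older than the 90-day netscratch retention window (sketch)."""
    import os
    import sys
    import time

    RETENTION_DAYS = 90
    cutoff = time.time() - RETENTION_DAYS * 24 * 3600

    # Hypothetical example path; pass your own netscratch directory as an argument.
    root = sys.argv[1] if len(sys.argv) > 1 else "/n/netscratch/my_lab/my_user"

    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.stat(path).st_mtime
            except OSError:
                continue  # unreadable or already removed; skip
            if mtime < cutoff:
                age_days = int((time.time() - mtime) // 86400)
                print(f"{age_days:4d} days old: {path}")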

    Thank you,
    FAS Research Computing
    https://docs.rc.fas.harvard.edu/
    https://www.rc.fas.harvard.edu/

Aug 2025

SMB access to shares on the FASRC samba cluster
  • Resolved

    SMB access has been restored. Please disconnect and retry if you have a failed mapped drive. If you still cannot connect to a share, please contact rchelp@rc.fas.harvard.edu and let us know your username and exactly which share you are attempting to map.
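
    If remapping still fails after reconnecting, a quick reachability test can help separate a local drive-mapping problem from a network or VPN issue. The Python sketch below uses a placeholder hostname (the incident only identifies share paths beginning with \\smbip) and simply checks whether the server answers on the standard SMB port:

    #!/usr/bin/env python3
    """Check whether an SMB server answers on TCP port 445 (sketch)."""
    import socket
    import sys

    # Placeholder hostname; substitute the server portion of your share path.
    host = sys.argv[1] if len(sys.argv) > 1 else "smb-server.example.harvard.edu"
    port = 445  # standard SMB/CIFS port

    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} is reachable - try remapping the share")
    except OSError as exc:
        print(f"Cannot reach {host}:{port} ({exc}) - check VPN/network first")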

  • Identified

    We are continuing to work on a fix for this incident. No ETA.

  • Investigating
    Drive mapping to some shares may fail if those shares use the Samba Cluster. This includes, but is not limited to, share paths that begin with \\smbip.

    Known affected shares: anderson_lab, arlotta_lab, bellono_lab, bertoldi_lab, capellini_lab, dasch14, dasch15, dasch16, denic_lab, dobbie_lab, engert_lab, ferreira_lab, fortune_lab, friedman_lab, girguis_lab, grad_lab, hausmann_lab, hays_lab, hbs_liran, hbs_rcs, huh, illumina, jessicacohen_lab, lichtman_boslfs02, mallet_lab, mason_lab, mckinley_lab, mcz, mitrano_lab, moorcroftfs5, murraylab, nmr_large, nmr_small, novitsky_lab, pooling, qbrc_center, ramachandran_lab, schnapp_lab, schrag_lab, srivastava_lab, test, whited_lab, yau2_lab

Jul 2025

Rolling cluster OS upgrades July 7 - 10
  • Completed
    July 11, 2025 at 4:02 PM

    All upgrades are complete. A small number of nodes need clean-up, but the cluster is back to normal operation with all nodes running Rocky 8.10. Thanks for your patience.

  • Update
    July 07, 2025 at 1:00 PM

    UPDATE: 7/7/25 6PM - FASSE is operational.

    Please be aware that FASSE jobs cannot be launched at this time due to the upgrades.
    We will return all FASSE nodes to normal services as soon as possible.

    https://www.rc.fas.harvard.edu/blog/2025-compute-os-upgrade/

  • In progress
    July 07, 2025 at 1:00 PM

    Cannon rolling upgrades are in progress. Not all nodes are available.

    https://www.rc.fas.harvard.edu/blog/2025-compute-os-upgrade/

  • Planned
    July 07, 2025 at 1:00 PM

    Cluster OS upgrades - July 7-10

    • Audience: All cluster users

    • Impact: Over 4 days, July 7 through 10, we will upgrade the OS on 25% of the cluster each day.
      During that time, total capacity will be reduced across the cluster by 1/4 each day.
      This will require draining each subset of nodes ahead of time (see the sketch at the end of this entry).

    Work begins during the July 7th maintenance (login nodes will be upgraded during the 7/7 maintenance window) and will continue through July 10th.

    Additional details and a breakdown of each phase: 2025 Compute OS Upgrade ( https://www.rc.fas.harvard.edu/blog/2025-compute-os-upgrade/ )
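
    For a rough sense of how much of the cluster is out of rotation on a given day, here is a minimal Python sketch that wraps Slurm's sinfo command (assumed to be on your PATH on a login node) and counts drained or down nodes. It is an illustrative check, not an official FASRC tool.

    #!/usr/bin/env python3
    """Rough view of reduced capacity during the rolling OS upgrade (sketch)."""
    import subprocess

    def node_count(states=None):
        """Return how many distinct nodes sinfo reports, optionally filtered by state."""
        cmd = ["sinfo", "-h", "-N", "-o", "%N"]  # no header, one line per node, hostname only
        if states:
            cmd += ["-t", states]                # e.g. "drain,down"
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return len(set(out.split()))             # nodes can repeat across partitions

    total = node_count()
    unavailable = node_count("drain,down")
    if total:
        pct = 100 * unavailable / total
        print(f"{unavailable} of {total} nodes drained or down ({pct:.1f}% of capacity)")
    else:
        print("sinfo reported no nodes")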

FASRC Monthly maintenance July 7, 2025 9AM-1PM
  • Completed
    July 07, 2025 at 5:00 PM
    Maintenance has completed successfully
  • In progress
    July 07, 2025 at 1:00 PM
    Maintenance is now in progress
  • Planned
    July 07, 2025 at 1:00 PM

    FASRC monthly maintenance will take place Monday July 7th, 2025 from 9am-1pm

    NOTICES

    • New Quota tool available (/usr/local/sbin/quota) - Works on all filesystem types (home directory, lustre, isilon, netscratch, etc.)
      Type quota -h to see the full usage instructions or visit the usage doc (a small wrapper sketch follows this list).

    • Training: Upcoming training from FASRC and other sources can be found on our Training Calendar at https://www.rc.fas.harvard.edu/upcoming-training/

    • Status Page: You can subscribe to our status page to receive notifications of maintenance, incidents, and their resolution at https://status.rc.fas.harvard.edu/ (click Get Updates for options).

    • Upcoming holidays: Juneteenth - Thur. June 19 / Independence Day - Fri. July 4
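
    As a quick illustration of calling the new quota tool from a script, here is a minimal Python sketch. It assumes the tool lives at /usr/local/sbin/quota as stated above and (an assumption) writes its report to standard output; -h is the only flag the notice documents, and any arguments you pass are simply forwarded.

    #!/usr/bin/env python3
    """Minimal wrapper around the FASRC quota tool (sketch; see assumptions above)."""
    import subprocess
    import sys

    QUOTA = "/usr/local/sbin/quota"  # path from the maintenance notice

    # Forward any arguments, e.g. `python check_quota.py -h` for the usage help.
    result = subprocess.run([QUOTA, *sys.argv[1:]], capture_output=True, text=True)
    print(result.stdout, end="")
    if result.returncode != 0:
        sys.stderr.write(result.stderr)  # surface errors (e.g. when run off-cluster)
        sys.exit(result.returncode)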

    MAINTENANCE TASKS
    Cannon cluster will be paused during this maintenance?: YES
    FASSE cluster will be paused during this maintenance?: YES

    • Slurm Upgrade to 24.11.5

      • Audience: All cluster users

      • Impact: Jobs and the scheduler will be paused during this upgrade

    • Login node OS upgrades

      • Audience: Anyone logged into a FASRC Cannon or FASSE login node

      • Impact: All login nodes will be upgraded and unavailable during this maintenance window

    • Start of cluster OS upgrades - July 7-10

      • Audience: All cluster users

      • Impact: Over 4 days, July 7 through 10, we will upgrade the OS on 25% of the cluster each day. During that time, total capacity will be reduced across the cluster by 1/4 each day. This will require draining each subset of nodes ahead of time.

    • Netscratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )

      • Audience: Cluster users

      • Impact: Files older than 90 days will be removed. Please note that retention cleanup can and does run at any time, not just during the maintenance window.

    Thank you,
    FAS Research Computing
    https://docs.rc.fas.harvard.edu/
    https://www.rc.fas.harvard.edu/
