FAS Research Computing - Notice history

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
Documentation: https://docs.rc.fas.harvard.edu | Account Portal https://portal.rc.fas.harvard.edu
Email: rchelp@rc.fas.harvard.edu | Support Hours


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, switch to light mode at the bottom of this page if the background is dark and colors are muted.


SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational


SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational


Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational


FASSE login nodes - Operational


Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational


Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape (Tier 3) - Under maintenance

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Dec 2025

Monthly Maintenance and MGHPCC Power Work - Dec. 8, 2025 6am-6pm
Scheduled for December 08, 2025 at 11:00 AM – 11:00 PM (about 12 hours)
  • Planned
    December 08, 2025 at 11:00 AM

    Monthly maintenance will take place on December 8th. Our regular maintenance tasks should be completed between 9am and 1pm.

    Additionally, MGHPCC will be performing power upgrades on the odd side of Row 8A, where much of our compute resides. This is the final upgrade for this row. The current estimate for this work is a 12-hour window, 6am-6pm.

    A list of the affected partitions is provided at the bottom of this notice. The nodes in those partitions will be drained prior to the work and will be powered down. Once the work is completed, those nodes will be returned to service. 

    Notices:

    • New FASSE partition fasse_gpu_h200. This partition has 2 H200 nodes and a 3-day limit. It is available now.

    • 11/26 - 11/28 are university holidays (Thanksgiving). No on-site support, FASRC staff will return on 12/1.

    • Training: Upcoming training from FASRC and other sources can be found on our Training Calendar at https://www.rc.fas.harvard.edu/upcoming-training/

    • Status Page: You can subscribe to our status page to receive notifications of maintenance, incidents, and their resolution at https://status.rc.fas.harvard.edu/ (click Get Updates for options).

    • We'd love to hear success stories about your or your lab's use of FASRC. Submit your story here.
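
    For users of the new fasse_gpu_h200 partition mentioned above, a minimal Slurm batch script might look like the following. This is a sketch, not an official template: the GPU count, memory request, and workload line are placeholder assumptions; only the partition name and 3-day limit come from the notice.

    ```shell
    #!/bin/bash
    #SBATCH --partition=fasse_gpu_h200   # new FASSE H200 partition (from the notice)
    #SBATCH --time=3-00:00:00            # partition limit is 3 days
    #SBATCH --gres=gpu:1                 # placeholder: request one H200 GPU
    #SBATCH --mem=64G                    # placeholder memory request
    #SBATCH --job-name=h200_sketch

    # Placeholder workload: report the GPU visible to the job.
    nvidia-smi
    ```

    Submit with `sbatch <script>` and monitor with `squeue -u $USER`.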

    MAINTENANCE TASKS

    Cannon cluster will be paused during this maintenance?: PARTIAL OUTAGE/YES
    FASSE cluster will be paused during this maintenance?: PARTIAL OUTAGE/YES

    • Power work on Row 8A odd

      • Audience: Users of the partitions listed below

      • Impact: These nodes and partitions will be fully or partially down all day

    • OneFS (Isilon) upgrade

      • Audience: All Isilon (Tier 1) shares

      • Impact: Some VMs will be impacted including Cannon OOD, CBScentral, MCZapps/MCZbase, Portal, and Rclic1 (license server)

    • Slurm upgrade to 25.05.5

      • Audience: All cluster users

      • Impact: Jobs will be paused during maintenance

    • Login node reboots

      • Audience: All login node users

      • Impact: Login nodes will reboot during the maintenance window

    Impacted Cannon Partitions (Full or Partial Outage):

    • arguelles_delgado_gpu_a100

    • arguelles_delgado_gpu_mixed

    • bigmem_intermediate

    • blackhole_gpu

    • eddy

    • gershman

    • gpu_requeue

    • hejazi

    • hernquist_ice

    • hoekstra

    • huce_ice

    • iaifi_gpu

    • iaifi_gpu_priority

    • iaifi_gpu_requeue

    • itc_gpu

    • jshapiro

    • kempner

    • kempner_dev

    • kempner_priority

    • kempner_h100

    • kempner_h100_priority

    • kempner_h100_priority2

    • kempner_h100_priority3

    • kempner_interactive

    • kempner_requeue

    • kovac

    • kozinsky

    • kozinsky_gpu

    • kozinsky_priority

    • kozinsky_requeue

    • murphy_ice

    • ortegahernandez_ice

    • rivas

    • seas_compute

    • seas_gpu

    • serial_requeue

    • siag_combo

    • siag_gpu

    • sur

    • zhuang

NESE tape system maintenance 12/1/25-12/5/25
Scheduled for December 01, 2025 at 11:00 AM – December 06, 2025 at 11:00 AM 5 days
  • In progress
    December 01, 2025 at 11:00 AM
    Maintenance is now in progress
  • Planned
    December 01, 2025 at 11:00 AM

    NESE, the Northeast Storage Exchange at MGHPCC, which supplies the Tier 3 tape service used by FASRC, will be offline for system maintenance Dec 1st - 5th. Performance-affecting maintenance will continue until Dec 12th. Please see below for details.

    WHO: Any lab that has moved or is moving data to tape.

    IMPACT: No access 12/1/25 - 12/5/25. Reduced performance 12/5/25 - 12/12/25.

    The NESE tape system maintenance and major software upgrade are scheduled to begin on December 1, 2025. As a result, the NESE tape service will be offline from December 1 to December 5.

    Starting December 8 through December 12, the service will be back online with reduced performance. All maintenance activities are planned to conclude on December 12, 2025.

Nov 2025

Oct 2025

FASRC monthly maintenance Monday October 6th, 2025 9am-1pm
  • Completed
    October 06, 2025 at 5:00 PM
    Maintenance is complete
  • In progress
    October 06, 2025 at 1:00 PM
    Maintenance is now in progress
  • Planned
    October 06, 2025 at 1:00 PM

    FASRC monthly maintenance will take place Monday October 6th, 2025 from 9am-1pm

    NOTICES

    MAINTENANCE TASKS
    Cannon cluster will be paused during this maintenance?: NO
    FASSE cluster will be paused during this maintenance?: NO

    • DNS server reboots

      • Audience: All FASRC services

      • Impact: Rolling reboot should have no impact

    • Login node reboots

      • Audience: Anyone logged into a FASRC Cannon or FASSE login node

      • Impact: All login nodes will be rebooted during this maintenance window

    • Netscratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )

      • Audience: Cluster users

      • Impact: Files older than 90 days will be removed. Please note that retention cleanup can and does run at any time, not just during the maintenance window.
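
    Since retention cleanup can run at any time, it can be useful to preview which of your files fall outside the 90-day window before a sweep. A minimal sketch using standard `find` (NETSCRATCH_DIR is a placeholder, not an FASRC tool; it defaults to the current directory so the snippet runs anywhere, but you would point it at your lab's netscratch path):

    ```shell
    # Sketch only: list files not modified in the last 90 days, i.e.
    # candidates for the retention cleanup. Set NETSCRATCH_DIR to your
    # lab's netscratch directory; defaults to "." for illustration.
    NETSCRATCH_DIR="${NETSCRATCH_DIR:-.}"
    find "$NETSCRATCH_DIR" -type f -mtime +90 -print
    ```

    Note that `-mtime` checks modification time; the actual cleanup criteria are described in the scratch policy page linked above.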

    Thank you,
    FAS Research Computing
    https://docs.rc.fas.harvard.edu/
    https://www.rc.fas.harvard.edu/
