FAS Research Computing - Status Page

seas_compute experiencing degraded performance

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu



FASRC monthly maintenance Monday April 7th, 2025 from 8am-1pm
Scheduled for April 07, 2025 at 12:00 PM – 5:00 PM (about 5 hours)
  • Planned
    April 07, 2025 at 12:00 PM

    FASRC monthly maintenance will take place Monday April 7th, 2025 from 8am-1pm

    MAINTENANCE TASKS
    Will the Cannon cluster be paused during this maintenance? YES
    Will the FASSE cluster be paused during this maintenance? YES

    • Slurm Upgrade to 24.11.3

      • Audience: All cluster users

      • Impact: Jobs and the scheduler will be paused during this upgrade (a scheduler status-check sketch appears after this task list)

    • Holylogin chassis repair

      • Audience: All cluster users

      • Impact: Holylogin nodes will be unavailable

    • bos-isilon code upgrade

      • Audience: All shares on bos-isilon (Boston isilon/Tier1)

      • Impact: bos-isilon shares may be unavailable during reboots

    • Tier 2 NFS server updates (h-nfsxx, b-nfsxx shares)

      • Audience: All b-nfsxx and h-nfsxx Tier2 NFS shares

      • Impact: Shares unavailable during update/reboot

    • OOD node maintenance

      • Audience: OOD (Open OnDemand/VDI) users

      • Impact: OOD nodes will be unavailable during this maintenance window

    • Portal database migration

    • Login node reboots

      • Audience: Anyone logged into a FASRC Cannon or FASSE login node

      • Impact: All login nodes will be rebooted during this maintenance window

    • Netscratch cleanup (https://docs.rc.fas.harvard.edu/kb/policy-scratch/)

      • Audience: Cluster users

      • Impact: Files older than 90 days will be removed. Please note that retention cleanup can and does run at any time, not just during the maintenance window. (See the file-age sketch after this task list.)
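
    For the netscratch retention policy above, the sketch below walks a scratch directory and prints files whose modification time is older than 90 days. This is a minimal sketch, not an official FASRC tool: SCRATCH_DIR is a placeholder for your own lab directory, and the policy page linked above is authoritative on which timestamp the actual cleanup job uses.

        #!/usr/bin/env python3
        """List files the 90-day netscratch retention could remove.

        A minimal sketch, not an official FASRC tool. SCRATCH_DIR is a
        placeholder; the real cleanup may key on a different timestamp
        than mtime (see the policy page for the authoritative rules).
        """
        import os
        import time

        SCRATCH_DIR = "/n/netscratch/your_lab"  # placeholder: your lab's scratch dir
        MAX_AGE_DAYS = 90
        cutoff = time.time() - MAX_AGE_DAYS * 86400  # 86400 seconds per day

        for root, _dirs, files in os.walk(SCRATCH_DIR):
            for name in files:
                path = os.path.join(root, name)
                try:
                    if os.stat(path).st_mtime < cutoff:
                        print(path)
                except OSError:
                    pass  # file vanished or is unreadable; skip it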
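
    For the Slurm upgrade above, one way to tell when the scheduler is accepting requests again after the pause is to poll a standard Slurm client command until it succeeds. Again a minimal sketch, not an FASRC-provided script; it assumes the standard Slurm CLI (squeue) is on your PATH on a login node.

        #!/usr/bin/env python3
        """Poll Slurm until the controller responds after the pause.

        A minimal sketch, not an FASRC-provided script. Assumes the
        standard Slurm client `squeue` is on PATH; `squeue --me` lists
        only your own jobs and exits nonzero while the controller is
        unreachable.
        """
        import subprocess
        import time

        while True:
            result = subprocess.run(
                ["squeue", "--me"],  # standard Slurm client command
                capture_output=True,
                text=True,
            )
            if result.returncode == 0:
                print("Scheduler is responding:")
                print(result.stdout)
                break
            print("Scheduler not responding yet; retrying in 60s...")
            time.sleep(60)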

    Thank you,
    FAS Research Computing
    https://docs.rc.fas.harvard.edu/
    https://www.rc.fas.harvard.edu/

SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Degraded performance

SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational

Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational

Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational

Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational

Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape - (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational
