[Cs-affiliates] Fwd: SCC Expansion and Enhancement

Paul Stauffer paulds at bu.edu
Mon Apr 10 16:03:22 EDT 2017

Hi all.  I believe this announcement only went out to people with existing
SCC accounts; for the rest of you, perhaps this will spur your interest in
learning more about this excellent computational resource available to all
BU researchers.

For more information on BU's Shared Computing Cluster, or to request an
account, please visit:

- Paul

----- Forwarded message from Research Computing Services <rcs at bu.edu> -----

> Date: Mon, 10 Apr 2017 14:52:34 -0400 (EDT)
> From: Research Computing Services <rcs at bu.edu>
> To: paulds at bu.edu
> Subject: SCC Expansion and Enhancement
>
> Dear Researcher,
>
> We are pleased to announce that a major expansion has been made to the
> Shared portion of the Shared Computing Cluster (SCC), enabling researchers
> to run several types of jobs that were not previously possible on the SCC.
> New resources include state-of-the-art GPUs, larger-memory nodes, and MPI
> nodes with a faster interconnect fabric.  The new nodes all have at least
> 28 CPU cores, allowing researchers to run larger single-node OpenMP jobs
> and larger-scale MPI jobs.  They also have 10 Gb Ethernet access to the
> SCC filesystem.
> There are 86 new nodes with a total of 2424 CPU cores.  Given the new 
> capacity, the per-user limit on simultaneous use of Shared cores has been
> increased from 512 to 1000.
>
> The new nodes were recently put into production and your jobs may have 
> already utilized them.  They were acquired with University funds as part of 
> an effort to update older cluster resources with newer technology.  These
> older resources will be retired at a later date.
>
> Details on the expansion are summarized below and have also been
> incorporated into the RCS web site.  If you have questions about using any
> of these resources, please don't hesitate to contact the RCS staff at
> help at scc.bu.edu.
>
> Sincerely,
> RCS Staff
> ============================
> 36 nodes, each with 28 cores and 256 GB memory 
> Policy: 30-day time limit
> Intended for single-processor and small multithreaded jobs. 
> For more information on requesting appropriate resources for your batch 
> jobs see:
> http://www.bu.edu/tech/support/research/system-usage/running-jobs/submitting-jobs/ 
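As a sketch only (the announcement links to the real examples above): a minimal batch script for a small multithreaded job on these nodes might look like the following, assuming an SGE-style `qsub` interface. The `omp` parallel environment name, the `h_rt` resource, and the program name are assumptions, not details confirmed by this announcement.

```shell
#!/bin/bash -l
# Hypothetical SGE-style batch script for a single-node multithreaded
# (OpenMP) job.  The "#$" directives and resource names are assumptions.

#$ -N omp_example          # job name
#$ -pe omp 16              # request 16 slots on one node (new nodes have >= 28 cores)
#$ -l h_rt=24:00:00        # wall-clock limit; these nodes allow up to 30 days

# SGE-style schedulers export NSLOTS with the number of granted slots;
# use it to size the OpenMP thread pool.
export OMP_NUM_THREADS=$NSLOTS
./my_openmp_program        # hypothetical executable
```

The script would be submitted with something like `qsub omp_example.sh`; see the link above for the SCC's actual conventions.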
> ============================
> 2 nodes, each with 36 cores and 1 TB memory
> 8 nodes, each with 28 cores and 512 GB memory
> Policy: 10-day time limit; 1 running job per user on the 1 TB nodes; 2 running
> jobs per user on the 512 GB nodes
> To run on these nodes, the job must request the whole node.  For more
> information and examples see:
> http://www.bu.edu/tech/support/research/system-usage/running-jobs/batch-script-examples/#LARGEMEMORY
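The whole-node requirement can be met by requesting every slot on the node. A hypothetical sketch, again assuming an SGE-style scheduler; the parallel environment name, the total-memory resource name, and the program name are assumptions:

```shell
#!/bin/bash -l
# Hypothetical whole-node job on a 512 GB large-memory node.
# Directive and resource names below are assumptions.

#$ -N bigmem_example
#$ -pe omp 28              # all 28 cores, so the job owns the whole node
#$ -l mem_total=500G       # assumption: resource selecting the 512 GB node class
#$ -l h_rt=240:00:00       # 10-day limit on the large-memory nodes

./my_large_memory_job      # hypothetical executable
```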
> ============================
> 36 nodes, each with 28 cores, 256 GB memory, and 100 Gb/s EDR InfiniBand fabric.
> Policy: 5-day time limit; maximum of 448 cores per user for MPI jobs
> running on these nodes
> For more information on running MPI jobs see:
> http://www.bu.edu/tech/support/research/system-usage/running-jobs/parallel-batch/#mpi 
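For illustration, an MPI job spanning two of the new EDR nodes could be sketched as below, assuming an SGE-style scheduler. The parallel environment name, the module name, and the program name are assumptions about the cluster's configuration, not taken from this announcement.

```shell
#!/bin/bash -l
# Hypothetical MPI batch script for the new EDR-InfiniBand nodes.
# PE name, module name, and program name are assumptions.

#$ -N mpi_example
#$ -pe mpi_28_tasks_per_node 56   # assumption: 56 slots = 2 whole 28-core nodes
#$ -l h_rt=120:00:00              # 5-day limit on these nodes

module load openmpi               # assumption: environment modules are available
mpirun -np $NSLOTS ./my_mpi_program   # NSLOTS is set by the scheduler
```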
> ============================
> 4 nodes, each with 28 CPU cores, 256 GB memory, and 2 NVIDIA P100 GPUs
> Policy: 48-hour time limit; 2 GPUs per user for batch jobs; 1 GPU per user
> for interactive jobs
> These state-of-the-art general-purpose GPUs are appropriate for a range of
> applications, including machine learning, computational chemistry, fluid
> dynamics, bioinformatics, and numerical analytics, among many others.
> For more information on running jobs using GPUs see:
> http://www.bu.edu/tech/support/research/system-usage/running-jobs/parallel-batch/#gpu 
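A GPU batch job could be requested roughly as follows; this is a sketch assuming an SGE-style scheduler with a per-job GPU resource. The resource name, module name, and program name are assumptions, not confirmed by this announcement.

```shell
#!/bin/bash -l
# Hypothetical batch script requesting one of the new P100 GPUs.
# The GPU resource name and module name are assumptions.

#$ -N gpu_example
#$ -l gpus=1               # assumption: request 1 GPU (batch limit is 2 per user)
#$ -l h_rt=48:00:00        # 48-hour limit for GPU batch jobs

module load cuda           # assumption: environment modules are available
./my_cuda_program          # hypothetical executable
```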
> For details on all SCC nodes see the Technical Summary: 
> http://www.bu.edu/tech/support/research/computing-resources/tech-summary/

----- End forwarded message -----

Paul Stauffer <paulds at bu.edu>
Manager of Systems Administration
Computer Science Department
Boston University
