Cluster dusky.sharcnet.ca

Links | System documentation in the SHARCNET Help Wiki
Manufacturer | HP
Operating System | CentOS 7.6
Interconnect | 10G
Total processors/cores | 1756
Nodes

Nodes | Cores | Sockets x cores | CPU | Type | Memory | Local storage | Notes
1-20 | 24 | 2 x 12 | Xeon E5-2670 v3 @ 2.3 GHz | Compute | 64 GB | 500 GB |
27 | 32 | 4 x 8 | Xeon E5-4620 @ 2.2 GHz | Compute | 1024 GB | 140 GB |
28-31 | 12 | 2 x 6 | Xeon E5-2620 @ 2.0 GHz | Compute | 256 GB | 140 GB |
32-39 | 16 | 2 x 8 | Xeon E5-2630 @ 2.4 GHz | Compute | 96 GB | 1.8 TB | 8 x NVIDIA Tesla K80 GPUs
40-59 | 24 | 2 x 12 | Xeon E5-2690 v3 @ 2.6 GHz | Compute | 64 GB | 480 GB |
60-63 | 24 | 2 x 12 | Xeon E5-2690 v3 @ 2.6 GHz | Compute | 128 GB | 480 GB |
76 | 12 | 2 x 6 | Xeon E5-2620 v3 @ 2.4 GHz | Compute | 768 GB | 5 TB |
77 | 40 | 2 x 20 | Xeon Gold 6148 @ 2.4 GHz | Compute | 768 GB | 5 TB |
78 | 32 | 2 x 16 | Xeon Silver 4314 @ 2.4 GHz | Compute | 128 GB | 480 GB |
80-96 | 24 | 2 x 12 | Xeon Gold 5317 @ 3.0 GHz | Compute | 1024 GB | 480 GB |
Total attached storage | 220 TB
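As a quick sanity check, the per-group core counts in the node table do add up to the "Total processors/cores" figure of 1756 in the summary. A short sketch (group sizes and cores-per-node transcribed from the table above):

```python
# Verify that the node groups account for the cluster's stated 1756 cores.
node_groups = [
    # (number of nodes, cores per node)
    (20, 24),  # nodes 1-20
    (1, 32),   # node 27
    (4, 12),   # nodes 28-31
    (8, 16),   # nodes 32-39 (GPU nodes)
    (20, 24),  # nodes 40-59
    (4, 24),   # nodes 60-63
    (1, 12),   # node 76
    (1, 40),   # node 77
    (1, 32),   # node 78
    (17, 24),  # nodes 80-96
]

total_cores = sum(nodes * cores for nodes, cores in node_groups)
print(total_cores)  # 1756
```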
Suitable use | Note: This system was contributed by research groups. Contributing groups receive preferential access to its resources, allocated on a "best efforts" basis by the SHARCNET system administrators, and their jobs are scheduled at higher priority than others. For the policies on the contribution of systems, please refer to Contribution of Computational Assets to SHARCNET.
Software available | FDTD, GDB, GCC, MONO, OCTAVE, R, INTEL, FFTW, OPENCV, EMACS, OPENMPI, AUTODOCKVINA, LSDYNA, PYTHON, BOOST, ABAQUS, SPARK, PERL, NINJA, SYSTEM, LAMMPS, LLVM, BINUTILS, ESPRESSO, BLAST, SUBVERSION, GIT, ACML, NCBIC++TOOLKIT, GEANT4
Recent System Notices

Date | Notes
May 30 2024, 02:08PM (7 months ago) | The cooling failure has been repaired and all nodes are back online.
May 03 2024, 03:07PM (8 months ago) | The cooling failure has been partially repaired, but the technicians are still waiting for one replacement part. Some of the newer nodes have been restarted, but we'll leave the majority of the cluster down until the repair has been completed.
Apr 29 2024, 09:12PM (8 months ago) | Nodes are offline due to a cooling failure. No jobs have been lost, but no new jobs will start until the cooling is repaired.
Mar 14 2024, 10:24AM (9 months ago) | All nodes are recovered after a previous problem with the /project filesystem.
Mar 14 2024, 09:58AM (9 months ago) | The scheduler has put nodes into a drain state due to a previous problem with the /project filesystem. We're working on restoring full service.