FutureSystems
Function: Computer systems optimized for Big Data research and analysis. The clusters support Hadoop, Spark, and Twister2, with a mix of solid-state (NVMe) and local HDD storage.
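As a point of reference for the frameworks named above, the core pattern they distribute is MapReduce-style aggregation. The sketch below shows that pattern in plain Python as a minimal stand-in (it does not use the Hadoop/Spark APIs and assumes nothing about the cluster software); on the clusters listed here the same map and reduce logic would be partitioned across nodes by the framework.

```python
# Minimal word-count in the MapReduce style used by Hadoop/Spark.
# Pure-Python stand-in: map each line to (word, 1) pairs, then
# reduce by key. A framework would shard the map and reduce
# phases across cluster nodes; here both run locally.
from collections import Counter
from itertools import chain

def map_phase(lines):
    # Emit a (word, 1) pair for every word in every line.
    return chain.from_iterable(
        ((w.lower(), 1) for w in line.split()) for line in lines
    )

def reduce_phase(pairs):
    # Sum the counts per key.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    lines = ["big data big compute", "data analysis"]
    print(reduce_phase(map_phase(lines)))
```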
Features:
a. 3456-core Haswell Cluster (Juliet)
The Haswell cluster (Juliet) is a SuperMicro distributed shared memory cluster with 3456 CPU cores and 16TB total memory capacity.
- Compute Nodes: SuperMicro X10DRT-HIBF servers
- 32 nodes: 2×18-core Intel® Xeon® CPU E5-2699 v3 2.30GHz
- 96 nodes: 2×12-core Intel® Xeon® CPU E5-2670 v3 2.30GHz
- Each node: 128GB memory, 8TB local disk, 400GB NVMe storage
- Network: Mellanox ConnectX-3 InfiniBand FDR 56 Gb/s
Operating System: Red Hat Enterprise Linux 7.4
b. 136-core NVIDIA K80/Volta GPU Cluster (Romeo)
The K80/Volta GPU cluster (Romeo) is a SuperMicro distributed shared memory cluster with 136 CPU cores (4 nodes × 24 + 2 nodes × 20), 161,792 CUDA cores, and 768GB total memory capacity.
- 4× SuperMicro X10DGQ servers: 2×12-core Intel® Xeon® E5-2670 v3 2.30GHz, 4× NVIDIA Tesla K80 GPUs (4992 CUDA cores each)
- 2× SuperMicro X10DGO servers: 2×10-core Intel® Xeon® E5-2600 v4 2.2GHz, 8× NVIDIA Tesla V100 GPUs (5120 CUDA cores each)
- Each node: 128GB memory, 8TB local disk, 400GB NVMe storage
- Network: Mellanox ConnectX-3 InfiniBand FDR 56 Gb/s
Operating System: Red Hat Enterprise Linux 7.4
c. 4416-core Knights Landing Cluster (Tango)
The Knights Landing cluster (Tango) is a Penguin Computing distributed shared memory cluster with 4416 Xeon Phi cores (16 nodes × 72 + 48 nodes × 68) and 12.8TB total memory capacity.
- 16 nodes: 1×72-core Intel® Xeon Phi™ 7290F 1.50GHz
- 48 nodes: 1×68-core Intel® Xeon Phi™ 7250F 1.50GHz
- Each node: 200GB memory, 3.2TB local disk, 800GB NVMe storage
- Network: Intel OmniPath adapter
Operating System: CentOS 7.2.1511
d. 480-core Platinum Cluster (Tempest)
The Platinum cluster (Tempest) is a SuperMicro distributed shared memory cluster with 480 CPU cores and 2.5TB total memory capacity.
- 10 compute nodes: SuperMicro X11DPT-PS servers
- Each node: 2×24-core Intel® Xeon® Platinum 8160 2.10GHz
- Each node: 256GB memory, 8TB local disk, 400GB NVMe storage
- Network: Intel OmniPath adapter
Operating System: Red Hat Enterprise Linux 7.4
e. 768-core Platinum Cluster (Victor)
The Platinum cluster (Victor) is a SuperMicro distributed shared memory cluster with 768 CPU cores and 4TB total memory capacity (16 nodes × 256GB).
- 16 compute nodes: SuperMicro X11DPT-PS servers
- Each node: 2×24-core Intel® Xeon® Platinum 8160 2.10GHz
- Each node: 256GB memory, 8TB local disk, 400GB NVMe storage
- Network: Mellanox ConnectX-3 InfiniBand FDR 56 Gb/s
Operating System: Red Hat Enterprise Linux 7.4
f. 192-core Cloud Cluster (Echo)
The cloud cluster (Echo) is a SuperMicro distributed shared memory cluster with 192 CPU cores and 6TB total memory capacity.
- 16× SuperMicro X9DRW servers
- Each node: 2×6-core Intel® Xeon® E5-2640 2.50GHz
- Each node: 384GB memory, 10TB local disk
- Network: 10 Gigabit Ethernet and Mellanox ConnectX-3 InfiniBand FDR 56 Gb/s
Operating System: Ubuntu Linux 16.04
g. 128-core HP Cluster (Bravo)
The large-memory HP cluster (Bravo) is a 1.7 TFLOPS HP ProLiant distributed shared memory cluster with 128 processor cores and 3TB total memory capacity.
- 16× HP DL180 servers
- Each node: 2×4-core Intel® Xeon® E5620 2.40GHz
- Each node: 192GB memory, 12TB local storage
- Network: PCIe 4x QDR InfiniBand adapter
Bravo is currently used as a shared storage cluster and is not being utilized for compute jobs.
Operating System: Red Hat Enterprise Linux 6.9
h. 192-core Tesla GPU Cluster (Delta)
The GPU cluster (Delta) is a SuperMicro distributed shared memory cluster with 192 CPU cores, 14,336 GPU cores, and 3TB total memory capacity.
- 16× SuperMicro X8DTG-QF servers
- Each node: 2×6-core Intel® Xeon® X5660 2.80GHz
- Each with 2× NVIDIA Tesla C2075 GPUs (448 cores per GPU)
- Each node: 192GB memory, 9TB local storage
- Network: Mellanox ConnectX-2 VPI dual-port InfiniBand QDR/10GigE adapter
Operating System: Red Hat Enterprise Linux 7.4
Contact:
- Unit: Digital Science Center
- Campus: Bloomington
- Resource Type: Equipment
- Contact Name: Gary Miksik
- Contact Email: gmiksik@indiana.edu

