Assembling my 32 GPU RIG Part 2

All 4 nodes are now installed. Due to limited availability of GTX 295 cards, only 8 GPUs are in place so far. The total number of GPUs will be 32 (4 nodes × 4 dual-GPU GTX 295 cards × 2 GPUs per card = 32 GPUs). The cooling panel will probably be added next weekend.

 

Estonia Donates 32 GPU Supercomputer: 4 nodes and 8 GPUs installed

Typical single-node configuration (nodes ED01 – ED04): K9A2 Platinum system board, 4 dual-GPU GTX 295 graphics cards, one or two PSUs, and a local hard drive.

All nodes are interconnected via a Cisco/Linksys rackmount switch. Power is distributed via 2 independent 240 V rails.

 

62 000 PPD with 8 GPUs (32 GPUs total in the near future)

Each node keeps its work directories on the local hard drive for better autonomy and fault tolerance. FahMon is installed on ED01. All work directories are shared over the network and made accessible to FahMon, which displays an aggregate report and the total PPD figure.
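
As a rough illustration (this is my own sketch, not part of the actual setup; the per-node PPD numbers below are hypothetical and only their sum matches the ~62 000 PPD reported above), aggregating per-node PPD and projecting the full 32-GPU build-out could look like this:

```python
# Hypothetical sketch: aggregate per-node PPD and project the full build-out.
# Node names follow the post (ED01-ED04); the per-node PPD values are made up.

ppd_per_node = {
    "ED01": 15_600,   # hypothetical per-node PPD readings
    "ED02": 15_400,
    "ED03": 15_500,
    "ED04": 15_500,
}

installed_gpus = 8
planned_gpus = 32

current_ppd = sum(ppd_per_node.values())
# Assuming PPD scales roughly linearly with GPU count:
projected_ppd = current_ppd * planned_gpus / installed_gpus

print(f"Current:   {current_ppd:,.0f} PPD on {installed_gpus} GPUs")
print(f"Projected: {projected_ppd:,.0f} PPD on {planned_gpus} GPUs (linear scaling)")
```

With linear scaling that works out to roughly 248 000 PPD once all 32 GPUs are installed.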

Every node is also remotely accessible via LogMeIn. It's easy to check the rigs' GPU temperatures and other parameters from the office or from home.

Assembling my 32 GPU RIG Part 1

Empty 19″ rack waiting for the 4 nodes and other equipment.

 

2 power distribution panels

Cisco / Linksys switch for node interconnection

After the break, installing the nodes.

Solving the cooling

How do you remove 4-5 kW of heat from a device packed into a relatively tight space? My solution is air cooling and precise airflow control.

 

Forced Air Cooling

All 4 nodes are housed inside a closed 24U rack. Cold air is drawn in near the floor at the bottom and pushed to the back of the closed rack. From there it is pulled from back to front through the GTX 295 cards. Hot air is immediately moved out of the rack and rises up towards the air-conditioning units of the server room.

The fan plate holds 3 × 170 mm diameter 240 V AC fans running at full speed.
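
As a rough sanity check (my own back-of-the-envelope sketch, not a measurement from the rig; the heat load and allowed temperature rise are assumptions), the airflow needed to carry the heat away follows from Q = P / (ρ · cp · ΔT):

```python
# Back-of-the-envelope airflow estimate for the closed rack (assumptions, not measurements).

heat_load_w = 4500.0        # assumed heat load, roughly midway between 4 and 5 kW
delta_t_c = 10.0            # assumed air temperature rise through the rack, in °C
air_density = 1.2           # kg/m^3, air at ~20 °C
air_specific_heat = 1005.0  # J/(kg*K)

# Required volumetric airflow: Q = P / (rho * cp * dT)
flow_m3_per_s = heat_load_w / (air_density * air_specific_heat * delta_t_c)
flow_m3_per_h = flow_m3_per_s * 3600
flow_cfm = flow_m3_per_h / 1.699  # 1 CFM ≈ 1.699 m^3/h

print(f"Required airflow: {flow_m3_per_h:.0f} m^3/h (~{flow_cfm:.0f} CFM)")
# Roughly 1300-1400 m^3/h for a 10 °C rise; plausible for three large AC fans,
# assuming each moves several hundred m^3/h at full speed.
```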

Assembling first node for my 4-node GPU rack

I have a 24U rack waiting to house 4 nodes, each with 4 dual-GPU graphics cards (8 GPUs per node), 32 GPUs in total. The combined arithmetic performance is about 28 000 GFLOPS = 28 TFLOPS.
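
That figure lines up with the theoretical single-precision peak of the GTX 295. A quick check from the card's published shader count and clock (theoretical peak only; real folding workloads achieve far less):

```python
# Rough check of the ~28 TFLOPS figure from GTX 295 specifications
# (theoretical single-precision peak, not sustained throughput).

gpus_per_card = 2
shaders_per_gpu = 240            # stream processors per GPU on the GTX 295
shader_clock_ghz = 1.242         # shader clock
flops_per_shader_per_clock = 3   # MAD + MUL per clock, NVIDIA's peak-rate convention

gflops_per_card = gpus_per_card * shaders_per_gpu * flops_per_shader_per_clock * shader_clock_ghz
cards_total = 16                 # 4 nodes x 4 cards

print(f"Per card: ~{gflops_per_card:.0f} GFLOPS")
print(f"16 cards: ~{gflops_per_card * cards_total / 1000:.1f} TFLOPS")
```
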
I have 4 shelves (4U height), each housing 1 or 2 PSUs, a system board, 4 GTX 295 graphics cards and a hard drive. The trickiest part was mounting the system board to the shelf: the holes didn't match, so I bought 3 mm bolts and 20 mm spacers to get a solid mounting. The HDD sits in a plastic HDD bracket attached to the shelf.
Components:
planning components layout

 

20 mm spacers between rack shelf and K9A2 Platinum system board

 

System board on the left, PSU with cable management on the right

 

HDD in the bracket, cables attached