Folding at 45 000 PPD pace

 

45K PPD daily production

My rigs:

  1. 3 x GTX 295 rig, averaging 33 000-38 000 PPD
  2. 2 x GTX 280 rig, averaging 11 000-14 000 PPD

Next week I start my super 4 x GTX 295 rig in the new 2 PSU case. I hope to improve daily production by an additional 11 000-14 000 PPD, to the 55-60K level. That would put me on the world top 100 individual folders list. Today I'm the No. 138 individual folder in the world (by daily production over the last 24 hours).
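A quick sanity check of that projection, just adding the expected gain from the new rig to today's output (simple arithmetic, figures from above):

```python
# Projected daily production once the 4 x GTX 295 rig comes online.
current_ppd = 45_000                      # today's combined output of both rigs
extra_low, extra_high = 11_000, 14_000    # expected gain from the new rig
print(f"{current_ppd + extra_low:,} - {current_ppd + extra_high:,} PPD")  # 56,000 - 59,000 PPD
```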


Coolermaster HAF 2 PSU case arrived

 

Coolermaster HAF

My new Coolermaster HAF 932 double PSU case arrived today. It needs some modding to install 4 fat GTX 295 cards.

Folding pretty well

Outlining my GPU cluster

 

  • The cluster will have 8-16 diskless nodes with GPUs and 1 server;
  • I like the NVIDIA GTX 295 because of its 55 nm technology: less heat and less power per PPD. We have 4 PCIe slots to spare on one host, and density is very important, since each new host/node adds overhead cost. I believe the next core will support more streaming processors;
  • Most probably the next GTX 300 family is still 55 nm, not 40 nm (as ATI), so there is no point in waiting for the GTX 300;
  • I wonder why ATI is so poor in computing and so good in games… NVIDIA is my choice for GPU;
  • A 19″ rack mount is an excellent basis. I would put at least 2 ATX boards side by side, both equipped with 4 double-GPU cards. All air-cooled. Total thermal output is 9-20 kW;
  • Diskless nodes, PXE-booting via Ethernet from the server HDD;
  • Custom-made central power supply (one for all 8 to 16 nodes, ca 20 kW output; see the sizing sketch after this list);
  • I would leave CPU / SMP folding out of the picture. I'd rather use MSI K9A2 Platinum motherboards with 4 x PCIe x16 slots and cheap Phenom X3 CPUs with 2 GB RAM (enough);
  • Each node will have 4 x double-GPU cards (GTX 295, for example);
  • A very important aspect is management of such a system. I don't have a solution yet, but VMware or Windows HPC will most probably do it;
  • I would prefer the Windows platform, because of better GPU drivers.
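To get a feel for the numbers behind the 8-16 node range and the ~20 kW central PSU, here is a rough sizing sketch. The per-card power and PPD values are my own ballpark assumptions (roughly in line with the 33 000-38 000 PPD of my 3-card rig), not vendor specs:

```python
# Rough sizing of the planned cluster. Ballpark assumptions, not measurements:
# ~289 W board power and ~11 000 PPD per GTX 295 card, plus ~150 W of node
# overhead for CPU, motherboard, RAM and fans.

CARDS_PER_NODE = 4        # double-GPU cards (GTX 295) per node
WATTS_PER_CARD = 289
PPD_PER_CARD = 11_000
NODE_OVERHEAD_W = 150

def estimate(nodes: int) -> str:
    cards = nodes * CARDS_PER_NODE
    watts = cards * WATTS_PER_CARD + nodes * NODE_OVERHEAD_W
    return f"{nodes} nodes: {cards} cards, ~{watts / 1000:.1f} kW, ~{cards * PPD_PER_CARD:,} PPD"

for n in (8, 16):
    print(estimate(n))
# 8 nodes:  32 cards, ~10.4 kW, ~352,000 PPD
# 16 nodes: 64 cards, ~20.9 kW, ~704,000 PPD
```

Under these assumptions the full build lands in the 10-21 kW range, which is where the 9-20 kW thermal estimate and the ~20 kW central PSU figure come from.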

Tartu University bought a cluster – but outdated technology

I read an article in Äripäev yesterday about the new computer cluster at Tartu University. It's an old-fashioned, classical 42-node 1U setup with 2 x 4 cores per node: each node has 2 x Intel quad-core Xeon 5400 CPUs = 8 cores, 32 GB RAM and a 500 GB HDD, interconnected via InfiniBand. TU reports a total performance of 0.84 TFLOPS.

Some math (and physics):

  • CPU power consumption for the CPU system:
    42 x 2 x 80 Watts = 6 720 Watts
  • GPU power consumption for the GPU system:
    4 x 285 Watts = 1 140 Watts
  • Performance per Watt for the CPU system:
    840 GFLOPS / 6 720 W = 0.125 GFLOPS / Watt
  • Performance per Watt for the GPU system:
    7 200 GFLOPS / 1 140 W = 6.32 GFLOPS / Watt

Conclusion: the GPU system uses about 50 times less energy per GFLOPS than the CPU system used by Tartu University.
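The same comparison in a few lines of Python, using only the figures quoted above:

```python
# Performance per Watt: CPU cluster vs GPU rig, using the numbers above.
cpu_gflops, cpu_watts = 840.0, 42 * 2 * 80    # 0.84 TFLOPS, 84 Xeons at ~80 W each
gpu_gflops, gpu_watts = 7200.0, 4 * 285       # ~7.2 TFLOPS, 4 x GTX 295 at ~285 W

cpu_eff = cpu_gflops / cpu_watts              # ~0.125 GFLOPS/W
gpu_eff = gpu_gflops / gpu_watts              # ~6.32 GFLOPS/W
print(f"CPU: {cpu_eff:.3f} GFLOPS/W  GPU: {gpu_eff:.2f} GFLOPS/W  ratio: ~{gpu_eff / cpu_eff:.1f}x")
```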

So, it's just 336 cores against my current configuration of 1440 cores. I really believe that their choice was driven by the status quo. Sadly, too much money has been spent to achieve only mediocre performance.

Read the Tartu University news article

Future milestones


Our plans for the future. Phases 1 and 2 are successfully completed. The architecture of Phase 4 is on the drawing board. It will be an HPC (High Performance Cluster) using GPU nodes interconnected via Gigabit Ethernet.

Improved GPU air-cooling

I improved the cooling to get GPU temperatures down from 95 °C. I cut a hole in the side panel, right opposite the GPUs, and installed 2 pcs of 120 mm fans pulling hot air from the GPUs out of the case. The GTX 280 pumped its hot cooling air out of the case through the back panel; the GTX 295 (used in my rig) pumps hot air back into the case.

Temperatures dropped immediately by 7-8 °C. I hope that < 90 °C is a good cruising temperature.

My rig averages 35 000-36 000 PPD, depending on the work units. It pulls ca 800-850 Watts from the wall, most of which is used by the GPUs, producing a lot of hot air.
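Out of curiosity, the efficiency of this rig in points per Watt, taking the midpoints of the figures above (assumed, not metered precisely):

```python
# PPD per Watt for the 3 x GTX 295 rig, midpoints of the ranges above.
ppd = 35_500      # 35 000-36 000 PPD
watts = 825       # 800-850 W at the wall
print(f"~{ppd / watts:.0f} PPD per Watt")   # ~43 PPD/W
```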