All 4 nodes are now installed. Due to limited availability of GTX295 cards, only 8 GPUs are installed so far. The total number of GPUs will be 32 (4 nodes x 4 dual-GPU GTX295 cards = 32 GPUs in total). The cooling panel will probably be added next weekend.
Typical 1 node configuration (nodes ED01 – ED04):
- AMD Phenom X3 8650 triple-core CPU, 2.3 GHz
- MSI K9A2 Platinum motherboard with 4 x double-spaced PCI Express x16 slots
- 4 GB RAM (2 x 2 GB Apacer DDR2-800 modules)
- 4 x GTX295 dual-GPU cards (8 GPUs per node in total)
- 2 x 850 W power supplies (two Chieftec CFT-850 Turbo Series cable-management units, synchronised; 1700 W total per node)
- HDD 80 GB (WD 7200 RPM SATA)
- 1 x F@H SMP 6.23 beta client
- 8 x F@H GPU 6.23 beta clients, one per GPU
- LogMeIn client
- MS Windows XP Pro 32-bit SP3
All nodes are interconnected via a Cisco/Linksys rackmount switch. Power is distributed over 2 independent 240 V rails.
Each node keeps its work directories on its local hard drive for added autonomy and failure tolerance. FahMon is installed on ED01. All work directories are shared on the network and made accessible to FahMon, which displays an aggregate report and the total PPD figure.
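FahMon handles the aggregation itself, but as an illustration of what sharing the work directories makes possible, here is a minimal sketch of a script that walks the shares and reads each client's progress. The UNC share names are assumptions (not the actual share names on ED01-ED04), and it assumes the classic-client convention of a unitinfo.txt file containing a "Progress: NN%" line in each client directory.

```python
import re
from pathlib import Path

# Hypothetical share paths -- the real share names on ED01-ED04 may differ.
NODE_SHARES = [r"\\ED01\fah", r"\\ED02\fah", r"\\ED03\fah", r"\\ED04\fah"]

def parse_progress(unitinfo_text):
    """Extract the 'Progress: NN%' figure from a client's unitinfo.txt.

    Returns the percentage as an int, or None if no progress line is found.
    """
    m = re.search(r"Progress:\s*(\d+)\s*%", unitinfo_text)
    return int(m.group(1)) if m else None

def report(shares=NODE_SHARES):
    """Scan every client directory on every share and print its progress."""
    for share in shares:
        for info in Path(share).glob("*/unitinfo.txt"):
            pct = parse_progress(info.read_text(errors="ignore"))
            if pct is not None:
                print(f"{info.parent}: {pct}% complete")

if __name__ == "__main__":
    report()
```

Running this from any machine with read access to the shares prints one line per client, which is essentially the raw data FahMon rolls up into its aggregate view.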
Every node is also remotely accessible via LogMeIn, so it's easy to check GPU temperatures and other parameters from the office or from the home rigs.