
Tuesday, May 7, 2013 – Hardware Installation and Power-On Tests

Today, physical installation of the server hardware continued.  With all 16 blade enclosures and 96 compute nodes installed yesterday, the team focused on installing the cluster's 4 head nodes, the Dell NSS primary storage system, and the Dell Terascala HSS Lustre scratch storage system.

The cluster's head nodes will primarily handle user logins, cluster management, resource scheduling, data transfers in and out of the cluster, and interconnect network fabric management.  Each head node is equipped with dual Intel Xeon E5-2670 2.6 GHz 8-core processors, 64GB of RAM, one non-blocking 56 Gbps FDR InfiniBand connection to the cluster's internal interconnect network, and two 10Gb fiber uplinks to the campus network core.  Combined, the four head nodes will have 80 Gbps of throughput to the GW network core!  With this level of data throughput into the core, researchers will be poised to take advantage of GW's robust connectivity to the public Internet, the Internet2 research institution network, and the 100Gb inter-campus link planned for later this year.
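For readers who like to check the math, here's a quick back-of-the-envelope sketch of where the 80 Gbps figure comes from (just illustrative Python arithmetic using the node and link counts above - not part of any deployment tooling):

```python
# Back-of-the-envelope check of the head nodes' aggregate uplink bandwidth to the campus core.
# The counts and link speeds come from the hardware description above.

HEAD_NODES = 4              # login / management / scheduling / data-transfer nodes
UPLINKS_PER_NODE = 2        # two 10Gb fiber uplinks per head node
UPLINK_SPEED_GBPS = 10      # each uplink runs at 10 Gbps

aggregate_gbps = HEAD_NODES * UPLINKS_PER_NODE * UPLINK_SPEED_GBPS
print(f"Aggregate uplink bandwidth to the GW network core: {aggregate_gbps} Gbps")  # 80 Gbps
```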

While the storage systems are being physically installed, the Dell COSIP team verifies the 96 compute nodes by individually powering on and testing each one.

The cluster's primary storage system, Dell's NSS NFS storage solution, resides in Rack #4 and features non-blocking connectivity to the cluster's high-speed FDR InfiniBand interconnect network and approximately 144TB of usable capacity.  There's plenty of room to expand the storage system - each additional 4U storage chassis adds another 144TB of usable capacity in a simple and highly cost-effective manner.
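To give a rough sense of how that expansion math plays out, here's a small sketch that simply adds 144TB per extra 4U chassis (an illustration based only on the figure quoted above, not on Dell's actual sizing tools):

```python
# Rough estimate of NSS usable capacity as additional 4U storage chassis are added.
# Assumes each extra chassis contributes the same ~144 TB usable quoted above.

BASE_USABLE_TB = 144        # approximate usable capacity of the current configuration
TB_PER_EXTRA_CHASSIS = 144  # approximate usable capacity added per additional 4U chassis

def usable_capacity_tb(extra_chassis):
    """Estimated usable capacity (TB) after adding `extra_chassis` more enclosures."""
    return BASE_USABLE_TB + extra_chassis * TB_PER_EXTRA_CHASSIS

for n in range(4):
    print(f"{n} extra chassis -> ~{usable_capacity_tb(n)} TB usable")
```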

After completing the storage system installation in Rack #4, the installers turned to the Dell Terascala HSS Lustre scratch storage system.  The HSS solution will provide a high-speed parallel file system to support multi-node jobs efficiently.  In its current configuration, the solution supports up to 6.2 GB/s of read performance and 4.2 GB/s of write performance.  Both the storage capacity and the performance can be expanded relatively easily by adding storage enclosures and additional object storage server (OSS) pairs.
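For a feel for how scratch throughput could grow as OSS pairs are added, here's a heavily hedged sketch.  It treats the 6.2/4.2 GB/s figures as the baseline for one OSS pair and assumes roughly linear scaling - both are simplifying assumptions for illustration only; real Lustre scaling depends on the enclosures, network, and workload:

```python
# Idealized estimate of Lustre scratch throughput as OSS pairs are added.
# ASSUMPTIONS: the current 6.2 GB/s read / 4.2 GB/s write figures correspond to one OSS pair,
# and throughput scales roughly linearly per pair.  Both are illustrative assumptions only.

BASE_READ_GBS = 6.2
BASE_WRITE_GBS = 4.2

def estimated_throughput(oss_pairs):
    """Return (read_GBs, write_GBs) assuming roughly linear scaling per OSS pair."""
    return BASE_READ_GBS * oss_pairs, BASE_WRITE_GBS * oss_pairs

for pairs in (1, 2, 3):
    read, write = estimated_throughput(pairs)
    print(f"{pairs} OSS pair(s): ~{read:.1f} GB/s read, ~{write:.1f} GB/s write")
```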

With all of the major hardware installed, the Dell team turns to installing, labeling, and managing cables.  With 192 InfiniBand and Ethernet cables coming from the 96 compute nodes, this is no small task.  Because of the amount of heat the compute nodes produce, airflow is essential, and good cable management helps establish proper airflow for each cluster component.
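For anyone counting along, the 192-cable figure is simply two cables per compute node - presumably one InfiniBand and one Ethernet run each, per the description above:

```python
# Where the 192-cable count comes from: two cables per compute node.
COMPUTE_NODES = 96
CABLES_PER_NODE = 2  # 1 x FDR InfiniBand + 1 x Ethernet (assumed split per node)

print(f"Cables to route, label, and manage: {COMPUTE_NODES * CABLES_PER_NODE}")  # 192
```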

Day two of the installation wrapped up around 6:30 PM.  Great progress has been made in the physical deployment.  Tomorrow the team will focus on running cables for the management and interconnect networks.
