====25-GbE Network====
  
A high-performance 25-GbE network is available as an option on some systems (and is standard on others). Like the 1-GbE network, the 25-GbE network is a non-routable (192.168.1.X) internal network. The "switchless" network is achieved by using a four-port 25-GbE NIC on the main motherboard. Three of the four ports are connected to the worker motherboards. Nominally, the three worker ports are combined with the host to form an Ethernet bridge. In the bridge configuration, the bridge responds and acts as a single IP address. Communication between workers is forwarded to the correct port ("bridged") through the host. The remaining port on the four-port NIC is available for expansion.
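
As a rough illustration (not the configuration shipped with the system), the sketch below shows how such a bridge could be built with the standard Linux ip tool. The interface names (enp1s0f0 and so on), bridge name, and host address are placeholders chosen for the example.

<code python>
import subprocess

# Placeholder names for the three 25-GbE ports that face the worker blades;
# real device names depend on the NIC and on kernel enumeration.
WORKER_PORTS = ["enp1s0f0", "enp1s0f1", "enp1s0f2"]
BRIDGE = "br25"                 # placeholder bridge name
BRIDGE_ADDR = "192.168.1.1/24"  # example host address on the internal network

def run(*cmd):
    """Run a command and raise if it fails."""
    subprocess.run(cmd, check=True)

# Create the bridge device and attach the worker-facing ports to it, so that
# traffic between workers is forwarded ("bridged") through the host.
run("ip", "link", "add", "name", BRIDGE, "type", "bridge")
for port in WORKER_PORTS:
    run("ip", "link", "set", "dev", port, "master", BRIDGE)
    run("ip", "link", "set", "dev", port, "up")

# The bridge itself carries the host's single IP address on this network.
run("ip", "addr", "add", BRIDGE_ADDR, "dev", BRIDGE)
run("ip", "link", "set", "dev", BRIDGE, "up")
</code>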
  
If configured, the worker motherboards have a single-port 25-GbE NIC as part of the removable blade. A cable from this port on each worker blade is brought out to the back of the case and connected to the four-port 25-GbE NIC.
  
On the double-wide Limulus systems, there is a second four-port NIC on one of the additional worker motherboards. This NIC is connected to the other three worker motherboards and added to the bridge by connecting the extra port on each four-port 25-GbE NIC.
  
====Data Analytics SSDs====
On Data Analytics (Hadoop/Spark) systems, each node can have up to two SSD drives. These drives are used exclusively for Hadoop storage (HDFS). System software for each node, including all analytics applications, is stored on a local NVMe disk (housed in the M.2 slot on the motherboard).
  
The Data Analytics drives are removable and are located in the front of the case. Each Limulus system has eight front-facing removable drive slots (double-wide systems have sixteen). Each blade provides two SATA connections that are routed to the removable drive cage. At minimum, each motherboard (including the main motherboard) has one of the drive slots occupied (a total of four SSDs, or eight for double-wide systems). Users can expand the HDFS storage (using the second drive slot) by adding additional SSD drives (either at time of purchase or in the future).
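
For illustration only, the sketch below shows one way a node's one or two Data Analytics SSDs could be listed for the HDFS DataNode. The mount-point paths are assumptions made for the example, not the shipped layout; only the dfs.datanode.data.dir property (a comma-separated list of storage directories) is standard HDFS.

<code python>
from pathlib import Path

# Assumed mount points for the first and (optional) second Data Analytics SSD;
# the actual paths used on a Limulus node may differ.
CANDIDATE_MOUNTS = [Path("/data/hdfs1"), Path("/data/hdfs2")]

# HDFS spreads DataNode blocks across every directory listed in
# dfs.datanode.data.dir, so adding a second SSD later only requires
# appending its mount point to this comma-separated value.
present = [str(p) for p in CANDIDATE_MOUNTS if p.is_dir()]
print("dfs.datanode.data.dir =", ",".join(present))
</code>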
  
====Cooling Fans====
  
There are seven cooling fans in each case (eleven in the double-wide systems). Four of these fans are on the processor coolers (heat sinks). These fans are controlled by the motherboard on which they are housed (i.e., as the processor temperature increases, so does the fan speed).
  
There are two large fans on the bottom of the case. These fans pull air from underneath the case and push it into the three worker blades. The fans are attached to an auxiliary fan port on the main motherboard using a "Y" splitter. A monitoring daemon running on the main motherboard checks the processor temperature of each blade and reports the highest temperature to a second fan control daemon that controls the auxiliary fan port. If any blade temperature increases, the bottom intake fan speed is increased.
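
The sketch below is a much-simplified picture of the monitor/report split described above. The blade host names, the sysfs sensor path, the use of ssh to read the sensors, and the hand-off file read by the fan control daemon are all assumptions made for the example, not the actual daemons shipped on the system.

<code python>
import subprocess
import time

# Assumed host names of the three worker blades on the internal network.
BLADES = ["n0", "n1", "n2"]
# Assumed hand-off point read by the separate fan control daemon.
REPORT_FILE = "/run/limulus/max_blade_temp"

def blade_temp(host):
    """Read a blade's CPU temperature in degrees C over ssh.

    The sysfs path below is a common location for thermal sensors
    (reported in millidegrees C); the sensor actually used on a
    Limulus blade may differ.
    """
    out = subprocess.run(
        ["ssh", host, "cat", "/sys/class/thermal/thermal_zone0/temp"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip()) / 1000.0

while True:
    # Report only the hottest blade; the fan control daemon raises the
    # auxiliary (bottom intake) fan speed as this value rises.
    hottest = max(blade_temp(h) for h in BLADES)
    with open(REPORT_FILE, "w") as f:
        f.write(f"{hottest:.1f}\n")
    time.sleep(10)
</code>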
  
The remaining fan is on the back of the case and runs at constant speed. This fan is used to help move air through the case.
  
  