functional_diagram [2020/08/04 02:45] taylors [Main Motherboard]
functional_diagram [2021/05/03 14:47] (current) brandonm [Cooling Fans] Spelling, phrasing, and word fixes
The figure illustrates the functional parts that are packaged in a standard Limulus system (either desk-side or rack-mount). The "
====Main Motherboard====
This is the main system motherboard. Users log into this motherboard like any other Linux workstation that is connected
====Worker Motherboards====
The worker motherboards are removable blades (see the [[Limulus Compute Blades|blade description]]). Each of these motherboards
On HPC systems, the worker motherboards are booted and operate "
On Data Analytics Systems (Hadoop, Spark, Kafka), the worker motherboards are booted using the local SSD drive occupying the NVMe slot. These motherboards are installed, maintained, and monitored
All I/O is exposed on the front of the case bezel (or door on the rack-mount systems). Under normal operation, there is no need to access the I/O panel. All network cables and storage cables are connected to the front of the blade and must be removed before the blade is removed.
Twelve-volt (DC) power to the blade comes from a blind connector on the back of the blade. An onboard DC-DC converter

Blade design details can be found on the [[Limulus Compute Blades]] page.
====NVMe Storage====
As mentioned, each motherboard provides an NVMe storage option. The main node uses an enterprise
====RAID Storage====
A RAID storage option is available and runs from the main motherboard. Depending on the amount of storage, a RAID1 or RAID6 option is provided. There are six 3.5-inch spinning disk slots available (12 on the double-wide systems). All RAID is managed in software, and depending on the number of drives, additional high-speed SATA ports are added using an additional PCI card.
The RAID drives are removable from inside the case (i.e., they are behind the side door).
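The trade-off between the two options follows from standard RAID arithmetic: RAID1 mirrors data (half the raw space is usable), while RAID6 spends two drives' worth of space on parity. The sketch below illustrates this; the drive count and sizes are illustrative examples, not a specific Limulus configuration.

```python
def usable_capacity_tb(level: str, drives: int, size_tb: float) -> float:
    """Estimate usable capacity for a software RAID set.

    RAID1 mirrors the data, so usable space is half the raw total;
    RAID6 stores two drives' worth of parity, leaving (n - 2) drives
    of data. Drive counts and sizes are illustrative only.
    """
    if level == "RAID1":
        if drives % 2 != 0:
            raise ValueError("RAID1 mirrors drives in pairs")
        return drives * size_tb / 2
    if level == "RAID6":
        if drives < 4:
            raise ValueError("RAID6 needs at least 4 drives")
        return (drives - 2) * size_tb
    raise ValueError(f"unknown RAID level: {level}")

# Filling all six 3.5-inch slots with (hypothetical) 4 TB drives:
print(usable_capacity_tb("RAID6", 6, 4.0))  # 16.0 TB usable
print(usable_capacity_tb("RAID1", 6, 4.0))  # 12.0 TB usable
```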
====Power Supply====
Each Limulus system has a single power supply with a power cord (redundant options use two power cords). This power supply converts the AC line signal to the appropriate DC voltages to run the main motherboard and provides
All power supplies are high-quality 80+ Gold (or higher) and carry a minimum
====Power Relays====
Twelve-volt power to the worker nodes is controlled by a series of relays (one per motherboard). The relays are under control of the main motherboard through a USB connection. Each worker motherboard blade can be power cycled individually.
On HPC systems, the worker nodes are powered off when the main motherboard starts. Either the user or the resource manager (Slurm)
On Data Analytics
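Slurm's built-in power-saving hooks are one way a resource manager can drive this kind of per-blade power control: Slurm calls a site-supplied program to power nodes down when idle and back up when jobs need them. The fragment below is a hedged sketch of how such a setup might look in ''slurm.conf''; the script paths and node name are hypothetical placeholders, not the shipped configuration.

```ini
# slurm.conf power-saving sketch (script paths and node names are hypothetical)
SuspendProgram=/usr/local/sbin/blade-power-off   ; site script toggling the USB relay
ResumeProgram=/usr/local/sbin/blade-power-on
SuspendTime=600            ; power a worker down after 10 idle minutes
ResumeTimeout=300          ; allow 5 minutes for a blade to boot
SuspendExcNodes=headnode   ; never power off the main motherboard
```

With settings along these lines, users never manage blade power directly: submitting a job causes Slurm to run the resume program for as many workers as the job needs.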
====LAN Connection====
A 1-GbE LAN connection is available on the back of the case. The interface is configured for DHCP and is normally a NIC inserted in the main motherboard. The on-board 1-GbE on the main motherboard is used to connect to the internal cluster network.
====1-GbE Switch and Internal Cluster Network====
An eight-port 1-GbE switch is housed inside the case. It sits above the power supply and connects all the nodes in the cluster. On the double-wide systems, a second 1-GbE switch is on the opposite side of the case (toward the top rear). There are two ports exposed on the back of the case. One of these ports is used to connect the main motherboard 1-GbE link to the internal switch using a short CAT6 cable. The second open port can be used for expansion or connection
====25-GbE Network====
A high-performance 25-GbE network is available as an option on some systems (and on others, it is standard). Similar to the 1-GbE network, the 25-GbE is a non-routable (192.168.1.X) internal network. The "
If configured, the worker motherboards have a single-port 25-GbE NIC as part of the removable blade. A cable from this port on each worker blade is brought out to the back of the case and connected to the four-port 25-GbE NIC.
On the double-wide Limulus systems, there is a second
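"Non-routable" here means the 192.168.1.X range is RFC 1918 private address space, which is never forwarded onto the public Internet. Python's standard ''ipaddress'' module can confirm that an interface address falls inside the internal subnet; the /24 below is the range quoted above, while the specific host address is an illustrative example.

```python
import ipaddress

# The 25-GbE fabric uses the non-routable 192.168.1.X range described above.
cluster_net = ipaddress.ip_network("192.168.1.0/24")

addr = ipaddress.ip_address("192.168.1.12")  # hypothetical worker-blade address
print(addr in cluster_net)   # True: the address is on the internal fabric
print(addr.is_private)       # True: 192.168.0.0/16 is RFC 1918 private space
```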
====Data Analytics SSDs====
On Data Analytics (Hadoop/
The Data Analytics drives are removable and are located in the front of the case. Each Limulus system has eight front-facing removable drive slots (double-wide systems have sixteen). Each blade provides two SATA connections that are routed to the removable drive cage. At minimum, each motherboard (including the main motherboard)
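When sizing these drives for Hadoop, keep in mind that HDFS replicates every block (the default replication factor is 3), so raw SSD capacity overstates usable space. A minimal estimate, with illustrative slot counts and drive sizes rather than a specific Limulus configuration:

```python
def hdfs_usable_tb(slots: int, drive_tb: float, replication: int = 3) -> float:
    """Rough usable HDFS capacity from raw drive space.

    HDFS keeps `replication` copies of every block (default 3), so effective
    capacity is raw capacity divided by the replication factor. The slot
    count and drive size used below are illustrative only.
    """
    return slots * drive_tb / replication

# Eight front-facing slots filled with (hypothetical) 2 TB SSDs:
print(hdfs_usable_tb(8, 2.0))  # about 5.33 TB of usable HDFS space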
====Cooling Fans====
There are seven cooling fans in each case (eleven in the double-wide systems). Four of these fans are on the processor cooler (heat sink). These fans are controlled by the motherboard on which they are housed (i.e., as the processor temperature increases, so does the fan speed).
There are two large fans on the bottom of the case. These fans pull air from underneath the case and push it into the three worker blades. The fans are attached to an auxiliary fan port on the main motherboard
The remaining fan is on the back of the case and runs at constant speed. This fan is used to help move air through the case.
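The temperature-tracking behavior of the processor fans can be pictured as a simple fan curve: below a floor temperature the fan idles, at or above a ceiling it runs flat out, and speed ramps linearly in between. The thresholds and duty cycles below are illustrative assumptions; the real curve is set by the motherboard firmware.

```python
def fan_duty(temp_c: float, low_c: float = 40.0, high_c: float = 80.0) -> int:
    """Map CPU temperature to a PWM fan duty cycle (percent).

    Idles at 30% below `low_c`, runs at 100% at or above `high_c`, and
    ramps linearly in between. All numbers are illustrative, not the
    actual firmware fan curve.
    """
    if temp_c <= low_c:
        return 30
    if temp_c >= high_c:
        return 100
    frac = (temp_c - low_c) / (high_c - low_c)
    return round(30 + frac * 70)

print(fan_duty(35))  # 30  (idle)
print(fan_duty(60))  # 65  (halfway up the ramp)
print(fan_duty(90))  # 100 (full speed)
```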