There is also a GUI tool for adding and deleting users. The GUI can be found in the root ''
<code>
$ AddUser -g
</code>
See [[Adding Users|Adding Users]].
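Whichever method is used, it can help to confirm that a username is not already taken before adding it. A minimal sketch using the standard ''getent'' lookup (the username ''newuser'' is a placeholder, not a name from this system):

```shell
# Check whether a username already exists before creating it.
# "newuser" is a placeholder; substitute the account you plan to add.
user="newuser"
if getent passwd "$user" > /dev/null; then
    echo "$user already exists"
else
    echo "$user is available"
fi
```

See the [[Adding Users|Adding Users]] page for the cluster-specific account steps.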
====The Case: ====
The case has a door on each side and a removable front bezel. The front bezel can be "
====Login Node (Main Node): ====
====Networking: ====
All systems have a 1 GbE network. All administrative daemons and shared file systems (NFS) are assigned to this network. The login node acts as a gateway node (i.e., it has an external LAN connection and an internal cluster network connection). The LAN port (as labeled on the case) is set to use DHCP. The internal cluster 1 GbE network uses 10.0.0.0/24 addresses (the login node is 10.0.0.1; the nodes, starting with n0, begin at 10.0.0.10 and are numbered sequentially). The login node is connected to this network by the short blue Ethernet cable on the back of the unit. This cable connects the login node to the internal 1 GbE switch.
The internal Ethernet switch (as seen inside the case above the power supply) connects to the internal cluster network. In addition to the blue cable connected to the login node, one additional external cluster Ethernet port is available on the back of the unit for expansion (i.e., a cluster NAS or more nodes).
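The internal addressing scheme above can be sketched with a short loop. The node count of four is an assumption for illustration; the fixed points from this page are the login node at 10.0.0.1 and worker nodes starting at n0 = 10.0.0.10:

```shell
# Print the expected internal cluster addresses:
# login node at 10.0.0.1, worker nodes n0, n1, ... starting at 10.0.0.10.
# A four-node cluster is assumed here for illustration.
echo "login 10.0.0.1"
for i in 0 1 2 3; do
    echo "n$i 10.0.0.$((10 + i))"
done
```

Pinging these addresses from the login node is a quick way to verify that the internal network is up.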
====Software: ====
The system has been installed with CentOS 7. Specific Limulus and OpenHPC software has been installed.
The worker nodes are provisioned by the Warewulf Cluster Toolkit. Adding and removing software from Warewulf-provisioned nodes requires some additional steps. See [[Warewulf Worker Node Images|Warewulf Worker Node Images]].
====Adaptive Cooling and Filters: ====
The nodes have adaptive cooling. Two intake fans underneath the worker nodes push air up into the nodes from beneath the case. The speed of the intake fans increases or decreases with the node processor temperatures. Note that all intake locations on the case have magnetic dust filters (including the bottom of the case). In dusty environments,
====Monitoring: ====
There are two basic ways to monitor the cluster as a whole. The first is the web-based Ganglia interface. (Open a browser and point to ''
There is also a command line utility called ''
====Additional Information: ====
See the rest of the manual for additional topics and information.