There is also a GUI tool for adding and deleting users. The GUI can be found in the root ''
<code>
$ AddUser -g
</code>
See [[Adding Users|Adding Users]].
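After adding an account, a quick check with standard Linux commands confirms that the new user exists and that the home directory is visible on the NFS-mounted ''/home''. This is a generic sketch; the username ''testuser'' is only a placeholder.
<code>
# Hypothetical check after adding a user (replace "testuser" with the actual username).
$ id testuser
# Confirm the NFS-mounted home directory was created:
$ ls -ld /home/testuser
</code>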
====The Case: ====
The case has two doors on both sides and a removable front bezel. The front bezel can be "
====Login Node (Main Node): ====
The worker nodes are on blades that are behind the bezel. Blades are removed by turning off the blade power (see below), removing the cables, unscrewing the captive bolts (top and bottom), and pulling the blade out using the captive bolts. The blade positions are unique (the same blade must always be placed in the same slot, due to DHCP booting (HPC systems) or the attached HDFS drives (Hadoop systems)).
Node names begin with "
====Networking: ====
====SSH Access:====
The root user has ''
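A minimal way to confirm that node-to-node root SSH is working is to run a command on a worker from the head node. The node name ''n0'' below is only a placeholder; substitute an actual worker node name for your system.
<code>
# Hedged check of root SSH from the head node ("n0" is a placeholder node name).
# A correctly configured system returns the worker's hostname with no password prompt.
$ ssh n0 hostname
</code>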
====File System:====
The file system configuration can vary depending on installation options. In general, there is a solid state NVMe drive on the head node that stores the OS. The /opt and /home partitions are NFS mounted to all nodes. If a RAID option has been included, the RAID drives will be installed in the top drive area in the case and mounted on the head node.
On Hadoop systems, each node has an NVMe drive for the OS and Hadoop software. One or two additional SATA drives are also attached to each node. These drives are used for Hadoop HDFS storage.
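To see how a particular installation is laid out, generic Linux tools are enough; nothing below is Limulus-specific.
<code>
# Show the NFS-mounted partitions such as /opt and /home:
$ df -h /opt /home
# List local block devices (NVMe OS drive, SATA or RAID drives) on a node:
$ lsblk
</code>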
====Software: ====
The system has been installed with CentOS 7 as the base distribution. Specific Limulus RPM packages that provide management tools, an updated kernel, power control drivers, etc. have also been installed.
On Hadoop systems, all node software is managed by the Ambari management system (on the head node, open a local browser to ''
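The installed base distribution and the Limulus packages can be listed with standard tools; the ''grep'' pattern below is only an assumption about the package naming and may need adjusting.
<code>
# Show the installed CentOS release:
$ cat /etc/centos-release
# List installed RPM packages whose names mention "limulus" (naming pattern assumed):
$ rpm -qa | grep -i limulus
</code>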
====Adaptive Cooling and Filters: ====
The nodes have adaptive cooling. There are two intake fans underneath the worker nodes pushing air up into the nodes from underneath the case. The speed of the intake fans will increase or decrease depending on the node processor temperatures. Note that all intake locations on the case have magnetic dust filters (including the bottom of the case). In dusty environments,
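The processor temperatures that drive the adaptive cooling can be read with standard Linux tools. The first command assumes the ''lm_sensors'' package is installed, which may not be true on every node; the second needs no extra packages.
<code>
# Read processor temperatures (assumes the lm_sensors package is installed):
$ sensors
# Fallback with no extra packages: raw thermal zone readings in millidegrees Celsius.
$ cat /sys/class/thermal/thermal_zone*/temp
</code>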
====Monitoring: ====
On Hadoop systems, the [[Using the Apache Ambari Cluster Manager|Apache Ambari Cluster Manager]] provides complete node (and Hadoop) monitoring.
In addition, there is a command line utility called wwtop. This utility provides a real-time "
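For basic use, wwtop needs no arguments; run it from the head node to bring up the real-time node display.
<code>
# Start the real-time node status display from the head node:
$ wwtop
</code>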
====Additional Information: ====
See the rest of this manual for additional topics and information.