======Using the Apache Ambari Cluster Manager======

====Introduction====
  
The Ambari web-based tool can be started by opening a browser on the head node and entering [[http://localhost:8080|http://localhost:8080]]. After login (the password is provided by Limulus Computing), the **Ambari Dashboard** (control panel), similar to the image below, will be displayed.
  
{{ :wiki:ambari-control-panel.png?600 |}}
  
The left-side menu allows the various services and nodes of the cluster to be viewed. An example of the HDFS (Hadoop Distributed File System) service is shown in the following figure:
  
{{ :wiki:ambari-services.png?600 |}}
====Basic Background====
  
The Limulus Hadoop design requires that the head node (login node) and n0 operate in multiple roles. The head node runs the bulk of these services and has extra memory for this purpose. All nodes, however, operate an HDFS and YARN client, which means they all participate in the distributed HDFS storage and in running MapReduce and Spark jobs. This default behavior may need to be adjusted depending on your requirements (e.g., the number of YARN jobs on the head node may be reduced or eliminated).
  
Additional content will be available soon.
====Service Start-Up====
  
Ambari has been configured to start all the cluster services when the system starts. [**Note: The Auto Start feature is not currently working. To start all the services, log into the Ambari interface (http://localhost:8080), click on the three dots next to ''Services'' in the vertical menu on the left side, and select ''Start All''.**] If a service fails to start or is having issues, a red or orange dot will be shown next to the service name in the Services section. Functioning services have a green dot.
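The manual ''Start All'' workaround above can also be scripted against Ambari's REST API, which the web interface itself uses. The sketch below only builds the request; the cluster name (''limulus'') and the ''admin''/''admin'' credentials are assumptions — substitute the values for your installation.

```python
import base64
import json
import urllib.request

def build_start_all_request(host="localhost", port=8080,
                            cluster="limulus",        # assumed cluster name
                            user="admin", password="admin"):
    """Build a PUT request asking Ambari to move every service to STARTED."""
    url = f"http://{host}:{port}/api/v1/clusters/{cluster}/services"
    body = {
        "RequestInfo": {"context": "Start All via REST"},
        "Body": {"ServiceInfo": {"state": "STARTED"}},
    }
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="PUT",
        headers={
            "X-Requested-By": "ambari",   # required by Ambari's CSRF check
            "Authorization": f"Basic {auth}",
        },
    )

# To actually send it (only on a running head node):
# urllib.request.urlopen(build_start_all_request())
```

Ambari answers with a request resource whose progress can be watched in the same ''Background Operations'' screen the UI shows.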
  
====Shutting Down The Cluster====
  
Best practices dictate that all the Hadoop services are shut down gracefully. Ambari provides a full system shutdown option. Click the three dots next to ''Services'' in the left-side vertical menu and select ''Stop All''. The shutdown is "graceful" and will take some time. (Do not be concerned with all the service alerts; these are normal when services start up or shut down.) When ''Stop All'' is selected, a progress screen similar to the image below will be shown.
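For reference, ''Stop All'' corresponds to asking Ambari's REST API to move every service to the INSTALLED (stopped) state, and the progress screen simply polls the resulting request's status. A hedged sketch, again assuming a cluster named ''limulus'' and ''admin''/''admin'' credentials:

```python
import base64
import json
import urllib.request

AMBARI = "http://localhost:8080/api/v1/clusters/limulus"  # assumed cluster name

def build_stop_all_request(user="admin", password="admin"):
    """Build a PUT asking Ambari to move every service to INSTALLED (stopped)."""
    body = {"RequestInfo": {"context": "Stop All via REST"},
            "Body": {"ServiceInfo": {"state": "INSTALLED"}}}
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{AMBARI}/services", data=json.dumps(body).encode(), method="PUT",
        headers={"X-Requested-By": "ambari", "Authorization": f"Basic {auth}"})

def shutdown_complete(request_status):
    """Interpret the status document Ambari returns for a background request."""
    info = request_status["Requests"]
    return info["request_status"] == "COMPLETED" and info["progress_percent"] >= 100
```

Polling the request URL returned by the PUT and feeding each response to ''shutdown_complete'' mirrors what the green completion state on the progress screen indicates.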
  
{{ :wiki:ambari-shutdown-all.png?600 |}}

Once the Hadoop daemons are totally shut down, the progress screen will show completion (green means no issues). The dashboard should look like the following image.

{{ :wiki:ambari-all-stopped.png?600 |}}

**FINAL STEP:** After all the services are stopped, the entire cluster may be powered off by shutting down the head node (the nodes shut down gracefully when the head node is powered off using the ''poweroff'' command or the desktop interface). Avoid pressing the power button to shut the system down.
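Before powering off, it can be reassuring to confirm that no Hadoop daemons remain on a node. A minimal sketch, assuming the JDK's ''jps'' tool is on the PATH; the daemon-name list is illustrative, not exhaustive:

```python
import subprocess

# Typical Hadoop/YARN daemon class names as printed by jps (illustrative list).
HADOOP_DAEMONS = {"NameNode", "DataNode", "SecondaryNameNode",
                  "ResourceManager", "NodeManager", "JobHistoryServer"}

def running_daemons(jps_output):
    """Return the Hadoop daemon names found in jps output ('pid name' per line)."""
    found = set()
    for line in jps_output.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1] in HADOOP_DAEMONS:
            found.add(parts[1])
    return found

def safe_to_poweroff():
    """True when jps reports no Hadoop daemons on this node."""
    out = subprocess.run(["jps"], capture_output=True, text=True).stdout
    return not running_daemons(out)
```

An empty result from ''running_daemons'' on every node means the graceful shutdown finished and the ''poweroff'' step is safe.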
  
  
using_the_apache_ambari_cluster_manager.1609624889.txt.gz · Last modified: 2021/01/02 22:01 by deadline