HOW TO INSTALL HADOOP CLUSTER ON UBUNTU – PART 3

Configure Hadoop core-site.xml

Open /opt/hadoop/etc/hadoop/core-site.xml with vi or nano and add the lines below:

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop1:9000</value>
        </property>
</configuration>
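After saving the file, you can sanity-check that Hadoop resolves the new default filesystem (assuming /opt/hadoop/bin is on your PATH):

```shell
# Print the effective default filesystem as Hadoop sees it
hdfs getconf -confKey fs.defaultFS
```

This should print hdfs://hadoop1:9000.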

Configure Hadoop hdfs-site.xml

Open /opt/hadoop/etc/hadoop/hdfs-site.xml with vi or nano and add the lines below:

<configuration>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/opt/hadoop/hdfs/namenode</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/opt/hadoop/hdfs/datanode</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.permissions.enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.datanode.use.datanode.hostname</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
                <value>false</value>
        </property>
</configuration>
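The two dfs.*.dir values above point at local directories that do not exist yet. A minimal sketch of creating them (run on every node, as the user that will start Hadoop; the paths match the config above):

```shell
# Create the local storage directories referenced by hdfs-site.xml
mkdir -p /opt/hadoop/hdfs/namenode /opt/hadoop/hdfs/datanode
```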

Configure the Hadoop workers file

Open /opt/hadoop/etc/hadoop/workers with vi or nano and add the lines below:

hadoop1
hadoop2
hadoop3
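start-dfs.sh reaches every host listed in workers over passwordless SSH, so it is worth verifying connectivity first (a minimal check, assuming the hostnames above resolve on this node):

```shell
# Each line should print the remote hostname without a password prompt
for h in hadoop1 hadoop2 hadoop3; do
    ssh "$h" hostname
done
```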

Copy the Hadoop files to all hosts

scp -r /opt/hadoop/* hadoop2:/opt/hadoop/
scp -r /opt/hadoop/* hadoop3:/opt/hadoop/
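As an alternative sketch, rsync copies the tree recursively and, on repeat runs, only transfers files that changed (assuming rsync is installed on all hosts):

```shell
# Mirror the Hadoop directory to each worker
for h in hadoop2 hadoop3; do
    rsync -a /opt/hadoop/ "$h":/opt/hadoop/
done
```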

Run the start-dfs.sh command.
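If this is the very first start and the NameNode was not formatted in an earlier part of this series, HDFS will not come up. A sketch of the first-run sequence (formatting erases any existing HDFS metadata, so run it only once):

```shell
# One-time only: initialize the NameNode metadata directory
hdfs namenode -format

# Start the HDFS daemons on all nodes listed in workers
start-dfs.sh

# jps should now show NameNode on hadoop1 and DataNode on each worker
jps
```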

Check the Hadoop Web UI

Access the following URL: http://hadoop1:9870/
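Besides the web UI, you can check cluster health from the command line; dfsadmin -report lists every registered DataNode:

```shell
# Summarize HDFS capacity and the live DataNodes
hdfs dfsadmin -report
```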

Configure YARN

Open /opt/hadoop/etc/hadoop/yarn-site.xml with vi or nano and add the property below inside the <configuration> block:

<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop1</value>
</property>

Run the start-yarn.sh command.

Access the following URL: http://hadoop1:8088/
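To confirm that the NodeManagers registered with the ResourceManager without opening a browser:

```shell
# List registered NodeManagers and their state
yarn node -list
```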

How to use Hadoop: examples

Now you can use the hadoop command:

hadoop fs -ls /          # list the HDFS root directory
hadoop fs -mkdir /input  # create a directory
hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar wordcount /input /output

If you want to run a test MapReduce job, you can use the bundled examples in /opt/hadoop/share/hadoop/mapreduce/.
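Putting the commands above together, a minimal end-to-end wordcount run might look like this (the sample text and file names are just placeholders; /output must not exist before the job starts):

```shell
# Create some sample input locally and upload it to HDFS
echo "hello hadoop hello cluster" > /tmp/sample.txt
hadoop fs -mkdir -p /input
hadoop fs -put /tmp/sample.txt /input/

# Run the bundled wordcount example
hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar wordcount /input /output

# Read the result (one "word<TAB>count" line per word)
hadoop fs -cat /output/part-r-00000
```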
