Hadoop Cluster Study Notes (1)

Author: 疯狂小兵 | 2016-05-25

1. Edit hdfs-site.xml

Update the configuration as follows, then copy the file to every other machine in the cluster.

<configuration>
        <!-- number of replicas -->
        <property>
                <name>dfs.replication</name>
                <value>3</value>
                <description>replication num</description>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/root/hadoop/dfs/nn/name</value>
                <description>name location</description>
        </property>
        <property>
                <name>dfs.namenode.edits.dir</name>
                <value>file:/root/hadoop/dfs/nn/edits</value>
                <description>edit file location</description>
        </property>
        <property>
                <name>dfs.namenode.checkpoint.dir</name>
                <value>file:/root/hadoop/dfs/snn/name</value>
                <description>secondary namenode name directory</description>
        </property>
        <property>
                <name>dfs.namenode.checkpoint.edits.dir</name>
                <value>file:/root/hadoop/dfs/snn/edits</value>
                <description>secondary namenode edits directory</description>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/root/hadoop/dfs/dn/data</value>
                <description>data location</description>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <!-- namenode1, namenode2 -->
        <property>
                <name>dfs.nameservices</name>
                <value>ns1,ns2</value>
        </property>
        <!-- namenode1 config  -->
        <property>
                <name>dfs.namenode.rpc-address.ns1</name>
                <value>h2m1:8020</value>
        </property>
        <property>
                <name>dfs.namenode.servicerpc-address.ns1</name>
                <value>h2m1:8021</value>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address.ns1</name>
                <value>h2m1:9001</value>
                <description>secondary namenode web config</description>
        </property>
        <property>
                <name>dfs.namenode.secondary.https-address.ns1</name>
                <value>h2m1:9002</value>
                <description>secondary namenode web config</description>
        </property>
        <!-- namenode2 config  -->
        <property>
                <name>dfs.namenode.rpc-address.ns2</name>
                <value>h2s1:8020</value>
        </property>
        <property>
                <name>dfs.namenode.servicerpc-address.ns2</name>
                <value>h2s1:8021</value>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address.ns2</name>
                <value>h2s1:9001</value>
                <description>secondary namenode web config</description>
        </property>
        <property>
                <name>dfs.namenode.secondary.https-address.ns2</name>
                <value>h2s1:9002</value>
                <description>secondary namenode web config</description>
        </property>
</configuration>
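The step above says to copy the edited file to the other machines. A minimal sketch with scp, assuming the worker hostnames (h2s1, h2s2) and the install path /usr/local/hadoop271 from this walkthrough; substitute your own node list:

```shell
# Push the edited hdfs-site.xml to every other node.
# Hostnames and the install path are assumptions; adjust to your cluster.
HADOOP_CONF=/usr/local/hadoop271/etc/hadoop
for host in h2s1 h2s2; do
    scp "$HADOOP_CONF/hdfs-site.xml" "root@$host:$HADOOP_CONF/"
done
```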

2. Edit core-site.xml

On each node acting as part of the NameNode federation, update the following configuration, replacing h2m1 with your own hostname or IP:

 <property>
        <name>fs.defaultFS</name>
        <value>hdfs://h2m1:9000</value>
</property>
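To confirm the edit took effect on a node, the stock Hadoop CLI can read the resolved value back; this only assumes hdfs is on the PATH:

```shell
# Print the value HDFS actually resolves for fs.defaultFS;
# on this setup it should match the hdfs://h2m1:9000 set above.
hdfs getconf -confKey fs.defaultFS
```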

3. Delete the logs and dfs directories

rm -rf /root/hadoop/dfs/    # the custom dfs location configured above

rm -rf /usr/local/hadoop271/logs/
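Since step 4 reformats the NameNodes, the stale metadata has to be gone on every node, not just the local one. A sketch looping over the cluster with ssh; the node list is an assumption from this walkthrough:

```shell
# Remove stale HDFS data and logs on each node before reformatting.
# Node names and paths follow this walkthrough; adjust to your setup.
for host in h2m1 h2s1 h2s2; do
    ssh "root@$host" 'rm -rf /root/hadoop/dfs/ /usr/local/hadoop271/logs/'
done
```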

4. Format the NameNode federation nodes; the command must be run on each one

hdfs namenode -format -clusterId clusterCustomId

Here clusterCustomId is the cluster id you choose. It must be identical on every federation node, otherwise the node will not join the federation cluster.
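Concretely, the same command with the same id runs on both NameNodes (h2m1 and h2s1 in this setup); the id value below is an arbitrary example:

```shell
# Run this on EVERY federation NameNode with an IDENTICAL cluster id.
CLUSTER_ID=myClusterId    # arbitrary example value
hdfs namenode -format -clusterId "$CLUSTER_ID"
```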

5. Start HDFS

start-dfs.sh
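After start-dfs.sh returns, two standard tools give a quick sanity check on each node: jps (ships with the JDK) lists the running daemon JVMs, and hdfs dfsadmin -report summarizes live DataNodes and capacity. A small filter for the jps output:

```shell
# Expect NameNode / DataNode (and SecondaryNameNode where configured)
# in the jps listing; this counts the HDFS daemon lines.
jps | grep -cE 'NameNode|DataNode'

# Cluster-wide view: live DataNodes, capacity, remaining space.
hdfs dfsadmin -report
```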


Copyright notice: this article was published on May 25, 2016, under the CC BY-NC-SA 4.0 license. Non-commercial reposts must credit the source; commercial use is prohibited.
Article title and link: 《Hadoop Cluster Study Notes (1)》



