Hadoop 2.x HA: Detailed Configuration

Difference between hadoop-daemon.sh and hadoop-daemons.sh

hadoop-daemon.sh starts a daemon on the local node only.

hadoop-daemons.sh starts the daemon on the remote nodes listed in the slaves file.
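The distinction can be sketched as a loop: hadoop-daemons.sh essentially runs hadoop-daemon.sh over ssh on every host in the slaves file. A minimal illustration (the host list below mirrors the slaves file from this article; the ssh call is shown as a comment only):

```shell
# Hosts from the slaves file (see the "slaves" section below).
HOSTS="hadoop-yarn1 hadoop-yarn2 hadoop-yarn3"

count=0
for host in $HOSTS; do
  # What hadoop-daemons.sh effectively does per host:
  #   ssh "$host" hadoop-daemon.sh start journalnode
  echo "would run on $host: hadoop-daemon.sh start journalnode"
  count=$((count + 1))
done
```

Running hadoop-daemon.sh directly, by contrast, affects only the machine you are logged into.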

1. Start the JournalNodes (JN)

hadoop-daemons.sh start journalnode

hdfs namenode -initializeSharedEdits  // copies the edits log to the JournalNodes; on first setup, run this only after formatting the NameNode

Open http://hadoop-yarn1:8480 to verify that the JournalNode is running.
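The JournalNode hosts to check can be read straight from the dfs.namenode.shared.edits.dir value configured below. A small sketch that extracts each host from the qjournal URI (the curl probe is left as a comment, since it needs a live cluster):

```shell
# The qjournal URI from hdfs-site.xml (dfs.namenode.shared.edits.dir).
QJOURNAL='qjournal://hadoop-yarn1:8485;hadoop-yarn2:8485;hadoop-yarn3:8485/ns1'

# Strip the scheme and the /ns1 suffix, split on ';', keep the hostnames.
hosts=$(echo "$QJOURNAL" | sed -e 's|^qjournal://||' -e 's|/.*$||' | tr ';' '\n' | cut -d: -f1)

for h in $hosts; do
  echo "check http://$h:8480"
  # On the cluster itself:
  #   curl -sf "http://$h:8480" >/dev/null || echo "$h JournalNode not responding"
done
```

Port 8485 is the JournalNode RPC port; 8480 is its HTTP UI.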

2. Format the NameNode and start the Active NameNode

(1) On the Active NameNode node, format the NameNode:

hdfs namenode -format
hdfs namenode -initializeSharedEdits

The JournalNodes are now initialized.

(2) Start the Active NameNode:

hadoop-daemon.sh start namenode

3. Start the Standby NameNode

(1) On the Standby NameNode node, bootstrap the Standby:

This copies the metadata from the Active NameNode over to the Standby NameNode:

hdfs namenode -bootstrapStandby

(2) Start the Standby NameNode:

hadoop-daemon.sh start namenode

4. Enable Automatic Failover

Create the monitoring node (ZNode) /hadoop-ha/ns1 in ZooKeeper, then start HDFS:

hdfs zkfc -formatZK
start-dfs.sh
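To confirm that -formatZK actually created the znode, you can list /hadoop-ha from a ZooKeeper shell. A guarded sketch (zkCli.sh being on the PATH is an assumption; the check degrades to a hint when it is not):

```shell
# Verify the HA parent znode created by 'hdfs zkfc -formatZK'.
if command -v zkCli.sh >/dev/null 2>&1; then
  # Should list 'ns1' under /hadoop-ha.
  zkCli.sh -server hadoop-yarn1:2181 ls /hadoop-ha
  checked=yes
else
  echo "zkCli.sh not found; run 'ls /hadoop-ha' in a ZooKeeper shell manually"
  checked=yes
fi
```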

5. Check the NameNode state

hdfs haadmin -getServiceState nn1
active
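Checking both NameNodes at once is a short loop over the nn1/nn2 IDs from dfs.ha.namenodes.ns1. This sketch is guarded so it falls back to "unknown" when the hdfs CLI is not on the PATH:

```shell
# Summarize the HA state of every NameNode in nameservice ns1.
summary=""
for nn in nn1 nn2; do
  if command -v hdfs >/dev/null 2>&1; then
    state=$(hdfs haadmin -getServiceState "$nn" 2>/dev/null)
  else
    state="unknown"  # hdfs CLI not available on this machine
  fi
  summary="$summary$nn=$state "
  echo "$nn: $state"
done
```

On a healthy cluster, exactly one of the two should report active and the other standby.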

6. Failover

With automatic failover enabled, the following asks the ZKFCs to perform a graceful failover from nn1 to nn2:

hdfs haadmin -failover nn1 nn2

Configuration file details

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp</value>
    </property>
    
    <property>
        <name>fs.trash.interval</name>
        <!-- minutes; 1440 = 24 hours (Hadoop does not evaluate expressions like 60*24) -->
        <value>1440</value>
    </property>
    
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-yarn1:2181,hadoop-yarn2:2181,hadoop-yarn3:2181</value>
    </property>
    
    <property>  
        <name>hadoop.http.staticuser.user</name>
        <value>yuanhai</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop-yarn1:8020</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop-yarn2:8020</value>
    </property>
    
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop-yarn1:50070</value>
    </property>
    
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop-yarn2:50070</value>
    </property>
    
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-yarn1:8485;hadoop-yarn2:8485;hadoop-yarn3:8485/ns1</value>
    </property>
    
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp/journal</value>
    </property>
    
     <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    

<!--     <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop-yarn.dragon.org:50070</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop-yarn.dragon.org:50090</value>
    </property>
    
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property>
    
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>${dfs.namenode.name.dir}</value>
    </property>
    
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/data</value>
    </property>
    
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/namesecondary</value>
    </property>
    
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>${dfs.namenode.checkpoint.dir}</value>
    </property>
-->    
</configuration>

slaves

hadoop-yarn1
hadoop-yarn2
hadoop-yarn3

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-yarn1</value>
    </property> 
    
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property> 

</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-yarn1:10020</value>
        <description>MapReduce JobHistory Server IPC host:port</description>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-yarn1:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port</description>
    </property>
    
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    
</configuration>

hadoop-env.sh

export JAVA_HOME=/opt/modules/jdk1.6.0_24
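A quick sanity check that the JAVA_HOME exported in hadoop-env.sh points at a usable JDK (the path is the one from the line above):

```shell
# Verify that JAVA_HOME contains an executable java binary.
JAVA_HOME=/opt/modules/jdk1.6.0_24
if [ -x "$JAVA_HOME/bin/java" ]; then
  java_ok=yes
else
  java_ok=no
fi
echo "JAVA_HOME=$JAVA_HOME usable: $java_ok"
```

If this prints "no", every hadoop-daemon.sh start will fail with a "JAVA_HOME is not set" style error, so it is worth checking before starting any daemons.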

Related article:

http://blog.csdn.net/zhangzhaokun/article/details/17892857

This article originally appeared on the "点滴积累" blog; please keep this attribution: http://tianxingzhe.blog.51cto.com/3390077/1711811
