Environment setup problem with HBase 0.98.15 + Hadoop 2.6 + ZooKeeper 3.4.6

Problem Description


OS: CentOS 7
Distributed setup with 3 nodes. node0 is the master; node0, node1, and node2 all also serve as DataNodes in Hadoop.
When HBase is started, there is no HMaster process on the master node.
The HBase log shows the following errors:

 2015-11-13 16:23:59,178 INFO  [main] zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,node0:2181,node2:2181 sessionTimeout=60000 watcher=master:600000x0, quorum=node1:2181,node0:2181,node2:2181, baseZNode=/hbase
2015-11-13 16:23:59,195 INFO  [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Opening socket connection to server node1/192.168.0.161:2181. Will not attempt to authenticate using SASL (unknown error)
2015-11-13 16:23:59,200 INFO  [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Socket connection established to node1/192.168.0.161:2181, initiating session
2015-11-13 16:23:59,211 INFO  [main-SendThread(node1:2181)] zookeeper.ClientCnxn: Session establishment complete on server node1/192.168.0.161:2181, sessionid = 0x150feccba590005, negotiated timeout = 40000
2015-11-13 16:23:59,223 INFO  [main] zookeeper.RecoverableZooKeeper: Node /hbase already exists and this is not a retry
2015-11-13 16:23:59,245 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2015-11-13 16:23:59,246 INFO  [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: starting
2015-11-13 16:23:59,301 INFO  [master:namenode:60000] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-11-13 16:23:59,344 INFO  [master:namenode:60000] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2015-11-13 16:23:59,346 INFO  [master:namenode:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2015-11-13 16:23:59,346 INFO  [master:namenode:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-11-13 16:23:59,355 INFO  [master:namenode:60000] http.HttpServer: Jetty bound to port 60010
2015-11-13 16:23:59,355 INFO  [master:namenode:60000] mortbay.log: jetty-6.1.26
2015-11-13 16:23:59,656 INFO  [master:namenode:60000] mortbay.log: Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010
2015-11-13 16:23:59,724 DEBUG [main-EventThread] master.ActiveMasterManager: A master is now available
2015-11-13 16:23:59,725 INFO  [master:namenode:60000] master.ActiveMasterManager: Registered Active Master=namenode,60000,1447403038605
2015-11-13 16:23:59,731 INFO  [master:namenode:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-11-13 16:23:59,875 FATAL [master:namenode:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.<init>(Ljava/util/zip/Checksum;II)V
        at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1342)
        at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1371)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1403)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1382)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1307)
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:384)
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:380)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:380)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:324)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
        at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:664)
        at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:642)
        at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:599)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:481)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:154)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:130)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:881)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:684)
        at java.lang.Thread.run(Thread.java:745)
2015-11-13 16:23:59,876 INFO  [master:namenode:60000] master.HMaster: Aborting
2015-11-13 16:23:59,877 DEBUG [master:namenode:60000] master.HMaster: Stopping service threads
2015-11-13 16:23:59,877 INFO  [master:namenode:60000] ipc.RpcServer: Stopping server on 60000
2015-11-13 16:23:59,877 INFO  [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2015-11-13 16:23:59,877 INFO  [master:namenode:60000] master.HMaster: Stopping infoServer
2015-11-13 16:23:59,877 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-11-13 16:23:59,877 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-11-13 16:23:59,879 INFO  [master:namenode:60000] mortbay.log: Stopped HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:60010
2015-11-13 16:23:59,994 INFO  [master:namenode:60000] zookeeper.ZooKeeper: Session: 0x150feccba590005 closed
2015-11-13 16:23:59,994 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-11-13 16:23:59,994 INFO  [master:namenode:60000] master.HMaster: HMaster main thread exiting
2015-11-13 16:23:59,995 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:201)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3062)

The Hadoop and ZooKeeper processes are present on every node, and the HRegionServer processes are there too; only the HMaster is missing on the master node. I have checked the configuration files many times.
There is another odd issue: after every system reboot, starting Hadoop shows the expected processes, but http://node0:50070 is unreachable from outside until I run `iptables -F`. Similarly, if that command has not been run, ZooKeeper's MODE does not show up after startup, meaning cluster mode fails to come up.
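Needing `iptables -F` after every reboot suggests the default CentOS 7 firewall is blocking the cluster ports; flushing the rules only lasts until the next restart. A sketch of a persistent alternative using firewalld (the port list is an assumption based on the stock defaults for Hadoop 2.6, HBase 0.98, and ZooKeeper 3.4; adjust it if your configs override them):

```shell
# Run on every node: permanently open the default cluster ports.
# 50070/50075 = HDFS web/datanode, 8020/9000 = NameNode RPC,
# 2181/2888/3888 = ZooKeeper client/peer/election,
# 60000/60010/60020/60030 = HMaster and RegionServer RPC/web.
for port in 50070 50075 8020 9000 2181 2888 3888 60000 60010 60020 60030; do
    sudo firewall-cmd --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload
```

After this, `zkServer.sh status` should report a MODE (leader or follower) on each node without any manual `iptables -F`.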

Solution

Some issues when setting up a hadoop+zookeeper+hbase environment
hadoop1.2.1+zookeeper3.4.6+hbase0.94 cluster environment setup
Hadoop+HBase+ZooKeeper distributed cluster environment setup
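The links above cover general setup, but the `NoSuchMethodError` on `FSOutputSummer.<init>` in the log is the classic symptom of mixed Hadoop client jars on HBase's classpath: the HBase 0.98 hadoop2 binary ships Hadoop 2.2 jars, whose `FSOutputSummer` constructor signature differs from Hadoop 2.6's. A sketch of the usual fix, swapping the bundled jars for the cluster's own (the `$HBASE_HOME`/`$HADOOP_HOME` paths and exact jar layout are assumptions; verify them against your installation):

```shell
# Remove the Hadoop 2.2.x jars bundled with HBase 0.98...
rm -f "$HBASE_HOME"/lib/hadoop-*.jar
# ...and copy in the Hadoop 2.6 jars from the installation actually in use,
# skipping test and source jars.
find "$HADOOP_HOME"/share/hadoop -name 'hadoop-*2.6*.jar' \
    ! -name '*test*' ! -name '*sources*' \
    -exec cp {} "$HBASE_HOME"/lib/ \;
# Repeat on every node, then restart HBase.
```

With matching jars, the HMaster should get past `MasterFileSystem.checkRootDir` instead of aborting.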

Solution 2:

http://www.it165.net/admin/html/201502/4925.html

Date: 2024-10-03 14:37:02
