Hypertable on HDFS (Hadoop) Installation

Hadoop - HDFS Installation Guide

Procedure 4.2. Hypertable on HDFS

  1. Create the working directory

    $ hadoop fs -mkdir /hypertable
    $ hadoop fs -chmod 777 /hypertable
    					
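    Step 1's chmod uses the same octal semantics on HDFS as on a local filesystem. As a quick local illustration of what mode 777 grants (a scratch directory stands in for the HDFS path /hypertable):

    ```shell
    # Illustrate the mode bits step 1 sets: 777 = read/write/execute for
    # owner, group and others. A local scratch dir stands in for /hypertable.
    workdir=$(mktemp -d)
    chmod 777 "$workdir"
    stat -c '%a' "$workdir"   # prints 777 (GNU stat)
    ```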
  2. Install the Java runtime and Hadoop
    yum install java-1.7.0-openjdk
    yum localinstall http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    					
  3. Fix the jrun bug

    The stock jrun script looks up the Hypertable jar under the hard-coded path /opt/hypertable/doug/current (the commented-out line below). Back up the script, then correct the HT_JAR line:

    cp /opt/hypertable/current/bin/jrun /opt/hypertable/current/bin/jrun.old

    vim /opt/hypertable/current/bin/jrun
    #HT_JAR=`ls -1 /opt/hypertable/doug/current/lib/java/*.jar | grep "hypertable-[^-]*.jar" | awk 'BEGIN {FS="/"} {print $NF}'`
    HT_JAR=`ls -1 /opt/hypertable/current/lib/java/*.jar | grep "hypertable-[^-]*.jar" | awk 'BEGIN {FS="/"} {print $NF}'`
    					
    Set the environment variables used by the Hypertable scripts:

     export JAVA_HOME=/usr
     export HADOOP_HOME=/usr
     export HYPERTABLE_HOME=/opt/hypertable/current
    					
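    The fixed HT_JAR line above drops the /doug/ path component. To see what the grep/awk pipeline actually selects, it can be exercised against a simulated `ls -1` listing (the jar filenames and version below are hypothetical examples):

    ```shell
    # Simulated `ls -1 /opt/hypertable/current/lib/java/*.jar` output;
    # the jar names are hypothetical examples.
    listing='/opt/hypertable/current/lib/java/hypertable-0.9.7.0.jar
    /opt/hypertable/current/lib/java/hypertable-examples-0.9.7.0.jar'
    # Same pipeline as the fixed jrun line: the hypertable-[^-]*.jar pattern
    # rejects names with a second hyphen (e.g. -examples-), and awk strips
    # the directory prefix, leaving only the core jar's basename.
    HT_JAR=$(echo "$listing" | grep "hypertable-[^-]*.jar" | awk 'BEGIN {FS="/"} {print $NF}')
    echo "$HT_JAR"   # hypertable-0.9.7.0.jar
    ```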
  4. Configure hypertable.cfg
    # cat conf/hypertable.cfg
    #
    # hypertable.cfg
    #
    
    # HDFS Broker
    #HdfsBroker.Hadoop.ConfDir=/etc/hadoop/conf
    HdfsBroker.Hadoop.ConfDir=/etc/hadoop
    
    # Ceph Broker
    CephBroker.MonAddr=192.168.6.25:6789
    
    # Local Broker
    DfsBroker.Local.Root=fs/local
    
    # DFS Broker - for clients
    DfsBroker.Port=38030
    
    # Hyperspace
    Hyperspace.Replica.Host=localhost
    Hyperspace.Replica.Port=38040
    Hyperspace.Replica.Dir=hyperspace
    
    # Hypertable.Master
    #Hypertable.Master.Host=localhost
    Hypertable.Master.Port=38050
    
    # Hypertable.RangeServer
    Hypertable.RangeServer.Port=38060
    
    Hyperspace.KeepAlive.Interval=30000
    Hyperspace.Lease.Interval=1000000
    Hyperspace.GracePeriod=200000
    
    # ThriftBroker
    ThriftBroker.Port=38080
    					
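    Client-side scripts often need the broker ports from this file. A minimal sketch of pulling a value out of the key=value format shown above (a two-line excerpt is written to a temp file here so the snippet is self-contained):

    ```shell
    # Write a two-line excerpt of hypertable.cfg to a temp file.
    cfg=$(mktemp)
    printf 'DfsBroker.Port=38030\nThriftBroker.Port=38080\n' > "$cfg"
    # Extract a value by its dotted key; the value follows '='.
    dfs_port=$(grep '^DfsBroker\.Port=' "$cfg" | cut -d= -f2)
    echo "$dfs_port"   # 38030
    ```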

    /etc/hadoop/hdfs-site.xml

    # cat /etc/hadoop/hdfs-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>dfs.name.dir</name>
            <value>/var/hadoop/name1</value>
            <description>  </description>
        </property>
        <property>
            <name>dfs.data.dir</name>
            <value>/var/hadoop/hdfs/data1</value>
            <description> </description>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>
    </configuration>
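    The dfs.name.dir and dfs.data.dir paths in the file above must exist and be writable by the HDFS user before the namenode is formatted. A dry-run sketch (PREFIX defaults to a scratch directory here; on a real node you would create the paths directly and chown them to your Hadoop service account):

    ```shell
    # Create the local directories named in hdfs-site.xml. PREFIX defaults
    # to a scratch dir so this can be dry-run; drop it on a real node.
    PREFIX=${PREFIX:-$(mktemp -d)}
    mkdir -p "$PREFIX"/var/hadoop/name1 "$PREFIX"/var/hadoop/hdfs/data1
    ls "$PREFIX"/var/hadoop
    ```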
    
  5. Start the DFS broker
    # /opt/hypertable/current/bin/set-hadoop-distro.sh cdh4
    Hypertable successfully configured for Hadoop cdh4
    					
    # /opt/hypertable/current/bin/start-dfsbroker.sh hadoop
    DFS broker: available file descriptors: 1024
    Started DFS Broker (hadoop)
    					

    Check the startup log

    # tail -f /opt/hypertable/current/log/DfsBroker.hadoop.log
    log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration).
    log4j:WARN Please initialize the log4j system properly.
    HdfsBroker.dfs.client.read.shortcircuit=false
    HdfsBroker.dfs.replication=2
    HdfsBroker.Server.fs.default.name=hdfs://namenode.example.com:9000
    Apr 23, 2013 6:43:18 PM org.hypertable.AsyncComm.IOHandler DeliverEvent
    INFO: [/192.168.6.25:53556 ; Tue Apr 23 18:43:18 HKT 2013] Connection Established
    Apr 23, 2013 6:43:18 PM org.hypertable.DfsBroker.hadoop.ConnectionHandler handle
    INFO: [/192.168.6.25:53556 ; Tue Apr 23 18:43:18 HKT 2013] Disconnect - COMM broken connection : Closing all open handles from /192.168.6.25:53556
    Closed 0 input streams and 0 output streams for client connection /192.168.6.25:53556
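    The "Disconnect - COMM broken connection" entry above follows a short-lived client probe and closes zero handles, so it is not by itself a failure. One quick way to separate such events from real problems is to count event types in the log. A sketch against the excerpt above (the lines are embedded in a temp file so the snippet is self-contained; on a live system you would grep the DfsBroker.hadoop.log path from the tail command):

    ```shell
    # Count connection events in a broker log excerpt (lines taken from
    # the sample output above).
    log=$(mktemp)
    cat > "$log" <<'EOF'
    INFO: [/192.168.6.25:53556 ; Tue Apr 23 18:43:18 HKT 2013] Connection Established
    INFO: [/192.168.6.25:53556 ; Tue Apr 23 18:43:18 HKT 2013] Disconnect - COMM broken connection : Closing all open handles from /192.168.6.25:53556
    EOF
    established=$(grep -c 'Connection Established' "$log")
    echo "$established"   # 1
    ```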
Date: 2024-12-10 11:52:27
