Hadoop environment setup problem

Problem description

I set up the environment following a video tutorial, but I hit this problem when running the wordcount example:

[root@127 hadoop-1.2.1]# bin/hadoop jar hadoop-examples-1.2.1.jar wordcount in out
14/08/17 21:23:19 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:20 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:21 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:22 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:23 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:24 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:25 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:26 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:27 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:28 INFO ipc.Client: Retrying connect to server: 192.168.30.131/192.168.30.131:9001. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/17 21:23:28 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:java.net.ConnectException: Call to 192.168.30.131/192.168.30.131:9001 failed on connection exception: java.net.ConnectException: Connection refused
java.net.ConnectException: Call to 192.168.30.131/192.168.30.131:9001 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
    at org.apache.hadoop.ipc.Client.call(Client.java:1118)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
    at org.apache.hadoop.mapred.JobClient.createProxy(JobClient.java:559)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:498)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:479)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:563)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:561)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:549)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
    at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
    at org.apache.hadoop.ipc.Client.call(Client.java:1093)
    ... 33 more

My master host is set to 192.168.30.131. The configuration files are:

[root@127 conf]# cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.30.131:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/jichaow/hadoop</value>
  </property>
</configuration>

[root@127 conf]# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.30.131:9001</value>
  </property>
</configuration>

[root@127 conf]# cat masters
192.168.30.131
[root@127 conf]# cat slaves
192.168.30.133

Can anyone help me figure out where the problem is? Thanks.

Solution

Date: 2024-08-30 13:23:34
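The repeated "Retrying connect ... Connection refused" on 192.168.30.131:9001 means nothing is listening on the JobTracker port: either the JobTracker was never started (run `jps` on the master to check for a JobTracker process, then `bin/start-all.sh`), or it bound to an address other than the one in mapred-site.xml. You can reproduce Hadoop's connection test outside of Hadoop; the sketch below is a hypothetical helper (not part of Hadoop) that mimics the IPC client's fixed-sleep retry behavior:

```python
import socket
import time

def port_open(host, port, retries=3, sleep_s=0.1):
    """Try to open a TCP connection, retrying like Hadoop's IPC client.

    Returns True as soon as a connect succeeds, False once every
    attempt fails (e.g. with 'Connection refused').
    """
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError as e:
            print(f"Retrying connect to {host}:{port}. "
                  f"Already tried {attempt} time(s): {e}")
            time.sleep(sleep_s)
    return False
```

If `port_open("192.168.30.131", 9001)` returns False from the master itself, the JobTracker daemon is down or mis-bound; check `logs/hadoop-root-jobtracker-*.log` for the startup error (a common one here is the NameNode at 9000 not being up, or a hostname/IP mismatch between `masters`, `slaves`, and the site files).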

Related articles on the Hadoop environment setup problem

Hadoop environment setup problem

Problem description: My Linux kernel version is Linux version 2.6.32-504.el6.i686 (mockbuild@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)) #1 SMP Wed Oct 15 03:02:07 UTC 2014, and my Hadoop version is Hadoop 2.7.1. The environment and related settings are all configured, so why does startup always print "Unable to load native-hadoop library for your platform"?..
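That warning usually means the `libhadoop.so` shipped in the tarball does not match the platform: the Apache Hadoop 2.7.1 binary release bundles a 64-bit native library, which cannot load on a 32-bit (i686) system like the one above. Hadoop then falls back to pure-Java implementations, so it is a warning, not an error. The usual options are to rebuild the native library from source on the 32-bit machine, or simply suppress the message; a sketch of the common knobs (paths are assumptions, adjust to your install):

```shell
# hadoop-env.sh -- point the JVM at wherever a matching libhadoop.so lives
# (this only helps if a 32-bit build of the library actually exists there)
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"

# log4j.properties -- alternatively, hide the warning and keep the Java fallback
# log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
```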

Hadoop in practice -- setting up a development environment and writing Hello World

1. Download. Hadoop is developed in Java, so the most convenient way to write Hadoop programs is in Java. Go to http://archive.eclipse.org/eclipse/downloads/ and pick the 3.7.1 Eclipse SDK, which leads to this page: http://archive.eclipse.org/eclipse/downloads/drops/R-3.7.1-201109091335/#EclipseSDK Choose the appropriate version to download; the one I chose is: eclipse-SDK-3.7.

Hadoop in practice -- setting up an Eclipse development environment and writing Hello World

1. Install and configure the Hadoop plugin in Eclipse. The previous article, "Compiling the Hadoop Eclipse plugin (hadoop 1.0)", already covered how to build the Eclipse plugin for hadoop 1.0. Put the jar in the plugins folder under the Eclipse install directory, then start Eclipse. Once it is open, go to Window -> Preferences, click "Ant", then click Browse, select the build directory under the Hadoop source tree, and click OK. Open Window -> Sho

Big data - Hadoop pseudo-distributed environment setup

Problem description: Hadoop pseudo-distributed environment setup. I've recently started learning Hadoop and want to set up a pseudo-distributed environment, but my computer's specs are too low to run a virtual machine. Can I use Alibaba Cloud to set up a pseudo-distributed environment? Solution: Spin up a few Alibaba Cloud machines on one LAN and you can build it, or use standalone mode. Solution 2: Hmm, does standalone mode mean I don't need to install a virtual machine?

eclipse - Setting up an environment for developing Hadoop applications with Eclipse on Win7

Problem description: Setting up an environment for developing Hadoop applications with Eclipse on Win7. Do I need to install Cygwin for this? The environment I set up doesn't work. Solution: See "Setting up an Eclipse development environment for Android system applications", "Setting up a Hadoop development environment with Cygwin on Win7", and "Setting up an Eclipse development environment for a hadoop cluster (hadoop-1.1.2)". Solution 2: Do you mean Hadoop is installed directly on Windows? If so, any kind of problem is to be expected. If Hadoop runs in a virtual machine under Windows, then you need Cygwin; once SSH is configured it will work.

Learning Hadoop: setting up an HBase cluster environment

Setting up an HBase cluster environment. This cluster environment is built on top of the pseudo-distributed setup. 1. Modify the HBase configuration files on the original pseudo-distributed node hadoop1: #cd /usr/local/hbase/conf/ Files to modify: hbase-env.sh, hbase-site.xml, regionservers. #vim hbase-env.sh Because we use the separately built ZooKeeper cluster, set HBase's bundled ZooKeeper to false so it does not start. #vim hbase-site.xml Set the hostnames of the machines in the ZooKeeper cluster,
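The two settings described above look roughly like this (hostnames are examples for a typical three-node ZooKeeper quorum; adjust to your cluster):

```shell
# hbase-env.sh -- do not let HBase manage its own ZooKeeper
export HBASE_MANAGES_ZK=false
```

```xml
<!-- hbase-site.xml -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop1,hadoop2,hadoop3</value>
</property>
```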

Setting up a remote Hadoop development environment with 32-bit Eclipse on Windows _java

This article assumes the Hadoop environment lives on a remote machine (e.g. a Linux server), Hadoop version 2.5.2. Note: this article mainly follows "Remote debugging Hadoop 2.6.0 from eclipse/intellij idea" with some adjustments on top of it. Since I like installing 32-bit software on 64-bit Win7 (32-bit JDK, 32-bit Eclipse), the operating system here is 64-bit Win7 but all the software is 32-bit. Software versions: OS: Win7 64-bit; Eclipse: eclipse-jee-mars-2-win32; Java: 1.8.0_

Detailed steps for setting up a Hadoop 2.x pseudo-distributed environment _database/other

This article walks through the whole Hadoop 2.x pseudo-distributed setup in detail, with text and screenshots, for your reference. 1. Edit hadoop-env.sh, yarn-env.sh, and mapred-env.sh. Method: open these three files with Notepad++ (as user beifeng) and add: export JAVA_HOME=/opt/modules/jdk1.7.0_67 2. Edit the core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml configuration files. 1) Edit core-
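For reference, a minimal core-site.xml for a 2.x pseudo-distributed node typically looks like the sketch below (hostname, port, and tmp path are example assumptions; note that 2.x uses fs.defaultFS, the successor of the deprecated fs.default.name):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/modules/hadoop/data/tmp</value>
  </property>
</configuration>
```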

HADOOP, big data, C++ development environment setup problem

Problem description: HADOOP, big data, C++ development environment setup. Experts: I'm developing for Hadoop in C++. The server environment is already set up, and now I want to set up an environment for developing and compiling Hadoop code in C++. The C++ development tools available are Eclipse and VS2010. How should I get started, and how do I set this up? Solution: http://blog.csdn.net/jin123wang/article/details/39012255 http://blog.csdn.net/zwx19921215/article/details/19896
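Besides Hadoop Pipes (the dedicated C++ API), the lowest-friction route for non-Java code is Hadoop Streaming: the mapper and reducer are ordinary executables that read lines on stdin and write key<TAB>value pairs on stdout, so a compiled C++ binary plugs in directly. The mapper logic is sketched below in Python for brevity; the same stdin/stdout loop is a few lines of C++ with std::cin:

```python
import sys

def map_words(lines):
    """Word-count mapper for Hadoop Streaming.

    Emits one 'word<TAB>1' pair per word; the framework sorts these
    pairs by key before handing them to the reducer.
    """
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

if __name__ == "__main__":
    for pair in map_words(sys.stdin):
        print(pair)
```

It is submitted with the streaming jar, along the lines of `bin/hadoop jar contrib/streaming/hadoop-streaming-*.jar -mapper mapper.py -reducer reducer.py -input in -output out` (jar path varies by Hadoop version).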