Problem Description
Problem: I give the job two input files. The first file contains:

Hello World Bye World

The second file contains:

Hello Hadoop Goodbye Hadoop areate

But the output of running the WordCount program is just those two lines again:

Hello Hadoop Goodbye Hadoop areate
Hello World Bye World

The WordCount program itself should be fine, because it ran correctly before. Today it suddenly stopped working, as if the Mapper and Reducer were not doing anything at all, and I honestly have no idea why. I wanted to check whether the Hadoop processes were running normally, but after going into the bin folder of the Hadoop installation directory, the shell said the jps command could not be found!

Part of the console output is:

13/10/02 21:35:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/10/02 21:35:43 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/10/02 21:35:43 INFO input.FileInputFormat: Total input paths to process : 2
13/10/02 21:35:43 WARN snappy.LoadSnappy: Snappy native library not loaded
13/10/02 21:35:44 INFO mapred.JobClient: Running job: job_local255489033_0001
13/10/02 21:35:44 INFO mapred.LocalJobRunner: Waiting for map tasks
13/10/02 21:35:44 INFO mapred.LocalJobRunner: Starting task: attempt_local255489033_0001_m_000000_0
13/10/02 21:35:44 INFO util.ProcessTree: setsid exited with exit code 0
13/10/02 21:35:44 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@58dad8b5
13/10/02 21:35:44 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/test/input/2.txt:0+36
13/10/02 21:35:44 INFO mapred.MapTask: io.sort.mb = 100
13/10/02 21:35:45 INFO mapred.MapTask: data buffer = 79691776/99614720
13/10/02 21:35:45 INFO mapred.MapTask: record buffer = 262144/327680
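One clue in that log is the line "WARN mapred.JobClient: No job jar file set. User classes may not be found." That warning typically appears when the program is launched straight from an IDE or a plain java command instead of via hadoop jar, so setJarByClass cannot locate a jar containing the user classes; the job may then run without the custom Mapper/Reducer, which would be consistent with the pass-through output above. As a minimal sketch (not the poster's actual fix; the jar path is hypothetical), the jar can be pinned explicitly on the Configuration, which is what the JobConf#setJar method mentioned in the warning does underneath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobJarSketch {
    public static Job buildJob() throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical path: point the job at the packaged jar explicitly so the
        // user classes travel with the job even when it is not started via "hadoop jar".
        conf.set("mapred.jar", "/home/test/wordcount.jar");
        Job job = new Job(conf, "wordcount");
        // setJarByClass only helps when this class itself was loaded from a jar.
        job.setJarByClass(JobJarSketch.class);
        return job;
    }
}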
Solutions
Solution 2:
Post your source code so we can take a look.
Solution 3:
Quoting the reply from s060403072 in post #1:
Post your source code so we can take a look.
package testpkg;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: split each line into tokens and emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Solution 4:
You could try rebooting the machine and running it again.
Solution 5:
Quoting the reply from s060403072 in post #3:
You could try rebooting the machine and running it again.
I did reboot, and it is still the same. Never mind, I will try reconfiguring everything from scratch.
Solution 6:
Hmm, it feels like an environment variable problem...
Solution 7:
jps does not work? Then it is a Java environment problem. Check whether the configuration in /etc/profile has gone wrong. hadoop-env.sh also needs JAVA_HOME set.
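For reference, a minimal sketch of those two settings, with hypothetical JDK and Hadoop paths (the real locations depend on the installation); a missing PATH entry for the JDK's bin directory would also explain why jps cannot be found:

# /etc/profile (hypothetical paths; run "source /etc/profile" or re-login for it to take effect)
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45
export PATH=$PATH:$JAVA_HOME/bin

# conf/hadoop-env.sh inside the Hadoop installation directory
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45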
Solution 8:
The jps command lives in the JDK's bin directory.
Solution 9:
jps is the JDK command for listing Java processes. Try re-packaging the jar and running the job again.
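A sketch of that re-package-and-resubmit step, assuming the compiled classes sit in a bin/ directory and using a hypothetical jar name (the package and class names come from the source posted above; the output path is also hypothetical):

# Package the compiled classes into a job jar (paths hypothetical)
jar -cvf wordcount.jar -C bin .
# Submit through hadoop so the job jar is set and the user classes are found
bin/hadoop jar wordcount.jar testpkg.WordCount /user/test/input /user/test/output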