Problem description
10.0.15.104 is the master; 10.0.15.105 and 10.0.15.106 are workers. The slaves file contains:

```
10.0.15.105
10.0.15.106
```

spark-env.sh is set to:

```
export SCALA_HOME=/app/spark/scala-2.10.3
export HADOOP_HOME=/app/hadoop-2.2.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_WORKER_INSTANCES=3
SPARK_MASTER_PORT=8081
SPARK_MASTER_WEBUI_PORT=8090
SPARK_WORKER_PORT=8091
SPARK_MASTER_IP=10.0.15.104
SPARK_WORKER_DIR=/app/spark/spark-0.9.1-bin-hadoop2/worker
```

But when I create a JavaSparkContext as follows, it always fails:

```java
String master = "spark://10.0.15.104:8081";
String sparkHome = "/app/spark/spark-0.9.1-bin-hadoop2";
String appName = "JavaWordCount";
String[] jarArray = JavaSparkContext.jarOfClass(WordCount.class);
JavaSparkContext ctx = new JavaSparkContext(master, appName, sparkHome, jarArray);
```

It keeps throwing the error below. I have no idea where the address /220.250.64.18:0 comes from; I never configured it anywhere. Hoping someone can point me in the right direction.

```
Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to bind to: /220.250.64.18:0
	at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
	at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:391)
	at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:388)
	at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
	at scala.util.Try$.apply(Try.scala:161)
	at scala.util.Success.map(Try.scala:206)
	at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
	at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
	at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
	at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
	at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
	at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.net.BindException: Cannot assign requested address
	at sun.nio.ch.Net.bind(Native Method)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
	at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
	at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:662)
```
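A likely explanation (an inference from the trace, not something the post confirms): the Akka transport that Spark 0.9 uses derives its bind address from the local machine's hostname. If the driver host's name resolves to an address that is not configured on any local interface, such as 220.250.64.18 coming from a stale /etc/hosts or DNS entry, the bind fails with exactly this "Cannot assign requested address" error. A minimal check of what your hostname resolves to:

```java
import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // The address the driver tries to bind is derived from the
        // machine's own hostname, resolved like this:
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("hostname = " + local.getHostName());
        System.out.println("resolved = " + local.getHostAddress());
        // If "resolved" prints an address that no local interface owns
        // (e.g. 220.250.64.18), fix the /etc/hosts or DNS entry for this
        // hostname, or set SPARK_LOCAL_IP explicitly.
    }
}
```

Running this on the driver machine and comparing the printed address against `ifconfig` output should show whether hostname resolution is the culprit.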
Solutions
Solution 2:
```
export SPARK_MASTER_IP=localhost
export SPARK_LOCAL_IP=localhost
```
Solution 3:
When connecting to the master, use its hostname rather than a raw IP address; otherwise the connection may fail.
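As a sketch of this advice, the master URL can be built from the master's hostname instead of the IP used in the question. Here `spark-master` is a placeholder name, not something from the original post; it would need to resolve to 10.0.15.104 (via DNS or an /etc/hosts entry) on every node in the cluster:

```java
public class MasterUrl {
    public static void main(String[] args) {
        // Placeholder hostname; must resolve to the master's real address
        // (10.0.15.104 in this cluster) on the driver and on every worker.
        String masterHost = "spark-master";

        // Same port as SPARK_MASTER_PORT in spark-env.sh above.
        String master = "spark://" + masterHost + ":8081";
        System.out.println(master);

        // The JavaSparkContext is then created exactly as in the question,
        // only with the hostname-based URL:
        // JavaSparkContext ctx = new JavaSparkContext(master, appName, sparkHome, jarArray);
    }
}
```

For this to work, the same name must appear consistently: in the URL passed to JavaSparkContext, in the master's own configuration, and in each node's hosts file.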