Connection refused when a Spark cluster slave connects to the master

Problem description

16/04/09 23:18:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/09 23:18:45 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks
java.io.IOException: Failed to connect to master/218.192.172.48:35542
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:91)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.start(RetryingBlockFetcher.java:120)
    at org.apache.spark.network.netty.NettyBlockTransferService.fetchBlocks(NettyBlockTransferService.scala:100)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.sendRequest(ShuffleBlockFetcherIterator.scala:169)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.fetchUpToMaxBytes(ShuffleBlockFetcherIterator.scala:351)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.initialize(ShuffleBlockFetcherIterator.scala:286)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.<init>(ShuffleBlockFetcherIterator.scala:119)
    at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:43)
    at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:98)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
    at org.apache.spark.scheduler.Task.run(Task.scala:82)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: master/218.192.172.48:35542
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    ... 1 more
16/04/09 23:18:50 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks (after 1 retries)
java.io.IOException: Failed to connect to master/218.192.172.48:35542
    (same stack trace as above, entered via org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170))
Caused by: java.net.ConnectException: Connection refused: master/218.192.172.48:35542
    (same stack trace as above)
16/04/09 23:18:55 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks (after 2 retries)
    (same exception and stack trace as above)

The cluster was configured by following this tutorial: http://wuchong.me/blog/2015/04/04/spark-on-yarn-cluster-deploy/, but I cannot get past this problem. The master and slaves can ping each other in both directions. Has anyone run into a similar situation, and how should I troubleshoot it? Note that the master also doubles as a worker, and I find the same error in the master's own log. Also, the SparkPi example runs fine, and the program I am running now works without problems in local mode.
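A first step in narrowing this down is to check what the name `master` actually resolves to on each node, and whether the failing port is reachable at all. A diagnostic sketch (the host name `master` and port 35542 come from the log above; the block-manager port changes per run, so substitute whatever the current error reports):

```shell
# What does this node think its own hostname and address are?
hostname -f
hostname -i

# How does the slave resolve the master's name? (checks /etc/hosts, then DNS)
getent hosts master

# Is the IP:port from the error message actually reachable from the slave?
nc -zv 218.192.172.48 35542

# Note: ping succeeding (as in the question) only proves ICMP works.
# "Connection refused" on TCP means nothing is listening on that IP:port,
# typically because the service bound to a different interface.
```

If `getent` returns a public IP while Spark is listening on the LAN interface, connections will be refused even though ping works.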

Solutions

Solution 2:
I had mapped several IPs to the master, and it turned out that the address refusing connections was the external one. The IP is presumably being resolved from the hostname, and on this server I am not free to change that mapping.
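When the hostname is what gets resolved, the usual fix is to pin it to the LAN address in /etc/hosts on every node, so that `master` never resolves to the public IP. A sketch, with placeholder addresses and slave names (none of these values are from the original post):

```shell
# /etc/hosts on every node (master and slaves):
# hostname -> LAN IP, listed before any mapping to the public address
192.168.1.100   master
192.168.1.101   slave1
192.168.1.102   slave2

# Verify that the LAN entry wins:
getent hosts master    # should print the 192.168.1.100 line
```

On Debian/Ubuntu-style installs the machine's own hostname often also appears on a `127.0.1.1` line; removing it from that line can keep Spark from binding to the loopback address instead of the LAN interface.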
Solution 3:
To restate the environment first: I am developing in the IntelliJ IDE, and as suggested I have already set SPARK_LOCAL_IP in conf/spark-env.sh to this machine's LAN IP; the slave runs in a virtual machine. Even so, when launched from the IDE the address was still resolved to the external IP, so it is no surprise that connections to the master were refused. I then noticed that under spark-shell the address was not resolved to the external IP, so I tried submitting the job with spark-submit instead, and the connection-refused problem went away. Of course, if the master's address and its hostname had matched from the start, the refusals presumably would not have occurred; resolution appears to go by hostname by default. The real issue is that IntelliJ does not read the settings in spark-env.sh, which deserves further investigation.
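The observation above suggests making the bind address explicit on every launch path, not only in spark-env.sh (which spark-shell and spark-submit source, but a JVM launched directly from IntelliJ does not). A sketch, with 192.168.1.100 as a placeholder LAN address and a placeholder application class and jar:

```shell
# conf/spark-env.sh -- sourced by spark-shell, spark-submit and the daemons,
# but NOT by an IDE-launched JVM
export SPARK_LOCAL_IP=192.168.1.100   # this machine's LAN address
export SPARK_MASTER_IP=master

# Per-application override at submit time:
spark-submit \
  --master spark://master:7077 \
  --conf spark.driver.host=192.168.1.100 \
  --class com.example.WordCount \
  target/wordcount.jar

# For IDE runs, set the same values in the Run Configuration instead:
#   VM options:            -Dspark.driver.host=192.168.1.100
#   Environment variables: SPARK_LOCAL_IP=192.168.1.100
```

Spark copies `spark.*` system properties into the SparkConf, so the VM option reaches the driver the same way `--conf` would on the command line.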

Date: 2024-08-24 09:50:33
