Handling "Caused by: java.io.IOException: Filesystem closed"

org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename output from: hdfs://nameservice/user/hive/warehouse/om_dw.db/mac_wifi_day_data/tid=CYJOY/.hive-staging_hive_2016-01-20_10-19-09_200_1283758166994658237-1/_task_tmp.-ext-10002/c_date=2014-10-06/_tmp.000151_0 to: hdfs://nameservice/user/hive/warehouse/om_dw.db/mac_wifi_day_data/tid=CYJOY/.hive-staging_hive_2016-01-20_10-19-09_200_1283758166994658237-1/_tmp.-ext-10002/c_date=2014-10-06/000151_0
at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:242)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$200(FileSinkOperator.java:143)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1051)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:617)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:192)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2113)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:218)
... 15 more

Cause:

FileSystem.get() returns cached instances: every caller that passes an equivalent Configuration (same scheme, authority, and user) receives the same FileSystem object. When several tasks in the same JVM share that one cached instance and one of them calls close() after finishing, every later HDFS call made through the shared instance fails with the exception above, because DFSClient.checkOpen() finds the client already closed.
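The failure mode can be sketched with a small stand-in for Hadoop's cache. FakeFileSystem and FileSystemCache below are simplified hypothetical classes for illustration only, not Hadoop's real ones (the real cache lives in org.apache.hadoop.fs.FileSystem and keys on scheme, authority, and user), but the close-breaks-everyone behavior is the same:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Stand-in for a FileSystem whose DFSClient checks open state on every call.
class FakeFileSystem {
    private boolean closed = false;

    // Mirrors DFSClient.checkOpen(): every operation fails once closed.
    void checkOpen() throws IOException {
        if (closed) throw new IOException("Filesystem closed");
    }

    boolean exists(String path) throws IOException {
        checkOpen();
        return true;
    }

    void close() {
        closed = true;
    }
}

// Stand-in for the static FileSystem cache: same key -> same instance.
class FileSystemCache {
    private static final Map<String, FakeFileSystem> CACHE = new HashMap<>();

    static synchronized FakeFileSystem get(String uri, boolean disableCache) {
        if (disableCache) {
            return new FakeFileSystem(); // fresh instance per caller
        }
        return CACHE.computeIfAbsent(uri, k -> new FakeFileSystem());
    }
}

public class Demo {
    public static void main(String[] args) {
        // Two "tasks" with the same configuration share one cached instance.
        FakeFileSystem a = FileSystemCache.get("hdfs://nameservice", false);
        FakeFileSystem b = FileSystemCache.get("hdfs://nameservice", false);
        System.out.println(a == b); // true: same object from the cache

        a.close(); // the first task finishes and closes "its" FileSystem

        try {
            b.exists("/tmp"); // the second task's next call now blows up
        } catch (IOException e) {
            System.out.println(e.getMessage()); // Filesystem closed
        }

        // With caching disabled, each caller owns a private instance.
        FakeFileSystem c = FileSystemCache.get("hdfs://nameservice", true);
        FakeFileSystem d = FileSystemCache.get("hdfs://nameservice", true);
        System.out.println(c == d); // false
    }
}
```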

Solution:

Set fs.hdfs.impl.disable.cache to true in core-site.xml (the client-side Hadoop configuration). Each call to FileSystem.get() then returns a fresh instance, so one task closing its FileSystem no longer affects the others. The trade-off: every get() now creates its own connection, and each caller becomes responsible for closing the instance it obtained.
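A minimal core-site.xml fragment for the fix; the property name is the real Hadoop key, and the layout follows the standard Hadoop configuration file format:

```xml
<!-- core-site.xml: disable the client-side FileSystem cache for hdfs:// URIs -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
  <description>
    When true, FileSystem.get() returns a new instance per call instead of
    a shared cached one, so close() in one task cannot break other tasks.
  </description>
</property>
```

The same effect can be had per job rather than cluster-wide: call conf.setBoolean("fs.hdfs.impl.disable.cache", true) on the job's Configuration before obtaining the FileSystem, or use FileSystem.newInstance(uri, conf), which always bypasses the cache without any configuration change.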
Date: 2024-08-04 11:23:53
