spark - Building Spark 1.3's hive-thrift module against Scala 2.11 fails with a jline error

Problem description

Compiling Spark 1.3 against Scala 2.11 fails in the hive-thrift module with a jline-related error:
[INFO]

[INFO] ------------------------------------------------------------------------
[INFO] Building Spark Project Hive Thrift Server 1.3.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ spark-hive-thriftserver_2.11 ---
[INFO] Deleting /usr/local/spark-1.3.0/sql/hive-thriftserver/target
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-versions) @ spark-hive-thriftserver_2.11 ---
[INFO]
[INFO] --- scala-maven-plugin:3.2.0:add-source (eclipse-add-source) @ spark-hive-thriftserver_2.11 ---
[INFO] Add Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala
[INFO] Add Test Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/test/scala
[INFO]
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-scala-sources) @ spark-hive-thriftserver_2.11 ---
[INFO] Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala added.
[INFO]
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-default-sources) @ spark-hive-thriftserver_2.11 ---
[INFO] Source directory: /usr/local/spark-1.3.0/sql/hive-thriftserver/v0.13.1/src/main/scala added.
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ spark-hive-thriftserver_2.11 ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ spark-hive-thriftserver_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @ spark-hive-thriftserver_2.11 ---
[WARNING] Zinc server is not available at port 3030 - reverting to normal incremental compile
[INFO] Using incremental compilation
[INFO] compiler plugin: BasicArtifact(org.scalamacros,paradise_2.11.2,2.0.1,null)
[INFO] Compiling 9 Scala sources to /usr/local/spark-1.3.0/sql/hive-thriftserver/target/scala-2.11/classes...
[ERROR] /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala:25: object ConsoleReader is not a member of package jline
[ERROR] import jline.{ConsoleReader, History}
[ERROR] ^
[WARNING] Class jline.Completor not found - continuing with a stub.
[WARNING] Class jline.ConsoleReader not found - continuing with a stub.
[ERROR] /usr/local/spark-1.3.0/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala:165: not found: type ConsoleReader
[ERROR] val reader = new ConsoleReader()
[ERROR] ^
[ERROR] Class jline.Completor not found - continuing with a stub.
[WARNING] Class com.google.protobuf.Parser not found - continuing with a stub.
[WARNING] Class com.google.protobuf.Parser not found - continuing with a stub.
[WARNING] Class com.google.protobuf.Parser not found - continuing with a stub.
[WARNING] Class com.google.protobuf.Parser not found - continuing with a stub.
[WARNING] 6 warnings found
[ERROR] three errors found
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [01:20 min]
[INFO] Spark Project Networking ........................... SUCCESS [01:31 min]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 47.808 s]
[INFO] Spark Project Core ................................. SUCCESS [34:00 min]
[INFO] Spark Project Bagel ................................ SUCCESS [03:21 min]
[INFO] Spark Project GraphX ............................... SUCCESS [09:22 min]
[INFO] Spark Project Streaming ............................ SUCCESS [15:07 min]
[INFO] Spark Project Catalyst ............................. SUCCESS [14:35 min]
[INFO] Spark Project SQL .................................. SUCCESS [16:31 min]
[INFO] Spark Project ML Library ........................... SUCCESS [18:15 min]
[INFO] Spark Project Tools ................................ SUCCESS [01:50 min]
[INFO] Spark Project Hive ................................. SUCCESS [13:58 min]
[INFO] Spark Project REPL ................................. SUCCESS [06:13 min]
[INFO] Spark Project YARN ................................. SUCCESS [07:05 min]
[INFO] Spark Project Hive Thrift Server ................... FAILURE [01:39 min]
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Spark Project External Twitter ..................... SKIPPED
[INFO] Spark Project External Flume Sink .................. SKIPPED
[INFO] Spark Project External Flume ....................... SKIPPED
[INFO] Spark Project External MQTT ........................ SKIPPED
[INFO] Spark Project External ZeroMQ ...................... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:25 h
[INFO] Finished at: 2015-04-16T14:11:24+08:00
[INFO] Final Memory: 62M/362M
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "hadoop-2.5" could not be activated because it does not exist.
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile (scala-compile-first) on project spark-hive-thriftserver_2.11: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile failed. CompileFailed -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems you can resume the build with the command
[ERROR] mvn -rf :spark-hive-thriftserver_2.11

Solution 1:

I've run into this too and haven't solved it yet.

Solution 2:
Check whether this is a dependency version conflict. As far as I can see, the official build pulls in jline 0.9.94, while the CDH build pulls in 1.11. I added the following to hive-thriftserver's pom.xml:

<dependency>
  <groupId>jline</groupId>
  <artifactId>jline</artifactId>
  <version>0.9.94</version>
</dependency>

Give that a try.

Solution 3:

<dependency>
  <groupId>jline</groupId>
  <artifactId>jline</artifactId>
  <version>0.9.94</version>
</dependency>
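To sanity-check the version-conflict theory from Solutions 2 and 3, you can list which jline artifacts Maven has already cached locally. This is a sketch that assumes the default ~/.m2 repository location:

```shell
# List the jline versions cached in the default local Maven repository.
# If both a 0.9.x and a 1.x/2.x jline show up, the thrift-server module
# may be resolving the wrong one at compile time.
JLINE_DIR="$HOME/.m2/repository/jline/jline"
if [ -d "$JLINE_DIR" ]; then
  ls "$JLINE_DIR"
else
  echo "no jline artifacts cached yet"
fi
```

After pinning the dependency, `mvn dependency:tree -pl sql/hive-thriftserver` will show which jline version actually wins resolution.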

Solution 4:
See the official documentation: the JDBC component is not supported under Scala 2.11.

Building for Scala 2.11
To produce a Spark package compiled with Scala 2.11, use the -Dscala-2.11 property:

dev/change-version-to-2.11.sh
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package

Spark does not yet support its JDBC component for Scala 2.11.
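Given the note above, if you need a Scala 2.11 build and can live without the JDBC/thrift server, one hypothetical workaround is to skip the failing module rather than patching jline. This is a sketch; the `!` module-exclusion syntax assumes Maven 3.2.1 or newer, and the module path assumes the Spark 1.3.0 source layout:

```shell
# Sketch: build Spark for Scala 2.11, skipping the unsupported
# hive-thriftserver module instead of letting it fail the reactor.
dev/change-version-to-2.11.sh
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests \
  -pl '!sql/hive-thriftserver' clean package
```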

Date: 2024-10-09 07:12:00
