Sqoop2: Exporting Data from MySQL to HDFS

sqoop2-shell

Start sqoop2-shell

jjzhu:bin didi$ sqoop2-shell
Setting conf dir: /opt/sqoop-1.99.7/bin/../conf
Sqoop home directory: /opt/sqoop-1.99.7
Sqoop Shell: Type 'help' or '\h' for help.

sqoop:000>

Configure the Sqoop server

sqoop:000> set server --host localhost --port 12000 --webapp sqoop
Server is set successfully

Check that the server connection works

sqoop:000> show version --all
client version:
  Sqoop 1.99.7 source revision 435d5e61b922a32d7bce567fe5fb1a9c0d9b1bbb
  Compiled by abefine on Tue Jul 19 16:08:27 PDT 2016
0    [main] WARN  org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
server version:
  Sqoop 1.99.7 source revision 435d5e61b922a32d7bce567fe5fb1a9c0d9b1bbb
  Compiled by abefine on Tue Jul 19 16:08:27 PDT 2016
API versions:
  [v1]
sqoop:000>

Create links

First, list the connectors available on the Sqoop server

sqoop:000> show connector
+------------------------+---------+------------------------------------------------------------+----------------------+
|          Name          | Version |                           Class                            | Supported Directions |
+------------------------+---------+------------------------------------------------------------+----------------------+
| generic-jdbc-connector | 1.99.7  | org.apache.sqoop.connector.jdbc.GenericJdbcConnector       | FROM/TO              |
| kite-connector         | 1.99.7  | org.apache.sqoop.connector.kite.KiteConnector              | FROM/TO              |
| oracle-jdbc-connector  | 1.99.7  | org.apache.sqoop.connector.jdbc.oracle.OracleJdbcConnector | FROM/TO              |
| ftp-connector          | 1.99.7  | org.apache.sqoop.connector.ftp.FtpConnector                | TO                   |
| hdfs-connector         | 1.99.7  | org.apache.sqoop.connector.hdfs.HdfsConnector              | FROM/TO              |
| kafka-connector        | 1.99.7  | org.apache.sqoop.connector.kafka.KafkaConnector            | TO                   |
| sftp-connector         | 1.99.7  | org.apache.sqoop.connector.sftp.SftpConnector              | TO                   |
+------------------------+---------+------------------------------------------------------------+----------------------+
sqoop:000>
  • generic-jdbc-connector
    A connector built on Java JDBC; it can act as either the source or the target of a transfer.
  • hdfs-connector
    A connector that uses HDFS as the source or the target.

Create a generic-jdbc-connector link with the following command:

sqoop:002> create link -c generic-jdbc-connector
Creating link for connector with name generic-jdbc-connector
Please fill following values to create new link object
Name: mysql_weibouser_link

Database connection

Driver class: com.mysql.jdbc.Driver
Connection String: jdbc:mysql://127.0.0.1:3306/spider
Username: root
Password: **
Fetch Size:
Connection Properties:
There are currently 0 values in the map:
entry# protocol=tcp
There are currently 1 values in the map:
protocol = tcp
entry# 

SQL Dialect

Identifier enclose: NOTE: do not simply press Enter here; type a single space character instead. If you accept the default, queries against MySQL tables get their identifiers wrapped in double quotes ("") and fail.
New link was successfully created with validation status OK and name mysql_weibouser_link
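
Before using the link, it can help to verify the same JDBC parameters from the command line. This is only a sanity check; it assumes the mysql client is installed locally, and uses the database and table names from this walkthrough:

mysql -h 127.0.0.1 -P 3306 -u root -p -e 'SELECT COUNT(*) FROM spider.spiders_weibouser;'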

Create the HDFS link

sqoop:002> create link -c hdfs-connector
Creating link for connector with name hdfs-connector
Please fill following values to create new link object
Name: hdfs_weibouser_link

HDFS cluster

URI: hdfs://localhost:9000
Conf directory: /opt/hadoop-2.7.3/etc/hadoop
Additional configs::
There are currently 0 values in the map:
entry#
New link was successfully created with validation status OK and name hdfs_weibouser_link
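
Likewise, you can confirm that the URI points at a running NameNode before wiring a job to this link (assuming the hadoop client is configured on this machine):

hdfs dfs -ls hdfs://localhost:9000/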

List the links

sqoop:002> show link
+----------------------+------------------------+---------+
|         Name         |     Connector Name     | Enabled |
+----------------------+------------------------+---------+
| mysql_weibouser      | generic-jdbc-connector | true    |
| mysql_weibouser_link | generic-jdbc-connector | true    |
| hdfs_link            | hdfs-connector         | true    |
| hdfs_link2           | hdfs-connector         | true    |
| hdfs_weibouser_link  | hdfs-connector         | true    |
+----------------------+------------------------+---------+

Create a job

sqoop:002> create job -f "mysql_weibouser_link" -t "hdfs_weibouser_link"
Creating job for links with from name mysql_weibouser_link and to name hdfs_weibouser_link
Please fill following values to create new job object
Name: job_weibouser

Database source

Schema name: spider
Table name: spiders_weibouser
SQL statement:
Column names:
There are currently 0 values in the list:
element#
Partition column:
Partition column nullable:
Boundary query: 

Incremental read

Check column:
Last value: 

Target configuration

Override null value:
Null value:
File format:
  0 : TEXT_FILE
  1 : SEQUENCE_FILE
  2 : PARQUET_FILE
Choose: 0
Compression codec:
  0 : NONE
  1 : DEFAULT
  2 : DEFLATE
  3 : GZIP
  4 : BZIP2
  5 : LZO
  6 : LZ4
  7 : SNAPPY
  8 : CUSTOM
Choose: 0
Custom codec:
Output directory: hdfs://localhost:9000/usr/jjzhu/spider/spiders_weibouser
Append mode: 

Throttling resources

Extractors: 2
Loaders: 2

Classpath configuration

Extra mapper jars:
There are currently 0 values in the list:
element#
New job was successfully created with validation status OK  and name job_weibouser

Meaning of the parameters:

Name: an identifier of your choice.
Schema Name: the database/schema name. In MySQL, schema is essentially a synonym for database, so the database name goes here.
Table Name: the table to export.
SQL Statement: a free-form query used instead of a table name. The documentation says it must contain a ${CONDITIONS} placeholder, a where-clause hook that Sqoop fills in at runtime (see the sketch after this section), but I never managed to create a job with this variant.
After these fields an element# prompt appears, asking for list values (the column names); just press Enter to skip it.
The following prompts can likewise be left empty:
Partition column:
Partition column nullable:
Boundary query
Last value
Next come the settings for the data destination:
Null value: the string used to represent NULL values in the output.
File format: the format of the data files in HDFS; TEXT_FILE is used here, the simplest plain-text format.
Compression codec: the compression algorithm applied to the exported files. NONE is used here; CUSTOM lets you supply your own codec by implementing the corresponding Java interface.
Custom codec: the custom algorithm when CUSTOM is chosen above; since this example uses NONE, just press Enter.
Output directory: the destination path in HDFS. It is best to point at a path that does not exist yet, or that exists but is empty; otherwise the job appears to fail.
Append mode: whether new data is appended when export files already exist in the directory.
Extractors: 2 / Loaders: 2: throttle the number of extract (map) and load (reduce) tasks.
Finally, the element# prompt appears once more for the extra mapper jars property; leave it empty and press Enter.

If the shell reports the job was created with validation status OK, the job is in place.
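
For reference, a minimal free-form statement for this table would look like the line below. This is only a sketch: the ${CONDITIONS} placeholder is what the Sqoop2 documentation calls for, and as noted above I never got this variant to validate.

SELECT * FROM spiders_weibouser WHERE ${CONDITIONS}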

List the created jobs

sqoop:002> show job
+----+---------------+-----------------------------------------------+--------------------------------------+---------+
| Id |     Name      |                From Connector                 |             To Connector             | Enabled |
+----+---------------+-----------------------------------------------+--------------------------------------+---------+
| 1  | spider_job    | mysql_weibouser (generic-jdbc-connector)      | hdfs_link (hdfs-connector)           | true    |
| 2  | job_weibouser | mysql_weibouser_link (generic-jdbc-connector) | hdfs_weibouser_link (hdfs-connector) | true    |
+----+---------------+-----------------------------------------------+--------------------------------------+---------+
sqoop:002>

Start the job

sqoop:002> start job -n job_weibouser
Submission details
Job Name: job_weibouser
Server URL: http://localhost:12000/sqoop/
Created by: didi
Creation date: 2017-04-11 14:37:46 CST
Lastly updated by: didi
External ID: job_1491888730134_0003
    http://jjzhu:8088/proxy/application_1491888730134_0003/
2017-04-11 14:37:46 CST: BOOTING  - Progress is not available

Check the job status

sqoop:002> status job -n job_weibouser
Submission details
Job Name: job_weibouser
Server URL: http://localhost:12000/sqoop/
Created by: didi
Creation date: 2017-04-11 14:37:46 CST
Lastly updated by: didi
External ID: job_1491888730134_0003
    http://jjzhu:8088/proxy/application_1491888730134_0003/
2017-04-11 14:38:41 CST: SUCCEEDED
Counters:
    org.apache.hadoop.mapreduce.FileSystemCounter
        FILE_LARGE_READ_OPS: 0
        FILE_WRITE_OPS: 0
        HDFS_READ_OPS: 2
        HDFS_BYTES_READ: 290
        HDFS_LARGE_READ_OPS: 0
        FILE_READ_OPS: 0
        FILE_BYTES_WRITTEN: 51361466
        FILE_BYTES_READ: 25115854
        HDFS_WRITE_OPS: 2
        HDFS_BYTES_WRITTEN: 24652721
    org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter
        BYTES_WRITTEN: 0
    org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
        BYTES_READ: 0
    org.apache.hadoop.mapreduce.JobCounter
        TOTAL_LAUNCHED_MAPS: 2
        VCORES_MILLIS_REDUCES: 20225
        MB_MILLIS_MAPS: 27120640
        TOTAL_LAUNCHED_REDUCES: 2
        SLOTS_MILLIS_REDUCES: 20225
        VCORES_MILLIS_MAPS: 26485
        MB_MILLIS_REDUCES: 20710400
        SLOTS_MILLIS_MAPS: 26485
        MILLIS_REDUCES: 20225
        OTHER_LOCAL_MAPS: 2
        MILLIS_MAPS: 26485
    org.apache.sqoop.submission.counter.SqoopCounters
        ROWS_READ: 109408
        ROWS_WRITTEN: 109408
    org.apache.hadoop.mapreduce.TaskCounter
        MAP_OUTPUT_MATERIALIZED_BYTES: 25115866
        REDUCE_INPUT_RECORDS: 109408
        SPILLED_RECORDS: 218816
        MERGED_MAP_OUTPUTS: 4
        VIRTUAL_MEMORY_BYTES: 0
        MAP_INPUT_RECORDS: 0
        SPLIT_RAW_BYTES: 290
        FAILED_SHUFFLE: 0
        MAP_OUTPUT_BYTES: 24762129
        REDUCE_SHUFFLE_BYTES: 25115866
        PHYSICAL_MEMORY_BYTES: 0
        GC_TIME_MILLIS: 1648
        REDUCE_INPUT_GROUPS: 109408
        COMBINE_OUTPUT_RECORDS: 0
        SHUFFLED_MAPS: 4
        REDUCE_OUTPUT_RECORDS: 109408
        MAP_OUTPUT_RECORDS: 109408
        COMBINE_INPUT_RECORDS: 0
        CPU_MILLISECONDS: 0
        COMMITTED_HEAP_BYTES: 1951399936
    Shuffle Errors
        CONNECTION: 0
        WRONG_LENGTH: 0
        BAD_ID: 0
        WRONG_MAP: 0
        WRONG_REDUCE: 0
        IO_ERROR: 0
Job executed successfully

Check the relevant HDFS paths to see whether the output files are there

jjzhu:~ didi$ hdfs dfs -ls /usr/jjzhu/spider
Found 4 items
drwxr-xr-x   - didi supergroup          0 2017-04-11 14:38 /usr/jjzhu/spider/spiders_weibouser
drwxr-xr-x   - 777  supergroup          0 2017-04-11 10:58 /usr/jjzhu/spider/weibouser
drwxr-xr-x   - 777  supergroup          0 2017-04-11 13:33 /usr/jjzhu/spider/weobouser
drwxr-xr-x   - didi supergroup          0 2017-04-11 13:39 /usr/jjzhu/spider/weobouser2
jjzhu:~ didi$ hdfs dfs -ls /usr/jjzhu/spider/spiders_weibouser
Found 2 items
-rw-r--r--   1 didi supergroup   12262783 2017-04-11 14:38 /usr/jjzhu/spider/spiders_weibouser/33b56441-b638-48cc-8d0d-37a808f25653.txt
-rw-r--r--   1 didi supergroup   12389938 2017-04-11 14:38 /usr/jjzhu/spider/spiders_weibouser/73b20d50-de72-4aea-8c8c-d97cdc48e667.txt
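
To spot-check the exported rows, print the first few lines of the generated files (the file names are UUIDs assigned by Sqoop, so a glob is the easiest way to address them):

hdfs dfs -cat /usr/jjzhu/spider/spiders_weibouser/*.txt | head -n 5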

You can also inspect the output through the HDFS web UI at http://localhost:50070/

Notes and troubleshooting

When tracking down problems, always check the logs: Sqoop writes sqoop.log to the path configured in sqoop.properties.

  1. When configuring the HDFS output path, make sure the Sqoop user has write access.
    You can change the owner of the HDFS path with the command below (note that chown changes ownership, which is why the listing above shows files owned by a user named 777; to loosen the permission bits instead, use hdfs dfs -chmod -R 777 HDFS_PATH):
hdfs dfs -chown -R 777 HDFS_PATH
  2. RemoteException: User: xxx is not allowed to impersonate xxx

This is a proxy-user problem: make sure core-site.xml grants proxy rights to the user xxx:

    <!-- replace XXX with your own user -->
    <property>
        <name>hadoop.proxyuser.XXX.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.XXX.groups</name>
        <value>*</value>
    </property>
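
After editing core-site.xml, the new proxy-user settings have to be loaded: depending on the setup, either restart HDFS and YARN or refresh the configuration in place with the standard Hadoop admin commands:

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration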
  3. To get detailed exception output from the Sqoop shell, enable verbose mode:
sqoop:000> set option --name verbose --value true
  4. status job -n jobname fails with: java.net.ConnectException: Call From xxx.xxx.xxx.xxx to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused

This usually appears on Hadoop 2.x: the MapReduce job needs to reach the JobHistory server, whose address defaults to 0.0.0.0:10020 if unconfigured. Set the actual address in mapred-site.xml:

    <property>
        <name>mapreduce.jobhistory.address</name>
        <!-- set the actual master hostname and port -->
        <value>localhost:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <!-- set the actual master hostname and port -->
        <value>localhost:19888</value>
    </property>

Then start the JobHistory server:

$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
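
To confirm the JobHistory server is actually up, a quick check (assuming the JDK's jps is on the PATH):

jps | grep JobHistoryServer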
  5. Error: Java heap space Container killed by the ApplicationMaster. Container killed on request. Exit code is 143.

Make sure the MapReduce tasks have enough Java heap; this can be configured in mapred-site.xml:

    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024m</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx1024m</value>
    </property>
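
Exit code 143 can also mean the container was killed for exceeding its memory limit rather than the JVM heap alone; in that case raise the container sizes as well (the values below are illustrative, not taken from the original setup):

    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>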
  6. Exception has occurred during processing command
    Exception: org.apache.sqoop.common.SqoopException Message: GENERIC_JDBC_CONNECTOR_0016:Can't fetch schema -

Fix: when creating the MySQL link, Identifier enclose sets the delimiter placed around SQL identifiers. Some databases quote identifiers with double quotes (select * from "table_name"), but that syntax fails in MySQL. The property defaults to the double quote, so override it with a space.

Reference: http://www.cnblogs.com/avivaye/p/6197123.html
