Java - help with an output problem after fetching a page with jsoup

Problem Description

Help with an output problem after fetching a page with jsoup

The table data I extract comes back empty. Is this a problem with the body string?
If so, how should I change the code to fix it?

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Main {

    public static void main(String[] args) throws Throwable {

        for (int i = 1; i <= 3; i++) {
            System.out.println(getPrice(i));
        }

    }

    static List<String> getPrice(int pageNo) throws Throwable {

        Document doc = Jsoup.parse(getText(pageNo));

        // Rows of the price table, selected by its ASP.NET client id
        Elements trs = doc.select("#ctl00_cphMainFrame_Table1 tr");

        List<String> result = new ArrayList<String>(trs.size());

        // Skip the first row (presumably the header) and take the text of the sixth cell (index 5) of each data row
        for (int i = 1, l = trs.size(); i < l; i++) {
            Element tr = trs.get(i);

            result.add(tr.child(5).text());
        }

        return result;

    }

    static String getText(int pageNo) throws Throwable {

        URL url = new URL("http://www.lnprice.gov.cn/wjjc/jgjc/ReportByDateOfPivot.aspx?PriceBureauMainType_Id=101&YM=201502&DP=28");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        conn.setRequestMethod("POST");

        conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko");

        conn.setDoOutput(true);

        conn.connect();
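        // Form data for the ASP.NET UpdatePanel postback; __EVENTARGUMENT (appended at the end) carries the page number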

        String body = "ctl00%24cphMainFrame%24ScriptManager1=ctl00%24cphMainFrame%24UpdatePanel1%7Cctl00%24cphMainFrame%24aspnetpager1&ctl00%24cphMainFrame%24ddlYear=2015&ctl00%24cphMainFrame%24ddlMonth=02&ctl00%24cphMainFrame%24ddlTimePoint=28&__EVENTTARGET=ctl00%24cphMainFrame%24aspnetpager1&__LASTFOCUS=&__VIEWSTATE=%2FwEPDwULLTEwNTcyNDc4NjkPZBYCZg9kFgICAQ9kFgQCAQ8WAh4LXyFJdGVtQ291bnQCCxYWZg9kFgJmDxUCAzEwMQzlhpzlia%2Fkuqflk4FkAgEPZBYCZg8VAgMxMDcP5bel5Lia5raI6LS55ZOBZAICD2QWAmYPFQIDMTA4EuW3peS4mueUn%2BS6p%2Bi1hOaWmWQCAw9kFgJmDxUCAzEwORjln47luILlsYXmsJHmnI3liqHku7fmoLxkAgQPZBYCZg8VAgMxMTAY5Yac5p2R5bGF5rCR5pyN5Yqh5Lu35qC8ZAIFD2QWAmYPFQIDMTExDOa2ieWGnOS6p%2BWTgWQCBg9kFgJmDxUCAzEwMhwyMDEz5bm05Lul5YmN5bel5Lia5raI6LS55ZOBZAIHD2QWAmYPFQIDMTAzHzIwMTPlubTku6XliY3lt6XkuJrnlJ%2FkuqfotYTmlplkAggPZBYCZg8VAgMxMDQZMjAxM%2BW5tOS7peWJjeacjeWKoeS7t%2BagvGQCCQ9kFgJmDxUCAzEwNR8yMDEz5bm05Lul5YmN5Yac5Lia55Sf5Lqn6LWE5paZZAIKD2QWAmYPFQIDMTA2GTIwMTPlubTku6XliY3mtonlhpzkuqflk4FkAgMPZBYQAgEPDxYCHgRUZXh0BQzlhpzlia%2Fkuqflk4FkZAIDDw8WBB8BBVw8c3BhbiBzdHlsZT0ibWFyZ2luLWxlZnQ6MjBweDsiICBjbGFzcz0ibXNqZ19jaGF4dW5feHhrMV9iZzAwIiA%2B5oyJ5YiG57G75YWo55yB5p%2Bl6K%2BiPC9zcGFuPh4LTmF2aWdhdGVVcmwFM1JlcG9ydEJ5RGF0ZU9mUGl2b3QuYXNweD9QcmljZUJ1cmVhdU1haW5UeXBlX0lkPTEwMWRkAgQPDxYEHwEFQTxzcGFuIGNsYXNzPSJtc2pnX2NoYXh1bl94eGsxX2JnMTEiID7mjInllYblk4HliIbluILmn6Xor6I8L3NwYW4%2BHwIFNVJlcG9ydEdvb2RzSW5mb0J5Q2l0eS5hc3B4P1ByaWNlQnVyZWF1TWFpblR5cGVfSWQ9MTAxZGQCBQ8PFgQfAQVCPHNwYW4gY2xhc3M9Im1zamdfY2hheHVuX3h4azFfYmcxMSIgPuaMieaXtumXtOWIhuW4guafpeivoiA8L3NwYW4%2BHwIFMFJlcG9ydEluZm9ieVRpbWUuYXNweD9QcmljZUJ1cmVhdU1haW5UeXBlX0lkPTEwMWRkAgcPEA8WBh4NRGF0YVRleHRGaWVsZAUKQ3JlYXRlWWVhch4ORGF0YVZhbHVlRmllbGQFCkNyZWF0ZVllYXIeC18hRGF0YUJvdW5kZ2QQFRAEMjAwMAQyMDAxBDIwMDIEMjAwMwQyMDA0BDIwMDUEMjAwNgQyMDA3BDIwMDgEMjAwOQQyMDEwBDIwMTEEMjAxMgQyMDEzBDIwMTQEMjAxNRUQBDIwMDAEMjAwMQQyMDAyBDIwMDMEMjAwNAQyMDA1BDIwMDYEMjAwNwQyMDA4BDIwMDkEMjAxMAQyMDExBDIwMTIEMjAxMwQyMDE0BDIwMTUUKwMQZ2dnZ2dnZ2dnZ2dnZ2dnZxYBAg9kAgkPEGRkFgECAWQCCw8QDxYCHwVnZBAVBAnor7fpgInmi6kCMDUCMTUCMjUVBAEwAjI4AjI5AjMwFCsDBGdnZ2dkZAIPDxYCHgdWaXNpYmxlZxYCZg9kFgJmD2QWAgIDD2QWAmYPZBYCAgMPFgIfBmcWAmYPZBYCZg9kFgICAQ8PFgYeCFBhZ2VTaXplAhQeEEN1cnJlbnRQYWdlSW5kZXgCAh4LUmVjb3JkY291bnQCL2RkZM%2FO1WQW50DLN7G3eiSyS6q2rewQ&__EVENTVALIDATION=%2FwEWJAKb97l9ArjilMkFApDM2c4FApDMreUCApDMsZgLApDMhT8CkMzp0wgCkMz99gECkMzBrQ4CkMzVwAYCkMz5KQKQzM3MCAL79f%2FVDwL79cOIBAL79devDQL79bvCBQL79Y%2F5AgL79ZOcCwLWm967DgLG9LjWAgLG9LzWAgLG9IDWAgLG9ITWAgLG9IjWAgLG9IzWAgLG9JDWAgLG9NTVAgLG9NjVAgLZ9LTWAgLZ9LjWAgLZ9LzWAgLSx8%2BzDgLMx%2B%2BzDgLMx%2BOzDgLPx4%2BwDgLy%2BZrvCEFw0vATX2wSsTwyj9sMOqdXBRc0&__ASYNCPOST=true&__EVENTARGUMENT=" + pageNo;

        conn.getOutputStream().write(body.getBytes());

        byte[] buff = new byte[4096];
        int count;

        ByteArrayOutputStream out = new ByteArrayOutputStream(4096);
        InputStream in = conn.getInputStream();

        while((count = in.read(buff)) != -1) {
            out.write(buff, 0, count);
        }

        conn.disconnect();

        return out.toString("UTF-8");

    }

}

Solution 1

The code itself is fine; the problem is probably that the site's data changes in real time, so the captured form parameters go stale.
Try this value for the body string instead; it was captured just a moment ago (if it does not work, grab a newer capture and try again):

 String body = "ctl00%24cphMainFrame%24ScriptManager1=ctl00%24cphMainFrame%24UpdatePanel1%7Cctl00%24cphMainFrame%24aspnetpager1&ctl00%24cphMainFrame%24ddlYear=2015&ctl00%24cphMainFrame%24ddlMonth=02&ctl00%24cphMainFrame%24ddlTimePoint=28&__EVENTTARGET=ctl00%24cphMainFrame%24aspnetpager1&__LASTFOCUS=&__VIEWSTATE=%2FwEPDwULLTEwNTcyNDc4NjkPZBYCZg9kFgICAQ9kFgQCAQ8WAh4LXyFJdGVtQ291bnQCCxYWZg9kFgJmDxUCAzEwMQzlhpzlia%2Fkuqflk4FkAgEPZBYCZg8VAgMxMDcP5bel5Lia5raI6LS55ZOBZAICD2QWAmYPFQIDMTA4EuW3peS4mueUn%2BS6p%2Bi1hOaWmWQCAw9kFgJmDxUCAzEwORjln47luILlsYXmsJHmnI3liqHku7fmoLxkAgQPZBYCZg8VAgMxMTAY5Yac5p2R5bGF5rCR5pyN5Yqh5Lu35qC8ZAIFD2QWAmYPFQIDMTExDOa2ieWGnOS6p%2BWTgWQCBg9kFgJmDxUCAzEwMhwyMDEz5bm05Lul5YmN5bel5Lia5raI6LS55ZOBZAIHD2QWAmYPFQIDMTAzHzIwMTPlubTku6XliY3lt6XkuJrnlJ%2FkuqfotYTmlplkAggPZBYCZg8VAgMxMDQZMjAxM%2BW5tOS7peWJjeacjeWKoeS7t%2BagvGQCCQ9kFgJmDxUCAzEwNR8yMDEz5bm05Lul5YmN5Yac5Lia55Sf5Lqn6LWE5paZZAIKD2QWAmYPFQIDMTA2GTIwMTPlubTku6XliY3mtonlhpzkuqflk4FkAgMPZBYQAgEPDxYCHgRUZXh0BQzlhpzlia%2Fkuqflk4FkZAIDDw8WBB8BBVw8c3BhbiBzdHlsZT0ibWFyZ2luLWxlZnQ6MjBweDsiICBjbGFzcz0ibXNqZ19jaGF4dW5feHhrMV9iZzAwIiA%2B5oyJ5YiG57G75YWo55yB5p%2Bl6K%2BiPC9zcGFuPh4LTmF2aWdhdGVVcmwFM1JlcG9ydEJ5RGF0ZU9mUGl2b3QuYXNweD9QcmljZUJ1cmVhdU1haW5UeXBlX0lkPTEwMWRkAgQPDxYEHwEFQTxzcGFuIGNsYXNzPSJtc2pnX2NoYXh1bl94eGsxX2JnMTEiID7mjInllYblk4HliIbluILmn6Xor6I8L3NwYW4%2BHwIFNVJlcG9ydEdvb2RzSW5mb0J5Q2l0eS5hc3B4P1ByaWNlQnVyZWF1TWFpblR5cGVfSWQ9MTAxZGQCBQ8PFgQfAQVCPHNwYW4gY2xhc3M9Im1zamdfY2hheHVuX3h4azFfYmcxMSIgPuaMieaXtumXtOWIhuW4guafpeivoiA8L3NwYW4%2BHwIFMFJlcG9ydEluZm9ieVRpbWUuYXNweD9QcmljZUJ1cmVhdU1haW5UeXBlX0lkPTEwMWRkAgcPEA8WBh4NRGF0YVRleHRGaWVsZAUKQ3JlYXRlWWVhch4ORGF0YVZhbHVlRmllbGQFCkNyZWF0ZVllYXIeC18hRGF0YUJvdW5kZ2QQFRAEMjAwMAQyMDAxBDIwMDIEMjAwMwQyMDA0BDIwMDUEMjAwNgQyMDA3BDIwMDgEMjAwOQQyMDEwBDIwMTEEMjAxMgQyMDEzBDIwMTQEMjAxNRUQBDIwMDAEMjAwMQQyMDAyBDIwMDMEMjAwNAQyMDA1BDIwMDYEMjAwNwQyMDA4BDIwMDkEMjAxMAQyMDExBDIwMTIEMjAxMwQyMDE0BDIwMTUUKwMQZ2dnZ2dnZ2dnZ2dnZ2dnZxYBAg9kAgkPEGRkFgECAWQCCw8QDxYCHwVnZBAVBAnor7fpgInmi6kCMDUCMTUCMjUVBAEwAjI4AjI5AjMwFCsDBGdnZ2dkZAIPDxYCHgdWaXNpYmxlZxYCZg9kFgJmD2QWAgIDD2QWAmYPZBYCAgMPFgIfBmcWAmYPZBYCZg9kFgICAQ8PFgYeCFBhZ2VTaXplAhQeEEN1cnJlbnRQYWdlSW5kZXgCAh4LUmVjb3JkY291bnQCL2RkZMVUAu8HLwRzj1xEKpBi8MSr0fYD&__EVENTVALIDATION=%2FwEWJAKS79jeCgK44pTJBQKQzNnOBQKQzK3lAgKQzLGYCwKQzIU%2FApDM6dMIApDM%2FfYBApDMwa0OApDM1cAGApDM%2BSkCkMzNzAgC%2B%2FX%2F1Q8C%2B%2FXDiAQC%2B%2FXXrw0C%2B%2FW7wgUC%2B%2FWP%2BQIC%2B%2FWTnAsC1pveuw4CxvS41gICxvS81gICxvSA1gICxvSE1gICxvSI1gICxvSM1gICxvSQ1gICxvTU1QICxvTY1QIC2fS01gIC2fS41gIC2fS81gIC0sfPsw4CzMfvsw4CzMfjsw4Cz8ePsA4C8vma7whOBwA2O0BJTn5kLqZv1C98W2UbZQ%3D%3D&__ASYNCPOST=true&&__EVENTARGUMENT=" + pageNo;
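A related note, going one step further than the answer above: __VIEWSTATE and __EVENTVALIDATION are regenerated by the server, which is why a hard-coded body string stops working after a while. Instead of pasting in a fresh capture by hand, the current tokens can be scraped from the hidden form fields of a plain GET and spliced into the POST body. The sketch below is only an illustration: the class and method names are made up here, and it assumes the tokens sit in the standard hidden <input> fields named __VIEWSTATE and __EVENTVALIDATION.

import java.net.URLEncoder;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class ViewStateFetcher {

    // Hypothetical helper: read the value attribute of a form field selected by name.
    static String hiddenField(Document page, String name) {
        return page.select("input[name=" + name + "]").attr("value");
    }

    public static void main(String[] args) throws Exception {
        String pageUrl = "http://www.lnprice.gov.cn/wjjc/jgjc/ReportByDateOfPivot.aspx"
                + "?PriceBureauMainType_Id=101&YM=201502&DP=28";

        // Plain GET of the page; ASP.NET embeds the current tokens as hidden inputs.
        Document page = Jsoup.connect(pageUrl).get();

        // URL-encode the tokens so they can replace the hard-coded __VIEWSTATE /
        // __EVENTVALIDATION values in the form-encoded POST body.
        String viewState = URLEncoder.encode(hiddenField(page, "__VIEWSTATE"), "UTF-8");
        String eventValidation = URLEncoder.encode(hiddenField(page, "__EVENTVALIDATION"), "UTF-8");

        System.out.println("__VIEWSTATE=" + viewState);
        System.out.println("__EVENTVALIDATION=" + eventValidation);
    }
}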

Solution 2:

I wrote this quickly, but it is tested and working; use it as a reference:
package com.kukio.jsoup;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Product {

    public List<Map<String, String>> getProductInfo(String url) throws IOException {
        Document doc = Jsoup.connect(url).get();
        // Select every <td> that carries an id attribute; the cell ids encode the row number and column
        Elements ele = doc.select("td[id]");

        List<Map<String, String>> data = new ArrayList<Map<String, String>>();
        for (int i = 1; i < 21; i++) {
            // One map per product row; rows are numbered 1..20 in the cell ids
            data.add(getFoodInfo(i, ele));
        }
        return data;
    }

    private Map<String, String> getFoodInfo(int k, Elements ele) {
        // Map keys: 分类 = category, 名称 = name, 品牌 = brand, 单位 = unit, 平均价格 = average price
        Map<String, String> info = new HashMap<String, String>();
        int j = 0;
        for (Element m : ele) {
            String value = m.attr("id");
            if (value.equals("ctl00_cphMainFrame_td" + k + "SecondType")) {
                info.put("分类", m.text());
            } else if (value.equals("ctl00_cphMainFrame_td" + k + "TypeName")) {
                info.put("名称", m.text());
            } else if (value.equals("ctl00_cphMainFrame_td" + k + "GoodsTypeName")) {
                // Two cells share this id pattern: the first holds the brand, the second the unit
                if (j == 0) {
                    info.put("品牌", m.text());
                } else if (j == 1) {
                    info.put("单位", m.text());
                }
                j++;
            }
            if (value.equals("ctl00_cphMainFrame_td" + k + "AvevageValue")) {
                info.put("平均价格", m.text());
            }
        }
        return info;
    }

}

Test:
package com.kukio.jsoup;

import java.io.IOException;
import java.util.List;
import java.util.Map;

public class Test {

    public static void main(String[] args) throws IOException {
        String url = "http://www.lnprice.gov.cn/wjjc/jgjc/ReportByDateOfPivot.aspx?PriceBureauMainType_Id=101&YM=201502&DP=28";
        Product pro = new Product();
        List<Map<String, String>> list = pro.getProductInfo(url);
        for (int i = 0; i < list.size(); i++) {
            Map<String, String> map = list.get(i);
            System.out.println(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>");
            System.out.println("<<< " + i);
            System.out.println("分类: " + map.get("分类"));
            System.out.println("名称: " + map.get("名称"));
            System.out.println("品牌: " + map.get("品牌"));
            System.out.println("单位: " + map.get("单位"));
            System.out.println("平均价格: " + map.get("平均价格"));
            System.out.println("************************************");
        }
    }

}



Solution 3:

In Chrome, press F12 to open the developer tools, click the next-page button, and then look at the form data inside the request information shown on the Network tab; that form data is your body parameter. Other browsers have developer tools as well.
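For reference, once the form fields have been captured this way, the request can also be sent with jsoup's own Connection API instead of a raw HttpURLConnection, which keeps the request and the parsing in one place. This is only a sketch, not a tested drop-in: the class and method names are made up, the field names are the URL-decoded keys from the body string in the question, and viewState / eventValidation are assumed to be freshly captured (or scraped) values, as discussed in Solution 1.

import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class JsoupPostSketch {

    // Post the pager request for one page and parse the reply with jsoup.
    static Document loadPage(int pageNo, String viewState, String eventValidation) throws IOException {
        String url = "http://www.lnprice.gov.cn/wjjc/jgjc/ReportByDateOfPivot.aspx"
                + "?PriceBureauMainType_Id=101&YM=201502&DP=28";

        return Jsoup.connect(url)
                .userAgent("Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko")
                // URL-decoded field names taken from the captured body string
                .data("ctl00$cphMainFrame$ScriptManager1",
                        "ctl00$cphMainFrame$UpdatePanel1|ctl00$cphMainFrame$aspnetpager1")
                .data("ctl00$cphMainFrame$ddlYear", "2015")
                .data("ctl00$cphMainFrame$ddlMonth", "02")
                .data("ctl00$cphMainFrame$ddlTimePoint", "28")
                .data("__EVENTTARGET", "ctl00$cphMainFrame$aspnetpager1")
                .data("__EVENTARGUMENT", String.valueOf(pageNo))
                .data("__LASTFOCUS", "")
                .data("__VIEWSTATE", viewState)
                .data("__EVENTVALIDATION", eventValidation)
                .data("__ASYNCPOST", "true")
                // the UpdatePanel reply may come back as text/plain rather than text/html
                .ignoreContentType(true)
                .post(); // jsoup form-encodes the fields and parses the response
    }
}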

Date: 2024-09-19 10:16:18
