Problem description
Our company has a project that needs full-text search over log files, and we plan to build it with Solr. We have hit a problem: the log files are fairly large, typically a dozen or so megabytes, and the biggest ones have reached several hundred megabytes. We put the log content into a content field of type string, but when we submit the document to the server we get an OOM (OutOfMemoryError). How can we solve this?
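For reference, the failing approach presumably looks roughly like the minimal sketch below (assuming SolrJ with the old CommonsHttpSolrServer client; the file path, core URL, and field names are placeholders). Reading the whole log into one String and adding it to a single SolrInputDocument means the client has to hold the entire file, plus SolrJ's encoding of the update request, in the heap at once:

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class NaiveLogIndexer {
    public static void main(String[] args) throws Exception {
        SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "app.log");
        // The whole log file is materialized as one String here; for a file of
        // several hundred MB this is what exhausts the heap.
        String content = new String(
                Files.readAllBytes(new File("c:/logs/app.log").toPath()),
                StandardCharsets.UTF_8);
        doc.addField("content", content);

        solr.add(doc);
        solr.commit();
    }
}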
Solution 1:
Push the file to the /update/extract handler (Solr Cell, i.e. the ExtractingRequestHandler) as a content stream instead of embedding its content in a client-side document field:

import java.io.File;
import java.io.IOException;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SolrCellIndexer {

    public static void main(String[] args) {
        try {
            // Solr Cell can also index MS file (2003 version and 2007 version) types.
            String fileName = "c:/Sample.pdf";
            // This will be the unique id used by Solr to index the file contents.
            String solrId = "Sample.pdf";
            indexFilesSolrCell(fileName, solrId);
        } catch (Exception ex) {
            System.out.println(ex.toString());
        }
    }

    /**
     * Method to index all types of files into Solr.
     * @param fileName
     * @param solrId
     * @throws IOException
     * @throws SolrServerException
     */
    public static void indexFilesSolrCell(String fileName, String solrId)
            throws IOException, SolrServerException {
        String urlString = "http://localhost:8983/solr";
        SolrServer solr = new CommonsHttpSolrServer(urlString);

        // Send the file to the extracting request handler as a content stream.
        ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
        up.addFile(new File(fileName));
        up.setParam("literal.id", solrId);
        up.setParam("uprefix", "attr_");
        up.setParam("fmap.content", "attr_content");
        up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
        solr.request(up);

        // Quick sanity check: query everything back and print the response.
        QueryResponse rsp = solr.query(new SolrQuery("*:*"));
        System.out.println(rsp);
    }
}
Solution 2:
Skip Solr and use Lucene directly. Write your own index-building and search programs; a sketch follows below.
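A minimal sketch of that route, assuming a recent Lucene (5+); the index directory, file path, and field names are placeholders. The key point is to hand Lucene a Reader, so the log is streamed through the analyzer instead of being loaded into a single String (a Reader-based field is indexed but not stored):

import java.io.BufferedReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class LogLuceneIndexer {
    public static void main(String[] args) throws Exception {
        try (FSDirectory dir = FSDirectory.open(Paths.get("c:/logindex"));
             IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
             BufferedReader reader = Files.newBufferedReader(
                     Paths.get("c:/logs/app.log"), StandardCharsets.UTF_8)) {

            Document doc = new Document();
            // Stored id so search results can point back to the original file.
            doc.add(new StringField("id", "app.log", Field.Store.YES));
            // Passing a Reader lets Lucene stream the content through the analyzer
            // rather than holding the whole file in memory as one String.
            doc.add(new TextField("content", reader));

            writer.addDocument(doc);
            writer.commit();
        }
    }
}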
Solution 3:
With files that large, you definitely need to process them as a stream.
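One way to do that with SolrJ (a sketch under assumptions, not the poster's code; the chunk size, field names, and id scheme are all placeholders) is to read the log line by line and index it as many smaller documents, so neither the client nor any single Solr document ever holds hundreds of megabytes. Tune LINES_PER_DOC to your heap size and query needs:

import java.io.BufferedReader;
import java.io.FileReader;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ChunkedLogIndexer {
    // Hypothetical chunk size: how many log lines go into one Solr document.
    private static final int LINES_PER_DOC = 5000;

    public static void main(String[] args) throws Exception {
        SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");
        BufferedReader reader = new BufferedReader(new FileReader("c:/logs/app.log"));
        try {
            StringBuilder chunk = new StringBuilder();
            int lineCount = 0;
            int chunkNo = 0;
            String line;
            while ((line = reader.readLine()) != null) {
                chunk.append(line).append('\n');
                if (++lineCount == LINES_PER_DOC) {
                    addChunk(solr, chunkNo++, chunk.toString());
                    chunk.setLength(0);
                    lineCount = 0;
                }
            }
            if (chunk.length() > 0) {
                addChunk(solr, chunkNo, chunk.toString());
            }
            solr.commit();
        } finally {
            reader.close();
        }
    }

    private static void addChunk(SolrServer solr, int chunkNo, String content) throws Exception {
        SolrInputDocument doc = new SolrInputDocument();
        // Hypothetical id scheme: source file name plus chunk number.
        doc.addField("id", "app.log-" + chunkNo);
        doc.addField("content", content);
        solr.add(doc);
    }
}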