Mahout: Installation, Configuration, and a Simple Test

About Mahout

Mahout is a powerful data-mining tool: a collection of distributed machine-learning algorithms, including the distributed collaborative-filtering implementation known as Taste, plus classification, clustering, and more. Mahout's biggest advantage is that it is built on Hadoop: many algorithms that used to run on a single machine have been recast as MapReduce jobs, which greatly increases both the volume of data they can handle and their processing throughput.

Hadoop

For installing Hadoop itself, see:
http://blog.csdn.net/fenglailea/article/details/53318459
风.fox

Environment

CentOS 7 server
Mahout 0.12.2 (the latest release at the time of writing)

Mahout Download

http://archive.apache.org/dist/mahout/
http://archive.apache.org/dist/mahout/0.12.2/

wget http://archive.apache.org/dist/mahout/0.12.2/apache-mahout-distribution-0.12.2.tar.gz
tar -zxvf apache-mahout-distribution-0.12.2.tar.gz

Here we move it into the Hadoop user's home directory:

mv apache-mahout-distribution-0.12.2 /home/hadoop/mahout
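A defensive variant of the download-and-extract step above is to list the archive before unpacking it, so a truncated or corrupt download fails loudly instead of half-extracting. This is my own addition, not part of the original steps, and it is demonstrated on a throwaway archive so it runs anywhere:

```shell
# Build a tiny stand-in archive so the pattern can be shown without the real download.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p demo-0.12.2/bin
printf 'echo demo\n' > demo-0.12.2/bin/mahout
tar -czf demo-0.12.2.tar.gz demo-0.12.2
rm -r demo-0.12.2

# `tar -tzf` lists the contents without extracting; it fails on a damaged file.
if tar -tzf demo-0.12.2.tar.gz > /dev/null 2>&1; then
    tar -xzf demo-0.12.2.tar.gz
    echo "archive ok, extracted"
else
    echo "archive corrupt, re-download" >&2
fi
```

For the real release, the same `tar -tzf` check would go between the wget and the `tar -zxvf` above.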

Setting the Mahout Environment Variables

Set them system-wide in /etc/bashrc, or per-user in ~/.bashrc. Here we use the current user:

vim ~/.bashrc

Mahout environment variables:

export MAHOUT_HOME=/home/hadoop/mahout
export MAHOUT_CONF_DIR=$MAHOUT_HOME/conf
export PATH=$MAHOUT_HOME/conf:$MAHOUT_HOME/bin:$PATH

Hadoop environment variables:

export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_HOME_WARN_SUPPRESS=not_null

Apply the environment variables:

. ~/.bashrc
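After sourcing ~/.bashrc, it is worth confirming that each variable actually points at an existing directory before going further. The `check_dir` helper below is my own convenience function, not a Mahout or Hadoop tool:

```shell
# check_dir VAR: print whether the directory named by environment variable VAR exists.
check_dir() {
    eval "dir=\$$1"
    if [ -n "$dir" ] && [ -d "$dir" ]; then
        echo "$1 ok: $dir"
    else
        echo "$1 missing or not a directory"
    fi
}

for var in MAHOUT_HOME MAHOUT_CONF_DIR HADOOP_HOME HADOOP_CONF_DIR; do
    check_dir "$var"
done
```

Any "missing" line means the corresponding export above was not sourced or points at the wrong path.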

Check whether the installation succeeded:

mahout

If a command list like the following appears, the installation succeeded:

arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
  cleansvd: : Cleanup and verification of SVD output
  clusterdump: : Dump cluster output to text
  clusterpp: : Groups Clustering Output In Clusters
  cmdump: : Dump confusion matrix in HTML or text formats
  cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
  cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
  describe: : Describe the fields and target variable in a data set
  evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
  fkmeans: : Fuzzy K-means clustering
  hmmpredict: : Generate random sequence of observations by given HMM
  itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
  kmeans: : K-means clustering
  lucene.vector: : Generate Vectors from a Lucene index
  matrixdump: : Dump matrix in CSV format
  matrixmult: : Take the product of two matrices
  parallelALS: : ALS-WR factorization of a rating matrix
  qualcluster: : Runs clustering experiments and summarizes results in a CSV
  recommendfactorized: : Compute recommendations using the factorization of a rating matrix
  recommenditembased: : Compute recommendations using item-based collaborative filtering
  regexconverter: : Convert text files on a per line basis based on regular expressions
  resplit: : Splits a set of SequenceFiles into a number of equal splits
  rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
  rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
  runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
  runlogistic: : Run a logistic regression model against CSV data
  seq2encoded: : Encoded Sparse Vector generation from Text sequence files
  seq2sparse: : Sparse Vector generation from Text sequence files
  seqdirectory: : Generate sequence files (of Text) from a directory
  seqdumper: : Generic Sequence File dumper
  seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
  seqwiki: : Wikipedia xml dump to sequence file
  spectralkmeans: : Spectral k-means clustering
  split: : Split Input data into test and train sets
  splitDataset: : split a rating dataset into training and probe parts
  ssvd: : Stochastic SVD
  streamingkmeans: : Streaming k-means clustering
  svd: : Lanczos Singular Value Decomposition
  testnb: : Test the Vector-based Bayes classifier
  trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
  trainlogistic: : Train a logistic regression using stochastic gradient descent
  trainnb: : Train the Vector-based Bayes classifier
  transpose: : Take the transpose of a matrix
  validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
  vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
  vectordump: : Dump vectors from a sequence file to text
  viterbi: : Viterbi decoding of hidden states from given output states sequence

Mahout and Hadoop Integration Test

First, Hadoop must be installed and running; see:

http://blog.csdn.net/fenglailea/article/details/53318459

Download the test data

http://archive.ics.uci.edu/ml/databases/synthetic_control/

wget http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
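Each line of synthetic_control.data is one control-chart time series of 60 space-separated values, and the file holds 600 such lines. A quick awk check confirms the format; it is shown here on a two-line stand-in sample so it runs without the download:

```shell
# Two-line stand-in for synthetic_control.data (real lines have 60 values each).
cat > sample.data <<'EOF'
28.7812 34.4632 31.3381 31.2834
24.8923 25.7410 27.5532 32.8217
EOF
# Print the line number and field count for every line.
awk '{ print "line " NR ": " NF " fields" }' sample.data
```

On the real file, every line should report 60 fields, and `wc -l synthetic_control.data` should report 600.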

Upload the test data to HDFS:

hadoop fs -mkdir -p ./testdata
hadoop fs -put synthetic_control.data ./testdata

List the directory and its files:

hadoop fs -ls
hadoop fs -ls ./testdata

Run a test with Mahout's k-means clustering algorithm:

mahout -core  org.apache.mahout.clustering.syntheticcontrol.kmeans.Job

When the job finishes, the last few lines of output look like this:

        1.0 : [distance=55.039831561905785]: [33.67,38.675,39.742,41.989,37.291,43.975,31.909,25.878,31.08,15.858,13.95,23.097,19.983,21.692,31.579,38.57,33.376,38.843,41.936,33.534,39.195,32.897,25.343,18.523,15.089,17.771,22.614,25.313,23.687,29.01,41.995,35.712,40.872,41.669,32.156,25.162,24.98,23.705,18.413,20.975,14.906,26.171,30.165,27.818,35.083,39.514,37.851,33.967,32.338,34.977,26.589,28.079,19.597,24.669,23.098,25.685,28.215,34.94,36.91,39.749]
16/11/24 16:47:52 INFO ClusterDumper: Wrote 6 clusters
16/11/24 16:47:52 INFO MahoutDriver: Program took 22175 ms (Minutes: 0.3695833333333333) 
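As a quick sanity check on the MahoutDriver log line above, 22175 ms divided by 60000 is indeed about 0.3696 minutes:

```shell
# Convert the reported wall time from milliseconds to minutes.
awk 'BEGIN { printf "%.4f minutes\n", 22175 / 60000 }'
# prints: 0.3696 minutes
```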

Inspect the output:

hadoop fs -ls ./output
Found 15 items
-rw-r--r--   1 hadoop supergroup        194 2016-11-24 16:47 output/_policy
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusteredPoints
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-0
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-1
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-10-final
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-2
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-3
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-4
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-5
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-6
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-7
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-8
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/clusters-9
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/data
drwxr-xr-x   - hadoop supergroup          0 2016-11-24 16:47 output/random-seeds

View the data:

mahout vectordump -i ./output/data/part-m-00000
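Besides vectordump, the clusterdump tool from the command table above can render the final clusters as readable text. The flags below follow common Mahout 0.x usage but are not taken from this article; confirm them with `mahout clusterdump --help` on your install before running:

```shell
# Assumed invocation: dump the final clusters and their points to a text file.
cmd="mahout clusterdump -i ./output/clusters-10-final -p ./output/clusteredPoints -o clusters.txt"
echo "$cmd"
# eval "$cmd"   # uncomment on a machine with Mahout installed and the HDFS output above
```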

See also:
http://itindex.net/detail/51681-mahout
http://blog.csdn.net/wind520/article/details/38851367

