Spark Machine Learning 6: Clustering Models (spark-shell)


Spark Machine Learning

  • K-means clustering. Goal: minimize the sum of within-cluster variances over all clusters (see the formula after this list)
    • within-cluster sum of squared errors (WCSS)
    • fuzzy K-means
  • Hierarchical clustering
    • agglomerative clustering
    • divisive clustering
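
For reference, with clusters $C_1, \dots, C_K$ and cluster centers $\mu_k$, the K-means objective is

$$\mathrm{WCSS} = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2$$

This is the quantity MLlib's computeCost returns in section 4 below.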

0 Environment

cd $SPARK_HOME
bin/spark-shell --name my_mlib --packages org.jblas:jblas:1.2.4 --driver-memory 4G --executor-memory 4G --driver-cores 2
import org.apache.spark.mllib.recommendation.ALS
import org.apache.spark.mllib.recommendation.Rating
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.clustering.KMeans
import breeze.linalg._
import breeze.numerics.pow

1 Feature extraction

val PATH = "/Users/erichan/sourcecode/book/Spark机器学习"
val movies = sc.textFile(PATH+"/ml-100k/u.item")

println(movies.first)

1|Toy Story (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Toy%20Story%20(1995)|0|0|0|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0
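
Each line of u.item holds the movie ID, title, release date, video release date, and IMDb URL, followed by 19 binary genre flags (fields 5 onwards); the flags are what we extract below.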

Extracting the genre labels

val genres = sc.textFile(PATH+"/ml-100k/u.genre")
genres.take(5).foreach(println)

unknown|0
Action|1
Adventure|2
Animation|3
Children's|4

val genreMap = genres.filter(!_.isEmpty).map(line => line.split("\\|")).map(array => (array(1), array(0))).collectAsMap
println(genreMap)

Map(2 -> Adventure, 5 -> Comedy, 12 -> Musical, 15 -> Sci-Fi, 8 -> Drama, 18 -> Western, 7 -> Documentary, 17 -> War, 1 -> Action, 4 -> Children's, 11 -> Horror, 14 -> Romance, 6 -> Crime, 0 -> unknown, 9 -> Fantasy, 16 -> Thriller, 3 -> Animation, 10 -> Film-Noir, 13 -> Mystery)

val titlesAndGenres = movies.map(_.split("\\|")).map { array =>
    // fields 5 onwards are the 19 binary genre flags
    val genres = array.toSeq.slice(5, array.size)
    val genresAssigned = genres.zipWithIndex.filter { case (g, idx) =>
        g == "1"
    }.map { case (g, idx) =>
        genreMap(idx.toString)
    }
    // key by movie ID; the value is (title, assigned genre names)
    (array(0).toInt, (array(1), genresAssigned))
}
println(titlesAndGenres.first)

(1,(Toy Story (1995),ArrayBuffer(Animation, Children's, Comedy)))

Training the recommendation model

The movie and user factor vectors learned by ALS will serve as the features we cluster.

val rawData = sc.textFile(PATH+"/ml-100k/u.data")
val rawRatings = rawData.map(_.split("\t").take(3))
val ratings = rawRatings.map{ case Array(user, movie, rating) => Rating(user.toInt, movie.toInt, rating.toDouble) }
ratings.cache
val alsModel = ALS.train(ratings, 50, 10, 0.1) // rank = 50, iterations = 10, lambda = 0.1

val movieFactors = alsModel.productFeatures.map { case (id, factor) => (id, Vectors.dense(factor)) }
val movieVectors = movieFactors.map(_._2)
val userFactors = alsModel.userFeatures.map { case (id, factor) => (id, Vectors.dense(factor)) }
val userVectors = userFactors.map(_._2)

Normalization

Before clustering, inspect the column statistics of the factor matrices to see whether the features need normalizing.

val movieMatrix = new RowMatrix(movieVectors)
val movieMatrixSummary = movieMatrix.computeColumnSummaryStatistics()
val userMatrix = new RowMatrix(userVectors)
val userMatrixSummary = userMatrix.computeColumnSummaryStatistics()

println("Movie factors mean: " + movieMatrixSummary.mean)
println("Movie factors variance: " + movieMatrixSummary.variance)
println("User factors mean: " + userMatrixSummary.mean)
println("User factors variance: " + userMatrixSummary.variance)

Movie factors mean: [-0.2955253575969453,-0.2894017158566661,-0.2319822953560126,0.002917648331681182,0.16553261386128745,-0.21992534888550966,-0.03380127825698873,-0.20603088790834398,-0.15619861138444532,-0.028497688228493936,0.16963530616257805,0.14388067884376599,-0.0017092576491059591,-0.09626837303920982,-0.06064127207772349,-0.06045518556672421,0.18751923345914,0.2399624456229244,0.26532560070303446,0.05541910564428427,-0.015674971004015527,0.011168436718639107,-0.04741294377492476,0.11574693735017375,0.1289987655696671,0.44134038441588025,-0.5900688729554584,-0.03768358034266212,0.008887881298347921,0.20425041421871237,-0.20022602485759528,0.2697605004694663,0.10361325058554109,0.210277185021123,-0.22259636797095098,0.1174637780755839,-0.13688720440722232,0.03767713022869551,-0.0558405163043045,-0.12431407617904076,-0.046046222769634326,-0.20808223343120555,-0.3272035383525689,-0.2069514616509938,-0.0754149005227642,0.0856900404902959,0.06164062157888312,-0.06518672356795488,-0.32742867325628294,0.20285276122166002]

Movie factors variance: [0.04665858952268178,0.0348699710379137,0.03579479789217705,0.029901233287017822,0.03584001448747631,0.030266373892482327,0.03305718879524182,0.02686945635500392,0.025320493042287756,0.024438753389466123,0.02575390293411112,0.028972744965903668,0.02827972104601086,0.033613911928577246,0.033662480558315735,0.02833746789842838,0.02994391891556463,0.04394221701123749,0.03435422169469965,0.03561654218653061,0.02682492748548697,0.029604704664741063,0.024673702738648908,0.030823597518982247,0.028442111141151614,0.03613743157595084,0.05449475590178096,0.042012621165520236,0.028896894307921802,0.033241681676696215,0.02633619851965672,0.035711481477364235,0.025481764774248593,0.028764828375131987,0.0272530758482775,0.029673397451581197,0.03148963131813852,0.03622387708462999,0.02816170323774573,0.033017372289155716,0.028641152942670445,0.02904189221086495,0.030234076747492195,0.04509970117296679,0.029449883713724593,0.02756067270740738,0.04139144468263727,0.030245838006703486,0.03131689738936245,0.03378427027186054]

User factors mean: [-0.49867551877683824,-0.44459915531291566,-0.36051481893169574,0.017151848798776233,0.3213603826583396,-0.3196901675619378,-0.07224328943358119,-0.29041744434669287,-0.22727507102332345,-0.03178720415880569,0.25862894293461186,0.22888402894019788,0.012327199821030293,-0.14990885838046697,-0.1281515413333295,-0.09431829455241085,0.2679196025618735,0.38335691552119355,0.34604572069945905,0.11174974992685119,-0.02180706957147866,0.005610012764397019,-0.08491397316018835,0.20231176194774866,0.17161396689284497,0.6398163397864598,-0.8673987425745228,-0.10283010351171737,0.028330477167842844,0.30443187793692406,-0.301912604145753,0.4138735923728453,0.2256847560401456,0.285848070636566,-0.2605794171061,0.22449121780469036,-0.23998269543836812,0.036814175516996,-0.0679476059994798,-0.14427917258340417,-0.0833994179810923,-0.3582564875155623,-0.4564982359022274,-0.358039104184582,-0.12317214145750788,0.15037235678650748,0.06053431528961892,-0.06831269426575506,-0.5051800522709825,0.3151860279443428]

User factors variance: [0.04443021235777847,0.03057049227554263,0.03697914530468325,0.037294292401115044,0.035107560835514376,0.03456306589890253,0.03582898189508532,0.028049150877918694,0.032265557628909904,0.033678972590911474,0.03107771568048393,0.03456737756860466,0.035184013404102404,0.04264936219513472,0.04120372326054623,0.03364277736735525,0.040292531435941865,0.04006060147670186,0.03950365342886879,0.04560154697337463,0.030231562691714647,0.041120732342916626,0.03118953330313852,0.03508187607535198,0.03228272297984499,0.03603017959168009,0.04917534366846078,0.059425007832722164,0.03161224197770566,0.04211986001194535,0.02891350391303218,0.05259534335774597,0.03483271651803892,0.040489027307905476,0.03125884956067426,0.0379774604293261,0.035875980098136084,0.043509576391072786,0.03338290356822281,0.03675372599031079,0.03379511912889908,0.02951817116168268,0.0430380317818896,0.04214608566562065,0.03376833767379957,0.0314188022932176,0.048481326691437995,0.03724671278315033,0.034103714500646594,0.046064657833824844]
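
The means and variances are on comparable scales across all dimensions, so no normalization is applied here. If the columns did differ widely, a minimal sketch using MLlib's StandardScaler (not part of the original walkthrough) would be:

import org.apache.spark.mllib.feature.StandardScaler

// fit on the factor vectors, then rescale each to zero mean and unit variance
val scaler = new StandardScaler(withMean = true, withStd = true).fit(movieVectors)
val scaledMovieVectors = movieVectors.map(v => scaler.transform(v))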

2 Training

val numClusters = 5
val numIterations = 10
val numRuns = 3 // random restarts; KMeans.train keeps the best run
val movieClusterModel = KMeans.train(movieVectors, numClusters, numIterations, numRuns)
// the same model trained with a larger iteration budget, to ensure convergence
val movieClusterModelConverged = KMeans.train(movieVectors, numClusters, 100)
val userClusterModel = KMeans.train(userVectors, numClusters, numIterations, numRuns)
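
As a side note (a sketch, not from the original text), the same model can be configured through the KMeans builder API, which also exposes the initialization mode; MLlib defaults to k-means|| parallel initialization:

// equivalent configuration via the builder API; K_MEANS_PARALLEL is the default
val builderModel = new KMeans().
  setK(numClusters).
  setMaxIterations(numIterations).
  setInitializationMode(KMeans.K_MEANS_PARALLEL).
  run(movieVectors)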

3 Prediction

val movie1 = movieVectors.first
val movieCluster = movieClusterModel.predict(movie1)
println(movieCluster)
val predictions = movieClusterModel.predict(movieVectors)
println(predictions.take(10).mkString(","))
// squared Euclidean distance between two Breeze vectors
def computeDistance(v1: DenseVector[Double], v2: DenseVector[Double]): Double = pow(v1 - v2, 2).sum
val titlesWithFactors = titlesAndGenres.join(movieFactors)
val moviesAssigned = titlesWithFactors.map { case (id, ((title, genres), vector)) =>
    val pred = movieClusterModel.predict(vector)
    val clusterCentre = movieClusterModel.clusterCenters(pred)
    // distance from each movie's factor vector to its assigned cluster center
    val dist = computeDistance(DenseVector(clusterCentre.toArray), DenseVector(vector.toArray))
    (id, title, genres.mkString(" "), pred, dist)
}
val clusterAssignments = moviesAssigned.groupBy { case (id, title, genres, cluster, dist) => cluster }.collectAsMap

for ( (k, v) <- clusterAssignments.toSeq.sortBy(_._1)) {
    println(s"Cluster $k:")
    val m = v.toSeq.sortBy(_._5)
    println(m.take(20).map { case (_, title, genres, _, d) => (title, genres, d) }.mkString("\n"))
    println("=====\n")
}

4 Evaluating performance

Internal evaluation metrics

  • WCSS
  • Davies-Bouldin index
  • Dunn index
  • silhouette coefficient

External evaluation metrics (these compare a clustering against ground-truth labels; a small Jaccard sketch follows this list)

  • Rand measure
  • F-measure
  • Jaccard index
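
As an illustration of an external metric, here is a minimal plain-Scala sketch (not from the original text) of the Jaccard index computed over point pairs: pairs grouped together in both the predicted clustering and the ground truth, divided by pairs grouped together in at least one of the two.

// sketch: Jaccard index between a predicted clustering and ground-truth labels
def jaccardIndex(pred: Seq[Int], truth: Seq[Int]): Double = {
  val pairs = for {
    i <- pred.indices
    j <- i + 1 until pred.length
  } yield (pred(i) == pred(j), truth(i) == truth(j))
  val ss = pairs.count { case (p, t) => p && t }.toDouble // together in both
  val sd = pairs.count { case (p, t) => p ^ t }.toDouble  // together in exactly one
  if (ss + sd == 0) 0.0 else ss / (ss + sd)
}

jaccardIndex(Seq(0, 0, 1, 1), Seq(0, 0, 0, 1)) // res: Double = 0.25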

MLlib's computeCost

val movieCost = movieClusterModel.computeCost(movieVectors)
val userCost = userClusterModel.computeCost(userVectors)
println("WCSS for movies: " + movieCost)
println("WCSS for users: " + userCost)

5 Tuning

Selecting K via cross-validation. Since WCSS on the training data generally decreases as K grows, we compare values of K by computing the cost on a held-out test set.

val trainTestSplitMovies = movieVectors.randomSplit(Array(0.6, 0.4), 123)
val trainMovies = trainTestSplitMovies(0)
val testMovies = trainTestSplitMovies(1)
// KMeans.train takes (data, k, maxIterations, runs), so k must come before numIterations
val costsMovies = Seq(2, 3, 4, 5, 10, 20).map { k => (k, KMeans.train(trainMovies, k, numIterations, numRuns).computeCost(testMovies)) }
println("Movie clustering cross-validation:")
costsMovies.foreach { case (k, cost) => println(f"WCSS for K=$k is $cost%2.2f") }

WCSS for K=2 is 870.36
WCSS for K=3 is 858.28
WCSS for K=4 is 847.40
WCSS for K=5 is 840.71
WCSS for K=10 is 842.58
WCSS for K=20 is 843.24

val trainTestSplitUsers = userVectors.randomSplit(Array(0.6, 0.4), 123)
val trainUsers = trainTestSplitUsers(0)
val testUsers = trainTestSplitUsers(1)
val costsUsers = Seq(2, 3, 4, 5, 10, 20).map { k => (k, KMeans.train(trainUsers, k, numIterations, numRuns).computeCost(testUsers)) }
println("User clustering cross-validation:")
costsUsers.foreach { case (k, cost) => println(f"WCSS for K=$k is $cost%2.2f") }

WCSS for K=2 is 573.50
WCSS for K=3 is 580.33
WCSS for K=4 is 574.84
WCSS for K=5 is 575.61
WCSS for K=10 is 586.05
WCSS for K=20 is 577.01
