Machine Learning with Spark 3 · Recommendation Engine (spark-shell)



Preparing the environment (build jblas)

git clone https://github.com/mikiobraun/jblas.git
cd jblas
mvn install

Launching spark-shell

cd /Users/erichan/Garden/spark-1.5.1-bin-cdh4

bin/spark-shell --name my_mlib --packages org.jblas:jblas:1.2.4-SNAPSHOT --driver-memory 4G --executor-memory 4G --driver-cores 2

Recommendation engine

1 Extracting useful features

val PATH = "/Users/erichan/sourcecode/book/Spark机器学习"
val rawData = sc.textFile(PATH+"/ml-100k/u.data")
rawData.first()

res1: String = 196 242 3 881250949

import org.apache.spark.mllib.recommendation.ALS
import org.apache.spark.mllib.recommendation.Rating
val rawRatings = rawData.map(_.split("\t").take(3))
val ratings = rawRatings.map { case Array(user, movie, rating) => Rating(user.toInt, movie.toInt, rating.toDouble) }
ratings.first()

res2: org.apache.spark.mllib.recommendation.Rating = Rating(196,242,3.0)

2 Training the recommendation model

val model = ALS.train(ratings, 50, 10, 0.01) //rank=50, iterations=10, lambda=0.01
model.userFeatures.count

res3: Long = 943

model.productFeatures.count

res4: Long = 1682
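ALS approximates the rating matrix as the product of a user-factor matrix and an item-factor matrix, so a single predicted rating is just the dot product of the user's and the item's factor vectors. A minimal plain-Scala sketch of that prediction step, using made-up rank-3 factors instead of the model's rank-50 output:

```scala
// Toy rank-3 factor vectors (made-up values; the real model uses rank 50).
val userFactor = Array(0.5, 1.2, -0.3)
val itemFactor = Array(1.0, 0.8, 0.4)

// ALS predicts a rating as the dot product of user and item factors.
val predicted = userFactor.zip(itemFactor).map { case (u, i) => u * i }.sum
println(predicted) // = 0.5*1.0 + 1.2*0.8 + (-0.3)*0.4 ≈ 1.34
```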

3 Using the model

3.1.1 User recommendations
val predictedRating = model.predict(789, 123)

val userId = 789
val K = 10
val topKRecs = model.recommendProducts(userId, K)
println(topKRecs.mkString("\n"))

Rating(789,176,5.732688958436494)
Rating(789,201,5.682340265545152)
Rating(789,182,5.5902224300291214)
Rating(789,183,5.5877871075408585)
Rating(789,96,5.4425266495153455)
Rating(789,76,5.39730369058763)
Rating(789,195,5.356822356978749)
Rating(789,589,5.1464233861748925)
Rating(789,134,5.109287533257644)
Rating(789,518,5.106161562126567)

3.1.2 Checking the recommendations
val movies = sc.textFile(PATH+"/ml-100k/u.item")
val titles = movies.map(line => line.split("\\|").take(2)).map(array => (array(0).toInt, array(1))).collectAsMap()
titles(123)
val moviesForUser = ratings.keyBy(_.user).lookup(789)
println(moviesForUser.size)

33

moviesForUser.sortBy(-_.rating).take(10).map(rating => (titles(rating.product), rating.rating)).foreach(println)

(Godfather, The (1972),5.0)
(Trainspotting (1996),5.0)
(Dead Man Walking (1995),5.0)
(Star Wars (1977),5.0)
(Swingers (1996),5.0)
(Leaving Las Vegas (1995),5.0)
(Bound (1996),5.0)
(Fargo (1996),5.0)
(Last Supper, The (1995),5.0)
(Private Parts (1997),4.0)

topKRecs.map(rating => (titles(rating.product), rating.rating)).foreach(println)

(Aliens (1986),5.732688958436494)
(Evil Dead II (1987),5.682340265545152)
(GoodFellas (1990),5.5902224300291214)
(Alien (1979),5.5877871075408585)
(Terminator 2: Judgment Day (1991),5.4425266495153455)
(Carlito's Way (1993),5.39730369058763)
(Terminator, The (1984),5.356822356978749)
(Wild Bunch, The (1969),5.1464233861748925)
(Citizen Kane (1941),5.109287533257644)
(Miller's Crossing (1990),5.106161562126567)

3.2.1 Item recommendations
import org.jblas.DoubleMatrix
val aMatrix = new DoubleMatrix(Array(1.0, 2.0, 3.0))
def cosineSimilarity(vec1: DoubleMatrix, vec2: DoubleMatrix): Double = {
    vec1.dot(vec2) / (vec1.norm2() * vec2.norm2())
}
val itemId = 567
val itemFactor = model.productFeatures.lookup(itemId).head
val itemVector = new DoubleMatrix(itemFactor)
cosineSimilarity(itemVector, itemVector)

res10: Double = 1.0
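The same formula can be sanity-checked without jblas. A plain-Scala version of the cosine similarity (names hypothetical), run on vectors with known answers:

```scala
// Plain-Scala cosine similarity: dot(a, b) / (|a| * |b|).
def cosine(a: Array[Double], b: Array[Double]): Double = {
  val dot = a.zip(b).map { case (x, y) => x * y }.sum
  val normA = math.sqrt(a.map(x => x * x).sum)
  val normB = math.sqrt(b.map(x => x * x).sum)
  dot / (normA * normB)
}

println(cosine(Array(1.0, 2.0, 3.0), Array(1.0, 2.0, 3.0))) // identical vectors ≈ 1.0
println(cosine(Array(1.0, 0.0), Array(0.0, 1.0)))           // orthogonal vectors ≈ 0.0
```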

val sims = model.productFeatures.map{ case (id, factor) =>
    val factorVector = new DoubleMatrix(factor)
    val sim = cosineSimilarity(factorVector, itemVector)
    (id, sim)
}
val sortedSims = sims.top(K)(Ordering.by[(Int, Double), Double] { case (id, similarity) => similarity })
println(sortedSims.mkString("\n"))

(567,1.0)
(413,0.7309050775072655)
(895,0.6992030886048359)
(853,0.6960095521899471)
(219,0.6806270119940826)
(302,0.6757242121714326)
(257,0.6721490667554395)
(160,0.6672080746572076)
(563,0.6621573120106216)
(1019,0.6591520069387037)

3.2.2 Checking the recommendations
println(titles(itemId))

Wes Craven's New Nightmare (1994)

val sortedSims2 = sims.top(K + 1)(Ordering.by[(Int, Double), Double] { case (id, similarity) => similarity })
sortedSims2.slice(1, 11).map{ case (id, sim) => (titles(id), sim) }.mkString("\n")

res13: String =
(Tales from the Crypt Presents: Bordello of Blood (1996),0.7309050775072655)
(Scream 2 (1997),0.6992030886048359)
(Braindead (1992),0.6960095521899471)
(Nightmare on Elm Street, A (1984),0.6806270119940826)
(L.A. Confidential (1997),0.6757242121714326)
(Men in Black (1997),0.6721490667554395)
(Glengarry Glen Ross (1992),0.6672080746572076)
(Stephen King's The Langoliers (1995),0.6621573120106216)
(Die xue shuang xiong (Killer, The) (1989),0.6591520069387037)
(Evil Dead II (1987),0.655134288821937)

4 Evaluating model performance

4.1 Mean Squared Error (MSE)
val actualRating = moviesForUser.take(1)(0)
val predictedRating = model.predict(789, actualRating.product)
val squaredError = math.pow(predictedRating - actualRating.rating, 2.0)
val usersProducts = ratings.map{ case Rating(user, product, rating)  => (user, product)}
val predictions = model.predict(usersProducts).map{
    case Rating(user, product, rating) => ((user, product), rating)
}
val ratingsAndPredictions = ratings.map{
    case Rating(user, product, rating) => ((user, product), rating)
}.join(predictions)
val MSE = ratingsAndPredictions.map{
    case ((user, product), (actual, predicted)) =>  math.pow((actual - predicted), 2)
}.reduce(_ + _) / ratingsAndPredictions.count
println("Mean Squared Error = " + MSE)

Mean Squared Error = 0.08527363423596633

val RMSE = math.sqrt(MSE)
println("Root Mean Squared Error = " + RMSE)

Root Mean Squared Error = 0.2920164965134099
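The two metrics are easy to verify by hand. A sketch on a toy set of (actual, predicted) rating pairs (values made up), mirroring the map/reduce computation above:

```scala
// Toy (actual, predicted) rating pairs -- made-up values.
val pairs = Seq((5.0, 4.5), (3.0, 3.5), (4.0, 4.0))

// MSE = mean of squared errors; RMSE = its square root.
val mse = pairs.map { case (a, p) => math.pow(a - p, 2) }.sum / pairs.size
val rmse = math.sqrt(mse)
println(mse)  // (0.25 + 0.25 + 0.0) / 3 ≈ 0.1667
println(rmse) // ≈ 0.4082
```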

4.2 Mean Average Precision at K (MAPK)
def avgPrecisionK(actual: Seq[Int], predicted: Seq[Int], k: Int): Double = {
  val predK = predicted.take(k)
  var score = 0.0
  var numHits = 0.0
  for ((p, i) <- predK.zipWithIndex) {
    if (actual.contains(p)) {
      numHits += 1.0
      score += numHits / (i.toDouble + 1.0)
    }
  }
  if (actual.isEmpty) {
    1.0
  } else {
    score / scala.math.min(actual.size, k).toDouble
  }
}
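To see how the rank position of each hit affects the score, the function can be run on made-up lists (the definition is repeated so this sketch runs standalone):

```scala
// Same avgPrecisionK as above, repeated so this sketch is self-contained.
def avgPrecisionK(actual: Seq[Int], predicted: Seq[Int], k: Int): Double = {
  val predK = predicted.take(k)
  var score = 0.0
  var numHits = 0.0
  for ((p, i) <- predK.zipWithIndex) {
    if (actual.contains(p)) {
      numHits += 1.0
      score += numHits / (i.toDouble + 1.0)
    }
  }
  if (actual.isEmpty) 1.0 else score / scala.math.min(actual.size, k).toDouble
}

// Hits at ranks 1 and 3; the rank-2 prediction (item 4) is a miss.
val apk = avgPrecisionK(Seq(1, 2, 3), Seq(1, 4, 2), 3)
println(apk) // (1/1 + 2/3) / 3 = 5/9 ≈ 0.5556

// A perfect ordering of all relevant items scores 1.0.
println(avgPrecisionK(Seq(1, 2, 3), Seq(1, 2, 3), 3))
```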
val actualMovies = moviesForUser.map(_.product)
val predictedMovies = topKRecs.map(_.product)
val apk10 = avgPrecisionK(actualMovies, predictedMovies, 10)
val itemFactors = model.productFeatures.map { case (id, factor) => factor }.collect()
val itemMatrix = new DoubleMatrix(itemFactors)
println(itemMatrix.rows, itemMatrix.columns)

(1682,50)

val imBroadcast = sc.broadcast(itemMatrix)
val allRecs = model.userFeatures.map{ case (userId, array) =>
  val userVector = new DoubleMatrix(array)
  val scores = imBroadcast.value.mmul(userVector)
  val sortedWithId = scores.data.zipWithIndex.sortBy(-_._1)
  val recommendedIds = sortedWithId.map(_._2 + 1).toSeq
  (userId, recommendedIds)
}
val userMovies = ratings.map{ case Rating(user, product, rating) => (user, product) }.groupBy(_._1)
val K = 10
val MAPK = allRecs.join(userMovies).map{ case (userId, (predicted, actualWithIds)) =>
  val actual = actualWithIds.map(_._2).toSeq
  avgPrecisionK(actual, predicted, K)
}.reduce(_ + _) / allRecs.count
println("Mean Average Precision at K = " + MAPK)

Mean Average Precision at K = 0.030001472840815356
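The scoring step inside `allRecs` is a matrix-vector multiply: each item's score is the dot product of its factor row with the user's factor vector, and item ids (1-based in the MovieLens data) are then sorted by descending score. A plain-Scala sketch of that step on a toy 3×2 factor matrix (all values made up):

```scala
// Toy item-factor matrix: 3 items, rank-2 factors (made-up values).
val toyItemMatrix = Array(
  Array(1.0, 0.0),
  Array(0.0, 1.0),
  Array(1.0, 1.0)
)
val toyUserVector = Array(2.0, 3.0)

// scores(i) = dot(row i, user vector), which is what mmul computes row by row.
val toyScores = toyItemMatrix.map(row => row.zip(toyUserVector).map { case (a, b) => a * b }.sum)

// Rank 1-based item ids by descending score, as in allRecs.
val toyRecommendedIds = toyScores.zipWithIndex.sortBy(-_._1).map(_._2 + 1).toSeq
println(toyRecommendedIds) // item 3 (score 5.0) first, then item 2 (3.0), then item 1 (2.0)
```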

4.3 MLlib built-in evaluation functions: RMSE and MSE
import org.apache.spark.mllib.evaluation.RegressionMetrics
val predictedAndTrue = ratingsAndPredictions.map { case ((user, product), (actual, predicted)) => (actual, predicted) }
val regressionMetrics = new RegressionMetrics(predictedAndTrue)
println("Mean Squared Error = " + regressionMetrics.meanSquaredError)

Mean Squared Error = 0.08527363423596633

println("Root Mean Squared Error = " + regressionMetrics.rootMeanSquaredError)

Root Mean Squared Error = 0.2920164965134099

4.4 MLlib built-in evaluation functions: MAP (Mean Average Precision)
import org.apache.spark.mllib.evaluation.RankingMetrics
val predictedAndTrueForRanking = allRecs.join(userMovies).map{ case (userId, (predicted, actualWithIds)) =>
  val actual = actualWithIds.map(_._2)
  (predicted.toArray, actual.toArray)
}
val rankingMetrics = new RankingMetrics(predictedAndTrueForRanking)
println("Mean Average Precision = " + rankingMetrics.meanAveragePrecision)

Mean Average Precision = 0.07208991526855565

val MAPK2000 = allRecs.join(userMovies).map{ case (userId, (predicted, actualWithIds)) =>
  val actual = actualWithIds.map(_._2).toSeq
  avgPrecisionK(actual, predicted, 2000)
}.reduce(_ + _) / allRecs.count
println("Mean Average Precision = " + MAPK2000)

Mean Average Precision = 0.07208991526855561

Date: 2024-09-26 11:10:52
