You should see something like the following when it finishes building successfully.
…
PredictionIO-0.9.6/sbt/sbt
PredictionIO-0.9.6/conf/
PredictionIO-0.9.6/conf/pio-env.sh
PredictionIO binary distribution created at PredictionIO-0.9.6.tar.gz

Extract the binary distribution you have just built.
$ tar zxvf PredictionIO-0.9.6.tar.gz
Let us install dependencies inside a subdirectory of the Apache PredictionIO (incubating) installation. By following this convention, you can use Apache PredictionIO (incubating)’s default configuration as is.
$ mkdir PredictionIO-0.9.6/vendors
Apache Spark is the default processing engine for PredictionIO. Download and extract it.
$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.5.1-bin-hadoop2.6.tgz
$ tar zxvfC spark-1.5.1-bin-hadoop2.6.tgz PredictionIO-0.9.6/vendors
If you decide to install Apache Spark to another location, you must edit PredictionIO-0.9.6/conf/pio-env.sh and change the SPARK_HOME variable to point to your own Apache Spark installation.
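If you kept the vendors layout from the steps above, the relevant line in PredictionIO-0.9.6/conf/pio-env.sh would look something like this (the path reflects the Spark version downloaded in this guide; adjust it if you installed Spark elsewhere):

```shell
# pio-env.sh: tell PredictionIO where Spark lives.
# $PIO_HOME resolves to the PredictionIO-0.9.6 installation directory.
SPARK_HOME=$PIO_HOME/vendors/spark-1.5.1-bin-hadoop2.6
```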
The official examples use PostgreSQL, or HBase + Elasticsearch. I chose MySQL as the data store because, for data visualization later on, I plan to use Caravel to automatically generate dashboards; I will cover that in detail in a follow-up article.
#!/usr/bin/env bash

# Copy this file as pio-env.sh and edit it for your site's configuration.

# PredictionIO Main Configuration
#
# This section controls core behavior of PredictionIO. It is very likely that
# you need to change these to fit your site.

# SPARK_HOME: Apache Spark is a hard dependency and must be configured.

# ES_CONF_DIR: You must configure this if you have advanced configuration for
#              your Elasticsearch setup.
# ES_CONF_DIR=/opt/elasticsearch

# HADOOP_CONF_DIR: You must configure this if you intend to run PredictionIO
#                  with Hadoop 2.
# HADOOP_CONF_DIR=/opt/hadoop

# HBASE_CONF_DIR: You must configure this if you intend to run PredictionIO
#                 with HBase on a remote cluster.
# HBASE_CONF_DIR=$PIO_HOME/vendors/hbase-1.0.0/conf

# Filesystem paths where PredictionIO uses as block storage.

# PredictionIO Storage Configuration
#
# This section controls programs that make use of PredictionIO's built-in
# storage facilities. Default values are shown below.
#
# For more information on storage configuration please refer to
# https://docs.prediction.io/system/anotherd

# Storage Repositories

# Default is to use PostgreSQL

# Storage Data Sources

# PostgreSQL Default Settings
# Please change "pio" to your database name in PIO_STORAGE_SOURCES_PGSQL_URL
# Please change PIO_STORAGE_SOURCES_PGSQL_USERNAME and
# PIO_STORAGE_SOURCES_PGSQL_PASSWORD accordingly
# PIO_STORAGE_SOURCES_PGSQL_TYPE=jdbc
# PIO_STORAGE_SOURCES_PGSQL_URL=jdbc:postgresql://localhost/pio
# PIO_STORAGE_SOURCES_PGSQL_USERNAME=pio
# PIO_STORAGE_SOURCES_PGSQL_PASSWORD=pio

# MySQL Example

# Elasticsearch Example
# PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
# PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=<elasticsearch_cluster_name>
# PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
# PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300
# PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=$PIO_HOME/vendors/elasticsearch-1.4.4

# Local File System Example
# PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
# PIO_STORAGE_SOURCES_LOCALFS_PATH=$PIO_FS_BASEDIR/models

# HBase Example
# PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
# PIO_STORAGE_SOURCES_HBASE_HOME=$PIO_HOME/vendors/hbase-1.0.0
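Since I am using MySQL, the commented-out PostgreSQL block can be swapped for a JDBC source along these lines. This is a sketch: the database name, table names, and credentials below are placeholders for illustration, and you will also need the MySQL Connector/J driver JAR on PredictionIO's classpath for the JDBC source to work.

```shell
# MySQL via PredictionIO's generic JDBC storage source (illustrative values).
# Assumes a MySQL database named "pio" reachable on localhost and the
# MySQL Connector/J JAR available to PredictionIO.
PIO_STORAGE_SOURCES_MYSQL_TYPE=jdbc
PIO_STORAGE_SOURCES_MYSQL_URL=jdbc:mysql://localhost/pio
PIO_STORAGE_SOURCES_MYSQL_USERNAME=pio
PIO_STORAGE_SOURCES_MYSQL_PASSWORD=pio

# Point all three storage repositories at the MySQL source.
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=MYSQL
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=MYSQL
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=MYSQL
```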
Once started, the master will print out a spark://HOST:PORT URL for itself, which you can use to connect workers to it, or pass as the “master” argument to SparkContext. You can also find this URL on the master’s web UI, which is http://localhost:8080 by default. Similarly, you can start one or more workers and connect them to the master via:
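With the standalone-mode scripts shipped in the Spark distribution, starting the master and a worker looks roughly like this (paths assume the vendors install above; substitute the actual spark://HOST:PORT URL the master prints):

```shell
# Start a standalone Spark master; it logs its spark://HOST:PORT URL.
PredictionIO-0.9.6/vendors/spark-1.5.1-bin-hadoop2.6/sbin/start-master.sh

# Start a worker and connect it to that master.
PredictionIO-0.9.6/vendors/spark-1.5.1-bin-hadoop2.6/sbin/start-slave.sh spark://HOST:PORT
```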
Once you have started a worker, look at the master’s web UI (http://localhost:8080 by default). You should see the new node listed there, along with its number of CPUs and memory (minus one gigabyte left for the OS).
3.3 Create a new Engine from an Engine Template
Now let’s create a new engine called MyRecommendation by downloading the Recommendation Engine Template. Go to a directory where you want to put your engine and run the following:
$ pio template get PredictionIO/template-scala-parallel-recommendation MyRecommendation
$ cd MyRecommendation
A new directory MyRecommendation is created, where you can find the downloaded engine template.
3.4 Generate an App ID and Access Key
You will need to create a new App in PredictionIO to store all the data of your app. The data collected will be used for machine learning modeling. Let’s assume you want to use this engine in an application named “MyApp1”. Run the following to create a new app “MyApp1”:
$ pio app new MyApp1
You should find the following in the console output: