This guide explains all of the elements needed to successfully develop and plug in a new MADlib® module.
...
Install MADlib by following the steps in the Installation Guide for MADlib or use the Docker image instructions below.
MADlib source code is organized such that the core logic of a machine learning or statistical module is located in a common location, and the database-port specific code is located in a ports
folder. Since all currently supported databases are based on Postgres, the postgres
port contains all the port-specific files, with greenplum
and hawq
inheriting from it. Before proceeding with this guide, it is recommended that you familiarize yourself with the MADlib architecture and module anatomy.
...
We provide a Docker image with necessary dependencies required to compile and test MADlib on PostgreSQL 9.6. You can view the dependency docker file at ./tool/docker/base/Dockerfile_postgres_9_6
. The image is hosted on Docker Hub at madlib/postgres_9.6:latest
. Later we will provide a similar Docker image for Greenplum Database.
Some useful commands for working with the Docker image:
## 1) Pull down the `madlib/postgres_9.6:latest` image from docker hub:
docker pull madlib/postgres_9.6:latest
## 2) Launch a container corresponding to the MADlib image, mounting the source code folder to the container:
docker run -d -it --name madlib -v (path to incubator-madlib directory):/incubator-madlib/ madlib/postgres_9.6
where incubator-madlib is the directory where the MADlib source code resides.
############################################## * WARNING * ##################################################
# Please be aware that when mounting a volume as shown above, any changes you make in the "incubator-madlib"
# folder inside the Docker container will be reflected on your local disk (and vice versa). This means that
# deleting data in the mounted volume from a Docker container will delete the data from your local disk also.
#############################################################################################################
## 3) When the container is up, connect to it and build MADlib:
docker exec -it madlib bash
mkdir /incubator-madlib/build-docker
cd /incubator-madlib/build-docker
cmake ..
make
make doc
make install
## 4) Install MADlib:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres install
## 5) Several other madpack commands can now be run:
# Run install check, on all modules:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres install-check
# Run install check, on a specific module, say svm:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres install-check -t svm
# Run dev check, on all modules (more comprehensive than install check):
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres dev-check
# Run dev check, on a specific module, say svm:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres dev-check -t svm
# Reinstall MADlib:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres reinstall
## 6) Kill and remove containers (after exiting the container):
docker kill madlib
docker rm madlib |
Let's add a new module called hello_world
. Inside this module we implement a User-Defined SQL Aggregate (UDA), called avg_var
which computes the mean and variance for a given numerical column of a table. We'll implement a distributed version of Welford's online algorithm for computing the mean and variance.
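The update rule behind this can be sketched in a few lines of Python (a hypothetical illustration, not MADlib code; the name welford_update is ours). The state triple mirrors the (mean, variance, row count) array that avg_var will maintain:

```python
# Hypothetical sketch of Welford's online update, as used by avg_var.
# state = (mean, population variance, row count)
def welford_update(state, x):
    avg, var, n = state
    diff = x - avg
    # online update of the mean
    avg += diff / (n + 1)
    new_diff = x - avg
    # online update of the (population) variance
    var = var * (n / (n + 1)) + diff * new_diff / (n + 1)
    return (avg, var, n + 1)

state = (0.0, 0.0, 0)
for x in [1.0, 2.0, 3.0, 4.0]:
    state = welford_update(state, x)
print(state)  # mean, population variance, and count for [1, 2, 3, 4]
```

Because each step folds a single row into a constant-size state, no large intermediate sums accumulate, which is exactly the property the C++ implementation below relies on.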
Unlike an ordinary UDA in PostgreSQL, avg_var
will also work on a distributed database and take advantage of the underlying distributed network for parallel computations. The usage of avg_var
is simple: users run the following command in psql:
Code Block
SELECT avg_var(bath) FROM houses;
which will print three numbers on the screen: mean, variance and number of rows in column bath
of table houses
.
Below are the main steps we will go through:
- Register the module.
- Define the SQL functions.
- Implement the functions in C++.
- Register the C++ header files.
The files for this exercise can be found in the hello world folder of the source code repository.
1. Register the module
Add the following line to the file called Modules.yml
under ./src/config/
Code Block
- name: hello_world
and create two folders: ./src/ports/postgres/modules/hello_world
and ./src/modules/hello_world
. The names of the folders need to match the name of the module specified in Modules.yml
.
2. Define the SQL functions
Create file avg_var.sql_in
under folder ./src/ports/postgres/modules/hello_world
. Inside this file we define the aggregate function and other helper functions for computing mean and variance. The actual implementations of those functions will be in separate C++ files which we will describe in the next section.
At the beginning of file avg_var.sql_in
the command m4_include(`SQLCommon.m4') is necessary to run the m4 macro processor. M4 is used to add platform-specific commands in the SQL definitions and is run while deploying MADlib to the database.
We define the aggregate function avg_var
using built-in PostgreSQL command CREATE AGGREGATE
.
Code Block
DROP AGGREGATE IF EXISTS MADLIB_SCHEMA.avg_var(DOUBLE PRECISION);
CREATE AGGREGATE MADLIB_SCHEMA.avg_var(DOUBLE PRECISION) (
SFUNC=MADLIB_SCHEMA.avg_var_transition,
STYPE=double precision[],
FINALFUNC=MADLIB_SCHEMA.avg_var_final,
m4_ifdef(`__POSTGRESQL__', `', `prefunc=MADLIB_SCHEMA.avg_var_merge_states,')
INITCOND='{0, 0, 0}'
);
|
We also define parameters passed to CREATE AGGREGATE
:
SFUNC
- The name of the state transition function to be called for each input row. The state transition function, avg_var_transition in this example, is defined in the same file avg_var.sql_in and implemented later in C++.
FINALFUNC
- The name of the final function called to compute the aggregate's result after all input rows have been traversed. The final function, avg_var_final in this example, is defined in the same file avg_var.sql_in and implemented later in C++.
PREFUNC
- The name of the merge function called to combine the aggregate's state values after each segment, or partition, of data has been traversed. The merge function is needed for distributed datasets on Greenplum and HAWQ. On PostgreSQL the data is not distributed, so the merge function is not necessary; for completeness we implement a merge function called avg_var_merge_states in this guide.
INITCOND
- The initial condition for the state value. In this example it is an all-zero double array corresponding to the values of mean, variance, and the number of rows, respectively.
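To make the division of labor concrete, here is a hypothetical Python simulation (not database code; the names sfunc, prefunc, and finalfunc are ours) of how a distributed engine drives these pieces: SFUNC folds over each segment's rows, PREFUNC merges per-segment states, and FINALFUNC produces the result:

```python
from functools import reduce

# State layout mirrors INITCOND '{0, 0, 0}': [mean, variance, row count].
def sfunc(state, x):                      # analogue of avg_var_transition
    avg, var, n = state
    diff = x - avg
    avg += diff / (n + 1)
    var = var * (n / (n + 1)) + diff * (x - avg) / (n + 1)
    return [avg, var, n + 1]

def prefunc(left, right):                 # analogue of avg_var_merge_states
    avg1, var1, n1 = left
    avg2, var2, n2 = right
    n = n1 + n2
    if n == 0:
        return [0.0, 0.0, 0]
    p1, p2 = n1 / n, n2 / n
    avg = p1 * avg1 + p2 * avg2
    var = (p1 * var1 + p2 * var2
           + p1 * (avg1 - avg) ** 2 + p2 * (avg2 - avg) ** 2)
    return [avg, var, n]

def finalfunc(state):                     # analogue of avg_var_final
    return None if state[2] == 0 else state

# Two "segments" of a distributed table.
segments = [[1.0, 2.0], [3.0, 4.0]]
states = [reduce(sfunc, seg, [0.0, 0.0, 0]) for seg in segments]
result = finalfunc(reduce(prefunc, states))
print(result)  # [2.5, 1.25, 4]
```

On PostgreSQL the prefunc stage simply never runs, which is why the m4_ifdef above omits the prefunc clause for the __POSTGRESQL__ port.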
...
Under the hood, each of the three UDFs is declared as a subclass of dbconnector::postgres::UDF. The behavior of each UDF is solely determined by its member function:
Code Block
AnyType run(AnyType &args);
In other words, we only need to implement the following methods in the avg_var.cpp
file:
Code Block
AnyType avg_var_transition::run(AnyType& args);
AnyType avg_var_merge_states::run(AnyType& args);
AnyType avg_var_final::run(AnyType& args);
...
Here the AnyType
class works for both passing data from the DBMS to the C++ function, as well as returning values back from C++. Refer to TypeTraits_impl.hpp
for more details.
Transition function
Code Block
AnyType
avg_var_transition::run(AnyType& args) {
    // get current state value
    AvgVarTransitionState<MutableArrayHandle<double> > state = args[0];
    // update state with current row value
    double x = args[1].getAs<double>();
    state += x;
    state.numRows ++;

    return state;
}
- There are two arguments for avg_var_transition, as specified in avg_var.sql_in. The first one is an array of SQL double type, corresponding to the current mean, variance, and number of rows traversed, and the second one is a double representing the current tuple value.
- We will describe the class AvgVarTransitionState later. Basically it takes args[0], a SQL double array, passes the data to the appropriate C++ types and stores them in the state instance.
- We compute the average and variance in an online manner by overloading the operator += in the class AvgVarTransitionState.
Merge function
Code Block
AnyType
avg_var_merge_states::run(AnyType& args) {
AvgVarTransitionState<MutableArrayHandle<double> > stateLeft = args[0];
AvgVarTransitionState<ArrayHandle<double> > stateRight = args[1];
// Merge states together and return
stateLeft += stateRight;
return stateLeft;
} |
- Again, the arguments contained in AnyType& args are defined in avg_var.sql_in.
- The details are hidden in the method of the class AvgVarTransitionState which overloads the operator +=.
Final function
Code Block
AnyType
avg_var_final::run(AnyType& args) {
    AvgVarTransitionState<MutableArrayHandle<double> > state = args[0];

    // If we haven't seen any data, just return Null. This is the standard
    // behavior of aggregate function on empty data sets (compare, e.g.,
    // how PostgreSQL handles sum or avg on empty inputs)
    if (state.numRows == 0)
        return Null();

    return state;
}
- The class AvgVarTransitionState overloads the AnyType() operator such that we can directly return state, an instance of AvgVarTransitionState, while the function is expected to return an AnyType.
Bridging class
Below are the methods that overload the operator +=
for the bridging class AvgVarTransitionState
:
Code Block
/**
 * @brief Update state with a new data point
 */
AvgVarTransitionState &operator+=(const double x){
    double diff = (x - avg);
    double normalizer = static_cast<double>(numRows + 1);
    // online update mean
    this->avg += diff / normalizer;

    // online update variance
    double new_diff = (x - avg);
    double a = static_cast<double>(numRows) / normalizer;
    this->var = (var * a) + (diff * new_diff) / normalizer;
    return *this;
}
/**
 * @brief Merge with another State object
 *
 * We update mean and variance in an online fashion
 * to avoid a large intermediate sum.
 */
template <class OtherHandle>
AvgVarTransitionState &operator+=(
        const AvgVarTransitionState<OtherHandle> &inOtherState) {
    if (mStorage.size() != inOtherState.mStorage.size())
        throw std::logic_error("Internal error: Incompatible transition "
                               "states");
    double avg_ = inOtherState.avg;
    double var_ = inOtherState.var;
    uint64_t numRows_ = static_cast<uint64_t>(inOtherState.numRows);
    double totalNumRows = static_cast<double>(numRows + numRows_);
    double p = static_cast<double>(numRows) / totalNumRows;
    double p_ = static_cast<double>(numRows_) / totalNumRows;
    double totalAvg = avg * p + avg_ * p_;
    double a = avg - totalAvg;
    double a_ = avg_ - totalAvg;

    numRows += numRows_;
    var = p * var + p_ * var_ + p * a * a + p_ * a_ * a_;
    avg = totalAvg;
    return *this;
}
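A quick way to convince yourself that this merge algebra is correct: combining two independently accumulated states must give the same result as processing all rows in one sequential pass. The Python check below is a hypothetical test harness (not MADlib code) that mirrors the update and merge formulas of the bridging class:

```python
def update(state, x):
    # same algebra as operator+=(const double x)
    avg, var, n = state
    diff = x - avg
    avg += diff / (n + 1)
    var = var * (n / (n + 1)) + diff * (x - avg) / (n + 1)
    return (avg, var, n + 1)

def merge(s1, s2):
    # same algebra as the templated operator+= over another state
    avg1, var1, n1 = s1
    avg2, var2, n2 = s2
    n = n1 + n2
    p1, p2 = n1 / n, n2 / n
    avg = p1 * avg1 + p2 * avg2
    var = (p1 * var1 + p2 * var2
           + p1 * (avg1 - avg) ** 2 + p2 * (avg2 - avg) ** 2)
    return (avg, var, n)

data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]

# one sequential pass over all rows
seq = (0.0, 0.0, 0)
for x in data:
    seq = update(seq, x)

# two "segments" accumulated independently, then merged
left = (0.0, 0.0, 0)
for x in data[:3]:
    left = update(left, x)
right = (0.0, 0.0, 0)
for x in data[3:]:
    right = update(right, x)
par = merge(left, right)

assert abs(seq[0] - par[0]) < 1e-9 and abs(seq[1] - par[1]) < 1e-9
```

The same equivalence is what lets Greenplum and HAWQ run the transition function on each segment in parallel and still return the exact single-pass answer.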
...
Now let's run an example using the new module. First, rebuild and reinstall MADlib according to the instructions in the Installation Guide. We use the patients
dataset from the MADlib Quick Start Guide for Users for testing purposes. From the psql
terminal, the result below shows that half of the 20 patients have had second heart attacks within 1 year (yes = 1):
Code Block
SELECT madlib.avg_var(second_attack) FROM patients;
-- ************ --
-- Result --
-- ************ --
+-------------------+
| avg_var |
|-------------------|
| [0.5, 0.25, 20.0] |
+-------------------+
-- (average, variance, count) -- |
...
...
Compared to the steps presented in the last section, here we do not need to modify the Modules.yml file because we are not creating a new module. Another difference is that we create an additional .py_in Python file along with the .sql_in file. That is where most of the iterative logic will be implemented.
The files for this exercise can be found in the hello world folder of the source code repository. Please note that __init__.py_in
is not included in this folder as an empty file will be sufficient for the purposes of this exercise.
1. Overview
The overall logic is split into three parts. All the UDFs and the UDA are defined in simple_logistic.sql_in
. The transition
, merge
and final
functions are implemented in C++. Those functions together constitute the UDA called __logregr_simple_step
which takes one step from the current state to decrease the logistic regression objective. And finally in simple_logistic.py_in
the plpy
package is used to implement in Python a UDF called logregr_simple_train
which invokes __logregr_simple_step
iteratively until convergence.
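The shape of that driver loop can be pictured with a pure-Python sketch. This is hypothetical illustration only: the real simple_logistic.py_in issues SQL through plpy and lets the __logregr_simple_step aggregate do the numerical work in the database, whereas here one call to logregr_step (a name we made up) stands in for one invocation of that aggregate:

```python
import math

def logregr_step(coef, rows, step_size=0.1):
    # one gradient-ascent step on the log-likelihood,
    # standing in for one SQL invocation of __logregr_simple_step
    grad = [0.0] * len(coef)
    for features, label in rows:
        margin = sum(c * f for c, f in zip(coef, features))
        p = 1.0 / (1.0 + math.exp(-margin))
        for j, f in enumerate(features):
            grad[j] += (label - p) * f
    return [c + step_size * g for c, g in zip(coef, grad)]

def logregr_train(rows, n_features, tol=1e-8, max_iter=10000):
    # iterate until the coefficients stop changing, as the Python
    # driver in simple_logistic.py_in does across SQL calls
    coef = [0.0] * n_features
    for _ in range(max_iter):
        new_coef = logregr_step(coef, rows)
        if max(abs(a - b) for a, b in zip(new_coef, coef)) < tol:
            break
        coef = new_coef
    return coef

# toy data: features are [intercept, x]; the label tends to follow x
rows = [([1.0, 0.0], 0), ([1.0, 0.0], 0), ([1.0, 0.0], 0), ([1.0, 0.0], 1),
        ([1.0, 1.0], 1), ([1.0, 1.0], 1), ([1.0, 1.0], 1), ([1.0, 1.0], 0)]
coef = logregr_train(rows, 2)
```

Keeping the per-step computation inside the database and only the convergence control in Python is what makes the MADlib version scale: each step is a single parallel aggregate over the table rather than a row-by-row loop in the client.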
...
The example below demonstrates the usage of madlib.logregr_simple_train
on the patients
table we used earlier. The trained classification model is stored in the table called logreg_mdl
and can be viewed using a standard SQL query.
Code Block
SELECT madlib.logregr_simple_train(
    'patients',                             -- source table
    'logreg_mdl',                           -- output table
    'second_attack',                        -- labels
    'ARRAY[1, treatment, trait_anxiety]');  -- features

SELECT * FROM logreg_mdl;

-- ************ --
--    Result    --
-- ************ --
+--------------------------------------------------+------------------+
| coef                                             | log_likelihood   |
|--------------------------------------------------+------------------|
| [-6.27176619714, -0.84168872422, 0.116267554551] | -9.42379         |
+--------------------------------------------------+------------------+
...