...
```sql
madlib.argmax(integer key, float8 value)
```
2. Functions and Languages
To simplify this guide, we'd like to introduce three categories of user-defined functions:
UDAs - user-defined aggregates, which perform a single scan of the data source and return an aggregated value for a group of rows. All UDA component functions should be written in PL/C (C/C++) for performance and portability reasons.
Row Level UDFs - functions that operate on their arguments only and do not dispatch any SQL statements. These functions generate a result for each argument set, or for each tuple they are executed on. Recommended language is the same as for UDAs.
Driver UDFs - functions that usually drive the execution of an algorithm and may perform multiple SQL operations, including data modification. To keep this part of the code portable, we suggest using PL/Python wrapper functions based on plain Python modules. DB access inside the Python modules should be implemented using the "classic" PyGreSQL interface ([[http://www.pygresql.org/pg.html]]).
This topic will be covered in much more detail in [[Design Patterns & Best Practices]].
4. Function Name Overloading
The suggestions below on name overloading apply to all the above-mentioned types of user-defined functions.
Data Types
Some platforms (like PostgreSQL) support the ANYELEMENT/ANYARRAY data types, which MADlib routines can use (whenever it makes sense) to minimize code duplication.
If ANYELEMENT/ANYARRAY functionality is not available or not feasible, function-name overloading can be used for different argument data types. For example, function F1 from module M1 can have the following versions:
- TEXT data type example:
```sql
madlib.m1_f1(arg1 TEXT)
```
- NUMERIC data type example:
```sql
madlib.m1_f1(arg1 BIGINT/FLOAT/etc.)
```
Argument Sets
Overloading mechanisms should also be used for different sets of parameters. For example, if (reqarg1, ..., reqargN)
is a set of required parameters for function F1 from module M1, then the following definitions would be correct:
- A version for required arguments only:
```sql
madlib.m1_f1(reqarg1, ..., reqargN)
```
- A version for both required and optional arguments:
```sql
madlib.m1_f1(reqarg1, ..., reqargN, optarg1, ..., optargN)
```
5. Guide to Driver UDFs
- Should follow the naming conventions described in section 2.
- Should follow the function overloading rules as described in section 4. On Greenplum and PostgreSQL this can be achieved via PL/Python wrapper UDFs based on the same main Python code.
5.1. Input Definition
Parameters of the execution should be supplied directly in the function call (as opposed to passing a reference ID to a parameter-set stored in a table); for example:
```sql
SELECT madlib.m1_f1(par1 TEXT/INT/etc, par2 TEXT[]/INT[]/etc, ...)
```
Data should be passed to the function as a text argument of the form schema.table, referring to an existing table or view, which:
- Can be located in any schema as long as the database user executing the function has read permissions.
- Should be defined in the method documentation; for example:
```sql
TABLE|VIEW (
    col_x INT,
    col_y FLOAT,
    col_z TEXT
)
```
- The input relation and the attributes the function needs should be validated using primitive functions from the helper.py module. See section 5.4 for more information.
5.2. Output Definition
Returning Simple Results or Models
We recommend using standard output to return a predefined single-record structure whenever the results of a method or a model definition are in a human-readable format. See the examples below:
- Returning a model ([[Linear Regression|http://doc.madlib.net/groupgrplinreg.html]]):
```sql
SELECT mregr_coef(price, array[1, bedroom, bath, size]) FROM houses;
             mregr_coef
------------------------------------
 {27923.4,-35524.8,2269.34,130.794}
```
- Returning results ([[FM Sketch|http://doc.madlib.net/groupgrpfmsketch.html]]):
```sql
SELECT madlib.fmsketch_dcount(pronargs) FROM pg_proc;
 fmsketch_dcount
-----------------
              10
```
Note: If it turns out that a large user population would prefer to have the model output saved in a table, you can add optional parameters as described in the following section.
Returning Complex Models
If a particular method returns a complex model that is represented in multiple rows, it should be saved into a table with a predefined structure. The name of this table (including the target schema) should be passed in the function call as an argument.
- Example ([[Decision Tree|http://doc.madlib.net/groupgrpdectree.html]]):
```sql
SELECT * FROM madlib.dtree_train('user_schema.user_table', 3, 10);
      output_table
------------------------
 user_schema.user_table
```
Returning Large Result Sets
Returning one or more data sets is handled like returning a complex model: the name(s) of all tables created during the execution of the function must be supplied by the user in the function call. See the Summary Output section below for an example of a method with multiple-table output.
Summary Output
Each Driver UDF should return a summary output in the form of a pre-defined record/row. Each attribute of the result should be clearly defined in the method documentation. If any tables/views are created or populated during the execution their full names should be returned in the summary. For example, the output of a k-means clustering algorithm could look like this:
```sql
 clusters | pct_input_used |     cluster_table      |     point_table
----------+----------------+------------------------+---------------------
       10 |            100 | my_schema.my_centroids | my_schema.my_points
```
The above output can be achieved in the following way:
1) Create a data type for the result set, madlib.kmeans_result:
```sql
CREATE TYPE madlib.kmeans_result AS (
    clusters        INTEGER,
    pct_input_used  PERCENT,
    output_schema   TEXT,
    cluster_table   TEXT,
    point_table     TEXT
);
```
2) If using the recommended PL/Python language (see section 3 for more info) you can use the following example to generate a single row of output inside a Python routine:
```sql
CREATE OR REPLACE FUNCTION madlib.kmeans_dummy()
RETURNS SETOF madlib.kmeans_result
AS $$
    -- For SETOF results, PL/Python expects a sequence of rows,
    -- so a single result row is returned as a one-element list.
    return [(10, 100.0, 'my_schema', 'my_centroids', 'my_points')]
$$ LANGUAGE plpythonu;
```
5.3. Logging
- ERROR
If a function encounters a problem, it should raise an error using the plpy.error(message) function (see section 6.1). This ensures clean termination of the execution and error propagation to the calling environment.
- INFO
If requested by the user (via a verbose flag/parameter), long-running methods can send runtime status to the log. Be aware that this information may not be propagated to clients in many cases, and it will enlarge the stored log file. Informational logging should be off by default and activated only by an explicit user request. Use the plpy.info(message) function (see section 6.1) to generate information logs. Example log output:
```sql
SQL> select madlib.kmeans_run( 'my_schema.data_set_1', 10, 1, 'run1', 'my_schema', 1);
INFO: Parameters:
INFO: * k = 10 (number of centroids)
INFO: * input_table = my_schema.data_set_1
INFO: * goodness = 1 (GOF test on)
INFO: * run_id = run1
INFO: * output_schema = my_schema
INFO: * verbose = 1 (on)
INFO: Seeding 10 centroids...
INFO: Using sample data set for analysis... (9200 out of 10000 points)
INFO: ...Iteration 1
INFO: ...Iteration 2
INFO: Exit reason: fraction of reassigned nodes is smaller than the limit: 0.001
INFO: Expanding cluster assignment to all points...
INFO: Calculating goodness of fit...
...
```
5.4. Parameter Validation
Parameter validation should be performed in each function to avoid preventable errors.
For simple arguments (scalars, arrays), sanity checks should be written by the author. Common parameters with well-known value domains should be validated using general-purpose SQL domains; for example:
- Percent
```sql
CREATE DOMAIN percent AS FLOAT
CHECK (
    VALUE >= 0.0 AND VALUE <= 100.0
);
```
- Probability
```sql
CREATE DOMAIN probability AS FLOAT
CHECK (
    VALUE >= 0.0 AND VALUE <= 1.0
);
```
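For driver UDFs written in PL/Python, the same range checks can also be enforced on the Python side before any SQL is issued. The sketch below is illustrative only; the helper names validate_percent and validate_probability are hypothetical and not part of MADlib.

```python
# Hypothetical Python-side counterparts of the SQL domains above.
# The function names are illustrative only; they are not part of MADlib.

def validate_percent(value):
    """Raise ValueError unless value is a valid percent (0.0 - 100.0)."""
    value = float(value)
    if not 0.0 <= value <= 100.0:
        raise ValueError("percent out of range [0.0, 100.0]: %r" % value)
    return value

def validate_probability(value):
    """Raise ValueError unless value is a valid probability (0.0 - 1.0)."""
    value = float(value)
    if not 0.0 <= value <= 1.0:
        raise ValueError("probability out of range [0.0, 1.0]: %r" % value)
    return value
```

A driver UDF would call these on its scalar arguments up front and translate any ValueError into a plpy.error(message) call (see section 5.3).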
For table/view and column arguments please see section 6.2 (describing usage of the helper.py module).
5.5. Multi-User and Multi-Session Execution
In order to avoid unpleasant situations such as overwriting or deleting results, MADlib functions should be ready for execution in a multi-session or multi-user environment. Hence the following requirements should be met:
- Input relations (tables or views) should be used for read-only purposes.
- Any user output table given as an argument must not overwrite an existing database relation. In such a case an error should be returned.
- Any execution-specific tables should be locked in EXCLUSIVE MODE after creation. This functionality will be implemented inside the Python abstraction layer. There is no need to release locks, as they persist until the end of the main UDF.
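As an illustration of the create-then-lock pattern, the sketch below builds the statements a driver UDF would issue for an execution-specific table. The function name work_table_ddl and the column layout are hypothetical; in a real driver UDF each statement would be run through a plpy-style execute() wrapper (section 6.1).

```python
# Sketch of the create-then-lock pattern for execution-specific tables.
# The function name and column layout are hypothetical; a real driver
# UDF would pass each statement to a plpy-style execute() wrapper.

def work_table_ddl(table_name):
    """Return the SQL a driver UDF would run to create and immediately
    lock its execution-specific work table."""
    return [
        "CREATE TABLE %s (point_id BIGINT, centroid_id INT)" % table_name,
        # Locks persist until the end of the main UDF's transaction,
        # so no explicit unlock statement is needed.
        "LOCK TABLE %s IN EXCLUSIVE MODE" % table_name,
    ]
```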
6. Support Modules
A set of Python modules to make MADlib development easier.
6.1. DB connectivity module: plpy.py
This module serves as the database access layer. Even though it is not currently used, this module will provide easy portability between various MADlib platforms and interfaces. To clarify: the PostgreSQL PL/Python language currently uses an internal plpy.py module to implement seamless DB access (using the "classic" PyGreSQL interface; see [[http://www.pygresql.org/pg.html]]). By adding a MADlib version of plpy.py, we'll be able to more easily port code written for MADlib.
Currently implemented functionality:
```python
def connect(dbname, host, port, user, passwd)
def close()
def execute(sql)
def info(msg)
def error(msg)
```
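To make the shape of this interface concrete, here is a minimal in-memory stand-in. sqlite3 is substituted for the PyGreSQL backend purely so the sketch is self-contained and runnable; the real module would connect through the "classic" PyGreSQL interface instead.

```python
# Minimal stand-in for the plpy.py interface above. sqlite3 replaces
# PyGreSQL only so this sketch is self-contained; the real module would
# use the "classic" PyGreSQL interface.
import sqlite3

_conn = None

def connect(dbname, host=None, port=None, user=None, passwd=None):
    # The connection parameters are ignored by this in-memory stand-in.
    global _conn
    _conn = sqlite3.connect(":memory:")

def close():
    global _conn
    if _conn is not None:
        _conn.close()
        _conn = None

def execute(sql):
    cur = _conn.execute(sql)
    rows = cur.fetchall()
    _conn.commit()
    return rows

def info(msg):
    print("INFO: %s" % msg)

def error(msg):
    # Raising ensures proper termination and error propagation (section 5.3).
    raise RuntimeError(msg)
```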
6.2. Python abstraction layer (TO DO)
This module consists of a set of functions to support common data validation and database object management tasks.
Example functions:
- table/view existence check
```python
def __check_rel_exist(relation):
    # if relation ~ schema.table:
    #     check if it exists
    # if relation ~ table:
    #     find the first schema using SEARCH_PATH order
    # returns:
    #     (schema, table)
    #     None (if not found)
```
- relation column existence check (assuming relation exists)
```python
def __check_rel_column(relation, column):
    # returns:
    #     (schema, table, column)
    #     None (if not found)
```
- relation column data type check (assuming table & column exist)
```python
def __check_rel_column(relation, column, data_type):
    # returns:
    #     (schema, table, column, type)
    #     None (if not found)
```
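The name-resolution logic behind the existence check can be sketched independently of the actual catalog queries (which, in the real module, would run through plpy.execute against the system catalogs). The catalog and search_path arguments below are hypothetical stand-ins for those lookups.

```python
# Sketch of the resolution logic behind __check_rel_exist. The catalog
# argument (a dict mapping schema -> set of relation names) and the
# search_path list are stand-ins for real system-catalog queries.

def check_rel_exist(relation, catalog, search_path):
    if "." in relation:
        # Qualified name: check that exact schema.table pair.
        schema, table = relation.split(".", 1)
        if table in catalog.get(schema, set()):
            return (schema, table)
        return None
    # Unqualified name: find the first match in SEARCH_PATH order.
    for schema in search_path:
        if relation in catalog.get(schema, set()):
            return (schema, relation)
    return None
```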