...
In Sqoop 2, connectors will supply their own map phase that imports data into HDFS. Because this code will be fully under connector maintenance, we need to agree on a common intermediate (map output) format for all connectors and all cases. The goal of this page is to compare different intermediate representations so that we can pick the appropriate one for Sqoop 2.
Goals
- Simple
- Fast (no unnecessary parsing, encoding, ...)
Ideas
List of ideas that we've explored.
MySQL's mysqldump format
A comma-separated list of values held in a single Text instance. The various data types are encoded as follows:
...
Code Block
0,'Hello world','Jarcec\'s notes',NULL,66.6,'2012-06-06 06:06:06'
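To make the escaping rules concrete, here is a minimal sketch of a mysqldump-style encoder: strings are single-quoted with backslashes escaping special characters, numbers are emitted directly, and null becomes the unquoted literal NULL. The class and method names are hypothetical, not part of Sqoop.

```java
// Sketch of mysqldump-style encoding (hypothetical helper, not Sqoop code).
public class MysqldumpEncoder {

    // Encode a single value; null becomes the literal NULL.
    public static String encode(Object value) {
        if (value == null) {
            return "NULL";
        }
        if (value instanceof Number) {
            return value.toString();  // numbers are emitted directly, unquoted
        }
        // Strings: escape backslashes first, then single quotes, then wrap in quotes
        String s = value.toString()
            .replace("\\", "\\\\")
            .replace("'", "\\'");
        return "'" + s + "'";
    }

    // Join one record into the comma-separated Text payload.
    public static String encodeRecord(Object... values) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(encode(values[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Produces: 0,'Hello world','Jarcec\'s notes',NULL,66.6
        System.out.println(encodeRecord(0, "Hello world", "Jarcec's notes", null, 66.6));
    }
}
```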
PostgreSQL's pg_dump format
As in the MySQL dump format, data would be represented as one Text instance where multiple columns are separated by commas. Strings are quoted in single quotes (for example 'string'). All characters are printed as they are, with the exception of the single quote, which is doubled: two single quotes '' represent one single quote inside the string rather than the end of the string (for example 'Jarcec''s notes'). A quoted string consisting of one single quote is therefore written as four single quotes: '''' represents just one ' (the first quote opens the string, the two quotes in the middle encode one single quote inside the string, and the last quote closes the string). The null byte (0x00) is not allowed inside string constants. Binary constants are also quoted in single quotes, but all bytes are converted to hex with a \x prefix; for example '\x4d7953514c' stands for the string 'MySQL' (as stored in a binary column such as bytea).
Data type | Serialized as
---|---
INT (and all variants) | Direct value (666)
NUMERIC | Direct value (66.60)
REAL (and all variants) | Direct value (66.5999985, 55e55) or string constant for special cases ('Infinity', '-Infinity', 'NaN')
VARCHAR (text, ...) | String
CHAR | String, unused positions at the end are filled with spaces
TIMESTAMP (date, time, ...) | String in format YYYY-MM-DD HH:MM:SS.ZZZZZZ (date and time parts)
TIMESTAMP with time zone (and others) | String in format YYYY-MM-DD HH:MM:SS.ZZZZZZ[+-]XX ('2012-07-03 14:07:11.876239+02')
BOOLEAN | Constants true and false (not quoted as strings)
ENUM | String
ARRAY | String containing a special structure: '{ITEM1, ITEM2, ITEM3}', where each item may itself be quoted if needed
Encoded example:
Code Block
666, 66.60, 'jarcec', 'Jarcec''s notes', '2012-07-03 14:07:11.876239', true, '{1,2,3}', NULL, '\x4d7953514c'
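The two pg_dump-specific rules above, quote doubling and hex-encoded binary, can be sketched as follows (the helper class is hypothetical, not part of Sqoop or PostgreSQL):

```java
// Sketch of pg_dump-style string and bytea encoding (hypothetical helper).
public class PgDumpEncoder {

    // Single quotes inside string constants are doubled, then the whole
    // value is wrapped in single quotes.
    public static String encodeString(String s) {
        return "'" + s.replace("'", "''") + "'";
    }

    // Binary values are emitted as lowercase hex with a \x prefix,
    // wrapped in single quotes.
    public static String encodeBytea(byte[] bytes) {
        StringBuilder sb = new StringBuilder("'\\x");
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.append('\'').toString();
    }

    public static void main(String[] args) {
        System.out.println(encodeString("Jarcec's notes"));  // 'Jarcec''s notes'
        System.out.println(encodeBytea("MySQL".getBytes())); // '\x4d7953514c'
    }
}
```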
Avro
We could also utilize Avro for the intermediate format. Avro supports only a limited number of data types, so we would have to encode some types ourselves (for example, we might encode a Date as a string in the same format as above). The list of all supported types can be found on the Avro wiki page. Example mapper code in the connector itself would be:
Code Block
// Build a record matching the connector's schema and emit it as the map output key.
GenericRecordBuilder builder = new GenericRecordBuilder(getSchemaSomehow());
builder.set("column1", value1);
builder.set("columnX", valueX);
context.write(new AvroKey<GenericRecord>(builder.build()), NullWritable.get());
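Since Avro has no native date or timestamp type, a connector would need to agree on a string encoding before calling builder.set(). A minimal sketch using the JDK's SimpleDateFormat (the helper class and the chosen precision are assumptions, not a Sqoop decision):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Sketch: encode a Date as a timestamp string of the form used above,
// since Avro itself would only see it as a plain string field.
public class AvroDateEncoding {

    public static String encodeDate(Date date) {
        // Millisecond precision; a real connector might pick microseconds
        // to match the database's own timestamp resolution.
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        return format.format(date);
    }

    public static void main(String[] args) {
        // Prints the epoch instant in the local time zone,
        // e.g. 1970-01-01 00:00:00.000 in UTC.
        System.out.println(encodeDate(new Date(0L)));
    }
}
```

The resulting string would then be passed to builder.set("dateColumn", ...) like any other string value.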