
...

Intermediate Data Format (IDF)

  • Connectors have FROM and TO parts, and a sqoop job represents a data transfer between the FROM and the TO. The IDF API defines how the data is represented as it flows between the FROM and TO via sqoop.

...

  • Connectors represent different data sources, and each data source can have its own custom/native format. For instance, MongoDB might use JSON as its optimal native format, HDFS can use plain CSV text, and S3 can use its own custom format. In simple terms, every data source has one thing in common: it is a collection of rows, and each row is a collection of fields/columns. Most, if not all, data sources have a strict schema that describes the type of each field

...

  • The IDF encapsulates the native format and the schema associated with each field.
  • Before diving into the IDF API, it is important to be aware of the two other low-level APIs that sqoop defines for reading and writing data between the FROM and TO data sources:
          
DataReader

 

Code Block
languagejava
titleDataReader
collapsetrue
/**
 * An intermediate layer for passing data from the execution engine
 * to the ETL engine.
 */
public abstract class DataReader {
  /**
   * Read data from the execution engine as an object array.
   * @return - array of objects with each column represented as an object
   * @throws Exception
   */
  public abstract Object[] readArrayRecord() throws Exception;
  /**
   * Read data from execution engine as text - as a CSV record.
   * @return - CSV formatted data.
   * @throws Exception
   */
  public abstract String readTextRecord() throws Exception;
  /**
   * Read data from execution engine as a native format.
   * @return - the content in the native format of the intermediate data
   * format being used.
   * @throws Exception
   */
  public abstract Object readContent() throws Exception;
}

   

DataWriter

Code Block
languagejava
titleDataWriter
collapsetrue
/**
 * An intermediate layer for passing data from the ETL framework
 * to the MR framework.
 */
public abstract class DataWriter {
  /**
   * Write an array of objects into the execution framework
   * @param array - data to be written
   */
  public abstract void writeArrayRecord(Object[] array);
  /**
   * Write data into execution framework as text. The Intermediate Data Format
   * may choose to convert the data to another format based on how the data
   * format is implemented
   * @param text - data represented as CSV text.
   */
  public abstract void writeStringRecord(String text);
  /**
   * Write data in the intermediate data format's native format.
   * @param obj - data to be written
   */
  public abstract void writeRecord(Object obj);
}
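
As a rough illustration of how these two abstractions pair up, the sketch below wires a trivial in-memory queue between a DataWriter and a DataReader. This is not Sqoop's execution engine; the queue, class names, and the main method are invented purely to show the hand-off of object-array records between the writer side (extractor) and the reader side (loader).

Code Block
languagejava
titleIllustration: DataWriter/DataReader hand-off (sketch)
collapsetrue
import java.util.ArrayDeque;
import java.util.Queue;

// (package imports for the DataReader/DataWriter classes shown above are omitted)

/** Illustrative only: an in-memory channel standing in for the execution engine. */
public class InMemoryChannelDemo {

  static final Queue<Object[]> CHANNEL = new ArrayDeque<>();

  /** Writer side: what a FROM connector's extractor would be handed. */
  static class QueueDataWriter extends DataWriter {
    @Override
    public void writeArrayRecord(Object[] array) { CHANNEL.add(array); }
    @Override
    public void writeStringRecord(String text) { CHANNEL.add(new Object[] { text }); }
    @Override
    public void writeRecord(Object obj) { CHANNEL.add(new Object[] { obj }); }
  }

  /** Reader side: what a TO connector's loader would be handed. */
  static class QueueDataReader extends DataReader {
    @Override
    public Object[] readArrayRecord() { return CHANNEL.poll(); }
    @Override
    public String readTextRecord() {
      Object[] row = CHANNEL.poll();
      return row == null ? null : String.valueOf(row[0]);
    }
    @Override
    public Object readContent() { return CHANNEL.poll(); }
  }

  public static void main(String[] args) throws Exception {
    DataWriter writer = new QueueDataWriter();
    writer.writeArrayRecord(new Object[] { 1L, "alice", true });   // extractor side

    DataReader reader = new QueueDataReader();
    Object[] row = reader.readArrayRecord();                       // loader side
    System.out.println(row.length + " columns read");              // prints "3 columns read"
  }
}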

 

 

  • The IDF API is heavily influenced by the low-level read/write APIs above, and it dictates that each custom implementation support the following 3 formats:
  1. Native format - each row in the data source is a native object; for instance, in a JSONIDF an entire row and its fields would be represented as a JSON object, while in an AvroIDF an entire row and its fields would be represented as an Avro record
  2. CSV text format - each row and its fields are represented as CSV text
  3. Object Array format - each field in the row is an element in the object array; hence a row in the data source is represented as an object array

 

Code Block
languagejava
titleIntermediateDataFormat API
collapsetrue
public abstract class IntermediateDataFormat<T> {
  protected volatile T data;

  /**
   * Get one row of data.
   *
   * @return - One row of data, represented in the internal/native format of
   *         the intermediate data format implementation.
   */
  public T getData() {
    return data;
  }
  /**
   * Set one row of data. If validate is set to true, the data is validated
   * against the schema.
   *
   * @param data - A single row of data to be moved.
   */
  public void setData(T data) {
    this.data = data;
  }
  /**
   * Get one row of data as CSV text. Use {@link #SqoopIDFUtils} for reading and writing
   * into the sqoop specified CSV text format for each {@link #ColumnType} field in the row
   * Why a "native" internal format and then return CSV text too?
   * Imagine a connector that moves data from a system that stores data as a
   * serialization format called FooFormat. If I also need the data to be
   * written into HDFS as FooFormat, the additional cycles burnt in converting
   * the FooFormat to text and back is useless - so using the sqoop specified
   * CSV text format saves those extra cycles
   * <p/>
   * Most fast access mechanisms, like mysqldump or pgsqldump write the data
   * out as CSV, and most often the source data is also represented as CSV
   * - so having a minimal CSV support is mandated for all IDF, so we can easily read the
   * data out as text and write as text.
   * <p/>
   * @return - String representing the data in CSV text format.
   */
  public abstract String getCSVTextData();
  /**
   * Set one row of data as CSV.
   */
  public abstract void setCSVTextData(String csvText);
  /**
   * Get one row of data as an Object array. Sqoop uses defined object representation
   * for each column type. For instance org.joda.time to represent date.
   * Use {@link #SqoopIDFUtils} for reading and writing into the sqoop
   * specified object format for each {@link #ColumnType} field in the row
   * </p>
   * @return - String representing the data as an Object array
   * If FROM and TO schema exist, we will use SchemaMatcher to get the data according to "TO" schema
   */
  public abstract Object[] getObjectData();
  /**
   * Set one row of data as an Object array. It also should construct the data representation
   * that the IDF represents so that the object is ready to consume when getData is invoked.
   * Custom implementations will override this method to convert from object array
   * to the data format
   */
  public abstract void setObjectData(Object[] data);
  /**
   * Set the schema for serializing/de-serializing data.
   *
   * @param schema
   *          - the schema used for serializing/de-serializing data
   */
  public void setSchema(Schema schema) {
    if (schema == null) {
      // TODO(SQOOP-1956): throw an exception since working without a schema is dangerous
      return;
    }
    this.schema = schema;
    // ...
  }
  /**
   * Serialize the fields of this object to <code>out</code>.
   *
   * @param out <code>DataOutput</code> to serialize this object into.
   * @throws IOException
   */
  public abstract void write(DataOutput out) throws IOException;
  /**
   * Deserialize the fields of this object from <code>in</code>.
   *
   * <p>For efficiency, implementations should attempt to re-use storage in the
   * existing object where possible.</p>
   *
   * @param in <code>DataInput</code> to deserialize this object from.
   * @throws IOException
   */
  public abstract void read(DataInput in) throws IOException;
  /**
   * Provide the external jars that the IDF depends on
   * @return set of jars
   */
  public Set<String> getJars() {
    return new HashSet<String>();
  }
  @Override
  public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + ((data == null) ? 0 : data.hashCode());
    result = prime * result + ((schema == null) ? 0 : schema.hashCode());
    return result;
  }
}
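
To make the three representations concrete, the sketch below shows the same hypothetical row (id=1, name="alice", active=true) in each form. The values and class name are illustrative only; the exact CSV encoding rules per column type are given in the table further below.

Code Block
languagejava
titleIllustration: the same row in the three formats (sketch)
collapsetrue
/** Illustrative values only; the exact escaping rules appear in the column-type table below. */
public class ThreeViewsOfOneRow {
  public static void main(String[] args) {
    // 1. Object Array format: one element per field, using the Sqoop-prescribed Java types.
    Object[] objectArrayRow = new Object[] { 1L, "alice", true };

    // 2. CSV text format: fields joined by commas, TEXT fields enclosed in single quotes.
    String csvTextRow = "1,'alice',true";

    // 3. Native format: whatever the IDF's type parameter T is. For CSVIntermediateDataFormat the
    //    native row is the CSV string itself; a JSON-based IDF would hold a JSON object instead.
    Object nativeRow = csvTextRow;

    System.out.println(objectArrayRow.length + " fields, CSV: " + csvTextRow + ", native: " + nativeRow);
  }
}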
  

NOTE: The CSV text format and the Object Array format are custom to Sqoop, and the details of these formats for every supported column/field type in the schema are described below.

...

Column is an abstraction representing a field in a row. There are custom classes for subtypes such as String, Number, Date, Map, and Array. It carries attributes that provide metadata about the column data, such as: is the field nullable? If it is a String, what is its max size? If it is a DateTime, does it support a timezone? If it is a Map, what are the types of the key and the value? If it is an Array, what is the type of its elements? If it is an Enum, what are its supported options?

Code Block
languagejava
titleColumn
collapsetrue
/**
 * Base class for all the supported types in the Sqoop {@link #Schema}
 */
public abstract class Column {
  /**
   * Name of the column. It is optional
   */
  String name;
  /**
   * Whether the column value can be empty/null
   */
  Boolean nullable;
  /**
   * By default a column is nullable
   */

...
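To make the attribute list above concrete, here is a self-contained sketch of the idea. These are deliberately NOT the real org.apache.sqoop.schema.type classes (whose names and constructors differ); the sketch only shows how subtype-specific metadata such as max size, timezone support, or key/value types hangs off each column, and how a schema is simply the ordered list of columns for a row.

Code Block
languagejava
titleIllustration: column metadata and a row schema (sketch, not the real classes)
collapsetrue
import java.util.ArrayList;
import java.util.List;

/** Self-contained sketch only; the real classes live in org.apache.sqoop.schema(.type). */
class SketchColumn {
  String name;
  Boolean nullable = Boolean.TRUE;          // by default a column is nullable
  SketchColumn(String name) { this.name = name; }
}

class SketchText extends SketchColumn {
  Long maxSize;                             // only meaningful for string-like columns
  SketchText(String name, Long maxSize) { super(name); this.maxSize = maxSize; }
}

class SketchDateTime extends SketchColumn {
  Boolean hasTimezone;                      // decides DateTime vs LocalDateTime in the object format
  SketchDateTime(String name, Boolean hasTimezone) { super(name); this.hasTimezone = hasTimezone; }
}

class SketchMap extends SketchColumn {
  SketchColumn keyType;                     // type of the map keys
  SketchColumn valueType;                   // type of the map values
  SketchMap(String name, SketchColumn key, SketchColumn value) {
    super(name); this.keyType = key; this.valueType = value;
  }
}

/** A row schema is just the ordered collection of columns. */
class SketchSchema {
  final List<SketchColumn> columns = new ArrayList<>();
  SketchSchema add(SketchColumn c) { columns.add(c); return this; }
}

A hypothetical "users" schema would then be assembled as new SketchSchema().add(new SketchText("name", 255L)).add(new SketchDateTime("created_at", true)).
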

For each column type, the expected CSV format and object format are listed below.

NULL value in a field
  • CSV Format: the literal string NULL (public static final String NULL_FIELD = "NULL";)
  • Object Format: java null

ARRAY
  • CSV Format: encoded as a String (and hence enclosed in single quotes); inside is the JSON encoding of the top-level array elements (so the entire value is enclosed in a [] pair). Nested values are not JSON encoded.
  • A few examples:
    • Array of FixedPoint: '[1,2,3]'
    • Array of Text: '["A","B","C"]'
    • Array of Objects of type FixedPoint: '["[11, 12]","[14, 15]"]'
    • Array of Objects of type Text: '["[A, B]","[X, Y]"]'
  • Refer to https://issues.apache.org/jira/browse/SQOOP-1771 for more details
  • Object Format: java Object[]

BINARY
  • CSV Format: byte array enclosed in quotes and encoded with the ISO-8859-1 charset
  • Object Format: java byte[]

BIT
  • CSV Format: true, TRUE, 1 or false, FALSE, 0 (not encoded in quotes); unsupported values should throw an exception
  • Object Format: java boolean

DATE
  • CSV Format: YYYY-MM-DD (no time)
  • Object Format: org.joda.time.LocalDate

DATE_TIME
  • CSV Format: YYYY-MM-DD HH:MM:SS[.ZZZ][+/-XX] (fraction and timezone are optional); refer to https://issues.apache.org/jira/browse/SQOOP-1846 for more details
  • Object Format: org.joda.time.DateTime or org.joda.time.LocalDateTime (depends on the timezone attribute)

DECIMAL
  • CSV Format: BigDecimal (not encoded in quotes)
  • Object Format: java BigDecimal; scale and precision fields are handled via https://issues.apache.org/jira/browse/SQOOP-2027

ENUM
  • CSV Format: same as TEXT
  • Object Format: java String

FIXED_POINT
  • CSV Format: Integer or Long (not encoded in quotes)
  • Object Format: java Integer or java Long (depends on the byteSize and signed attributes); see https://issues.apache.org/jira/browse/SQOOP-2022

FLOATING_POINT
  • CSV Format: Float or Double (not encoded in quotes)
  • Object Format: java Double or java Float (depends on the byteSize attribute); see https://issues.apache.org/jira/browse/SQOOP-2022

MAP
  • CSV Format: encoded as a String (and hence enclosed in single quotes); inside is the JSON encoding of the map (so the entire value is enclosed in a {} pair). Nested values are also encoded as JSON.
  • A few examples:
    • Map<Number, Number>: '{1:20}'
    • Map<String, String>: '{"testKey":"testValue"}'
  • Refer to https://issues.apache.org/jira/browse/SQOOP-1771 for more details
  • Object Format: java.util.Map<Object, Object>

SET
  • CSV Format: same as ARRAY
  • Object Format: java Object[]

TEXT
  • CSV Format: the entire string is enclosed in single quotes and all bytes are printed as they are, with the exception of the following bytes, which are escaped:
    • 0x5C encoded as \\
    • 0x27 encoded as \'
    • 0x22 encoded as \"
    • 0x1A encoded as \Z
    • 0x0D encoded as \r
    • 0x0A encoded as \n
    • 0x00 encoded as \0
  • Object Format: java String

TIME
  • CSV Format: HH:MM:SS[.ZZZ] (fraction is optional); only 3-digit (millisecond) precision is supported for time
  • Object Format: org.joda.time.LocalTime (no timezone)

UNKNOWN
  • CSV Format: same as BINARY
  • Object Format: java byte[]
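
The TEXT escaping rules above can be expressed compactly in code. The sketch below is a stand-alone illustration of those rules only; it is not the actual SqoopIDFUtils implementation, whose helper names and behavior may differ.

Code Block
languagejava
titleIllustration: TEXT CSV escaping rules (sketch)
collapsetrue
/** Stand-alone illustration of the TEXT CSV encoding rules listed above. */
public final class TextCsvEncodingSketch {

  /** Escape the special bytes and wrap the value in single quotes. */
  static String encodeText(String value) {
    StringBuilder sb = new StringBuilder("'");
    for (char c : value.toCharArray()) {
      switch (c) {
        case 0x5C: sb.append("\\\\"); break;  // backslash
        case 0x27: sb.append("\\'");  break;  // single quote
        case 0x22: sb.append("\\\""); break;  // double quote
        case 0x1A: sb.append("\\Z");  break;  // substitute / ctrl-Z
        case 0x0D: sb.append("\\r");  break;  // carriage return
        case 0x0A: sb.append("\\n");  break;  // line feed
        case 0x00: sb.append("\\0");  break;  // NUL
        default:   sb.append(c);
      }
    }
    return sb.append('\'').toString();
  }

  public static void main(String[] args) {
    // Prints: 'it\'s a \"test\"\nline two'
    System.out.println(encodeText("it's a \"test\"\nline two"));
  }
}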

...

CSVIntermediateDataFormat

Relevant JIRA : SQOOP-555 and SQOOP-1350

...

NOTE: It may not be obvious, but the current IDF design expects every new implementation to expose the CSV and Object Array formats in addition to its native format.
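
A skeletal illustration of what this obligation looks like follows. The class name and the deliberately naive String-based "native" format are invented for this sketch (real implementations such as CSVIntermediateDataFormat handle quoting, escaping, and every column type); it only shows the methods from the API excerpt above that a new IDF must fill in.

Code Block
languagejava
titleIllustration: skeletal IDF implementation (sketch)
collapsetrue
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

/**
 * Hypothetical skeleton only (class name and conversion logic invented for this sketch).
 * It uses a plain String as its "native" row so that the three required views are visible;
 * the field handling is deliberately naive and ignores the per-column rules above.
 */
public class ToyIntermediateDataFormat extends IntermediateDataFormat<String> {

  @Override
  public String getCSVTextData() {
    return data;                                    // native form here is already CSV-ish text
  }

  @Override
  public void setCSVTextData(String csvText) {
    this.data = csvText;
  }

  @Override
  public Object[] getObjectData() {
    return data == null ? null : data.split(",");   // naive: no quoting/escaping handled
  }

  @Override
  public void setObjectData(Object[] fields) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < fields.length; i++) {
      if (i > 0) sb.append(',');
      sb.append(fields[i]);
    }
    this.data = sb.toString();
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeUTF(data == null ? "" : data);         // serialize the native row
  }

  @Override
  public void read(DataInput in) throws IOException {
    this.data = in.readUTF();                       // deserialize into the native row
  }
}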

JSONIntermediateDataFormat

Relevant JIRA: SQOOP-1901

Avro Intermediate Data Format

SqoopIDFUtils 

It is a utility class in sqoop that aids connectors in encoding data into the expected CSV format and object format, and in parsing CSV strings back into the prescribed object format.
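
A hedged usage sketch follows; the helper names toCSVString and toText below are assumptions about this utility's surface (they may differ by version), so treat the snippet as illustrative rather than authoritative and verify against the class before use.

Code Block
languagejava
titleIllustration: possible SqoopIDFUtils usage (assumed method names)
collapsetrue
// Assumed helper names; verify against the actual SqoopIDFUtils class in your Sqoop version.
public class SqoopIDFUtilsUsageSketch {
  public static void main(String[] args) {
    // Encode a raw Java String into the Sqoop CSV TEXT representation (quoted + escaped).
    String encoded = SqoopIDFUtils.toCSVString("it's raw text");   // assumed method name
    // Parse the CSV representation back into the plain String object format.
    String decoded = SqoopIDFUtils.toText(encoded);                // assumed method name
    System.out.println(encoded + " -> " + decoded);
  }
}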

No Format
 https://issues.apache.org/jira/browse/SQOOP-1813

 

...

Food for Thought

(Some of the points below are serious shortcomings of the current design as it exists.)

  • The choice of CSV text and Object Array as the mandated formats for the Sqoop IDF is influenced by the Sqoop 1 design. It favors some traditional fast-dump databases, but there is no real benchmark proving how optimal this is compared to using Avro or other formats for representing the data.
  • Using an intermediate format might lead to discrepancies for some specific column types; for instance, using Joda-Time to represent date-time objects only gives 3-digit (millisecond) precision, whereas the SQL timestamp from JDBC sources supports 6-digit precision.
  • More importantly, the SqoopConnector API has a getIDF..() method that ties a connector to a specific intermediate format for all supported directions (i.e., both FROM and TO at this point). This means the connector on the FROM side has to provide this format and the connector on the TO side has to expect it.
  • Each IDF implementation exposes the 3 different formats described above, so a connector can potentially use any one of them, and that is not obvious at all when a connector proclaims to use a particular IDF implementation such as CSVIntermediateDataFormat. For instance, the GenericJDBCConnector says it uses CSVIntermediateDataFormat but chooses to write object arrays in its extractor and read object arrays in its loader; hence it is not obvious which underlying format it will actually read and write. On the other hand, the HDFSConnector also says it uses CSVIntermediateDataFormat but uses only the CSV text format in its extractor and loader at this point (this may change in the future).
  • A connector should arguably be able to handle multiple IDFs and expose the supported IDFs per direction, which is not possible today. For instance, a sqoop job should be able to dynamically choose the IDF for the HDFSConnector when it is used in the TO direction: the job might say "use the Avro IDF for the TO side" and hence load all the data into HDFS in Avro format. This means that, when doing the load, HDFS would use the readContent API of the SqoopOutputFormatDataReader. But today HDFS can only say it uses CSVIntermediateDataFormat, and the data loaded into HDFS would need conversion from CSV to Avro as a separate step.
  • Requiring every IDF to implement a CSV text equivalent is overkill. If we mandate CSV and Object Array as the 2 required formats, we should have made the IDF not just an API but a standard implementation that could be extended further. Imagine having to write a JSONIDF or an AvroIDF and still having to replicate the same logic that the default/degenerate CSVIntermediateDataFormat provides.

 

 

 

 

 

...
