...
- Allow development of data connectors against a stable API, independent of Sqoop2 implementation internals (such as the choice of execution engine, dependencies on Hadoop components, etc.). For example, the Oracle connector can't assume a tnsnames.ora exists in the environment, and the Kite connector can't assume that hive-site.xml will exist. A connector can, however, still ask for the location of hive-site.xml or tnsnames.ora as an input when creating a link.
- Connectors focus on how to get data in and out of data systems. The framework includes the execution life-cycle: kicking off tasks/workers and so on. We never rely on the framework to handle data reads and writes (even though most frameworks have I/O capability); that is the responsibility of the connectors.
- End-user actions should be exposed through a Java Client API, a REST API, and a command-line utility. All three are mandatory for new features.
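As a concrete illustration of the first principle, a connector can model the location of an external configuration file (such as hive-site.xml or tnsnames.ora) as an ordinary link input supplied at link-creation time, rather than assuming the file exists in the environment. The sketch below is hypothetical Java, not the actual Sqoop2 connector SDK; the class and field names (`LinkConfig`, `configFilePath`) are illustrative assumptions.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Hypothetical sketch: instead of assuming hive-site.xml (or
 * tnsnames.ora) is present in the environment, the connector asks
 * the user for its location as a link input and validates it up
 * front, at link-creation time.
 */
public class LinkConfigSketch {

    /** User-supplied path to an external config file. */
    static final class LinkConfig {
        final String configFilePath;

        LinkConfig(String configFilePath) {
            this.configFilePath = configFilePath;
        }

        /** Reject bad input when the link is created, not later at run time. */
        boolean isValid() {
            if (configFilePath == null || configFilePath.isEmpty()) {
                return false;
            }
            Path p = Paths.get(configFilePath);
            return Files.isReadable(p);
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate link creation with a config file that actually exists.
        Path tmp = Files.createTempFile("hive-site", ".xml");
        LinkConfig good = new LinkConfig(tmp.toString());
        System.out.println("good link valid: " + good.isValid());

        // A missing file is rejected immediately rather than failing mid-job.
        LinkConfig bad = new LinkConfig("/nonexistent/tnsnames.ora");
        System.out.println("bad link valid: " + bad.isValid());

        Files.deleteIfExists(tmp);
    }
}
```

The design point is that the framework stays unaware of connector-specific files; each connector declares and validates its own inputs.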
| Note |
| --- |
| Adding some fun facts about the design is encouraged! |