...
- list all resources
- get information for a resource
  - column names and types, for tables
  - preview, for tables
- get a resource
  - if the resource is a table, it should be downloaded using streaming
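The operations above can be sketched as a minimal in-memory resource pool. This is an illustrative assumption only: the class and method names below (`ResourcePool`, `list_resources`, `info`, `preview`, `stream`) are hypothetical and not Zeppelin's actual API; a real implementation would sit behind a REST endpoint and stream table rows over HTTP.

```python
class ResourcePool:
    """Hypothetical in-memory sketch of the resource operations above."""

    def __init__(self):
        # name -> list of row dicts (tables only, for simplicity)
        self._resources = {}

    def put(self, name, rows):
        self._resources[name] = rows

    def list_resources(self):
        """List all resources."""
        return sorted(self._resources)

    def info(self, name):
        """Get information for a resource: column names and types."""
        rows = self._resources[name]
        first = rows[0] if rows else {}
        return {col: type(val).__name__ for col, val in first.items()}

    def preview(self, name, limit=10):
        """Preview for tables: the first `limit` rows."""
        return self._resources[name][:limit]

    def stream(self, name, chunk_size=100):
        """Get a resource; tables are yielded in chunks rather than
        materialized at once, mirroring a streaming download."""
        rows = self._resources[name]
        for i in range(0, len(rows), chunk_size):
            yield rows[i:i + chunk_size]
```

Streaming in `stream()` matters for the last requirement: a large table should never be fully buffered on the server before download.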
5. Discussion
5.1. How can a user decide to create a TableData instance for sharing the resource?
For interpreters that use SQL:
- provide an interpreter option: create a TableData whenever a paragraph is executed
- or provide a new interpreter magic for it: %spark.sql_share, %jdbc.mysql_share, …
- or automatically put all table results into the resource pool if they are not heavy (e.g. keeping only the query, or just a reference to the RDD)
- if the interpreter supports runtime options, we can use the syntax %jdbc(share=true) to specify whether or not to share the table result
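The "not heavy" option above could look like the following heuristic: store small results by value, and heavy ones only as a reference (e.g. the query text) that can be re-executed on demand. The threshold, names, and dictionary structure are assumptions for illustration, not part of the design.

```python
MAX_SHARED_ROWS = 1000  # illustrative threshold, not a Zeppelin setting

def share_result(pool, name, rows, query):
    """Put a table result into the resource pool, by value or by reference."""
    if len(rows) <= MAX_SHARED_ROWS:
        # light result: share the data itself
        pool[name] = {"kind": "data", "rows": rows}
    else:
        # heavy result: keep only the query so it can be re-run on demand
        pool[name] = {"kind": "reference", "query": query}
```

The same by-reference idea applies to RDD results: instead of the query text, the pool entry would hold a handle to the RDD.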
For interpreters that use a programming language (e.g. Python):
- provide an API like z.put()
```scala
// infer the instance type and convert it to a predefined
// `TableData` subclass such as `SparkDataFrameTableData`
z.put("myTable01", myDataFrame01)

// or force the user to put a `TableData` subclass explicitly
val myTableData01 = new SparkRDDTableData(myRdd01)
z.put("myTable01", myTableData01)
```
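The type-inference step in the first z.put() call above could be sketched as a simple dispatch table checked with isinstance. Everything here is a hypothetical Python analogue for illustration: the class names (`ListTableData`, `DictTableData`) and the converter registry are assumptions, not Zeppelin's implementation.

```python
class TableData:
    """Base class for shareable table results."""
    def __init__(self, value):
        self.value = value

class DictTableData(TableData):
    """Wraps a mapping of column name to values."""

class ListTableData(TableData):
    """Wraps a plain list of rows."""

# ordered (type, wrapper) pairs, checked first to last
_CONVERTERS = [
    (dict, DictTableData),
    (list, ListTableData),
]

def infer_table_data(value):
    """Return `value` wrapped in a matching TableData subclass."""
    # already a TableData subclass: use it as-is (the second z.put form)
    if isinstance(value, TableData):
        return value
    for typ, wrapper in _CONVERTERS:
        if isinstance(value, typ):
            return wrapper(value)
    raise TypeError(f"cannot convert {type(value).__name__} to TableData")
```

Registering converters this way keeps z.put() open for new backends: a Spark interpreter would register its DataFrame type with a `SparkDataFrameTableData` wrapper.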
...