...
For interpreters that use SQL
provide an interpreter option: create a TableData whenever a paragraph is executed
or provide a new interpreter magic for it: %spark.sql_share, %jdbc.mysql_share, …
or automatically put all table results into the resource pool if they are not heavy (e.g. keeping only the query, or just a reference to the RDD; see the sketch after this list)
If the interpreter supports runtime interpreter parameters, we can use this syntax: %jdbc(share=true) to specify whether or not to share the table result
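To illustrate the "not heavy" option above: a TableData implementation can hold just a handle to the underlying data rather than materialized rows, so putting it into the resource pool is cheap. The Scala sketch below is an assumption about what such a subclass might look like; the `TableData` trait members shown here are hypothetical, not the actual Zeppelin interface, and only the `SparkDataFrameTableData` name comes from this proposal.

```scala
import scala.collection.JavaConverters._
import org.apache.spark.sql.{DataFrame, Row}

// Hypothetical sketch (not the actual Zeppelin API): a TableData that stays
// "light" by keeping only a reference to the DataFrame instead of copying
// rows when it is put into the resource pool.
trait TableData extends Serializable {
  def columns: Seq[String]
  def rows: Iterator[Row]
}

class SparkDataFrameTableData(df: DataFrame) extends TableData {
  override def columns: Seq[String] = df.columns.toSeq
  // toLocalIterator() streams partitions one at a time instead of collect()-ing
  // everything; rows are materialized only when a consumer iterates.
  override def rows: Iterator[Row] = df.toLocalIterator().asScala
}
```

The cost of sharing is deferred this way: nothing is computed at share time, and a consumer pays for materialization only if and when it actually reads the rows.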
For interpreters that use a programming language (e.g. Python)
provide an API like z.put()
```scala
// infer the instance type and convert it to a predefined `TableData`
// subclass such as `SparkDataFrameTableData`
z.put("myTable01", myDataFrame01)

// or force the user to put a `TableData` subclass explicitly
val myTableData01 = new SparkRDDTableData(myRdd01)
z.put("myTable01", myTableData01)
```
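A consumer paragraph could then read the shared table back out of the resource pool. The `z.get()` call below is an assumption mirroring `z.put()`; since the pool would store plain objects, the consumer casts the result, and `TableData` here refers to the hypothetical trait sketched earlier.

```scala
// In a later paragraph (possibly a different interpreter), fetch the shared
// table from the resource pool. z.get() mirroring z.put() is an assumption;
// the pool stores untyped objects, so the consumer casts back to TableData.
val shared = z.get("myTable01").asInstanceOf[TableData]

// Print the column names and the first few rows.
println(shared.columns.mkString(", "))
shared.rows.take(10).foreach(println)
```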
...