Based on community interest and user feedback, the roadmap is summarized in 7 categories. (discussion mailing list thread)
0.11.0 - Prepare for 1.0.0
- JUnit 5
- Spark 3.4.0
- Minimize default interpreters
- Exclude the Shell interpreter by default to minimize security issues
- Download interpreters easily
- Could use GitHub releases for individual interpreters
- Fluent docker support
- JDK 11
OUTDATED
- Enterprise ready
- Authentication
- Shiro authentication ZEPPELIN-548
- Authorization
- Notebook authorization PR-681
- Security
- Multi-tenancy
- user impersonation
- Stability
- Better memory management for notebook
- Job scheduler
- High availability and Disaster Recovery
- Monitoring (exposing JMX metrics)
- Authentication
- Usability Improvement
- Pluggability ZEPPELIN-533
- Pluggable visualization
- Dynamic Interpreter, notebook, visualization loading
- Repository and registry for pluggable components
- Improve documentation
- Improve contents and readability
- more tutorials, examples
- Interpreter
- Generic JDBC Interpreter
- (spark)R Interpreter
- Cluster manager for interpreter (Proposal)
- more interpreters
- Developer support (including easier debugging)
- Notebook storage
- Versioning ZEPPELIN-540
- more notebook storages (github push/pull)
- Visualization
0.7.0 - Enterprise-ready
- Enterprise support
- multi user support (ZEPPELIN-1337)
- Impersonation
- Job management
- Monitoring support (e.g. JMX)
- Interpreter
- Improve JDBC / Python interpreter
- New Interpreters
- Front end performance improvement
- Pluggable visualization
0.6.0 - Next major release.
- Job management
- New 'Job' menu that displays all job statuses and job histories, and shows scheduled job information at a glance.
- Better Python support
- Better integration with libraries such as matplotlib, and better Python REPL integration (auto-completion, etc.)
- R language support
- Implementation of a SparkR interpreter
- Output streaming
- Stream output to the front-end side.
- Pluggable visualization
- Make visualizations pluggable
- Backend-side pivot
- Run pivot on the backend side so large datasets can be transformed before being transferred to the front-end side.
- Folder structure for notebook
- Let's organize notebooks
- Bring AngularDisplay feature from 'experimental' to 'beta'
0.5.0 - First release in Apache incubator
Focusing on basic features and backend integration
- Tajo Interpreter
- Hive Interpreter
- Flink interpreter
- Any other interpreter that can be included by the release.