Issue Management
Actual issue tracking is in Apache JIRA! We use this page to ground ourselves.
- Please file any issue as "Issue" and not as "Sub-task" (sub-tasks cannot be added to Epics).
- Attach issues to an Epic as much as possible, so work does not scatter around. (see 1.0 Epics)
- Keep issues unassigned, unless you are about to begin working on them.
- Issues must be tagged with Fix Version/s: 1.0.0 to show up on the board.
- If you have a PR up, please ensure the JIRA is in the "Review" state and set the "Reviewers" field to whoever your review is blocked on.
- Vinoth Chandar will move issues from 1.0.0 to 1.1.0 if they do not seem important.
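The board rules above boil down to one JIRA filter. As a minimal sketch, the JQL can be built like this (the HUDI project key and the helper name are assumptions for illustration, not part of any tooling referenced on this page):

```python
def board_jql(project: str = "HUDI", fix_version: str = "1.0.0") -> str:
    """Build the JQL filter implied by the rules above: standard issues
    (not sub-tasks) tagged with the given Fix Version."""
    return (
        f"project = {project} AND issuetype = Issue "
        f'AND fixVersion = "{fix_version}" ORDER BY updated DESC'
    )

# Paste the result into JIRA's advanced issue search to reproduce the board.
print(board_jql())
```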
See the Roadmap to visualize which epics are in which phase.
Sync Meeting Format
Daily at 7pm PST; ping Vinoth Chandar to be added.
- Report status, planned next steps, and call out any blockers/discussion items (1 min each max)
- Update this execution planner and see if we need to change course or adjust plans
- Discuss blockers; live-jam to resolve issues within the bounds of the meeting
Execution Phase 1 (Aug 15-Sept 15)
Focus: Spark, Flink (for NB Concurrency Control)
- (Vinoth) Identify and land all critical outstanding PRs (those that solve critical issues and take us forward on our 1.0 path)
- (Vinoth) to identify: https://github.com/apache/hudi/pulls?q=is%3Apr+is%3Aopen+label%3Arelease-1.0.0
- Land all relevant PRs
- (Sagar) Move master to 1.0.0
- (Sagar & Vinoth & Danny) Land storage format 1.0
- (Vinoth) Put up a 1.0 tech specs doc
- (Sagar) Make all format changes described here. https://issues.apache.org/jira/browse/HUDI-6242
- (Sagar) Standardization of serialization - log blocks, timeline meta files.
- (Sagar) Change Timeline/FileSystemView to support snapshot, incremental, CDC, time-travel queries correctly.
- (Danny) Introduce TrueTime API or equivalent, to explain the foundations more clearly. (reuse HUDI-3057)
- (Sagar) Changes to make multiple base file formats within each file group.
- (Sagar) No Java classes show up in table properties. HUDI-5761
- (Danny) Introduce transition time into the active timeline
- (Danny) Land LSM Timeline in well-tested, performant shape (HUDI-309, HUDI-6626, HUDI-6698)
- Design:
- (Sagar) Multi-table transactions? ( )
- (Lin) Keys: UUIDs vs. what we do today.
- (Vinoth) OCC/Time-Travel Read (+Write)
- (Danny) Time-Travel read on NB CC & finalize NB CC design
- (Danny) TrueTime API implementation for Hudi (wait based, or filesystem/stateless based)
- (Vinoth/Shawn) Cloud native storage layout design (Udit's RFC-60)
- (Ethan???) Logical partitioning/Index Functions API (Java, Native) and its integration into Spark/Presto/Trino. (HUDI-512)
- (Sagar + ???) Schema Evolution and version tracking in MT.
- Implementation
- (Lin) Finalize RFC-46/RecordMerger API, cross-platform support, only invoked for hoodie.merge.mode=custom? (complete HUDI-3217)
- (Ethan) Implement MoR snapshot query (positional/key-based updates, deletes), partial updates, and custom merges on the new File Format code path.
- (Lin) Implement a uniform way to fetch incremental data files based on new timeline (https://issues.apache.org/jira/browse/HUDI-2750)
- (Ethan) Implement writers for positional updates, deletes, partial updates, ordering field based merging.
- (Ethan) Implement engine agnostic FileGroup Read APIs across Spark/Hive
- (Vinoth) Implement DataFrame based write path; Take HoodieData abstraction to completion and end-end row writing for Spark? All write operations work with rows end-end (HUDI-4857)
- (Sagar) Async indexer is in final shape (complete HUDI-2488)
- (Sagar) Existing Optimistic Concurrency Control is in final shape (complete HUDI-1456)
- (Lin) Land Parquet keyed lookup code (???)
- (Danny) Flink/Non-blocking CC (HUDI-6640, HUDI-6495 )
- <what other code refactoring to burn down?> (HUDI-2261, HUDI-6243, HUDI-3614, HUDI-4444, HUDI-4756)
- (Sagar) Open/Risk Items:
- (Ethan/Danny) _hoodie_operation metafield; Spark/Flink interop.
- (Vinoth) Are we happy with the DT <> MT sync mechanism? Does this need to be revisited? (HUDI-2461 + other issues with Flink OCC)
- (Vinoth) Are we happy with how log compaction is implemented? (https://issues.apache.org/jira/browse/HUDI-3580)
- (Vinoth) Should we retain virtual keys support? https://issues.apache.org/jira/browse/HUDI-2235
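The wait-based TrueTime option called out in the design items above (HUDI-3057) can be sketched conceptually. This is not Hudi's actual API; it only illustrates the Spanner-style idea that a timestamp becomes safe to externalize once the clock's uncertainty window has passed. MAX_CLOCK_SKEW_MS is an assumed error bound; a real implementation would derive it from NTP sync quality:

```python
import time

# Assumed clock error bound in milliseconds (illustrative only).
MAX_CLOCK_SKEW_MS = 50

def now_interval():
    """Return a TrueTime-style uncertainty interval (earliest, latest) in ms."""
    t = time.time() * 1000.0
    return (t - MAX_CLOCK_SKEW_MS, t + MAX_CLOCK_SKEW_MS)

def generate_instant_time():
    """Generate a timestamp guaranteed to be in the past on any process
    whose clock is within MAX_CLOCK_SKEW_MS of true time.

    Wait-based approach: take `latest`, then sleep out the uncertainty
    window ("commit wait") so every correct clock has passed it.
    """
    _, latest = now_interval()
    time.sleep(2 * MAX_CLOCK_SKEW_MS / 1000.0)
    return latest

t1 = generate_instant_time()
t2 = generate_instant_time()
assert t1 < t2  # successive instants come out ordered
```

The trade-off this sketch exposes: a wait-based generator costs latency on every instant, which is why a filesystem/stateless alternative is listed as an option.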
Execution Phase 2 (Sept 15-Oct 30)
- APIs: (https://issues.apache.org/jira/browse/HUDI-4141)
- FileGroup APIs in Java
- Rust/C++ APIs for Timeline, Metadata, FileGroup Read/Write (https://issues.apache.org/jira/browse/HUDI-6486)
- Internal APIs/Abstractions/Code Refactoring (https://issues.apache.org/jira/browse/HUDI-6243)
- HUDI-43
- HoodieSchema ? https://issues.apache.org/jira/browse/HUDI-6499
- Design
- General purpose, global timeline (no active vs archived distinction) (HUDI-309)
- Non-blocking concurrency control/clustering + updates, inserts + inserts for Spark + Flink.
- Spark SQL statements to complete the DB vision. (Vinoth has a list ???)
- (Vinoth) Lance file format + storing blobs/images. (Needs an epic)
- Implementation
- Multi-table transaction
- Secondary indexes (Bloom, RLI, VectorIndex, ..) on Spark read/write path. (HUDI-3907, HUDI-4128)
- MT/RLI on Parquet base files
- Minimize configs and cleanup defaults (https://issues.apache.org/jira/browse/HUDI-1239)
- Meta Sync to Glue/HMS with reduced storage/API overhead (HUDI-2519, HUDI-5108, HUDI-6488), seamless inc query, cdc query, ro/rt experience
- Broader Performance improvements (HUDI-3249)
- Encoding updates as deletes + inserts. (HUDI-6490)
- SQL experience for timeline, metadata. (HUDI-6498)
- [???] Parquet Rewriting at Page Level for Spark Rows (Writer perf) (HUDI-4790)
- Introduce HudiStorage APIs to abstract out Hadoop FileSystem. (HUDI-6497)
- MT integration across Presto, Trino (HUDI-4552, HUDI-4394)
- Presto : Snapshot, Incremental, Time Travel, CDC queries (on MT) (https://issues.apache.org/jira/browse/HUDI-3210)
- Trino: (repeat above https://issues.apache.org/jira/browse/HUDI-2687)
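The "encoding updates as deletes + inserts" item (HUDI-6490) can be illustrated with a toy sketch. The event types and helpers here are hypothetical and say nothing about Hudi's actual log-block format; they only show the rewrite itself:

```python
from dataclasses import dataclass

# Hypothetical event types, purely for illustration.
@dataclass(frozen=True)
class Delete:
    key: str

@dataclass(frozen=True)
class Insert:
    key: str
    value: dict

def encode_update(key: str, new_value: dict):
    """Rewrite an update as a delete of the old version plus an insert of
    the new one, so writers never modify existing records in place."""
    return [Delete(key), Insert(key, new_value)]

def replay(events, table: dict) -> dict:
    """Apply events over a key -> value snapshot."""
    for e in events:
        if isinstance(e, Delete):
            table.pop(e.key, None)
        else:
            table[e.key] = e.value
    return table

snapshot = {"k1": {"v": 1}}
replay(encode_update("k1", {"v": 2}), snapshot)
```

One reason this encoding is attractive for non-blocking concurrency control: concurrent writers can each append their delete+insert pairs without coordinating on the record's current location.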
Packaging Phase (Nov 1 - Nov 15) (Marked 1.1.0 for now)
- Release (if still pending!)
- Docs
- Examples
- Bundles & Packages (HUDI-3529)
- Site updates
- Deprecate/Cleanup cWiki
Below the line (Marked 1.1.0 for now)
- Unstructured Hudi table.
- Implement Non-blocking CC for Spark...
- Native HFile reader/writer in Hudi. (VC: This was punted since we'd default to Parquet based MDT)
- Streaming Performance: optimize the current upsert DAG on MetadataIndex (hybrid of RLI, Bloom Index, ....)
- Column family use-case (sparse rows on wide tables??)
- Cool new indexes
- Spatial Index
- Search/Lucene Index
- Bitmap Index
- Hive Storage Handler
- Demos
- Killer dbt demo (https://issues.apache.org/jira/browse/HUDI-6586)
- Dev Hygiene
- Tests