[GSoC][SkyWalking] Self-Observability of the query subsystem in BanyanDB

Background

SkyWalking BanyanDB is an observability database that aims to ingest, analyze, and store Metrics, Tracing, and Logging data.

Objectives

  1. Support EXPLAIN[1] for both measure queries and stream queries
  2. Add self-observability, including traces and metrics, for the query subsystem
  3. Support EXPLAIN in the client SDK & CLI, and add query plan visualization in the UI

[1]: EXPLAIN in MySQL

Recommended Skills

  1. Familiar with Go
  2. Basic understanding of database query engines
  3. Experience with Apache SkyWalking or other APMs

Mentor

Difficulty: Major
Project size: ~350 hours (large)
Potential mentors:
Jiajing Lu, mail: lujiajing (at) apache.org
Project Devs, mail: dev (at) skywalking.apache.org

Doris

[GSoC][Doris] Dictionary encoding optimization

Background

Apache Doris is a modern data warehouse for real-time analytics.
It delivers lightning-fast analytics on real-time data at scale.

Objectives

Dictionary encoding optimization
To save storage space, Doris uses dictionary encoding when storing string-type data in the storage layer if the cardinality is relatively low. Dictionary encoding involves mapping string values to integer values using a dictionary. The data can be stored directly as integers, and the dictionary information is stored separately. When reading the data, the integers are converted back to their corresponding string values based on the dictionary.

The storage layer doesn't know whether a column has low or high cardinality when the data comes in. Currently, the implementation encodes the first page using dictionary encoding, and if the dictionary becomes too large, it indicates a column with high cardinality. Subsequent pages will not use dictionary encoding. However, even for columns with high cardinality, a dictionary page is still retained, which doesn't save storage space and adds additional memory overhead during reading as well as extra CPU overhead during decoding.
Optimizations can be made to reduce the memory and CPU overhead caused by dictionary encoding in these cases.
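To make the encoding scheme concrete, here is a minimal conceptual sketch (in TypeScript purely for illustration; Doris implements this in C++ in its storage layer, and the class name, method names, and threshold below are hypothetical) of dictionary-encoding a string column with a fallback once the dictionary grows too large:

```typescript
// Illustrative sketch only, not Doris code: names and threshold are made up.
class DictPageBuilder {
  private dict = new Map<string, number>();   // string value -> integer code
  private codes: number[] = [];               // encoded column values for this page

  constructor(private maxDictSize = 1024) {}  // hypothetical fallback threshold

  // Returns false once the dictionary grows past the threshold, signalling a
  // high-cardinality column for which later pages should use plain encoding.
  add(value: string): boolean {
    let code = this.dict.get(value);
    if (code === undefined) {
      if (this.dict.size >= this.maxDictSize) return false; // fall back
      code = this.dict.size;
      this.dict.set(value, code);
    }
    this.codes.push(code);
    return true;
  }

  // The page stores only integer codes; the dictionary itself is stored once,
  // in a separate dictionary page.
  finish(): { codes: number[]; dictionary: string[] } {
    return { codes: this.codes, dictionary: [...this.dict.keys()] };
  }
}

// Reading reverses the mapping: integer codes back to their string values.
function decode(codes: number[], dictionary: string[]): string[] {
  return codes.map((c) => dictionary[c]);
}
```

The pain point described above corresponds to keeping the dictionary page around even after the fallback has been triggered for a high-cardinality column; the optimization work is about avoiding that residual storage, memory, and decoding cost.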

Recommended Skills

Familiar with C++ programming

Familiar with the storage layer of Doris

Mentor

Mentor: Xin Liao, Apache Doris Committer, liaoxinbit@gmail.com

Mentor: YongQiang Yang, Apache Doris PMC Member, dataroaring@gmail.com

Mailing List: dev@doris.apache.org

Website: https://doris.apache.org

Source Code: https://github.com/apache/doris

Difficulty: Major
Project size: ~350 hours (large)
Potential mentors:
Calvin Kirs, mail: kirs (at) apache.org
Project Devs, mail: dev (at) doris.apache.org

[GSoC][Doris] Support UPDATE for Doris Duplicate Key Table

Objectives

Support UPDATE for Doris Duplicate Key Table

Currently, Doris supports three data models: Duplicate Key, Aggregate Key, and Unique Key. Of these, Unique Key has mature data update support (including the UPDATE statement). With the widespread adoption of Doris, users are asking more of it. For example, some users need to perform ETL operations inside Doris, but they use Duplicate Key tables and would like Duplicate Key to support UPDATE as well. For Duplicate Key, since there is no primary key to locate a specific row, UPDATE is inefficient: the usual practice is to rewrite all the data, so even if the user updates only one field of one row, at least the segment file containing that row must be rewritten.

A potentially more efficient solution is to implement Duplicate Key on top of Unique Key's Merge-on-Write (MoW) combined with an auto_increment column. That is, change the underlying implementation of Duplicate Key to use Unique Key MoW and add a hidden auto_increment column to the primary key, so that no two rows written to the Unique Key MoW table share the same key. This preserves Duplicate Key semantics, and since every row now has a unique primary key, the UPDATE capability of Unique Key can be reused to support UPDATE on Duplicate Key tables.

We would like participants to help design and implement this solution, and to carry out performance testing for comparison and optimization; a conceptual sketch of the idea follows.
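The sketch below (TypeScript purely for illustration; Doris itself is C++, and all names here are hypothetical) shows how appending a hidden auto-increment key makes every row of a Duplicate Key table uniquely addressable, which is what allows the existing Unique Key UPDATE path to be reused:

```typescript
// Conceptual sketch of the proposed design, not Doris code: a "Duplicate Key"
// table is emulated by a keyed (Unique Key-like) store whose internal key is
// the user-visible key plus a hidden auto-increment row id.
type Row = Record<string, unknown>;

class DuplicateKeyTable {
  private nextRowId = 0;                  // hidden auto_increment column
  private rows = new Map<string, Row>();  // unique internal key -> latest row version

  insert(userKey: string, row: Row): void {
    // Even identical user keys get distinct internal keys, so duplicates are kept.
    const hiddenKey = `${userKey}#${this.nextRowId++}`;
    this.rows.set(hiddenKey, row);
  }

  // Because every row now has a unique key, UPDATE can locate and overwrite
  // just the matching rows instead of rewriting whole segment files.
  update(predicate: (row: Row) => boolean, patch: Row): number {
    let updated = 0;
    for (const [key, row] of this.rows) {
      if (predicate(row)) {
        this.rows.set(key, { ...row, ...patch }); // merge-on-write style overwrite
        updated++;
      }
    }
    return updated;
  }

  scan(): Row[] {
    return [...this.rows.values()];
  }
}
```

For example, inserting the same user key twice keeps both rows (Duplicate Key semantics), while update() rewrites only the rows whose hidden keys match the predicate.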

Recommended Skills

Familiar with C++ programming

Familiar with the storage layer of Doris

Mentor

Mentor: Chen Zhang, Apache Doris Committer, chzhang1987@gmail.com

Mentor: Guolei Yi, Apache Doris PMC Member, yiguolei@gmail.com

Mailing List: dev@doris.apache.org

Website: https://doris.apache.org

Difficulty: Major
Project size: ~350 hours (large)
Potential mentors:
Calvin Kirs, mail: kirs (at) apache.org
Project Devs, mail: dev (at) doris.apache.org

OpenMeetings

Add blur background filter options on video sharing - AI-ML

OpenMeetings uses WebRTC and HTML5 video to share audio and video. It is purely browser based.

One missing feature is the ability to blur your webcam's background.

There are multiple ways to achieve this; Google Meet seems to use TensorFlow: https://www.tensorflow.org/

TensorFlow provides AI/ML models precompiled to JS; for face/body detection, https://github.com/tensorflow/tfjs-models/tree/master/body-segmentation seems to be the best fit.

Since Chrome 14 there is also a Background Blur API (relying on operating system APIs): https://developer.chrome.com/blog/background-blur - but that doesn't seem to be widely or reliably supported by operating systems yet.

The project would be about adding background blur to a simple demo and then integrating it into the OpenMeetings project. Additionally, other types of backgrounds could be added.

TensorFlow.js (TFJS) is under the Apache 2.0 License (see LICENSE) and should therefore be redistributable with Apache OpenMeetings.
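As a possible starting point for the demo, a minimal sketch using the tfjs-models body-segmentation package and its built-in bokeh (blur) helper could look like the following TypeScript; the element ids, thresholds, and blur amounts are arbitrary placeholder values.

```typescript
import '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-backend-webgl';
import * as bodySegmentation from '@tensorflow-models/body-segmentation';

// Minimal sketch: segment the webcam frame and redraw it with a blurred background.
// Assumes the page has a <video id="webcam"> and a <canvas id="output">.
async function startBlurredPreview(): Promise<void> {
  const video = document.getElementById('webcam') as HTMLVideoElement;
  const canvas = document.getElementById('output') as HTMLCanvasElement;

  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const segmenter = await bodySegmentation.createSegmenter(
    bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation,
    { runtime: 'tfjs' }
  );

  const render = async () => {
    const people = await segmenter.segmentPeople(video);
    // drawBokehEffect keeps the person sharp and blurs everything else.
    await bodySegmentation.drawBokehEffect(
      canvas, video, people,
      0.5,   // foregroundThreshold
      7,     // backgroundBlurAmount
      3,     // edgeBlurAmount
      false  // flipHorizontal
    );
    requestAnimationFrame(render);
  };
  render();
}

startBlurredPreview();
```

The blurred canvas could then be captured with canvas.captureStream() and published through the existing OpenMeetings WebRTC pipeline in place of the raw camera track.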

Other live demos and examples:

https://blog.francium.tech/edit-live-video-background-with-webrtc-and-tensorflow-js-c67f92307ac5



Difficulty: Major
Project size: ~350 hours (large)
Potential mentors:
Sebastian Wagner, mail: sebawagner (at) apache.org
Project Devs, mail: dev (at) openmeetings.apache.org