Airavata

Local user interface for Airavata MFT

Note: this is an issue on GitHub (https://github.com/apache/airavata-mft/issues/114), cross-posted in Jira for GSoC purposes.

Currently, Airavata MFT can be accessed through its command-line interface and its gRPC API. However, it would be much easier to use if a Docker Desktop-like user interface were provided for a locally running Airavata MFT. The functionality of such an interface can be summarized as follows:

  1. Start / Stop MFT Instance
  2. Register/ List/ Remove Storage endpoints
  3. Access data (list, download, delete, upload) in configured storage endpoints
  4. Move data between storage endpoints
  5. Search data across multiple storage endpoints
  6. Analytics - Performance numbers (data transfer rates in each agent)

We can use ElectronJS to develop this cross-platform user interface. The Node.js backend of ElectronJS can use gRPC to connect to Airavata MFT to perform management operations.
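
As a flavor of the pattern, here is a minimal sketch (in Python rather than Node.js, purely for brevity) of how such a UI backend could probe a locally running MFT gRPC endpoint, e.g. for the Start/Stop status indicator; the port number is a placeholder, and real management calls would use stubs generated from MFT's .proto files.

  import grpc

  channel = grpc.insecure_channel("localhost:7003")  # port is a placeholder
  try:
      # Block until the channel is ready, or give up after 3 seconds.
      grpc.channel_ready_future(channel).result(timeout=3)
      print("MFT gRPC endpoint is reachable")
  except grpc.FutureTimeoutError:
      print("MFT does not appear to be running")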

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Suresh Marru, mail: smarru (at) apache.org
Project Devs, mail: dev (at) airavata.apache.org

Apache NuttX

NuttX NAND Flash Subsystem

Currently, NuttX supports only NOR Flash and eMMC as solid-state storage.

Although NOR Flash is still widely used in low-end embedded systems, NAND Flash is a better option for devices that need bigger storage because its price per MB is very low.

On the other hand, NAND Flash brings many challenges: you need to map and track all the bad blocks, and you need a good filesystem for wear leveling. Currently, SmartFS and LittleFS offer some form of wear leveling for NOR Flash; it needs to be adapted to NAND Flash.
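
To make the bad-block challenge concrete, here is a small illustrative sketch (plain Python for readability; a real implementation would live in NuttX's C MTD layer): logical blocks are remapped onto known-good physical blocks, and blocks that fail at runtime are retired and replaced from a reserve pool.

  class BadBlockMapper:
      """Toy logical-to-physical NAND block map with a spare pool."""

      def __init__(self, total_blocks, reserved, factory_bad):
          # The last `reserved` physical blocks form the spare pool.
          self.reserve = [b for b in range(total_blocks - reserved, total_blocks)
                          if b not in factory_bad]
          usable = [b for b in range(total_blocks - reserved)
                    if b not in factory_bad]
          self.map = dict(enumerate(usable))  # logical block -> physical block

      def physical(self, logical):
          return self.map[logical]

      def retire(self, logical):
          # A program/erase failure was detected: remap from the reserve pool.
          if not self.reserve:
              raise RuntimeError("out of spare blocks")
          self.map[logical] = self.reserve.pop()
          return self.map[logical]

  mapper = BadBlockMapper(total_blocks=1024, reserved=24, factory_bad={7, 300})
  print(mapper.physical(7))   # logical 7 is backed by a good physical block
  print(mapper.retire(7))     # runtime failure: remapped from the reserve pool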

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Alan Carvalho de Assis, mail: acassis (at) apache.org
Project Devs, mail: dev (at) nuttx.apache.org

Rust integration on NuttX

The Rust language is gaining momentum as an alternative to C and C++ for embedded systems (https://www.rust-lang.org/what/embedded), and it would be very useful to be able to develop NuttX applications in Rust.

Some time ago, Yoshiro Sugino ported the Rust standard libraries, but it was not a complete port and was not integrated into NuttX. Still, this initial port could be used as a starting point for a student willing to add official Rust support to NuttX.

This project also needs to pave the way for developing NuttX drivers in Rust as a complement to C drivers.

Difficulty: Normal
Project size: ~350 hour (large)
Potential mentors:
Alan Carvalho de Assis, mail: acassis (at) apache.org
Project Devs, mail: dev (at) nuttx.apache.org

Device Tree support for NuttX

Device Tree will simplify the way boards are configured to support NuttX. Currently, for each board, the developer/user needs to manually create an initialization file for each feature or device (except when the device is already in the common board folder).

Matias Nitsche (aka v0id) created a very descriptive and informative explanation here: https://github.com/apache/incubator-nuttx/issues/1020

The goal of this project is to add Device Tree support to NuttX and make it configurable (low-end boards should be able to avoid using Device Tree, for instance).

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Alan Carvalho de Assis, mail: acassis (at) apache.org
Project Devs, mail: dev (at) nuttx.apache.org

Micro-ROS integration on NuttX

Micro-ROS (https://micro.ros.org) brings ROS 2 support to microcontrollers. Initially the project was developed on top of NuttX by Bosch and other EU organizations. Later they added support for FreeRTOS and Zephyr. After that, NuttX support started aging, and we didn't get anyone working to fix it (with a few exceptions, like Roberto Bucher's work to test it with pysimCoder).

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Alan Carvalho de Assis, mail: acassis (at) apache.org
Project Devs, mail: dev (at) nuttx.apache.org

Add X11 graphic support on NuttX using NanoX

NanoX/Microwindows is a small graphics library that allows Unix/Linux X11 applications to run on embedded systems that cannot support an X server because it is too big. Adding it to NuttX will allow many applications to be ported to NuttX. More importantly, it will allow FLTK 1.3 to run on NuttX, and that could bring the Dillo web browser.

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Alan Carvalho de Assis, mail: acassis (at) apache.org
Project Devs, mail: dev (at) nuttx.apache.org

TinyGL support on NuttX

TinyGL is a small 3D graphics library created by Fabrice Bellard (the creator of QEMU) and designed for embedded systems. Currently, NuttX RTOS doesn't have a 3D library, and TinyGL could enable people to write more 3D programs on NuttX.

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Alan Carvalho de Assis, mail: acassis (at) apache.org
Project Devs, mail: dev (at) nuttx.apache.org

SkyWalking

[GSOC] [SkyWalking] Self-Observability of the query subsystem in BanyanDB

Background

SkyWalking BanyanDB is an observability database that aims to ingest, analyze, and store metrics, tracing, and logging data.

Objectives

  1. Support EXPLAIN[1] for both measure query and stream query
  2. Add self-observability including trace and metrics for query subsystem
  3. Support EXPLAIN in the client SDK & CLI and add query plan visualization in the UI

[1]: EXPLAIN in MySQL

Recommended Skills

  1. Familiar with Go
  2. Have a basic understanding of database query engines
  3. Have experience with Apache SkyWalking or other APMs

Mentor

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Jiajing Lu, mail: lujiajing (at) apache.org
Project Devs, mail: dev (at) skywalking.apache.org

[GSOC] [SkyWalking] Add Overview page in BanyanDB UI

Background

SkyWalking BanyanDB is an observability database that aims to ingest, analyze, and store metrics, tracing, and logging data.


The BanyanDB UI is a web interface provided by the BanyanDB server. It's developed with Vue 3 and Vite 3.

Objectives

The UI should have a user-friendly Overview page.
The Overview page must display a list of nodes running in a cluster.
For each node in the list, the following information must be shown:

  • Node ID or name
  • Uptime
  • CPU usage (percentage)
  • Memory usage (percentage)
  • Disk usage (percentage)
  • Ports (gRPC and HTTP)

The web app must automatically refresh the node data at a configurable interval to show the most recent information.

Recommended Skills

  1. Familiar with Vue and Vite
  2. Have a basic understanding of RESTful APIs
  3. Have experience with Apache SkyWalking
Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Hongtao Gao, mail: hanahmily (at) apache.org
Project Devs, mail: dev (at) skywalking.apache.org

Doris

[GSoC][Doris]Support UPDATE for Doris Duplicate Key Table

Objectives

Support UPDATE for Doris Duplicate Key Table

Currently, Doris supports three data models: Duplicate Key, Aggregate Key, and Unique Key. Of these, Unique Key has mature data-update support (including the UPDATE statement). With the widespread popularity of Doris, users have more demands on it. For example, some users need to perform ETL processing inside Doris, but they use Duplicate Key tables and hope that Duplicate Key can also support UPDATE. For Duplicate Key, since there is no primary key to help us locate one specific row, UPDATE is inefficient: the usual practice is to rewrite all the data, so even if the user only updates one field of a single row, at least the segment file containing that row must be rewritten. A potentially more efficient solution is to implement Duplicate Key by combining Unique Key's Merge-on-Write (MoW) with the auto_increment column. That is, change the underlying implementation of Duplicate Key to use Unique Key MoW and add a hidden auto_increment column to the primary key, so that none of the keys the user writes to the Unique Key MoW table are duplicated. This realizes the semantics of Duplicate Key, and since each row of data then has a unique primary key, we can reuse the UPDATE capability of Unique Key to support UPDATE for Duplicate Key.
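
As a toy illustration of that design (plain Python, not Doris internals): a hidden auto-increment column makes every "duplicate" row uniquely addressable, which is exactly what an efficient UPDATE needs.

  import itertools

  class DuplicateKeyOverMoW:
      """Duplicate Key semantics modeled on a unique-key store."""

      def __init__(self):
          self._auto_id = itertools.count()
          self._rows = {}  # (user_key, hidden_id) -> row dict

      def insert(self, key, values):
          # Identical user keys get different hidden ids, so both rows
          # survive: Duplicate Key semantics on a unique-key store.
          self._rows[(key, next(self._auto_id))] = dict(values)

      def update(self, predicate, changes):
          # Every row has a unique primary key, so an update touches
          # individual rows instead of rewriting whole segment files.
          for row in self._rows.values():
              if predicate(row):
                  row.update(changes)

  t = DuplicateKeyOverMoW()
  t.insert("user1", {"city": "SF", "score": 1})
  t.insert("user1", {"city": "NY", "score": 2})  # duplicate key is kept
  t.update(lambda r: r["city"] == "NY", {"score": 99})
  print(t._rows)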

We would like participants to help design and implement the solution, and perform performance testing for comparison and performance optimization.

Recommended Skills

Familiar with C++ programming

Familiar with the storage layer of Doris

Mentor

Mentor: Chen Zhang, Apache Doris Committer, chzhang1987@gmail.com

Mentor: Guolei Yi, Apache Doris PMC Member, yiguolei@gmail.com

Mailing List: dev@doris.apache.org

Website: https://doris.apache.org

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Calvin Kirs, mail: kirs (at) apache.org
Project Devs, mail: dev (at) doris.apache.org

[GSoC][Doris]Dictionary encoding optimization

Background

Apache Doris is a modern data warehouse for real-time analytics.
It delivers lightning-fast analytics on real-time data at scale.

Objectives

Dictionary encoding optimization
To save storage space, Doris uses dictionary encoding when storing string-type data in the storage layer if the cardinality is relatively low. Dictionary encoding involves mapping string values to integer values using a dictionary. The data can be stored directly as integers, and the dictionary information is stored separately. When reading the data, the integers are converted back to their corresponding string values based on the dictionary.

The storage layer doesn't know whether a column has low or high cardinality when the data comes in. Currently, the implementation encodes the first page using dictionary encoding, and if the dictionary becomes too large, it indicates a column with high cardinality. Subsequent pages will not use dictionary encoding. However, even for columns with high cardinality, a dictionary page is still retained, which doesn't save storage space and adds additional memory overhead during reading as well as extra CPU overhead during decoding.
Optimizations can be made to improve the memory and CPU overhead caused by dictionary encoding.
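
To make the mechanism and the fallback concrete, here is an illustrative sketch (plain Python, not Doris code) of page-level dictionary encoding: once the dictionary exceeds a threshold, the column is treated as high-cardinality and the page falls back to plain encoding.

  DICT_LIMIT = 4  # tiny threshold for demonstration; real limits are far larger

  def encode_page(values, dictionary):
      codes = []
      for v in values:
          if v not in dictionary:
              if len(dictionary) >= DICT_LIMIT:
                  return None, dictionary  # high cardinality: fall back to plain
              dictionary[v] = len(dictionary)
          codes.append(dictionary[v])
      return codes, dictionary

  def decode_page(codes, dictionary):
      reverse = {code: v for v, code in dictionary.items()}
      return [reverse[c] for c in codes]

  dictionary = {}
  codes, dictionary = encode_page(["cn", "us", "cn", "us"], dictionary)
  assert decode_page(codes, dictionary) == ["cn", "us", "cn", "us"]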

Recommended Skills
 
Familiar with C++ programming
Familiar with the storage layer of Doris
 

Mentor

 
Mentor: Xin Liao, Apache Doris Committer, liaoxinbit@gmail.com
Mentor: YongQiang Yang, Apache Doris PMC Member, dataroaring@gmail.com
Mailing List: dev@doris.apache.org
Website: https://doris.apache.org
Source Code: https://github.com/apache/doris
 
 

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Calvin Kirs, mail: kirs (at) apache.org
Project Devs, mail: dev (at) doris.apache.org

Beam

[GSOC][Beam] Build out Beam Use Cases

Apache Beam is a unified model for defining both batch and streaming data-parallel processing pipelines, as well as a set of language-specific SDKs for constructing pipelines and Runners for executing them on distributed processing backends. On top of providing lower-level primitives, Beam has also introduced several higher-level transforms used for machine learning and some general data processing use cases. This project focuses on identifying and implementing real-world use cases that use these transforms.

Objectives:
1. Add real-world use cases demonstrating Beam's MLTransform for preprocessing data and generating embeddings
2. Add real-world use cases demonstrating Beam's Enrichment transform for enriching existing data with data from a slowly changing source.
3. (Stretch) Implement 1 or more additional "enrichment handlers" for interacting with currently unsupported sources

Useful links:
Apache Beam repo - https://github.com/apache/beam
MLTransform docs - https://beam.apache.org/documentation/transforms/python/elementwise/mltransform/
Enrichment code - https://github.com/apache/beam/blob/master/sdks/python/apache_beam/transforms/enrichment.py
Enrichment docs (should be published soon) - https://github.com/apache/beam/pull/30187
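
For a flavor of what such a use case is built around, here is a minimal sketch using MLTransform from the Beam Python SDK (assuming the tensorflow_transform extras are installed; treat the exact operation as illustrative):

  import apache_beam as beam
  from apache_beam.ml.transforms.base import MLTransform
  from apache_beam.ml.transforms.tft import ComputeAndApplyVocabulary

  with beam.Pipeline() as p:
      _ = (
          p
          | beam.Create([{"text": "beam ml transform"}, {"text": "beam yaml"}])
          # Computes a vocabulary over the column and maps values to indices;
          # artifacts (the vocabulary) are written for reuse at inference time.
          | MLTransform(write_artifact_location="/tmp/mlt_artifacts")
              .with_transform(ComputeAndApplyVocabulary(columns=["text"]))
          | beam.Map(print))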

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Danny McCormick, mail: damccorm (at) apache.org
Project Devs, mail: dev (at) beam.apache.org

[GSOC][Beam] Add connectors to Beam ManagedIO

Apache Beam is a unified model for defining both batch and streaming data-parallel processing pipelines, as well as a set of language-specific SDKs for constructing pipelines and Runners for executing them on distributed processing backends. On top of providing lower-level primitives, Beam has also introduced several higher-level transforms used for machine learning and some general data processing use cases. One new transform that is being actively worked on is a unified ManagedIO transform, which gives runners the ability to manage (upgrade, optimize, etc.) an IO (input source or output sink) without upgrading the whole pipeline. This project will be about adding one or more IO integrations to ManagedIO.

Objectives:
1. Add a BigTable integration to ManagedIO
2. Add a Spanner integration to ManagedIO

Useful links:
Apache Beam repo - https://github.com/apache/beam
Docs on ManagedIO are relatively light since this is a new project, but here are some docs on existing IOs in Beam - https://beam.apache.org/documentation/io/connectors/

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Danny McCormick, mail: damccorm (at) apache.org
Project Devs, mail: dev (at) beam.apache.org

[GSOC][Beam] Build out Beam Yaml features

Apache Beam is a unified model for defining both batch and streaming data-parallel processing pipelines, as well as a set of language-specific SDKs for constructing pipelines and Runners for executing them on distributed processing backends. Beam recently added support for launching jobs using Yaml on top of its other SDKs; this project would focus on adding more features and transforms to the Yaml SDK so that it can be the easiest way to define your data pipelines.

Objectives:
1. Add support for existing Beam transforms (IOs, Machine Learning transforms, and others) to the Yaml SDK
2. Add end-to-end pipeline use cases using the Yaml SDK
3. (stretch) Add Yaml SDK support to the Beam playground

Useful links:
Apache Beam repo - https://github.com/apache/beam
Yaml SDK code + docs - https://github.com/apache/beam/tree/master/sdks/python/apache_beam/yaml
Open issues for the Yaml SDK - https://github.com/apache/beam/issues?q=is%3Aopen+is%3Aissue+label%3Ayaml
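
For a taste of the SDK, here is a minimal sketch that drives a pipeline from YAML inside the Python SDK via YamlTransform (a standalone YAML pipeline file run with python -m apache_beam.yaml.main uses the same transform vocabulary):

  import apache_beam as beam
  from apache_beam.yaml.yaml_transform import YamlTransform

  with beam.Pipeline() as p:
      _ = p | YamlTransform(
          """
          type: chain
          transforms:
            - type: Create
              config:
                elements: [1, 2, 3]
            - type: LogForTesting
          """)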

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Danny McCormick, mail: damccorm (at) apache.org
Project Devs, mail: dev (at) beam.apache.org

Kvrocks

[GSoC] [Kvrocks] Support time series data structure and commands like Redis

RedisTimeSeries is a Redis module used to operate on and query time-series data, giving Redis basic time-series database capabilities.

As Apache Kvrocks is compatible with the Redis protocol and commands, we also hope to provide time-series data processing capabilities that are compatible with RedisTimeSeries.

This task is to implement the time series data structure and its commands on Kvrocks. Note: Since Kvrocks is an on-disk database based on RocksDB, the implementation will be quite different from Redis.
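
One natural direction, since RocksDB keeps keys sorted: encode the series key and timestamp into the storage key so that range queries become ordered scans. A toy sketch of that idea (Python over an in-memory sorted map standing in for RocksDB; the real implementation would be C++ inside Kvrocks):

  import struct
  from sortedcontainers import SortedDict  # stand-in for RocksDB's sorted keys

  db = SortedDict()

  def ts_add(series, timestamp_ms, value):
      # Big-endian packing keeps lexicographic order equal to numeric order.
      db[series + b"|" + struct.pack(">Q", timestamp_ms)] = value

  def ts_range(series, start_ms, end_ms):
      lo = series + b"|" + struct.pack(">Q", start_ms)
      hi = series + b"|" + struct.pack(">Q", end_ms)
      return [(struct.unpack(">Q", k[-8:])[0], db[k]) for k in db.irange(lo, hi)]

  ts_add(b"cpu", 1000, 0.5)
  ts_add(b"cpu", 2000, 0.7)
  print(ts_range(b"cpu", 0, 3000))  # [(1000, 0.5), (2000, 0.7)]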

Recommended Skills

Modern C++, Database Internals (especially for time series databases), Software Engineering and Testing

References

https://redis.io/docs/data-types/timeseries/

https://kvrocks.apache.org/community/data-structure-on-rocksdb

Mentor

Mentor: Mingyang Liu, Apache Kvrocks PMC Member, twice@apache.org

Mailing List: dev@kvrocks.apache.org

Website: https://kvrocks.apache.org

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Mingyang Liu, mail: twice (at) apache.org
Project Devs, mail: dev (at) kvrocks.apache.org

[GSoC] [Kvrocks] Support embedded storage for Kvrocks cluster controller

Currently, the Kvrocks controller supports using multiple external storages like Apache ZooKeeper / etcd and also plans to support more common databases in the future. However, using external components brings extra operational complexity for users. So it would be great if we could support embedded storage inside the controller, making it easier to maintain the controller service.

We would like participants to help design and implement the solution.

Recommended Skills

Familiarity with the Go programming language and knowledge of how the Raft algorithm works.

Mentor

Mentor: Hulk Lin, Apache Kvrocks PMC Member, hulk.website@gmail.com

Mailing List: dev@kvrocks.apache.org

Website: https://kvrocks.apache.org

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Hulk Lin, mail: hulk (at) apache.org
Project Devs, mail: dev (at) kvrocks.apache.org

OpenDAL

Apache OpenDAL ovirtiofs, OpenDAL File System via Virtio

cross posted at https://github.com/apache/opendal/issues/4133


Background

OpenDAL is a data access layer that allows users to easily and efficiently retrieve data from various storage services in a unified way. ovirtiofs can expose OpenDAL's power via virtio, allowing users to mount storage services into a VM or container directly.
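
For context on that "unified way", OpenDAL's bindings expose a single Operator API across services; here is a minimal example using the Python binding (the Rust core that ovirtiofs would build on follows the same model):

  # pip install opendal -- the same Operator works for fs, s3, gcs, and more
  import opendal

  op = opendal.Operator("fs", root="/tmp")
  op.write("hello.txt", b"hello from opendal")
  print(op.read("hello.txt"))
  print(op.stat("hello.txt").content_length)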

Objectives

Features

Similar to virtiofsd

In Scope:

  • Continuous reading
  • Continuous writing
  • Random reading
  • List dir
  • Stat file

Out Scope:

  • Random Write
  • Xattrs
  • Permissions

Tasks

  • Implement features that in scope
  • Implement tests suite

Recommended Skills

  • Familiar with Rust
  • Familiar with basic ideas of file system and virtio
  • Familiar with OpenDAL Rust Core

Mentor

Mentor: Xuanwo, Apache OpenDAL PMC Chair, xuanwo@apache.org
Mailing List: dev@opendal.apache.org

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Hao Ding, mail: xuanwo (at) apache.org
Project Devs, mail: dev (at) opendal.apache.org

Apache OpenDAL ofs, OpenDAL File System via FUSE

Cross posted at https://github.com/apache/opendal/issues/4130


Background

OpenDAL is a data access layer that allows users to easily and efficiently retrieve data from various storage services in a unified way. ofs can expose OpenDAL's power via FUSE, allowing users to mount storage services locally.

Objectives

Implement ofs, allowing users to mount storage services locally for read and write.

Features

In Scope:

  • Continuous reading
  • Continuous writing
  • Random reading
  • List dir
  • Stat file

Out Scope:

  • Random Write
  • Xattrs
  • Permissions

Tasks

  • Implement features that in scope
  • Implement tests suite

Recommended Skills

  • Familiar with Rust
  • Familiar with basic ideas of file system and fuse
  • Familiar with OpenDAL Rust Core

Mentor

Mailing List: dev@opendal.apache.org

Mentor: junouyang, Apache OpenDAL PMC Member, junouyang@apache.org

Please leave comments if you want to be a mentor

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Hao Ding, mail: xuanwo (at) apache.org
Project Devs, mail: dev (at) opendal.apache.org

Apache OpenDAL oftp, OpenDAL FTP Server

cross posted at https://github.com/apache/opendal/issues/4132

Background

OpenDAL is a data access layer that allows users to easily and efficiently retrieve data from various storage services in a unified way. oftp can expose OpenDAL's power via FTP, allowing users to access storage services through the FTP protocol.

Objectives

Features

  • Implement an FTP server based on OpenDAL

Tasks

  • Implement features that in scope
  • Implement tests suite

Recommended Skills

  • Familiar with Rust
  • Familiar with basic ideas of FTP protocol
  • Familiar with OpenDAL Rust Core

Mentor

Mentor: PsiACE, Apache Member, psiace@apache.org
Mailing List: dev@opendal.apache.org
Please leave comments if you want to be a mentor.

Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Hao Ding, mail: xuanwo (at) apache.org
Project Devs, mail: dev (at) opendal.apache.org

Openmeetings

Add blur background filter options on video sharing - AI-ML

OpenMeetings uses WebRTC and HTML5 video to share audio and video, purely browser-based.

One missing feature is the ability to blur your webcam's background.

There are multiple ways to achieve it; Google Meet seems to use https://www.tensorflow.org/.

TensorFlow provides AI/ML models, precompiled into JS; for face/body detection, https://github.com/tensorflow/tfjs-models/tree/master/body-segmentation seems to be the best model.

Since Chrome 114 there is also a Background Blur API (relying on operating-system APIs): https://developer.chrome.com/blog/background-blur - but that doesn't seem to be widely or reliably supported by operating systems yet.

The project would be about adding the background blur to a simple demo and then integrating it into the OpenMeetings project. Additionally, other types of backgrounds can be added.
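
The project itself targets TensorFlow.js in the browser, but as a quick illustration of the same segment-then-blur technique, here is a small Python sketch using MediaPipe's selfie-segmentation model (an assumption made for the demo, not the library the project would use):

  import cv2
  import numpy as np
  import mediapipe as mp

  seg = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
  cap = cv2.VideoCapture(0)
  while cap.isOpened():
      ok, frame = cap.read()
      if not ok:
          break
      # Per-pixel probability that the pixel belongs to a person.
      mask = seg.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).segmentation_mask
      person = (mask > 0.5)[..., None]          # True where a person is detected
      blurred = cv2.GaussianBlur(frame, (55, 55), 0)
      cv2.imshow("background blur", np.where(person, frame, blurred))
      if cv2.waitKey(1) & 0xFF == 27:           # Esc to quit
          break
  cap.release()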

TensorFlow TFJS is under the Apache 2.0 License (see LICENSE) and should be possible to redistribute with Apache OpenMeetings.

Other live demos and examples:

https://blog.francium.tech/edit-live-video-background-with-webrtc-and-tensorflow-js-c67f92307ac5



Difficulty: Major
Project size: ~350 hour (large)
Potential mentors:
Sebastian Wagner, mail: sebawagner (at) apache.org
Project Devs, mail: dev (at) openmeetings.apache.org