This page is auto-generated! Please do NOT edit it; all changes will be lost on the next update.
James Server
Adopt Pulsar as the messaging technology backing the distributed James server
https://www.mail-archive.com/server-dev@james.apache.org/msg71462.html
A good long-term objective for the PMC is to drop RabbitMQ in
favor of Pulsar (third parties could package their own components using
RabbitMQ if they wish...)
This means:
- Solve the bugs that were found during the Pulsar MailQueue review
- The Pulsar MailQueue needs to allow listing blobs in order to be deduplication friendly
- Provide an event bus based on Pulsar
- Provide a task manager based on Pulsar
- Package a distributed server backed by Pulsar, then deprecate and replace the current one
- (optionally) Support mail queue priorities
While contributions would of course be welcomed on this topic, we could
offer it as part of GSOC 2022, and we could co-mentor it with mentors of
the Pulsar community (see [3])
[3] https://lists.apache.org/thread/y9s7f6hmh51ky30l20yx0dlz458gw259
Would such a plan gain traction around here?
Implement a web ui for James administration
James today provides a command line tool to do administration tasks like creating a domain, listing users, setting quota, etc.
It requires access to the JMX port and, even if a lot of admins are comfortable with such tools, to make our user base broader we should probably expose the same commands over REST and provide a fancy default web UI.
The task would need some basic skills with frontend tools to design an administration board, knowledge of what REST means, and enough Java understanding to add commands to the existing REST backend.
In the team we have a strong focus on testing (who wants a mail server that is not tested enough?), so we will explain and/or teach the student how to get the right test coverage of the features using modern tools like Cucumber, Selenium, rest-assured, etc.
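To make the scope concrete, here is a minimal, hypothetical sketch of what exposing one admin command (domain creation) over REST could look like, using only the JDK's built-in HTTP server. The route and class names are illustrative only; this is not James' actual webadmin code:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch only; not James' actual webadmin implementation. */
public class AdminApiSketch {
    // In-memory stand-in for the real domain list backend.
    static final Set<String> domains = ConcurrentHashMap.newKeySet();

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/domains", exchange -> {
            String path = exchange.getRequestURI().getPath(); // e.g. /domains/example.com
            if ("PUT".equals(exchange.getRequestMethod())
                    && path.length() > "/domains/".length()) {
                domains.add(path.substring("/domains/".length()));
                exchange.sendResponseHeaders(204, -1); // no content: domain created
            } else {
                exchange.sendResponseHeaders(405, -1); // method not allowed
            }
            exchange.close();
        });
        server.start();
        return server;
    }
}
```

A web UI would then only need to issue the same HTTP calls that the CLI issues over JMX today, which is what makes the REST layer the natural first step.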
[GSOC] James as a (distributed) MX server
Why ?
Alternatives like Postfix...
- Do not offer a unified view of the mail queue across nodes
- Require stateful persistent storage
Given Apache James' recent push to adopt a distributed mail queue based on Pulsar supporting delays (JAMES-3687), it starts making sense to develop MX-related tooling.
I propose myself as a mentor for a GSoC project on this topic.
Benefits for the student
At the end of this GSOC you will...
- Have a solid understanding of email relaying and associated mechanics
- Understand James modular architecture (mailet/ matcher / routes)
- Have a hands-on expertise in SQL / NoSQL working with technologies like Cassandra, Redis, JPA...
- Identify and fix architecture problems.
- Conduct performance tests and develop an operational mindset
Inventory...
James ships a couple of MX-related tools within smtp-hooks/mailets in default packages. It would make sense to me to move those to an extension.
Today James supports...
checks against DNS blacklists: the `DNSRBLHandler` or `URIRBLHandler` SMTP hooks, for instance. These could be moved to an extension IMO.
We would need a little performance benchmark to document the performance implications of activating DNS-RBL.
Finally, as a Gitter user pointed out, it would make more sense to implement this as a MailHook rather than a RcptHook, as that would avoid doing the same work over and over again for each recipient. See JAMES-3820.
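For reference, a DNSBL check is just a DNS lookup against a reversed-octet name; the sketch below (zone name used purely as an illustration) shows how the query name is built, which is the same whether the lookup happens once per message (MailHook) or once per recipient (RcptHook):

```java
/** Sketch: building the query name for an IPv4 DNS blacklist lookup. */
public class DnsblSketch {
    /**
     * Reverses the IPv4 octets and appends the blacklist zone,
     * e.g. 127.0.0.2 + zen.spamhaus.org -> 2.0.0.127.zen.spamhaus.org.
     * A listing is then detected when this name resolves (typically to 127.0.0.x).
     */
    public static String queryName(String ipv4, String zone) {
        String[] o = ipv4.split("\\.");
        return o[3] + "." + o[2] + "." + o[1] + "." + o[0] + "." + zone;
    }
}
```

Doing the lookup once per message instead of once per recipient saves one DNS round-trip per extra recipient, which is the motivation for the MailHook suggestion.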
Grey listing. There's an existing implementation using JDBC as an underlying storage.
Move it as an extension.
Remove the JDBC storage and propose two storage possibilities: in-memory for a single node, Redis for a distributed topology.
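As a sketch of the in-memory variant (all names illustrative, not the existing James implementation): a greylist only needs a map from the (client IP, sender, recipient) triplet to the time it was first seen, plus a retry delay:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative in-memory greylist; a Redis-backed version would swap the map. */
public class GreylistSketch {
    private final Map<String, Instant> firstSeen = new ConcurrentHashMap<>();
    private final Duration retryDelay;

    public GreylistSketch(Duration retryDelay) {
        this.retryDelay = retryDelay;
    }

    /**
     * Returns true when the triplet may pass, i.e. it was first seen at
     * least retryDelay ago. First attempts are temporarily rejected, which
     * is what filters out senders that never retry.
     */
    public boolean accept(String clientIp, String sender, String recipient, Instant now) {
        String key = clientIp + "|" + sender + "|" + recipient;
        Instant first = firstSeen.putIfAbsent(key, now);
        return first != null && !now.isBefore(first.plus(retryDelay));
    }
}
```

The distributed (Redis) variant would replace the map with SETNX-style operations plus a TTL so stale triplets expire on their own.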
Some work around whitelist mailets? Move it as an extension, propose JPA, Cassandra, and XML-configured implementations? With a route to manage entries in there for JPA + Cassandra?
I would expect a student to do their own little audit and come up with extra suggestions!
Beam
[GSoC][Beam] An IntelliJ plugin to develop Apache Beam pipelines and the Apache Beam SDKs
Beam library developers and Beam users would appreciate this : )
This project involves prototyping a few different solutions, so it will be large.
TrafficControl
GSOC Varnish Cache support in Apache Traffic Control
Background
Apache Traffic Control is a Content Delivery Network (CDN) control plane for large scale content distribution.
Traffic Control currently requires Apache Traffic Server as the underlying cache. Help us expand the scope by integrating with the very popular Varnish Cache.
There are multiple aspects to this project:
- Configuration Generation: Write software to build Varnish configuration files (VCL). This code will be implemented in our Traffic Ops and cache client side utilities, both written in Go.
- Health Monitoring: Implement monitoring of the Varnish cache health and performance. This code will run both in the Traffic Monitor component and within Varnish. Traffic Monitor is written in Go and Varnish is written in C.
- Testing: Adding automated tests for new code
Skills:
- Proficiency in Go is required
- A basic knowledge of HTTP and caching is preferred, but not required for this project.
Commons Statistics
GSoC
Placeholder for tasks that could be undertaken in this year's GSoC.
Ideas:
- Design an updated summary statistics API for use with Java 8 streams based on the summary statistic implementations in the Commons Math stat.descriptive package including moments, rank and summary sub-packages.
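To illustrate the stream-friendly direction (a sketch only, not the proposed API), summary statistics can be made combinable so they work with `collect` on sequential and parallel streams, e.g. Welford's online update plus Chan et al.'s parallel combine for the mean and second moment:

```java
import java.util.stream.DoubleStream;

/** Sketch of a combinable accumulator for count, mean and variance. */
public class Moments {
    private long n;
    private double mean;
    private double m2; // sum of squared deviations from the current mean

    /** Welford's online update for one value. */
    public void accept(double x) {
        n++;
        double d = x - mean;
        mean += d / n;
        m2 += d * (x - mean);
    }

    /** Chan et al.'s combine step, required for parallel streams. */
    public void combine(Moments o) {
        if (o.n == 0) return;
        long total = n + o.n;
        double d = o.mean - mean;
        m2 += o.m2 + d * d * ((double) n * o.n / total);
        mean += d * o.n / total;
        n = total;
    }

    public double mean() { return mean; }

    public double sampleVariance() { return n > 1 ? m2 / (n - 1) : Double.NaN; }

    public static Moments of(DoubleStream stream) {
        return stream.collect(Moments::new, Moments::accept, Moments::combine);
    }
}
```

Higher moments (skewness, kurtosis) follow the same accept/combine pattern with additional correction terms, which is what an updated `stat.descriptive` API would formalise.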
Commons Numbers
Add support for extended precision floating-point numbers
Add implementations of extended precision floating point numbers.
An extended precision floating point number is a series of floating-point numbers that are non-overlapping such that:
double-double (a, b): |a| > |b| and a == a + b (evaluated in double precision)
Common representations are double-double and quad-double (see for example David Bailey's paper on a quad-double library: QD).
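The non-overlapping invariant is typically maintained with error-free transformations; as a sketch, Knuth's TwoSum recovers the exact rounding error of a single addition, and repeated application of it is the building block of double-double arithmetic:

```java
/** Sketch: Knuth's branch-free TwoSum error-free transformation. */
public class TwoSum {
    /** Returns {s, e} with s = fl(a + b) and a + b == s + e exactly. */
    public static double[] twoSum(double a, double b) {
        double s = a + b;
        double bVirtual = s - a;        // the part of s contributed by b
        double aVirtual = s - bVirtual; // the part of s contributed by a
        double e = (a - aVirtual) + (b - bVirtual); // exact rounding error
        return new double[] { s, e };
    }
}
```

Unlike FastTwoSum, this version needs no |a| > |b| branch, which is why it is usually preferred inside accumulation loops.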
Many computations in the Commons Numbers and Statistics libraries use extended precision computations where the accumulated error of a double would lead to complete cancellation of all significant bits; or create intermediate overflow of integer values.
This project would formalise the code underlying these use cases with a generic library applicable for use in the case where the result is expected to be a finite value and using Java's BigDecimal and/or BigInteger negatively impacts performance.
An example would be the average of long values where the intermediate sum overflows or the conversion to a double loses bits:
long[] values = {Long.MAX_VALUE, Long.MAX_VALUE};
System.out.println(Arrays.stream(values).average().getAsDouble());
System.out.println(Arrays.stream(values).mapToObj(BigDecimal::valueOf)
    .reduce(BigDecimal.ZERO, BigDecimal::add)
    .divide(BigDecimal.valueOf(values.length)).doubleValue());
long[] values2 = {Long.MAX_VALUE, Long.MIN_VALUE};
System.out.println(Arrays.stream(values2).asDoubleStream().average().getAsDouble());
System.out.println(Arrays.stream(values2).mapToObj(BigDecimal::valueOf)
    .reduce(BigDecimal.ZERO, BigDecimal::add)
    .divide(BigDecimal.valueOf(values2.length)).doubleValue());
Outputs:
-1.0
9.223372036854776E18
0.0
-0.5
Commons Math
GSoC
Placeholder for tasks that could be undertaken in this year's GSoC.
Ideas (extracted from the "dev" ML):
- Redesign and modularize the "ml" package
  -> main goal: enable multi-thread usage.
- Abstract the linear algebra utilities
  -> main goal: allow switching to alternative implementations.
- Redesign and modularize the "random" package
  -> main goal: general support of low-discrepancy sequences.
- Refactor and modularize the "special" package
  -> main goals: ensure accuracy and performance and better API, add other functions.
- Upgrade the test suite to JUnit 5
  -> additional goal: collect a list of "odd" expectations.
Other suggestions welcome, as well as
- delineating additional and/or intermediate goals,
- signalling potential pitfalls and/or alternative approaches to the intended goal(s).
Commons Imaging
Placeholder for 1.0 release
A placeholder ticket, to link other issues and organize tasks related to the 1.0 release of Commons Imaging.
The 1.0 release of Commons Imaging has been postponed several times. Now we have a clearer idea of what's necessary for 1.0 (see issues with fixVersion 1.0 and 1.0-alpha3, and other open issues), and the tasks are interesting as they involve both basic and advanced programming, for tasks such as organizing how test images are loaded, or working on performance improvements at the byte level following image format specifications.
The tasks are not too hard to follow, as normally there are example images that need to work with Imaging, as well as other libraries in C, C++, Rust, PHP, etc., that process these images correctly. Our goal with this issue is to a) improve our docs, b) improve our tests, c) fix possible security issues, d) get the parsers in Commons Imaging ready for the 1.0 release.
Assigning the label for GSoC 2023, as a full-time project, although it would also be possible to work on a smaller set of tasks for 1.0 part-time.
Apache Commons All
[SKIN] Update Commons Skin Bootstrap
Our Commons components use Commons Skin, a skin, or theme, for Apache Maven Site.
Our skin uses Bootstrap 2.x, but Bootstrap is already at its 5.x release, and we are missing several improvements (UI/UX, accessibility, browser compatibility) and JS/CSS bug fixes accumulated over the years.
Work is happening on Apache Maven Skins; maybe we could adapt/use that one?
https://issues.apache.org/jira/browse/MSKINS-97
Airavata
[GSoC] Integrate JupyterHub with Airavata Django Portal
The Airavata Django Portal [1] allows users to create, execute and monitor computational experiments. However, when a user wants to then post-process or visualize the output of that computational experiment they must then download the output files and run tools that they may have on their computer or other systems. By integrating with JupyterHub the Django Portal can give users an environment in which they can explore the experiment's output data and gain insights.
The main requirements are:
- from the Django Portal a user can click a button and navigate to a JupyterHub instance that the user is immediately logged into using single sign on
- the user can save the Jupyter notebook and later retrieve it
- the user's files are available within the context of the running Jupyter instance
- ideally users can also generate new outputs in the Jupyter instance and have them saved back in their portal data storage
- users can share their notebooks with other portal users
- (bonus) portal admins can suggest notebooks to use with specific applications so that with one click a user can open an experiment in a provided notebook
- users can manage their notebooks and can, for example, clone a notebook
Apache Superset Dashboards to Airavata Catalogs
Integrate Apache Superset (https://superset.apache.org/) to visualize Airavata Catalogs (https://github.com/apache/airavata/tree/master/modules/registry)
- Examples like this one and Stack Overflow threads seem to indicate it is possible - https://medium.com/@s.akashb/apache-superset-integration-with-keycloak-a302840c290c
- Integrate with Custos
- We can start out by directly interfacing with MariaDB, but need to explore if we can write a superset DB Driver following the Hive example - https://github.com/apache/superset/blob/0409b12a55e893d88f6e992a7df247841a2da8f0/superset/db_engines/hive.py
Airavata Jupyter Platform Services
- UI Framework
- To host the Jupyter environment we will need to envelop the notebooks in a user interface and connect it with Apache Airavata services
- Leverage Airavata communications from within the Django Portal - https://github.com/apache/airavata-django-portal
- Explore if the platform is better to be developed as VSCode extensions leveraging jupyter extensions like - https://github.com/Microsoft/vscode-jupyter
- Alternatively, explore developing a standalone native application using ElectronJS
- Draft up a platform architecture - Airavata-based infrastructure with functionality similar to Colab.
- Authenticate with Airavata Custos Framework - https://github.com/apache/airavata-custos
- Extend the notebook filesystem using a virtual file system approach, integrating with Airavata-based storage and catalog
- Register the notebooks with the Airavata app catalog and experiment catalog.
Advanced Possibilities:
Explore Multi-tenanted JupyterHub
- Can K8s namespace isolation accomplish this?
- Make deployment of Jupyter support part of the default core
- Data- and user-level tenancy can be assumed; how do we make sure the infrastructure can isolate tenants, e.g. so that one gateway cannot crash the hosting environment?
- How to leverage computational resources from JupyterHub
Dashboards to get quick statistics
Gateway admins need periodic reports for various reporting and planning purposes.
Features Include:
- Compute resources that had at least one job submitted during the period <start date - End date>
- User groups created within a given period, how many users are in each, their permission levels, and the number of jobs each user has submitted
- List applications and the number of jobs for each application for a given period, grouped by job status
- Number of users that submitted at least one job during the period <start date - End date>
- Total number of unique users
- User registration trends
- Number of experiments for a given period <Start date - End date>, grouped by experiment status
- The total CPU-hours used by each user, sorted, quarterly, plotted over a period of time
- The total CPU-hours consumed by each application, sorted, quarterly, plotted over a period of time
Provide meta scheduling capabilities within Airavata
As discussed on the architecture mailing list [1] and summarized at [2], Airavata will need to develop a metascheduler. In the short term, a user request (demeler, gobert) is to have Airavata throttle jobs to resources. In the future, more informed scheduling strategies need to be integrated. Hopefully, the actual scheduling algorithms can be borrowed from third-party implementations.
[1] - http://markmail.org/message/tdae5y3togyq4duv
[2] - https://cwiki.apache.org/confluence/display/AIRAVATA/Airavata+Metascheduler
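As a sketch of the short-term throttling request (illustrative names only, not Airavata code), a per-resource job cap can be as simple as one semaphore per compute resource:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Illustrative per-resource throttle; a metascheduler would sit on top of this. */
public class ResourceThrottle {
    private final Map<String, Semaphore> slots = new ConcurrentHashMap<>();
    private final int maxJobsPerResource;

    public ResourceThrottle(int maxJobsPerResource) {
        this.maxJobsPerResource = maxJobsPerResource;
    }

    /** Returns true if the job may be submitted now; false means "queue it". */
    public boolean trySubmit(String resource) {
        return slots.computeIfAbsent(resource, r -> new Semaphore(maxJobsPerResource))
                    .tryAcquire();
    }

    /** Frees a slot once the job completes or fails. */
    public void jobFinished(String resource) {
        Semaphore s = slots.get(resource);
        if (s != null) s.release();
    }
}
```

A real metascheduler would replace the boolean with a queueing decision and choose among candidate resources, but this per-resource accounting is its core primitive.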
Enhance File Transports in MFT
Complete all transports in MFT
- Currently SCP and S3 are known to work
- Others need effort to optimize, test, and declare ready
- Develop a complete, fully functional MFT command-line interface
- Have a feature-complete Python SDK
- A minimum implementation will be provided; students need to complete it and test it.
Custos Backup and Restore
Custos does not have the capability to efficiently back up and restore a live instance. This is essential for highly available services.
Airavata Rich Client based on ElectronJS
Using the SEAGrid Rich Client as an example, develop a native application based on ElectronJS to mimic the Airavata Django Portal.
Reference example - https://github.com/SciGaP/seagrid-rich-client