...
Contents
...
GSoC Varnish Cache support in Apache Traffic Control
Background
Apache Traffic Control is a Content Delivery Network (CDN) control plane for large scale content distribution.
Traffic Control currently requires Apache Traffic Server as the underlying cache. Help us expand the scope by integrating with the very popular Varnish Cache.
There are multiple aspects to this project:
- Configuration Generation: Write software to build Varnish configuration files (VCL). This code will be implemented in our Traffic Ops and cache-side client utilities, both written in Go.
- Health Monitoring: Implement monitoring of the Varnish cache health and performance. This code will run both in the Traffic Monitor component and within Varnish. Traffic Monitor is written in Go and Varnish is written in C.
- Testing: Add automated tests for the new code
Skills:
- Proficiency in Go is required
- A basic knowledge of HTTP and caching is preferred, but not required for this project.
SkyWalking
[GSoC] [SkyWalking] GSoC 2023 Tasks
This is a placeholder for Apache SkyWalking GSoC 2023 ideas; we are currently brainstorming projects and will update this page as soon as possible.
There will be at least two projects, one around AIOps algorithms.
...
GSoC Integrate RocketMQ 5.0 client with Spring
Apache RocketMQ
Apache RocketMQ is a distributed messaging and streaming platform with low latency, high performance and reliability, trillion-level capacity and flexible scalability.
Page: https://rocketmq.apache.org
Github: https://github.com/apache/rocketmq
Background
The RocketMQ 5.0 client has been released recently; we need to integrate it with Spring.
Task
- Become familiar with the RocketMQ 5.0 Java client; you can find more details at https://github.com/apache/rocketmq-clients/tree/master/java and https://rocketmq.apache.org/docs/quickStart/01quickstart
- Integrate the client with Spring.
Relevant Skills
- Java language
- Basic knowledge of RocketMQ 5.0
- Spring
Mentor
Yangkun Ai, PMC of Apache RocketMQ, aaronai@apache.org
StreamPipes
Code Insights for Apache StreamPipes
Apache StreamPipes
Apache StreamPipes is a self-service (Industrial) IoT toolbox that enables non-technical users to connect, analyze, and explore IoT data streams. StreamPipes offers several modules, including StreamPipes Connect to easily connect data from industrial IoT sources, the Pipeline Editor to quickly create processing pipelines, and several visualization modules for live and historic data exploration. Under the hood, StreamPipes uses an event-driven microservice paradigm of standalone, so-called analytics microservices, making the system easy to extend for individual needs.
Background
StreamPipes has grown significantly in recent years. We were able to introduce many new features and attracted both users and contributors. As the cherry on the cake, we graduated to an Apache top-level project in December 2022. We will of course continue developing new features and never rest in making StreamPipes even more amazing. However, as we steam at full speed toward our `1.0` release, we also want the project to become more mature. Therefore, we want to address one of our Achilles' heels: our test coverage.
Don't worry, this issue is not about implementing myriads of tests for our code base. As a first step, we would like to make the status quo transparent: we want to measure code coverage consistently across the whole codebase (backend, UI, Python library) and report it to Codecov. Furthermore, to benchmark ourselves and motivate everyone to provide tests with every contribution, we would like to lock in the current test coverage as a lower threshold that we always want to meet (meaning that if coverage drops, CI builds fail). Over time we can then raise the required coverage level step by step.
Beyond monitoring our test coverage, we also want to invest in better and cleaner code. Therefore, we would like to adopt SonarCloud for our repository.
Tasks
- [ ] calculate test coverage for all main parts of the repo
- [ ] send coverage reports to Codecov
- [ ] determine a coverage threshold and let CI fail if coverage drops below it
- [ ] include SonarCloud in the CI setup
- [ ] include an automatic coverage report in PR validation (see an example here) -> optional
- [ ] include an automatic SonarCloud report in PR validation -> optional
- [ ] whatever comes to your mind 💡 further ideas are always welcome
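The threshold-gating task above boils down to a small CI step: compare the measured coverage against a locked-in minimum and exit non-zero so the build fails. A minimal sketch in Go; the `COVERAGE` environment variable and the `70.0` threshold are placeholder assumptions, not StreamPipes' actual CI contract or figures.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// meetsThreshold parses a coverage percentage string (e.g. "73.4")
// and checks it against the locked-in lower bound.
func meetsThreshold(measured string, threshold float64) (bool, error) {
	cov, err := strconv.ParseFloat(measured, 64)
	if err != nil {
		return false, fmt.Errorf("cannot parse coverage %q: %w", measured, err)
	}
	return cov >= threshold, nil
}

func main() {
	// COVERAGE would be exported by an earlier CI step that ran the
	// coverage tooling for one part of the monorepo.
	measured := os.Getenv("COVERAGE")
	if measured == "" {
		measured = "73.4" // demo value when run outside CI
	}
	const threshold = 70.0 // placeholder lower bound

	ok, err := meetsThreshold(measured, threshold)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if !ok {
		// A non-zero exit code is what makes the CI build fail.
		fmt.Fprintf(os.Stderr, "coverage %s%% is below threshold %.1f%%\n", measured, threshold)
		os.Exit(1)
	}
	fmt.Println("coverage OK")
}
```

In a GitHub workflow this would run as one step per repo part (backend, UI, Python library), each with its own threshold; Codecov's own status checks can enforce the same rule without custom code, so this is only one possible shape of the gate.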
❗Important Note❗
Do not create any accounts on behalf of Apache StreamPipes on SonarCloud or Codecov, and do not use the name of Apache StreamPipes for any account creation. Your mentor will take care of this.
Relevant Skills
- basic knowledge of GitHub workflows
Learning Material
- GitHub workflow docs
- Apache StreamPipes workflows
- SonarCloud for monorepos
- Using Codecov for a monorepo: https://www.curtiscode.dev/post/tools/codecov-monorepo/ & https://docs.codecov.com/docs/flags
Name and Contact Information
Name: Tim Bossenmaier
email: bossenti[at]apache.org
community: dev[at]apache.org
website: https://streampipes.apache.org/
Commons Statistics
...
Dubbo GSoC 2023 - Refactor the http layer
Background
Dubbo currently supports the REST protocol based on HTTP/1 and the Triple protocol based on HTTP/2, but these two HTTP-based protocols are implemented independently. They cannot swap their underlying implementations, and their respective implementation costs are relatively high.
Target
To reduce maintenance costs, we hope to abstract the HTTP layer so that the underlying implementation is independent of any particular protocol and different protocols can reuse the related implementations.
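The target above is a protocol-agnostic transport abstraction. A minimal sketch (in Go for brevity; Dubbo itself is Java) of the kind of shared layer meant, where both a REST-over-HTTP/1 codec and a Triple-over-HTTP/2 codec would program against one interface; all names are illustrative, not Dubbo's actual API.

```go
package main

import "fmt"

// HttpRequest and HttpResponse model the shared, protocol-neutral
// view of an HTTP exchange.
type HttpRequest struct {
	Method  string
	Path    string
	Headers map[string]string
	Body    []byte
}

type HttpResponse struct {
	Status  int
	Headers map[string]string
	Body    []byte
}

// HttpChannel is the abstraction both protocols would target.
// One implementation might wrap an HTTP/1 connection, another a
// single HTTP/2 stream, and either could be swapped underneath.
type HttpChannel interface {
	Send(req HttpRequest) (HttpResponse, error)
}

// echoChannel is a stand-in implementation for demonstration:
// it simply returns the request body with status 200.
type echoChannel struct{}

func (echoChannel) Send(req HttpRequest) (HttpResponse, error) {
	return HttpResponse{Status: 200, Headers: map[string]string{}, Body: req.Body}, nil
}

// invoke is protocol code written once against the abstraction,
// independent of which transport implementation backs the channel.
func invoke(ch HttpChannel, path string, payload []byte) ([]byte, error) {
	resp, err := ch.Send(HttpRequest{Method: "POST", Path: path, Body: payload})
	if err != nil {
		return nil, err
	}
	if resp.Status != 200 {
		return nil, fmt.Errorf("unexpected status %d", resp.Status)
	}
	return resp.Body, nil
}

func main() {
	body, err := invoke(echoChannel{}, "/org.example.Greeter/sayHello", []byte("hi"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}
```

The design point is that protocol modules depend only on the interface, so enhancing or replacing the underlying HTTP implementation never touches protocol code.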
Dubbo GSoC 2023 - Refactor Connection
Background
At present, Dubbo's abstraction of connections in the clients of its different protocols is imperfect. For example, there is a big discrepancy between the connection abstractions in the clients of the Dubbo and Triple protocols. As a result, enhancing connection-related functionality in the client is complicated, implementations cannot be reused, and the client has to write a lot of repetitive code when extending a protocol.
Target
Reduce the complexity of the client side when extending protocols, and increase the reuse of connection-related modules.
Dubbo GSoC 2023 - IDL management
Background
Dubbo currently supports Protobuf as a serialization method. Protobuf relies on proto files (an IDL) for code generation, but Dubbo currently lacks tools for managing these IDL files. For example, Java users must supply the proto files for every compilation, which is cumbersome; most are used to depending on JAR packages instead.
Target
Implement an IDL management and control platform that automatically generates dependency packages in various languages from IDL files and pushes them to the relevant package repositories.
...