
CloudStack

Add virt-v2v support in CloudStack for cold migration of VM from VMware/XenServer to KVM

Background

Many existing users have deployed their infrastructure on XenServer or VMware, but migrating to an IaaS platform such as CloudStack requires a lot of work.

This feature will allow the Apache CloudStack project to onboard users who have an existing VM-based infrastructure on VMware/XenServer, using tools such as virt-v2v to automate the cold migration of shut-down VMs to KVM/CloudStack.

Proposed Tasks

  1. Get started on basic CloudStack codebase and development (building and running CloudStack)
  2. Setup KVM based CloudStack dev/test environment
  3. Setup a test environment (XenServer and/or VMware) with test VMs
  4. R&D - virt-v2v tool and usage; define the overall workflow (steps) for import/migration
  5. Re-define scope/requirements of this feature (KVM distro, libvirt version; support source hypervisor versions XenServer/VMware)
  6. Integrate an API that in turn uses virt-v2v
  7. Deliverables: documentation and community pull request, end-to-end demo of feature
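As a rough sketch of how task 6 might invoke the tool, the snippet below builds a virt-v2v command line for a cold migration of one guest. The helper names are hypothetical; the `-i/-ic/-o/-os/-of` options are documented virt-v2v flags.

```python
import subprocess

def build_v2v_command(source_uri, guest_name, output_dir, output_format="qcow2"):
    """Build the virt-v2v argument list for a cold migration of one guest.

    source_uri: libvirt connection URI of the source hypervisor, e.g.
    'xen+ssh://root@xen-host' or 'vpx://admin@vcenter/Datacenter/esxi-host'.
    """
    return [
        "virt-v2v",
        "-i", "libvirt",        # input from a libvirt source
        "-ic", source_uri,      # source hypervisor connection URI
        "-o", "local",          # write converted disks to a local directory
        "-os", output_dir,      # output storage directory
        "-of", output_format,   # output disk format (qcow2 for KVM)
        guest_name,
    ]

def migrate(source_uri, guest_name, output_dir):
    # The guest must already be shut down on the source (cold migration).
    cmd = build_v2v_command(source_uri, guest_name, output_dir)
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    print(" ".join(build_v2v_command("xen+ssh://root@xen-host", "test-vm", "/var/tmp")))
```

The CloudStack integration would wrap such an invocation behind an API call and then register the converted qcow2 disks as a KVM instance.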

Relevant Skills

  1. Java and Python
  2. Basic libvirt domain knowledge and usage
  3. virt-v2v

Proposed Mentor
Rohit Yadav (rohit (at) apache.org), PMC/committer, Apache CloudStack

Difficulty: Major

Potential Mentors

Rohit Yadav (rohit (at) apache.org), PMC/committer, Apache CloudStack

Example and references
https://libguestfs.org/virt-v2v.1.html
https://github.com/libguestfs/virt-v2v
https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
https://blogs.oracle.com/scoter/virt-v2v-automated-migration-from-oracle-vm-to-oracle-linux-kvm
https://access.redhat.com/articles/1351473

Synapse

Open Telemetry based Tracing for Apache Synapse

Currently, Apache Synapse does not have sophisticated support for modern, standardized tracing. Therefore, this new feature is intended to implement OpenTelemetry-based tracing for Apache Synapse.


This feature will include request-response tracing and inbound/outbound tracing at the transport level and the orchestration layer. It also requires a thorough investigation of the OpenTelemetry specification [1] and the Apache Synapse transport component [2].


Relevant Skills

  1. Java language
  2. Understanding of observability
  3. Integration and the Synapse configuration language

[1]https://opentelemetry.io/ 
[2] http://synapse.apache.org/userguide/transports/pass_through.html
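To make the intended span hierarchy concrete, here is a minimal, stdlib-only sketch of what request-response tracing would record (illustrative names only; the actual implementation would use the OpenTelemetry Java SDK inside Synapse's transport and mediation layers):

```python
import time
import uuid
from contextlib import contextmanager

# Minimal span recorder, purely to illustrate the parent/child structure
# OpenTelemetry tracing would produce for one mediated request.
SPANS = []

@contextmanager
def span(name, parent_id=None, attributes=None):
    s = {
        "span_id": uuid.uuid4().hex[:16],
        "parent_id": parent_id,
        "name": name,
        "attributes": attributes or {},
        "start": time.time(),
    }
    try:
        yield s
    finally:
        s["end"] = time.time()
        SPANS.append(s)

# Trace an inbound request through mediation and the outbound call,
# roughly as Synapse's transport and orchestration layers would.
with span("http-inbound", attributes={"uri": "/orders"}) as root:
    with span("mediation-sequence", parent_id=root["span_id"]):
        pass  # message orchestration happens here
    with span("http-outbound", parent_id=root["span_id"]):
        pass  # call to the backend service
```

The child spans carry the root span's id as their parent, which is what lets a tracing backend reassemble the cross-layer call tree.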

Difficulty: Major
Potential mentors:
Vanjikumaran Sivajothy, mail: vanjikumaran@gmail.com
Project Devs, mail:

StreamPipes

More powerful real-time visualizations for StreamPipes

Apache StreamPipes

Apache StreamPipes (incubating) is a self-service (Industrial) IoT toolbox to enable non-technical users to connect, analyze and explore IoT data streams. StreamPipes offers several modules including StreamPipes Connect to easily connect data from industrial IoT sources, the Pipeline Editor to quickly create processing pipelines and several visualization modules for live and historic data exploration. Under the hood, StreamPipes utilizes an event-driven microservice paradigm of standalone, so-called analytics microservices making the system easy to extend for individual needs.


Background

Currently, the live dashboard (implemented in Angular) offers an initial set of simple visualizations, such as line charts, gauges, tables and single values. More advanced visualizations, especially those relevant for condition monitoring tasks (e.g., monitoring sensor measurements from industrial machines), are not yet available. Visualizations can be flexibly created by users, and there is an SDK that allows expressing requirements (e.g., based on data type or semantic type) for visualizations to better guide users through the creation process.


Tasks

  1. Extend the set of real-time visualizations in StreamPipes, e.g., by integrating existing visualizations from Apache ECharts.
  2. Improve the existing dashboard, e.g., by introducing better filtering or more advanced customization options.


Relevant Skills

0. Don't be afraid! We'll guide you through your first steps with StreamPipes.

  1. Angular
  2. Basic knowledge of Apache ECharts


Mentor

Dominik Riemer, PPMC Apache StreamPipes (riemer@apache.org)


Difficulty: Major
Potential mentors:
Dominik Riemer, mail: riemer (at) apache.org
Project Devs, mail:

Pulsar

Improve the message backlogs for the topic

In Pulsar, the client usually sends several messages in a batch. On the broker side, the broker receives the batch and writes it to the storage layer as a single entry.

The message backlog tracks how many messages remain to be handled for a subscription. Unfortunately, the current backlog is based on batches, not messages. This confuses users: they may have pushed 1,000 messages to a topic, but checking the backlog on the subscription side returns a lower value, such as 100 batches. The message-based backlog is not available because calculating the number of messages in each batch is expensive.


PIP-70 (https://github.com/apache/pulsar/wiki/PIP-70%3A-Introduce-lightweight-raw-Message-metadata) introduced broker-level entry metadata that can support a message index for a topic (i.e., the message offset within the topic). This makes it possible to calculate the number of messages between two message indexes, so we can leverage PIP-70 to improve the backlog implementation and obtain a message-based backlog.


For an Exclusive or Failover subscription, this is easy to implement by calculating the number of messages between the mark-delete position and the LAC (last add confirmed) position. For Shared and Key_Shared subscriptions, however, individual acknowledgments add complexity. We can cache the individual acknowledgment count in broker memory, so the message backlog for a Shared or Key_Shared subscription is `backlogOfTheMarkdeletePosition` - `IndividualAckCount`.
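The backlog calculation above can be sketched as follows (assuming PIP-70 message indexes are available for both positions; parameter names are illustrative):

```python
def message_backlog(lac_index, mark_delete_index, individually_acked=0):
    """Message-based backlog for a subscription, using PIP-70 message indexes.

    lac_index:          message index at the last-add-confirmed position
    mark_delete_index:  message index at the subscription's mark-delete position
    individually_acked: count of messages acked ahead of the mark-delete
                        position (only non-zero for Shared/Key_Shared)
    """
    return (lac_index - mark_delete_index) - individually_acked

# Exclusive/Failover: no individual acks beyond the mark-delete position.
print(message_backlog(lac_index=1000, mark_delete_index=200))           # 800
# Shared/Key_Shared: subtract the cached individual-ack count.
print(message_backlog(lac_index=1000, mark_delete_index=200,
                      individually_acked=50))                           # 750
```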

Difficulty: Major
Potential mentors:
Penghui Li, mail: penghui (at) apache.org
Project Devs, mail:

Support reset cursor by message index

Currently, Pulsar supports resetting the cursor according to time and message-id; e.g., you can reset the cursor to 3 hours ago or to a specific message-id. For cases where users want to reset the cursor to, say, 10,000 messages earlier, Pulsar does not support this operation yet.

PIP-70 (https://github.com/apache/pulsar/wiki/PIP-70%3A-Introduce-lightweight-raw-Message-metadata) introduced broker-level entry metadata that can support a message index for a topic (i.e., the message offset within the topic); this makes it possible to reset the cursor according to the message index.

Difficulty: Major
Potential mentors:
Penghui Li, mail: penghui (at) apache.org
Project Devs, mail:

Expose the broker level message metadata to the client.

PIP-70 (https://github.com/apache/pulsar/wiki/PIP-70%3A-Introduce-lightweight-raw-Message-metadata) introduced broker-level entry metadata and already supports adding a message index and a broker timestamp to each message.

Currently, however, the client cannot get the broker-level message metadata because the broker skips this information when dispatching messages to the client. The task is to provide a way to expose the broker-level message metadata to the client.

Difficulty: Major
Potential mentors:
Penghui Li, mail: penghui (at) apache.org
Project Devs, mail:

Integration with Apache Ranger

Currently, Pulsar only supports storing authorization policies in local ZooKeeper. It would be good to support Apache Ranger (https://github.com/apache/ranger), which provides a framework for central administration of security policies and monitoring of user access.

Difficulty: Major
Potential mentors:
Penghui Li, mail: penghui (at) apache.org
Project Devs, mail:

Throttle the ledger rollover for the broker

In Pulsar, ledger rollover splits the data of a topic into multiple segments. Each ledger rollover operation updates the topic's metadata in ZooKeeper, so a high rollover frequency can put the ZooKeeper cluster under heavy load. To make ZooKeeper run more stably, we should limit the ledger rollover rate.
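One way to sketch the throttling idea is a per-topic minimum interval between rollovers; the class below is a minimal illustration (hypothetical names; the real change would live in the broker's managed-ledger code):

```python
import time

class RolloverThrottle:
    """Allow at most one ledger rollover per topic every `min_interval_seconds`.

    A minimal sketch of the throttling idea: a rollover that arrives too soon
    is deferred, sparing ZooKeeper a metadata update.
    """
    def __init__(self, min_interval_seconds, clock=time.monotonic):
        self.min_interval = min_interval_seconds
        self.clock = clock
        self.last_rollover = {}  # topic -> time of last permitted rollover

    def try_acquire(self, topic):
        now = self.clock()
        last = self.last_rollover.get(topic)
        if last is not None and now - last < self.min_interval:
            return False  # too soon: defer this rollover
        self.last_rollover[topic] = now
        return True
```

A deferred rollover would simply be retried on the next trigger (e.g., the next size or time check), so data is never lost, only segmented less aggressively.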

Difficulty: Major
Potential mentors:
Penghui Li, mail: penghui (at) apache.org
Project Devs, mail:

Support publish and consume avro objects in pulsar-perf

The pulsar-perf tool should support benchmarking producing and consuming messages using a Schema (e.g., Avro objects).
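As a rough illustration, a perf tool working with Avro needs a schema definition and a matching record generator; the sketch below uses a made-up `PerfRecord` schema (field names are hypothetical, not part of pulsar-perf):

```python
import json
import random
import string

# Hypothetical test schema that a perf tool could register with the topic
# when benchmarking schema-based produce/consume.
AVRO_SCHEMA = {
    "type": "record",
    "name": "PerfRecord",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "payload", "type": "string"},
    ],
}

def random_record(payload_size):
    """Generate one record matching AVRO_SCHEMA, as a perf tool might."""
    return {
        "id": random.randrange(2**31),
        "payload": "".join(random.choices(string.ascii_letters, k=payload_size)),
    }

schema_json = json.dumps(AVRO_SCHEMA)  # registered with the topic's schema registry
record = random_record(payload_size=64)
```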

Difficulty: Major
Potential mentors:
Penghui Li, mail: penghui (at) apache.org
Project Devs, mail:

SkyWalking

Apache SkyWalking: Python agent supports profiling

Apache SkyWalking [1] is an application performance monitor (APM) tool for distributed systems, especially designed for microservices, cloud native and container-based (Docker, K8s, Mesos) architectures.

SkyWalking uses agents to (automatically) instrument monitored services; for now, we have agents for many languages. The Python agent [2] is one of them and supports automatic instrumentation.

The goal of this project is to extend the agent's features by supporting profiling [3] of a function's invocation stack, helping users analyze which method costs the most time in a cross-service call.

To complete this task, you must be comfortable with Python and have some knowledge of tracing systems; otherwise you'll have a hard time coming up to speed.

[1] http://skywalking.apache.org
[2] http://github.com/apache/skywalking-python
[3] https://thenewstack.io/apache-skywalking-use-profiling-to-fix-the-blind-spot-of-distributed-tracing/
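A minimal illustration of the sampling approach such profiling could take (stdlib-only sketch; the real agent would tie samples to a traced request's context):

```python
import sys
import threading
import time
from collections import Counter

def sample_stacks(duration_seconds=0.2, interval_seconds=0.01):
    """Periodically snapshot every thread's current stack frame.

    Counts how often each function is the innermost frame; functions with
    more samples are (statistically) where the time is being spent.
    """
    counts = Counter()
    deadline = time.time() + duration_seconds
    me = threading.get_ident()
    while time.time() < deadline:
        for thread_id, frame in sys._current_frames().items():
            if thread_id == me:
                continue  # skip the profiler's own thread
            counts[frame.f_code.co_name] += 1
        time.sleep(interval_seconds)
    return counts  # function name -> number of samples

def busy():
    # A worker that burns CPU so the sampler has something to catch.
    end = time.time() + 0.3
    while time.time() < end:
        pass

worker = threading.Thread(target=busy)
worker.start()
hot = sample_stacks()
worker.join()
```

Walking `frame.f_back` instead of only the innermost frame would yield full stack traces, which is what the agent needs to report a method-level profile to the backend.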

Difficulty: Major
Potential mentors:
Zhenxu Ke, mail: kezhenxu94 (at) apache.org
Project Devs, mail: dev (at) skywalking.apache.org

Apache SkyWalking: Python agent collects and reports PVM metrics to backend

Apache SkyWalking [1] is an application performance monitor (APM) tool for distributed systems, especially designed for microservices, cloud native and container-based (Docker, K8s, Mesos) architectures.

Tracing distributed systems is one of the main features of SkyWalking; from those traces, it can derive service metrics such as CPM, success rate, error rate, apdex, etc. SkyWalking also supports receiving metrics from the agent side directly.

In this task, we expect the Python agent to report its Python Virtual Machine (PVM) metrics, including (but not limited to; any useful metrics are acceptable) CPU usage (%), memory used (MB), (active) thread/coroutine counts, garbage collection counts, etc.

To complete this task, you must be comfortable with Python and gRPC, otherwise you'll have a hard time coming up to speed.

Live demo to play around: http://122.112.182.72:8080 (under reconstruction, maybe unavailable but latest demo address can be found at the GitHub index page http://github.com/apache/skywalking)

[1] http://skywalking.apache.org
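A stdlib-only sketch of the collection side (metric names here are made up; the real agent would serialize these into SkyWalking's gRPC meter protocol and report them periodically):

```python
import gc
import os
import threading

def collect_pvm_metrics():
    """Collect a few PVM metrics of the running interpreter."""
    gen0, gen1, gen2 = gc.get_count()
    metrics = {
        "thread_count": threading.active_count(),
        "gc_gen0_collections": gc.get_stats()[0]["collections"],
        "gc_pending_gen0": gen0,
        "pid": os.getpid(),
    }
    try:
        # resource is Unix-only; ru_maxrss is in KB on Linux.
        import resource
        metrics["max_rss_kb"] = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    except ImportError:
        pass
    return metrics
```

A background thread in the agent could call such a collector every few seconds and push the results over the existing gRPC channel.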

Difficulty: Major
Potential mentors:
Zhenxu Ke, mail: kezhenxu94 (at) apache.org
Project Devs, mail: dev (at) skywalking.apache.org

TrafficControl

GSOC: Varnish Cache support in Apache Traffic Control

Background
Apache Traffic Control is a Content Delivery Network (CDN) control plane for large scale content distribution.

Traffic Control currently requires Apache Traffic Server as the underlying cache. Help us expand the scope by integrating with the very popular Varnish Cache.

There are multiple aspects to this project:

  • Configuration Generation: Write software to build Varnish configuration files (VCL). This code will be implemented in our Traffic Ops and cache client side utilities, both written in Go.
  • Health Monitoring: Implement monitoring of the Varnish cache health and performance. This code will run both in the Traffic Monitor component and within Varnish. Traffic Monitor is written in Go and Varnish is written in C.
  • Testing: Adding automated tests for new code
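As a small illustration of the configuration-generation aspect, the sketch below renders a minimal VCL backend declaration from origin data (field names are hypothetical; the real generator would be written in Go against Traffic Ops delivery-service data):

```python
# Template for a minimal VCL 4.1 backend declaration.
VCL_TEMPLATE = """\
vcl 4.1;

backend {name} {{
    .host = "{host}";
    .port = "{port}";
}}
"""

def generate_vcl(origins):
    """Render a minimal VCL file declaring one backend per origin."""
    return "\n".join(
        VCL_TEMPLATE.format(name=o["name"], host=o["host"], port=o["port"])
        for o in origins
    )

vcl = generate_vcl([{"name": "origin0", "host": "origin.example.net", "port": 443}])
```

The real generator would also emit request-routing and caching logic (`vcl_recv`, `vcl_backend_response`, etc.) derived from delivery-service configuration, which is where most of the project's complexity lies.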

Skills:

  • Proficiency in Go is required
  • A basic knowledge of HTTP and caching is preferred, but not required for this project.
Difficulty: Major
Potential mentors:
Eric Friedrich, mail: friede (at) apache.org
Project Devs, mail: dev (at) trafficcontrol.apache.org

ShardingSphere

Apache ShardingSphere: Proofread the SQL definitions for ShardingSphere Parser

Apache ShardingSphere

Apache ShardingSphere is a distributed database middleware ecosystem, including 2 independent products, ShardingSphere JDBC and ShardingSphere Proxy presently. They all provide functions of data sharding, distributed transaction, and database orchestration.
Page: https://shardingsphere.apache.org
Github: https://github.com/apache/shardingsphere

Background

ShardingSphere parser engine helps users parse a SQL to get the AST (Abstract Syntax Tree) and visit this tree to get SQLStatement (Java Object). At present, this parser engine can handle SQLs for `MySQL`, `PostgreSQL`, `SQLServer` and `Oracle`, which means we have to understand different database dialect SQLs.
More details: https://shardingsphere.apache.org/document/current/en/features/sharding/principle/parse/

Task

This issue is to proofread the DML (SELECT/UPDATE/DELETE/INSERT) SQL definitions for Oracle. We have basic Oracle SQL syntax definitions, but they do not fully keep in line with the Oracle documentation, so we need you to find the vague SQL grammar definitions and correct them by referring to the Oracle documentation.

Note that when you review these DML (SELECT/UPDATE/DELETE/INSERT) SQLs, you will find that their definitions involve some basic elements of Oracle SQL; these elements are included in this task as well.

Relevant Skills

1. Master the Java language
2. Have a basic understanding of Antlr g4 file
3. Be familiar with Oracle SQLs

Targets files

1. DML SQLs g4 file: https://github.com/apache/shardingsphere/blob/master/shardingsphere-sql-parser/shardingsphere-sql-parser-dialect/shardingsphere-sql-parser-oracle/src/main/antlr4/imports/oracle/DMLStatement.g4
2. Basic elements g4 file: https://github.com/apache/shardingsphere/blob/master/shardingsphere-sql-parser/shardingsphere-sql-parser-dialect/shardingsphere-sql-parser-oracle/src/main/antlr4/imports/oracle/BaseRule.g4

References

1. Oracle SQL quick reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlqr/SQL-Statements.html#GUID-1FA35EAD-AED2-4619-BFEE-348FF05D1F4A
2. Detailed Oracle SQL info: https://docs.oracle.com/pls/topic/lookup?ctx=en/database/oracle/oracle-database/19/sqlqr&id=SQLRF008

Mentor

Juan Pan, PMC of Apache ShardingSphere, panjuan@apache.org

Difficulty: Major
Potential mentors:
Juan Pan, mail: panjuan (at) apache.org
Project Devs, mail: dev (at) shardingsphere.apache.org

Apache Hudi

[UMBRELLA] Support schema inference for unstructured data

(More details to be added)

Difficulty: Major
Potential mentors:
Raymond Xu, mail: xushiyan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

[UMBRELLA] Survey indexing technique for better query performance

(More details to be added)

Difficulty: Major
Potential mentors:
Raymond Xu, mail: xushiyan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

[UMBRELLA] Improve CLI features and usabilities

(More details to be added)

Difficulty: Major
Potential mentors:
Raymond Xu, mail: xushiyan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

[UMBRELLA] Support Apache Calcite for writing/querying Hudi datasets

(More details to be added)

Difficulty: Major
Potential mentors:
Raymond Xu, mail: xushiyan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

Snowflake integration w/ Apache Hudi


Difficulty: Major
Potential mentors:
sivabalan narayanan, mail: shivnarayan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

Apache Airflow integration w/ Apache Hudi


Difficulty: Major
Potential mentors:
sivabalan narayanan, mail: shivnarayan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

Pandas(python) integration w/ Apache Hudi


Difficulty: Major
Potential mentors:
sivabalan narayanan, mail: shivnarayan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

Pyspark w/ Apache Hudi


Difficulty: Major
Potential mentors:
sivabalan narayanan, mail: shivnarayan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

[UMBRELLA] Improve source ingestion support in DeltaStreamer

(More details to be added)

Difficulty: Major
Potential mentors:
Raymond Xu, mail: rxu (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

[UMBRELLA] Checkstyle, formatting, warnings, spotless

Umbrella ticket to track all tickets related to checkstyle, spotless, warnings etc.

Difficulty: Major
Potential mentors:
sivabalan narayanan, mail: shivnarayan (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

[UMBRELLA] Support Apache Beam for incremental tailing

(More details to be added)

Difficulty: Major
Potential mentors:
Vinoth Chandar, mail: vinoth (at) apache.org
Project Devs, mail: dev (at) hudi.apache.org

APISIX

Apache APISIX: supports obtaining etcd data information through plugin

Apache APISIX

Apache APISIX is a dynamic, real-time, high-performance API gateway, based on the Nginx library and etcd.

APISIX provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.

You can use Apache APISIX to handle traditional north-south traffic, as well as east-west traffic between services. It can also be used as a k8s ingress controller.

Background
 
When we want to get the data stored in etcd, we have to manually issue a URI request for each piece of data, and we cannot monitor data changes in etcd. This is unfriendly for tasks such as fetching multiple etcd entries and watching etcd for changes. Therefore, we need to design a method to solve this problem.

Related issue: https://github.com/apache/apisix/issues/2453

Task

In the Apache APISIX (https://github.com/apache/apisix) project, implement a plug-in with the following functions:

1. Find a route based on a URI;
2. Watch etcd and print out objects that have recently changed;
3. Query the corresponding data by ID (route, service, consumer, etc.).

Relevant Skills

1. Master the Lua language;
2. Have a basic understanding of API gateways or web servers;
3. Be familiar with etcd.

Mentor

Yuelin Zheng, yuelinz99@gmail.com

Difficulty: Major
Potential mentors:
Yuelin Zheng, mail: firstsawyou (at) apache.org
Project Devs, mail: dev (at) apisix.apache.org

Apache APISIX Dashboard: Enhancement plugin orchestration

The Apache APISIX Dashboard is designed to make it as easy as possible for users to operate Apache APISIX through a frontend interface.

The Dashboard is the control plane and performs all parameter checks; Apache APISIX mixes data and control planes and will evolve to a pure data plane.

This project includes manager-api, which will gradually replace admin-api in Apache APISIX.

Background

The plugin orchestration feature allows users to define the order of plugins to meet their scenarios. At present, we have implemented the plugin scheduling feature, but there are still many points to be optimized.

Task

  1. Develop a new conditional-judgment card style. Currently, both the conditional judgment card and the plugin card are square, which makes it difficult for users to distinguish them, so in this task the conditional judgment card needs to be changed to a diamond shape.
  2. Add arrows to connecting lines. The connection lines in the current plugin orchestration are not directional; we need to add arrows to them.
  3. Limit plugin orchestration operations. We need to restrict how each card can be connected, to ensure the proper use of plugin orchestration; invalid connections must be prevented.

Relevant Skills

1. Basic use of HTML, CSS, and JavaScript.

2. Basic use of  React Framework.

Mentor

Yi Sun, committer of Apache APISIX, sunyi@apache.org

Difficulty: Major
Potential mentors:
Yi Sun, mail: sunyi (at) apache.org
Project Devs, mail: dev (at) apisix.apache.org

Apache APISIX: enhanced authentication for Dashboard

Apache APISIX

Apache APISIX is a dynamic, real-time, high-performance API gateway, based on the Nginx library and etcd.

APISIX provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.

You can use Apache APISIX to handle traditional north-south traffic, as well as east-west traffic between services. It can also be used as a k8s ingress controller.

Background

At present, the Apache APISIX Dashboard only supports simple username and password login; we need a universal authentication mechanism that can connect to users' existing identity providers.

Task

In the Apache APISIX Dashboard (https://github.com/apache/apisix-dashboard) project:
1. Implement a universal login class
2. Support LDAP connection
3. Support OAuth2 connection

Relevant Skills
1. Golang
2. TypeScript
3. Be familiar with ETCD

Mentor
Junxu Chen, PMC of Apache APISIX, chenjunxu@apache.org


Difficulty: Major
Potential mentors:
Junxu Chen, mail: chenjunxu (at) apache.org
Project Devs, mail: dev (at) apisix.apache.org

Apache APISIX: support to fetch more useful information of client request

What's Apache APISIX?

Apache APISIX is a dynamic, real-time, high-performance API gateway, based on the Nginx library and etcd.

APISIX provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.

You can use Apache APISIX to handle traditional north-south traffic, as well as east-west traffic between services. It can also be used as a k8s ingress controller.

Background (route matching and plugin execution)

When the client completes a request, there is a lot of useful information inside Apache APISIX. 



Task

We need a way to expose this information, so that callers can troubleshoot problems and understand the workflow of Apache APISIX.

The first version can display:
1. Which route is matched.
2. Which plugins are loaded.

In subsequent versions, we will add more information that the caller cares about, such as:

  • Whether the global plugin is executed
  • Time consumption statistics
  • The return value when the plugin is executed

Relevant Skills

1. Master the Lua language
2. Have a basic understanding of API gateways or web servers

Difficulty: Major
Potential mentors:
YuanSheng Wang, mail: membphis (at) apache.org
Project Devs, mail: dev (at) apisix.apache.org

Apache APISIX: Support Nacos in a native way

Apache APISIX is a dynamic, real-time, high-performance cloud-native API gateway, based on the Nginx library and etcd.

Page: https://apisix.apache.org
Github: https://github.com/apache/apisix

Background

To get upstream information dynamically, APISIX needs to be integrated with other service discovery systems. Currently we already support Eureka, and many people hope we can support Nacos too.

Nacos is a widely adopted service discovery system: https://nacos.io/en-us/index.html

Previously we tried to support Nacos via DNS. Nacos provides a CoreDNS plugin to expose its information via DNS: https://github.com/nacos-group/nacos-coredns-plugin

However, this plugin seems to be unmaintained.

Therefore, it would be good if we can support Nacos natively via its API, which is expected to be maintained.


Task

Integrate Nacos with APISIX via Nacos's HTTP API.


Relevant Skills

1. Master Lua language and HTTP protocol
2. Have a basic understanding of APISIX / Nacos


Targets files

1. https://github.com/apache/apisix/tree/master/apisix/discovery

References

1. Nacos Open API: https://nacos.io/en-us/docs/open-api.html
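As a rough sketch of what the discovery module would do (in Lua in the real implementation), the snippet below builds the Nacos Open API instance-list URL and converts a response into upstream nodes; the base address is an assumption for a local Nacos server:

```python
import json
import urllib.parse
import urllib.request

NACOS_BASE = "http://127.0.0.1:8848"  # assumed local Nacos server address

def instance_list_url(service_name, base=NACOS_BASE):
    """Build the Nacos Open API URL for querying a service's instances."""
    query = urllib.parse.urlencode({"serviceName": service_name})
    return f"{base}/nacos/v1/ns/instance/list?{query}"

def fetch_upstream_nodes(service_name):
    """Fetch healthy instances for a service -- roughly what the APISIX
    discovery module would poll to build its upstream node list."""
    with urllib.request.urlopen(instance_list_url(service_name)) as resp:
        data = json.loads(resp.read())
    return [
        {"host": h["ip"], "port": h["port"], "weight": h.get("weight", 1)}
        for h in data.get("hosts", [])
        if h.get("healthy", True)
    ]
```

The discovery module would poll (or subscribe to) this endpoint on a timer and hand the resulting node list to APISIX's upstream picker, as the existing Eureka integration does.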

Mentor

Zexuan Luo, committer of Apache APISIX, spacewander@apache.org

Difficulty: Major
Potential mentors:
Zexuan Luo, mail: spacewander (at) apache.org
Project Devs, mail: dev (at) apisix.apache.org

Apache APISIX: support apply certificates from Let’s Encrypt or any other ACMEv2 service

Apache APISIX

Apache APISIX is a dynamic, real-time, high-performance API gateway, based on the Nginx library and etcd.

APISIX provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.

You can use Apache APISIX to handle traditional north-south traffic, as well as east-west traffic between services. It can also be used as a k8s ingress controller.

Background

The data plane of Apache APISIX supports dynamic loading of SSL certificates, but the control plane does not support ACME.
Users can use other tools to obtain ACME certificates and then call the Admin API to write them into Apache APISIX, but this is not convenient for many users.

Task

In the Apache APISIX Dashboard (https://github.com/apache/apisix-dashboard) project, add support for ACME so that certificates can be obtained and renewed automatically.

Relevant Skills
TypeScript
Golang
familiar with Apache APISIX's admin API

Mentor
Ming Wen, PMC of Apache APISIX, wenming@apache.org

Difficulty: Major
Potential mentors:
Ming Wen, mail: wenming (at) apache.org
Project Devs, mail: dev (at) apisix.apache.org

Apache APISIX: Enhanced verification for APISIX ingress controller

Apache APISIX

Apache APISIX is a dynamic, real-time, high-performance API gateway, based on the Nginx library and etcd.

APISIX provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.

You can use Apache APISIX to handle traditional north-south traffic, as well as east-west traffic between services. It can also be used as a k8s ingress controller.

Background

We can use APISIX as a Kubernetes ingress, using CRDs (Custom Resource Definitions) on Kubernetes to define APISIX objects such as routes, services, upstreams and plugins.

We have done basic structural verification of the CRDs, but more verification is still needed: for example, plugin schema verification, dependency verification between APISIX objects, and rule conflict verification. All of these verifications need to be completed before a CRD is applied.

Task

1. Implement a validating admission webhook.
2. Support plugins schema verification.
3. Support object dependency verification.
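To illustrate what the validating admission webhook returns, the sketch below builds a Kubernetes `AdmissionReview` response rejecting an invalid plugin config (the error message and resource are made up; the real webhook would be implemented in Go):

```python
import json

def admission_response(uid, allowed, message=""):
    """Build a Kubernetes AdmissionReview response, as the validating
    webhook would return after running plugin-schema and dependency checks."""
    response = {"uid": uid, "allowed": allowed}
    if not allowed:
        # The message is surfaced to the user by kubectl when apply fails.
        response["status"] = {"message": message}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }

# Example: reject an ApisixRoute whose plugin config fails schema validation.
review = admission_response(
    uid="705ab4f5-6393-11e8-b7cc-42010a800002",
    allowed=False,
    message="plugin 'limit-count' config: missing required field 'count'",
)
body = json.dumps(review)
```

Because the webhook runs before the CRD is persisted, invalid objects never reach the ingress controller's reconciliation loop, which is exactly the "verify before apply" behavior the task asks for.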

Relevant Skills

1. Golang
2. Be familiar with Apache APISIX's admin API
3. Be familiar with kubernetes

Mentor

Wei Jin, PMC of Apache APISIX, kvn@apache.org

Difficulty: Major
Potential mentors:
Wei Jin, mail: kvn (at) apache.org
Project Devs, mail: dev (at) apisix.apache.org

Apache APISIX: improve the website

Apache APISIX

Apache APISIX is a dynamic, real-time, high-performance API gateway, based on the Nginx library and etcd, and we have a standalone website to let more people know about Apache APISIX.

Background

The website of Apache APISIX is used to show people what Apache APISIX is; it includes up-to-date docs so that developers can find guides more easily, and so on.

Task

On the website [1] and in its repo [2], we are going to refactor the homepage and improve the docs, which include APISIX's docs and guides such as the release guide.

Relevant Skills
TypeScript

React.js

Mentor

Zhiyuan, PMC of Apache APISIX, juzhiyuan@apache.org


[1] https://apisix.apache.org/

[2]https://github.com/apache/apisix-website

Difficulty: Major
Potential mentors:
Zhiyuan, mail: juzhiyuan (at) apache.org
Project Devs, mail: dev (at) apisix.apache.org