Apache Kylin : Analytical Data Warehouse for Big Data

...

Q1. What are you trying to do? Articulate your objectives using absolutely no jargon.

...

Q2. What problem is this proposal NOT designed to solve?

This system testing framework cannot, and is not intended to, replace the coverage of kylin-IT. It should be used as a supplement to the integration tests, covering the scope that they cannot reach. It should interact with a Kylin instance through the REST API, JDBC, CLI, etc., and verify features by checking whether the response or returned result meets the expectation.
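
For illustration, here is a minimal sketch of the kind of response-level check such a framework could make through the REST API. It assumes a Kylin instance listening on localhost:7070 with the default ADMIN/KYLIN credentials and the sample learn_kylin project; the SQL, project name, and expected values are placeholders, not part of this proposal.

import requests

KYLIN_BASE = "http://localhost:7070/kylin/api"   # assumed local test instance
AUTH = ("ADMIN", "KYLIN")                        # Kylin's default credentials

def query_kylin(sql, project):
    """Send a SQL query through Kylin's query REST API and return the JSON body."""
    resp = requests.post(
        KYLIN_BASE + "/query",
        json={"sql": sql, "project": project, "limit": 100},
        auth=AUTH,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

def test_count_query_returns_one_row():
    # Hypothetical test case: assert on the response, not on Kylin internals.
    body = query_kylin("select count(*) from KYLIN_SALES", project="learn_kylin")
    assert not body.get("isException", False)
    assert len(body["results"]) == 1

A JDBC-based check would look similar: open a connection through the Kylin JDBC driver, run the same SQL, and compare the result set against the expectation.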

Q3. How is it done today, and what are the limits of current practice?

When verifying and merging patches contributed by the community, Kylin's maintainer team needs to analyze the scope of influence of each patch and then make a test plan. Most of the time it is not enough to only run the IT; manual testing is also required. When releasing a new version, we need to write a test plan called the main story to cover Kylin's core usage scenarios. At the same time, to ensure compatibility across Hadoop versions, we need to execute these test cases manually on each Hadoop distribution to make sure there are no regression bugs in the main features of the Kylin RC package. However, more complex functions, such as Cube Planner or a read/write-separated deployment, are difficult to cover this way.

Secondly, the current IT depends on an HDP 2.4 deployment environment. For developers who cannot get access to this environment, it becomes particularly difficult to complete Kylin's IT. As a result, Kylin contributors can only run unit tests before submitting a PR, which is not enough.

Q4. What is new in your approach and why do you think it will be successful?

...

Q5. Who cares? If you are successful, what difference will it make?

Kylin User

We believe that Kylin users can also learn Kylin's deployment modes through this framework, and can use it for learning and verification in their own scenarios (e.g., a PoC).

...

2. By running docker/build_cluster_images.sh, the images of the Hadoop components can be built (a purely illustrative sketch of such a step follows below):

TODO
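
As an illustrative sketch only, the snippet below drives docker build for a few component images; the Dockerfile directories and image tags are hypothetical and are not taken from docker/build_cluster_images.sh.

import subprocess

HADOOP_VERSION = "2.8.5"

# (build context, image tag) pairs -- hypothetical names for illustration only
IMAGES = [
    ("docker/hadoop-base", "kylin-ci/hadoop-base:" + HADOOP_VERSION),
    ("docker/hadoop-namenode", "kylin-ci/hadoop-namenode:" + HADOOP_VERSION),
    ("docker/hadoop-datanode", "kylin-ci/hadoop-datanode:" + HADOOP_VERSION),
    ("docker/hive", "kylin-ci/hive:" + HADOOP_VERSION),
]

for context_dir, tag in IMAGES:
    # Build each component image, passing the Hadoop version as a build argument.
    subprocess.run(
        ["docker", "build",
         "--build-arg", "HADOOP_VERSION=" + HADOOP_VERSION,
         "-t", tag, context_dir],
        check=True,
    )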

3. By running build/CI/run-ci.sh, you can package Kylin, deploy the Hadoop cluster, deploy the Kylin instance, and run the test cases in turn (a rough sketch of this sequence follows this step). The following are the Docker containers of the Hadoop cluster and the Kylin instance started under Hadoop 2.8.5:

TODO
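
To make the sequence concrete, here is a rough Python sketch of the four stages described above (package, deploy the Hadoop cluster, deploy Kylin, run tests). Every script name, compose file, and option below is an assumption used only to illustrate the flow, not the actual contents of build/CI/run-ci.sh; the --html option assumes pytest with the pytest-html plugin and only indicates where the report in step 4 could come from.

import subprocess

def run(cmd):
    # Echo and execute one stage, failing fast if it returns non-zero.
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # 1. Package Kylin from source (illustrative script path).
    run(["bash", "build/script/package.sh"])
    # 2. Start the Hadoop cluster containers (illustrative compose file).
    run(["docker-compose", "-f", "docker/docker-compose-hadoop-2.8.5.yml", "up", "-d"])
    # 3. Deploy the freshly built Kylin package into its container (hypothetical script).
    run(["bash", "build/CI/deploy-kylin.sh"])
    # 4. Run the system test cases and produce an HTML report.
    run(["python", "-m", "pytest", "build/CI/testcases", "--html=report.html"])

if __name__ == "__main__":
    main()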

4. The following is the HTML report generated after the test cases are executed:

TODO

Reference

...