Motivation
FLIP-274 (Introduce metric group for OperatorCoordinator) introduced a new metric group through which source enumerators emit metrics. However, there is no way to fetch these metrics from the Flink REST API.
The current metric-related REST APIs are the following:
```
/jobmanager/metrics
/taskmanagers/<taskmanagerid>/metrics
/jobs/<jobid>/metrics
/jobs/<jobid>/vertices/<vertexid>/subtasks/<subtaskindex>
```

Metrics can also be requested aggregated across all entities of the respective type:

```
/taskmanagers/metrics
/jobs/metrics
/jobs/<jobid>/vertices/<vertexid>/subtasks/metrics
```
These APIs allow external services to fetch, for example, subtask metrics for analysis. Coordinator metrics are likewise an essential set of metrics that expose how a job is behaving. Currently, the main clients of these REST APIs are the Web UI and the Kubernetes Operator.
Public Interfaces
This proposal adds a new REST API endpoint and a new configuration option, `metrics.scope.coordinator`.
Proposed Changes
I propose a new REST API endpoint:

```
/jobs/<jobid>/vertices/<vertexid>/coordinator-metrics
```

It accepts the query parameter `get`, a comma-separated list of metric names, like the other metrics APIs. This path is based on https://github.com/apache/flink/blob/7bebd2d9fac517c28afc24c0c034d77cfe2b43a6/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/dump/QueryScopeInfo.java#L234.
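As a sketch of how a client would address the proposed endpoint (only the path shape and the `get` parameter come from this proposal; the helper function name, host, and IDs below are hypothetical):

```python
# Hypothetical helper that builds the proposed coordinator-metrics URL.
# Only the path and the `get` query parameter are from this FLIP; the
# function name, base address, and IDs are illustrative placeholders.
def coordinator_metrics_url(base, job_id, vertex_id, metric_names=None):
    url = f"{base}/jobs/{job_id}/vertices/{vertex_id}/coordinator-metrics"
    if metric_names:
        # Like the other metrics APIs, `get` takes comma-separated metric names.
        url += "?get=" + ",".join(metric_names)
    return url

# Example (localhost:8081 is Flink's default REST port):
url = coordinator_metrics_url(
    "http://localhost:8081", "myjob", "myvertex",
    ["mySource.numRecordsIn", "mySource.pendingRecords"],
)
```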
Response:

```
[
  {
    "id": "<source_name>.<metric_name1>"
  },
  {
    "id": "<source_name>.<metric_name2>"
  },
  ...
]
```
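A client consuming this response could decode it as follows (a minimal sketch; the metric names are made up for the example):

```python
import json

# Illustrative payload in the response format proposed above;
# the source and metric names are invented for this example.
sample = """
[
  {"id": "mySource.numRecordsInPerSecond"},
  {"id": "mySource.pendingRecords"}
]
"""

metric_ids = [entry["id"] for entry in json.loads(sample)]
```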
This response format is consistent with the other metrics APIs and extends the same utility classes (`AbstractMetricsHandler`). The name `coordinator-metrics` makes it obvious that the metrics come from the OperatorCoordinator, and avoids the ambiguity of something like `operator-metrics` (which operator would that be?). This endpoint should also be integrated with the Flink Web UI.
In addition, I also propose to fix the metric scope [1]:

metrics.scope.operator
- Default: <host>.taskmanager.<tm_id>.<job_name>.<operator_name>.<subtask_index>
- Applied to all metrics that are scoped to an operator.

The default should not contain the subtask index, since the coordinator does not correspond to any subtask. This configuration could also be renamed to `metrics.scope.coordinator`, since `operator` is vague. The docs will point to the new config key, and backward compatibility will be provided for the old one.
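To illustrate the scope issue, here is a toy expansion of the scope format string (Flink's real implementation lives in Java scope-format classes; this sketch only mimics the `<variable>` substitution described in the metrics docs, with invented values):

```python
# Toy scope-format expansion mimicking the docs' <variable> placeholders.
def expand_scope(fmt, variables):
    for name, value in variables.items():
        fmt = fmt.replace(f"<{name}>", value)
    return fmt

variables = {
    "host": "host1",
    "tm_id": "tm1",
    "job_name": "myjob",
    "operator_name": "mySource",
    "subtask_index": "0",
}

# Current default: the trailing <subtask_index> is meaningless for a
# coordinator, which is not tied to any particular subtask.
current = expand_scope(
    "<host>.taskmanager.<tm_id>.<job_name>.<operator_name>.<subtask_index>",
    variables,
)
```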
[1] https://nightlies.apache.org/flink/flink-docs-master/docs/ops/metrics/#system-scope
Compatibility, Deprecation, and Migration Plan
There are no compatibility concerns: this proposal introduces a new API without modifying existing ones, and the remaining code modifications are to internal code. The changes to `QueryScopeInfo` are internal and are not expected to be breaking, because we do not yet expose a method to query these metrics. The metric scope config change is backward compatible.
Test Plan
Unit tests.
Rejected Alternatives
1. Exposing the operator id in the API, e.g. /jobs/<jobid>/vertices/<vertexid>/operators/<operatorid>/metrics.

There are two considerations:

1. Integrating the Flink UI to show source coordinator metrics. The Flink UI currently does not expose per-operator metrics, only task metrics; operator-granularity metrics are instead exposed through the metric reporters. So the operator id is unnecessary for this case.

2. Integrating the Flink Kubernetes Operator to read autoscaling metrics from the source enumerator. The K8s operator currently reads vertex metrics from the Flink metrics REST API to perform autoscaling. The operator id is unnecessary here as well; in fact, a vertex can only contain one source. Therefore, we do not need a parameter for the operator id.