Polish the architecture of metrics aggregator (#465)
* Update CI workflow to 0.1.18, whose Go version is 1.24
Signed-off-by: kerthcet <[email protected]>
* Add missing integration test case for tensorrt-llm backend (#455)
* Update CI workflow to 0.1.18, whose Go version is 1.24 (#457)
* Update CI workflow to 0.1.18, whose Go version is 1.24
Signed-off-by: kerthcet <[email protected]>
* Fix lint
Signed-off-by: kerthcet <[email protected]>
* Fix test
Signed-off-by: kerthcet <[email protected]>
---------
Signed-off-by: kerthcet <[email protected]>
* Move DataStore to memStore
Signed-off-by: kerthcet <[email protected]>
* Polish the architecture
Signed-off-by: kerthcet <[email protected]>
---------
Signed-off-by: kerthcet <[email protected]>
Co-authored-by: CYJiang <[email protected]>
docs/proposals/376-metric-aggregagor/README.md: 15 additions & 43 deletions
@@ -1,4 +1,4 @@
-# Proposal-376: Gateway Metric Aggregator
+# Proposal-376: Metric Aggregator

<!--
This is the title of your Proposal. Keep it short, simple, and descriptive. A good
@@ -85,9 +85,8 @@ List the specific goals of the Proposal. What is it trying to achieve? How will
know that this has succeeded?
-->

-- A simple implementation with least-latency scheduling algorithm
-- Extensible with different consumers in the cluster, like the Lora autoscaler or the ai gateway
-- Metrics visualization support, like Grafana
+- A simple implementation with latency aware dispatching algorithm
+- Extensible with different consumers in the cluster, like the HPA autoscaler or the ai gateway

### Non-Goals
@@ -99,6 +98,7 @@ and make progress.
- Different scheduling algorithm implementations in ai gateway, like prefix-cache aware
- LoRA aware scheduling implementation, will be left to another KEP
- Performance consideration in big clusters should be left to the Beta level
+- How the HPA consumes the metrics should be left to another KEP.

## Proposal
@@ -175,41 +175,16 @@ The overall flow looks like:
Let's break down the flow into several steps:

- Step 1: we'll collect the metrics from the inference workloads in metrics aggregator.
-- Step 2: the aggregator will parse the metrics and store them in the redis, this is for HA consideration and cache sharing. Once the instance is down, we can still retrieve the metrics from redis. And if we have multiple instances, we can share the metrics with each other via redis. Considering Envoy AI gateway already uses Redis for limit rating, we'll reuse the Redis here.
-- Step 3 & 4: Traffic comes, the gateway plugin (we'll call it router later) will retrieve the metrics from Redis and make routing decisions based on different algorithms, like queue size aware scheduling.
+- Step 2: the aggregator will parse the metrics and store them in local memory. We use in-memory storage at first for quick startup and fast access; we may upgrade the architecture in the future, see the Drawbacks section for more details.
+- Step 3 & 4: Traffic comes, the gateway plugin (we'll call it router later) will retrieve the metrics from the storage and make routing decisions based on different algorithms, like latency-aware scheduling.
- Step 5: The router will send the request to the selected instance, and the instance will return the result to the router, return to the user finally.
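To illustrate Steps 3 & 4, here is a minimal Go sketch of a latency-aware pick over the aggregator's in-memory metrics, reusing the scoring and staleness ideas described elsewhere in this proposal (runningQueueSize*0.3 + waitingQueueSize*0.7, skip entries older than 5s, fall back to the default service). All type and function names are illustrative assumptions, not the actual implementation.

```go
// Illustrative router-side sketch for Steps 3 & 4: read per-Pod metrics from
// the aggregator's shared in-memory store and pick the lowest-scored Pod.
package router

import (
	"errors"
	"math"
	"time"
)

// PodMetrics is one Pod's queue snapshot (field names are assumptions).
type PodMetrics struct {
	RunningQueueSize float64
	WaitingQueueSize float64
	UpdatedAt        time.Time
}

// MetricsReader is the read-only view the router has over the aggregator's
// in-memory store; the interface name and shape are assumptions.
type MetricsReader interface {
	// List returns pod name -> latest metrics for a given model.
	List(modelName string) map[string]PodMetrics
}

const staleAfter = 5 * time.Second

// PickEndpoint returns the Pod with the lowest latency score, skipping stale
// entries. The weighting mirrors the formula mentioned in this proposal
// (runningQueueSize*0.3 + waitingQueueSize*0.7). If every entry is stale, an
// error is returned so the caller can fall back to the default service.
func PickEndpoint(r MetricsReader, modelName string) (string, error) {
	best, bestScore := "", math.MaxFloat64
	for pod, m := range r.List(modelName) {
		if time.Since(m.UpdatedAt) > staleAfter {
			continue // outdated metrics, skip this Pod
		}
		score := m.RunningQueueSize*0.3 + m.WaitingQueueSize*0.7
		if score < bestScore {
			best, bestScore = pod, score
		}
	}
	if best == "" {
		return "", errors.New("no fresh metrics: fall back to the default service")
	}
	return best, nil
}
```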
### Additional components introduced:

-- Metrics Aggregator (MA): MA is working as the controller plane to sync the metrics, this is also one of the reason why we want to decouple it from the router, which working as a data plane. MA has several components:
-  - A Pod controller to manage the Pod lifecycle, for example, once a Pod is ready, it will add it to the internal store, and each Pod will fork a background goroutine to sync the metrics continuously, 50ms interval by default. Once the Pod is deleted, the goroutine will be stopped and removed from the store.
-  - A internal store to parse the metric results, and store it in the backend storage, like Redis.
-- Redis: a Redis instance is necessary for the metrics storage and sharing, we can use the existing Redis instance in the cluster, or deploy a new one if necessary. We should have storage interface to support different backends in the future.
-- Router: a new router or [DynamicLoadBalancingBackend](https://github.com/envoyproxy/ai-gateway/blob/be2b479b04bc7a219b0c8239143bfbabebdcd615/filterapi/filterconfig.go#L199-L208) specifically in Envoy AI gateway to pick the best-fit Pod endpoints. However, we may block by the upstream issue [here](https://github.com/envoyproxy/ai-gateway/issues/604), we'll work with the Envoy AI Gateway team to resolve it ASAP. Maybe the final design will impact our implementation a bit but not much I think.
-
-### Data Structure
-
-The data structure could be varied based on the metrics we want to collect, let's take the queue size as an example:
-
-Because redis is a kv store, we'll use the ZSET to store the results, `LeastLatency::ModelName` as the key, Pod name as the member and the (runningQueueSize * 0.3 + waitingQueueSize * 0.7) as the score, the factor of waitingQueueSize is higher because metric is a delayed indicator. RunningQueueSize and WaitingQueueSize are two metrics most of the inference engines support.
-
-We'll also have another key to record the update timestamp. For example, a Pod named "default/fake-pod" with the score = 0.5, the set commands look like:
-
-```bash
-# set the update timestamp
-SET default/fake-pod "2025-05-12T06:16:27Z"
-
-# set the score
-ZADD LeastLatency::ModelName 0.5 default/fake-pod
-```
-
-When collecting, we'll update the timestamp and score together. Setting the top 5 is enough for us to help reduce the storage pressure since it's a memory-based database. We don't use the expiration key here is just because most of the time, the metrics should be updated at a regular interval.
-
-When retrieving, we'll first query the ZSET to get the top 5 records, and iterate them one by one to verify that `currentTimestamp - recordTimestamp < 5s`, if not, skipping to the next one. This is to avoid outdated metrics. Once picked the exact endpoint, we'll reset the score with waitingQueueSize + 1 to avoid hotspot issues, especially when metrics update is blocked by some reasons.
-
-If all metrics are outdated, we'll fallback to the default service.
-
-Note: the algorithm is not the final one, we'll have more discussions with the community to find the best one.
+- Metrics Aggregator (MA): MA works as the control plane to sync the metrics; however, it also works as a data plane at this moment. We will revisit this once we graduate to Beta/GA. MA has several components:
+  - A Pod controller to manage the Pod lifecycle: once a Pod is ready, it is added to the internal store, and a background goroutine is forked per Pod to sync the metrics continuously, at a 100ms interval by default. Once the Pod is deleted, the goroutine is stopped and the Pod is removed from the store.
+  - An internal store to parse the metric results and persist them in the backend storage; right now we only support an in-memory store, but the interface is defined and we can extend it later.
+- Router: an LLM request dispatcher to route the requests to specific Pods based on the metrics read from the MA. However, we may be blocked by the upstream issue [here](https://github.com/envoyproxy/ai-gateway/issues/604); we'll work with the Envoy AI Gateway team to resolve it ASAP. The final design may impact our implementation a bit, but not much, I think.
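As a sketch of the MA components described in the added lines above, the following Go code shows one possible shape for the internal store interface (in-memory only for now, per the "Move DataStore to memStore" commit in this PR) and the per-Pod goroutine that syncs metrics on a 100ms interval. Names and signatures are assumptions for illustration, not the actual code.

```go
// Illustrative sketch of the MA internals: a storage interface with an
// in-memory backend, plus the per-Pod goroutine the Pod controller forks to
// scrape metrics every 100ms until the Pod goes away.
package aggregator

import (
	"context"
	"sync"
	"time"
)

// Indicator holds one Pod's parsed metrics (fields are assumptions).
type Indicator struct {
	RunningQueueSize float64
	WaitingQueueSize float64
	UpdatedAt        time.Time
}

// Store abstracts the backend storage so other backends can be plugged in
// later; only the in-memory implementation exists for now.
type Store interface {
	Set(pod string, ind Indicator)
	Delete(pod string)
}

// memStore is the process-local, in-memory backend.
type memStore struct {
	mu   sync.RWMutex
	data map[string]Indicator
}

func NewMemStore() *memStore { return &memStore{data: map[string]Indicator{}} }

func (s *memStore) Set(pod string, ind Indicator) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[pod] = ind
}

func (s *memStore) Delete(pod string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.data, pod)
}

// syncPod is forked by the Pod controller when a Pod becomes ready and is
// cancelled via ctx when the Pod is deleted.
func syncPod(ctx context.Context, store Store, pod string, scrape func(pod string) (Indicator, error)) {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			store.Delete(pod) // Pod deleted: stop syncing and drop its entry
			return
		case <-ticker.C:
			ind, err := scrape(pod)
			if err != nil {
				continue // keep the last known value on scrape failure
			}
			ind.UpdatedAt = time.Now()
			store.Set(pod, ind)
		}
	}
}
```

Keeping the storage behind an interface is what would allow swapping in a database later without touching the router, which is exactly the extension point the proposal reserves for Beta.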
### Test Plan
@@ -308,10 +283,8 @@ milestones with these graduation criteria:
Beta:

-- Other storages rather than KV store who supports only key-value pairs which might be not enough for more complex scenarios, like the prefix-cache aware scenario.
-- HA support, once the metrics aggregator is down, the system should still work.
-- No performance issues in big clusters, we may use daemonsets to report metrics.
-- Once the picked Pod is down after the routing decision, router will fallback to the default service. Fallback mode is already supported in Envoy AI gateway.
+- No performance issues in big clusters, especially when we have multiple router instances there.
+- The data plane and the control plane should be decoupled.
## Implementation History
@@ -327,12 +300,11 @@ Major milestones might include:
-->

- 2025-05-08: Proposal initialized and submitted for review
+- 2025-05-19: Proposal polished with the new architecture design and flow diagram.

## Drawbacks

-<!--
-Why should this Proposal _not_ be implemented?
--->
+The biggest drawback of this proposal is that the router is now coupled with the metrics aggregator because of the shared memory store. In the future, we should optimize this either by using a database or by moving the metric reporting logic into the inference engines directly, which works as an event-driven architecture; the router instances would then watch the events to build a local memory, together with the metrics aggregator.

## Alternatives
@@ -342,4 +314,4 @@ not need to be as detailed as the proposal, but should include enough
information to express the idea and why it was not acceptable.
-->

-- When collecting metrics from the inference workloads, `PUSH` mode will put less pressure on the gateway side, or the gateway will have iterate all the Pods which obviously will lead to performance issues. We didn't pick the approach because it will either add additional load to the inference workload and introduces more complexity to the system. The current approach will fork as much goroutines as the number of inference workloads to sync the metrics in parallel, this is feasible because goroutine is lightweight. Once the metrics aggregator becomes the bottleneck, we can consider to use `PUSH` mode at node level.
+- When collecting metrics from the inference workloads, `PUSH` mode would put less pressure on the gateway side; otherwise the gateway has to iterate over all the Pods, which will obviously lead to performance issues. We didn't pick that approach because it would both add extra load to the inference workloads and introduce more complexity to the system. The current approach forks as many goroutines as there are inference workloads to sync the metrics in parallel, which is feasible because goroutines are lightweight. Once the metrics aggregator becomes the bottleneck, we can consider using `PUSH` mode at the node level.