
Conversation


skkra0 commented Jan 22, 2026

  • Deploy apps without volume mounts as Deployments rather than StatefulSets.

    • Updated getAppStatus to read metadata from Deployments

    • Note: Scaling a deployment through AnvilOps triggers a new ReplicaSet to be created, replacing every pod even when no other configuration changed. This happens because every update made by AnvilOps changes the deployment id, which is included in the pod spec (labels and environment variables), as sketched after this list. To avoid this, the deployment id would need to be removed from the pod spec without breaking the log shipper; alternatively, replicas could be scaled separately from the updateApp operation so the deployment id is not incremented at all.

  • Some minor fixes

    • Helm allowed checks
    • Retry listRepoBranches on error
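
A minimal sketch of the mechanism behind that note, assuming the generator injects the deployment id into the pod template roughly like this (the label key, env var name, and function name are illustrative, not AnvilOps' actual identifiers). Any change under spec.template, including a label or environment variable, produces a new pod template hash, so the Deployment controller rolls out a new ReplicaSet:

import { V1PodTemplateSpec } from "@kubernetes/client-node";

// Hypothetical sketch, not the real AnvilOps generator.
function buildPodTemplate(appName: string, image: string, deploymentId: string): V1PodTemplateSpec {
  return {
    metadata: {
      labels: {
        app: appName,
        // Bumping this label on every update changes the pod template hash,
        // which forces a new ReplicaSet and a full pod rollout.
        "anvilops/deployment-id": deploymentId,
      },
    },
    spec: {
      containers: [
        {
          name: appName,
          image,
          // The id is also exposed to the log shipper as an env var.
          env: [{ name: "DEPLOYMENT_ID", value: deploymentId }],
        },
      ],
    },
  };
}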


skkra0 commented Jan 22, 2026

Some notes for a future PR, after removing the deployment id from the labels and environment variables:

The Status tab needs a way to identify the pods from the newest deployment. Without a convenient deployment id label, this must be handled differently between Deployments and StatefulSets.
StatefulSet Pod: Test pod.metadata.labels["controller-revision-hash"] === statefulset.status.updateRevision
Deployment Pod:

  1. Get the revision deployment.metadata.annotations["deployment.kubernetes.io/revision"]
  2. Get the ReplicaSet with the same revision
  3. Test pod.metadata.labels["pod-template-hash"] === replicaset.metadata.labels["pod-template-hash"]
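
A rough sketch of those two checks against the @kubernetes/client-node types (the function names are placeholders; fetching the objects is omitted):

import { V1Deployment, V1Pod, V1ReplicaSet, V1StatefulSet } from "@kubernetes/client-node";

// StatefulSet pod: the newest pods carry the controller-revision-hash that
// matches the StatefulSet's current updateRevision.
function isCurrentStatefulSetPod(sts: V1StatefulSet, pod: V1Pod): boolean {
  const rev = sts.status?.updateRevision;
  return rev !== undefined && pod.metadata?.labels?.["controller-revision-hash"] === rev;
}

// Deployment pod: find the ReplicaSet whose revision annotation matches the
// Deployment's, then compare pod-template-hash labels.
function isCurrentDeploymentPod(deployment: V1Deployment, replicaSets: V1ReplicaSet[], pod: V1Pod): boolean {
  const revision = deployment.metadata?.annotations?.["deployment.kubernetes.io/revision"];
  const current = replicaSets.find(
    (rs) => rs.metadata?.annotations?.["deployment.kubernetes.io/revision"] === revision,
  );
  const hash = current?.metadata?.labels?.["pod-template-hash"];
  return hash !== undefined && pod.metadata?.labels?.["pod-template-hash"] === hash;
}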

FluxCapacitor2 commented

  1. On the StatefulSet → Deployment change, are you doing this because StatefulSets are slow to scale? If so, I think we might be able to get most of the way to Deployment-like behavior by changing podManagementPolicy from the default OrderedReady to Parallel, but I haven't actually tried it:
❯ kubectl explain statefulset.spec.podManagementPolicy
GROUP:      apps
KIND:       StatefulSet
VERSION:    v1

FIELD: podManagementPolicy <string>

DESCRIPTION:
    podManagementPolicy controls how pods are created during initial scale up,
    when replacing pods on nodes, or when scaling down. The default policy is
    `OrderedReady`, where pods are created in increasing order (pod-0, then
    pod-1, etc) and the controller will wait until each pod is ready before
    continuing. When scaling down, the pods are removed in the opposite order.
    The alternative policy is `Parallel` which will create pods in parallel to
    match the desired scale without waiting, and on scale down will delete all
    pods at once.
    
    Possible enum values:
     - `"OrderedReady"` will create pods in strictly increasing order on scale
    up and strictly decreasing order on scale down, progressing only when the
    previous pod is ready or terminated. At most one pod will be changed at any
    time.
     - `"Parallel"` will create and delete pods as soon as the stateful set
    replica count is changed, and will not wait for pods to be ready or complete
    termination.
  2. If switching applicable apps to Deployments is necessary, please modify the template generation so that there's one shared function that returns a V1PodTemplate or V1PodTemplateSpec, and use it in both the Deployment and StatefulSet generators (see the sketch after this list).

  3. I think the labels are valuable for observability tools, and they make deployments feel more "atomic" since it's more explicit which version of the app a pod is running. Are you saying the restarts are a problem because they take a minute, because they introduce a bit of downtime, or something else? If the answer is downtime, we can probably solve it by switching to Deployments when applicable, since they can start new pods before terminating old ones. That way, changing the labels wouldn't cause any perceived downtime for stateless apps.
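
A minimal sketch of points 1 and 2 together, using placeholder generator names rather than the existing AnvilOps code: one shared pod template builder consumed by both generators, and podManagementPolicy set to Parallel on the StatefulSet:

import { V1Deployment, V1PodTemplateSpec, V1StatefulSet } from "@kubernetes/client-node";

// Shared pod template used by both generators (sketch; names are hypothetical).
function buildPodTemplateSpec(appName: string, image: string): V1PodTemplateSpec {
  return {
    metadata: { labels: { app: appName } },
    spec: { containers: [{ name: appName, image }] },
  };
}

function buildDeployment(appName: string, image: string, replicas: number): V1Deployment {
  return {
    metadata: { name: appName },
    spec: {
      replicas,
      selector: { matchLabels: { app: appName } },
      template: buildPodTemplateSpec(appName, image),
    },
  };
}

function buildStatefulSet(appName: string, image: string, replicas: number): V1StatefulSet {
  return {
    metadata: { name: appName },
    spec: {
      replicas,
      serviceName: appName,
      selector: { matchLabels: { app: appName } },
      // Parallel creates and deletes pods without waiting for each one to
      // become ready, removing most of the scaling slowness of OrderedReady.
      podManagementPolicy: "Parallel",
      template: buildPodTemplateSpec(appName, image),
    },
  };
}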


skkra0 commented Jan 27, 2026

  1. That's very interesting. Yes, I wrote this because Deployments tend to scale faster than StatefulSets. Setting podManagementPolicy to Parallel is much simpler, so let's do that instead and close this PR for now.
  2. 👍
  3. I think it's unnecessary for every pod to be replaced when all you change is the replica count. What do you think of splitting the replica count off from the rest of the config so it can be changed without incrementing the deployment id (sketched below)?
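
A minimal sketch of what that split could look like, with a placeholder helper name: a scale-only update leaves spec.template untouched, so the pod template hash stays the same, no new ReplicaSet is created, and no pods are replaced:

// Illustrative only: a merge-patch body that changes nothing but spec.replicas.
function scaleOnlyPatch(replicas: number): { spec: { replicas: number } } {
  return { spec: { replicas } };
}

Kubernetes also exposes a dedicated scale subresource (what kubectl scale uses) that changes only spec.replicas, which could back such a split.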

skkra0 closed this Jan 27, 2026
