Conversation

@SangJunBak
Contributor

Motivation

Tips for reviewer

Checklist

  • This PR has adequate test coverage / QA involvement has been duly considered. (trigger-ci for additional test/nightly runs)
  • This PR has an associated up-to-date design doc, is a design doc (template), or is sufficiently small to not require a design.
  • If this PR evolves an existing $T ⇔ Proto$T mapping (possibly in a backwards-incompatible way), then it is tagged with a T-proto label.
  • If this PR will require changes to cloud orchestration or tests, there is a companion cloud PR to account for those changes that is tagged with the release-blocker label (example).
  • If this PR includes major user-facing behavior changes, I have pinged the relevant PM to schedule a changelog post.

    mz_instance_name: &String,
) -> Result<Vec<ServiceInfo>> {
    // Get the Materialize CR to extract the active generation
    let mz_cr = get_materialize_cr(client, k8s_namespace, mz_instance_name).await?;
Contributor
Nitpick, it would be nicer to just get this once, rather than in each function that needs it. Not a big deal, though.
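A hypothetical sketch of that refactor, with illustrative stand-in types rather than the PR's actual helpers: fetch the Materialize CR once at the call site, then thread the active generation through as a plain value instead of refetching the CR inside each helper.

```rust
// Illustrative stand-in for the Materialize CR; the real code uses the
// kube-rs custom resource type and the get_materialize_cr helper.
struct MaterializeCr {
    active_generation: u64,
}

// Fetch (or in this sketch, read) the active generation once...
fn active_generation(mz_cr: &MaterializeCr) -> u64 {
    mz_cr.active_generation
}

// ...then pass the value down, so helpers compare against it directly
// instead of each making their own API round trip for the CR.
fn matches_active_generation(statefulset_generation: u64, active_generation: u64) -> bool {
    statefulset_generation == active_generation
}
```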

})?;

// Filter by active generation - only include services whose owner StatefulSet
// has metadata.generation matching the Materialize CR's active_generation
Contributor
Suggested change
// has metadata.generation matching the Materialize CR's active_generation
// has materialize.cloud/generation matching the Materialize CR's active_generation

Kubernetes also has a concept of generation and it means something else.
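To illustrate the distinction in a simplified sketch (field names are stand-ins for the k8s-openapi `ObjectMeta`): `metadata.generation` is a spec-revision counter bumped by the Kubernetes API server, whereas the rollout generation this code compares against lives in the `materialize.cloud/generation` annotation.

```rust
use std::collections::HashMap;

// Simplified stand-in for ObjectMeta: `generation` models Kubernetes'
// own metadata.generation, while the Materialize rollout generation is
// stored separately as a string-valued annotation.
struct Meta {
    generation: i64,
    annotations: HashMap<String, String>,
}

// Read the orchestrator's generation from the annotation, not from
// metadata.generation, which tracks something else entirely.
fn materialize_generation(meta: &Meta) -> Option<i64> {
    meta.annotations
        .get("materialize.cloud/generation")?
        .parse::<i64>()
        .ok()
}
```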

Comment on lines 219 to 258
let cluster_services: Vec<ServiceInfo> = services
    .iter()
    .filter_map(|service| {
        let name = service.metadata.name.clone()?;
        let spec = service.spec.clone()?;
        let selector = spec.selector?;
        let ports = spec.ports?;

        // Check if this is a cluster service
        if selector.get("environmentd.materialize.cloud/namespace")? != "cluster" {
            return None;
        }

        // Check if the owner reference points to an environmentd StatefulSet
        // in the same mz instance
        let envd_statefulset_reference_name = service
            .metadata
            .owner_references
            .as_ref()?
            .iter()
            // There should only be one StatefulSet reference to environmentd
            .find(|owner_reference| owner_reference.kind == "StatefulSet")?
            .name
            .clone();

        // Find the StatefulSet that owns this service and check if it matches the active generation
        let matching_statefulset = statefulsets.iter().find(|statefulset| {
            statefulset.metadata.name.as_ref() == Some(&envd_statefulset_reference_name)
        })?;

        // Filter by active generation - only include services whose owner StatefulSet
        // has metadata.generation matching the Materialize CR's active_generation
        let statefulset_generation = matching_statefulset
            .metadata
            .annotations
            .as_ref()?
            .get("materialize.cloud/generation")?
            .parse::<i64>()
            .ok()?;
        let active_generation_i64 = i64::try_from(active_generation).ok()?;
        if statefulset_generation != active_generation_i64 {
            return None;
        }
Contributor
Shouldn't we just do the following?

  1. Determine the environmentd statefulset name.
  2. Iterate through all services, filtering on if they are owned by the environmentd statefulset.

We shouldn't need to do any other filtering, since the environmentd statefulset is per-generation and doesn't own any objects of other generations.
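A minimal sketch of that simplification, assuming hypothetical stand-in types (the real code uses `ServiceInfo` and the k8s-openapi `Service`): resolve the per-generation environmentd StatefulSet name once, then keep only the services it owns, with no separate generation comparison.

```rust
#[derive(Clone, PartialEq, Debug)]
struct OwnerRef {
    kind: String,
    name: String,
}

#[derive(Clone, PartialEq, Debug)]
struct Svc {
    name: String,
    owners: Vec<OwnerRef>,
}

// Keep only the services whose owner reference is the (per-generation)
// environmentd StatefulSet. Because that StatefulSet owns no objects from
// other generations, ownership alone implies the right generation.
fn cluster_services(envd_statefulset: &str, services: &[Svc]) -> Vec<Svc> {
    services
        .iter()
        .filter(|svc| {
            svc.owners
                .iter()
                .any(|o| o.kind == "StatefulSet" && o.name == envd_statefulset)
        })
        .cloned()
        .collect()
}
```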
