Conversation

@Christopher-Chianelli (Contributor)

Also settle the node network in Bavet when the working solution is set to a new instance, so the time taken by precomputes is not considered by terminations.

@triceo (Collaborator) left a comment

Not approving yet, since I think that the node network reset issue can possibly lead to large changes to this PR.

@triceo (Collaborator) left a comment

LGTM when comments resolved.

public void insert(Tuple_ tuple) {
    if (tupleRecorder != null) {
        throw new IllegalStateException("Impossible state: tuple %s was inserted during recording".formatted(tuple));
        throw new IllegalStateException("""
@triceo (Collaborator)

I assume we're addressing this in a follow-up PR?

@Christopher-Chianelli (Contributor Author)

This is basically unique to flattenLast, since it is the only operation I am aware of that can do inserts on an update (primarily when different instances are detected). For example:

record ShiftLocationPair(Shift shift, Location location) {}

precomputeFactory.forEachUnfiltered(Shift.class)
     .flattenLast(shift -> shift.getLocations()
             .stream()
             .map(location -> new ShiftLocationPair(shift, location))
             .toList()
     )

@triceo (Collaborator)

I know, but I assumed that we were going to solve this in a follow-up PR. Is that not the case?

IMO the solution you proposed - the distinct approach - is not acceptable. Are you saying there isn't any other one?

@Christopher-Chianelli (Contributor Author)

As for how to address it, we either treat the entire precompute as distinct (so we can identify tuples by their contents), or we treat it as an illegal argument to precompute. To see the particular problem, imagine we have a very bad flattenLast:

constraintFactory.forEachUnfiltered(Shift.class)
     .flattenLast(shift -> List.of(new Random().nextInt()))

The list contents differ, so each update produces new tuples.

We could try to make flattenLast identify tuples via iteration order during precompute, but that relies on the iteration order being consistent, and flattenLast can be used with any iterable, including sets.

@triceo (Collaborator), Oct 27, 2025

I've been thinking for a long time that flattenLast needs a redesign. The implementation is horrible - it works, but arguably it could be better. And this particular problem could/should be fixed there.

If we once produced a tuple for element X, and we see element X in the new collection, we should not be triggering a new tuple. If we originally produced 2 tuples for element X and now we only see one element X, then we of course need to retract one. Whether new tuples are triggered or the existing ones are updated is an implementation detail - maybe we should reconsider how flattenLast does things.
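
For illustration, a minimal sketch of that equality-keyed behaviour: equal elements keep their existing tuples (update), and only the per-element count difference becomes inserts or retracts. The class and callback names are hypothetical, not the actual flattenLast implementation:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Function;

// Hypothetical sketch of an equality-keyed flattenLast update.
final class EqualityKeyedFlatten<E, T> {

    // Existing tuples, grouped by element value (equals()/hashCode()).
    private final Map<E, List<T>> tuplesByElement = new HashMap<>();

    void onUpdate(Iterable<E> newCollection,
            Function<E, T> insert, Consumer<T> update, Consumer<T> retract) {
        // Count how often each (equal) element occurs in the new collection.
        Map<E, Integer> newCounts = new HashMap<>();
        newCollection.forEach(element -> newCounts.merge(element, 1, Integer::sum));

        // Retract surplus tuples, update the ones that survive.
        tuplesByElement.entrySet().removeIf(entry -> {
            List<T> tuples = entry.getValue();
            int wanted = newCounts.getOrDefault(entry.getKey(), 0);
            while (tuples.size() > wanted) {
                retract.accept(tuples.remove(tuples.size() - 1));
            }
            tuples.forEach(update);
            return tuples.isEmpty();
        });

        // Insert tuples for elements that occur more often than before (or are new).
        newCounts.forEach((element, wanted) -> {
            List<T> tuples = tuplesByElement.computeIfAbsent(element, k -> new ArrayList<>());
            while (tuples.size() < wanted) {
                tuples.add(insert.apply(element));
            }
        });
    }
}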

Wrt. your example - I agree that it is bad, and it is bad for more reasons than one. This cannot be precomputed, because it is not static. Random is ultimately a variable, and therefore doesn't meet the basic requirements. Let's discuss a different example, one that can actually happen. The idea is that the collection will always produce the same thing - otherwise it is not static.

@Christopher-Chianelli (Contributor Author)

FWIW, flattenLast already does updates if the references in the collection are the same, i.e.:

constraintFactory.forEachUnfiltered(Shift.class)
     .flattenLast(shift -> List.of(shift))


If we do updates even when references are not the same, we depend on iteration order, which would break things like this:

constraintFactory.forEachUnfiltered(Shift.class)
     .flattenLast(shift -> {
         // Fact has no equals or hashCode, so identity hash code is used
         var a = new Fact(shift.getA());
         var b = new Fact(shift.getB());
         var set = new HashSet<Fact>();
         set.add(a);
         set.add(b);
         return set;
     })

(i.e. in one iteration, a might come first; in another, b might come first).

@triceo (Collaborator)

The iteration order shouldn't matter though - that's why I'm saying flattenLast is currently wrong.

Assume that originally we inserted X, X, Y, X.
Now at update, we see X, Y, X.

What this tells me is that we need to retract X once. (Doesn't matter which, they are all the same X.) The other 2 Xs and the Y need to be updated.

I don't see why the iteration order should matter here. And if it does, then arguably precompute requires LinkedHashSet or List.

@Christopher-Chianelli (Contributor Author)

1. Do note that the X and Y we are talking about here are NEW INSTANCES of equal value. The iteration order we are discussing would be used as a proxy for what should be equivalent; i.e. we are talking about flattenLast(x -> List.of(new Holder(x))), not flattenLast(x -> List.of(x)).
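
To make the "new instances of equal value" point concrete, here is a small illustration; the Holder record and the demo class are hypothetical stand-ins, not code from this PR:

import java.util.List;

public class IdentityVsEqualityDemo {

    // Hypothetical wrapper standing in for the objects created inside flattenLast;
    // as a record it gets value-based equals()/hashCode() for free.
    record Holder<T>(T value) {}

    public static void main(String[] args) {
        var locations = List.of("Warehouse", "Office");

        // Two evaluations of the same flattenLast lambda (insert, then update)
        // produce new Holder instances of equal value.
        var onInsert = locations.stream().map(Holder::new).toList();
        var onUpdate = locations.stream().map(Holder::new).toList();

        System.out.println(onInsert.equals(onUpdate));          // true  - equal values
        System.out.println(onInsert.get(0) == onUpdate.get(0)); // false - different instances
        // Matching by reference therefore treats the update's Holders as brand-new elements.
    }
}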

If we inserted X, X, Y, X and then it was X, Y, X on update, then the stream being precomputed returned different values.

The issue is we need to identify shared tuples; for example:

precomputeFactory.forEachUnfilteredUniquePair(Shift.class)
    .map(Pair::of)
    .flattenLast(pair -> List.of(pair));

with two shifts A and B, there is one output tuple (A, B), and an update on either A or B updates that output tuple, and it should only penalize by 1, not 2. We use update when precomputing to discover which tuples A/B directly affect.

Assume on insert, flattenLast returned X, Y (stored in tuples Tx and Ty respectively); Tx passes a filter, Ty does not. When the node network is updated to query A's tuples, flattenLast returned X, Y, so A gets Tx as its tuple. When the node network is updated to query B's tuples, flattenLast returned Y, X, so B gets Ty as its tuple. And thus a penalty of 2 instead of 1, since A and B were both supposed to get Tx.

@Christopher-Chianelli (Contributor Author)

As for how flattenLast currently operates on update: it keeps track of how many times it encountered a duplicate reference (i.e. by id/pointer), with a separate tuple for each duplicate. If more duplicates or a new item are encountered, new tuples are created. If fewer duplicates are encountered, some tuples are retracted.
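
For illustration, a rough sketch of that reference-counted bookkeeping, using an IdentityHashMap to mirror the "by id/pointer" matching; the class and callbacks are hypothetical, not the actual Bavet node code:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Function;

// Hypothetical sketch of the current reference-based flattenLast update.
final class ReferenceCountedTuples<E, T> {

    // Keyed by reference (==), not equals(): an IdentityHashMap.
    private final Map<E, Deque<T>> tuplesByReference = new IdentityHashMap<>();

    void onUpdate(Iterable<E> newCollection,
            Function<E, T> insert, Consumer<T> update, Consumer<T> retract) {
        // Tuples from the previous collection that have not been claimed yet.
        Map<E, Deque<T>> unclaimed = new IdentityHashMap<>();
        tuplesByReference.forEach((element, tuples) -> unclaimed.put(element, new ArrayDeque<>(tuples)));
        tuplesByReference.clear();

        for (E element : newCollection) {
            Deque<T> previous = unclaimed.get(element);
            T tuple;
            if (previous != null && !previous.isEmpty()) {
                // Same reference seen again: reuse (update) one of its existing tuples.
                tuple = previous.poll();
                update.accept(tuple);
            } else {
                // New reference, or more duplicates than last time: create a new tuple.
                tuple = insert.apply(element);
            }
            tuplesByReference.computeIfAbsent(element, k -> new ArrayDeque<>()).add(tuple);
        }
        // Fewer duplicates than last time: retract whatever was never claimed.
        unclaimed.values().forEach(leftover -> leftover.forEach(retract));
    }
}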

@triceo (Collaborator)

So, talking about this specifically:

precomputeFactory.forEachUnfilteredUniquePair(Shift.class)
    .map(Pair::of)
    .flattenLast(pair -> List.of(pair));

We receive a Pair(a, b), such that:

  • a_then.equals(a_now) is true,
  • b_then.equals(b_now) is true,
  • therefore pair_then.equals(pair_now) is true.

flattenLast sees that the pairs are equal, and therefore should trigger an update, not an insert+retract. If that is already the case, I don't understand what we're talking about. If that is not the case, then flattenLast ought to be fixed.
