Feat(e2e): support multiple aggregators in the e2e tests #2378
```diff
@@ -143,7 +143,6 @@ impl AggregatorRuntime {
         info!(self.logger, "→ Trying to transition to READY"; "last_time_point" => ?last_time_point);

         let can_try_transition_from_idle_to_ready = if self.config.is_slave {
-            println!("Checking if slave aggregator is at the same epoch as master");
             self.runner
                 .is_slave_aggregator_at_same_epoch_as_master(&last_time_point)
                 .await?
```
```diff
@@ -265,18 +264,19 @@ impl AggregatorRuntime {
             self.runner
                 .update_stake_distribution(&new_time_point)
                 .await?;
-            if self.config.is_slave {
-                self.runner
-                    .synchronize_slave_aggregator_signer_registration()
-                    .await?;
-            }
-
             self.runner.inform_new_epoch(new_time_point.epoch).await?;

             self.runner.upkeep(new_time_point.epoch).await?;
             self.runner
                 .open_signer_registration_round(&new_time_point)
                 .await?;
             self.runner.update_epoch_settings().await?;
+            if self.config.is_slave {
+                self.runner
+                    .synchronize_slave_aggregator_signer_registration()
+                    .await?;
+                // Needed to recompute epoch data for the next signing round on the slave
+                self.runner.inform_new_epoch(new_time_point.epoch).await?;
+            }
             self.runner.precompute_epoch_data().await?;
         }
```
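Read end to end, the resulting new-epoch handling runs in the order sketched below. This is a condensed illustration of the diff, with stub types standing in for the real runtime so the snippet compiles on its own; only the call order is meaningful:

```rust
// Stub types standing in for the real runtime; only the call order matters.
struct Config { is_slave: bool }
struct Runner;
type Epoch = u64;
struct TimePoint { epoch: Epoch }

impl Runner {
    async fn update_stake_distribution(&self, _t: &TimePoint) {}
    async fn inform_new_epoch(&self, _e: Epoch) {}
    async fn upkeep(&self, _e: Epoch) {}
    async fn open_signer_registration_round(&self, _t: &TimePoint) {}
    async fn update_epoch_settings(&self) {}
    async fn synchronize_slave_aggregator_signer_registration(&self) {}
    async fn precompute_epoch_data(&self) {}
}

async fn on_new_epoch(config: &Config, runner: &Runner, new_time_point: TimePoint) {
    runner.update_stake_distribution(&new_time_point).await;
    // First call: the epoch service advances to the new epoch.
    runner.inform_new_epoch(new_time_point.epoch).await;

    runner.upkeep(new_time_point.epoch).await;
    runner.open_signer_registration_round(&new_time_point).await;
    runner.update_epoch_settings().await;

    if config.is_slave {
        // Pull signer registrations from the master aggregator, then inform
        // the epoch service again so epoch data for the next signing round
        // (notably the next signers) is recomputed with the synced data.
        runner.synchronize_slave_aggregator_signer_registration().await;
        runner.inform_new_epoch(new_time_point.epoch).await;
    }
    runner.precompute_epoch_data().await;
}
```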
```diff
@@ -940,7 +940,7 @@ mod tests {
         runner
             .expect_inform_new_epoch()
             .with(predicate::eq(new_time_point_clone.clone().epoch))
-            .once()
+            .times(2)
             .returning(|_| Ok(()));
         runner
             .expect_update_epoch_settings()
```

Review comment on the `.times(2)` line: Why do we need to change the number of calls?
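For context, the mock above uses mockall call-count expectations: `.once()` asserts exactly one call, `.times(2)` exactly two, and the expectation is verified when the mock is dropped. A minimal self-contained sketch, with an `EpochInformer` trait invented for the example:

```rust
use mockall::{automock, predicate};

#[automock]
trait EpochInformer {
    fn inform_new_epoch(&self, epoch: u64) -> Result<(), String>;
}

#[test]
fn inform_new_epoch_is_expected_twice() {
    let mut mock = MockEpochInformer::new();
    mock.expect_inform_new_epoch()
        .with(predicate::eq(42u64))
        .times(2) // the test fails unless the method is called exactly twice
        .returning(|_| Ok(()));

    // Exactly two matching calls satisfy the expectation.
    mock.inform_new_epoch(42).unwrap();
    mock.inform_new_epoch(42).unwrap();
}
```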
Can you explain how this change helps to stabilize the e2e tests? I'm quite puzzled by the fact that we need to call `runner.inform_new_epoch` twice.

From what I understand, this doesn't impact the methods called between the two `inform_new_epoch` calls:

- the `runner.upkeep` call should not be impacted
- `open_signer_registration_round` does nothing on a slave
- `update_epoch_settings` should not be impacted, as the data registered by the epoch service (protocol parameters and transactions signing config) doesn't depend on the master aggregator

The functional impacts should be:

- the `next_signers` change in the interval between the two `inform_new_epoch` calls
- the `inform_epoch` calls will be done without needing a roundtrip to the master aggregator

Is the last point the problem on fast networks? Maybe the synchronizer should be able to "edit" the next signers in the `epoch_service` instead?
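To make that closing suggestion concrete, the alternative could look roughly like the sketch below. Everything here is hypothetical: `update_next_signers`, `Synchronizer`, and the trait shape are invented for illustration and do not exist in the codebase. It only shows the synchronizer editing the epoch service's next signers in place rather than triggering a second `inform_new_epoch`:

```rust
// Hypothetical shape for the suggested alternative: after fetching signer
// registrations from the master, patch the epoch service's next signers
// directly instead of informing the epoch a second time.
struct Signer {
    party_id: String,
}

trait EpochService {
    // Invented method: overwrite the next signers for the coming round.
    fn update_next_signers(&mut self, signers: Vec<Signer>);
}

struct Synchronizer {
    // Registrations previously fetched from the master aggregator.
    synced_signers: Vec<Signer>,
}

impl Synchronizer {
    fn apply_to(self, epoch_service: &mut dyn EpochService) {
        // A single in-place edit replaces the second inform_new_epoch pass.
        epoch_service.update_next_signers(self.synced_signers);
    }
}
```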