[6/n] simplify FetchArtifactImpl interface #8040
Conversation
installinator/src/fetch.rs
```diff
 #[async_trait]
 pub(crate) trait FetchArtifactImpl: fmt::Debug + Send + Sync {
-    fn peers(&self) -> Box<dyn Iterator<Item = PeerAddress> + Send + '_>;
-    fn peer_count(&self) -> usize;
+    fn peers(&self) -> PeerAddresses;
```
Could this be
```diff
-fn peers(&self) -> PeerAddresses;
+fn peers(&self) -> &PeerAddresses;
```
and let the caller clone it if desired?
Ah actually -- the mock/fake implementation generates a `PeerAddresses` on the fly. We could store the addresses in the required form, I guess.
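A minimal sketch of the tension being discussed, with hypothetical stand-in types (the real `PeerAddresses` lives in installinator and may differ):

```rust
use std::collections::BTreeSet;
use std::net::SocketAddrV6;

// Hypothetical stand-in for the real type.
#[derive(Clone, Debug, Default)]
struct PeerAddresses(BTreeSet<SocketAddrV6>);

struct MockBackend {
    raw: Vec<SocketAddrV6>,
}

impl MockBackend {
    // Returning an owned value lets the mock build the set per call.
    fn peers(&self) -> PeerAddresses {
        PeerAddresses(self.raw.iter().copied().collect())
    }

    // Returning `&PeerAddresses` would force the mock to store the
    // set up front; a reference to a per-call value would dangle:
    //
    //     fn peers(&self) -> &PeerAddresses {
    //         let set = PeerAddresses(self.raw.iter().copied().collect());
    //         &set // ERROR: returns a reference to a local
    //     }
}
```

Storing the addresses in the required form up front, as suggested, is what would make the borrowed-return variant workable.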
There's been a long-standing issue with the installinator where reports to wicketd get delayed quite substantially. That happens because we use one task to send reports out to every peer, blocking until all of those peers come back. I'd assumed in the past that if installinator reached out to an unreachable peer, it would receive a TCP connection refused message, but that isn't the case -- instead, it times out. This causes reports to only be sent out roughly every 15 seconds, which isn't ideal.

To fix this issue, spin up separate report tasks for each peer. Introduce separate tasks for:

* discovery (make this a persistent task that publishes updates to a watch channel)
* peer reconciliation (new peers have new report tasks spun up, while removed peers' report tasks are cancelled)
* reporting (each peer now has its own report loop)

Also simulate some kinds of network flakiness in our property-based tests.

I did a mupdate on dublin and saw that installinator reports started coming through every 2 seconds rather than every 15 or so, as expected. (A rough sketch of this task layout follows the dependency list below.)

Depends on:

- #8035
- #8036
- #8037
- #8038
- #8039
- #8040
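As a rough illustration of that task layout, here is a minimal sketch assuming tokio: a persistent discovery task publishes the current peer set to a `tokio::sync::watch` channel, a reconciliation loop spawns or aborts per-peer report tasks, and each report loop runs independently so one unreachable peer delays only itself. `Peer`, `discover_peers`, `send_report`, and the intervals are all hypothetical stand-ins, not installinator's actual code.

```rust
use std::collections::{BTreeMap, BTreeSet};
use std::net::SocketAddrV6;
use std::time::Duration;

use tokio::sync::watch;
use tokio::task::JoinHandle;

// Hypothetical stand-ins for installinator's real types and calls.
type Peer = SocketAddrV6;

async fn discover_peers() -> BTreeSet<Peer> {
    BTreeSet::new() // ...run peer discovery here...
}

async fn send_report(_peer: Peer) {
    // ...send one report to one peer; this may time out...
}

pub async fn run() {
    let (tx, mut rx) = watch::channel(BTreeSet::<Peer>::new());

    // Discovery: a persistent task that publishes the peer set.
    tokio::spawn(async move {
        loop {
            tx.send_replace(discover_peers().await);
            tokio::time::sleep(Duration::from_secs(5)).await;
        }
    });

    // Reconciliation: spawn a report loop for each new peer and
    // abort the loops of peers that have disappeared.
    let mut tasks: BTreeMap<Peer, JoinHandle<()>> = BTreeMap::new();
    while rx.changed().await.is_ok() {
        let current = rx.borrow_and_update().clone();
        tasks.retain(|peer, handle| {
            let keep = current.contains(peer);
            if !keep {
                handle.abort();
            }
            keep
        });
        for peer in current {
            tasks.entry(peer).or_insert_with(|| {
                tokio::spawn(async move {
                    // Reporting: each peer gets its own loop, so a
                    // slow or unreachable peer delays only itself.
                    loop {
                        send_report(peer).await;
                        tokio::time::sleep(Duration::from_secs(2)).await;
                    }
                })
            });
        }
    }
}
```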
Rather than returning a boxed iterator, simply return the set of peers as a concrete type.
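A hypothetical sketch of what the concrete type buys (the real `PeerAddresses` API in installinator may differ): the set carries its own length, which is also why the separate `peer_count` method could be dropped.

```rust
use std::collections::BTreeSet;
use std::net::SocketAddrV6;

/// Hypothetical stand-in for the real concrete set type.
#[derive(Clone, Debug, Default)]
pub struct PeerAddresses(BTreeSet<SocketAddrV6>);

impl PeerAddresses {
    /// The set knows its own size, replacing `peer_count`.
    pub fn len(&self) -> usize {
        self.0.len()
    }

    /// Iteration no longer needs a `Box<dyn Iterator>`: callers can
    /// borrow an iterator from the returned value directly.
    pub fn iter(&self) -> impl Iterator<Item = &SocketAddrV6> + '_ {
        self.0.iter()
    }
}
```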
I wanted to also align `FetchArtifactBackend` with `ReportProgressBackend` (a single persistent backend rather than new ones generated each time discovery happens), but it was a bigger lift than I wanted to handle right now, so I decided to punt on that. I've added a note about doing this in the future.