
fix: store pubsubfrontend clone in rpcinner #1977

Merged
merged 7 commits into from
Jan 31, 2025

Conversation

mattsse
Member

@mattsse mattsse commented Jan 30, 2025

closes #1972

this is a bit cursed.
The docs describe this behavior, but because of how

RpcClient::new(self.builder.service(transport), is_local)

creates the transport service object when there's an additional layer, this downcast does not work:

pub fn pubsub_frontend(&self) -> Option<&alloy_pubsub::PubSubFrontend> {
    self.transport.as_any().downcast_ref::<alloy_pubsub::PubSubFrontend>()
}
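The failure mode can be illustrated with plain `std::any` downcasting (the types below are stand-ins for illustration, not the real alloy types): once a layer wraps the transport, the concrete type behind the `dyn Any` is the layered wrapper, so a downcast to the inner frontend type returns `None`.

```rust
use std::any::Any;

// Stand-ins for illustration; not the real alloy types.
struct PubSubFrontend;
struct RetryLayer<T>(T); // an extra middleware layer wrapping the transport

fn main() {
    // Without a layer, the boxed value really is a PubSubFrontend,
    // so the downcast succeeds.
    let plain: Box<dyn Any> = Box::new(PubSubFrontend);
    assert!(plain.downcast_ref::<PubSubFrontend>().is_some());

    // With a layer, the concrete type is RetryLayer<PubSubFrontend>,
    // so a downcast to PubSubFrontend fails.
    let layered: Box<dyn Any> = Box::new(RetryLayer(PubSubFrontend));
    assert!(layered.downcast_ref::<PubSubFrontend>().is_none());
}
```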

the solution to this is a workaround that captures the PubSubFrontend before we collapse the service builder.
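A minimal sketch of the workaround, with hypothetical field and constructor names (not the actual alloy code): store a clone of the frontend next to the type-erased transport, so it stays reachable even after the layer stack erases the concrete type.

```rust
use std::any::Any;

// Hypothetical stand-ins for illustration.
#[derive(Clone)]
struct PubSubFrontend;

struct RpcInner {
    transport: Box<dyn Any>,        // the (possibly layered) boxed transport
    pubsub: Option<PubSubFrontend>, // extra clone captured before boxing
}

impl RpcInner {
    // Capture the clone *before* the service builder collapses the stack.
    fn from_frontend(frontend: PubSubFrontend) -> Self {
        let pubsub = Some(frontend.clone());
        let transport: Box<dyn Any> = Box::new(frontend); // layers would wrap here
        Self { transport, pubsub }
    }

    fn pubsub_frontend(&self) -> Option<&PubSubFrontend> {
        // Works regardless of layers, since we stored the clone directly
        // instead of downcasting the boxed transport.
        self.pubsub.as_ref()
    }
}

fn main() {
    let inner = RpcInner::from_frontend(PubSubFrontend);
    assert!(inner.pubsub_frontend().is_some());
}
```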

this can still be bypassed manually via

pub fn new(t: impl IntoBoxTransport, is_local: bool) -> Self {

but all the built-in transport fns should now capture the PubSubFrontend.
The PubSubFrontend is just a channel handle, so capturing an additional clone seems fine:


let rpc_client = RpcClient::builder().layer(retry_layer).ws(ws).await?;

let provider = ProviderBuilder::new().disable_recommended_fillers().on_client(rpc_client);
Member

nit: no need for this

Suggested change
let provider = ProviderBuilder::new().disable_recommended_fillers().on_client(rpc_client);
let provider = ProviderBuilder::new().on_client(rpc_client);

also I wish we had the ability to configure layers on the ProviderBuilder without having to drop down to RpcClient, so you'd just do

ProviderBuilder::new().layer(retry_layer).on_builtin(url).await?

@DaniPopes DaniPopes left a comment


lgtm, the only thing is that it creates a copy of the internal state instead of sharing it, so we might want to Arc the PubSubFrontend internally?

@mattsse
Member Author

mattsse commented Jan 31, 2025

that it creates a copy of the internal state instead of sharing

can you elaborate on this? unclear what state you mean

@DaniPopes
Member

the PubSubFrontend struct itself is now both inside the transport and in the new field

@mattsse
Member Author

mattsse commented Jan 31, 2025

this is fine because this is just another copy of the sender half

pub struct PubSubFrontend {
    tx: mpsc::UnboundedSender<PubSubInstruction>,
    /// The number of items to buffer in new subscription channels. Defaults to
    /// 16. See [`tokio::sync::broadcast::channel`] for a description.
    channel_size: AtomicUsize,
}

which also gets dropped when the transport drops
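The "just another copy of the sender half" point can be demonstrated with a plain std mpsc channel (used here as a stand-in for tokio's unbounded channel): cloning the sender only duplicates a cheap handle onto the same queue, and the channel closes once every sender, including the clone, has been dropped.

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel::<&str>();
    let extra = tx.clone(); // cheap handle copy; both feed the same channel

    extra.send("via clone").unwrap();
    assert_eq!(rx.recv().unwrap(), "via clone");

    // The channel only closes once *all* sender halves drop,
    // mirroring how the stored clone drops along with the transport.
    drop(tx);
    drop(extra);
    assert!(rx.recv().is_err());
}
```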

@mattsse mattsse merged commit cfb13aa into main Jan 31, 2025
27 checks passed
@mattsse mattsse deleted the matt/store-pubsubfrontendtransport branch January 31, 2025 00:26
Successfully merging this pull request may close these issues.

[Bug] subscribe_blocks could not downcast to pubsub when creating provider using RetryBackoffLayer
3 participants