refactor!: Refactor storage creation and caching, configuration and services #1386
Conversation
Rework of `service_locator`'s implicit setting of services, storages, and storage creation.
Can you please expand the PR description with an explanation of how this updated logic works with the Apify SDK? Namely, I'm interested in the way it overrides the global storage client. For instance, …
Judging from apify/apify-sdk-python#576, it should be fine. But we should make sure that this is covered by tests.
A few comments
The SDK PR includes the tests that cover the interaction of `BasicCrawler` and `Actor` with respect to init and services; the description is also included in the linked PR.
No, configuring the crawler with a custom `storage_client` will not set it in the global `service_locator`. This side effect was removed.
It will work, and `Actor.init` will take it from the global service locator. But if you set a storage client (explicitly or implicitly) in the global service locator and then try to use a different storage client in `Actor`, it will raise `ServiceConflictError`. See the tests in the linked PR.
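At the crawlee level, the conflict looks roughly like this (a sketch only; the setter name on `service_locator` and the `ServiceConflictError` import path are my assumptions, and the `Actor`-level behavior is what the linked SDK tests cover):

```python
from crawlee import service_locator
from crawlee.errors import ServiceConflictError
from crawlee.storage_clients import FileSystemStorageClient, MemoryStorageClient

# The first storage client registered in the global service locator wins.
service_locator.set_storage_client(MemoryStorageClient())

try:
    # Registering a different storage client afterwards conflicts with the
    # one that is already set.
    service_locator.set_storage_client(FileSystemStorageClient())
except ServiceConflictError:
    print('The global storage client was already set to a different instance.')
```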
That means that this:

```python
crawler = BasicCrawler(storage_client=MemoryStorageClient())
# ...
await crawler.run()
dataset = await Dataset.open()
await dataset.export_data()
```

will behave differently after this PR, correct? While I agree that it is better to be explicit about this, I'm pretty sure that it will surprise someone.
Yes, it can surprise people who are used to the old behavior. But I think this makes more sense, especially since we have public methods for getting storages on BasicCrawler.
Alternatively, you can set the `storage_client` globally and not pass it to the crawler.
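For example, a sketch of that alternative (assuming `service_locator.set_storage_client()` is the way to register the client globally; the crawler and `Dataset.open()` would then both pick it up as the implicit default):

```python
import asyncio

from crawlee import service_locator
from crawlee.crawlers import BasicCrawler
from crawlee.storage_clients import MemoryStorageClient
from crawlee.storages import Dataset


async def main() -> None:
    # Register the storage client globally instead of passing it to the crawler.
    service_locator.set_storage_client(MemoryStorageClient())

    crawler = BasicCrawler()  # no storage_client argument needed here
    # ... register handlers and run the crawler ...

    # Dataset.open() resolves the same globally registered client,
    # so it sees the data produced by the crawler.
    dataset = await Dataset.open()


asyncio.run(main())
```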
Can we come up with some warning if this happens? Another concern: if a crawler uses a slightly modified instance of …
Adding a warning and a test to the SDK PR: apify/apify-sdk-python@5bf51f7
A few more
I like it.
Just a few comments.
one last nit, otherwise LGTM
LGTM
…576)

### Description
- All relevant parts of `Actor` are initialized in `async init`, not in `__init__`.
- `Actor` is considered finalized after `Actor.init` was run. This also means that the same configuration used by the `Actor` is set in the global `service_locator`.
- There are three valid scenarios for setting up the configuration:
  - Setting the global configuration in `service_locator` before `Actor.init`.
  - Having no configuration set in `service_locator`, setting it through `Actor(configuration=...)`, and running `Actor.init()`.
  - Having no configuration set in `service_locator` and no configuration passed to `Actor`, which creates and sets an implicit default configuration.
- Properly set `ApifyFileSystemStorageClient` as the local client to support a pre-existing input file.
- Depends on apify/crawlee-python/pull/1386.
- Enable caching of `ApifyStorageClient` based on `token` and `api_public_url`, and update NDU storage handling.

### Issues
Related to: #513, #590

### Testing
- Added many new initialization tests that show possible and prohibited use cases: https://github.com/apify/apify-sdk-python/pull/576/files#diff-d64e1d346cc84a225ace3eb1d1ca826ff1e25c77064c9b1e0145552845fa7b41
- Running a benchmark actor based on this and the related Crawlee branch.

Co-authored-by: Vlada Dusek <[email protected]>
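As a rough illustration of the second of the three configuration scenarios above (the `configuration=` keyword is taken from the description; treat the exact signatures as assumptions rather than the final SDK API):

```python
import asyncio

from apify import Actor, Configuration


async def main() -> None:
    # Nothing has been set in the global service_locator yet, so the
    # configuration passed here is the one that Actor.init() registers globally.
    configuration = Configuration()

    async with Actor(configuration=configuration):
        Actor.log.info('Actor initialized with an explicitly passed configuration.')


asyncio.run(main())
```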
Description
This is a collection of closely related changes that are hard to separate from one another. The main purpose is to enable flexible storage use across the code base without unexpected limitations and to limit unexpected side effects in global services.
Top-level changes:
- Attempting to override an already set service now raises `ServiceConflictError`.
- `StorageInstanceManager` allows similar but different storage instances to be used at the same time. (Previously, a similar storage instance could be incorrectly retrieved instead of a new storage instance being created, for example for storages that use the same `StorageClient` and are different only by using a different `Configuration`.)
- `Crawler` can no longer cause side effects in the global `service_locator` (apart from adding new instances to `StorageInstanceManager`).
- The global `service_locator` can be used at the same time as local instances of `ServiceLocator` (for example, each `Crawler` has its own `ServiceLocator` instance, which does not interfere with the global `service_locator`).
- Services in the global `ServiceLocator` can be set only once. Any attempt to reset them will throw an error. Not setting the services and still using them is possible; that will set the services in `ServiceLocator` to implicit defaults and log warnings, since implicit services can lead to hard-to-predict code. The preferred way is to set services explicitly, either manually or through some helper code, for example through `Actor` (see the sketch below and the related PR).
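A minimal sketch of the preferred, explicit setup (the setter names on the global `service_locator` are my assumption of the API surface):

```python
from crawlee import service_locator
from crawlee.configuration import Configuration
from crawlee.storage_clients import MemoryStorageClient

# Set the global services explicitly, once, before anything retrieves them.
# Any later attempt to reset them would raise an error instead of silently
# swapping the service.
service_locator.set_configuration(Configuration())
service_locator.set_storage_client(MemoryStorageClient())
```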
Implementation notes:
- Apart from `name`, `id`, `storage_type` and `storage_client_type`, there is also an `additional_cache_key`. This can be used by the `StorageClient` to define a unique way to distinguish between two similar but different instances. For example, `FileSystemStorageClient` depends on `Configuration.storage_dir`, which is included in the custom cache key for `FileSystemStorageClient`, but this is not true for `MemoryStorageClient`, as the `storage_dir` is not relevant for it; see the sketch below. (This `additional_cache_key` could possibly be used for caching of NDU in feat: Add support for NDU storages #1401.)
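A sketch of the intended caching behavior described above (the `Dataset.open()` keyword arguments and the `storage_dir` field name are assumptions based on this description, not a verified API reference):

```python
import asyncio

from crawlee.configuration import Configuration
from crawlee.storage_clients import FileSystemStorageClient, MemoryStorageClient
from crawlee.storages import Dataset


async def main() -> None:
    config_a = Configuration(storage_dir='./storage_a')
    config_b = Configuration(storage_dir='./storage_b')

    # FileSystemStorageClient puts storage_dir into its additional cache key,
    # so different directories should yield two distinct Dataset instances.
    fs_a = await Dataset.open(configuration=config_a, storage_client=FileSystemStorageClient())
    fs_b = await Dataset.open(configuration=config_b, storage_client=FileSystemStorageClient())
    assert fs_a is not fs_b

    # MemoryStorageClient ignores storage_dir, so the differing configurations
    # should not produce different cache keys here.
    mem_a = await Dataset.open(configuration=config_a, storage_client=MemoryStorageClient())
    mem_b = await Dataset.open(configuration=config_b, storage_client=MemoryStorageClient())
    assert mem_a is mem_b


asyncio.run(main())
```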
- Each crawler now has its own `ServiceLocator`. It will either use the services explicitly passed to the crawler init (configuration, storage client, event manager) or the services from the global `service_locator` as implicit defaults. This allows multiple differently configured crawlers to work in the same code; for example, see the sketch after this list.
- `ServiceLocator` is now much more strict when it comes to setting the services. Previously, it allowed changing services until some service had its `_was_retrieved` flag set to `True`; then it would throw a runtime error. This led to hard-to-predict code, as the global `service_locator` could be changed as a side effect from many places. Now the services in `ServiceLocator` can be set only once, and the side effects of attempting to change the services are limited as much as possible. Such side effects are also accompanied by warning messages to draw attention to code that could cause a `RuntimeError`.
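For example, a sketch of two differently configured crawlers coexisting in one script (the `storage_client` argument mirrors the snippet discussed in the conversation above; everything else is illustrative):

```python
import asyncio

from crawlee.crawlers import HttpCrawler, HttpCrawlingContext
from crawlee.storage_clients import FileSystemStorageClient, MemoryStorageClient


async def main() -> None:
    # Each crawler gets its own storage client; neither of them touches the
    # global service_locator, so the two crawlers do not interfere.
    crawler_a = HttpCrawler(storage_client=MemoryStorageClient())
    crawler_b = HttpCrawler(storage_client=FileSystemStorageClient())

    @crawler_a.router.default_handler
    async def handler_a(context: HttpCrawlingContext) -> None:
        await context.push_data({'url': context.request.url, 'crawler': 'a'})

    @crawler_b.router.default_handler
    async def handler_b(context: HttpCrawlingContext) -> None:
        await context.push_data({'url': context.request.url, 'crawler': 'b'})

    await crawler_a.run(['https://crawlee.dev'])
    await crawler_b.run(['https://crawlee.dev'])


asyncio.run(main())
```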
Issues
Closes: #1379
Connected to:
- `StorageInstanceManager`
- `StorageInstanceManager` and storage clients/configuration-related changes in `service_locator`
Testing
- Tested on the Apify platform together with the SDK changes in the related PR.