
Conversation

@Pijukatel Pijukatel commented Sep 2, 2025

Description

This is a collection of closely related changes that are hard to separate from one another. The main purpose is to enable flexible storage use across the code base without unexpected limitations, and to limit unexpected side effects in global services.

Top-level changes:

  • There can be multiple crawlers with different storage clients, configurations, or event managers. (Previously, this would cause a ServiceConflictError.)
  • StorageInstanceManager allows similar but different storage instances to be used at the same time. (Previously, a similar storage instance could be incorrectly retrieved instead of creating a new one.)
  • Differently configured storages can be used at the same time, even storages that use the same StorageClient and differ only in their Configuration.
  • A crawler can no longer cause side effects in the global service_locator (apart from adding new instances to StorageInstanceManager).
  • The global service_locator can be used at the same time as local instances of ServiceLocator (for example, each Crawler has its own ServiceLocator instance, which does not interfere with the global service_locator).
  • Services in ServiceLocator can be set only once. Any attempt to reset them will throw an error. Using the services without setting them first is still possible: that will set them to an implicit default and log a warning, since implicit services can lead to hard-to-predict code. The preferred way is to set services explicitly, either manually or through helper code, for example through Actor. See the related PR.

Implementation notes:

  • Storage caching now supports all relevant ways to distinguish storage instances. Apart from generic parameters such as name, id, storage_type, and storage_client_type, there is also an additional_cache_key, which a StorageClient can use to define its own way of distinguishing between two similar but different instances. For example, FileSystemStorageClient depends on Configuration.storage_dir, so that directory is included in its custom cache key; this is not true for MemoryStorageClient, as storage_dir is not relevant for it. See the example:
    (This additional_cache_key could possibly be used for caching of NDU in feat: Add support for NDU storages #1401)
storage_client = FileSystemStorageClient()
d1 = await Dataset.open(storage_client=storage_client, configuration=Configuration(storage_dir="path1"))
d2 = await Dataset.open(storage_client=storage_client, configuration=Configuration(storage_dir="path2"))
d3 = await Dataset.open(storage_client=storage_client, configuration=Configuration(storage_dir="path1"))

assert d2 is not d1
assert d3 is d1

storage_client_2 = MemoryStorageClient()
d4 = await Dataset.open(storage_client=storage_client_2, configuration=Configuration(storage_dir="path1"))
d5 = await Dataset.open(storage_client=storage_client_2, configuration=Configuration(storage_dir="path2"))
assert d4 is d5
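The caching behavior can also be illustrated with a self-contained toy sketch. The names below (InstanceManager, the additional_cache_key hook) are hypothetical stand-ins for crawlee's internals, not its actual API:

```python
# Hypothetical sketch of cache keying in a storage instance manager.
# Only illustrates the role of additional_cache_key; real crawlee differs.

class MemoryClient:
    def additional_cache_key(self, configuration):
        return None  # storage_dir is irrelevant for in-memory storage

class FileSystemClient:
    def additional_cache_key(self, configuration):
        return configuration["storage_dir"]  # the path matters on disk

class InstanceManager:
    def __init__(self):
        self._cache = {}

    def open(self, name, client, configuration):
        # The cache key combines generic parameters with the
        # client-specific additional_cache_key.
        key = (name, type(client).__name__, client.additional_cache_key(configuration))
        if key not in self._cache:
            self._cache[key] = object()  # stand-in for a real storage instance
        return self._cache[key]

manager = InstanceManager()
fs = FileSystemClient()
d1 = manager.open("default", fs, {"storage_dir": "path1"})
d2 = manager.open("default", fs, {"storage_dir": "path2"})
d3 = manager.open("default", fs, {"storage_dir": "path1"})
assert d2 is not d1  # different storage_dir, different instance
assert d3 is d1      # same storage_dir, cached instance reused

mem = MemoryClient()
d4 = manager.open("default", mem, {"storage_dir": "path1"})
d5 = manager.open("default", mem, {"storage_dir": "path2"})
assert d4 is d5      # storage_dir ignored for the in-memory client
```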
  • Each crawler creates its own instance of ServiceLocator. It uses either the services explicitly passed to the crawler init (configuration, storage_client, event_manager) or the services from the global service_locator as implicit defaults. This allows multiple differently configured crawlers to work in the same code. For example:
custom_configuration_1 = Configuration()
custom_event_manager_1 = LocalEventManager.from_config(custom_configuration_1)
custom_storage_client_1 = MemoryStorageClient()

custom_configuration_2 = Configuration()
custom_event_manager_2 = LocalEventManager.from_config(custom_configuration_2)
custom_storage_client_2 = MemoryStorageClient()

crawler_1 = BasicCrawler(
    configuration=custom_configuration_1,
    event_manager=custom_event_manager_1,
    storage_client=custom_storage_client_1,
)

crawler_2 = BasicCrawler(
    configuration=custom_configuration_2,
    event_manager=custom_event_manager_2,
    storage_client=custom_storage_client_2,
)

# use crawlers without runtime crash...
  • ServiceLocator is now much stricter when it comes to setting the services. Previously, it allowed changing services until some service had its _was_retrieved flag set to True; then it would throw a runtime error. This led to hard-to-predict code, as the global service_locator could be changed as a side effect from many places. Now the services in ServiceLocator can be set only once, and the side effects of attempting to change them are limited as much as possible. Such attempts are also accompanied by warning messages to draw attention to code that could cause a RuntimeError.
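As a rough illustration of the set-once semantics, here is a hypothetical minimal locator. crawlee's real ServiceLocator manages more services and richer errors; this toy only shows "set once, implicit default with a warning":

```python
# Hypothetical sketch of the set-once semantics described above.
import warnings

class ServiceConflictError(RuntimeError):
    pass

class ServiceLocator:
    def __init__(self):
        self._configuration = None

    def set_configuration(self, configuration):
        # Services can be set only once; resetting is an error.
        if self._configuration is not None:
            raise ServiceConflictError("configuration was already set")
        self._configuration = configuration

    def get_configuration(self):
        # Using an unset service falls back to an implicit default
        # and logs a warning to flag hard-to-predict code.
        if self._configuration is None:
            warnings.warn("No configuration set; using an implicit default.")
            self._configuration = {"implicit_default": True}
        return self._configuration

locator = ServiceLocator()
config = locator.get_configuration()            # implicit default, with a warning
assert locator.get_configuration() is config    # stable on repeated access
try:
    locator.set_configuration({"custom": True})  # resetting now raises
    raise AssertionError("expected ServiceConflictError")
except ServiceConflictError:
    pass
```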

Issues

Closes: #1379
Connected to:

Testing

  • New unit tests were added.
  • Tested on the Apify platform together with SDK changes in related PR

@github-actions github-actions bot added this to the 122nd sprint - Tooling team milestone Sep 2, 2025
@github-actions github-actions bot added t-tooling Issues with this label are in the ownership of the tooling team. tested Temporary label used only programmatically for some analytics. labels Sep 2, 2025
@Pijukatel Pijukatel changed the title feat: Rework storage creation and caching, configuration and services feat!: Rework storage creation and caching, configuration and services Sep 10, 2025
@Pijukatel Pijukatel changed the title feat!: Rework storage creation and caching, configuration and services refactor!: Refactor storage creation and caching, configuration and services Sep 10, 2025
@Pijukatel Pijukatel marked this pull request as ready for review September 10, 2025 15:01
@janbuchar

janbuchar commented Sep 11, 2025

Can you please expand the PR description with an explanation of how this updated logic works with the Apify SDK? Namely, I'm interested in the way it overrides the global storage client. For instance,

  • If I reconfigure the storage client in a crawler constructor, will that one be preserved after Actor.init?
  • If I reconfigure the storage client in the global service locator, will Actor.init keep it that way? Will it not crash?

Judging from apify/apify-sdk-python#576, it should be fine. But we should make sure that this is covered by tests.


@vdusek vdusek left a comment


A few comments

@Pijukatel
Collaborator Author

Can you please expand the PR description with an explanation of how this updated logic works with the Apify SDK? Namely I'm interested in the way it overrides the global storage client. For instance,

Judging from apify/apify-sdk-python#576, it should be fine. But we should make sure that this is covered by tests.

The SDK PR includes the tests that cover the interaction of BasicCrawler and Actor with respect to init and services; the description is also included in the linked PR.

* If I reconfigure the storage client in a crawler constructor, will that one be preserved after `Actor.init`?

No, configuring the crawler with a custom storage_client will not set it in the global service_locator. This side effect was removed.

* If I reconfigure the storage client in the global service locator, will `Actor.init` keep it that way? Will it not crash?

It will work, and Actor.init will take it from the global service locator. However, if you set a storage client (explicitly or implicitly) in the global service locator and then try to use a different storage client in Actor, it will raise a ServiceConflictError. See the tests from the linked PR:

async def test_existing_apify_config_respected_by_actor() -> None:
    """Set Apify Configuration in service_locator and verify that Actor respects it."""
    max_used_cpu_ratio = 0.123456  # Some unique value to verify configuration
    apify_config = ApifyConfiguration(max_used_cpu_ratio=max_used_cpu_ratio)
    service_locator.set_configuration(apify_config)
    async with Actor:
        pass

    returned_config = service_locator.get_configuration()
    assert returned_config is apify_config


async def test_existing_apify_config_throws_error_when_set_in_actor() -> None:
    """Test that passing explicit configuration to actor after service locator configuration was already set,
    raises exception."""
    service_locator.set_configuration(ApifyConfiguration())
    with pytest.raises(ServiceConflictError):
        async with Actor(configuration=ApifyConfiguration()):
            pass

@janbuchar

* If I reconfigure the storage client in a crawler constructor, will that one be preserved after `Actor.init`?

No, configuring the crawler with a custom storage_client will not set it in the global service_locator. This side effect was removed.

That means that this:

crawler = BasicCrawler(storage_client=MemoryStorageClient())
# ...
await crawler.run()
dataset = await Dataset.open()
await dataset.export_data()

will behave differently after this PR, correct? While I agree that it is better to be explicit about this, I'm pretty sure that it will surprise someone.

@Pijukatel
Collaborator Author

Pijukatel commented Sep 12, 2025

That means that this:

crawler = BasicCrawler(storage_client=MemoryStorageClient())
# ...
await crawler.run()
dataset = await Dataset.open()
await dataset.export_data()

will behave differently after this PR, correct? While I agree that it is better to be explicit about this, I'm pretty sure that it will surprise someone.

Yes, it can surprise people who are used to the old behavior. But I think this makes more sense, especially since we have public methods for getting storages on BasicCrawler.
Example for a dataset (the same applies to RQ and KVS):

# Depends on the global service_locator
default_dataset = await Dataset.open()
# Depends on the crawler service_locator (it can be the same as the previous, but it can be custom)
crawler_default_dataset = await crawler.get_dataset() 

Alternatively, you can set the storage_client globally and not pass it to the crawler: service_locator.set_storage_client(MemoryStorageClient())
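The two resolution paths can be sketched with a toy model. All names here are hypothetical, not crawlee's API; the point is only that a crawler-local locator falls back to the global one without ever mutating it:

```python
# Hypothetical sketch: a crawler-local locator alongside a global one.

class Locator:
    def __init__(self, storage_client=None):
        self.storage_client = storage_client or "default-client"

# Module-level locator, standing in for the global service_locator
# that Dataset.open() would consult.
global_locator = Locator()

class Crawler:
    def __init__(self, storage_client=None):
        # Explicit services win; otherwise the crawler snapshots the
        # global defaults into its own locator. The global locator is
        # never modified (the old side effect is gone).
        client = storage_client or global_locator.storage_client
        self._locator = Locator(client)

    def get_dataset_client(self):
        # Stand-in for crawler.get_dataset(), which resolves against
        # the crawler-local locator.
        return self._locator.storage_client

crawler = Crawler(storage_client="memory-client")
assert crawler.get_dataset_client() == "memory-client"    # crawler-local
assert global_locator.storage_client == "default-client"  # untouched

fallback = Crawler()  # no explicit client: inherits the global default
assert fallback.get_dataset_client() == "default-client"
```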

@Pijukatel Pijukatel requested a review from vdusek September 12, 2025 12:00
@janbuchar

janbuchar commented Sep 12, 2025

That means that this:

crawler = BasicCrawler(storage_client=MemoryStorageClient())
# ...
await crawler.run()
dataset = await Dataset.open()
await dataset.export_data()

will behave differently after this PR, correct? While I agree that it is better to be explicit about this, I'm pretty sure that it will surprise someone.

Yes, it can surprise people who are used to the old behavior. But I think this makes more sense, especially since we have public methods for getting storages on BasicCrawler. Example for dataset (Same situation for RQ and KVS):

Can we come up with some warning if this happens?

Another concern: if a crawler uses a slightly modified instance of FileSystemStorageClient (e.g., a different path), Actor.init won't replace that. While it's true that it's more predictable, I believe it might worsen the DX... Also, that way the user will have to use the specialized ApifyFileSystemStorageClient explicitly if they want input.json handled as expected.

@Pijukatel
Collaborator Author

...
Can we come up with some warning if this happens?

Adding a warning and a test to the SDK PR: apify/apify-sdk-python@5bf51f7

@vdusek vdusek mentioned this pull request Sep 12, 2025
1 task

@vdusek vdusek left a comment


A few more


@Mantisus Mantisus left a comment


I like it.

Just a few comments.

@Pijukatel Pijukatel requested a review from vdusek September 16, 2025 09:21
@Pijukatel Pijukatel requested a review from vdusek September 16, 2025 10:15

@vdusek vdusek left a comment


one last nit, otherwise LGTM


@Mantisus Mantisus left a comment


LGTM

Co-authored-by: Vlada Dusek <[email protected]>
@Pijukatel Pijukatel merged commit 04649bd into master Sep 16, 2025
18 of 19 checks passed
@Pijukatel Pijukatel deleted the storage-clients-and-configurations-2 branch September 16, 2025 15:13
Pijukatel added a commit to apify/apify-sdk-python that referenced this pull request Sep 18, 2025
…576)

### Description

- All relevant parts of `Actor` are initialized in `async init`, not in `__init__`.
- `Actor` is considered finalized after `Actor.init` was run. This also
means that the same configuration used by the `Actor` is set in the
global `service_locator`.
- There are three valid scenarios for setting up the configuration:
  - Setting the global configuration in `service_locator` before `Actor.init`.
  - Having no configuration set in `service_locator`, setting it through `Actor(configuration=...)`, and running `Actor.init()`.
  - Having no configuration set in `service_locator` and passing none to `Actor`, which will create and set an implicit default configuration.
- Properly set `ApifyFileSystemStorageClient` as local client to support
pre-existing input file.
- Depends on apify/crawlee-python/pull/1386
- Enable caching of `ApifyStorageClient` based on `token` and
`api_public_url` and update NDU storage handling.
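The three configuration scenarios (and the conflict case tested in the SDK PR) can be sketched with a toy model; the class names and dict-based configurations below are hypothetical, not the real apify-sdk API:

```python
# Hypothetical sketch of the three valid Actor.init configuration flows.

class ServiceConflictError(RuntimeError):
    pass

class Locator:
    def __init__(self):
        self.configuration = None

class Actor:
    def __init__(self, locator, configuration=None):
        self._locator = locator
        self._explicit = configuration

    def init(self):
        if self._explicit is not None:
            if self._locator.configuration is not None:
                # Configuration set both globally and explicitly: conflict.
                raise ServiceConflictError("configuration already set")
            # Explicit configuration is registered globally.
            self._locator.configuration = self._explicit
        elif self._locator.configuration is None:
            # Nothing set anywhere: create and register an implicit default.
            self._locator.configuration = {"implicit": True}
        return self._locator.configuration

# 1) Global configuration set before init is respected.
loc = Locator(); loc.configuration = {"global": True}
assert Actor(loc).init() == {"global": True}

# 2) Explicit configuration passed to Actor is set globally by init.
loc = Locator()
assert Actor(loc, configuration={"explicit": True}).init() == {"explicit": True}
assert loc.configuration == {"explicit": True}

# 3) Nothing set: an implicit default is created and registered.
loc = Locator()
assert Actor(loc).init() == {"implicit": True}

# Setting both raises, as in test_existing_apify_config_throws_error_when_set_in_actor.
loc = Locator(); loc.configuration = {"global": True}
try:
    Actor(loc, configuration={"other": True}).init()
    raise AssertionError("expected ServiceConflictError")
except ServiceConflictError:
    pass
```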

### Issues

Related to: #513, #590

### Testing

- Added many new initialization tests that show possible and prohibited
use cases
https://github.com/apify/apify-sdk-python/pull/576/files#diff-d64e1d346cc84a225ace3eb1d1ca826ff1e25c77064c9b1e0145552845fa7b41
- Running benchmark actor based on this and the related Crawlee branch

---------

Co-authored-by: Vlada Dusek <[email protected]>