There are multiple scenarios where we want to invalidate the cached data so that the next time we need it, it's fetched from the server, but we don't need the new data just yet.
Scenario 1:
On an edit form, we fetch the current data, make edits, then save and go back to the previous screen. The save invalidates the cache for the current data, which triggers an immediate refetch (but we don't need the new data yet because we are going back to the previous screen).
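For context, here is a minimal sketch of this setup, assuming RTK Query and hypothetical `getItem`/`updateItem` endpoints: the mutation invalidates the tag provided by the detail query, and because the edit form is still subscribed when the save completes, the refetch fires right away.

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

interface Item {
  id: string;
  name: string;
}

export const itemsApi = createApi({
  reducerPath: 'itemsApi',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  tagTypes: ['Item'],
  endpoints: (builder) => ({
    getItem: builder.query<Item, string>({
      query: (id) => `items/${id}`,
      providesTags: (_result, _error, id) => [{ type: 'Item', id }],
    }),
    updateItem: builder.mutation<Item, Partial<Item> & { id: string }>({
      query: ({ id, ...patch }) => ({
        url: `items/${id}`,
        method: 'PATCH',
        body: patch,
      }),
      // Invalidating the tag marks the cached entry stale; since the edit form
      // is still subscribed to getItem at this point, the refetch happens
      // immediately, even though we are about to navigate away.
      invalidatesTags: (_result, _error, { id }) => [{ type: 'Item', id }],
    }),
  }),
});
```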
Scenario 2:
Consider a table with pagination and multiple filters. There are actions for each row of the table, and one such action can remove all of the rows from the current page. If the user is currently on the last page and removes all of its rows, we have to invalidate the cache, which triggers a refetch even though there is no data for that page anymore. We then manually set the page back to 1 after the action, which triggers another refetch, and this time there is data. So two API calls go out: one from cache invalidation on the mutation, and another after the search parameters are reset.
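A rough sketch of the double fetch, again assuming RTK Query and hypothetical `useGetUsersQuery`/`useRemoveUsersMutation` hooks generated by an api slice whose mutation invalidates the list tag:

```tsx
import { useState } from 'react';
// Hooks from a hypothetical api slice; the mutation invalidates the tag
// provided by getUsers.
import { useGetUsersQuery, useRemoveUsersMutation } from './usersApi';

function UsersTable() {
  const [page, setPage] = useState(1);
  const { data } = useGetUsersQuery({ page });    // subscribed with the current page arg
  const [removeUsers] = useRemoveUsersMutation(); // invalidates the 'Users' list tag

  const handleRemoveAllOnPage = async (ids: string[]) => {
    await removeUsers(ids).unwrap(); // request 1: invalidation refetches the now-empty last page
    setPage(1);                      // request 2: the new { page: 1 } argument triggers another fetch
  };

  return null; // table rendering omitted
}
```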
We are currently handling Scenario 1 by always refetching on mount and not invalidating any tags on mutation. This ensures we have fresh data every time, but it also means we lose the benefit of caching. We could not find any solution for Scenario 2.
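The workaround for Scenario 1 roughly corresponds to enabling `refetchOnMountOrArgChange` instead of relying on tag invalidation. A sketch, with the api slice name again hypothetical:

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

export const itemsApi = createApi({
  reducerPath: 'itemsApi',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  // Forces a refetch whenever a component subscribes (or its argument
  // changes), so the data is always fresh, but cached entries are
  // effectively never served for reads.
  refetchOnMountOrArgChange: true,
  endpoints: () => ({}), // endpoint definitions omitted
});
```

The same option can also be passed per hook, e.g. `useGetItemQuery(id, { refetchOnMountOrArgChange: true })`, if only some screens need guaranteed-fresh data.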