fix(files_reminders): reduce N+1 query issue for big folders #58288
salmart-dev wants to merge 1 commit into master from
Conversation
Signed-off-by: Salvatore Martire <4652631+salmart-dev@users.noreply.github.com>
/backport to stable33 please

/backport to stable32 please

/backport to stable31 please
I am wondering if it makes sense for CappedMemoryCache to discard old entries… I am also a bit afraid that 30k entries in the cache would use a lot of memory.
That is what the current code is already doing: we insert by key, and while inserting, entries beyond the cap are evicted, oldest first.
Sorry, it wasn't clear ^^ Maybe CappedMemoryCache should just stop discarding entries: when it's full, it serves the current entries and stops adding new ones.
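For illustration, a minimal sketch of that idea: a cap that, once reached, keeps serving existing entries and refuses new inserts instead of evicting old ones. The class below is purely hypothetical and not part of Nextcloud.

```php
<?php

// Illustrative only; no such class exists in Nextcloud.
class FullStopMemoryCache {
	/** @var array<string, mixed> */
	private array $entries = [];

	public function __construct(
		private int $capacity = 512,
	) {
	}

	public function set(string $key, mixed $value): bool {
		// Overwriting an existing key is always allowed.
		if (!array_key_exists($key, $this->entries) && count($this->entries) >= $this->capacity) {
			// Cache is full: keep serving what is already there, refuse the new entry.
			return false;
		}
		$this->entries[$key] = $value;
		return true;
	}

	public function get(string $key): mixed {
		return $this->entries[$key] ?? null;
	}
}
```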
But that would still mean executing N-512 queries to load data for directories with N files, no? I'm thinking of two things:
Summary
Increases the CappedMemoryCache size for ReminderService to 30k.
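For reference, a minimal sketch of what the change boils down to, assuming CappedMemoryCache takes its capacity as a constructor argument (with the 512 default described below); the field and constant names are illustrative, not the ones from the actual commit:

```php
<?php

use OCP\Cache\CappedMemoryCache;

class ReminderService {
	// Illustrative constant; the actual patch may name or place this differently.
	private const FOLDER_CACHE_CAPACITY = 30000;

	/** Per-request cache of reminders, keyed by file id (field name is illustrative). */
	private CappedMemoryCache $folderCache;

	public function __construct() {
		// Before: new CappedMemoryCache() with the implicit default capacity of 512,
		// so folders with more than 512 files overflow the cache during preload.
		$this->folderCache = new CappedMemoryCache(self::FOLDER_CACHE_CAPACITY);
	}
}
```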
Context
OCA\FilesReminders\Dav\PropFindPlugin preloads reminder information for an entire collection during PROPFIND. ReminderService has a cacheFolder method which uses a CappedMemoryCache as its cache. The problem is that the default size for this cache is 512, meaning that if a directory has more than 512 files inside we start getting cache misses and start querying the DB. What's worse is that once this condition hits, it always voids the pre-cached data and causes N+1 queries:
1. preloadCollection starts caching reminders for the files in a directory
2. Once the limit of the CappedMemoryCache is reached, the information that was cached earlier is discarded
3. Every file whose reminder was evicted (or never cached) then triggers its own query, so the request degrades into N+1 queries
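To make the failure mode concrete, here is a rough, simplified sketch of that sequence; getRemindersForFolder and getReminderForFile are hypothetical helpers, not the actual files_reminders code:

```php
<?php

use OCP\Cache\CappedMemoryCache;

/** Hypothetical stand-ins for the real queries; not functions from files_reminders. */
function getRemindersForFolder(int $folderId): array { return []; }  // one query for the whole folder
function getReminderForFile(int $fileId): ?object { return null; }   // one query per file

$cache = new CappedMemoryCache(); // default capacity: 512

// "preloadCollection": cache every reminder of the folder up front.
$reminders = getRemindersForFolder(42);
foreach ($reminders as $fileId => $reminder) {
	// Past 512 inserts the oldest entries are evicted again,
	// so the data preloaded for the first files is already gone.
	$cache->set((string)$fileId, $reminder);
}

// PROPFIND then asks for each file individually.
foreach (array_keys($reminders) as $fileId) {
	$reminder = $cache->get((string)$fileId);
	if ($reminder === null) {
		// Cache miss: one extra DB query per file, i.e. the N+1 pattern.
		$reminder = getReminderForFile((int)$fileId);
	}
}
```

With the cap raised to 30k, a single preload covers the whole folder for most realistic directory sizes, at the cost of the memory concern raised in the conversation above.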