
Benchmark CacheAutoConfiguration and work out how to optimize #5

Open
dsyer opened this issue Dec 21, 2018 · 3 comments

Comments


dsyer commented Dec 21, 2018

There might be nothing we can do here, but you can see the effect on Petclinic (1800ms startup without caching, 2000ms with). Perhaps the cost is in scanning all the beans for `@Cacheable`?
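For reference, a minimal sketch of the kind of setup that pulls this machinery in (the class names are hypothetical, not actual Petclinic code): once caching is enabled, the infrastructure has to look at the methods of every bean during startup to decide whether a caching interceptor applies.

```java
import java.util.Arrays;
import java.util.List;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.stereotype.Service;

// Hypothetical example (not from Petclinic): with caching enabled, every
// bean's methods are inspected at startup to see whether a caching
// interceptor needs to be applied.
@SpringBootApplication
@EnableCaching
public class CachingDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(CachingDemoApplication.class, args);
    }

    @Service
    static class VetNames {

        // Methods like this one are what the pointcut is looking for.
        @Cacheable("vets")
        public List<String> findAll() {
            return Arrays.asList("Helen Leary", "Rafael Ortega");
        }
    }
}
```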


dsyer commented Jan 2, 2019

Some more analysis breaks the 200ms down as follows (it includes transaction annotation processing as well):

50ms: pointcut processing for `@Cacheable`
50ms: pointcut processing for `@Transactional`
70ms: JCache (implementation from EhCache)
30ms: Spring Boot "overhead"

The pointcut processing could maybe be optimized away using an index of some sort - it is all about checking all the methods of all the beans for annotations, and 99% of the time there was no point even looking.
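To make that concrete, here is a simplified sketch of the per-bean work involved (`AnnotationCacheOperationSource` and `ReflectionUtils` are real Spring classes, but this is not the actual advisor code):

```java
import java.lang.reflect.Method;

import org.springframework.cache.annotation.AnnotationCacheOperationSource;
import org.springframework.cache.interceptor.CacheOperationSource;
import org.springframework.util.ReflectionUtils;

// Simplified sketch of the startup cost: for every candidate bean class,
// every method is checked for cache operations, even though most apps
// have few (or no) @Cacheable methods.
public class PointcutCostSketch {

    private static final CacheOperationSource SOURCE = new AnnotationCacheOperationSource();

    public static boolean hasCacheOperations(Class<?> beanClass) {
        for (Method method : ReflectionUtils.getAllDeclaredMethods(beanClass)) {
            if (SOURCE.getCacheOperations(method, beanClass) != null) {
                return true; // at least one @Cacheable/@CachePut/@CacheEvict
            }
        }
        return false;
    }
}
```

An index produced at build time, listing only the classes that actually carry caching annotations, would let this loop be skipped for everything else.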

The cache initialization is native to JCache (or EhCache), but it could probably be deferred somehow, e.g. until the first time a cache is used.
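As a sketch of what that deferral could look like (this `DeferredCacheManager` is hypothetical, not an existing Spring Boot class), the expensive `CacheManager` could be hidden behind a wrapper that only builds it on first access:

```java
import java.util.Collection;
import java.util.function.Supplier;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;

// Hypothetical sketch: defer creation of the (expensive) JCache/EhCache
// CacheManager until a cache is actually requested, instead of building
// it eagerly during application startup.
public class DeferredCacheManager implements CacheManager {

    private final Supplier<CacheManager> delegateSupplier;

    private volatile CacheManager delegate;

    public DeferredCacheManager(Supplier<CacheManager> delegateSupplier) {
        this.delegateSupplier = delegateSupplier;
    }

    private CacheManager delegate() {
        CacheManager manager = this.delegate;
        if (manager == null) {
            synchronized (this) {
                manager = this.delegate;
                if (manager == null) {
                    manager = this.delegateSupplier.get();
                    this.delegate = manager;
                }
            }
        }
        return manager;
    }

    @Override
    public Cache getCache(String name) {
        return delegate().getCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        return delegate().getCacheNames();
    }
}
```

Whether that actually helps depends on how much of the cost is in building the `CacheManager` itself versus in work that happens before it is created.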

The rest is mysterious. Best guess so far is that it is the sheer number of cache provider configurations that have to be processed (there are 10). There are some suspicious sources of possible slowness in there (like a linear search for a provider in a map), but none of them seem to account for the whole 30ms.

@dsyer dsyer changed the title CacheAutoConfiguration is a beast Benchmark CacheAutoConfiguration and work out how to optimize Jan 7, 2019

dsyer commented Jan 8, 2019

A different experiment, with a slightly more granular breakdown:

```
CacheBenchmarkIT.bench  annotation     empty  avgt   10  0.308 ± 0.009   s/op
CacheBenchmarkIT.bench  annotation    simple  avgt   10  0.386 ± 0.008   s/op
CacheBenchmarkIT.bench  annotation     cache  avgt   10  0.403 ± 0.010   s/op
CacheBenchmarkIT.bench  annotation    jcache  avgt   10  0.529 ± 0.043   s/op
CacheBenchmarkIT.bench  annotation    manual  avgt   10  0.368 ± 0.012   s/op
```

which translates to

60ms: pointcut processing = "manual" - "empty"
20ms: Spring Boot overhead - conditions, and import selection = "simple" - "manual"
10ms: beans added by Spring Framework to support JSR107 = "cache" - "simple"
120ms: JCache (actually EhCache) = "jcache" - "cache"

The "empty" sample has no caching. The "simple" sample is Spring Boot without JSR107 and manually selecting the cache type. The "cache" sample is the same, but with JSR107 on the classpath so Spring Framework adds some extra features. The "jcache" sample has EhCache as well and Spring Boot selects the cache provider.

There are no `@Cacheable` methods anywhere in the app (so all of the >200ms could be saved in principle).
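For the record, a sketch of the general shape of such a startup benchmark (the real `CacheBenchmarkIT` that produced the numbers above differs in its details; `StartupBenchmarkSketch` and `CachingApplication` here are hypothetical):

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.ConfigurableApplicationContext;

// Hypothetical sketch: JMH measures the average time to boot a small
// Spring Boot application with caching enabled. The samples in the table
// above differ in what is on the classpath and how the cache type is chosen.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.SECONDS)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
@Fork(1)
public class StartupBenchmarkSketch {

    @Benchmark
    public void startup() {
        // spring.cache.type=simple selects the in-memory ConcurrentMap cache
        // manually, i.e. the "simple" variant described above.
        try (ConfigurableApplicationContext context = SpringApplication.run(
                CachingApplication.class,
                "--spring.main.web-application-type=none",
                "--spring.cache.type=simple")) {
            // only startup time matters; the context is closed straight away
        }
    }

    @SpringBootApplication
    @EnableCaching
    static class CachingApplication {
    }
}
```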


dsyer commented Jan 8, 2019
