What does this PR do?
After splitting out the CI jobs (build/check/typecheck), build times increased by ~30-45 seconds.

The reason is that the build and check steps each use Go and rely on the default caching of Go dependencies from `actions/setup-go`. Unfortunately, the action does not account for multiple jobs downloading different sets of modules (e.g., build fetches everything, while fmt only needs a subset), and because the cache inputs are the same across jobs (which they are), the first job to save the cache wins. In our case, the `check` job finishes first, so the cache only contains the modules required for `go fmt`. As a result, `go build` has to download the missing modules every time.

Unfortunately, `actions/setup-go` does not support a cache prefix/key in its options, and it does not appear that one will be added any time soon (if ever). See actions/setup-go#358 for full details.

This PR resolves the issue by including a job-specific text file in the dependency graph so that the cache key differs for each job. A sketch of the resulting setup is shown below.
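For illustration only, a minimal sketch of how a per-job cache key can be achieved with `actions/setup-go`'s `cache-dependency-path` input; the job name and file paths here are assumptions, not the exact change in this PR:

```yaml
# Sketch only: file paths and job layout are illustrative.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          # Hashing a job-specific file alongside go.sum changes the cache key
          # that actions/setup-go computes, so each job gets its own cache.
          cache-dependency-path: |
            go.sum
            .github/cache-keys/build.txt
      - run: go build ./...
```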
Testing
Manual testing of the CI build confirms that each job gets its own cache, and once the cache is populated for a given set of inputs, job times are back to their previous levels.