FossID: Limit fossid snippets #8616
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@             Coverage Diff              @@
##               main    #8616      +/-   ##
============================================
- Coverage     67.97%   67.74%    -0.23%
  Complexity     1005     1005
============================================
  Files           244      244
  Lines          7844     7844
  Branches        876      876
============================================
- Hits           5332     5314       -18
- Misses         2129     2147       +18
  Partials        383      383
```

Flags with carried forward coverage won't be shown. View the full report in Codecov by Sentry.
Force-pushed from f54a42d to e313a9e.
This property allows controlling the number of snippets to query from the FossID instance. Some repositories have a huge number of snippets (> 400k), and querying them all, putting them into the ORT results, and displaying them in the snippet report has a huge performance impact. It also makes little sense on the functional level. The property has a default value of 500 snippets.

Signed-off-by: Nicolas Nobelis <[email protected]>
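For illustration, such a limit could be read from the scanner options roughly as in the sketch below; the `snippetsLimit` key and the wrapper class are assumptions made for this sketch, not the actual ORT code:

```kotlin
// Hypothetical sketch: read the snippet limit from the FossID scanner options,
// falling back to the default of 500 mentioned in the commit message.
private const val DEFAULT_SNIPPETS_LIMIT = 500

class FossIdOptions(private val options: Map<String, String>) {
    val snippetsLimit: Int =
        options["snippetsLimit"]?.toIntOrNull() ?: DEFAULT_SNIPPETS_LIMIT
}
```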
Force-pushed from e313a9e to 5792c9f.
Force-pushed from 5792c9f to af9b0a2.
Force-pushed from af9b0a2 to 5d9ea6b.
```diff
-val snippets = runBlocking(Dispatchers.IO) {
-    pendingFiles.map {
-        async {
+val snippets = pendingFiles.associateWith {
```
The new implementation is now fully sequential; all concurrent processing of files has been removed. Could this be a performance issue if there are many pending files?
This is correct. Let's take an example with n pending files and s snippets in total.
Before the changes, one request was made to list the snippets per pending file (n requests in total), then one request per snippet to list its matched lines. Therefore, n + s requests in total, done concurrently.
With the changes, there are now n + min(s, limit) requests in total, done sequentially.
I would assume this is less efficient, but it is nevertheless a huge improvement in reducing the number of requests once the number of snippets grows.
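To put numbers on it (values assumed for illustration): with n = 100 pending files, s = 400,000 snippets, and a limit of 500, the old logic issues about 100 + 400,000 = 400,100 requests, while the new logic issues only about 100 + 500 = 600.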
Unfortunately, with the introduction of lazy loading, the initial concurrency cannot be restored. However, the matched lines for all snippets of one pending file could be fetched concurrently.
Any suggestions?
My gut feeling is that n << s, by a factor of 10 at least.
I put back the code to list the snippets of a pending file concurrently.
As we discussed, the proper solution to avoid all these problems would be to put the listing and mapping code together. However, this is a big refactoring, and I worry that the snippet choice feature relies on having all the snippet information at once for a given pending file.
Additionally, it seems FossID is moving toward OpenAPI to replace the legacy API, so I would like to wait for their new release before doing such a refactoring.
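The restored per-file concurrency could look roughly like the sketch below; the `Snippet` and `MatchedLines` types and the `listMatchedLines` callback are placeholders for this illustration, not the actual FossID client API:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope

data class Snippet(val id: Int)
data class MatchedLines(val lines: List<Int>)

// Fetch the matched lines for all snippets of a single pending file
// concurrently, while the pending files themselves are processed sequentially.
suspend fun fetchMatchedLines(
    snippets: List<Snippet>,
    listMatchedLines: suspend (Snippet) -> MatchedLines
): Map<Snippet, MatchedLines> = coroutineScope {
    snippets.map { snippet ->
        async(Dispatchers.IO) { snippet to listMatchedLines(snippet) }
    }.awaitAll().toMap()
}
```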
```diff
-        async {
+val snippets = pendingFiles.associateWith {
+    val filteredSnippets = lazy {
+        runBlocking(Dispatchers.IO) {
```
Since `getRawResults()` is a suspend function, it should not call `runBlocking()`. Use `withContext(Dispatchers.IO)` instead.
Thanks for the hint. Unfortunately, `withContext` is a suspending function and cannot be used with `lazy`, because the latter does not support suspension.
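A minimal sketch of the conflict (function names are illustrative):

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

suspend fun fetchSnippets(): List<Int> = withContext(Dispatchers.IO) { listOf(1, 2, 3) }

// The lambda passed to lazy() has an ordinary (non-suspending) function type,
// so calling the suspending fetchSnippets() inside it does not compile:
// val snippets = lazy { fetchSnippets() } // error: suspend call in non-suspend context
```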
To avoid `runBlocking` altogether, I refactored the code to generate a `Sequence` as @sschuberth suggested.
Unfortunately, the function building a `Sequence` cannot be suspendable.
Therefore, I went with a `Flow`: values are emitted asynchronously, and if the limit has been reached, the collecting job is cancelled.
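A minimal, self-contained sketch of that approach (all names are illustrative): the producer emits values lazily, and `take(limit)` cancels it once the limit is reached, so no further requests would be issued.

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val snippetsLimit = 5

    val snippets = flow {
        var id = 0
        while (true) {
            // Stands in for one FossID "list snippets" request per emission.
            println("Fetching snippet ${++id}")
            emit(id)
        }
    }

    // take() cancels the producing coroutine once the limit is reached, so
    // only five "Fetching" lines are printed despite the infinite producer.
    snippets.take(snippetsLimit).collect { println("Collected snippet $it") }
}
```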
Force-pushed from 5d9ea6b to a9811b0.
Force-pushed from a9811b0 to d870f59.
Force-pushed from d7fea61 to 5c336a3.
Looks good to me. Any objections from anybody against merging this?
There's a unit test failure now. Please have a look at this first.
Currently, the FossID snippets are all listed in a first step, including their matched lines. Then, in a second step, they are mapped to ORT snippets. A future commit is going to introduce a limit when listing snippets. If this limit were applied in the current logic (i.e. when listing the snippets), it would clash with the snippet choice feature: chosen snippets are removed from the results and thus should not be counted when enforcing the limit.

Therefore, this commit turns the listed snippets into a Flow. When mapping the snippets and applying the snippet choices, the current snippet count can then be compared to the limit, and additional snippets can be fetched lazily.

Signed-off-by: Nicolas Nobelis <[email protected]>
Since the function `mapSnippetFindings` is going to increase in complexity in a future commit with the enforcement of the snippets limit, extract this function now.

Signed-off-by: Nicolas Nobelis <[email protected]>
The limit is enforced when mapping the FossID snippets to ORT snippets. Signed-off-by: Nicolas Nobelis <[email protected]>
…ched

Also display this issue in the FossID snippet report. If the limit has been exactly reached when listing the snippets of a pending file, adding an issue is the only way for the user to know that there may be other snippets beyond the limit, requiring some snippet choices and a second scanner run.

A strict comparison could be used instead, but deciding whether to create the issue would then require peeking at the next snippet result, which would defeat the optimization brought by the introduction of the limit.

Signed-off-by: Nicolas Nobelis <[email protected]>
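The heuristic described in this commit message could be sketched as follows; the function name and message text are illustrative assumptions, not the actual ORT implementation:

```kotlin
// Non-strict check: if exactly `snippetsLimit` snippets were collected for a
// file, more may exist beyond the limit, so report an issue instead of issuing
// one extra request just to peek at the next snippet.
fun snippetLimitIssue(collectedCount: Int, snippetsLimit: Int, file: String): String? =
    if (collectedCount == snippetsLimit) {
        "The snippets limit ($snippetsLimit) has been reached for '$file'. " +
            "There may be more snippets; make snippet choices and run the scanner again."
    } else {
        null
    }
```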
When the Flow was introduced, snippets were listed sequentially for a given file. This commit adds concurrent processing again.

Signed-off-by: Nicolas Nobelis <[email protected]>
Force-pushed from 5c336a3 to 08c9e38.
@sschuberth, are you fine with merging?
I haven't looked at the latest iterations code-wise, and I'm not deep enough into the topic to do so in reasonable time. So I don't feel comfortable formally approving, but if you've approved and all tests pass (the Docker Build failure seems unrelated; GitHub apparently just had some problems today), I don't see a reason not to merge. Please go ahead as you see fit.
Please have a look at the individual commit messages for the details.