Pass the blob URL for preloads in WasmEMCCBenchmark. #74
Conversation
Right now we pass the original path when loading preloads for WasmEMCCBenchmark. This means we might skip the cache and fetch the content from the network again. We don't want to do this because it means the OS might spin down the CPU, which can penalize running faster.

I also moved this logic to the `prerunCode` rather than adding it to the `runnerCode`, since logically that makes more sense.

As a drive-by change, this patch has the preloads tuples contain the path for CLI runs, so we no longer have to duplicate the preload paths into the benchmark.js file. Additionally, it removes a console.log of the test that just finished, which broke the dumpJSONResults option for the CLIs.
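A minimal sketch of the idea (the path, variable names, and the `Blob`/`URL.createObjectURL` wiring are illustrative, not the patch's exact code):

```js
// Hypothetical sketch: preload the resource once up front, then hand the
// benchmark a blob: URL instead of the original network path. Fetching the
// blob: URL is served from memory, so we avoid a second network round trip.
const originalPath = "./resources/example-preload.bin"; // placeholder path
const buffer = await (await fetch(originalPath)).arrayBuffer();
const blobURL = URL.createObjectURL(new Blob([buffer]));

// Later, in the benchmark's prerun/init code, the same bytes come back cheaply:
const preloadedBytes = new Uint8Array(await (await fetch(blobURL)).arrayBuffer());
```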
Branch force-pushed from a9e68ec to 21aa2dc.
…ion to get the blob as an array buffer in modern browsers.
… that process I also had to change how 8bitbench loads the rom it's going to use.
Ugh, unfortunately now the benchmark jetsams on iOS... I might need to get creative with a solution here...
@@ -34,14 +34,24 @@ function dumpFrame(vec) {

class Benchmark {
    isInstantiated = false;
    romBinary;

    async init() {
IIUC, this is essentially shared code with the `getBinary` function in `WasmEMCCBenchmark` below, just that there it's only used for initializing `Module["wasmBinary"] = getBinary(...)`, and here it's for an arbitrary (non-Wasm) file that happens to be the emulator ROM, right? Can we unify this, ideally also with the mechanism for preloading blobs of JavaScript line items (ARES-6/Babylon below)?
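One possible shape of such a unified helper (a hypothetical sketch, not the repository's API; it assumes `isInBrowser` and `readFile` as used elsewhere in the harness, and that `readFile` can return binary data in shells):

```js
// Hypothetical unified helper: resolve any preloaded resource (Wasm binary,
// emulator ROM, JS blob, ...) to an ArrayBuffer, regardless of environment.
async function loadPreloadedBuffer(blobURLOrPath) {
    if (!isInBrowser)
        return readFile(blobURLOrPath); // shell: read straight from disk
    // Browser: blobURLOrPath is a blob: URL created during preloading,
    // so this fetch is served from memory rather than the network.
    const response = await fetch(blobURLOrPath);
    return response.arrayBuffer();
}
```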
} catch {
    this.dart2wasmJsModule = await import("./Dart/build/flute.dart2wasm.mjs");
}
if (!isInBrowser)
So what happens in `!isInBrowser` environments where the previous dynamic import threw an exception? In that case, wouldn't we simply not set `this.dart2wasmJsModule` and fail later? Or do we assume that we only reach here if `isInBrowser == false`? In that case, can we instead change this to an explicit assertion, e.g. `console.assert(!isInBrowser, "relative imports should always succeed in browsers, this code is only for shells");` or something.
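For illustration, the suggested shape might look roughly like this (a fragment inside the benchmark's async setup; `modulePath` is a placeholder for the browser-relative import path, not the patch's actual code):

```js
try {
    this.dart2wasmJsModule = await import(modulePath);
} catch {
    // Relative dynamic imports should always succeed in browsers, so this
    // fallback is only expected to run in JS shells.
    console.assert(!isInBrowser, "relative imports should always succeed in browsers, this code is only for shells");
    this.dart2wasmJsModule = await import("./Dart/build/flute.dart2wasm.mjs");
}
```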
const promise = Promise.all(filePromises).then((texts) => {
    if (isInBrowser) {
        this._resourcesPromise = Promise.resolve();
I haven't fully understood the (somewhat intertwined, complex) preload and resource loading code. At a high level, why do we just early return here without having added anything to `this.scripts`?
    return this._resourcesPromise;
}

const filePromises = this.plan.files.map((file) => fileLoader.load(file));
I don't get this in conjunction with `fileLoader._loadInternal`: here, we never reach this if `isInBrowser`, because of the early return above. But `fileLoader._loadInternal` (lines 199 to 225 in 6947a46) is:
async _loadInternal(url) {
    if (!isInBrowser)
        return Promise.resolve(readFile(url));
    let response;
    const tries = 3;
    while (tries--) {
        let hasError = false;
        try {
            response = await fetch(url);
        } catch (e) {
            hasError = true;
        }
        if (!hasError && response.ok)
            break;
        if (tries)
            continue;
        globalThis.allIsGood = false;
        throw new Error("Fetch failed");
    }
    if (url.indexOf(".js") !== -1)
        return response.text();
    else if (url.indexOf(".wasm") !== -1)
        return response.arrayBuffer();
    throw new Error("should not be reached!");
}
so in shells we always return right at `if (!isInBrowser) return Promise.resolve(readFile(url))`. In other words, isn't this whole remaining code of `fileLoader` superfluous?
@@ -994,9 +993,16 @@ class Benchmark {
    if (this._resourcesPromise)
        return this._resourcesPromise;

    const filePromises = !isInBrowser ? this.plan.files.map((file) => fileLoader.load(file)) : [];
    this.preloads = [];
    this.blobs = [];
AFAICT, `this.blobs` is never read; can't this be removed?
}

const filePromises = this.plan.files.map((file) => fileLoader.load(file));
this._resourcesPromise = Promise.all(filePromises).then((texts) => {
    if (isInBrowser)
The `if (isInBrowser) return;` in lines 1006/1007 is dead code now, because of line 999, right?
}

// FIXME: Why is this part of the runnerCode and not prerunCode?
// This is in runnerCode rather than prerunCode because prerunCode isn't currently structured to be async by default.
What is the difference between `prerunCode()` and `Benchmark.init()`? The latter is async and called with `await` for `AsyncBenchmark`, see line 1138 in 6947a46: `await __benchmark.init();`. Could one unify the two, or move this code into `async init()`?
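For instance, the async preload work could presumably live in `init()` along these lines (a hypothetical sketch only; the class name, constructor argument, and field are illustrative, not the harness's actual shape):

```js
// Hypothetical sketch: the harness already awaits __benchmark.init() for
// AsyncBenchmark, so the blob fetch could happen there instead of being
// emitted into runnerCode.
class ExampleAsyncBenchmark {
    constructor(blobURL) {
        this.blobURL = blobURL; // blob: URL handed over by the harness (illustrative)
    }

    async init() {
        const response = await fetch(this.blobURL);
        this.wasmBinary = await response.arrayBuffer();
    }
}
```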
};
xhr.send(null);
async function getBinary(key, blobURL) {
    const response = await fetch(blobURL);
As I said above, this `getBinary` function is only used for the Wasm binary of Emscripten benchmarks, but could we use some more general mechanism from the JavaScript line items?
for (let i = 0; i < keys.length; ++i) {
    str += `loadBlob("${keys[i]}", "${this.plan.preload[keys[i]]}", async () => {\n`;
for (let [ preloadKey, blobURLOrPath ] of this.preloads) {
    if (preloadKey == "wasmBinary") {
Can we get rid of this special casing of just the `wasmBinary` preloadKey? There are not that many Emscripten workloads, so I'd be happy to just add a line `Module["wasmBinary"] = genericPreloadFunc(wasmBinary)` to every one of those (for a little less "magic").
Thanks a lot for starting this, the preload/blob loading code is certainly ripe for a simplification/cleanup! (Unfortunately, I haven't fully understood all parts / why it's so complex, so some clarification questions in the review.)
Could you explain what the issue is? (Also TIL a new word "jetsams".)
Right now we pass the original path when loading preloads for `WasmEMCCBenchmark`. This means we might skip the cache and fetch the content from the network again. We don't want to do this because it means the OS might spin down the CPU, which can penalize running faster.

I also moved this logic to the `prerunCode` rather than adding it to the `runnerCode`, since logically that makes more sense.

Lastly, have the preloads tuples contain the path for CLI runs. This means we no longer have to duplicate the preload paths into the benchmark.js file.