Allow splitting the data portion into multiple files. #101
Comments
I think it would be good if logical partitions were used, too. That way, only the data needed for whichever tab a user is viewing would be loaded, which would reduce bandwidth, since you can assume most viewers (at least of a publicly hosted report, which is the only use case I can think of for this) aren't going to look in every tab.
I think it's a good feature; we can add an option like "split data files into [X MB] parts" or something. We can do it when I eventually get to the configuration UI. I'll leave this open 😄
Every card analysis is generated on the fly using the full database, so we can't "split it by tab".
I wasn't meaning splitting by tab, I was meaning splitting by data type. One file could have all the basic info (see `chat-analytics/pipeline/process/Types.ts`, lines 12 to 26 and lines 37 to 40 at 055c68c), which is used in just about all tabs,
but then another file could store `words`, which I believe is only used by the "Language" tab, and another file could store `domains`, which I believe is only used by the "Links" tab. Although, I do now realise that the majority of the data is probably just going to be words lol. However, I still think it could be a good idea to store words in separate file(s), since most tabs don't require them, and only load those file(s) if the user visits a tab which does require them.
I wanted to host a chat-analytics page on GitHub Pages, since GH Pages seems ideal for hosting a static file like this, but the Discord server I'm using is almost 400 MB of data and there's a 100 MB file size limit. I can use LFS, but I've run into bandwidth issues with it in the past.
The solution would be to split the data into separate files.
I've managed to do this manually, but it would be ideal if this was supported natively, since my method is not optimal. Here's how I achieved it:
1. I split the data portion of the report into chunks with a small script (`split.py`).
2. I added a loader script in the `<head>` tag so it loads before anything else. I used an external file like so: `<script src="loader.js"></script>`, but inline is fine.
3. `loader.js` downloads the chunks and reassembles the data before the report renders.
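The contents of `split.py` are collapsed in the issue, but assuming it simply slices the exported data file into fixed-size pieces, a minimal sketch could look like this (the function name, chunk size, and `.partN` naming scheme are illustrative, not taken from the actual script):

```python
CHUNK_SIZE = 90 * 1024 * 1024  # stay safely under GitHub's 100 MB file limit


def split_file(path: str, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Split `path` into `path.partN` files of at most `chunk_size` bytes each."""
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(chunk_size)
            if not chunk:  # reached end of file
                break
            part_name = f"{path}.part{index}"
            with open(part_name, "wb") as dst:
                dst.write(chunk)
            parts.append(part_name)
            index += 1
    return parts
```

The loader would then fetch the `.partN` files in order and concatenate them to recover the original data.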
That's it. The only issue is that rendering is completely blocked until all the chunks are downloaded. It would be best if chunked loading were supported natively rather than through a hack like this.