Conversation

@RAMitchell (Contributor)

Implements #38

@RAMitchell RAMitchell requested a review from mfoerste4 September 5, 2024 13:15
@mfoerste4 mfoerste4 (Contributor) left a comment

It is a bit hard to review these changes, as the file is stored in Git LFS. I have browsed through it and have a couple of thoughts:

  • This notebook has grown quite large. Would it be feasible to extract the utilities (data fetchers / runners) into a Python module?
  • I don't like including graphs and performance numbers in a checked-in file. These numbers are not auto-updated when the code changes, and they are only valid for the specific hardware they were last run on. I would also prefer not to log every iteration verbosely into the notebook output. Maybe we should split it into a (small) notebook example with graphs and a larger benchmark comparison script (which could also output to CSV?).
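The benchmark-script idea in the second bullet could be sketched roughly as below. This is a hypothetical illustration, not code from this PR: the `run_benchmark` helper, the dataset names, and the trivial `fit` callable are all placeholders standing in for the real loaders and models in the notebook.

```python
import csv
import time


def run_benchmark(dataset, fit):
    """Time one fit call. `fit` is a stand-in for the model under comparison."""
    start = time.perf_counter()
    result = fit(dataset)
    elapsed = time.perf_counter() - start
    return {"dataset": dataset, "seconds": elapsed, "result": result}


def write_results(rows, path):
    """Write benchmark rows to CSV, so numbers live in an output file
    rather than being baked into the checked-in notebook."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["dataset", "seconds", "result"])
        writer.writeheader()
        writer.writerows(rows)


# Placeholder datasets and model: replace with the notebook's real loaders.
rows = [run_benchmark(name, lambda d: len(d)) for name in ["airline", "higgs"]]
write_results(rows, "benchmark_results.csv")
```

Keeping the script separate from the notebook means the CSV can be regenerated on any hardware, while the notebook only renders whichever results file is current.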

@RAMitchell (Contributor, Author)

I'd rather not extract them at the moment; I want to keep this a self-contained example. The notebook is not that large, since loading each dataset takes only a few lines.

I guess that's a problem with notebooks in general, but I still think it's worth checking in even if it might go out of date. Agreed on removing the verbose output.
