Packaged way of adding a detritus classifier to image processing #32
Taking
This is partly completed in #36: the simplest possible DVC pipeline that fits a KMeans model for an image collection and saves it for reuse, with a web interface for exploring the contents of the different clusters to judge by eye which is primarily detritus.

There's still an open question about where the metadata goes. I thought about adding a tag right into the EXIF headers, or into the metadata that the microscope exports describing each image's properties in detail. It depends on what is most useful to the ongoing application, and on how this will be used: is the tagging an extra stage in a Luigi pipeline that's processing and uploading images to an object store, or is it a distinct pipeline that's indexing and analysing images once they've been uploaded? So I've left it open for now. It probably needs another use case, like the phenocam images, to show the wider picture.
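The clustering stage described above can be sketched in a few lines: fit a KMeans model on precomputed image embeddings and persist the artifact for reuse. This is a minimal illustration, not the #36 implementation; the function name, output path, and the random stand-in embeddings are assumptions.

```python
# Sketch: fit KMeans on an (n_images, dim) embedding matrix and save it
# for reuse in a later pipeline stage. Filenames are illustrative.
import numpy as np
import joblib
from sklearn.cluster import KMeans

def fit_and_save(embeddings: np.ndarray, n_clusters: int = 5,
                 out_path: str = "kmeans.joblib") -> KMeans:
    """Fit a KMeans model on image embeddings and persist it to disk."""
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    model.fit(embeddings)
    joblib.dump(model, out_path)
    return model

# Random stand-in embeddings; a real run would load the vectors
# extracted from the image collection.
emb = np.random.default_rng(0).normal(size=(100, 8))
model = fit_and_save(emb, n_clusters=5)
```

A saved artifact like this is what the annotation web interface would load to group images by cluster.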
I've been thinking that it could be very helpful to have this as an API. Rather than adding models and Python wrappers for them directly into pipelines, POST an image's contents and get back a classification, a set of embeddings, or both. What originally prompted this was #45: these new models are promising, but they're not yet published in a way that eases programmatic reuse; to deploy them you'd need to access a Google Drive first. Triton Server or similar would be a good way to go, but a minimal FastAPI app would be a useful proof of concept.
#53 exists now for serving the recent Turing models (at least the lightweight ResNet18 one), even if there are barriers (Google Drive) to reproducibility. There's an endpoint for returning embeddings + classification; a cluster label could be added to it directly. As it stands, that's not added as a pipeline stage but done after the fact by pointing at an image bucket and searching for unseen entries, which is probably fine at this point.

The moral of this story is that I'm going to take this out of the TODO list and prioritise a) working through the simplest possible deployment until we have more infrastructure decisions, and b) if possible, putting an entirely dissimilar image collection through the process.
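The after-the-fact step of pointing at an image bucket and searching for unseen entries reduces to a set difference between the bucket's object keys and the keys already indexed. A plain-Python sketch, with the listings stubbed out (a real run would pull them from the object store and the embedding index):

```python
def find_unseen(bucket_keys, indexed_keys):
    """Return bucket entries not yet embedded/classified,
    preserving the bucket's listing order."""
    indexed = set(indexed_keys)
    return [k for k in bucket_keys if k not in indexed]

# Stand-in listings for illustration
bucket = ["img/001.png", "img/002.png", "img/003.png"]
indexed = ["img/001.png"]
unseen = find_unseen(bucket, indexed)  # → ["img/002.png", "img/003.png"]
```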
After the promising work setting up Label Studio for the plankton taxonomy, @Kzra suggested packaging this as a Label Studio ML backend. That's a nice path to take here. It can be containerised, though we don't yet have a container registry to store the image in. #52 covers most of this already.
Workflow for generating a classifier: s3 image collection -> extract and store embeddings -> fit a clustering model -> save the resulting artifact for reuse in the annotation workflow
This could be Luigi, or it's an opportunity to try and get started with pyorderly, or to test this walkthrough of DVC and work with CML.
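If DVC is the tool of choice, the workflow above maps naturally onto pipeline stages. A hypothetical `dvc.yaml` sketch (the script names, bucket path, and artifact names are all assumptions for illustration):

```yaml
stages:
  extract_embeddings:
    cmd: python extract_embeddings.py --images s3://bucket/images --out embeddings.npy
    deps:
      - extract_embeddings.py
    outs:
      - embeddings.npy
  fit_clusters:
    cmd: python fit_clusters.py --embeddings embeddings.npy --out kmeans.joblib
    deps:
      - fit_clusters.py
      - embeddings.npy
    outs:
      - kmeans.joblib
```

DVC infers the stage ordering from the `deps`/`outs` graph, and `dvc repro` only reruns stages whose inputs changed, which suits the "fit once, reuse in annotation" pattern here.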
Outline:

- `intake` to drive the script that does embedding extraction
- `chromadb` to store the resulting embeddings and metadata (labels, image sizes)