I was talking with William Vlader, and we landed on the topic of quantifying the uncertainty in output graphs generated by methods like those in BEELINE and SPRAS. We discussed the difference between ensembling outputs across methods for a given dataset versus ensembling outputs across a series of datasets for a given method (or methods). This got me thinking: it might be valuable to add a feature that ensembles outputs across datasets. Right now, our ensembling quantifies uncertainty over the method space more than the dataset space. For instance, if a scientist perturbs their omic data across multiple experiments, it could be useful to compare how frequently an edge appears across all datasets versus how often it emerges across different methods within a single dataset.
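To make the two ensembling axes concrete, here is a minimal sketch of the comparison. The `outputs` mapping, its keys, and the dataset/method names are all hypothetical stand-ins, not SPRAS's actual data structures or API; the point is only that the same per-run edge sets can be aggregated along either axis.

```python
from collections import Counter

# Hypothetical structure: each (dataset, method) run maps to the set of
# edges in that run's output graph. Names are illustrative only.
outputs = {
    ("dataset1", "methodA"): {("A", "B"), ("B", "C")},
    ("dataset1", "methodB"): {("A", "B"), ("C", "D")},
    ("dataset2", "methodA"): {("A", "B"), ("B", "C")},
    ("dataset2", "methodB"): {("B", "C"), ("C", "D")},
}

def edge_freq_across_datasets(outputs, method):
    """Fraction of datasets in which each edge appears, for one fixed method
    (the dataset-space ensembling proposed in this issue)."""
    n_datasets = len({d for d, m in outputs if m == method})
    counts = Counter()
    for (d, m), edges in outputs.items():
        if m == method:
            counts.update(edges)
    return {edge: c / n_datasets for edge, c in counts.items()}

def edge_freq_across_methods(outputs, dataset):
    """Fraction of methods in which each edge appears, for one fixed dataset
    (the method-space ensembling we do today)."""
    n_methods = len({m for d, m in outputs if d == dataset})
    counts = Counter()
    for (d, m), edges in outputs.items():
        if d == dataset:
            counts.update(edges)
    return {edge: c / n_methods for edge, c in counts.items()}
```

With the toy data above, edge `("B", "C")` is found by `methodA` in every dataset (dataset-space frequency 1.0) but by only half the methods on `dataset1` (method-space frequency 0.5), which is exactly the kind of disagreement the proposed feature would surface.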