Something that is really holding back the more cautious folk from jumping into the .Rmd workflow for papers is the lack of user-friendly comment and review tools. There is currently no easy way to:

- Mark up .Rmd output with tagged comments that identify the commenter, time, etc.
- Map comments to specific code/text
- Highlight new additions/deletions since the previous review.
The comment thing in particular gets mentioned on Twitter periodically. I've had a bash at something lo-fi here.
@njtierney recently introduced me to another Oz researcher who has created a Shiny app that hosts compiled .Rmd output and captures comments in text inputs. Again, it's pretty lo-fi, but it tells you something about the perceived value of this feature that they were prepared to go that far.
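To make the idea concrete, here's a minimal sketch of roughly what such a review app could look like. This is my own guess at the shape of it, not that researcher's code: it assumes the compiled .Rmd lives in a file called `paper.html` and simply appends comments to a `comments.csv`.

```r
# Minimal sketch of a review app: serves compiled .Rmd output and
# appends tagged comments (reviewer, anchor, time) to a CSV.
# Assumes the rendered document is available as "paper.html".
library(shiny)

ui <- fluidPage(
  includeHTML("paper.html"),                     # the compiled .Rmd output
  textInput("reviewer", "Your name"),
  textInput("anchor", "Section / chunk this refers to"),
  textAreaInput("comment", "Comment", rows = 4),
  actionButton("submit", "Add comment")
)

server <- function(input, output, session) {
  observeEvent(input$submit, {
    entry <- data.frame(
      reviewer = input$reviewer,
      anchor   = input$anchor,
      comment  = input$comment,
      time     = format(Sys.time(), "%Y-%m-%d %H:%M:%S")
    )
    # Append to a running log so comments survive across sessions
    write.table(entry, "comments.csv", sep = ",",
                append = file.exists("comments.csv"),
                col.names = !file.exists("comments.csv"),
                row.names = FALSE)
  })
}

shinyApp(ui, server)
```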
Please hit me with any other work in this space you know about.
I imagine most of us here are fine with git/Github for this purpose, but we have to remember how deep into the tail of reproducible research we really are.
At this stage the examples only allow adding comments to the .Rmd output. But perhaps it's possible to extend this so the comments can be mapped back to the .Rmd source file somehow ...
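One possible route (purely a sketch on my part): set a knitr chunk hook that wraps each chunk's rendered output in an element carrying the chunk label and input file, so a comment attached to that element can be traced back to the corresponding chunk in the .Rmd. The hook mechanism is standard knitr; the class and attribute names are just placeholders.

```r
# Sketch: tag every chunk's rendered output with its chunk label so that
# comments on the HTML can be traced back to the originating .Rmd chunk.
# Set this in a setup chunk of the .Rmd; attribute names are my own.
library(knitr)

knit_hooks$set(chunk = function(x, options) {
  paste0(
    '<div class="review-anchor" data-chunk="', options$label,
    '" data-file="', knitr::current_input(), '">\n',
    x,
    '\n</div>'
  )
})
```

With something like that in place, every piece of output in the compiled HTML already knows which chunk produced it, which a review tool could then expose alongside each comment.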