Thanks @mahshaaban!
Before I proceed with regular checks: since the package has a Shiny app, could you have a look at our brand new specific guidance for Shiny apps, especially their testing?
https://ropensci.github.io/dev_guide/building.html#interactivegraphical-interfaces
https://ropensci.github.io/dev_guide/building.html#testing
You can ask any question here / on discuss.ropensci.org / Slack (have you gotten an invite to our Slack yet?)
Hey, @maelle
I had a look at the links. I am already using `shinytest` in the app. I don't think I've received any invites on Slack.
:wave: @MahShaaban
> I am already using the shinytest in the app
Ah sorry! But I don't see where in the tests? In particular, "shinytest" isn't written in DESCRIPTION like "testthat" is?
> Code auto-generation sounds like a great idea, do you recommend any resource to get started on that?

No, I'll research that and update this thread (and the guide, good point!)
I've now sent you an invite to the Slack.
Thanks for the slack invite, @maelle
Regarding the shiny tests: the tests are in the app directory, `inst/colocr_app/tests/`. The reason is I wanted to test the app on Travis separately. If necessary, I will move the tests to run with the package testing.
To be honest I am very new to the testing of Shiny apps inside packages, and once your package is onboarded it'll probably be an example linked from the gitbook. Why test the app separately? What does it mean exactly? Your Travis config file looks normal to me.
I am also new to `shinytest`.
I wanted to test the app separately because I intended to make it stand-alone, and I have a prototype of it on shinyapps.io, here.
I have the app as a separate repo on GitHub, here. This repo is added to the package repo as a submodule. The app repo contains a separate `.travis.yml` and `tests/`. This way I can link the app as a regular repo on Travis and it runs the tests with every commit.
Ah interesting! Could you ask in the forum/Slack what best practice is? One thing that I still wonder about is whether your package's README should include a badge of the app's build status.
I'll make other checks as soon as I can.
Alright, will do. I think I can add the build status badge from Travis, if this is what you mean.
Thanks
Do you want to look into code generation before your package undergoes review? I tagged you in a commit, see links to apps generating code here.
`goodpractice` output:
```
✖ use '<-' for assignment instead of '='. '<-' is
  the standard, and R users and developers are used to it and it
  is easier to read your code for them if you use '<-'.

    R\methods.R:367:12
    R\methods.R:368:12
    R\methods.R:369:5

✖ avoid long code lines, it is bad for readability.
  Also, many people prefer editor windows that are about 80
  characters wide. Try make your lines shorter than 80
  characters

    inst\extdata\manual_analysis.R:6:1
    R\methods.R:378:1
    R\methods.R:379:1
    R\methods.R:380:1
    R\methods.R:381:1
```
and `devtools::spell_check`:
```
  WORD          FOUND IN
  caculated     coloc_test.Rd:23
  determin      roi_select.Rd:42
  differen      roi_show.Rd:19
  fluoresence   description:1
  indecies      intensity_get.Rd:15
  indicies      roi_show.Rd:16
  manders       coloc_test.Rd:12
  pearsons      coloc_test.Rd:12
  reslution     roi_show.Rd:24
  Thes          roi_select.Rd:36
```
I made the following changes:

- fixed the issues flagged in the `goodpractice` output
- fixed the misspellings found by `devtools::spell_check`
- updated the README.md
Cool, I'll now look for reviewers.
Could you please add this badge to the README of your package:
[![](https://badges.ropensci.org/243_status.svg)](https://github.com/ropensci/onboarding/issues/243)
Already tackled :wink:
Reviewers: @haozhu233 @seaaan Due date: 2018-09-03
Thanks @haozhu233 and @seaaan for accepting to review this package! Your reviews are due on 2018-09-03.
Here are links to our reviewer guide and review template (as you're both experienced rOpenSci reviewers/authors, comments on the guide are welcome in e.g. Slack, but don't feel obliged to review both the package and the reviewing guide :wink:).
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide.

The package includes all the following forms of documentation: `URL`, `BugReports` and `Maintainer` (which may be autogenerated via `Authors@R`).

Estimated hours spent reviewing: 3
I tested on two computers: Windows 10 running R 3.5.1, and Ubuntu 16.04 running R 3.4.4.
`colocr` provides an R-based workflow for a type of microscopy analysis. Specifically, images are taken of two different colors, typically with each color representing a different molecule of interest. With `colocr`, users can measure how much the two colors correlate in their intensity in different parts of the image. The steps are: select (and verify) regions of interest in the image, get (and view) the intensity of each pixel in each channel, and calculate two measures of correlation to assess co-localization of the two channels.

The method of colocalization analysis is not new; the purpose of `colocr` is to make it easier to perform in R (while also providing a GUI option) for more reproducible results. The package is pretty simple and easy to use for this specific task, as opposed to more complex packages like `imager`.
Below I have written comments based on the checkbox sections listed above. An overview of the most significant issues that I discovered:
I had difficulty building the package and documentation, but when installed via `install_github`, things mostly worked fine. A number of automated checks gave warnings or errors when built locally. (This bullet point is the main reason that I didn't check some of the boxes above.)
The vignette was lacking some detail and context.
The package could benefit from functions to process multiple images at once.
More minor issues are named below. (Note that I have pointed out typos using this format: "the typo" -> "the correction".)
The package is pretty nicely done and most of my comments are relatively minor. Good job!
Good overview!
"at loss" -> "at a loss" "straight forward" -> "straightforward"
(Note that those two corrections should also be made in the vignette.)
I don't understand what the "no submodules!" and "@development" mean in the installation guidelines. Can you explain (on the README) what that means and why?
It took me a while to get the package installed on Ubuntu. Specifically, `imager` wouldn't install because its dependencies `Cairo` and `tiff` wouldn't install until I installed some external-to-R dependencies. I think that if I had followed the installation instructions here it would have gone smoothly. It would be helpful for you to include a link to how to install `imager` on Ubuntu in the installation instructions on GitHub.
The vignette did not build successfully on either the Windows or the Ubuntu computer. This also prevented `devtools::check()` from succeeding. Specifically, I would get this error:
```
pandoc.exe: Could not fetch
: openBinaryFile: does not exist (No such file or directory)
Error: processing vignette 'using_colocr.Rmd' failed with diagnostics:
pandoc document conversion failed with error 67
Execution halted
```
I figured out (via https://github.com/rstudio/rmarkdown/issues/228) that changing the header of the vignette to include the following lines would fix it:
```yaml
output:
  rmarkdown::html_vignette:
    self_contained: no
```
This fixed the issue on both the Windows and Ubuntu computers.
The vignette installed fine when installing with `devtools::install_github(..., build_vignettes = TRUE)`.
I suggest adding a link to an HTML version of the vignette on your github README. As it is now, I have to install the package before I can read the vignette, which is a barrier to me installing it because I can't see very well what it does. Either knit the vignette to a github markdown document and link to that or, once it's on CRAN, link to the web version of the vignette that CRAN hosts.
appropriatness -> appropriateness
acheived -> achieved
The reset of the anlysis -> the result (?) of the analysis
Pearsonโs Coefficeint Correlation -> Pearson correlation coefficient
co-distrubuted -> co-distributed
straight forward -> straightforward
three steps only -> only three steps
functionionality -> functionality
encampasing -> encompassing
resuliontion -> resolution
Arguable -> arguably
seleciton -> selection
sampe -> sample
Although -> this sentence should be combined with the next one, otherwise "Although" doesn't make sense
rational -> rationale
separatly -> separately
through and formal -> thorough and formal
correltion -> correlation
Note that the vignette installation instructions differ from the README instructions.
Maybe hide the startup messages that appear after `library(imager)`.
In the "Getting started" section, give some background on what the image is, what the "appropriate parameters" are and how you determine them and evaluate their appropriateness. What are the cells in the image and what are they stained for? What is the goal of the ROIs? At the example setting, the five cells in the upper middle all appear enclosed in one region together, is that the goal? I would have thought to make a separate regions for each cell.
Why do you use `imager::highlight()` instead of `roi_show()`?
Some of the background I am asking for in the "Getting started" section is provided in the "Detailed Example" section. I might just skip the "Getting started" section altogether and add a little more background to the "Detailed Example" section.
Why is the "Manders Overlap Coefficient" abbreviated with "SCC"? And why does the link go to a page about the Spearman's rank correlation coefficient?
Is the link to the shiny app running on shinyapps.io the same as the app that comes with the package?
"calling four different functions in order" but there are five listed
What are you looking for in the `intensity_show` plots?
Does the labeling of individual regions of interest just label the n largest contiguous regions? This is not totally clear from the vignette.
In the colocalization benchmark source, it would be helpful to know what results are expected from the particular images chosen. Does `colocr` achieve the expected results?
`?roi_select`: It would be nice to have a more thorough explanation of the parameters to `roi_select` than the help page provides, or at least a link to one. There's something funny in the details section where it says `fill\codeclean`; it looks like there's a stray slash on line 21 of the `.R` file.
`?roi_show`: in the details section, it says "a a low resolution".
`?colocr::colocr-package` and/or `?colocr::colocr` should link to the main functions in the package; otherwise they're not very useful.
example("intensity_get")
isn't very informative. I suggest printing out the head of each element of pix_int
and explaining what they are. Also it says "call coloc_test" which is a mistake
example("intensity_show")
runs slowly on my computer, can you subsample the image or something?
There are no guidelines for how to contribute to the package.
Installation from GitHub mostly works fine. I had some issues, as described in the documentation comments above.
The functionality described in the vignette and documentation is present and works just fine for the most part!
`run_app()` gives warnings and errors immediately on loading (`argument is of length 0`). It works once you load an image, but it's a bit disconcerting to see error messages.
One issue I thought about is what if you want to correlate the intensity of more than two channels?
Another issue I thought about is whether there could be a higher-level function to apply thresholds and get co-localization statistics for a bunch of images at once. I imagine relatively few cases where the user would want to repeat the code over and over for every image. Maybe a function that takes a vector of images and vectors for each argument to `roi_select` and then gives back the results of `coloc_test` for each image? You could also make functions for highlight and `roi_show` and `intensity_show` that could create some kind of report with each of the images.
I don't really understand how `roi_select` works with regard to channel 1 vs channel 2. Could you explain that somewhere, perhaps in the vignette? I.e. can you select an ROI based just on channel 1 and apply that to channel 1 and channel 2?
Maybe import and re-export the functions from `imager` that are needed for the vignette (and thus likely to be used by the user), so the user doesn't have to `library(imager)`.
It would be nice to have more descriptive names for the results of `coloc_test` than `$p` and `$r`.
Currently, the basic workflow is to do:
```r
px <- roi_select(img, threshold = 90)
pix_int <- intensity_get(img, px)
coloc_test(pix_int)
```
It would be nice if you could do something like this:
```r
img %>%
  roi_select(threshold = 90) %>%
  intensity_get() %>%
  coloc_test()
```
The benefits I see are that it's pipeable and that you don't have to separately keep track of `px` and `pix_int` in addition to `img`. Another benefit is that all the functions could return the same type of object. If you accept this option, you could drop the `intensity_get()` function altogether by combining it with the `roi_select` function. You could make the functions pipeable and all return the same type by having each of the functions return a `cimg` with attributes containing `px` and `pix_int` that are used by the appropriate function, or by making a new class as a subclass of `cimg`.
This is not a requirement by any means, just something that I think would be nice!
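To make the idea concrete, here is a rough sketch of the attribute-carrying approach (the `*2` names are hypothetical wrappers around the current functions, not the package's API):

```r
# Hypothetical wrappers: each returns the same cimg, carrying intermediate
# results as attributes so the steps chain with %>%.
roi_select2 <- function(img, threshold) {
  attr(img, "px") <- roi_select(img, threshold = threshold)
  img
}

intensity_get2 <- function(img) {
  attr(img, "pix_int") <- intensity_get(img, attr(img, "px"))
  img
}

coloc_test2 <- function(img) {
  coloc_test(attr(img, "pix_int"))
}

# img %>% roi_select2(threshold = 90) %>% intensity_get2() %>% coloc_test2()
```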
`roi_select`: with `threshold = 100`, you get a weird error when running highlight. With `threshold > 100`, you get a weird error when running `roi_select`, same with thresholds well below 0. I suggest enforcing `0 < threshold < 100` (i.e. throw an error if the provided value is outside that range). The errors are hard to interpret, so it's not obvious that they're happening because of the value of `threshold`. Currently it just checks if it's numeric or not. Same for the other parameters as appropriate.
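For instance, a minimal check along these lines (a sketch; the exact bounds and message are the author's call) would make the failure obvious:

```r
# Fail fast with a readable message instead of an obscure downstream error.
check_threshold <- function(threshold) {
  if (!is.numeric(threshold) || length(threshold) != 1 ||
      threshold <= 0 || threshold >= 100) {
    stop("threshold should be a single numeric > 0 and < 100.", call. = FALSE)
  }
  invisible(threshold)
}
```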
The app runs kind of slowly for me. It's not unusable by any means, but if there's some way you could speed it up (eg subsample the image to set the threshold and other parameters initially and then have a button to push to do the full analysis), that would be nice.
`goodpractice::goodpractice()` gives me the following message on both Windows and Ubuntu:
```
✖ write unit tests for all functions, and all package code in general.
  87% of code lines are covered by test cases.

    R/methods.R:52:NA
    R/methods.R:53:NA
    R/methods.R:54:NA
    R/methods.R:62:NA
    R/methods.R:138:NA
    ... and 12 more lines
```
Ubuntu-only errors:
```
✖ fix this R CMD check WARNING: LaTeX errors when creating PDF version.
  This typically indicates Rd problems.
✖ fix this R CMD check ERROR: Re-running with no redirection of
  stdout/stderr. Hmm ... looks like a package You may want to clean up by
  'rm -rf /tmp/RtmpKNtM4d/Rd2pdf2e0446da511d'
```
Windows-only errors:
```
Warning messages:
1: In MYPREPS[[prep]](state, quiet = quiet) :
  Prep step for cyclomatic complexity failed.
2: In MYPREPS[[prep]](state, quiet = quiet) :
  Prep step for rcmdcheck failed.
```
Passes completely on Ubuntu once vignette issue is resolved (see Vignettes section).
It results in an error on Windows in the "Checking examples" section when it runs the example for `.manders`, I think because `.manders` isn't exported.
Works fine on Ubuntu once vignette issue is resolved (see Vignettes section)
I get these warnings on Windows:
```
Rd warning: H:/R/colocr/man/roi_select.Rd:16: file link 'shrink' in package 'imager' does not exist and so has been treated as a topic
Rd warning: H:/R/colocr/man/roi_select.Rd:20: file link 'fill' in package 'imager' does not exist and so has been treated as a topic
Rd warning: H:/R/colocr/man/roi_select.Rd:38: file link 'shrink' in package 'imager' does not exist and so has been treated as a topic
Rd warning: H:/R/colocr/man/roi_select.Rd:39: file link 'fill' in package 'imager' does not exist and so has been treated as a topic
```
Tests pass.
I get these warnings on Ubuntu:
```
Skipping invalid path: .labels_add.Rd
Skipping invalid path: .pearson.Rd
Skipping invalid path: .manders.Rd
```
Section 1.3.2 suggests that a Shiny app should come with a mechanism to reproduce the choices made in the GUI, such as auto-generation of code. Maybe you can add a tab that generates the code that is run based on the user's selections?
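For example, a sketch of what such a tab could do (the input IDs here are made up; `verbatimTextOutput("code")` would go in the UI):

```r
# Server side: assemble the equivalent script from the current inputs.
output$code <- renderText({
  req(input$image1, input$threshold)
  paste(
    "library(colocr)",
    sprintf("img <- imager::load.image('%s')", input$image1$name),
    sprintf("px <- roi_select(img, threshold = %s)", input$threshold),
    "pix_int <- intensity_get(img, px)",
    "coloc_test(pix_int)",
    sep = "\n"
  )
})
```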
Section 1.7 suggests creating a documentation website.
Thanks for your thorough review @seaaan!
Thanks, @seaaan, for your review. I will follow up with a reply to these issues.
I really like your suggested API. I made quick changes in a new branch called `pipeable` to try this approach. If it turns out this works better, I will rewrite the tests, app and the vignette.
@haozhu233 just a reminder that your review is due on 2018-09-03. Thank you!
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide.

The package includes all the following forms of documentation: `URL`, `BugReports` and `Maintainer` (which may be autogenerated via `Authors@R`).

I reviewed the `development` branch of this package as it has a stand-alone folder for the shiny app. I noticed that there are some recent changes in the master branch, but they are mostly about badges. I ran all the checks on OS X 10.13.
This package provides a straightforward way of conducting co-localization calculations, a specific type of image analysis for microscopy images. There are only 5 main functions in this package, but they are quite useful and easy to use. The author also provides a GUI in a shiny app, which makes it even more user-friendly. The package vignette was also well written and easy to follow. Although I personally don't have any experience with analyzing microscopy images, I was still able to follow the tutorial and achieve what was supposed to be done. I can imagine this package being quite useful for this particular use case.
Here I'm listing some of the most significant issues I found:

- The shiny app lives in its own `inst/colocr_app` folder. This approach works, but if the app is considered one of the core functionalities of this package, I would recommend putting this code in the R folder and recoding it in a way that fits an R package.
- The tabular panel sort of indicates that users are supposed to run the analysis on many images and then gather the results together. Too much mouse-clicking will quickly make users hate using the app.

There are also a few minor issues that might be improved. I'm going to mention them below as well.
The default value for `n` in `roi_select` was defined in `.labels_add`, which is not an exported function. This confuses RStudio and prompts a code diagnostics warning that "the value of `n` is not defined".
In `coloc_test`, it would be nice to standardize the `type` options with the terms used in the vignette. For example, `"pearsons"` can become `"PCC"` or `"pcc"`, or vice versa. At the same time, the option `"all"` should be `"both"`.
The purpose of using an S3 method here seems to be just sending out a warning message if `class(img) != "cimg"`. It seems a little unnecessary. This logic can simply be replaced by an `if` statement, which can even be modularized into a small utility function.
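Something like this small helper would do (a sketch, not the package's code):

```r
# Validate the input class once, instead of using S3 dispatch per function.
stop_if_not_cimg <- function(img) {
  if (!inherits(img, "cimg")) {
    stop("img should be of class cimg.", call. = FALSE)
  }
  invisible(img)
}
```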
It might be a good idea to split the contents of `methods.R` into multiple R files, like `roi.R` and `intensity.R`, based on their purposes.
The function name `run_app` is too general. Consider naming it differently, for example `colocr_app`.
Consider adding `req(img1())` at the very beginning of every `output` object. For example:

```r
output$image_plot <- renderPlot({
  req(input$image1)
  par(mfrow = c(2, 2), mar = rep(1, 4))
  roi_show(img = img1(),
           px = px(),
           labels = labs.px())
})
```
You don't need to keep `img1`, `labs.px`, `px` and `corr` in `reactiveValues`. You can just start with things like this:

```r
# load images
img1 <- reactive({
  load.image(input$image1$datapath) # note that I removed the value assignment
})
```

Where you do use `reactiveValues`, `values <- reactiveValues()` followed by `values$df <- data.frame()` is enough.

For these three issues above, check out this modified version: https://gist.github.com/haozhu233/7a02bb4a03ccf4b5ef63acf40c53bbc7
As I said above, it would be nice if users could upload a zipped file with many pictures. You can add two buttons for going back and forward and a simple counter to make it easy to use.
It would be nice to add download-to-CSV buttons to those two tables on the "Tabular View" tab.
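For instance, a standard `downloadHandler` sketch (the output ID is illustrative, following the `values$df` pattern above):

```r
# Server side: write the current results table to a CSV on demand.
output$download_results <- downloadHandler(
  filename = function() "colocr_results.csv",
  content = function(file) {
    write.csv(values$df, file, row.names = FALSE)
  }
)
# UI side: downloadButton("download_results", "Download CSV")
```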
The correlation texts could be reformatted and become more readable.
Thanks a lot for your review @haozhu233!
@mahshaaban now both reviews are in.
Thanks @haozhu233 for your review. I will work on the changes you suggested. Regarding your last comment, I made some changes to use the pipeable approach, here. I haven't merged the changes to the development branch yet. So please, let me know if you have any comments on that as well.
I'd first like to thank you both for your time and the thoughtful comments.
I've made quite a few changes based on the comments and suggestions above, so I will start by mentioning these and then reply to each of the reviews separately.
The changes I made for these reviews are in a new branch called `pipeable`. To install this branch from GitHub directly, please use:

```r
devtools::install_github('MahShaaban/colocr@pipeable')
```

Since the master is still behind these changes and has the app directory as a submodule, please use the following to pull the repo and view it locally:

```
$ git clone --recurse-submodules https://github.com/MahShaaban/colocr
```
The main changes are:

- Renamed `intensity_show` to `roi_check`, `colocr_test` to `roi_test` and `run_app` to `colocr_app`.
- All functions are now pipeable with `%>%`, so a typical call can be as follows:

```r
img <- imager::load.image('path to image file')

img %>%
  roi_select(threshold = 50) %>%
  roi_test()
```
- All functions now take a `cimg` as input. `roi_select` returns a `cimg` with an attribute `label`. `roi_show` and `roi_check` return lists of 4 and 2 plots, respectively. Finally, `roi_test` returns a `data.frame`.
- I added an S3 method for each of the functions to take a `list` of `cimg` objects and return a `list` of their regular return values. This way, users can process multiple images at once (see the sketch after this list).
- I updated the vignette to describe these new changes. I also corrected the issues that were raised in the reviews.
- I updated the Shiny app to work with the new pipeable functions and added a few changes to it, including the ability to upload multiple files at once.
- I added more tests and moved the app tests to run with those of the package.
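As a sketch, the list dispatch presumably looks something like this (illustrative, not the package's exact code):

```r
# Generic with the existing cimg method, plus a list method that maps
# the cimg method over every image in the collection.
roi_test <- function(img, ...) UseMethod("roi_test")

roi_test.list <- function(img, ...) {
  lapply(img, roi_test, ...)  # one data.frame per image, returned as a list
}
```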
Here I reply to the different issues raised by both reviewers:

- The master branch has the app as a git submodule, so it requires `--recurse-submodules` when pulling the repo to work with it locally. Other branches don't have this issue.
- `devtools::install_github('MahShaaban/colocr')` would fail; pulling with `--recurse-submodules` would solve this issue.
- I added `req()` in each of the instances where the app shows an error on start. Now it shows nothing until the required input is provided.
- Regarding correlating more than two channels: the `ind` input in `roi_test` can be a way to do that. One can calculate the correlation for, say, the first and the second channel, then the second and the third.
- The functions now take a `list` of images as described above. I think this can do for processing a collection of images at once.
- I fixed the `n` default value in `roi_select`.
- `roi_test` now returns a `data.frame` instead of a `list`. The columns in the `data.frame` correspond to the different co-localization stats, which are named the same as the `type` argument: `pcc` and/or `moc`.
- The functions now dispatch on `list` to enable processing lists of images.
- I added `req()` as suggested.
- I added `downloadButton`s to the Tabular View.

I really appreciate all the comments and suggestions from both reviewers. They definitely helped me improve the code.
Thanks @MahShaaban!
@seaaan @haozhu233 could you please have a look at @MahShaaban's answer? Thanks in advance!
@MahShaaban Thanks for the responses. I just checked the branch and it looks a lot better. Here are some new issues I found.
- There is an `unzip` function in R that lets you unzip files and returns the names (you might want to create a temp dir and then do that); see the sketch after this list.
- I got an `ERROR: need finite 'xlim' values` on my machine. `threshold` should be a numeric >= 0 and < 100.
- Consider re-exporting `%>%` from `magrittr` so this package works with the pipe naturally.
- You have `shinyBS` in DESCRIPTION but didn't use it in any R code other than the shiny app. Maybe you should put it in the Suggests list. Same for `magrittr`: if you don't have a plan to handle the `%>%` in NAMESPACE, you can throw it in the Suggests list.
Thanks, @haozhu233

- I enabled uploading multiple images using `multiple = TRUE` in `fileInput()`, here. I also have a test case with two images and it passes fine. If you tried that on the version of the app hosted on shinyapps.io, it will not work since I haven't deployed the recent changes.
- I wrapped the output in an `if` statement, so it only shows the output `if (nrow(values$df) >= 1)`, here.
- For reformatting the correlation text, `\n` didn't work. I will keep looking at it.
- The error came from `roi_select(..., threshold = 100)`, so I restricted the input range to < 100.
- I moved `magrittr` and `shinyBS` to `Suggests` in the DESCRIPTION.

@MahShaaban have you tried `<br>`? Another tiny issue: can you reformat the UI part to make the code look a little bit more structured?
Nice changes! I have a few suggestions, but mostly it all looks good to me.
I like the functionality of processing multiple images.
It would be nice to return a data frame of one row per image instead of a list of data frames when `roi_test` is called on a list of images.
Maybe provide a `load_images()` function that takes a directory (or list of files?) and calls `lapply(., imager::load.image)`. You could also provide `load_image()` that just wraps `imager::load.image()` so the user doesn't have to run `library(imager)`.
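A minimal sketch of the suggested helpers (assuming `imager` stays in Imports):

```r
# Thin wrappers so users never need library(imager) themselves.
load_image <- function(path) {
  imager::load.image(path)
}

load_images <- function(paths) {
  lapply(paths, imager::load.image)
}
```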
"The reset of the analysis" --> the RESULT of the analysis?
"Arguable, selecting" --> ArguablY
"only semi-automate" --> only semi-automateS
The link to ImageJ doesn't work
"shrin" --> shrinK (in the argument to roi_select)
The example checking that the app and package output are equal returns `FALSE`. Use `all.equal()` instead of `==`.
"regioins" --> regions
The first colocalization benchmark score image gives statistics relatively close to the expected. The second answer is very different! Comment in the vignette on why that is. It seems like ideally you should be able to reproduce the expected results?
ppc -> pcc (last code box)
With regard to the @development and "no submodules!" on the README, I suggest that you explain on the README what those mean (since I didn't know, maybe other users will also wonder).
`roi_show()` returns `NULL` (via `imager::highlight`), e.g. as displayed in the vignette. If you return `invisible(NULL)` instead, `NULL` won't print out but you'll still get the same effect. You could also invisibly return the img to continue the piping.
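I.e. something like (a sketch):

```r
roi_show <- function(img, ...) {
  # ... existing plotting code ...
  invisible(img)  # nothing prints, but the pipe can continue
}
```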
Thanks, @haozhu233

- I tried `<br>`; it didn't work as expected.
- I restricted `threshold` in the shiny app to 0:99.

Hey, @seaaan. Here are some changes I made in response to your last comment:
- The reason for `roi_select` to return a `list` is to keep the returned `data.frame` units unchanged regardless of the number of input images. Returning one row for each image is not possible when there are multiple regions of interest in each image. It also gives a straightforward way to map the output back to the input without having to subset a `data.frame`. I think the only other option, returning a `data.frame` in the case of multiple input images, is to add an extra identifier column, which also means I would have to assign these identifiers or ask for them as input from the user.
- I added a function `image_load` that takes a path to one or more images and returns a `cimg` or a list of `cimg` objects.
- I now use `all.equal` instead of `==`.
- I commented in the vignette on why `colocr` would fail to produce reliable stats.
- I used `invisible(NULL)` as a return statement in both `roi_show` and `roi_check`.
Nice work! Just a couple of things:
In the vignette, when you introduce analyzing a collection of images, it would be better to show how to use `image_load` with multiple file paths than to create the collection using `list(img, img2)` like it is now.
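E.g. (hypothetical file paths):

```r
imgs <- image_load(c('images/img1.png', 'images/img2.png'))
```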
Also in the vignette, in the section where you're reproducing the analysis with the app and with code, the results aren't actually equivalent (i.e. `all.equal(...)` returns a difference). That's confusing! Why aren't they equivalent?
Thanks @seaaan

- I now use `image_load` in the section for analyzing a collection of images.
- I am not sure what might be the reason why you get different results. The results are equivalent, at least after rounding to the 2nd or 3rd digit. One thing I can think of is that the system files in your local version of the package might not be updated.
Okay, well it must be something on my end then. As long as it says `TRUE` in the CRAN vignette it should be fine. I don't have any further comments. Nice work!
Nice work @MahShaaban!
Thanks @seaaan!
@haozhu233 Thanks too! Do you have any further comments?
@maelle Nope~ :) @MahShaaban has addressed all the questions and comments I raised and I don't have any further comments. Great job!
I get this error for tests. Any clue? Is there any dependency with a minimal version not indicated in DESCRIPTION yet?
```
test-app.R:6: error: app works
Can't find variable: $
1: expect_pass(testApp(app_dir, compareImages = FALSE)) at C:\Users\Maelle\Documents\ropensci\colocr/tests/testthat/test-app.R:6
2: testApp(app_dir, compareImages = FALSE)
3: lapply(found_testnames, function(testname) {
       withr::local_dir(testsDir)
       withr::local_envvar(c(RSTUDIO = ""))
       withr::local_options(list(shinytest.app.dir = "appdir"))
       gc()
       env <- new.env(parent = .GlobalEnv)
       if (!quiet) {
           message(testname, " ", appendLF = FALSE)
       }
       source(testname, local = env)
   })
4: FUN(X[[i]], ...)
5: source(testname, local = env)
6: withVisible(eval(ei, envir))
7: eval(ei, envir)
8: eval(ei, envir)
9: ShinyDriver$new("../") at one_image.R:1
10: .subset2(public_bind_env, "initialize")(...)
11: sd_initialize(self, private, path, loadTimeout, checkNames, debug, phantomTimeout = phantomTimeout,
       seed = seed, cleanLogs = cleanLogs, shinyOptions = shinyOptions)
12: private$web$executeScript(js)
13: session_executeScript(self, private, script, ...)
14: private$makeRequest("EXECUTE SCRIPT", list(script = script, args = args))
15: session_makeRequest(self, private, endpoint, data, params, headers)
16: report_error(response)
--------------------------------------------------------------------------------
✔ |   6       | helpers.R [4.8 s]
✔ |  32       | pipeable [205.1 s]

== Results =====================================================================
Duration: 223.1 s

OK:       38
Failed:   1
Warnings: 0
Skipped:  0
```
I can't really understand what the problem is from the error. I have merged the branch we've been working on into the master and it passed the checks locally and on Travis. You may try a fresh clone and see if the error goes away.
I did a fresh clone and got it again, will try to extract a reprex.
```r
library(shinytest)

app_dir <- system.file('colocr_app/', package = 'colocr')
expect_pass(testApp(app_dir, compareImages = FALSE))
#> Running one_image.R
#> Error in session_makeRequest(self, private, endpoint, data, params, headers): Can't find variable: $
```

Created on 2018-09-14 by the reprex package (v0.2.0).
I get the same error with the `shinytest` CRAN version.
```r
library(shinytest)
#> Warning: package 'shinytest' was built under R version 3.5.1

app_dir <- system.file('colocr_app/', package = 'colocr')
expect_pass(testApp(app_dir, compareImages = FALSE))
#> Running one_image.R
#> Error in session_makeRequest(self, private, endpoint, data, params, headers): Can't find variable: $
```

Created on 2018-09-14 by the reprex package (v0.2.0).
Oops! After re-installing `htmltools` (it was somehow corrupted), I was able to run the test. It told me the output was different. For the app screenshot it seems like a font difference. However, it also shows me different results.
@MahShaaban could you please add testing on AppVeyor to see what you get there? Sorry to notice all of this only now.
I actually see you have a failing AppVeyor build, but it seems due to a wrong AppVeyor config, which prevents further tests from running. I'll stop exploring on my own system now; I hope you can get the tests passing on AppVeyor.
Besides, the link in your shinyapps.io badge, https://mahshaaban.shinyapps.io/colocr_app/, has a 404 status.
Regarding the AppVeyor build, this `webdriver` setup might help: https://github.com/rstudio/webdriver/pull/29/files
I fixed the link.
I can't get the tests to run on AppVeyor. I had a look at the issue from the `webdriver` repo. I don't get how they got it to pass, but if I had to guess, I think they skipped that part of the test!
Can you ask on the Slack workspace?
And any idea why I got different results?
Okay. Will do.
I updated the image to match the expected one, so the test for two images should pass with `No changes` now. My guess is I forgot to update the snapshot image the last time I ran the test.
But then why did the test pass on Travis?
I think it passed on Travis because I use `compareImages = FALSE` in `inst/colocr/run_tests.R`. I use this option because these images are screenshots and would differ from one OS to another.
Yes, but what about the average PCC etc.? (I'm still new to `shinytest`.)
I am also new to shinytest :) I ran the tests locally again and got `No changes`!
Summary
What does this package do? (explain in 50 words or less): Conduct Co-localization Analysis of Fluorescence Microscopy Images
Paste the full DESCRIPTION file inside a code block below:
URL for the package (the development repository, not a stylized html page): https://github.com/MahShaaban/colocr
Please indicate which category or categories from our package fit policies this package falls under *and why*? (e.g., data retrieval, reproducibility. If you are unsure, we suggest you make a pre-submission inquiry.):

[e.g., "data extraction, because the package parses a scientific data file format"] data extraction
Who is the target audience and what are scientific applications of this package? Biologists (no advanced R required)
Are there other R packages that accomplish the same thing? If so, how does yours differ or meet our criteria for best-in-category? The EBImage and imager packages contain the algorithms for dealing with images. This package uses imager but is focused only on the functionality needed for conducting co-localization of multichannel images. In addition, the package provides a shiny app to interactively perform the same analysis.
If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted. A pre-submission inquiry: https://github.com/ropensci/onboarding/issues/241
Requirements
Confirm each of the following by checking the box. This package:
Publication options

`paper.md` matching JOSS's requirements with a high-level description in the package root or in `inst/`.

Detail
[x] Does `R CMD check` (or `devtools::check()`) succeed? Paste and describe any errors or warnings:

[x] Does the package conform to rOpenSci packaging guidelines? Please describe any exceptions:
If this is a resubmission following rejection, please explain the change in circumstances:
If possible, please provide recommendations of reviewers - those with experience with similar packages and/or likely users of your package - and their GitHub user names: