Bioconductor / BiocWorkshops

:warning: 2018 :warning: Bioconductor Workshops
https://bioconductor.github.io/BiocWorkshops/

Waldron_PublicData.Rmd #36

Closed lwaldron closed 6 years ago

lwaldron commented 6 years ago

I'm creating an issue for each workshop, to be closed when both 1) the workshop is complete, and 2) it compiles without problems. Please note problems and status updates here.

lwaldron commented 6 years ago

@seandavi and @bhaibeka, you are invited as collaborators.

lwaldron commented 6 years ago

```
pandoc: Filter pandoc-citeproc not found
Error: pandoc document conversion failed with error 85
Execution halted
```
seandavi commented 6 years ago

See https://github.com/rstudio/bookdown-demo/issues/2.
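
For reference, the workaround discussed in that issue is to make the command-line build use a pandoc that ships the pandoc-citeproc filter, typically RStudio's bundled copy. A minimal sketch, assuming a default Linux RStudio install location (the path is an assumption, not something specified in this repo):

```r
## Sketch of the workaround from the linked issue: command-line builds
## (Rscript, make) may not see RStudio's bundled pandoc, which includes
## the pandoc-citeproc filter. The path below is an assumption for a
## default Linux RStudio install; adjust it for the build machine.
Sys.setenv(RSTUDIO_PANDOC = "/usr/lib/rstudio/bin/pandoc")

## Confirm rmarkdown can find pandoc before rendering the book.
rmarkdown::pandoc_available()
rmarkdown::pandoc_version()

bookdown::render_book("index.Rmd")
```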

lwaldron commented 6 years ago

(I have probably fixed this, but I'm waiting for the next build to see whether there are any other problems.)

```
processing file: Waldron_PublicData.Rmd
Error in parse_block(g[-1], g[1], params.src) : 
  duplicate label 'loadLibrary'
Calls: <Anonymous> ... process_file -> split_file -> lapply -> FUN -> parse_block
Execution halted
```
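
This error means two knitr chunks share the label loadLibrary, typically because the same label appears twice in a file or in two chapter Rmd files merged into one book. The durable fix is to rename the chunk in one of the files to something unique; the sketch below shows the stopgap alternative, using knitr's duplicate-label option:

```r
## Stopgap only: tell knitr to tolerate duplicate chunk labels while the
## colliding 'loadLibrary' chunks are renamed to something unique per file
## (e.g. a chapter-specific label). Renaming is the real fix; this option
## merely suppresses the error.
options(knitr.duplicate.label = "allow")
bookdown::render_book("index.Rmd")
```
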
lwaldron commented 6 years ago

@bhaibeka I think the PharmacoGx section should have less code and be more lightweight. Could you just download one smaller dataset and demonstrate what it contains? On the other hand, time is short: could I just cut the sections from "Comparing drug sensitivity datasets" onward?
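
For concreteness, a lighter-weight version might look something like the sketch below; this is only an illustration, and the use of the bundled CCLEsmall example PharmacoSet and the exact accessors shown are assumptions, not what the chapter currently does:

```r
## Illustrative sketch of a lighter-weight PharmacoGx demo, using the
## small example PharmacoSet shipped with the package instead of a large
## download. Accessor names reflect the 2018-era PharmacoGx API and are
## assumptions about how the trimmed section might look.
library(PharmacoGx)

## Datasets that could be fetched with downloadPSet(), for reference.
availablePSets()

## Load the bundled small CCLE PharmacoSet and show what it contains.
data(CCLEsmall)
CCLEsmall
head(cellNames(CCLEsmall))
head(drugNames(CCLEsmall))
```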

seandavi commented 6 years ago

I went ahead and shortened it. I also think the duplicate labels are fixed.

seandavi commented 6 years ago

It looks like the complete PharmacoGx section is back; I had cleaned it up and shortened it yesterday. Was that by design? To be clear, I had removed everything from "Comparing drug sensitivity datasets" onward, and I'd like to remove those sections again if that works for everyone.

seandavi commented 6 years ago

I removed it and things appear to build. I'll check in the changes in a few minutes.

lwaldron commented 6 years ago

Thanks @seandavi. Sorry I was out of touch yesterday, but I'm back and reviewing the book this weekend... I'll take off the "problems" label again.

lwaldron commented 6 years ago

```
processing file: 103_Waldron_PublicData.Rmd
No methods found in package 'IRanges' for request: 'subset' when loading 'derfinder'
snapshotDate(): 2018-07-17
Quitting from lines 854-861 (103_Waldron_PublicData.Rmd) 
Error in s$collate(limit = 1000) : unused argument (limit = 1000)
Calls: <Anonymous> ... handle -> withCallingHandlers -> withVisible -> eval -> eval
In addition: Warning message:
no DISPLAY variable so Tk is not available 
Execution halted
```
LiNk-NY commented 6 years ago

I'm getting an error when building:

```
Quitting from lines 847-854 (103_Waldron_PublicData.Rmd) 
Error in s$collate(limit = 1000) : unused argument (limit = 1000)
Calls: local ... handle -> withCallingHandlers -> withVisible -> eval -> eval
Execution halted
```
lwaldron commented 6 years ago

I got this error too until I built on a cleanly checked out copy...

lwaldron commented 6 years ago

The failing chunk is the one below, from the SRAdbV2 section. I wonder whether the cache=FALSE gets overridden somehow? In any case, the error doesn't seem to happen if there is in fact no cache.

```{r cache=FALSE}
# for VERY large result sets, this may take
# quite a bit of time and/or memory. An
# alternative is to use s$chunk() to retrieve
# one batch of records at a time and process
# incrementally.
res = s$collate(limit = 1000)
head(res)
```
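
If a stale cache is the culprit, deleting the knitr/bookdown cache for this chapter before rebuilding should reproduce the clean-checkout behaviour. A minimal sketch; the directory names are assumptions based on bookdown's default layout, not paths confirmed in this repo:

```r
## Sketch: remove stale knitr/bookdown caches so the chunk above is
## re-evaluated against the currently installed SRAdbV2. Directory names
## are assumptions based on bookdown's defaults; adjust to the real tree.
unlink("103_Waldron_PublicData_cache", recursive = TRUE)
unlink("_bookdown_files", recursive = TRUE)
```
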
seandavi commented 6 years ago

Be sure to update the SRAdbV2 version; this functionality is relatively new (last week), and the collate limit is the latest change. I tried setting a version on the remote, but versioning remotes does not work right now (it interprets the version as part of the name).
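
In practice that means reinstalling the development SRAdbV2 from GitHub rather than relying on whatever copy is already installed. A minimal sketch, not a prescribed step from the repo, just one way to pick up the new collate(limit=) argument:

```r
## Sketch: install the current development SRAdbV2 from GitHub so that
## s$collate(limit = ...) is available. This installs HEAD; pinning a
## specific version on the remote is what is reported not to work yet.
if (!requireNamespace("remotes", quietly = TRUE))
    install.packages("remotes")
remotes::install_github("seandavi/SRAdbV2")

## Check which version ended up installed.
packageVersion("SRAdbV2")
```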


seandavi commented 6 years ago

The AMI has the correct versions, so it should be fine. And it is SRAdbV2 (not yet in Bioconductor, I know).

On Mon, Jul 23, 2018 at 10:39 AM lshep wrote:

> SRAdb or seandavi/SRAdbV2? I rendered the book fresh this morning and it rendered fine. Do these need updating outside the AMI id you gave me last night?
