sr320 / course-fish546-2021


Scripting #77

Closed: sr320 closed this issue 3 years ago

sr320 commented 3 years ago

Create a script, as either a Jupyter notebook or an R Markdown file, that performs a relevant operation (or operations) on a number of files (>4) "automatically".

This should be in your class repo and should be written so that when I clone the repo I can generate the output.

For instance, the files will need to be in the repo, or the script will need to be able to pull them from a public URL. Note that absolute paths will not work.
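The shape of the assignment can be sketched as follows. This is a minimal, hypothetical example, not anyone's actual submission: the `data/` and `output/` directory names and the line-counting operation are placeholders; the point is the relative paths (so the script works after cloning) and the loop over more than four files.

```python
# Hypothetical sketch of the assignment: loop over >4 input files found via a
# RELATIVE path and write one output per file. Directory names and the
# line-counting operation are placeholders.
from pathlib import Path

def process_all(in_dir="data", out_dir="output", pattern="*.txt"):
    """Run a simple operation (here: counting lines) on every matching file."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)        # relative path: works after a clone
    results = {}
    for f in sorted(Path(in_dir).glob(pattern)):  # no absolute paths anywhere
        n_lines = sum(1 for _ in f.open())
        (out / (f.stem + ".count")).write_text(str(n_lines) + "\n")
        results[f.name] = n_lines
    return results
```

Because everything is resolved relative to the working directory, running the script from the repo root after cloning reproduces the output without any path edits.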

Please place the URL of the script, Jupyter notebook, or R Markdown file below.

aspencoyle commented 3 years ago

https://github.com/fish546-2021/aidan-hematodinium/blob/main/projects/hematodinium_analysis/scripts/11_obtaining_TPM_for_DEGs.Rmd

Loops over kallisto output files, selects a column, and left-joins it onto an existing table.
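The linked script is an R Markdown file; the same join can be sketched in stdlib Python. Assumptions here: the input is a kallisto `abundance.tsv` with its standard columns (`target_id`, `length`, `eff_length`, `est_counts`, `tpm`), and the base table is keyed by `target_id`; the function name and `"NA"` fill value are illustrative, not from the original script.

```python
import csv

def left_join_tpm(base_rows, abundance_path, sample_name):
    """Left-join the 'tpm' column of one kallisto abundance.tsv onto base_rows
    (a list of dicts keyed by 'target_id'). Rows in the base table keep their
    place; transcripts absent from the abundance file get "NA", as in a left join.
    """
    with open(abundance_path) as fh:
        tpm = {r["target_id"]: r["tpm"]
               for r in csv.DictReader(fh, delimiter="\t")}
    for row in base_rows:
        row[sample_name] = tpm.get(row["target_id"], "NA")
    return base_rows
```

Calling this once per kallisto output file, with a different `sample_name` each time, accumulates one TPM column per sample on the base table.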

skreling commented 3 years ago

https://github.com/fish546-2021/Sam-Metabarcoding/blob/main/Assignments/Create_a_Script.Rmd

Takes input fastq or fastq.gz files, uses DADA2 to create a combined plot of per-sample quality scores, and writes the figure to a specified directory. Note that the output plot is sent to the "results" folder.
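The script itself uses DADA2 in R; as a language-neutral illustration, the statistic behind a per-base quality plot (mean Phred score at each read position) can be computed from raw FASTQ text in a few lines of stdlib Python. This is a hedged sketch of the underlying idea, not a substitute for DADA2's plotting, and it assumes Phred+33 encoding.

```python
from statistics import mean

def mean_quality_by_position(fastq_lines):
    """Mean Phred quality per read position from FASTQ text (Phred+33).
    Roughly the quantity a per-base quality-score plot displays."""
    quals = fastq_lines[3::4]                    # every 4th line is the quality string
    by_pos = {}
    for q in quals:
        for i, ch in enumerate(q.rstrip("\n")):
            by_pos.setdefault(i, []).append(ord(ch) - 33)  # decode Phred+33
    return {i: mean(v) for i, v in by_pos.items()}
```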

I also wrote a very small xargs script to remove unnecessary .fasta files: https://github.com/fish546-2021/Sam-Metabarcoding/blob/main/Assignments/Assignment_script2.ipynb
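The notebook uses xargs; an equivalent cleanup can be sketched in stdlib Python. The function name and the `dry_run` safety flag are additions of this sketch (a dry run first is a sensible default when the operation deletes files), not features of the original script.

```python
from pathlib import Path

def remove_fastas(root, dry_run=True):
    """Delete every .fasta file under root (recursively), mimicking a
    find | xargs rm pipeline. With dry_run=True, only list the targets."""
    targets = sorted(Path(root).rglob("*.fasta"))
    for f in targets:
        if not dry_run:
            f.unlink()
    return [str(f) for f in targets]
```

Running once with `dry_run=True` shows what would be removed; running again with `dry_run=False` actually deletes.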

meganewing commented 3 years ago

Runs FastQC on "all" my raw data files (all meaning as many as are on Gannet; waiting for an update on the whole ostrich issue before transferring the rest of the files to Gannet).

https://github.com/fish546-2021/Megan-project/blob/main/code/0218-fastqcAutoForMultipleFiles.ipynb
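The general pattern of the notebook above (run one external tool per input file) can be sketched with `subprocess`. The default `("fastqc", "-o", "results")` command reflects FastQC's usual CLI, but treat the exact flags as an assumption; the helper name is invented for this sketch.

```python
import subprocess
from pathlib import Path

def run_on_each(files, cmd=("fastqc", "-o", "results"), check=True):
    """Invoke an external command once per input file and collect return codes.
    The default cmd assumes FastQC's CLI (-o = output directory); swap in any
    tool that takes a file as its final argument."""
    codes = []
    for f in files:
        proc = subprocess.run([*cmd, str(f)], check=check)  # raises on failure if check
        codes.append(proc.returncode)
    return codes
```

A usage sketch for the FastQC case would be `run_on_each(sorted(Path("raw").glob("*.fastq.gz")))`, after creating the `results` directory.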

laurel-nave-powers commented 3 years ago

https://github.com/fish546-2021/Laurel-genes/blob/main/546_R_HW/546_R_HW/546_R_HW_script.Rmd

Brybrio commented 3 years ago

https://github.com/fish546-2021/Bryan-Eelgrass/blob/main/Scripts/Stacks_long.ipynb

jdduprey commented 3 years ago

I uploaded a subset of data to Gannet and am working on a script to download and unzip files from there instead of from Google Drive. Still a work in progress.