
Collaborative OS project on herbivore grazing gradients.

grazing-gradients

This is the archive of the LECReefs collaborative grazing project, learning Rmd and Git through writing a scientific paper.

Read the published paper: https://besjournals.onlinelibrary.wiley.com/doi/10.1111/1365-2435.13457

Reproduce the manuscript from code: https://github.com/jpwrobinson/grazing-grads

Meeting 7 - 14th November 2018

Items to discuss:

Grazing functions - update 25th Oct 2018

I've applied current best practice for estimating the cropping and browsing functions. We are waiting on Andy Hoey for bite size data, but I think he's in the field just now. To summarise what's going on:

Cropper bite rates do not scale with body size, but we expect bite volume to increase with body mass. Marshell & Mumby (2015, J. Exp. Mar. Biol.) estimated algal consumption according to the energy requirements of herbivorous fish, which they took from a 1998 study of Caribbean fish and algal food quality (van Rooij et al., J. Fish Biol.). Basically, the cropping function can be worked out in the following steps (a rough R sketch follows the list):

  1. predict daily algal consumption for a fish of given mass
  2. scale the hourly bite rate up to estimate the number of bites in one day
  3. estimate the algae consumed per bite
  4. estimate the algae consumed per hour per hectare, given the observed bite rate
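Here is a minimal R sketch of those four steps. The allometric coefficients and the assumed feeding-day length are placeholders for illustration only, not the fitted values from Marshell & Mumby or van Rooij et al.

## illustrative cropping function (placeholder coefficients, not fitted values)
crop_function <- function(mass_g, bite_rate_hr, density_ha, feeding_hours = 12) {
  ## 1. daily algal consumption (g) predicted from body mass (hypothetical allometry)
  daily_consumption_g <- 0.03 * mass_g^0.8
  ## 2. scale the hourly bite rate up to bites per day
  bites_per_day <- bite_rate_hr * feeding_hours
  ## 3. algae consumed per bite
  algae_per_bite_g <- daily_consumption_g / bites_per_day
  ## 4. algae consumed per hour per hectare, given the observed bite rate and fish density
  algae_per_bite_g * bite_rate_hr * density_ha
}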

If you do this, the cropping function is pretty much predicted by biomass, and bite rates add little extra information to the grazing function. But this is current best practice - so it will be good to demonstrate why in situ feeding observations are really important.

For browsers, we don't have good information on the nutritional quality of macroalgae. Best practice here is to estimate the mass-standardized bite rate (used by Bellwood in several papers), which is just bite rate * body mass. Because we have very little species-level information on browser bite rates, this ends up with a very tight relationship between biomass and browsing function. Again, this highlights the need for in situ feeding studies in different locations.
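As a minimal illustration (variable names here are made up; bite rates come from the literature and mass from the UVC data):

## mass-standardized browsing function: hourly bite rate scaled by body mass (g)
browse_function <- bite_rate_hr * mass_g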

I expect the scraper function analysis will be strongest. This function is most well-studied, and we have feeding observations for 24 out of 27 observed species already.

Compile 03_function_models.Rmd to see the figures behind this update.


Referencing with Rmarkdown

We can use BibTeX files and CSL style files to generate reference lists in an R Markdown document. Every reference manager can export a BibTeX library that contains unique 'citekeys' - we'll export our grazing-gradient library from Mendeley and keep the BibTeX file in this repo.

To make this work, make sure you have done the following in RStudio:

install.packages(c('knitcitations', 'bibtex', 'citr'))   ## install the citation packages (once)
library(knitcitations); library(bibtex); library(citr)   ## load them each session
cleanbib()   ## clear knitcitations' stored citation list from previous runs
cite_options(citation_format = "pandoc", check.entries = TRUE)   ## write pandoc-style citations

To find all of the citation styles, fork the CSL styles repo to your account: github.com/citation-style-language/styles

Then, clone it to your computer through terminal:

git clone https://github.com/jpwrobinson/styles.git

Now you have all of the citation styles locally on your computer. Find the one that you'd like to use (e.g. Global Change Biology), then copy that .csl file (e.g. global-change-biology.csl) into your PROJECT_NAME/ms (manuscript) folder. This way it can be sourced directly while knitting the document.
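To point the manuscript at both files, add two lines to the Rmd YAML front matter (the .bib file name below is illustrative); citations are then inserted in the text as [@citekey].

bibliography: grazing-gradients.bib
csl: global-change-biology.csl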

What about cite-while-you-write options? The citr package now does this for RStudio. Copied from https://libscie.github.io/rmarkdown-workshop/handout.html:

"citr is an R package that provides an easy-to-use RStudio addin that facilitates inserting citations. The addin will automatically look up the Bib(La)TeX-file(s) specified in the YAML front matter. The references for the inserted citations are automatically added to the documents reference section.

Once citr is installed (install.packages("citr")) and you have restarted your R session, the addin appears in the menus and you can define a keyboard shortcut to call the addin."


Meeting 6 - 4th Oct 2018 - biomass models, main text Figure, introduction

Jeneen, Jan, James:

Jan is also starting the first results paragraph.


Meeting 5 - 21st Sept 2018 - biomass models

Jeneen, Jan and James discussed the first model results. We decided to improve the benthic + fishing predictors by considering alternative regimes (turf, sand, rubble) and a fishable biomass gradient.

James received the bite rate data and is working on a bite rate predictive model.

Next steps in #14


summer was great


Meeting 4 - 30th May 2018 - bite rate literature and Rmd collaborating

Tasks


Meeting 3 - 23rd May 2018 - checking in with Rmd and data questions, and bite rate literature search

Tasks

It's pretty difficult to transition from R to R Markdown, and to writing text next to code. R Markdown is very powerful (I only use the basics): here's RStudio's R Markdown cheatsheet.


Meeting 2 - 15th May 2018 - R markdown for reproducible notebooks and reports

Tasks

This thing you're reading is a Markdown file. It's designed to encourage fast writing with neat formatting, but it functions like a simple text document. GitHub likes Markdown, so when we write stuff here it gets converted into a simple, well-structured README file.

Check here for all your Markdown tips. All you really need are different numbers of hashtags to separate out sections. We've been editing this file in a basic text editor, but you can get a free Markdown viewer if you want to preview how things will be formatted online. I use Typora.

# Section 1

## Section 2

### Section 3

#### Section 4

##### Section 5

etc.

RStudio has also developed a hybrid of R and Markdown: R Markdown. It's designed for writing quick, readable reports that contain R code, data, figures and tables. Here are some examples of what you can do with R Markdown reports.

We're going to use R Markdown to explore the UVC data, so we can easily share our findings among the group. The document will serve as a lab notebook that keeps track of everything we're working on, growing over time as we figure out how herbivore functions vary across the Indian Ocean.

Useful bash commands

We won't be using bash for much more than git pull, git add -A, git commit -m 'message', and git push. Sometimes you will need to navigate around folders and check where you are. For that:

cd     ## change directory
cd ..     ## move 'up' your file structure
cd Documents/git_repos     ## move into Documents, and then git_repos

pwd     ## 'print working directory'

ls     ## list files in your working directory
ls -a     ## list 'all' files, including hidden ones (like .gitignore)
ls -l     ## list files with some metadata (when they were created, how large they are)

Herbivore metadata

All cleaned herbivore data is at data/wio_gbr_herb_master.Rdata, with the full species list in data/herbivore_species_list.csv.

raw-data contains all raw csv files from Nick, and these were cleaned by James in scripts/clean_merge_graham_data.R. I'd suggest using only the Rdata file and subsetting to the relevant regions. For example,

load('data/wio_gbr_herb_master.Rdata')
str(herb) ## check structure of data frame
unique(herb$dataset) ## check regions in herb

## subset to seychelles data only
seychelles <- herb[herb$dataset == 'Seychelles',]

## write to csv
write.csv(seychelles, file = 'data/seychelles_master.csv')

We each decided to explore a different region's dataset.

Column names in wio_gbr_herb_master.Rdata

In this dataset, each row is an observation of an individual fish.

Location stuff

date = survey date (this is not standardized, e.g. only years for Seychelles, and missing for Chagos)
dataset = region surveyed
reef = area surveyed (e.g. island or reef section)
site = site name within area surveyed
site.number = numeric site names
management = fished or unfished
habitat = habitat type (not standardized, something to discuss)
unique.id = unique identifier for each transect, indicating reef + site + transect number
depth = site depth (metres)
transect = transect number within each site
transect.area = transect area in m^2

Fish stuff

family = fish species family
species = species name
FG = functional group (grazer, browser, scraper)
length.cm = length of fish (cm)
mass.g = weight of fish (grams)
biomass.kgha = biomass of fish in kg per hectare
abundance.500m2 = abundance of fish in 500 m^2
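As a quick example of how these columns fit together (assuming herb is loaded as shown above), we can sum functional-group biomass on each transect and then average by region using base R:

## total biomass (kg/ha) of each functional group on each transect
fg_transect <- aggregate(biomass.kgha ~ dataset + unique.id + FG, data = herb, FUN = sum)

## mean functional-group biomass (kg/ha) per region
aggregate(biomass.kgha ~ dataset + FG, data = fg_transect, FUN = mean)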


Meeting 1 - 2nd May 2018 - Github setup

We'll be following the Git lessons from Software Carpentry.

Troubleshooting

Who uses GitHub and why?