Hi Melanie,
Good find! Yes, you'll want to use this fourth one, except I recommend using the masked version:
model/GL-NCEAS-Halpern2008/data/masked_ecosystems_d_s_bottom.tif
This corresponds with "Deep Soft Benthic" at http://www.nceas.ucsb.edu/globalmarine/ecosystems.
From the Halpern (2008) SOM (bottom of p. 15) we see that all 4 (not just the 3 I previously mentioned) should be used.
Good work, BB
On Thu, Apr 10, 2014 at 8:00 AM, Melanie Frazier frazier@nceas.ucsb.edu wrote:

I am calculating the area of softbottom in the high seas (for the HD subtidal soft-bottom pressure), and I had a question about the habitat data.
For EEZs, the softbottom habitat was limited to 0-200 m depth (described by two files, masked_ecosystems_s_t_s_bottom.tif and masked_ecosystems_soft_shelf.tif, located in model/GL-NCEAS-Halpern2008/data).
However, when I combine these layers (along with masked_ecosystems_soft_slope.tiff, as suggested by Ben B.), it looks like the data are basically limited to the shoreline (see first image).
However, I found this file: ecosystems_d_s_bottom (located in model/GL-NCEAS-Halpern2008/tmp), which looks potentially better (see second image). Is this the layer I should use to calculate soft-bottom area for the high seas? Or is there a file that describes these data layers?
Thanks for all your help!
Hi Liz and all,
Just to further justify the rationale, the Pelagic high bycatch and Demersal non-destructive high bycatch make sense as an alternative fishing pressure, picked up already by Commercial High bycatch (D&P) and Commercial Low bycatch (D&P). Since we haven't categorized pelagic waters as a dedicated habitat per se, the pel_lb and pel_hb fishing categories aren't applied to a separate Habitat Destruction: HD Pelagic category.
Given the confusion ("Sub-tidal Soft Bottom" at cumimpacts/ecosystems refers to shallow, 0-60 m, habitat, while for the High Seas it applies to all four soft bottom habitats), I suggest renaming in pressures_matrix.csv (attached), at least for High Seas, to exclude the word "subtidal".
BB
On Thu, Apr 10, 2014 at 10:47 AM, Elizabeth Selig eselig@conservation.org wrote:

Dear Mel,
This all sounds good to me. I think my only question is why we don't use pelagic high bycatch and Demersal non-destructive high bycatch. Do we assume that these don't have an impact on softbottom? I can see the rationale - just want to make sure others agree.
Liz
On Thu, Apr 10, 2014 at 12:29 PM, Melanie Frazier frazier@nceas.ucsb.edu wrote:

Thanks Ben, Katie, and Liz for all the help calculating the HD_subtidal_softbottom pressure for the high seas! Now, I need some more advice.
I’ve attached a figure to help explain how I am planning to do the calculation (based on my understanding of our discussions/data).
Here are my specific questions (based on the figure)
Hi @Melsteroni, @katlongo, @bshalpern, @erselig,
So, working backwards from layers_navigation_2012a_2013a, we see that layer hd_subtidal_sb comes from model/GL-NCEAS-subtidal_softbottom_v2013a/data/po_rgn_hd_subtidal_sb_2013a.csv, whose file owner is longo. The tmp/README.pdf there suggests model/GL-NCEAS-Pressures_SoftBottom/manual_output/Kris_CatchSumEEZGear.csv got used, which presumably used just catch (and not VGPM). It actually includes OHI 2012 region IDs for the high seas (175-185) and Antarctica (162). There are a couple of issues, though, translating these data to our new OHI 2014 regions:
model/GL-NCEAS-OceanRegions_v2013a/manual_output/eez_rgn_2013master.csv
We could instead just use the github: ohiprep/Global/SAUP-FishCatchByGearType_Halpern2008 for extracting to our 2014 regions.
Make sense?
BB
PS Replying to just this email (i.e. notifications@github.com) will log the email as a comment in this issue #66 (a great reference) and send to all subscribed (@Melsteroni, @katlongo, @bshalpern, @erselig).
On Tue, Apr 15, 2014 at 2:36 PM, Melanie Frazier frazier@nceas.ucsb.edu wrote:

Ben B:
Do you know if there is a way to confirm which data was used to calculate the fisheries pressures (i.e. raw catch vs. productivity corrected catch)?
Thanks!
Mel
On Tue, Apr 15, 2014 at 2:14 PM, Katie Longo longo@nceas.ucsb.edu wrote:

Yes!! That's what I mean!!! :) :) :)
So, on that long call that Ben B, Mel, and Liz were on, we found two sets of data: one is tons/km2, the other is VGPM data. I fear that we've been using the former (raw catch) and not the latter (productivity-corrected). Do you know which one was actually used? Or is there a way to verify?
On 4/15/14 2:12 PM, Melanie Frazier wrote:

I think I finally understand the general argument (I realize I'm probably late to the ballgame here....)!
Is this analogy correct? If the same number of trees were harvested in Arizona vs. the Pacific Northwest, the pressure from the AZ harvest would in actuality be far greater, because that is a much less productive environment. And, to compare regional pressures, either: 1) comparisons should be made within the same general ecological regions; or 2) comparisons can be made across all regions after the data are scaled by productivity.
So, as Katie was saying: it all boils down to whether the data used to calculate the fishing pressures are based on raw catch data or are standardized by local productivity. I don't know the pressures data well enough to know how they were calculated or what the units are, but reading through the papers, my assumption would be that the data used to calculate the pressures WERE scaled by productivity. The 2012 Halpern Nature paper says: "Full details on the data that comprise this layer are provided in Halpern et al." In that paper, it says: "These catch-per-area values, per gear category, were then divided by average productivity (g Carbon/m2/yr) for each cell on the basis of productivity data derived from the Vertically Generalized Production Model (VGPM)"
To me, this suggests that the pressures data for fisheries for the 2012 and 2013 data are standardized by productivity (and for the HS/AQ, I used the 2013 layers). But maybe this isn't correct?
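To make the scaling concrete, here is a toy numeric version of that analogy in R (all numbers invented):

```r
# Identical harvest rates, very different productivity: the standardized
# pressure (catch / productivity) differs ten-fold.
catch_rate   <- c(AZ = 10, PNW = 10)   # harvest rate (made-up units)
productivity <- c(AZ = 50, PNW = 500)  # productivity (made-up units)
catch_rate / productivity
#  AZ  PNW
# 0.2 0.02
```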
Hi all,
just digging back, I found the email where Ben B describes the two data types to Mel (copied below). Mel, if you used 'catch rate' and not 'fishprod', then we know the high seas values are not corrected by productivity. The open question remains as to which product was used by Darren for the EEZs. Ben B, do you think you can shed light on this last question?
Thanks K.
-------- Original Message --------
Subject: [ohicore] highseas: soft bottom habitats and fisheries by gear type (#66)
Date: Wed, 09 Apr 2014 17:54:36 -0700
From: Ben Best notifications@github.com
Reply-To: OHI-Science/ohicore reply@reply.github.com
To: OHI-Science/ohicore ohicore@noreply.github.com
CC: katlongo catherine.longo@gmail.com
Hi Melanie and Katie,
aka @Melsteroni and @katlongo (to automatically track any further updates on this autocreated issue)
Below is the README.md for the copied-over product SAUP-FishCatchByGearType_Halpern2008, with sensible names by gear type code, and an area raster. Rasters here:
\\neptune.nceas.ucsb.edu\data_edit\git-annex\Global\SAUP-FishCatchByGearType_Halpern2008\data
As I previously Skyped, you can get the soft bottom habitats here:
data_edit/model/GL-NCEAS-Halpern2008/data
Sub-tidal Soft Bottom
masked_ecosystems_s_t_s_bottom.tif
Soft Shelf
masked_ecosystems_soft_shelf.tif
Soft Slope
masked_ecosystems_soft_slope.tif
Note that for the EEZ analysis we seemed to use only Sub-tidal Soft Bottom + Soft Shelf (based on model/GL-NCEAS-Habitats/import1.R), not Soft Slope, which is probably worthwhile for the high seas.
BB
Sea Around Us Project - Fisheries Catch Rate by Gear Type
These rasters are copies of the original fisheries rasters from Halpern et al (2008) in the format:
[product]/[gear type | area]/[projection]
Files are to be found at:
\\neptune.nceas.ucsb.edu\data_edit\git-annex\Global\SAUP-FishCatchByGearType_Halpern2008\data
Product
- /catch/: average catch rate (tons per km2 per year) by gear type for all reported fish species caught 1999-2003. 360 rows and 720 columns.
- /fishprod/: catch divided by average productivity (g Carbon/m2/yr) using the Vertically Generalized Production Model (VGPM). 1024 rows and 2048 columns.

An area raster (in km2) is given for each product.

Gear Types

id | code | label
---|---|---
1 | pel_lb | Pelagic Low-Bycatch Fishing
2 | pel_hb | Pelagic High-Bycatch Fishing
3 | dem_d | Demersal Destructive Fishing
4 | dem_nd_lb | Demersal Non-Destructive, Low-Bycatch Fishing
5 | dem_nd_hb | Demersal Non-Destructive, High-Bycatch Fishing
Projection
Both sets of products are in geographic coordinate system (gcs), but with different dimensions.
Other
For more details, see the supplement of Halpern et al (2008) and inspect the paths in the R script.
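For reference, a minimal R sketch of combining these products (file names are assumed from the naming pattern above and are not verified): catch rate (tons per km2 per year) times cell area (km2) gives tons per year per cell.

```r
# Sketch only: read a gear-type catch-rate raster and the matching area
# raster, then total the annual catch for that gear globally.
library(raster)

d <- '//neptune.nceas.ucsb.edu/data_edit/git-annex/Global/SAUP-FishCatchByGearType_Halpern2008/data'
r_rate <- raster(file.path(d, 'catch_dem_d_gcs.tif'))  # demersal destructive, tons/km2/yr (name assumed)
r_area <- raster(file.path(d, 'catch_area_gcs.tif'))   # cell area in km2 (name assumed)

r_tons <- r_rate * r_area        # tons/yr per half-degree cell
cellStats(r_tons, 'sum')         # global annual total for this gear type
```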
On Tue, Apr 8, 2014 at 4:05 PM, Katie Longo wrote:
Hello folks,
just a reminder for tomorrow's 1:30pm meeting:
Topic: how to calculate soft bottom for high seas?
Background: calculating soft bottom status (Biodiversity - Habitats) requires two pieces of information: 1) a global soft bottom map, 2) catch per gear type in the HS. We have dataset 1; we're missing dataset 2.

What we have: catch per gear type for all high seas, not broken down by FAO-HS region (in hand); catch per FAO-HS region, not broken down by gear type (in hand); raw catch data per gear type per half-degree cell up to 2004 (to be located) OR a map of fishing pressure scores from the cumulative impacts map.

We need to figure out: where is the raw catch data by half-degree cell? Is it per gear type? If it is, summarize catch per bottom-trawling gear per HS region; if it isn't, can we use cumulative impact scores to infer soft bottom pressures for HS?
Talk to you tomorrow! K.
Yes! The further description was very helpful. (I know you pointed it out before, Ben, but the significance wasn't entirely clear to me.)
I now have a better understanding of these data types.
Hi Katie,
Ok, so the Nature 2012 analysis is based on model/GL-NCEAS-Pressures_SoftBottom:

1. tmp/global_modeldata_trawl_area.csv (fields: id, km2, whence; source: GL-NCEAS-Habitats)
2. manual_output/Kris_CatchSumEEZGear.csv (fields: year, saup_id, gear, value)
3. model.R:
# calculate density scores
d$catch <- d$value
d$catch_density <- ifelse(d$km2>0,d$catch / d$km2, NA)
d$catch_density_refpt <- max(d$catch_density, na.rm=T)
d$catch_density_score <- log(d$catch_density+1) / log(d$catch_density_refpt+1)
#
# calc status score
d$status_raw <- 1 - d$catch_density_score
d$status_refpt <- median(d$status_raw, na.rm=T)
stopifnot(d$status_refpt>0)
#
d$status <- score.clamp(d$status_raw / d$status_refpt) # score.clamp: OHI helper, assumed to clamp scores into [0,1]
d$p <- 1 - d$status
d$p[is.na(d$p)] <- 0
summary(d)
global_modeldata_hd_subtidal_sb_model <- d
ohi.save('global_modeldata_hd_subtidal_sb_model') # save timeseries
#
# calc pressure
stopifnot(length(unique(d$id)) == length(d$id[d$year == max(d$year)]))
d <- d[d$year == max(d$year),c('id','p')] # extract cur year only
names(d) <- c('id', 'value')
global_pressures_hd_subtidal_sb <- d
ohi.save('global_pressures_hd_subtidal_sb')
Am I still missing something?
BB
On Wed, Apr 16, 2014 at 9:57 AM, Katie Longo longo@nceas.ucsb.edu wrote:
Hi Ben,
thanks for digging but these are not the data we are concerned about. (By the way, these values are not used for Antarctica so it's not a concern that the regions don't match.)
What we are interested in is the fishing pressures. In other words, those 5 categories (demersal high bycatch, pelagic high bycatch, etc., numbered 1 to 5) you were looking at the other day, for which you found two types of data: catch density (I think it was called "cr", in tons/km2 per cell) and a VGPM productivity-corrected version.
The layers navigation for 2013 will simply reveal we used data from 2012 multiplied by a factor. The question we have is which type of data was used by Darren for the first global OHI. If not obvious from his script, we might have to just look at values for an EEZ to understand which ones he used.
Thanks Katie
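A sketch of the per-EEZ spot check Katie suggests (the EEZ polygon layer below is a placeholder): compare regional means from the raw catch-rate and VGPM-corrected (fishprod) rasters against the values in the 2012 layer.

```r
# Whichever product's regional means track the 2012 fp_com_* values
# is the one Darren used.
library(raster)
library(rgdal)

d <- '//neptune.nceas.ucsb.edu/data_edit/git-annex/Global/SAUP-FishCatchByGearType_Halpern2008/data'
r_catch    <- raster(file.path(d, 'catch_dem_d_gcs.tif'))     # tons/km2/yr (name assumed)
r_fishprod <- raster(file.path(d, 'fishprod_dem_d_gcs.tif'))  # VGPM-standardized

eez <- readOGR('path/to/eez', 'eez_gcs')  # placeholder EEZ polygons
mean_catch    <- extract(r_catch,    eez, fun = mean, na.rm = TRUE)
mean_fishprod <- extract(r_fishprod, eez, fun = mean, na.rm = TRUE)
```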
Hi Ben, this is still soft bottom.
Sorry, didn't mean to send just yet.
We're interested in fishing by gear type. See your previous email I just forwarded for the data descriptions.
Hi Katie,
Ok, sorry to have missed the larger question here on all commercial fisheries, and not just soft bottom habitat destruction (i.e. principally trawl).
In summary, yes, it looks like the Nature 2012 analysis did use the VGPM-corrected data, which should correspond to either of these datasets, each with different projections and dimensions. The denser Mollweide raster should simply be a densification of the geographic one, for consistency amongst cumulative impact rasters (but worth checking):
- Geographic (1,024 x 2,048), github: ohiprep/Global/SAUP-FishCatchByGearType_Halpern2008:
  model/git-annex/Global/SAUP-FishCatchByGearType_Halpern2008/data/fishprod_*_gcs.tif
- Mollweide (38,610 x 19,305), neptune:
  model/GL-NCEAS-Halpern2008/data/masked_impacts_*.tif

where * above corresponds with:
id | code | label |
---|---|---|
1 | pel_lb | Pelagic Low-Bycatch Fishing |
2 | pel_hb | Pelagic High-Bycatch Fishing |
3 | dem_d | Demersal Destructive Fishing |
4 | dem_nd_lb | Demersal Non-Destructive, Low-Bycatch Fishing |
5 | dem_nd_hb | Demersal Non-Destructive, High-Bycatch Fishing |
For more gory details, first what we did last year, then this year...
www 2013

directory: model/GL-NCEAS-Pressures_CommercialFisheries_v2013a (README.txt)
Source: GL-NCEAS-Halpern2008, GL-SAUP-FisheriesCatchData_v2013
We updated the commercial fisheries pressures layers based on the best available data, which has been updated only for the overall Sea Around Us Project (SAUP) region and not at the original finer half degree cell performed by Halpern et al (2008) which was applied to last year's initial global Index (Halpern et al 2012). A percent change to each of the 5 input commercial fishing pressure raster layers was applied. The percent change was calculated as the difference in average annual total catch between the most recent reporting period (2009 to 2011) and the period (1999 to 2003) previously used by Halpern et al (2008), and then divided by the original period (1999 to 2003). The commercial fisheries 1km resolution rasters (Halpern et al 2008) were then multiplied by this difference (1 + percent change) specific to SAUP regions. Mean 1km pixel values were then extracted per OHI region and year (2012 or 2013). The 2012 and 2013 regional values were combined to rescale values based on the maximum scores plus 10% across both years per fishing pressure. For all 5 pressure layers the maximum came from the 2012 (vs 2013) value. The commercial fishing high bycatch layer was then defined as the average of demersal destructive (e.g. trawl), demersal non-destructive high bycatch (e.g. pots, traps) and pelagic high bycatch (e.g. long-lines) gear. The commercial fishing low bycatch was taken as the average of destructive low bycatch (e.g. hook and line) and pelagic low bycatch (e.g. purse seines) gear.
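A minimal R sketch of the update described above (the actual processing used ArcPy, shown later in this thread; file names are illustrative, and the rescaling is shown per pixel for brevity, whereas the README applies it to regional values across both years):

```r
# Apply (1 + percent change) per SAUP region, then rescale by max + 10%.
library(raster)

r_fp     <- raster('masked_impacts_dem_d.tif')  # a Halpern 2008 pressure raster
r_pctchg <- raster('saup_pct_chg_mol.tif')      # % change per SAUP region, rasterized

r_upd <- (1 + r_pctchg / 100) * r_fp            # apply (1 + percent change)
ref   <- cellStats(r_upd, 'max') * 1.1          # reference point: max plus 10%
r_rescaled <- r_upd / ref                       # values now in 0..1
```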
from bbest to halpern on Aug 8, 2013, subject: GL-NCEAS-Pressures_CommercialFisheries_v2013a

The updated commercial fishing pressure rasters are copying over now into:
from windows machine:
\\neptune\data\model\GL-NCEAS-Pressures_CommercialFisheries_v2013a\data
or on neptune:
/var/data/ohi/model/GL-NCEAS-Pressures_CommercialFisheries_v2013a/data
as either raw percent change applied (ie pixel values can range beyond 1) or rescaled (0 to 1):
fp_com_hb_dem_2013_raw.tif , fp_com_hb_dem_pctchg_rescaled.tif
fp_com_hb_dem_nd_2013_raw.tif, fp_com_hb_dem_nd_pctchg_rescaled.tif
fp_com_hb_pel_2013_raw.tif , fp_com_hb_pel_pctchg_rescaled.tif
fp_com_lb_dem_nd_2013_raw.tif , fp_com_lb_dem_nd_pctchg_rescaled.tif
fp_com_lb_pel_2013_raw.tif , fp_com_lb_pel_pctchg_rescaled.tif
Halpern et al (2008) SOM (p. 3): Commercial Fishing (5 types)

We identified 5 different categories of commercial fishing gear on the basis of whether or not the gear modifies habitat, if it incurs bycatch, and if it occurs in pelagic or benthic areas (S7). Since habitat-modifying fishing is by definition high-bycatch and habitat-modifying methods for pelagic fishing do not currently exist, five categories of fishing emerge: pelagic low-bycatch, pelagic high-bycatch, demersal habitat-modifying, demersal non-habitat-modifying low-bycatch, and demersal non-habitat-modifying high-bycatch (see Table S4 for lists of gear types in each category). We then used half-degree global commercial catch data developed by the Sea Around Us Project (S8) on the basis of data from FAO and other sources. This dataset provides gear type and species caught for all reported fish for the most recent 5 years (1999-2003), which was used to calculate average catch rate (tons per km2 per year) for each fishing category for each half-degree cell.
These catch-per-area values, per gear category, were then divided by average productivity (g Carbon/m2/yr) for each cell on the basis of productivity data derived from the Vertically Generalized Production Model (VGPM) (S9) to create an estimate of average catch rate standardized by productivity, under the assumption that higher catch rates in lower productivity regions of the ocean have a higher impact than similar catch rates in higher productivity regions. These catch rates were assumed to be uniform within half-degree cells and so all 1 km2 cells within the larger cell were assigned the productivity-adjusted catch rate for the half-degree cell. Final data for the commercial fishing layers were in units of tons of fish per tons of carbon in each 1 km2 cell.
Halpern et al (2012) SOM (p. 39): 7.11. Commercial fishing: high bycatch

Where used: Pressure for several goals
Description: This Pressure represents fish caught using high bycatch gear, which includes demersal destructive (e.g. trawl), demersal non-destructive high bycatch (e.g. pots, traps) and pelagic high bycatch (e.g. long-lines) gear. The species-gear associations are from Watson et al. [79]. Catch data come from 2006 and were spatialized by the Sea Around Us Project into 1/2 degree cell resolution [31]. We then summed these values into our EEZ reporting units. When cells spanned EEZ borders, we divided catch proportionally based on amount of area in each EEZ. Full details on the data that comprise this layer are provided in Halpern et al. [3].
7.12. Commercial fishing: low bycatch

Where used: Pressure for several goals
Description: This Pressure represents fish caught using low bycatch gear, which includes demersal non-destructive low bycatch (e.g. hook and line) and pelagic low bycatch (e.g. purse seines) gear. The species-gear associations are from Watson et al. [79]. Catch data come from 2006 and were spatialized by the Sea Around Us Project into 1/2 degree cell resolution [31]. We then summed these values into our EEZ reporting units. When cells spanned EEZ borders, we divided catch proportionally based on amount of area in each EEZ. Full details on the data that comprise this layer are provided in Halpern et al. [3].
Nature 2012

directory: model/GL-NCEAS-Pressures_v2012 (README.txt)
Source: GL-NCEAS-CleanWatersPressures, GL-NCEAS-Halpern2008, GL-NCEAS-Halpern2008_Distance, GL-NCEAS-Livelihoods, GL-NCEAS-Pressures_AlienInvasives, GL-NCEAS-Pressures_Intertidal, GL-UBC-Trujillo2008, GL-WorldBank-Governance, GL-WRI-ReefsAtRisk, GL-NCEAS-Pressures_SoftBottom
Pressures methods
- Load the raw stressor data, which contains a value per reporting region.
- Each raw stressor is rescaled, as needed, from its range into a score of 0..1.
- Each stressor belongs to a pressures category, and contains a single score (0..1) per OHI region.

Note that some stressors are the aggregate (mean) of several base stressors, which are denoted with _N in the name, e.g., $xyz = mean(xyz_1, xyz_2, xyz_3)$.
Based on setup.sh, I see that the fisheries pressures (fp_*) are from here:
model/GL-NCEAS-Halpern2008/data/zstats_masked_impacts
Source code snippets:
# Halpern 2008 data layers
p="$OHI_MODELDIR/GL-NCEAS-Halpern2008/data/zstats_masked_impacts"
linky "$p" dem_d hd_subtidal_sb
linky "$p" dem_d fp_com_hb_3
linky "$p" dem_nd_hb fp_com_hb_1
linky "$p" pel_hb fp_com_hb_2
linky "$p" dem_nd_lb fp_com_lb_1
linky "$p" pel_lb fp_com_lb_2
linky "$p" artisanal fp_art_lb
...
# Halpern 2008 data layers with clipped distances
p="$OHI_MODELDIR/GL-NCEAS-Halpern2008_Distance/data/zstats_masked_impacts"
linky "$p" artisanal_3nm fp_art_lb_3nm
# Reefs at Risk Revisited data layers
p="$OHI_STABLEDIR/GL-WRI-ReefsAtRisk/data/pressures_gl"
linky "$p" thr_poison hd_subtidal_hb_1
linky "$p" thr_blast hd_subtidal_hb_2
linky "$p" thr_blast fp_art_hb
Hi @Melsteroni,
To address your question, I should've supplemented my previous email with the extra file info below.
In summary, for the www 2013 assessment I multiplied the Halpern 2008 rasters (with VGPM adjustment) by the percent change in fisheries per SAUP region, and rescaled (min 0 to max 1) each fisheries pressure raster. Note that the 3 polygons without a ratio were given a 0: Bouvet Island (BVT, saup_id 74, of Norway) in the S Ocean, Cyprus_North (saup_id 197, of Cyprus) in the Med, and a negligible FAO sliver (saup_id 1037) in the Med.
Here are the principal inputs:
# fisheries pressure rasters
model/GL-NCEAS-Halpern2008/data/masked_impacts_*.tif
# SAUP regions and percent change
model/GL-SAUP-FisheriesCatchData_v2013/data/saup_fao_mol.shp
model/GL-SAUP-FisheriesCatchData_v2013/data/pct_chg_saup_2009to2011_vs_1999to2003.csv
BB
On Wed, Apr 16, 2014 at 10:07 AM, Melanie Frazier frazier@nceas.ucsb.edu wrote:

Hi Ben:
The data you indicate are for the HD softbottom analysis (i.e. pressure on softbottom habitat). I think that this metric should be based on catch data that hasn't been corrected for productivity because it is a better metric for evaluating the pressure on soft-bottom habitats. I think this one is settled! Yay!!!
In the latter discussions, we are talking about fisheries pressures. For these analyses, I used the following data:
For Commercial High Bycatch:
1. model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_hb_dem_2013_rescaled.tif
2. model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_hb_dem_nd_2013_rescaled.tif
3. model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_hb_pel_2013_rescaled.tif

For Commercial Low Bycatch:
1. model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_lb_dem_nd_2013_rescaled.tif
2. model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_lb_pel_2013_rescaled.tif
And for these data layers, I am not clear whether they are based on productivity-scaled fishery data.
(confusing...I know....)
Thanks Ben:
Mel
directory: model/GL-NCEAS-Pressures_CommercialFisheries_v2013a

setup.sh:
# Halpern 2008 data layers
pfx="$OHI_MODELDIR/GL-NCEAS-Halpern2008/data/masked_impacts_"
ln -sf "${pfx}dem_d.tif" tmp/fp_com_hb_dem.tif
ln -sf "${pfx}dem_nd_hb.tif" tmp/fp_com_hb_dem_nd.tif
ln -sf "${pfx}pel_hb.tif" tmp/fp_com_hb_pel.tif
ln -sf "${pfx}dem_nd_lb.tif" tmp/fp_com_lb_dem_nd.tif
ln -sf "${pfx}pel_lb.tif" tmp/fp_com_lb_pel.tif
# ocean mask
ln -sf $OHI_MODELDIR/GL-NCEAS-Landsea_v2013a/data/ocean.* tmp
# SAUP catch data
ln -sf $OHI_MODELDIR/GL-SAUP-FisheriesCatchData_v2013/data/pct_chg_saup_2009to2011_vs_1999to2003.csv tmp
ln -sf $OHI_MODELDIR/GL-SAUP-FisheriesCatchData_v2013/data/saup_fao_* tmp
# OHI regions
ln -sf $OHI_MODELDIR/GL-NCEAS-OceanRegions_v2013a/data/rgn_fao_mol.* tmp
model.py:

# vars
scratch = tempfile.mkdtemp(prefix='GL-NCEAS-Pressures_CommercialFisheries_v2013a', dir='D:/best/tmp') # scratch directory
wd = 'N:/model/GL-NCEAS-Pressures_CommercialFisheries_v2013a' # working directory
td = wd + '/tmp' # temporary directory
dd = wd + '/data' # data directory
md = wd + '/manual_output' # manual output directory
gdb = td + '/geodb.gdb' # file geodatabase
r_mol = td+'/ocean.tif' # raster reference
# projections
sr_mol = arcpy.SpatialReference('Mollweide (world)') # projected Mollweide (54009)
sr_gcs = arcpy.SpatialReference('WGS 1984') # geographic coordinate system WGS84 (4326)
# shapefiles don't have nulls, so use geodatabase
if not arcpy.Exists(gdb):
arcpy.CreateFileGDB_management(os.path.dirname(gdb), os.path.basename(gdb))
# workspace & scratch space
arcpy.env.workspace = gdb; os.chdir(gdb)
arcpy.env.scratchWorkspace=scratch
# copy into tmp geodb
pct_chg_csv = 'pct_chg_saup_2009to2011_vs_1999to2003.csv'
pct_chg_tbl = os.path.splitext(pct_chg_csv.replace('-','_'))[0]
arcpy.FeatureClassToGeodatabase_conversion(td+'/saup_fao_mol.shp', gdb)
arcpy.TableToGeodatabase_conversion(td + '/' + pct_chg_csv, gdb)
# join ratio to saup regions
flds_except = ['OBJECTID','Shape','AREA_KM2','Shape_Length','Shape_Area','id','id_type']
arcpy.JoinField_management('saup_fao_mol', 'id', pct_chg_tbl, 'id',
[f.name for f in arcpy.ListFields(pct_chg_tbl) if f.name not in flds_except])
# check for polygons without a ratio
arcpy.Select_analysis('saup_fao_mol', 'saup_fao_mol_missing_pct_chg', '"pct_chg" IS NULL')
# 3 polygons are without a ratio: Bouvet Island (BVT saup_id 74 of Norway) in S Ocean, Cyprus_North (saup_id 197 of Cyprus) in Med, and a negligible fao sliver saup_id 1037 in Med
# for these polygons, assign 0% change
arcpy.MakeFeatureLayer_management("saup_fao_mol",'lyr', '"pct_chg" IS NULL')
arcpy.CalculateField_management('lyr', 'pct_chg', '0', 'Python_9.3')
# convert saup pct change to raster
arcpy.RepairGeometry_management("saup_fao_mol")
arcpy.env.snapRaster = arcpy.env.cellSize = arcpy.env.outputCoordinateSystem = arcpy.env.extent = arcpy.env.mask = r_mol
arcpy.PolygonToRaster_conversion('saup_fao_mol', 'pct_chg', td+'/saup_pct_chg_mol.tif', 'MAXIMUM_COMBINED_AREA')
# helper function to rescale rasters per Halpern et al (2008)
def rescale_raster(r_in, r_out):
min = float(arcpy.GetRasterProperties_management(r_in, 'MINIMUM').getOutput(0))
max = float(arcpy.GetRasterProperties_management(r_in, 'MAXIMUM').getOutput(0))
print ' min: %g' % (min)
print ' max: %g' % (max)
r = (Raster(str(r_in)) - min) / (max - min)
r.save(r_out)
def raster_m1000int(r_in, r_out):
r = Int(Raster(r_in) * 1000)
r.save(r_out)
arcpy.BuildRasterAttributeTable_management(r_out)
# for each fisheries pressure raster, multiply by the percent change and rescale
r_pctchg = td+'/saup_pct_chg_mol.tif'
fps = ['fp_com_%s.tif' % s for s in ['hb_dem','hb_dem_nd','hb_pel','lb_dem_nd','lb_pel']]
for fp in fps:
print ' %s' % fp
b = os.path.splitext(fp)[0] # basename of fp without filename extension
r_fp = '%s/%s' % (td, fp) # full path to fisheries pressure raster
r_fp_int = '%s/%s_m1000int.tif' % (td,b) # multiplied by 1000 and made integer for VAT to read in R
r_fp_pctchg = '%s/%s_pctchg.tif' % (td,b) # percent change raster
r_fp_pctchg_int = '%s/%s_pctchg_m1000int.tif' % (td,b) # percent change raster, integer
r_fp_pctchg_rescaled = '%s/%s_pctchg_rescaled.tif' % (td,b) # linear rescaled fisheries raster after percent change applied
r_fp_pctchg_rescaled_int = '%s/%s_pctchg_rescaled_m1000int.tif' % (td,b) # linear rescaled fisheries raster after percent change applied, integer
print ' ' + os.path.basename(r_fp)
ZonalStatisticsAsTable(td+'/rgn_fao_mol.tif', 'VALUE', r_fp, '%s/%s_2012_rgn_mean.dbf' % (td, b), 'DATA', 'MEAN')
print ' ' + os.path.basename(r_fp_pctchg)
r = (1 + Raster(r_pctchg)/100) * Raster(r_fp)
r.save(r_fp_pctchg)
ZonalStatisticsAsTable(td+'/rgn_fao_mol.tif', 'VALUE', r_fp_pctchg, '%s/%s_pctchg_rgn_mean.dbf' % (td, b), 'DATA', 'MEAN')
Thanks Ben!
Mel, I guess this means you need to use the productivity-corrected data for the high seas, determine the % change, and then we can rescale to the max across all regions, including EEZs.
We'll need to use the same for AQ (but subtracting the catch assigned to southern ocean EEZs in the CCAMLR regions with doughnut holes).
Thanks again K.
If I understand, it sounds like the rasters I used to generate the fisheries pressures for HS and AQ (and that were used for the 2013 OHI) were: (1) scaled by productivity (i.e., the VGPM adjustment), (2) corrected for the change in fisheries catch between the 1999-2003 and 2009-2011 periods (for each SAUP region), and (3) rescaled to range from 0-1.
Is this correct? If this is the case, we are good to go! (thank god!)
Thanks for helping us hack our way through this one......
Mel
Hi @Melsteroni,
I believe so. But can you please confirm which rasters (exact path) you used? If you check in code to a sensible product in the ohiprep repository (and submit a pull request from melsteroni/ohiprep to ohi-science/ohiprep), then paths and process can be most easily checked.
Thanks, BB
Ok, the file is in ohiprep/Global/HS_AQ_Pressures_2013/DataSummary.R. I think I just added it to ohi-science/ohiprep directly. I believe the https://neptume.nceas.ucsb.edu/rstudio account is linked to ohi-science/ohiprep rather than melsteroni/ohiprep. This might be something I need to change?
Anyway, ignore most of what is in that R script; I am in the process of streamlining the code. But if you go to the sections on "Commercial high bycatch" and "Commercial low bycatch" you can see the paths to the raster files I used.
Thanks
Looking great Melanie! For the record, I see the following preferred paths in ohiprep: Global/HS_AQ_Pressures_2013/DataSummary.R
# Commercial High Bycatch
hb_dem <- raster("/var/data/ohi/model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_hb_dem_2013_rescaled.tif")
hb_dem_nd <- raster("/var/data/ohi/model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_hb_dem_nd_2013_rescaled.tif")
hb_pel <- raster("/var/data/ohi/model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_hb_pel_2013_rescaled.tif")
# Commercial Low Bycatch
lb_dem <- raster("/var/data/ohi/model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_lb_dem_nd_2013_rescaled.tif")
lb_pel <- raster("/var/data/ohi/model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_lb_pel_2013_rescaled.tif")
Your code highlights a few changes I'd like to make elsewhere on its dependencies:
Perhaps once you get it cleaned up, we can take a look together in a shared session, and meanwhile update the neptune RStudio to use melsteroni/ohiprep, not ohi-science/ohiprep, which will ensure that our code changes don't collide later.
There are a few tips I'd like to share for making the code more portable with paths. These are non-critical but will greatly help to make the code more usable across machines.
1. Set all paths at the top of the script to highlight dependencies.
2. Use path prefixes so code can be run on any host. We have a little script to help with this; its usage is shown below, and a sketch of what it might contain follows the example.
source('src/R/common.R') # get dir_neptune_data
sp_gcs_shp = file.path(dir_neptune_data, 'git-annex/Global/NCEAS-Regions_v2014/data/sp_gcs')
#...
regions_gcs <- readOGR(dsn=dirname(sp_gcs_shp), layer=basename(sp_gcs_shp))
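src/R/common.R itself isn't shown in this thread; purely as an illustration, a host-prefix script of this kind might look like the following (the Windows and Mac mount points below are assumptions, not the actual ones):
# sketch of a src/R/common.R-style script: set dir_neptune_data per host OS
dir_neptune_data <- switch(
  Sys.info()[['sysname']],
  Windows = 'N:/data/ohi',        # hypothetical Windows network mount
  Darwin  = '/Volumes/data/ohi',  # hypothetical Mac mount
  Linux   = '/var/data/ohi')      # path used on neptune elsewhere in this thread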
3. Don't change the working directory (setwd()), which will be local to whatever host. Instead, stay in the local github ohiprep folder by opening the ohiprep RStudio project. Then outputs intended for github can be put in local relative paths, and any intended for neptune can use the dir_neptune_data prefix, whether for model or git-annex. For example, reading in the SST raster on neptune data and writing the zonal means to the github repository:
# prefix paths relative to local ohiprep github repo
dir_gwd <- "Global/HS_AQ_Pressures_2013" # github working directory
dir_gdata <- file.path(dir_gwd, "data") # github data directory
#
# prefix paths on neptune
source('src/R/common.R') # get dir_neptune_data
dir_atmp <- file.path(dir_neptune_data, "git-annex", dir_gwd, "temp") # annex temp directory on neptune
dir_adata <- file.path(dir_neptune_data, "git-annex", dir_gwd, "data") # annex data directory on neptune
#
# data paths
sst_in_tif <- file.path(dir_neptune_data, "model/GL-NCEAS-Pressures_v2013a/tmp/sst_05_10-82_86i_mol.tif")
sst_out_tif <- file.path(dir_adata, "sst_rescaled.tif") # big raster to annex data
sst_out_csv <- file.path(dir_gdata, "SST_ZonalMean.csv") # smaller csv to github data
#
# create directories if not already present
dir.create(dir_gdata, showWarnings=F)
dir.create(dir_atmp, showWarnings=F)
dir.create(dir_adata, showWarnings=F)
#
# read in sst_tif
SST <- raster(sst_in_tif)
#
# write interim file to git-annex temp (path set on the fly since it's just interim)
writeRaster(SST, file.path(dir_atmp, "sst_tmp.tif"))
#
# write final big data file to annex data
writeRaster(SST, sst_out_tif)
#
# write smallish csv file to github data (sst_zonal: the data.frame of zonal
# means computed from SST and a zones raster, not shown here)
write.csv(sst_zonal, sst_out_csv, row.names=FALSE)
4. Loop repetitive tasks. This script seems to have a repeat pattern (raster, rasterize, zonal, write.csv) which could be iterated over using variables read in from a csv table having columns raster_in and csv_out; see the sketch below. (Would it be faster/easier to use raster's extract vs rasterize and zonal?)
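Purely as a sketch of that table-driven loop (the task csv, its column names, and the zones raster path are all hypothetical):
library(raster)

# hypothetical table of tasks with columns raster_in, csv_out
tasks <- read.csv('raster_tasks.csv', stringsAsFactors=FALSE)
zones <- raster('rgn_fao_mol.tif')  # hypothetical zones raster on the same grid as the inputs

for (i in seq_len(nrow(tasks))) {
  r <- raster(tasks$raster_in[i])
  z <- zonal(r, zones, fun='mean')  # mean value per region id
  write.csv(as.data.frame(z), tasks$csv_out[i], row.names=FALSE)
}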
I have been adding to that code bit by bit as I figure out/check each pressure, so it certainly isn't optimized in any way.
It really hasn't mattered at this point, because optimization isn't what is slowing the process down.
But yes, once these deadlines pass, I would like to get it better integrated with everything else! So thanks for the suggestions; it would be great to go over it at some point over Skype.
Rasterize and zonal are by far the fastest options. Extract works too, but much more slowly. One issue with the rasterize/zonal method that I have recently started thinking about is that it doesn't account for partial coverage of a raster cell by a polygon, which, as I understand it, is inconsistent with how it was done previously. This doesn't seem to make much of a difference at the level of the region summary, and besides, I consider this source of variation to be noise anyway, because the precise boundaries of the polygons are probably not all that meaningful.
I can account for partial raster cells using extract. So maybe I should convert everything to extract and accept that everything will take longer. Do you have a suggestion for this?
Thanks Ben!
Mel
Yeah, rasterize() does a simple point-in-polygon operation, so it should be fastest. For fine-resolution global 1 km rasters that's fine. For coarser rasters such as the 1/2 degree catch rasters of SAUP-FishCatchByGearType_Halpern2008, you'd probably want to use extract() with its small and/or weights arguments.
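To make the trade-off concrete, a minimal sketch of both approaches (the input file names and the rgn_id field are hypothetical):
library(raster)
library(rgdal)

catch   <- raster('catch_halfdeg.tif')   # hypothetical coarse 1/2 degree catch raster
regions <- readOGR('.', 'rgn_polygons')  # hypothetical region polygons with an rgn_id field

# fast: rasterize polygons (cell centers assigned point-in-polygon), then zonal means
zones <- rasterize(regions, catch, field='rgn_id')
z     <- zonal(catch, zones, fun='mean')

# slower but cell-fraction aware: extract with small and weights
v <- extract(catch, regions, small=TRUE, weights=TRUE)
# v is a list (one element per polygon) of matrices: cell value, coverage weight
wmean <- sapply(v, function(m) weighted.mean(m[, 1], m[, 2], na.rm=TRUE))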
Your process makes good sense. I'm fine with putting off these finer points, including code cleanup, until after the deadlines. We can simply leave this issue open and assigned to you, or create a new issue specifically highlighting updates to do later.
That would be great!
Yes, for the gear type data for the soft-bottom habitat pressure I did use extract() with weights for that reason.
Thanks Ben
Hi @Melsteroni,
So the files you used:
/var/data/ohi/model/GL-NCEAS-Pressures_v2013a/tmp/fp_com_*_2013_rescaled.tif
should be swapped from the tmp folder of the secondary product GL-NCEAS-Pressures_v2013a to the symbolically linked (for how to determine this, see the ohiprep wiki: Linux Commands) authoritative data subfolder of the product GL-NCEAS-Pressures_CommercialFisheries_v2013a, which contains the README and scripts describing their creation:
/var/data/ohi/model/GL-NCEAS-Pressures_CommercialFisheries_v2013a/data/fp_com_*_2013_rescaled.tif
I confirmed that the files you used are truly the VGPM Halpern 2008 rasters with the percent change per SAUP (1999 to 2003 vs 2009 to 2011) applied and rescaled 0 to 1. This was confusing because the model.py file only outputs to the tmp folder (not data), and without any corresponding _rescaled.tif filename outputs. I checked raster values and confirmed that data/fp_com_*_2013_rescaled.tif is identical to tmp/fp_com_*_pctchg_rescaled.tif.
How I checked:
# data
gdalinfo data/fp_com_hb_dem_2013_rescaled.tif
# tmp
for f in $(ls tmp/fp_com_hb_dem_*.tif); do
  printf "\n\n##########\n$f\n"
  gdalinfo $f
done
Result extracts:
data/fp_com_hb_dem_2013_rescaled.tif
STATISTICS_COVARIANCES=1.93113588022672E-03
STATISTICS_MAXIMUM=1.0000000116222
STATISTICS_MEAN=0.012463659862586
STATISTICS_MINIMUM=0
STATISTICS_STDDEV=0.04394469114952
tmp/fp_com_hb_dem_pctchg_rescaled.tif
STATISTICS_COVARIANCES=1.93113588022672E-03
STATISTICS_MAXIMUM=1.0000000116222
STATISTICS_MEAN=0.012463659862586
STATISTICS_MINIMUM=0
STATISTICS_STDDEV=0.04394469114952
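For what it's worth, the same identity check can be done in R with compareRaster (a sketch; paths relative to the product folder as above):
library(raster)
a <- raster('data/fp_com_hb_dem_2013_rescaled.tif')
b <- raster('tmp/fp_com_hb_dem_pctchg_rescaled.tif')
compareRaster(a, b, values=TRUE, stopiffalse=FALSE)  # TRUE only if grids and all cell values match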
Thanks Ben! I appreciate you taking the time to look into this!
Hi @katlongo, @Melsteroni,
Closing this issue but please let me know if there's anything else you need.
BB
Hi Melanie and Katie,
aka @Melsteroni and @katlongo (to automatically track any further updates on this autocreated issue)
Below is the README.md for the copied-over product SAUP-FishCatchByGearType_Halpern2008, with sensible names by gear type code and an area raster. Rasters here:
As I previously Skyped, you can get the soft bottom habitats here:
Note that for the EEZ analysis we seem to have used only Sub-tidal Soft Bottom + Soft Shelf (based on model/GL-NCEAS-Habitats/import1.R), not Soft Slope, which is probably worthwhile for the high seas.
BB
Sea Around Us Project - Fisheries Catch Rate by Gear Type
These rasters are copies of the original fisheries rasters from Halpern et al (2008) in the format:
Files are to be found at:
Product
An area raster (in km2) is given for each product.
Gear Types
Projection
Both sets of products are in a geographic coordinate system (GCS), but with different dimensions.
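Since these products are in GCS while several of the pressure layers above are in Mollweide, a reprojection step may be needed before combining them; a minimal sketch (the file name is hypothetical):
library(raster)
r_gcs <- raster('saup_gear_rate_gcs.tif')  # hypothetical GCS catch rate raster
mol   <- '+proj=moll +lon_0=0 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs'
r_mol <- projectRaster(r_gcs, crs=mol, method='bilinear')  # bilinear for continuous data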
Other
For more details, see the supplement of Halpern et al (2008) and inspect the paths in the R script.
On Tue, Apr 8, 2014 at 4:05 PM, Katie Longo wrote:
> Hello folks,
>
> just a reminder for tomorrow's 1:30pm meeting:
>
> Topic: how to calculate soft bottom for high seas?
>
> Background: calculating soft bottom status (Biodiversity - Habitats) requires two pieces of information: 1) a global soft bottom map; 2) catch per gear type in the HS. We have dataset 1; we're missing dataset 2.
> What we have: catch per gear type for all high seas, not broken down by FAO-HS region (in hand); catch per FAO-HS region, not broken down by gear type (in hand); raw catch data per gear type per half degree cell up to 2004 (to be located) OR a map of fishing pressure scores from the cumulative impacts map.
> We need to figure out: where is the raw catch data by half degree cell? Is it per gear type? If it is, summarize catch per bottom trawling gear per HS region; if it isn't, can we use cumulative impact scores to infer soft bottom pressures for HS?
>
> Talk to you tomorrow!
> K.
>
> Catherine Longo, PhD
> Project scientist, Marine Science Institute (UCSB)
> National Center for Ecological Analysis and Synthesis (NCEAS)
> 735 State Street, Suite 300
> Santa Barbara, CA 93101
> Telephone: (805) 882-9218
> Email: longo@nceas.ucsb.edu
> Web: http://ift.tt/R41cmd