Height of the cone is a column in the efficiency table created by SQL. So, cone status is being pulled from the database. Question is then what the R code does with this information and whether it's correct.
Current Task 2.1 in Scope of Work 2. Currently under investigation.
Revisit, and as needed, revise the R analyses in the production reports so they properly account for half cone operations that may affect the juvenile Chinook salmon production estimates and raw catch.
In checking out the production sequence for an American River run, it appears that the variables halfConeID and HalfCone are present in dataframe catch.df at the time of model estimation in programs catch.model and est.catch, around line 71 of program est_passage. Currently, although present, these variables do not appear to be used for anything.
In this particular case, all 440 records in catch.df have halfConeID == 2 and HalfCone == "No". In other words, this is a bit of a boring example, but I can see in the databases that this isn't always the case.
Some questions for whenever we have a call about this:
Let me know if you want to still try and have a call next week.
Thank you for tackling this issue Jason. I'm afraid it will be tricky. I think this was never fully resolved because up until recently we didn't have any good example data for your testing.
You may like to use the Clear and Battle Creek database (on the Admin_ftp) for testing because it has data for half-cone configuration. Krista can also post a database with half-cone data to the ftp. For example you can run analysis for the following:
Battle Creek upper RST 9/1/2007-8/1/2008. Clear Creek lower RST 9/1/2008 to 8/1/2008.
HalfConeID: The CAMP.mdb halfconeID field has a lookup code of 1 = yes (half-cone was used) or 2 = no (no half-cone; trap fully deployed). I've included both in the output table TempSumUnmarkedByTrap_Run_Final because you might prefer to use the ID field during coding and the description for reports.
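For reference, the lookup described above can be sketched as a simple mapping. This is illustrative only; the dictionary and function names are hypothetical, not the actual CAMP schema.

```python
# Illustrative mapping of the CAMP.mdb halfconeID lookup codes described
# above; names here are hypothetical, not the actual schema.
HALF_CONE_LOOKUP = {
    1: "Yes",  # half-cone was used
    2: "No",   # no half-cone; trap fully deployed
}

def half_cone_description(half_cone_id):
    """Translate the numeric lookup code into its report description."""
    if half_cone_id not in HALF_CONE_LOOKUP:
        raise ValueError(f"Unknown halfconeID: {half_cone_id}")
    return HALF_CONE_LOOKUP[half_cone_id]
```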
Half-cone configuration: In the half-cone configuration the trap isn't necessarily raised, but half of the trap is set to allow fish to bypass the live-box and exit the trap. This effectively reduces the number of fish caught by 50%. There is a calendar report in the QC application that presents the cone status for easy viewing. You may find it helpful, as it will make it easier to figure out when the half-cone was applied.
Using the half-cone configuration has ramifications for the total number of fish caught as well as the trap efficiency tests. Below are the issues as I see them.
Can an efficiency developed during half-cone operations be used without modification to expand the catch on days when the half-cone configuration was used?
Can an efficiency developed during full-cone operations be used without modification to expand the catch on days when the half-cone configuration was used?
Should there be two pools of efficiency tests, one for half-cone and one for full-cone trapping?
Should the efficiency be modified if the test occurred during half-cone operations so that it can be applied to any type of trapping?
If the efficiency is modified as above should the catch also be doubled during half-cone trapping?
If the cone configuration is changed during an active efficiency test, can it still be used? Recall that table TempRelRecap_Final provides the total number of fish released, the cone configuration, and the number of recaptures by trap and sampling time for each record.
Cone depth, thalweg, and trap function fields: These fields are present in the database but I don't think they are intended to be used in analysis. These fields are provided so that the biologists can make informed decisions about whether to include or exclude a catch sample or even an efficiency test. This is why these fields are not included in the summary TempSumUnmarkedByTrap_Run_Final or TempRelRecap_Final. Only the IncludeCatch and IncludeTest fields are included. This is something we can discuss more next Thursday with Doug in case he has a different view. You and Trent may also see things differently.
Note: Use of the Thalweg and cone-depth fields is optional so it is often null. At times the crews forget to input data for how the trap is functioning, I suspect this happens most often when the trap is running well.
Something I didn’t mention the other day that might play into the discussion about how to deal with half-cone use is the issue of lengthy gaps in the data.
The current R application faults when a gap in sampling is lengthy. I’m wondering if parsing the data by cone configuration (and hopefully gear type) will aggravate that issue. It is common for the half-cone configuration to be used for only a part of a season.
I agree, the more we parse the data, the more likely there will be occasions when the requisite data (e.g., trap efficiency data) for developing production estimates won’t be available.
D
Okay, I will add this to the Issue, which I'll be writing up later today.
On 12/10/2015, Doug, Connie, Jason Julian, Krista, and Jason Mitchell had a call to discuss what to do about half-cone operations. I thought I would write out what we discussed in detail, so as to make sure I understand what needs to happen. Provided this is all accurate, we can then summarize it a bit, and send off to the biologists for input.
In the conversation, I confirmed that Connie's query retains the cone status for each trap fishing period up to the point where the Platform begins to spline together the catch. However, as far as I can tell, nothing is then done with this cone information.
Doug confirmed that this is an issue, and organized three separate options on how to deal with it.
OPTION 1
Generally speaking, cones are either fully in the water ("full-cone") or halfway out of the water ("half-cone"). In reality, it's recognized that cones can sometimes sit at a variety of positions in and out of the water, but in general, it's either in or it's out. When half-cone, only half as much water flows through the trap as in the full-cone configuration. In theory, then, half as many fish are trapped in the half-cone configuration. So, for those trap-specific fishing periods where a half-cone operation was recorded, the collected count of fish would be multiplied by 2. This would ensure that all fishing periods are at least scaled appropriately.
I confirmed that such an adjustment should be relatively straightforward to program. At the appropriate spot in the code, the sum of the assigned and unassigned fish would be multiplied by two. This number, the "halfConeAdj", would then occupy a new column in the baseTable output currently produced via a production run.
Example: Suppose a query on the American River at Trap 8.1 on some particular day caught 1,452 fish. For ease of illustration, suppose this trap ran for the 24 hours from midnight to midnight. Of these 1,452 fish, 100 were pulled out, measured, and assigned. This means 1,352 would go through the plus-count algorithm, with that 1,352 fish divvied up between runs and lifeStages, as appropriate. On this day, however, suppose further that Connie's half-cone flag was a 1. So, an additional 1,452 fish, for this day, would be recorded in the new halfConeAdj column of the baseTable. To differentiate "caught" fish versus "seen+imputed (but not expanded by efficiency)" fish, the current totalCatch would be renamed to, say, totalEst. In this way, the appellation "totalCatch" would then apply to the number of truly "caught" fish. I have in my notes that "changing existing field names not good." We could always not insert a subtotal of caught fish, so as to preserve the totalCatch field name as it currently is.
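The doubling in this example can be sketched as follows. This is a minimal illustration; the column name halfConeAdj comes from the discussion above, but the function name is hypothetical.

```python
def half_cone_adj(total_catch, half_cone):
    """Extra fish credited when the trap fished at half cone.

    Doubling the half-cone catch is the same as adding the raw catch
    once more, so the adjustment equals the raw catch on half-cone days
    and is zero on full-cone days.
    """
    return total_catch if half_cone else 0

raw_catch = 1452                                  # fish caught at Trap 8.1 that day
adj = half_cone_adj(raw_catch, half_cone=True)    # Connie's half-cone flag == 1
total_est = raw_catch + adj                       # "seen" fish after the times-2 update
```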
So, the Excel baseTables would look something like the below. I didn't spend too much time checking the numbers in this very fake example, but I intended it to respect the accounting, where a zero in halfConeAdj implies the use of a full cone; i.e., no adjustment is necessary. Am I thinking of this correctly?
In theory, this option, by itself, should be relatively straightforward to code. Provided there are no surprises, I think 2-4 hours is a reasonable median timeframe to insert this update.
OPTION 2
This option attempts to resolve the cone issue via the partitioning of data. In much the same way the code loops over runs, the code would be set up to loop over cone type as well. So, another loop would be constructed inside of the run loop (and lifeClass loop, where necessary?) to first estimate passage for all full-cone operations, while another second loop would estimate for half-cone operations.
As Doug indicated on the call, this is a factorial design, and would increase the amount of output/runs of the data. At most, for 3 life stages, 4 runs, and 2 cone settings, we would create analyses over 24 distinct data partitions. (If we did this for gear type as well, we would end up with at most 48 partitions, assuming 5- and 8-[meter?] gear sizes are the only two options.)
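The partition counts above are easy to verify. A sketch with placeholder labels (the life-stage and gear names are not from the source):

```python
from itertools import product

life_stages = ["stage1", "stage2", "stage3"]     # placeholder labels
runs = ["Fall", "Late Fall", "Winter", "Spring"]
cones = ["full", "half"]
gears = ["gearA", "gearB"]                        # e.g., the two gear sizes

# Factorial design: every combination is its own data partition.
partitions = list(product(life_stages, runs, cones))
with_gear = list(product(life_stages, runs, cones, gears))
```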
This is a reasonable option, although in thinking about it, I'm not much of a fan. The partitioning is awkward. Further, although I know Trent originally adapted the code to loop over runs as an easy and efficient way to generalize the code, I'm not certain he meant it to be run in this way over more than run and lifeStage. Additionally, splines for, say, half-cone operations alone may be separated by several days' worth of missing data, where those missing days were at full-cone operations. This means that a spline will "connect the dots" from the end of a first half-cone measurement period to the start of the next half-cone measurement period, with no consideration of what happens in between. I worry this could lead to worse estimates in some cases than the Option 1 approach, or than doing nothing at all. It would be good if Trent could confirm this suspected behavior.
Practically, I don't believe the looping part of the code would be very difficult to implement. One thing we would probably have to code for is what happens when only a few traps' worth of half-cone (or full-cone) operations make it into a particular loop. I suspect that in data sets with a particularly poor set of data consisting of only a few points, an intercept-only model will be fit. This is the logic that results in boring horizontal lines on the efficiency PNGs when only a few data points are present.
More concerning to me is how to put the data results back together again. Do we just want to spit out the results, or do we want to combine the full- and half-cone numbers, and then get estimates and confidence intervals for their combination as well? For example, when considering run, we output a final CSV that gives individual run estimates and intervals, but we never combine the four runs into one overarching number. Given an estimate and confidence interval for full- and half-cone operations, would we want to combine the two and get a final estimate?
For concreteness: we currently get a fall, late fall, winter, and spring estimate. Do we want just the eight estimates fall/half, fall/full, late fall/half, late fall/full, winter/half, winter/full, spring/half, spring/full, or do we want all eight, plus the additional four obtained by combining the half and full estimates for each run?
Also, we've had trouble before when running an individual run produces a different number than running all runs together. We know why these differences happen, but it's created a headache in the past. Do we foresee anything like this happening with the partitioning of the cones described by Option 2?
This option is harder than Option 1. I think after 4 hours I would have an idea of how well it's coming together, at which point I'd have a better understanding of the additional time needed to get it up to speed and ready to go. I know that's not very precise...this is definitely more time-consuming than Option 1, however.
We also discussed a variation of this Option 2. Another option is to load, perhaps in that river's database, multi-year polynomial weights to use for efficiency, where one set of weights would be used for full-cone operations and another set for half-cone operations. These weights would come from fitting splines to several years' worth of data. I may not be describing this suboption with 100% accuracy; it seemed to quickly lose favor in the discussion. We never established whether WEST would estimate these betas, whether they would need to be updated annually, etc.
OPTION 3
This option assumes that data, as collected at the RBDD, are available. This option would calculate Q-flow, take advantage of how far the trap was in the water, etc., in order to adjust for cone operations directly. In this instance, dealing explicitly with half-cone versus full-cone operations would be moot. While attractive statistically, it does nothing to remedy historical data in which catch was recorded but the required extraneous variables were not. So, proceeding in this way, without doing anything else, would leave passage estimates for some rivers through 2015 inaccurate.
OPTIONS 1 & 2
Another possible solution involves the coding of both Options 1 and 2, since it seems that different rivers would do better with different solutions. This is not very efficient from a programming standpoint, but may be necessary to accommodate the various ways in which different rivers collect their data.
I mention gaps in fishing above; this is exactly in line with Connie's thinking per her email from earlier today, 12/14/2015. These gaps would be exacerbated by half-cone operations, which I imagine sometimes vary a bit over days within a trap and run. I don't think it would be much of an issue for gear type -- it seems that once a gear is switched, it doesn't go back. But that's probably not set in stone.
I think my hourly rate is $100, so hourly cost estimates could use this nice round estimator.
Action Doug, Jason, and Trent to discuss the options and implement one.
(from Doug)
Jason:
Krista and I have been trading emails with some simple modeling efforts to see how we could deal with the half cones using Option 1 in your email. She and I are coming up with similar results, i.e., the collection of catch data with half cones does not alter the production estimates when half-cone trap efficiencies are used to expand those reduced catches: there is half the efficiency and half the catch, so the production turns out to be the same when the catch is divided by the efficiency. This is not what I anticipated, but it mathematically appears to be the case when a GAM spline is not considered. I still have a nagging suspicion something is not right, however.
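The algebra here can be checked directly: with a simple catch-over-efficiency expansion, halving both the catch and the efficiency cancels out. A sketch with made-up numbers:

```python
def production_estimate(catch, efficiency):
    """Simple expansion of catch by the reciprocal of trap efficiency."""
    return catch / efficiency

# Made-up numbers: a half cone halves both the catch and the measured
# efficiency, so the expanded production estimate is unchanged.
full_cone = production_estimate(catch=1000, efficiency=0.05)
half_cone = production_estimate(catch=500, efficiency=0.025)
```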
I am now curious to know how the GAM splines would deal with half cones since those are what are used to expand the catch and get the production estimates. I want to run a test. I looked at the 11 trap efficiencies for 2015 from the American River. I then randomly assigned a half cone status to each of the 11 tests, and then cut the recaps in half for the simulated tests with a half cone.
I am not sure how the CAMP Platform's trap efficiency model assigns a date to each trap efficiency test. Does the R code look at the recaps for each test and assign the date based on (A) the date of the last recap, or (B) the release date-time (e.g., January 1 at 5:00 PM) plus the number of test days associated with the test period (e.g., a release with a test period of 2 days (48 hours) would be assigned January 3 at 5:00 PM)?
I am trying to assess how a variable cone status affects the passage estimates based on the catches and GAM trap efficiency splines. See the “8.1 all dates E test summary” workbook in the spreadsheet attached to this email. Please determine if the R code uses approach A or B above. Then fit the same kind of GAM that is in the RST platform’s Efficiency PNG file to the data in Column C or I depending on which date R uses to assign a date to each E test. Then output a CSV that provides the date-specific estimated efficiencies along the spline for the entire season like you did in the platform, and send me that CSV file. I will then take those date-specific efficiencies based on a spline and apply them to some American River total catch data that has been randomly assigned a full or half cone assignment (and halved catches as appropriate) to see how the modeled half cones affect the estimated passage estimates vs. a case where things run in a full cone operation with a normal spline all season long.
Thanks. simulated half cones.xlsx
Okay. I think you just want to see how the catch is affected when tinkering with the efficiency in different ways. You can't break into the code however, so it's difficult for you to do this.
The answer to your question is (C) None of the above.
I have updated your "simulated half cones.xlsx" Excel workbook. On sheet '8.1 all dates E test summary,' I added a third block of green columns, indicating where the Platform is assigning efficiency dates. I think it's straightforward to follow / infer; note that the dates all line up row-wise. I also get 11 distinct efficiency trials. I also get the same "actual results" you report prior to manipulating the observed efficiencies. In other words, your "actual results" equal my nCaught / nReleased = "actual results." I just put them in different dates.
As for the dates and your question, the Platform appears to be assigning a mean recapture date, based on the distribution of when fish are caught. In your sheet '8.1 E test summary' I have added two columns, in green, with the date and time the Platform is using to assign the release date. In all cases, note that the Platform Date is temporally after what you call Date.
Generally, the code is looking at the distribution of found fish after the release date, and then taking the average day and time, based on that temporal distribution. Based on the data in this sheet, it's clear that most test fish appear (for this river and its trials) at a trap the day after being released. This tendency manifests in the Platform Date being, on average, the day after the release Date. So, that's how the Platform is assigning the efficiency date. Note that I preceded this paragraph with "generally," as I haven't followed data explicitly through the code sequence where this occurs. I can, however, see that the code is clearly calculating an average time, and is commented as such, so I feel comfortable with this answer for now.
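That averaging can be sketched as a count-weighted mean of the recapture timestamps. This is illustrative only; as noted above, I have not traced the exact R implementation, and the data below are made up.

```python
from datetime import datetime, timedelta

def mean_recapture_time(recaptures):
    """Count-weighted mean of recapture timestamps.

    `recaptures` is a list of (timestamp, n_fish) pairs for one release.
    """
    total_fish = sum(n for _, n in recaptures)
    epoch = min(t for t, _ in recaptures)
    weighted_secs = sum((t - epoch).total_seconds() * n
                        for t, n in recaptures)
    return epoch + timedelta(seconds=weighted_secs / total_fish)

# Made-up trial: most fish show up the day after release, pulling the
# assigned efficiency date past the release date.
recaps = [(datetime(2015, 1, 2, 6, 0), 30),
          (datetime(2015, 1, 3, 6, 0), 10)]
```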
Let me know if you want me to dig further, etc. I suspect we'll talk about this on the call shortly.
simulated half cones Jason.xlsx
I have since updated the passage code to do the "times 2" update. This means that on trapping instances for which half-cone operations were in play, the number of caught fish was multiplied by 2. Catch originating from trapping instances for which full-cone operations were in play were not changed in any way.
In order to investigate what the "times 2" update does to the underlying estimates of passage, I took a look at the RBDD / Sacramento River, using a timeframe from 10/1/2012 through 9/30/2013. This timeframe corresponds to what we use in the Big Looper code for the Spring run on this river. For checking, however, I will include / compare results for all four runs. The spline plots clearly demonstrate this choice of dates is not great for some of the other non-Spring runs, but it works to illustrate what I've done, and what we could examine.
Recall that if the "times 2" adjustment is to have an effect on passage estimates, at least one trapping instance must have a half-cone operation. This time period on the RBDD has several of these instances. However, not all days were half-cone operations. So, the new passage estimate, when comparing the after (yes "times 2" adjustment) to the before (no "times 2" adjustment), should be at least 1.0 times, but less than 2.0 times, the original number.
So, I ran passage for both the before and after, for the river and time frame described above. Here are the estimates of passage, for each run:
Run          Before        After         Ratio (After / Before)
Spring          232,585       290,561    1.25
Fall         18,484,656    33,681,706    1.82
Late Fall        67,176        72,552    1.08
Winter        1,545,384     1,656,524    1.07
---------------------------------------------------------------
Total        20,329,801    35,701,343    1.76
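The ratios and totals in this table check out arithmetically:

```python
# Passage estimates from the table above.
before = {"Spring": 232_585, "Fall": 18_484_656,
          "Late Fall": 67_176, "Winter": 1_545_384}
after = {"Spring": 290_561, "Fall": 33_681_706,
         "Late Fall": 72_552, "Winter": 1_656_524}

ratios = {run: round(after[run] / before[run], 2) for run in before}
total_before = sum(before.values())   # 20,329,801
total_after = sum(after.values())     # 35,701,343
```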
Note that the Ratio fluctuates quite a lot between 1.0 and 2.0 over the different runs. Given that these numbers derive from the same traps, all operating at the same time, it seems that the "times 2" adjustment has a bigger impact on bigger catches. This makes sense to me -- by doubling catch, we're allowing the upper bound of catch to increase (by up to a factor of 2), but we're keeping the lower bound at zero. For now, let's just note this. Also, the doubling of half-cones should affect some runs more than others, since each run probably sees its spike of caught fish at different points of the year. These may or may not coincide with when a half-cone operation was in process.
Now, the varying ratios suggest that taking a look at the plots may be of some use. Keep in mind that the spline model operates on the level of a trap. For this year, four traps were operating on this river: Gates 3, 6, 7, and 8. It looks like the Winter run passage estimates are fairly consistent before and after, even though the number of associated passage fish is relatively high.
Taking a look at the splines for Gate 3, Winter run, before and after reveals similar splines before (top) and after (bottom). For now, the splines plot their data based on the data fed up to that point in the code. So, even though the two plots look highly similar, be sure to always orient with respect to each plot's y-axis -- note how the scaling of the "AFTER" plot is roughly two times that of the original.
BEFORE "times 2" Adjustment
AFTER "times 2" Adjustment
These plots quickly demonstrate that the traps operating at half-cone had basically no effect on Winter run fish. This is because all the red points more or less coincide with days where the catch was essentially zero. So, the "times 2" adjustment failed to make much of a change for the Winter run (for this trap). Similar behavior (I highly suspect) would be seen if you were to compare the before and after for each of the other three traps for this run. This helps explain why the "times 2" adjustment didn't really change the Winter run estimate -- there weren't really any Winter fish caught between Jan 1 - Feb 15, and so the "times 2" acted on numbers very close to zero.
[An aside as I review this post. Note the red dot around 300 fish here. It's clear that this number has not been doubled when comparing the "BEFORE" and "AFTER". This leads me to believe that 1. something is wrong; 2. the plus-count procedure acts differently based on the number of fish fed to it. Given that my numbers (so far) match those reported by the baseTable, it may be that the plus-count algorithm responds (correctly) based on the numbers fed to it, and so differs in divvying up fish to Winter, for before vs. after; or 3. the number of trapping instances on this day is greater than one, with only one of them half-cone and the rest full-cone. The half-cone fish on this day would be doubled ONLY, for that trapping instance ONLY -- the full-cone counts would stay as originally recorded; this would make the AFTER fish catch something less than 2.0 times the BEFORE. Communicating the number of trapping instances per displayed point would be a straightforward update and improvement to these plots.]
The preceding argument, as applied to Winter, suggests that a different run, with actual fish caught in the period Jan 1- Feb 15 or so, should see its passage estimates change markedly. Based on the small table above, investigation of Fall run passage estimates is warranted.
Next, I turn to the investigation of the Fall run. Although very similar in functional form, note that the "AFTER" plot below has a different y-axis. Note also that, as expected based on the small table above, there were tons of Fall run fish caught in the Jan 1 - Feb 15 period. These of course are subsequently expanded by the reciprocal of the efficiency, leading to the large blow-up of passage for the Fall run.
It was mentioned to me before that half-cone operations are often put in place to deal with issues having to do with Winter and Late Fall fish; however, I find it interesting here that this seems to impact the estimation of Fall passage the most.
[Note in this pair of plots how the biggest red dot does appear to be doubled. So, I suspect that all trapping instances on this day were half-cone.]
BEFORE "times 2" Adjustment
AFTER "times 2" Adjustment
Finally, note that the "times 2" update changes the behavior of the underlying spline globally for Gate 6, Spring run.
BEFORE "times 2" Adjustment
AFTER "times 2" Adjustment
I think these plots are incredibly useful, and we will get a lot of mileage out of them. A product based on these plots would be a mechanism that puts two splines together on the same plot, so their y-axes agree. But, that is an update for later perhaps.
I took a quick look at the catch on 11/18/2012, Gate 3, Winter run. It turns out there were two trapping instances on that day. The first, from 9:41 am to 12:08 pm, was a full-cone operation that caught 216 fish. The second was a half-cone operation from 4:35 pm to 8:59 pm, and caught 67 fish. Note that 216 + 2*67 = 216 + 134 = 350 fish. These numbers (216 + 67 = 283 [the BEFORE value] and 350 [the AFTER value] ) make sense with what is displayed on the plot(s) for 11/18/2012.
Note also that since there is a gap in fishing, there is some imputation involved. This is visible on the graph.
Jason:
Thanks for your efforts relating to R code revisions that address half-cone operations. Our conversation and your modeling exercises yesterday suggest you have successfully developed R code that doubles the catch during half-cone operations.
The last version of the daily base CSV files I saw that pertained to half cones is reflected in the screen shot below.
Please revise the catch columns in the table header of the daily base CSV files so they include the following fields:
The first 4 fields should be calculated independently, and their sum should equal the totalEstimatedCatch. Like you did once before, you may want to do some testing to ensure the totalEstimatedCatch really is the sum of the 4 other fields.
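A minimal sketch of that consistency check follows. The four component field names below are illustrative guesses; the actual daily base CSV headers are not listed in this thread and may differ.

```python
# Hypothetical field names for the four component columns; the actual
# daily base CSV headers may differ.
COMPONENTS = ["assignedCatch", "unassignedCatch", "halfConeAdj", "imputedCatch"]

def row_is_consistent(row):
    """Check that the component fields sum to totalEstimatedCatch."""
    return sum(row[c] for c in COMPONENTS) == row["totalEstimatedCatch"]

good = {"assignedCatch": 100, "unassignedCatch": 1352,
        "halfConeAdj": 1452, "imputedCatch": 0,
        "totalEstimatedCatch": 2904}
```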
I suggest dropping the totalCatch column from the daily base file because I doubt the biologists would use it.
I think this will wrap up the work on task 2.1. Thanks for your persistence on this task – I know it has been a lot of work relative to the other tasks.
Doug
Doug please correct me if I'm wrong but you didn't want Jason to remove any other field besides the Total Catch field. Correct? This means the first few columns will read as you indicate above but there will be additional columns for proportion imputed catch, efficiency, etc.
Connie:
The only field Jason would delete from the daily base file as he wraps up work on the half cones is the “totalCatch” field. He would rename the “totalEst” field to “totalEstimatedCatch”. All the other fields would stay the same, and the order of the fields in the table would remain the same.
Doug
I updated the output of the baseTables so that halfCone adjustments fall into their own column, halfConeAdj. To do this, I keep track of both halfCone and fullCone fish-counts throughout the estimation procedure. I then sum up these two quantities at the very end, so as to create the halfConeAdj column. This is more or less in line with the same process utilized to break apart assigned versus unassigned fish -- they are tabulated early on, and then pushed along for the ride.
Here is a quick snapshot of the csv output with the halfConeAdj column.
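The tallying can be sketched like so. The column names follow the thread's baseTable discussion, but the function itself is hypothetical; the numbers reuse the Gate 3, 11/18/2012 check from earlier (216 full-cone fish plus 67 half-cone fish).

```python
def base_table_counts(instances):
    """Tabulate half- and full-cone catch per trapping instance.

    `instances` is a list of (catch, is_half_cone) pairs. The half-cone
    total, carried along through the estimation, becomes the halfConeAdj
    column, since doubling adds the half-cone catch exactly once more.
    """
    half = sum(c for c, is_half in instances if is_half)
    full = sum(c for c, is_half in instances if not is_half)
    return {"totalCatch": half + full,   # fish actually seen
            "halfConeAdj": half,         # the times-2 addition
            "totalEst": full + 2 * half}

# Gate 3 on 11/18/2012: a full-cone instance (216 fish) and a
# half-cone instance (67 fish).
day = base_table_counts([(216, False), (67, True)])
```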
To test this out, I used the RBDD, 2012-10-01 through 2013-09-30 ALL Runs report. I then checked to make sure that the numbers in the assignedCatch and unassignedCatch columns, in the new output, summed to the same numbers as before. Now, however, the halfCone counts are tabulated in their own column. So, at long last, we've accounted for half-cone operations.
I also updated the lifeStage report to break out the baseTables via the new column halfConeAdj.
Finally, while I was able to get numbers that agreed with what I was getting from the ALL Runs report from before, I suspect that the Big Looper may find an area which may require some further enhancement. Generally, however, this Issue looks to be done.
Trent and Jason reviewed this issue, and believe it is complete.
Some sites raise the cone to fish at half its normal depth. The question is: are the associated counts correctly inflated? There are two parts to this question. Are efficiency trials conducted under half-cone conditions correctly accounted for? And are the regular counts obtained under half-cone conditions correctly inflated?