Open · NPJuncal opened 9 months ago
It works with example data. Enhancement 1 has been implemented.
tasks:
- there are two input dataframes; right now it only checks the columns and lets the user provide the column names for one of them. It needs to allow modifying the field-measurements dataframe and then transform the names so it can work as an input to the function estimate_compaction
- the df_fm is optional; the function cannot break if it is not provided
- write documentation
- write tests
enhancement 2: allow the user to provide a column with (compressed) dry bulk density and estimate the decompressed dry bulk density at the same time
Enhancement 2 was addressed. I made several changes that I think make the function more readable. I'll write some tests when I have the time; it'll be good for me to learn how to do those :)
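The arithmetic behind it is simple: decompression stretches each sample, and since the dry mass does not change, the dry bulk density drops by the same factor the thickness grows. A minimal sketch in R, assuming the compression column is a percentage (the function and column names here are hypothetical, not the actual implementation):

```r
decompact_dbd <- function(df) {
  frac <- df$compression / 100        # compression % -> fraction
  # thickness grows by 1 / (1 - frac) and mass is conserved,
  # so dry bulk density shrinks by (1 - frac)
  df$dbd_decompressed <- df$dbd * (1 - frac)
  df
}

decompact_dbd(data.frame(core = "A", compression = 25, dbd = 1.2))
# dbd_decompressed = 0.9
```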
Hi @Pakillo @MarcioFCMartins @costavale @Julenasti, I have a question about how to approach this function.
I am working on the first improvement. Right now you have two input dataframes:
- The dataframe with the depth and dry bulk density data to be corrected for compression
- An optional dataframe with field measurements to estimate the compression
The idea is that if you have cores without the compression estimated, the function will take data from the second dataframe and estimate the compression of those cores using the estimate_compaction function (one of the other main functions of the package, which already works). However, this complicates the function quite a bit, and I am having trouble allowing the user to choose the names of the input columns in the second dataframe. It works if the second dataframe is provided with the default column names (you can already choose the names of the first dataframe's columns with no problem).
I have pushed what I have done so far to GitHub so you can check it.
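The renaming step I am struggling with looks roughly like this (a sketch only; the argument and column names are made up for illustration, not the current code):

```r
# Map user-supplied column names in the field-measurements dataframe
# onto the defaults that estimate_compaction() expects
rename_fm <- function(df_fm,
                      fm_core_id = "core_id",
                      fm_sampler_length = "sampler_length",
                      fm_internal_distance = "internal_distance",
                      fm_external_distance = "external_distance") {
  user_names <- c(core_id = fm_core_id,
                  sampler_length = fm_sampler_length,
                  internal_distance = fm_internal_distance,
                  external_distance = fm_external_distance)
  pos <- match(unname(user_names), names(df_fm))
  if (anyNA(pos)) stop("some columns were not found in df_fm")
  names(df_fm)[pos] <- names(user_names)
  df_fm
}
```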
My question is: if there is already a function to estimate compression in the package, does it make sense to include this process within the other function? Wouldn't it be better to just run the estimate_compression function for all your cores and then the decompact_linear function?
That way the function would be much simpler and more robust, right? What do you think? I am sure it would not take much of your (or my) time to make it work with both dataframes, but maybe it does not make sense.
You might be right; including redundant functions is probably not the best idea. Let's just simplify and keep the functionality separated.
I agree, let's keep estimate_compression separate. We will write a workflow for the users in the package vignette.
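The user-facing workflow would then be two explicit steps instead of one nested call, something like this (function names as used in this thread; the dataframe and column names are placeholders):

```r
# 1. estimate compression for every core from the field measurements
compaction <- estimate_compaction(df_fm)        # one row per core

# 2. attach it to the samples and apply the linear depth correction
samples   <- merge(df_samples, compaction, by = "core_id")
corrected <- decompact_linear(samples)
```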
Perfect! I have changed the function to eliminate the field-measurements dataframe and the code to estimate compression within the function. Now it works :)
tasks:
- write documentation
- write tests
Both tests and documentation are now written.
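A test for the linear correction could look something like this (a hypothetical sketch; the actual tests and output column names may differ):

```r
library(testthat)

test_that("decompact_linear stretches depths linearly", {
  df  <- data.frame(core = "A", compression = 20,
                    min_depth = 5, max_depth = 10)
  out <- decompact_linear(df)
  # 20 % compression: depths stretch by 1 / (1 - 0.20) = 1.25
  expect_equal(out$min_depth_corrected, 5 * 1.25)
  expect_equal(out$max_depth_corrected, 10 * 1.25)
})
```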
Function to estimate sample depth (min and max) after mathematical decompression.
Input is a df with core code_id, compression %, and the min and max depth of each sample. Output is the same df with two new columns containing the corrected sample min and max depth. The same correction is applied to all samples (linear correction).
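Assuming compression is given as a percentage, the correction amounts to stretching every depth by the same per-core factor; a sketch (column names are hypothetical):

```r
frac <- df$compression / 100                  # compression % -> fraction
df$min_depth_corrected <- df$min_depth / (1 - frac)
df$max_depth_corrected <- df$max_depth / (1 - frac)
```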
tasks:
- check if it works
- write documentation
- write tests
enhancement 1: if compression has not been estimated, an additional df with the field measurements can be provided to estimate the compression of each core using the function estimate_compaction.
enhancement 2: allow the user to provide a column with (compressed) dry bulk density and estimate the decompressed dry bulk density at the same time