QuanMPhm closed this 2 months ago
@knikolla @naved001 Also, while looking through the code for the PI-specific invoices, I noticed that they are generated before the BU subsidy is applied. I assume this is not intended behavior? My apologies for missing this and not bringing it up when I first implemented the BU subsidy.
@knikolla @naved001 Since the PI-specific invoices are a list of invoices instead of just one, I'm wondering how I should refactor them. The main question I'm concerned with is how to subclass the base invoice.Invoice class. I imagine the arguments for the child "PI-specific" invoice class (aside from the ones in the base class) would be the PI name and the output folder name. However, I don't know what dataframe should be the input. Should it be the full combined dataframe, which the PI-specific invoice would then filter down? Or should it be the already filtered CSV? It's a minor design question, but I will follow your opinions.
No need to have one instance per PI. You can just overload the export_s3 and export functions and have those write multiple files. In this case you would provide whatever single dataframe has all the info you need to create the invoices.
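For reference, a minimal sketch of that suggestion: one invoice object that holds the full combined dataframe and whose export writes one CSV per PI. Only the export method name and the invoice.Invoice base class come from this thread; the "Manager (PI)" column, the folder layout, and the file naming are assumptions for illustration, not the repo's actual API.

```python
import os
from dataclasses import dataclass

import pandas


@dataclass
class PIInvoice:  # would subclass invoice.Invoice in the real code
    """One invoice object whose export writes one CSV per PI."""

    name: str
    data: pandas.DataFrame              # the full combined dataframe
    output_folder: str = "pi_invoices"  # assumed folder layout
    pi_column: str = "Manager (PI)"     # assumed column name

    def export(self) -> None:
        os.makedirs(self.output_folder, exist_ok=True)
        for pi, pi_rows in self.data.groupby(self.pi_column):
            # Filter the combined dataframe down to one PI and write it out.
            pi_rows.to_csv(
                os.path.join(self.output_folder, f"{pi}_{self.name}.csv"),
                index=False,
            )
```

export_s3 could presumably be overloaded the same way, looping over the generated files and uploading each one.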
Regarding the invoices being generated before the BU subsidy is applied: this is why I mentioned that ordering is important.
@QuanMPhm the Subsidy is managed by BU after the MGHPCC data is processed. Therefore the Subsidy should not be in the PI-specific CSVs.
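In other words, the PI-specific export has to run before the subsidy step. A rough sketch of that ordering, with purely illustrative names (the "Manager (PI)" column, the file names, and the subsidy placeholder are not the repo's actual API):

```python
import pandas


def process(combined: pandas.DataFrame) -> pandas.DataFrame:
    """Sketch only: write the PI-specific CSVs first, then apply the subsidy."""

    # 1. PI-specific CSVs are built from the pre-subsidy combined dataframe,
    #    so no subsidy column ever appears in them.
    for pi, rows in combined.groupby("Manager (PI)"):
        rows.to_csv(f"pi_{pi}.csv", index=False)

    # 2. The subsidy is applied afterwards, only for the BU-facing output.
    with_subsidy = combined.copy()
    with_subsidy["Subsidy"] = 0.0  # placeholder; the real amount is handled by BU
    return with_subsidy
```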