-
We want to deliver a "custom site" to clients for whom our pro bono team is doing consulting, where they can explore the outputs of our modelling work in an interface similar to the one currently available on …
-
A dozen pages of content can be copied as-is into the new site: we have a list.
Another few can be turned into database tables that can be maintained in the Filament admin.
-
> `Assumptions` (which can be very general) are "translated" into `exogenous data`, which is used in a `model calculation`. So the current setup is OK. We want to add the axiom: _`exogenous data` is ab…
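The assumption → exogenous data → model calculation chain described above could be sketched as a minimal data model. This is purely illustrative: the class and field names (`Assumption`, `ExogenousData`, `ModelCalculation`) are hypothetical, not the project's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of the chain described above: an Assumption is
# "translated" into ExogenousData, which feeds a ModelCalculation.
# All names and fields are illustrative, not the real schema.

@dataclass
class Assumption:
    description: str  # can be very general, e.g. "GDP grows 2%/yr"

@dataclass
class ExogenousData:
    source_assumption: Assumption  # where this data was translated from
    values: dict                   # concrete numbers derived from it

@dataclass
class ModelCalculation:
    inputs: list  # the exogenous data series this calculation uses

    def run(self) -> dict:
        # Placeholder: merge all input values into one result dict.
        return {k: v for d in self.inputs for k, v in d.values.items()}

a = Assumption("GDP grows 2% per year")
x = ExogenousData(a, {"gdp_growth": 0.02})
calc = ModelCalculation([x])
print(calc.run())  # → {'gdp_growth': 0.02}
```

The proposed axiom would then constrain how `ExogenousData` may relate to its `source_assumption`, on top of this structure.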
-
**Name:** SHREYA KUMAR
**Email:** shreyakumar31@gmail.com
**LinkedIn Profile:** www.linkedin.com/alt-shreya
Attach the homework screenshots below for both step II and step III:
-----------------…
-
### problem ###
Volcanic ash hazard data, expressed as thickness, is currently being used for all exposure types, but it is only relevant for places and landcover.
### background ###
Some notes on volcanic ash hazard data bas…
-
_This issue was automatically created by [Allstar](https://github.com/ossf/allstar/)._
**Security Policy Violation**
Project is out of compliance with Binary Artifacts policy: binaries present in sou…
-
Hi, I really like your approach for modelling ordinal data and am trying it on my data. I have gene expression data and an ordinal response, as well as categorical unpenalized factors. I have split m…
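The setup described above — a high-dimensional penalized block plus unpenalized factors against an ordinal response — can be illustrated with a generic sketch. This is *not* the package's own API (which the snippet doesn't show); it uses the Frank–Hall binary decomposition with scikit-learn, and all data shapes and parameter values are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-ins for the data described above (illustrative only):
# X_expr: high-dimensional gene-expression matrix (to be penalized)
# X_fact: dummy-coded categorical factors
# y: ordinal response with ordered levels 0 < 1 < 2
n = 200
X_expr = rng.normal(size=(n, 50))
X_fact = rng.integers(0, 2, size=(n, 2)).astype(float)
y = np.clip((X_expr[:, 0] + X_fact[:, 0]
             + rng.normal(size=n)).round(), -1, 1).astype(int) + 1

X = np.hstack([X_expr, X_fact])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Frank-Hall decomposition: fit one penalized binary model per
# threshold P(y > k), then recover class probabilities by differencing.
levels = np.sort(np.unique(y))
cum = []
for k in levels[:-1]:
    clf = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
    clf.fit(X_tr, (y_tr > k).astype(int))
    cum.append(clf.predict_proba(X_te)[:, 1])

cum = np.vstack([np.ones(len(X_te)), *cum, np.zeros(len(X_te))])
probs = cum[:-1] - cum[1:]          # P(y == k) for each level
pred = levels[probs.argmax(axis=0)]
print("test accuracy:", (pred == y_te).mean())
```

Note one simplification: the L2 penalty here is applied uniformly to every column, so the categorical factors are penalized too. Leaving them truly unpenalized needs per-column penalty weights, which scikit-learn does not expose directly — that is exactly what dedicated penalized-ordinal packages handle.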
-
Thanks for the great work!
In the FLMR paper, I saw there are two external datasets, the _Google Search Corpus_ and the _Wikipedia Corpus_, used for the OK-VQA task. I saw the Google Search Corpus [here](https://driv…
-
Use case implementation of FAIR Data Train Planning issue #2 'Identify (meta)data schemas from use cases'
mroos updated 1 month ago
-
## Questions
There are two things the original [macpan-base ms](https://github.com/mac-theobio/macpan_base) does: describe the model and describe the modelling framework. Is this what we still want…