prio-data/prediction_competition_2023
Code for generating benchmark models and evaluation scripts for the 2023 VIEWS prediction competition
4 stars · 5 forks
Issues (newest first)
| # | Title | Author | State | Age | Comments |
|---|-------|--------|-------|-----|----------|
| #42 | changes that make the evaluation functions work | noorains | closed | 2 months ago | 0 |
| #41 | added conflictology benchmark, which takes random values with replacement from the last 12 months (see sketch below) | noorains | open | 8 months ago | 8 |
| #40 | Suggesting contestants and competition calculations take the sorting of samples/simulations within and across observations as fixed | colaresi | open | 1 year ago | 0 |
| #39 | test_compliance.py does not work with refactored data structure | kvelleby | open | 1 year ago | 0 |
| #38 | Refactor eval | kvelleby | closed | 1 year ago | 3 |
| #37 | Create benchmark.py | noorains | closed | 1 year ago | 2 |
| #36 | Add utilities | kvelleby | closed | 1 year ago | 0 |
| #35 | Added bootstrap_.py that takes filename as argument along with LOA and year (see sketch below) | noorains | closed | 1 year ago | 2 |
| #34 | Pull eval-files to separate folder | kvelleby | closed | 1 year ago | 1 |
| #33 | Use a pyarrow.dataset.partitioning scheme to format submission_folders (see sketch below) | kvelleby | closed | 1 year ago | 1 |
| #32 | Percentile maps | sarakallis | open | 1 year ago | 2 |
| #31 | Pull utility functions out into a separate utilities.py module | kvelleby | closed | 1 year ago | 1 |
| #30 | out of memory when using plotting.collect_plotting_data() at pgm level | kvelleby | closed | 1 year ago | 2 |
| #29 | How to plot uncertainties on maps | kvelleby | open | 1 year ago | 5 |
| #28 | changed decimal and thousands separator | kvelleby | closed | 1 year ago | 0 |
| #27 | Fix separators and commas in tables | kvelleby | closed | 1 year ago | 1 |
| #26 | Necessary improvements to the maps and line plots as presented in Berlin | hhegre | open | 1 year ago | 3 |
| #25 | More benchmarks | kvelleby | open | 1 year ago | 3 |
| #24 | Update shared_competition_data in Dropbox | kvelleby | open | 1 year ago | 1 |
| #23 | Separate between models with and without features beyond historical fatalities | kvelleby | open | 1 year ago | 0 |
| #22 | Better visualizations | kvelleby | closed | 1 year ago | 5 |
| #21 | Ignorance Score binning schemes (see sketch below) | kvelleby | open | 1 year ago | 2 |
| #20 | Hierarchical reconciliation | kvelleby | open | 1 year ago | 0 |
| #19 | Testing for uncertainty in evaluation metrics due to sampling | kvelleby | open | 1 year ago | 0 |
| #18 | What to do about samples with negative values? | kvelleby | open | 1 year ago | 0 |
| #17 | Update README.md | kvelleby | closed | 1 year ago | 0 |
| #16 | Update README.md | kvelleby | closed | 1 year ago | 0 |
| #15 | To produce 'outcome' instead of 'ged-sb' in the output parquet files | noorains | closed | 1 year ago | 1 |
| #14 | from_table argument in Queryset .with_column | noorains | closed | 1 year ago | 2 |
| #13 | Benchmark_models.ipynb: some functions use global variables in the notebook, so they can't be imported in BenchmarkModels.py | noorains | closed | 1 year ago | 1 |
| #12 | Suggestion to use getpass.getuser() for root user | noorains | closed | 1 year ago | 1 |
| #11 | describe_expanded function in BenchmarkModels.py | noorains | closed | 1 year ago | 1 |
| #10 | ensemble_ignorance_score needs consistent number of samples | kvelleby | open | 1 year ago | 1 |
| #9 | Add bootstrap | jimdale | closed | 1 year ago | 0 |
| #8 | Added documentation, better error messages, small fixes, support for … | kvelleby | closed | 1 year ago | 0 |
| #7 | Ign | kvelleby | closed | 1 year ago | 0 |
| #6 | Interval score support (see sketch below) | kvelleby | closed | 1 year ago | 0 |
| #5 | Added a TypeError check. | kvelleby | closed | 1 year ago | 0 |
| #4 | Fixed testing for whether observed is outside prediction range. | kvelleby | closed | 1 year ago | 0 |
| #3 | Removed "steps" from the structure, and added checks with clear error messages. | kvelleby | closed | 1 year ago | 0 |
| #2 | Support for ignorance score | kvelleby | closed | 1 year ago | 0 |
| #1 | CRPS (see sketch below) | kvelleby | closed | 1 year ago | 0 |
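Issue #41 describes the conflictology benchmark as taking random values with replacement from the last 12 months. A minimal sketch of that resampling idea, assuming a long-format pandas frame with month_id, unit_id and fatalities columns (illustrative names, not necessarily the repository's actual schema):

```python
import numpy as np
import pandas as pd

def conflictology_benchmark(history: pd.DataFrame, n_samples: int = 1000,
                            seed: int | None = None) -> dict:
    """Predictive samples per unit, drawn with replacement from that unit's
    observed fatalities over the last 12 months of `history`.

    Assumes columns ['month_id', 'unit_id', 'fatalities'] (illustrative names).
    """
    rng = np.random.default_rng(seed)
    last_12 = history[history["month_id"] > history["month_id"].max() - 12]
    samples = {}
    for unit, group in last_12.groupby("unit_id"):
        # Sampling with replacement turns the empirical distribution of the
        # last year into the predictive distribution for the forecast window.
        samples[unit] = rng.choice(group["fatalities"].to_numpy(),
                                   size=n_samples, replace=True)
    return samples
```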
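Issue #35 says bootstrap_.py "takes filename as argument along with LOA and year". A hedged argparse sketch of that interface; the flag names and the cm/pgm choices for level of analysis are assumptions, not the script's actual options:

```python
import argparse

# Hypothetical CLI matching the description in #35.
parser = argparse.ArgumentParser(
    description="Bootstrap evaluation metrics for one submission file.")
parser.add_argument("filename", help="parquet file with predictive samples")
parser.add_argument("--loa", choices=["cm", "pgm"], required=True,
                    help="level of analysis (country-month or PRIO-GRID-month)")
parser.add_argument("--year", type=int, required=True,
                    help="test-window year to evaluate")
args = parser.parse_args()
print(args.filename, args.loa, args.year)
```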
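Issue #33 proposes a pyarrow.dataset.partitioning scheme for the submission folders. A small self-contained example of hive-style partitioning with pyarrow's dataset API; the column names and the submissions/ path are placeholders, not the competition's actual layout:

```python
import pyarrow as pa
import pyarrow.dataset as ds

# Hypothetical submission table (schema is an assumption for illustration).
table = pa.table({
    "month_id": [505, 505, 506],
    "country_id": [57, 118, 57],
    "draw": [0, 0, 0],
    "outcome": [3, 0, 12],
})

# Hive-style partitioning writes files under e.g.
# submissions/month_id=505/part-0.parquet.
part = ds.partitioning(pa.schema([("month_id", pa.int64())]), flavor="hive")
ds.write_dataset(table, "submissions", format="parquet", partitioning=part)

# Reading back recovers the partition column from the directory names.
dataset = ds.dataset("submissions", format="parquet", partitioning="hive")
print(dataset.to_table().to_pandas())
```

Directory names like month_id=505/ let readers prune partitions without opening every file, which is presumably the point of standardizing the submission folders.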
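Issue #21 concerns binning schemes for the Ignorance Score. The score is -log2 of the probability the forecast assigns to the bin containing the observation, so the choice of bin edges directly shapes the score. A sketch with an illustrative binning suited to zero-inflated fatality counts (not the scheme settled on in the issue):

```python
import numpy as np

def binned_ignorance_score(samples, observed, bin_edges):
    """Ignorance (log) score of a sample-based forecast under a fixed binning:
    -log2 of the empirical probability of the bin containing the observation."""
    counts, _ = np.histogram(samples, bins=bin_edges)
    probs = counts / counts.sum()
    obs_bin = np.clip(np.digitize(observed, bin_edges) - 1, 0, len(probs) - 1)
    # A small floor keeps the score finite when no sample lands in the observed
    # bin; handling zero-probability bins is itself a design choice.
    return -np.log2(max(probs[obs_bin], 1e-6))

# Illustrative edges: fine resolution near zero, coarse in the tail.
edges = np.array([0, 1, 3, 10, 30, 100, np.inf])
samples = np.random.default_rng(0).poisson(2, size=1000)
print(binned_ignorance_score(samples, observed=4, bin_edges=edges))
```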
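Issue #6 adds interval score support. The standard interval score (Gneiting & Raftery 2007) for a central (1 − α) prediction interval [l, u] is (u − l) + (2/α)(l − y) when y < l and (u − l) + (2/α)(y − u) when y > u; a direct NumPy translation, not necessarily the repository's implementation:

```python
import numpy as np

def interval_score(lower, upper, observed, alpha=0.1):
    """Interval score for a central (1 - alpha) prediction interval:
    interval width plus a 2/alpha penalty per unit of miss outside
    the interval. Lower is better."""
    lower, upper, observed = map(np.asarray, (lower, upper, observed))
    width = upper - lower
    penalty_below = (2.0 / alpha) * np.clip(lower - observed, 0, None)
    penalty_above = (2.0 / alpha) * np.clip(observed - upper, 0, None)
    return width + penalty_below + penalty_above

# y = 14 falls 4 above the 90% interval [0, 10]: 10 + (2/0.1)*4 = 90
print(interval_score(0, 10, 14, alpha=0.1))
```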
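Issue #1 added CRPS. For a sample-based forecast, the CRPS can be estimated as E|X − y| − ½·E|X − X′| with X, X′ independent draws from the forecast distribution; a sketch using the sorted-sample identity for E|X − X′|, which runs in O(n log n) rather than O(n²) (the repository's own implementation may differ):

```python
import numpy as np

def crps_samples(samples, observed):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    term1 = np.mean(np.abs(x - observed))
    # For sorted x, sum_{i,j} |x_i - x_j| = 2 * sum_i (2i - n - 1) * x_i
    # (1-indexed), which avoids forming the n-by-n pairwise-difference matrix.
    weights = 2.0 * np.arange(1, n + 1) - n - 1
    term2 = 2.0 * np.dot(weights, x) / (n * n)
    return term1 - 0.5 * term2

# A forecast concentrated on the observation scores 0; spread adds penalty.
print(crps_samples(np.full(1000, 3.0), observed=3.0))   # 0.0
print(crps_samples(np.random.default_rng(1).poisson(3, 1000), observed=3))
```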