-
Hello, I can't find this file in your DeepCDR repository:

```python
import pickle

with open('/nfs/DeepCDR/DeepCDR-master/data/dataprocessed.pkl', 'rb') as f:
    dataidxall = pickle.load(f)
```
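A guarded loader makes the failure mode clearer while the file is missing. This is only a sketch; the `load_pickle` helper and the error message are mine, not part of DeepCDR:

```python
import os
import pickle

def load_pickle(path):
    """Load a pickle, failing with a clearer message if the file is absent."""
    if not os.path.exists(path):
        # Hypothetical message: the file may need to be produced by the
        # repository's preprocessing scripts rather than downloaded as-is.
        raise FileNotFoundError(
            f"{path} not found; it may need to be generated by the "
            "repository's preprocessing scripts"
        )
    with open(path, "rb") as f:
        return pickle.load(f)
```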
-
### What can be improved?
Being able to load tabular data is a wonderful new feature, but it is currently limited by forcing the data to 'match' calliope's dimension names in certain cases. Enforc…
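To illustrate the friction, here is a minimal sketch of the renaming a user currently has to do by hand before loading. The column names and the rename mapping are hypothetical, not calliope's actual schema:

```python
import pandas as pd

# Hypothetical user data whose column names don't match the expected dimension names
df = pd.DataFrame(
    {"region": ["a", "b"], "technology": ["pv", "wind"], "value": [1.0, 2.0]}
)

# Manual workaround: map the user's column names onto the dimension names
# the loader expects (mapping shown here is illustrative only)
RENAME = {"region": "nodes", "technology": "techs"}
df = df.rename(columns=RENAME)
```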
-
If the data reaches one hundred million rows, many problems arise:
1. ConstructDatabase takes 1-2 hours to generate the data.
2. ConstructDatabase requires the input parameter `block.binpb`; even witho…
-
### What is the bug or the crash?
I have a VRT with 70k sources; `gdalinfo` on it takes 3-4 seconds, but loading the same VRT in QGIS can take up to 20 minutes (this is the time to load layer i…
-
Observed on 2024-06-13 on [Kalev's stack](https://fragalysis-kalev-default.xchem-dev.diamond.ac.uk) with `A71EV2A` target (data archive `A71EV2A_full_screen_040624.tgz`, uploader Max, TAS `lb32627-66`…
-
Right now we use `pandas.read_csv`, which implements `on_bad_lines="warn"`, so we could use that to report more errors before stopping.
If we implement our own CSV reader, it should do the …
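For reference, a small sketch of how `on_bad_lines="warn"` behaves: malformed rows are reported as warnings and skipped instead of aborting the whole read (the sample CSV below is made up):

```python
import io
import pandas as pd

# Second data row has an extra field and is therefore malformed
csv_text = "a,b\n1,2\n3,4,5\n6,7\n"

# With on_bad_lines="warn", pandas emits a ParserWarning for the bad
# row and drops it, keeping the well-formed rows
df = pd.read_csv(io.StringIO(csv_text), on_bad_lines="warn")
```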
-
When training with the default hsi_twostream.yaml file in train.py, I encountered the error shown in the following image.
![image](https://github.com/hexiao0275/S2ADet/assets/125127115/5d1a96c2-a0f8…
-
Inspiration: https://github.com/OpenAWE-Project/OpenAWE/commit/68663aaa78b22a7edd7a97f16ede6992353b73ea
-
Prior to #13 I've done some refactoring to how scores data are loaded. I've ended up just pushing them to master, but in hindsight it would be good to get some review on them.
Specifically this is …
-
Request:
Support loading parquet as a source datatype in vega-loader
Envisioned solution:
```javascript
{
  "data": [
    {
      "name": "my_data",
      "format": {"type": "parquet"},
…