-
Hi,
where is the script that generates the pickle file? It would be interesting to know, because it greatly affects the size of the graph. I can see that the protein atoms in your files have a small number of …
-
# Add functionality to write out model metadata
It would be nice to have a function for writing out the date and time of each new modeling attempt, which variables were selected, which preprocess…
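A minimal sketch of what such a helper could look like, assuming one JSON record is appended per modeling attempt; the function name `write_run_metadata` and the field names are hypothetical, not an existing API:

```python
# Hypothetical helper: append one JSON record per modeling attempt,
# capturing the timestamp, selected variables, and preprocessing step.
import json
from datetime import datetime, timezone
from pathlib import Path

def write_run_metadata(path, variables, preprocessing, **extra):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "variables": list(variables),
        "preprocessing": preprocessing,
        **extra,  # any additional run details (model type, score, ...)
    }
    # Append as one JSON line so the log keeps a full history of attempts.
    with Path(path).open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = write_run_metadata("runs.jsonl", ["age", "height"], "standard-scaler")
print(rec["variables"])  # ['age', 'height']
```

Appending JSON lines keeps every attempt queryable later without locking the log into one schema.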
-
I'm close, but it fails at the very end...
```
hines@entrvm0002 MINGW64 ~/neuron/nrn/build
$ ninja
[0/2] Re-checking globbed directories...
[4/4] Running utility command for hoc_module
FAILED: src/nrnpyth…
-
When running a xonsh script from the xonsh shell prompt, the shell _sometimes_ uses an outdated (cached?) version of the script. Only copying the script to a new name resolves this; even exiting the sh…
-
@hanzhanggit Hello!
Thank you for your contributions to this code.
I'm trying to train it on my own dataset.
I followed reedscot/icml2016 and trained a char-CNN-RNN text encoder.
But it's a .t7…
-
Hi,
I'm trying out your SplitPy software, and I'm finding some very interesting characteristics, such as the use of an open platform (Python) and the automation of the process that takes into acc…
-
The data files in Participant Data are tricky to work with in Python, requiring unclear expressions to access the data, e.g.
`age = data['data'][0,0]['individual'][0,0]['age'][0][0]`
In order…
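If these files are MATLAB `.mat` files read via SciPy, the nested `[0,0]` indexing can be avoided with `loadmat`'s `squeeze_me` and `struct_as_record` options. A minimal sketch, assuming a hypothetical `participant.mat` with the same nested struct layout (created here just for the demo):

```python
# Sketch: more ergonomic access to nested MATLAB structs with SciPy.
# "participant.mat" and its fields are stand-ins for the real data files.
from scipy.io import savemat, loadmat

# Create a tiny .mat file mimicking the nested struct layout.
savemat("participant.mat", {"data": {"individual": {"age": 42}}})

# squeeze_me=True drops the singleton [0,0] dimensions;
# struct_as_record=False exposes structs as objects with attribute access.
mat = loadmat("participant.mat", squeeze_me=True, struct_as_record=False)
age = mat["data"].individual.age
print(int(age))  # 42
```

This turns the original `data['data'][0,0]['individual'][0,0]['age'][0][0]` chain into plain attribute access.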
-
Hi authors, thanks for your great work!
I saw the FAQ: since the D2_net pre-processed data can no longer be downloaded, you suggest using MegaDepth directly.
May I ask how to use it for training? Like…
-
Currently, every solution seems to execute the queries on Parquet files. In the benchmark video, it is noted that DuckDB runs on a different instance type to match the number of cores of a distributed sys…
-
Hi @ehcalabres @melkilin, can you please explain how to train the model on custom data that follows a format similar to cub-100 (folders of classes)?