-
Hello, thank you so much for making this repository public! I am trying to use the deploy.sh script for the music-arousal-valence example, but when I run the script it returns "cog: Cannot load more tha…
-
Hello again!
I've read your paper and I've seen that you use the circumplex model's variables, arousal and valence.
Where do those variables appear in the code? I can't find them :(
Thank you,
Amaia
-
As described in the Keras handbook, _Deep Learning with Python_, for a multi-output model we need to specify different loss functions for the different heads of the network. But because gradient descent …
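The idea of per-head losses combined into one training objective can be sketched framework-independently. The heads, targets, and weights below are hypothetical, not the repository's actual model; in Keras the same thing is done by passing a `loss` dict and `loss_weights` to `model.compile`.

```python
import numpy as np

def mse(y_true, y_pred):
    # mean squared error for a regression head (e.g. a valence score)
    return float(np.mean((y_true - y_pred) ** 2))

def cross_entropy(true_idx, probs, eps=1e-12):
    # negative log-likelihood for a classification head
    return float(-np.mean(np.log(probs[np.arange(len(true_idx)), true_idx] + eps)))

# hypothetical outputs of two heads of one network
valence_true = np.array([0.2, -0.5])
valence_pred = np.array([0.1, -0.4])
label_true = np.array([0, 1])
label_probs = np.array([[0.7, 0.3], [0.4, 0.6]])

# weighted sum of the per-head losses, so that one head's loss scale
# does not dominate the shared gradient (weights are illustrative)
total_loss = 1.0 * mse(valence_true, valence_pred) \
           + 0.5 * cross_entropy(label_true, label_probs)
```

Gradient descent then minimizes `total_loss`, so the choice of the per-head weights directly controls the trade-off between the heads.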
-
Do you know how to annotate the valence and arousal values for each image?
-
I tried torcheeg with the SEED dataset. My code is below:
```
dataset = SEEDDataset(io_path=f'./seed',
                      root_path='../../DATASET/SEED_EEG/Preprocessed_EEG',
…
```
-
Create the annotation export script, which will create multiple Excel/CSV files with the relevant data.
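A minimal sketch of such an export script, using pandas. The column names, the grouping key, and the output layout are assumptions, not the project's actual schema:

```python
import pandas as pd
from pathlib import Path

def export_annotations(df: pd.DataFrame, out_dir: str, group_col: str = "session") -> list:
    """Write one CSV per group (hypothetical grouping) and return the paths created."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for key, group in df.groupby(group_col):
        path = out / f"annotations_{key}.csv"
        group.to_csv(path, index=False)  # to_excel() would work the same way
        paths.append(path)
    return paths
```

Swapping `to_csv` for `to_excel` (with an `.xlsx` suffix) would produce the Excel variant, provided an engine such as `openpyxl` is installed.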
-
Following the discussions in quanteda/quanteda#1263 and quanteda/quanteda#1271, here is the proposal for recording and using dictionary weights.
### Add a new `dictionaryweights()` …
-
See dissertation by Junchao Xu
-
Hi, @serkansulun. Thank you for your great work.
The quality of the music generated by the model is really good.
However, when the batch size is 1 (without considering the effect of batch size)…
-
Hi,
do you apply the KL divergence loss to both valence/arousal and expression?
Can you provide more details about it? For instance what do you pass to the loss? Do you create distributions from the…
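One plausible reading of the question, sketched in NumPy: logits for each head are turned into discrete distributions with softmax, and the KL divergence between target and predicted distributions is the loss. All names and values here are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """Mean KL(p || q) over rows of discrete distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps)), axis=-1).mean())

# hypothetical target and predicted logits for one head (e.g. expression)
target_logits = np.array([[2.0, 0.5, 0.1]])
pred_logits = np.array([[1.5, 0.7, 0.2]])

loss = kl_divergence(softmax(target_logits), softmax(pred_logits))
```

KL divergence is non-negative and reaches zero only when the two distributions match, which is what makes it usable as a training loss here.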