sailist opened this issue 2 years ago
@sailist Did you get to replicate the other feature extractions?
There is no detailed explanation in this paper. In the MMGCN paper (https://github.com/hujingwen6666/MMGCN), the authors use openSMILE with the IS10 configuration to extract audio features of size 1582. Visual and text features in MMGCN are also available. By the way, the code structure of the two projects is similar in some places...
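(For reference, the IS10 functionals are typically extracted with the openSMILE command-line tool along these lines; the exact config path is an assumption and depends on your openSMILE version:
SMILExtract -C config/IS10_paraling.conf -I utterance.wav -O utterance_is10.arff
This yields the 1582-dimensional IS10 feature vector per utterance mentioned above.)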
@sailist Did you extract the video features?
I use the IEMOCAP features provided by COGMEN and the MELD features provided by MMGCN.
Okay. But are you able to train the network? I am not; please guide me with my problem below:
When executing train.py, I am getting this error:
No such file or directory: './model_checkpoints/model.pt'
Do I need to run eval.py? The error is:
usage: eval.py [-h] --dataset {iemocap,iemocap_4,mosei} [--data_dir_path DATA_DIR_PATH] [--device DEVICE] [--modalities {a,at,atv}] [--emotion EMOTION] eval.py: error: the following arguments are required: --dataset
Okay. But are you able to train the network? I am not; please guide me with my problem below:
When executing train.py, I am getting this error:
No such file or directory: './model_checkpoints/model.pt'
@Coding511 For training the network, try passing the --from_begin argument to the training script. (The --from_begin argument trains the network from scratch, without expecting a pretrained model ('./model_checkpoints/model.pt') to train from.)
python train.py --dataset="iemocap_4" --modalities="atv" --from_begin --epochs=55
Do I need to run eval.py? The error is:
usage: eval.py [-h] --dataset {iemocap,iemocap_4,mosei} [--data_dir_path DATA_DIR_PATH] [--device DEVICE] [--modalities {a,at,atv}] [--emotion EMOTION] eval.py: error: the following arguments are required: --dataset
For evaluation, please refer to the Google Colab notebook referenced in the README.md file:
https://colab.research.google.com/drive/1biIvonBdJWo2TiYyTiQkxZ_V88JEXa_d?usp=sharing
@iabhinavjoshi Thanks. But still, my device is not working, maybe due to limited compute.
Could you please guide me through a CPU installation, or does PyTorch Geometric not work with GPU?
I have CUDA 10.2, but for torch-sparse it says a higher version is required, though I have successfully installed torch-geometric.
Or else, how can I train the model from scratch with your given features on Colab, if possible? Please give me the steps on Colab if possible. I tried cloning the COGMEN folder and changing the directory to it; then, without unzipping the model and after installing comet-ml, I ran:
%run train.py
But the error I encounter is given below. Do I need to do anything else there? Waiting for your reply, @iabhinavjoshi. Thanks.
OSError                                   Traceback (most recent call last)
/content/COGMEN/train.py in <module>()
----> 6 import cogmen
/content/COGMEN/cogmen/__init__.py in <module>()
----> 5 from .model.COGMEN import COGMEN
/content/COGMEN/cogmen/model/COGMEN.py in <module>()
----> 5 from .GNN import GNN
/content/COGMEN/cogmen/model/GNN.py in <module>()
----> 2 from torch_geometric.nn import RGCNConv, TransformerConv
/usr/local/lib/python3.7/dist-packages/torch_geometric/__init__.py in <module>()
----> 4 import torch_geometric.data
/usr/local/lib/python3.7/dist-packages/torch_geometric/data/__init__.py in <module>()
----> 1 from .data import Data
/usr/local/lib/python3.7/dist-packages/torch_geometric/data/data.py in <module>()
----> 3 from torch_geometric.typing import OptTensor, NodeType, EdgeType
/usr/local/lib/python3.7/dist-packages/torch_geometric/typing.py in <module>()
----> 4 from torch_sparse import SparseTensor
/usr/local/lib/python3.7/dist-packages/torch_sparse/__init__.py in <module>()
---> 19 torch.ops.load_library(spec.origin)
/usr/local/lib/python3.7/dist-packages/torch/_ops.py in load_library(self, path)
--> 255 ctypes.CDLL(path)
/usr/lib/python3.7/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error)
--> 364 self._handle = _dlopen(self._name, mode)
OSError: /usr/local/lib/python3.7/dist-packages/torch_sparse/_spmm_cuda.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE
Would you mind sharing your configuration of openSMILE? I wonder how you get an audio feature of size 100.
@iabhinavjoshi Hi, would you mind explaining this question?
@sailist Are you using their features directly?
What is the segmentation for training and testing? Are they using session 4 for testing?
What is this:
choices=["iemocap", "iemocap_4", "mosei"]
For IEMOCAP, could you guide me with the above problem, please?
@sailist Are you using their features directly? What is the segmentation for training and testing? Are they using session 4 for testing? What is this:
choices=["iemocap", "iemocap_4", "mosei"]
For IEMOCAP, could you guide me with the above problem, please?
This is not a dataset problem.
OSError: /usr/local/lib/python3.7/dist-packages/torch_sparse/_spmm_cuda.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE
This exception means that the versions of torch and the torch_geometric dependencies are mismatched; you can manually install the matching wheels from https://pytorch-geometric.com/whl/
After installing the right versions, everything will be fine.
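As a quick helper (not from the repo), something like the following prints the wheel index URL matching your local torch build, which you can then pass to pip with -f:

import torch

# Build the PyG wheel index URL matching the installed torch build.
# Pattern: https://data.pyg.org/whl/torch-<version>+<cuXXX or cpu>.html
version = torch.__version__.split("+")[0]   # e.g. "1.12.0"
cuda = torch.version.cuda                   # e.g. "11.3", or None for CPU-only builds
suffix = "cu" + cuda.replace(".", "") if cuda else "cpu"
print(f"https://data.pyg.org/whl/torch-{version}+{suffix}.html")
# then: pip install torch-scatter torch-sparse -f <printed URL>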
@sailist, I am trying on my local desktop without a GPU now, and the code is working; the training has started. I will post the result soon.
However, one of my questions was: how are these features segmented? I mean, is it a random split, or a session-wise, speaker-independent analysis?
Okay. But are you able to train the network? I am not; please guide me with my problem below: When executing train.py, I am getting this error: No such file or directory: './model_checkpoints/model.pt'
@Coding511 For training the network, try passing the --from_begin argument to the training script. (The --from_begin argument trains the network from scratch, without expecting a pretrained model ('./model_checkpoints/model.pt') to train from.)
python train.py --dataset="iemocap_4" --modalities="atv" --from_begin --epochs=55
Do I need to run eval.py? The error is: usage: eval.py [-h] --dataset {iemocap,iemocap_4,mosei} [--data_dir_path DATA_DIR_PATH] [--device DEVICE] [--modalities {a,at,atv}] [--emotion EMOTION] eval.py: error: the following arguments are required: --dataset
For evaluation, please refer to the Google Colab notebook referenced in the README.md file:
https://colab.research.google.com/drive/1biIvonBdJWo2TiYyTiQkxZ_V88JEXa_d?usp=sharing
Hi Abhinav, in which journal is this paper published?
On my machine, your features produce 82% classification accuracy, and I think in your paper you are claiming more than 84%. What else do I need to consider here to reproduce your results? Can I email you?
Would you mind sharing your configuration of openSMILE? I wonder how you get an audio feature of size 100.
@sailist Thank you for your interest. :)
We followed the scripts provided by the following repositories: https://github.com/soujanyaporia/multimodal-sentiment-analysis and https://github.com/declare-lab/multimodal-deep-learning
The audio feature size provided in the above repositories matches the one shared in our repo. https://github.com/soujanyaporia/multimodal-sentiment-analysis/tree/master/dataset/iemocap/raw
They provide a detailed feature-extraction pipeline, along with explanations, for multiple datasets such as MOSI, MOSEI, and IEMOCAP.
On my machine, your features produce 82% classification accuracy, and I think in your paper you are claiming more than 84%. What else do I need to consider here to reproduce your results? Can I email you?
@Coding511 We achieve a similar score of 81.55% with the textual modality. I wonder if you are passing --modalities="atv" to your training script. Also, we used Comet's Bayesian optimizer for hyperparameter tuning (https://www.comet.com/docs/python-sdk/introduction-optimizer/); you can try that as well for replicating the results.
Please feel free to email me at ajoshi@cse.iitk.ac.in for further queries.
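For reference, a Comet Bayesian sweep looks roughly like the sketch below; the hyperparameter names and ranges are illustrative assumptions, not the search space used for the paper:

from comet_ml import Optimizer

# Bayesian hyperparameter search via Comet's Optimizer (see the docs linked above).
opt = Optimizer({
    "algorithm": "bayes",
    "parameters": {
        "learning_rate": {"type": "float", "min": 1e-5, "max": 1e-3, "scalingType": "loguniform"},
        "drop_rate": {"type": "float", "min": 0.1, "max": 0.6},
    },
    "spec": {"metric": "test_f1", "objective": "maximize"},
})

for experiment in opt.get_experiments(project_name="cogmen-sweep"):
    lr = experiment.get_parameter("learning_rate")
    drop = experiment.get_parameter("drop_rate")
    # ... run training with these values, then report the score back to Comet:
    # experiment.log_metric("test_f1", best_f1)
    experiment.end()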
@iabhinavjoshi Thanks. But still, my device is not working, maybe due to limited compute.
Could you please guide me through a CPU installation, or does PyTorch Geometric not work with GPU? I have CUDA 10.2, but for torch-sparse it says a higher version is required, though I have successfully installed torch-geometric. Or else, how can I train the model from scratch with your given features on Colab, if possible? Please give me the steps on Colab if possible. I tried cloning the COGMEN folder and changing the directory to it; then, without unzipping the model and after installing comet-ml, I ran:
%run train.py
But the error I encounter is given below. Do I need to do anything else there? Waiting for your reply, @iabhinavjoshi. Thanks.
OSError: /usr/local/lib/python3.7/dist-packages/torch_sparse/_spmm_cuda.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE
I got the same error; I just changed the way I install the packages:
import os
import torch
os.environ['TORCH'] = torch.__version__
print(torch.__version__)
!pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install torch-cluster -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install torch-spline-conv -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install torch-geometric  # torch-geometric==2.0.3
!pip install sentence_transformers
!pip install comet_ml --upgrade --quiet
@MMirandaM Are you talking about Colab, or simple imports in this order with CPU?
Thanks :)
@MMirandaM Are you talking about Colab, or simple imports in this order with CPU?
Thanks :)
Yes, in Colab. I searched for this error, and every forum said it was a CUDA version error, so I installed the packages this way.
[edit] I did the experiments in Colab without a GPU, using only the Colab CPU, and it worked. To run on CPU it is necessary to use the --device argument. My command looks like this: !python train.py --dataset="iemocap_4" --modalities="atv" --device="cpu" --from_begin --epochs=10
Okay, so you are saying that after cloning the repo, I should first run these lines and then train? But I am getting this error there:
      1 import os
      2 import torch
----> 3 os.environ['TORCH'] = torch.version
      4 print(torch.version)

/usr/lib/python3.7/os.py in __setitem__(self, key, value)
    684     def __setitem__(self, key, value):
    685         key = self.encodekey(key)
--> 686         value = self.encodevalue(value)
    687         self.putenv(key, value)
    688         self._data[key] = value

/usr/lib/python3.7/os.py in encode(value)
    754     def encode(value):
    755         if not isinstance(value, str):
--> 756             raise TypeError("str expected, not %s" % type(value).__name__)
    757         return value.encode(encoding, 'surrogateescape')
    758     def decode(value):

TypeError: str expected, not module
There is no detailed explanation in this paper. In the MMGCN paper (https://github.com/hujingwen6666/MMGCN), the authors use openSMILE with the IS10 configuration to extract audio features of size 1582. Visual and text features in MMGCN are also available. By the way, the code structure of the two projects is similar in some places...
@sailist They are using the same features here. I am trying to read this pickle file; however, I am not getting what it is. Can anyone explain what the audio, video, and text features are here? Where is that 1582-dimensional audio feature?
Opening this pickle file, it is a tuple of 9 elements, with 7 dictionaries and 2 lists. What are they? :(
So confusing.
@Coding511 The meaning of these elements can be understood by reading the data-loading code; they are:
video_ids, video_speakers, video_labels, video_text, video_audio, video_visual, video_sentence,
train_ids, test_ids
Printing them in Jupyter can help you understand them better. For example:
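A minimal way to inspect it in Python, based on the element names above (the .pkl path is an assumption; point it at the file in your ./data folder):

import pickle

path = "./data/iemocap_4/data_iemocap_4.pkl"  # hypothetical path; adjust to your data file

with open(path, "rb") as f:
    (video_ids, video_speakers, video_labels, video_text,
     video_audio, video_visual, video_sentence,
     train_ids, test_ids) = pickle.load(f)

# The first seven elements are dicts keyed by dialogue id; the last two
# list the dialogue ids that make up the train and test splits.
first_id = list(train_ids)[0]
print("train/test dialogues:", len(train_ids), len(test_ids))
print("utterances in first train dialogue:", len(video_labels[first_id]))
print("audio feature dim:", len(video_audio[first_id][0]))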
!python train.py --dataset="iemocap_4" --modalities="atv" --device="cpu" --from_begin --epochs=10
On a CPU runtime in Colab, I am getting this:
Downloading: 100% ... (model download progress lines omitted)
Traceback (most recent call last):
File "train.py", line 6, in <module>
import cogmen
File "/content/COGMEN/cogmen/__init__.py", line 5, in <module>
from .model.COGMEN import COGMEN
File "/content/COGMEN/cogmen/model/COGMEN.py", line 5, in <module>
from .GNN import GNN
File "/content/COGMEN/cogmen/model/GNN.py", line 2, in <module>
from torch_geometric.nn import RGCNConv, TransformerConv
File "/usr/local/lib/python3.7/dist-packages/torch_geometric/__init__.py", line 4, in <module>
import torch_geometric.data
File "/usr/local/lib/python3.7/dist-packages/torch_geometric/data/__init__.py", line 1, in <module>
from .data import Data
File "/usr/local/lib/python3.7/dist-packages/torch_geometric/data/data.py", line 3, in <module>
from torch_geometric.typing import OptTensor, NodeType, EdgeType
File "/usr/local/lib/python3.7/dist-packages/torch_geometric/typing.py", line 4, in <module>
from torch_sparse import SparseTensor
File "/usr/local/lib/python3.7/dist-packages/torch_sparse/__init__.py", line 19, in <module>
torch.ops.load_library(spec.origin)
File "/usr/local/lib/python3.7/dist-packages/torch/_ops.py", line 255, in load_library
ctypes.CDLL(path)
File "/usr/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /usr/local/lib/python3.7/dist-packages/torch_sparse/_spmm_cuda.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE
!python train.py --dataset="iemocap_4" --modalities="atv" --device="cpu" --from_begin --epochs=10
On a CPU runtime in Colab, I am getting this:
OSError: /usr/local/lib/python3.7/dist-packages/torch_sparse/_spmm_cuda.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE
see https://github.com/Exploration-Lab/COGMEN/issues/1#issuecomment-1195554071
@MMirandaM I did that too and am still getting the same error on Colab, and on my laptop I am not able to import torch-sparse. I don't know what to do.
I think this will help you; just run: https://colab.research.google.com/drive/1cnuQ9Kbd9jqi5tU-4XbLOjNJfQFk4fN4?usp=sharing
@MMirandaM Thanks :) It is working in Colab now; however, I am not able to train the model on my machine with the same config. Is it necessary to run preprocess.py here, since the features are already saved in the ./data folder?
If you don't have the data in the './data' folder, then you run preprocess.py, for example as below.
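A likely invocation, mirroring train.py's --dataset flag (please verify the exact arguments against the README):
python preprocess.py --dataset="iemocap_4"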
Ok, did you train the network on a GPU on a laptop? Also, have you extracted the features from scratch: audio, video, and text features for IEMOCAP?
On my machine, your features produce 82% classification accuracy, and I think in your paper you are claiming more than 84%. What else do I need to consider here to reproduce your results? Can I email you?
What configuration do you use to reach 81.55%? What command did you run?
Because when I run the following command: python train.py --dataset="iemocap" --modalities="atv" --from_begin --epochs=55, I only get an accuracy of 62%.
Thank you for your reply.
Yes, I trained on my laptop both with a GPU and without a GPU.
@MMirandaM iemocap_4 can reach about 0.82; iemocap (6-way) can reach about 0.62.
What configuration do you use to reach 81.55%? What command did you run?
@MMirandaM Yes, approximately 82 on four emotions and 62 for six emotions. But I am not getting why the script prints F1 for the best epoch, not accuracy.
Yes, I trained on my laptop both with a GPU and without a GPU.
I am not able to import torch-sparse with the GPU. Is there any other way to train the model with a GPU? Thanks
!pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install torch-cluster -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install torch-spline-conv -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install torch-geometric  # torch-geometric==2.0.3
!pip install sentence_transformers
!pip install comet_ml --upgrade --quiet
I believe there is no other way. You could try to set up an environment similar to mine; the versions in my venv (conda) are Python 3.7.13, PyTorch 1.12.0, and CUDA 11.3.
Then run the installation of the libs like this:
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.12.0+cu113.html
pip install torch-sparse -f https://data.pyg.org/whl/torch-1.12.0+cu113.html
pip install torch-cluster -f https://data.pyg.org/whl/torch-1.12.0+cu113.html
pip install torch-spline-conv -f https://data.pyg.org/whl/torch-1.12.0+cu113.html
pip install torch-geometric
pip install sentence_transformers
pip install comet_ml --upgrade --quiet
I am using CUDA 11.3 and PyTorch '1.12.0+cu113'.
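A quick sanity check (not from the repo) to confirm that the compiled extensions import against the installed torch build:
python -c "import torch, torch_sparse, torch_geometric; print(torch.__version__, torch.version.cuda)"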
@MMirandaM Thanks, I think it's working now; torch-sparse is installed. However, after using the above installation, when running train.py I am getting the error with the trace below:
Seed set
08/02/2022 07:53:28 Loaded data.
SeqContext-> USING Transformer
args.drop_rate: 0.5
Traceback (most recent call last):
  File "C:\Users\Downloads\COGMEN-main\train.py", line 301, in <module>
    main(args)
  File "C:\Users\Downloads\COGMEN-main\train.py", line 81, in main
    model = cogmen.COGMEN(args).to(args.device)
  File "C:\Users\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 908, in to
    device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs)
RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, ort, mps, xla, lazy, vulkan, meta, hpu, privateuseone device type at start of device string: GPU
@MMirandaM Thanks, I think it's working now; torch-sparse is installed. However, after using the above installation, when running train.py I am getting the error with the trace below:
RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, ort, mps, xla, lazy, vulkan, meta, hpu, privateuseone device type at start of device string: GPU
To train, use: python train.py --dataset="iemocap" --modalities="atv" --from_begin --epochs=55
PyTorch doesn't understand 'GPU'; the correct device string is 'cuda'. I believe you are passing a 'GPU' device parameter in training, but you don't need to. See the sketch below.
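A minimal sketch of the accepted device strings in plain PyTorch (not COGMEN-specific):

import torch

# "GPU" is not a valid device string; use "cpu", "cuda", or "cuda:0".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(8, 4).to(device)   # .to() parses the same device strings
print(device)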
Yes, I did that, thanks. Why don't I need to? I set it to 'cuda' in place of 'GPU', and then it works.
The author sets this value by default; check the args in 'train.py'.
On my machine, your features produce 82% classification accuracy, and I think in your paper you are claiming more than 84%. What else do I need to consider here to reproduce your results?
@iabhinavjoshi, could you help me? I wrote you an email.
I use the IEMOCAP features provided by COGMEN and the MELD features provided by MMGCN.
So you are using their features directly? Did you try obtaining these features yourself? IEMOCAP has one video for multiple labeled audio files; how are those video features extracted for a dialogue?
@sailist Is the MELD dataset freely available? Please forward the link. Also, did you get the result on IEMOCAP_4 as 84%?
Would you mind sharing your configuration of openSMILE? I wonder how you get an audio feature of size 100.