-
Could you give details on how you preprocess the raw data into the V/T/A features that are stored in *.npy files? Only the textual features are mentioned in your paper.
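Since only the textual branch is described in the paper, here is a minimal, hedged sketch of what such preprocessing typically looks like: one feature matrix per modality, saved as a `.npy` file whose row `i` is the vector for item `i`. The extractor and the file names below are hypothetical placeholders, not the authors' actual pipeline; only the NumPy save/load pattern is concrete.

```python
import numpy as np

def extract_features(num_items: int, dim: int) -> np.ndarray:
    # Placeholder: in a real pipeline these vectors would come from a
    # visual / textual / acoustic encoder; here they are random stand-ins.
    rng = np.random.default_rng(0)
    return rng.standard_normal((num_items, dim)).astype(np.float32)

num_items = 100
visual = extract_features(num_items, 512)    # V features
textual = extract_features(num_items, 384)   # T features
acoustic = extract_features(num_items, 128)  # A features

# One .npy file per modality (file names are illustrative only).
np.save("image_feat.npy", visual)
np.save("text_feat.npy", textual)
np.save("audio_feat.npy", acoustic)

# Loading back for training recovers the exact arrays.
v = np.load("image_feat.npy")
print(v.shape)  # (100, 512)
```

The key point is that `np.save`/`np.load` round-trip the arrays losslessly, so the training code only needs to agree on the item-to-row ordering.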
-
Tieba activity: (please see [SARS-CoV-2 Timeline by 2020.02.21](https://github.com/agorahub/_meta/blob/agoran/theagora/sari/Memorandum_2020-02-21_SARS-CoV-2-Timeline_Nathan.pdf?raw=true), by Nathan :cloud: )
- Colla…
-
Hi, I am a bit confused about what is used to extract the textual features.
If, as mentioned in your paper, the text description is used:
> The metadata of the amazon datasets contains the…
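The quoted passage is cut off, but the general pattern of turning text descriptions from Amazon metadata into fixed-length feature vectors can be sketched. The hashing-trick vectorizer below is a hypothetical stand-in for illustration only, not the paper's actual text encoder (which would typically be a pretrained language model):

```python
import numpy as np

def hashed_bow(text: str, dim: int = 64) -> np.ndarray:
    """Map a text description to a fixed-length vector via the hashing trick.

    Stand-in for a real text encoder: each token is hashed into one of
    `dim` buckets and the bucket counts are accumulated.
    """
    vec = np.zeros(dim, dtype=np.float32)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    # L2-normalise so descriptions of different lengths are comparable.
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

descriptions = [
    "wireless bluetooth headphones with noise cancelling",
    "stainless steel kitchen knife set",
]
text_feat = np.stack([hashed_bow(d) for d in descriptions])
print(text_feat.shape)  # (2, 64)
```

Whatever encoder is actually used, the output has this shape: one fixed-length row per item, which is what gets stored as the textual `.npy` feature file.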
-
I really appreciate your great work on multi-modal recommendation! I am trying to work on the multimodal encoding, so I just want to see whether it can achieve higher performance with other feature extractors. I …
-
When I run:
```python
time_limit = 60*60*8 # at most 8 hours
predictor = ImagePredictor()
hyperparameters = {'batch_size': 10}
predictor.fit(train, hyperparameters=hyperparameters, hyperparam…
```
-
Hi, I am trying to run scripts/train_kitti360.py
I ran into the following error, shown below in [Error Log].
Actually, I hit this error before when trying to run kitti360_inference.ipynb.
I resolved t…
-
- [x] I have checked that this bug exists on the latest stable version of AutoGluon
- [ ] and/or I have checked that this bug exists on the latest mainline of AutoGluon via source installation
**D…
-
Hi,
A PR to discuss an issue raised in an exchange following a Hub PR: [https://huggingface.co/datasets/AmazonScience/massive](https://huggingface.co/datasets/AmazonScience/massive)/discussions/1 …
-
Here we can add the papers we find
-
https://lablab.ai/event/audiocraft-24-hours-hackathon/introspectiwavevisioneers
It's exciting to see how lablab.ai is organizing the AudioCraft 24-hours Hackathon, diving into the realm of audi…