-
Hello, I'm currently using the TSN framework (PyTorch). I want to extract the features of the last convolutional layer and feed them into a newly added tanh activation layer (I added the tanh layer inside the `def prepare_tsn` function in models.py, because I can't add it directly in the yaml). Do you know how to do this? Looking forward to your reply, thanks.
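One common way to do this in plain PyTorch (a minimal sketch, not TSN-specific — the toy backbone below stands in for the TSN base model, and attaching the hook inside `prepare_tsn` is left as an assumption) is to register a forward hook on the last conv layer and pass its output through a new `nn.Tanh`:

```python
import torch
import torch.nn as nn

# Toy backbone standing in for the TSN base model.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),          # "last conv layer"
)
tanh = nn.Tanh()

feats = {}

def hook(module, inputs, output):
    # capture the conv features so they can be fed into tanh
    feats["conv"] = output

# find the last Conv2d and attach the hook
last_conv = [m for m in backbone.modules() if isinstance(m, nn.Conv2d)][-1]
handle = last_conv.register_forward_hook(hook)

x = torch.randn(2, 3, 32, 32)
_ = backbone(x)
activated = tanh(feats["conv"])   # tanh-squashed conv features
handle.remove()
print(activated.shape)            # torch.Size([2, 16, 32, 32])
```

The hook avoids editing the backbone's `forward`, so the same pattern works whichever base architecture the yaml selects.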
-
Hello. I didn't understand how you calculated the average number of instructions. For example: **inst_avg_logic**. Is it calculated as the total number of logic instructions divided by the number of bas…
lumi0 updated 6 months ago
-
I'm looking for a few pointers on how to efficiently scale up extract_features. Unlike training, there isn't a lot of information on distributed prediction out there- I'd like to try one or more of t…
simra updated 5 years ago
-
Thank you for your elegant work! I am wondering whether InternV2 has the same functionality as InternVL-C in the previous versions, which supported cross-modal feature retrieval, or how I can get aligned embeddin…
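Once the two modalities share an aligned embedding space (CLIP/InternVL-C style), cross-modal retrieval reduces to cosine similarity: L2-normalize both sides and rank by dot product. A minimal sketch with random stand-in embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
image_emb = rng.normal(size=(5, 512))   # 5 image embeddings
text_emb = rng.normal(size=(3, 512))    # 3 text queries

def normalize(x):
    # L2-normalize along the feature dimension
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

sim = normalize(text_emb) @ normalize(image_emb).T   # (3, 5) cosine scores
best = sim.argmax(axis=1)                            # top-1 image per query
print(sim.shape)   # (3, 5)
```

If a model does not expose a dedicated retrieval head, this works with any pair of encoders whose outputs were contrastively aligned during training.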
-
When I use extract_features.py to extract features, the process seems to get stuck. I tried these two commands:
> python3 extract_features.py --mode caffe \
--num-cpus 2 --gpus '0,1,2,3' \
…
-
Hi, thanks for your work. I downloaded Pre-extracted EgoNLQ Features multiple times and failed when downloading 50% each time. Do you have any idea about that? Thank you.
-
### Question
[BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2#feature-extraction-example) allows extracting [Unimodal features](https://github.com/salesforce/LAVIS/blob/3446bac20…
-
final input = 'A string of text from which to extract the features.';
final params = ApiQueryNLPFeatureExtraction(
inputs: [input], options: InferenceOptions(waitForModel: true));…
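A Python equivalent of the Dart snippet above would build the same request body for the hosted feature-extraction pipeline. The endpoint path and model name here are illustrative assumptions; the payload mirrors `ApiQueryNLPFeatureExtraction(inputs, options)`, and the network call itself is left unexecuted:

```python
import json
import urllib.request

# Illustrative endpoint; swap in the model you actually query.
API_URL = ("https://api-inference.huggingface.co/pipeline/"
           "feature-extraction/sentence-transformers/all-MiniLM-L6-v2")

def build_request(text, token="hf_..."):   # placeholder token
    # Same shape as the Dart client's inputs/options payload.
    payload = {"inputs": [text], "options": {"wait_for_model": True}}
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    return req, payload

req, payload = build_request(
    "A string of text from which to extract the features.")
# urllib.request.urlopen(req) would return a JSON list of embedding
# vectors; not executed here to keep the sketch offline.
```

The response is a nested list of floats (one vector per input string), so the Dart and Python clients can share downstream code.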
-
Hi,
Thanks for sharing your work. I was wondering how you extracted [these](https://drive.google.com/file/d/1BFw0jc0j-ffT2PhI4CZeP3IJFZg3GxlZ/view) visual features of the CXR images in the OpenI da…
-
Ahoi hoi gang,
within our discussion wrt auditory features and things (e.g. #702 and #596), I wanted to propose the inclusion of the cochleagram feature (standard version [here](https://github.com/…
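For readers unfamiliar with the feature: a cochleagram is usually computed by passing the waveform through an ERB-spaced gammatone filterbank and rectifying each channel. A minimal sketch of that standard recipe (an assumption, not the specific implementation linked above):

```python
import numpy as np

def erb(f):
    # equivalent rectangular bandwidth (Hz) at center frequency f
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, sr, dur=0.05):
    # 4th-order gammatone impulse response: t^3 e^{-2*pi*1.019*ERB(fc)*t} cos(2*pi*fc*t)
    t = np.arange(int(dur * sr)) / sr
    g = t ** 3 * np.exp(-2 * np.pi * 1.019 * erb(fc) * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def cochleagram(signal, sr, n_filters=16, fmin=100.0, fmax=4000.0):
    # center frequencies equally spaced on the ERB-rate scale
    erb_scale = lambda f: 21.4 * np.log10(1 + 0.00437 * f)
    inv_erb = lambda e: (10 ** (e / 21.4) - 1) / 0.00437
    centers = inv_erb(np.linspace(erb_scale(fmin), erb_scale(fmax), n_filters))
    # filter by direct convolution, then half-wave rectify each channel
    rows = [np.maximum(np.convolve(signal, gammatone_ir(fc, sr), mode="same"), 0.0)
            for fc in centers]
    return np.stack(rows)          # (n_filters, n_samples)

sr = 8000
sig = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # 1 s, 440 Hz tone
cg = cochleagram(sig, sr)
print(cg.shape)                    # (16, 8000)
```

Production implementations typically add envelope smoothing and downsampling per channel, but the filterbank above is the core of the feature.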