Brain-Cog-Lab / Transfer-for-DVS

The repo for "An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain", AAAI 2024 (ORAL)

Usage of NCaltech #2

Open ShuhanYe opened 1 month ago

ShuhanYe commented 1 month ago

Hello author! I ran into some problems while reproducing the paper:

1. I installed the tonic library, but tonic.datasets.NCALTECH101 does not seem to work with this project; for example, it has no cls_count or the other attributes this repo expects. I found the following code in Brain-cog-x that seems usable. Could this be a tonic version issue?

```python
# encoding: utf-8
# Author   : Floyed <Floyed_Shen@outlook.com>
# Datetime : 2023/1/30 21:28
# User     : yu
# Product  : PyCharm
# Project  : BrainCog
# File     : ncaltech101.py
# explain  :

import os

import numpy as np
from tonic.io import read_mnist_file
from tonic.dataset import Dataset
from tonic.download_utils import extract_archive


class NCALTECH101(Dataset):
    """N-CALTECH101 dataset https://www.garrickorchard.com/datasets/n-caltech101.
    Events have (xytp) ordering.
    ::

        @article{orchard2015converting,
          title={Converting static image datasets to spiking neuromorphic datasets using saccades},
          author={Orchard, Garrick and Jayawant, Ajinkya and Cohen, Gregory K and Thakor, Nitish},
          journal={Frontiers in neuroscience},
          volume={9},
          pages={437},
          year={2015},
          publisher={Frontiers}
        }

    Parameters:
        save_to (string): Location to save files to on disk.
        transform (callable, optional): A callable of transforms to apply to the data.
        target_transform (callable, optional): A callable of transforms to apply to the targets/labels.
    """

    url = "https://data.mendeley.com/public-files/datasets/cy6cvx3ryv/files/36b5c52a-b49d-4853-addb-a836a8883e49/file_downloaded"
    filename = "N-Caltech101-archive.zip"
    file_md5 = "66201824eabb0239c7ab992480b50ba3"
    data_filename = "N-Caltech101-archive.zip"
    folder_name = "Caltech101"
    cls_count = [467,
                 435, 200, 798, 55, 800, 42, 42, 47, 54, 46,
                 33, 128, 98, 43, 85, 91, 50, 43, 123, 47,
                 59, 62, 107, 47, 69, 73, 70, 50, 51, 57,
                 67, 52, 65, 68, 75, 64, 53, 64, 85, 67,
                 67, 45, 34, 34, 51, 99, 100, 42, 54, 88,
                 80, 31, 64, 86, 114, 61, 81, 78, 41, 66,
                 43, 40, 87, 32, 76, 55, 35, 39, 47, 38,
                 45, 53, 34, 57, 82, 59, 49, 40, 63, 39,
                 84, 57, 35, 64, 45, 86, 59, 64, 35, 85,
                 49, 86, 75, 239, 37, 59, 34, 56, 39, 60]
    # length = 8242
    length = 8709

    sensor_size = None  # all recordings are of different size
    dtype = np.dtype([("x", int), ("y", int), ("t", int), ("p", int)])
    ordering = dtype.names

    def __init__(self, save_to, transform=None, target_transform=None):
        super(NCALTECH101, self).__init__(
            save_to, transform=transform, target_transform=target_transform
        )

        classes = {
            'BACKGROUND_Google': 0,
            'Faces_easy': 1,
            'Leopards': 2,
            'Motorbikes': 3,
            'accordion': 4,
            'airplanes': 5,
            'anchor': 6,
            'ant': 7,
            'barrel': 8,
            'bass': 9,
            'beaver': 10,
            'binocular': 11,
            'bonsai': 12,
            'brain': 13,
            'brontosaurus': 14,
            'buddha': 15,
            'butterfly': 16,
            'camera': 17,
            'cannon': 18,
            'car_side': 19,
            'ceiling_fan': 20,
            'cellphone': 21,
            'chair': 22,
            'chandelier': 23,
            'cougar_body': 24,
            'cougar_face': 25,
            'crab': 26,
            'crayfish': 27,
            'crocodile': 28,
            'crocodile_head': 29,
            'cup': 30,
            'dalmatian': 31,
            'dollar_bill': 32,
            'dolphin': 33,
            'dragonfly': 34,
            'electric_guitar': 35,
            'elephant': 36,
            'emu': 37,
            'euphonium': 38,
            'ewer': 39,
            'ferry': 40,
            'flamingo': 41,
            'flamingo_head': 42,
            'garfield': 43,
            'gerenuk': 44,
            'gramophone': 45,
            'grand_piano': 46,
            'hawksbill': 47,
            'headphone': 48,
            'hedgehog': 49,
            'helicopter': 50,
            'ibis': 51,
            'inline_skate': 52,
            'joshua_tree': 53,
            'kangaroo': 54,
            'ketch': 55,
            'lamp': 56,
            'laptop': 57,
            'llama': 58,
            'lobster': 59,
            'lotus': 60,
            'mandolin': 61,
            'mayfly': 62,
            'menorah': 63,
            'metronome': 64,
            'minaret': 65,
            'nautilus': 66,
            'octopus': 67,
            'okapi': 68,
            'pagoda': 69,
            'panda': 70,
            'pigeon': 71,
            'pizza': 72,
            'platypus': 73,
            'pyramid': 74,
            'revolver': 75,
            'rhino': 76,
            'rooster': 77,
            'saxophone': 78,
            'schooner': 79,
            'scissors': 80,
            'scorpion': 81,
            'sea_horse': 82,
            'snoopy': 83,
            'soccer_ball': 84,
            'stapler': 85,
            'starfish': 86,
            'stegosaurus': 87,
            'stop_sign': 88,
            'strawberry': 89,
            'sunflower': 90,
            'tick': 91,
            'trilobite': 92,
            'umbrella': 93,
            'watch': 94,
            'water_lilly': 95,
            'wheelchair': 96,
            'wild_cat': 97,
            'windsor_chair': 98,
            'wrench': 99,
            'yin_yang': 100,
        }

        # if not self._check_exists():
        #     self.download()
        #     extract_archive(os.path.join(self.location_on_system, self.data_filename))

        file_path = os.path.join(self.location_on_system, self.folder_name)
        for path, dirs, files in os.walk(file_path):
            dirs.sort()
            # if 'BACKGROUND_Google' in path:
            #     continue
            for file in files:
                if file.endswith("bin"):
                    self.data.append(path + "/" + file)
                    label_name = os.path.basename(path)

                    if isinstance(label_name, bytes):
                        label_name = label_name.decode()
                    self.targets.append(classes[label_name])

    def __getitem__(self, index):
        """
        Returns:
            a tuple of (events, target) where target is the index of the target class.
        """
        events = read_mnist_file(self.data[index], dtype=self.dtype)
        target = self.targets[index]
        events["x"] -= events["x"].min()
        events["y"] -= events["y"].min()
        if self.transform is not None:
            events = self.transform(events)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return events, target

    def __len__(self):
        return len(self.data)

    def _check_exists(self):
        return self._is_file_present() and self._folder_contains_at_least_n_files_of_type(
            8709, ".bin"
        )
```
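As a usage illustration of the class above (the save_to path below is a placeholder, not from the repo):

```python
# Instantiate the loader above and fetch one sample; save_to must contain
# the extracted "Caltech101" folder of .bin recordings.
dataset = NCALTECH101(save_to="D:/datasets/DVS/NCALTECH101")
events, target = dataset[0]
print(len(dataset), target, events.dtype)
```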

2. After replacing tonic.datasets.NCALTECH101 with the NCALTECH101 class above, I ran into another problem: the DVS/NCALTECH101/train_cache directory is created, but nothing seems to get cached into it:

```
Data 3407: D:/datasets\DVS/NCALTECH101/train_cache_10\3407_2.hdf5 not in cache, generating it now
Traceback (most recent call last):
  File "C:\Users\86158\anaconda3\envs\pytorch-gpu\lib\site-packages\tonic\cached_dataset.py", line 145, in __getitem__
    data, targets = load_from_disk_cache(file_path)
  File "C:\Users\86158\anaconda3\envs\pytorch-gpu\lib\site-packages\tonic\cached_dataset.py", line 220, in load_from_disk_cache
    with h5py.File(file_path, "r") as f:
  File "C:\Users\86158\anaconda3\envs\pytorch-gpu\lib\site-packages\h5py\_hl\files.py", line 533, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
  File "C:\Users\86158\anaconda3\envs\pytorch-gpu\lib\site-packages\h5py\_hl\files.py", line 226, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 106, in h5py.h5f.open
FileNotFoundError: [Errno 2] Unable to open file (unable to open file: name = 'D:/datasets\DVS/NCALTECH101/train_cache_10\3407_2.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
```
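One thing worth ruling out here, given the mixed "/" and "\" separators in the failing name, is Windows path handling plus a missing cache subdirectory. A minimal defensive sketch (the path is taken from the error message and is illustrative only):

```python
import os
from pathlib import Path

# Illustrative cache root taken from the error message above.
cache_root = Path("D:/datasets/DVS/NCALTECH101/train_cache_10")

# Make sure the directory really exists before tonic tries to write into it,
# and print a separator-normalized form of the failing file name.
cache_root.mkdir(parents=True, exist_ok=True)
print(os.path.normpath(cache_root / "3407_2.hdf5"))
```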

3. Could you upload the CEPDVS dataset?

Thank you for your work; I look forward to your reply!

ppx-hub commented 1 month ago

> *(quotes ShuhanYe's original question above in full)*

Hello, and thank you for your interest in our work. Some replies:

  1. tonic.datasets.NCALTECH101 lacking cls_count and other attributes: yes, this is a tonic version issue. I used version 0.1.0, the earliest release, which is no longer available from the tonic website. I will upload a copy of the tonic package from my environment; if you use anaconda, you can place it under envs/your_envs/lib/python3.8/site-packages/tonic (see the sketch after this list for a quick way to verify the copy).
  2. tonic uses caching to speed up data loading. I will also upload the generated cache files, which should make the reproduction results consistent.
  3. Yes. I checked: it takes 41 GB in total, so I will pack it and upload it to Baidu Netdisk.

All of the above should be done in roughly two hours. Thanks!
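After copying the package into site-packages, a quick sanity check that the interpreter picks up the intended tonic (generic Python, not repo code):

```python
import tonic

# Print where tonic was imported from and, when available, its version,
# to confirm the manually copied package is the one actually in use.
print(tonic.__file__)
print(getattr(tonic, "__version__", "version attribute not present"))
```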

ShuhanYe commented 1 month ago

Thank you very much! Best wishes with your work!

ppx-hub commented 1 month ago

> Thank you very much! Best wishes with your work!

Hello, sorry for the slightly late reply.

  1. The CEP-DVS dataset (with generated cache files) can be found here: https://pan.baidu.com/s/1_fUF4XvNeQuP1TzkQmhF3w, extraction code: r465
  2. The NCALTECH101 dataset (with generated cache files) can be found here: https://pan.baidu.com/s/189IlBE17CwtQmkZ4_qoqTg, extraction code: q257
  3. tonic version 0.0.1 (my earlier reply saying 0.1.0 was wrong; it should be 0.0.1) can be found here: https://pan.baidu.com/s/1LCimoFgbfAweYu-uJ-WyUA, extraction code: 3x4r
ShuhanYe commented 1 month ago

> *(quotes ppx-hub's reply with the download links above)*

Thank you for your reply! I have successfully run the code, but the accuracy gap is large: with DVS transfer learning on NCALTECH101 I get less than 85% top-1. I noticed two puzzling points:

1. While debugging I hit an index-out-of-range error on source_input_list. The cause: NCALTECH101 drops the Faces class but keeps BACKGROUND_Google, giving 8709 samples, so tmp_sampler_list can draw random indices from 0 to 8708; but the Caltech101 handling in braincog.datasets.datasets removes BACKGROUND_Google (unlike NCALTECH101), so the whole Caltech101 ends up with only 8677 images and the index overflows. I disabled the BACKGROUND_Google removal and changed the corresponding num_classes in build_dataset to 102, which let the code run, but I do not know how you handled this part in your work; perhaps this is what hurts my reproduced accuracy. (The mismatch is illustrated in the snippet below.)
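A minimal sketch of the length mismatch described above (the variable names are illustrative, not the repo's):

```python
import numpy as np

# Sizes taken from the discussion: the DVS side keeps BACKGROUND_Google
# (8709 samples) while the static Caltech101 pipeline drops it (8677),
# so indices sampled against the DVS length can overflow the RGB side.
n_dvs_samples = 8709   # NCALTECH101 without "Faces"
n_rgb_samples = 8677   # Caltech101 after removing BACKGROUND_Google

rng = np.random.default_rng(42)
tmp_sampler_list = rng.integers(0, n_dvs_samples, size=16)

# Any index >= n_rgb_samples raises IndexError on the RGB dataset:
assert tmp_sampler_list.max() < n_rgb_samples, \
    "paired RGB/DVS datasets differ in length; align the class lists"
```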

2. I noticed that in the paper you convert the RGB images to HSV space, do class matching, and feed the RGB data into the VGG_SNN backbone. But in the code, the V-channel and step replication are commented out, and instead the B channel of RGB is taken directly, channel-replicated and step-replicated into the VGG_SNN backbone. May I ask which of the two you used in your work? Taking the Value channel seems to deviate from the original goal of using the color information in the RGB data to guide the DVS data, and it also gives poor results for me; but if the blue channel is taken and replicated, what is the motivation for that?

My settings are as follows; only batch_size and workers differ from yours, due to GPU memory:

```
model: Transfer_VGG_SNN
step: 10
batch_size: 16
act_fun: QGateGrad  # activation function
device: 0
seed: 42
num_classes: 101
traindata_ratio: 1.0
domain_loss: True  # store true
domain_loss_coefficient: 0.5
TET_loss: True
smoothing: 0.0
no_use_hsv: True
regularization: True
workers: 0
```

Looking forward to your reply!

ppx-hub commented 1 month ago
  1. Yes, the classes affected the final accuracy. NCALTECH101 is the DVS version of Caltech101. The original Caltech101 has 101 classes without BACKGROUND_Google and 102 with it, which is why changing num_classes to 102 makes the code run. Even so, NCALTECH101 has one class fewer than Caltech101: of Caltech101's "Faces_easy" and "Faces" classes, NCALTECH101 keeps only "Faces_easy" (centered close-up face shots). 8709 images is correct, i.e. the count remaining after removing the "Faces" class; for the NCALTECH101 dataset, see ncaltech101.py in the tonic package I uploaded.

In the code, Caltech101 needs two modifications so that its class count aligns with the one-class-smaller NCALTECH101 (a sketch of change A follows at the end of this answer):

A) Manually modify the Caltech101 file (i.e. datasets.Caltech101 used by build_dataset in braincog.datasets.datasets), located at /usr/local/anaconda3/envs/all_hx/lib/python3.7/site-packages/torchvision/datasets/caltech.py, and comment out the line that removes the background class.
B) In the Caltech101 dataset itself, manually delete the "Faces" class (one folder).

The two modified files can be found here: https://pan.baidu.com/s/1n3eAw0uLbBpYA1jnoVO2QA, extraction code: 38hh

I did forget to document this change in the repository, but it is important. Thanks for the reminder; I will add it to the README.
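For reference, change A amounts to disabling one category-removal line in torchvision's caltech.py; a small self-contained approximation of that listing logic (paraphrased from torchvision, exact code varies by version):

```python
import os

def caltech101_categories(root, keep_background=True):
    """Approximate mirror of the category listing in torchvision's caltech.py.
    By default torchvision removes BACKGROUND_Google; change A keeps it so
    the class count matches NCALTECH101 together with num_classes=102."""
    categories = sorted(os.listdir(os.path.join(root, "101_ObjectCategories")))
    if not keep_background:
        # This is the line torchvision runs by default and change A comments out.
        categories.remove("BACKGROUND_Google")
    return categories
```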

  2. DVS data come from an event camera that fires when a change in an object's brightness exceeds a set threshold, so they have only two polarity channels (+1 and -1), each pixel taking the value 0 or 1, together with timestamps. Ordinary RGB data have three channels. The intent of our paper is to use RGB to guide DVS so that the network learns better. Because the RGB color space does not explicitly reflect intensity, we convert to HSV space, keep only the Value dimension, and replicate it twice to align with the DVS input. We do not take the B channel of RGB directly: although the V-channel and step replication are commented out in the code, the conversion to HSV happens while the RGB dataset is being read, via the use_hsv parameter of get_transfer_CALTECH101_data in braincog.datasets.datasets. If use_hsv=True, build_dataset executes

```
if use_hsv:
    print("Used V-channel!")
    t.append(ConvertHSV())
```

which performs the RGB-to-HSV conversion (a sketch of such a transform follows below). For the advantage of the HSV color space in guiding DVS data, see the experiments in the appendix of the arXiv version.
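The ConvertHSV transform itself is not shown in this thread; a minimal sketch of what such a transform could look like (assumed implementation; the repo's actual ConvertHSV may differ):

```python
import numpy as np
from PIL import Image

class ConvertHSV:
    """Assumed sketch of a ConvertHSV transform: convert an RGB image to HSV,
    keep only the V (value) channel, and stack it twice so it lines up with
    the two polarity channels of the DVS input."""

    def __call__(self, img):
        v = np.asarray(img.convert("HSV"))[..., 2]  # (H, W) value channel
        return np.stack([v, v], axis=0)             # (2, H, W), like DVS polarity
```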

  3. Your configuration is fine. Setting workers to a value > 0 separates data loading from the main process and gives better throughput if the hardware allows it; batch_size has little effect on the results. (A generic loader call is sketched below.)
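For completeness, a generic PyTorch loader call showing the workers setting (toy dataset standing in for the cached NCALTECH101 pipeline, not the repo's code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in dataset: 64 two-channel frames with random labels.
train_dataset = TensorDataset(torch.randn(64, 2, 48, 48),
                              torch.randint(0, 101, (64,)))

# workers > 0 moves batch preparation into subprocesses, overlapping data
# loading with training; pin_memory speeds up host-to-GPU copies on CUDA.
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True,
                          num_workers=4, pin_memory=True)
```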