valeoai / xmuda

Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation

Segmentation fault #3

Closed luck528 closed 4 years ago

luck528 commented 4 years ago

Dear authors,

First of all, thanks a lot for your very nice work!

I am now trying to run your code on my own machine, but I run into a segmentation fault. The steps are as follows:

I built the environment and preprocessed the data according to your instructions. Then I run the command for usa_singapore:

python xmuda/train_xmuda.py --cfg=configs/nuscenes/usa_singapore/xmuda.yaml OUTPUT_DIR ./checkpoint/

Then I get the output as:

```
xmuda/train_xmuda.py:377: UserWarning: Output directory exists.
  warnings.warn('Output directory exists.')
2020-06-20 19:39:37,040 xmuda INFO: 1 GPUs available
2020-06-20 19:39:37,040 xmuda INFO: Namespace(config_file='configs/nuscenes/usa_singapore/xmuda.yaml', opts=['OUTPUT_DIR', './checkpoint/'])
2020-06-20 19:39:37,040 xmuda INFO: Loaded configuration file configs/nuscenes/usa_singapore/xmuda.yaml
2020-06-20 19:39:37,041 xmuda INFO: Running with config:
AUTO_RESUME: True
DATALOADER:
  DROP_LAST: True
  NUM_WORKERS: 1
DATASET_SOURCE:
  NuScenesSCN:
    augmentation:
      color_jitter: (0.4, 0.4, 0.4)
      flip_x: 0.5
      fliplr: 0.5
      noisy_rot: 0.1
      rot_z: 6.2831
      transl: True
    full_scale: 4096
    image_normalizer: ()
    merge_classes: True
    nuscenes_dir: /mydata/data/datasets/nuScenes/
    preprocess_dir: /mydata/datasets_preprocess/nuScenes_preprocess/preprocess/
    resize: (400, 225)
    scale: 20
    use_image: True
  TRAIN: ('train_usa',)
  TYPE: NuScenesSCN
DATASET_TARGET:
  NuScenesSCN:
    augmentation:
      color_jitter: (0.4, 0.4, 0.4)
      flip_x: 0.5
      fliplr: 0.5
      noisy_rot: 0.1
      rot_z: 6.2831
      transl: True
    full_scale: 4096
    image_normalizer: ()
    merge_classes: True
    nuscenes_dir: /mydata/datasets/nuScenes/
    preprocess_dir: /mydata/datasets_preprocess/nuScenes_preprocess/preprocess/
    pselab_paths: ()
    resize: (400, 225)
    scale: 20
    use_image: True
  TEST: ('test_singapore',)
  TRAIN: ('train_singapore',)
  TYPE: NuScenesSCN
  VAL: ('val_singapore',)
MODEL:
  TYPE:
MODEL_2D:
  CKPT_PATH:
  DUAL_HEAD: True
  NUM_CLASSES: 5
  TYPE: UNetResNet34
  UNetResNet34:
    pretrained: True
MODEL_3D:
  CKPT_PATH:
  DUAL_HEAD: True
  NUM_CLASSES: 5
  SCN:
    block_reps: 1
    full_scale: 4096
    in_channels: 1
    m: 16
    num_planes: 7
    residual_blocks: False
  TYPE: SCN
OPTIMIZER:
  Adam:
    betas: (0.9, 0.999)
  BASE_LR: 0.001
  TYPE: Adam
  WEIGHT_DECAY: 0.0
OUTPUT_DIR: ./checkpoint/
RESUME_PATH:
RESUME_STATES: True
RNG_SEED: 1
SCHEDULER:
  CLIP_LR: 0.0
  MAX_ITERATION: 100000
  MultiStepLR:
    gamma: 0.1
    milestones: (80000, 90000)
  TYPE: MultiStepLR
TRAIN:
  BATCH_SIZE: 8
  CHECKPOINT_PERIOD: 5000
  CLASS_WEIGHTS: [2.47956584, 4.26788384, 5.71114131, 3.80241668, 1.0]
  FROZEN_PATTERNS: ()
  LOG_PERIOD: 50
  MAX_TO_KEEP: 100
  SUMMARY_PERIOD: 50
  XMUDA:
    lambda_logcoral: 0.0
    lambda_minent: 0.0
    lambda_pl: 0.0
    lambda_xm_src: 1.0
    lambda_xm_trg: 0.1
VAL:
  BATCH_SIZE: 32
  LOG_PERIOD: 20
  METRIC: seg_iou
  PERIOD: 5000
2020-06-20 19:39:37,704 xmuda.train INFO: Build 2D model:
Net2DSeg(
  (net_2d): UNetResNet34(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (2): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer2): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (2): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (3): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer3): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (2): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (3): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (4): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (5): BasicBlock(
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer4): Sequential(
      (0): BasicBlock(
        (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (2): BasicBlock(
        (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (dec_t_conv_stage5): Sequential(
      (0): ConvTranspose2d(512, 256, kernel_size=(2, 2), stride=(2, 2))
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (dec_conv_stage4): Sequential(
      (0): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (dec_t_conv_stage4): Sequential(
      (0): ConvTranspose2d(256, 128, kernel_size=(2, 2), stride=(2, 2))
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (dec_conv_stage3): Sequential(
      (0): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (dec_t_conv_stage3): Sequential(
      (0): ConvTranspose2d(128, 64, kernel_size=(2, 2), stride=(2, 2))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (dec_conv_stage2): Sequential(
      (0): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (dec_t_conv_stage2): Sequential(
      (0): ConvTranspose2d(64, 64, kernel_size=(2, 2), stride=(2, 2))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (dec_conv_stage1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (dropout): Dropout(p=0.4, inplace=False)
  )
  (linear): Linear(in_features=64, out_features=5, bias=True)
  (linear2): Linear(in_features=64, out_features=5, bias=True)
)

Parameters: 2.36e+07

2020-06-20 19:39:37,737 xmuda.train INFO: Build 3D model:
Net3DSeg(
  (net_3d): UNetSCN(
    (sparseModel): Sequential(
      (0): InputLayer()
      (1): SubmanifoldConvolution 1->16 C3
      (2): Sequential(
        (0): Sequential(
          (0): BatchNormLeakyReLU(16,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
          (1): SubmanifoldConvolution 16->16 C3
        )
        (1): ConcatTable(
          (0): Identity()
          (1): Sequential(
            (0): BatchNormLeakyReLU(16,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
            (1): Convolution 16->32 C2/2
            (2): Sequential(
              (0): Sequential(
                (0): BatchNormLeakyReLU(32,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                (1): SubmanifoldConvolution 32->32 C3
              )
              (1): ConcatTable(
                (0): Identity()
                (1): Sequential(
                  (0): BatchNormLeakyReLU(32,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                  (1): Convolution 32->48 C2/2
                  (2): Sequential(
                    (0): Sequential(
                      (0): BatchNormLeakyReLU(48,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                      (1): SubmanifoldConvolution 48->48 C3
                    )
                    (1): ConcatTable(
                      (0): Identity()
                      (1): Sequential(
                        (0): BatchNormLeakyReLU(48,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                        (1): Convolution 48->64 C2/2
                        (2): Sequential(
                          (0): Sequential(
                            (0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                            (1): SubmanifoldConvolution 64->64 C3
                          )
                          (1): ConcatTable(
                            (0): Identity()
                            (1): Sequential(
                              (0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                              (1): Convolution 64->80 C2/2
                              (2): Sequential(
                                (0): Sequential(
                                  (0): BatchNormLeakyReLU(80,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                  (1): SubmanifoldConvolution 80->80 C3
                                )
                                (1): ConcatTable(
                                  (0): Identity()
                                  (1): Sequential(
                                    (0): BatchNormLeakyReLU(80,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                    (1): Convolution 80->96 C2/2
                                    (2): Sequential(
                                      (0): Sequential(
                                        (0): BatchNormLeakyReLU(96,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                        (1): SubmanifoldConvolution 96->96 C3
                                      )
                                      (1): ConcatTable(
                                        (0): Identity()
                                        (1): Sequential(
                                          (0): BatchNormLeakyReLU(96,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                          (1): Convolution 96->112 C2/2
                                          (2): Sequential(
                                            (0): Sequential(
                                              (0): BatchNormLeakyReLU(112,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                              (1): SubmanifoldConvolution 112->112 C3
                                            )
                                          )
                                          (3): BatchNormLeakyReLU(112,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                          (4): Deconvolution 112->96 C2/2
                                        )
                                      )
                                      (2): JoinTable()
                                      (3): Sequential(
                                        (0): BatchNormLeakyReLU(192,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                        (1): SubmanifoldConvolution 192->96 C3
                                      )
                                    )
                                    (3): BatchNormLeakyReLU(96,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                    (4): Deconvolution 96->80 C2/2
                                  )
                                )
                                (2): JoinTable()
                                (3): Sequential(
                                  (0): BatchNormLeakyReLU(160,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                                  (1): SubmanifoldConvolution 160->80 C3
                                )
                              )
                              (3): BatchNormLeakyReLU(80,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                              (4): Deconvolution 80->64 C2/2
                            )
                          )
                          (2): JoinTable()
                          (3): Sequential(
                            (0): BatchNormLeakyReLU(128,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                            (1): SubmanifoldConvolution 128->64 C3
                          )
                        )
                        (3): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                        (4): Deconvolution 64->48 C2/2
                      )
                    )
                    (2): JoinTable()
                    (3): Sequential(
                      (0): BatchNormLeakyReLU(96,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                      (1): SubmanifoldConvolution 96->48 C3
                    )
                  )
                  (3): BatchNormLeakyReLU(48,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                  (4): Deconvolution 48->32 C2/2
                )
              )
              (2): JoinTable()
              (3): Sequential(
                (0): BatchNormLeakyReLU(64,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
                (1): SubmanifoldConvolution 64->32 C3
              )
            )
            (3): BatchNormLeakyReLU(32,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
            (4): Deconvolution 32->16 C2/2
          )
        )
        (2): JoinTable()
        (3): Sequential(
          (0): BatchNormLeakyReLU(32,eps=0.0001,momentum=0.99,affine=True,leakiness=0)
          (1): SubmanifoldConvolution 32->16 C3
        )
      )
      (3): BatchNormReLU(16,eps=0.0001,momentum=0.99,affine=True)
      (4): OutputLayer()
    )
  )
  (linear): Linear(in_features=16, out_features=5, bias=True)
  (linear2): Linear(in_features=16, out_features=5, bias=True)
)

Parameters: 2.69e+06

2020-06-20 19:39:40,425 xmuda.train INFO: No checkpoint found. Initializing model from scratch
2020-06-20 19:39:40,426 xmuda.train INFO: No checkpoint found. Initializing model from scratch
Initialize Nuscenes dataloader
Load ('train_usa',)
Initialize Nuscenes dataloader
Load ('train_singapore',)
Initialize Nuscenes dataloader
Load ('val_singapore',)
2020-06-20 19:40:26,556 xmuda.train INFO: Start training from iteration 0
Segmentation fault
```

I am running it on a single TITAN Xp GPU.

I would greatly appreciate it if you or anyone else could help!

Thanks again!

maxjaritz commented 4 years ago

Hi,

Thanks for using our code!

Maybe there is a problem with SparseConvNet. Please test it by running:

$ python xmuda/models/scn_unet.py

If you get a segmentation fault here as well, please check that you have the correct CUDA version (10.0, not 10.1).
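If it helps to isolate the problem further, here is a minimal standalone smoke test along the same lines (a sketch, not part of this repo; it assumes a CUDA build of PyTorch and an installed sparseconvnet):

```python
# Minimal sparseconvnet smoke test: one submanifold convolution on two voxels.
# If this forward pass segfaults, the sparseconvnet build itself is broken.
import torch
import sparseconvnet as scn

# Coordinates are (x, y, z, batch_index); features are one channel per point.
coords = torch.LongTensor([[0, 0, 0, 0], [1, 1, 1, 0]])
feats = torch.ones(2, 1).cuda()

net = scn.Sequential(
    scn.InputLayer(3, 16, mode=4),                    # dimension=3, 16^3 grid
    scn.SubmanifoldConvolution(3, 1, 16, 3, False),   # 1 -> 16 channels, 3^3 filter
    scn.OutputLayer(3),
).cuda()

out = net([coords, feats])
print(out.shape)  # expected: torch.Size([2, 16])
```

If even this single convolution crashes, the problem lies in the sparseconvnet installation rather than in the xmuda code.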

luck528 commented 4 years ago

Hi,

Thanks a lot for your reply! Yes, when I run $ python xmuda/models/scn_unet.py, I get the same segmentation fault. However, I checked the CUDA version, and it is 10.0:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
```

I know this problem is more closely related to sparseconvnet, but I am still wondering whether you have any other advice, since I have already tried the solutions from the issues in the sparseconvnet repo.

Thanks a lot!

maxjaritz commented 4 years ago

Maybe check the PyTorch version. Do you use PyTorch version 1.4?
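For reference, this prints both the PyTorch version and the CUDA version PyTorch was built against (which can differ from the system-wide nvcc):

$ python -c "import torch; print(torch.__version__, torch.version.cuda)"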

luck528 commented 4 years ago

Hi,

Thanks a lot for your help! Yes, my PyTorch version is 1.4, but the error still exists.

However, the problem is now solved by building sparseconvnet from source.
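For anyone interested, building from source roughly follows the steps in the SparseConvNet README (the develop.sh script is part of that repo):

$ git clone https://github.com/facebookresearch/SparseConvNet.git
$ cd SparseConvNet/
$ bash develop.sh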

Thanks a lot! I will close the issue.

weiliuxm commented 4 years ago

> Hi,
>
> Thanks a lot for your help! Yes, my PyTorch version is 1.4, but the error still exists.
>
> However, the problem is now solved by building sparseconvnet from source.
>
> Thanks a lot! I will close the issue.

Hi, could you share how to build sparseconvnet from source, please?
Thank you.

Eaton2022 commented 2 years ago

> Hi,
>
> Thanks a lot for your help! Yes, my PyTorch version is 1.4, but the error still exists.
>
> However, the problem is now solved by building sparseconvnet from source.
>
> Thanks a lot! I will close the issue.

Hi, I am running into the same problem. Could you share how you solved it in detail? Thanks a lot.

Eaton2022 commented 2 years ago

> Hi, Thanks a lot for your help! Yes, my PyTorch version is 1.4, but the error still exists. However, the problem is now solved by building sparseconvnet from source. Thanks a lot! I will close the issue.
>
> Hi, I am running into the same problem. Could you share how you solved it in detail? Thanks a lot.

Solved. The problem was that sparseconvnet had not compiled correctly. You should run:

pip install --upgrade git+https://github.com/facebookresearch/SparseConvNet.git

You should then see "Successfully installed sparseconvnet-0.2" in the terminal.
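To double-check which version actually ended up installed, a plain pip query works:

$ pip show sparseconvnet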