UX-Decoder / Segment-Everything-Everywhere-All-At-Once

[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
Apache License 2.0

the code doesn't work for me #43

Open viibridges opened 1 year ago

viibridges commented 1 year ago

Following the guide, I managed to install all requirements in a brand-new conda env. I tried to run the zebra example (the one I am most interested in) but got no segmentation results. I tried other examples as well, to no avail (no segmentation results at all).

The output in my terminal console seemed alright: no error messages, only some warnings. Here is the package list I installed. Could you tell me why it fails?

```
absl-py 1.4.0 accelerate 0.19.0 aiofiles 23.1.0 aiohttp 3.8.4 aiosignal 1.3.1 altair 5.0.1 antlr4-python3-runtime 4.9.3 anyio 3.7.0 appdirs 1.4.4 astunparse 1.6.3 async-timeout 4.0.2 attrs 23.1.0 black 21.4b2 cachetools 5.3.1 certifi 2023.5.7 charset-normalizer 3.1.0 cityscapesScripts 2.2.2 click 8.1.3 cloudpickle 2.2.1 cmake 3.26.3 coloredlogs 15.0.1 contourpy 1.0.7 cycler 0.11.0 detectron2 0.6 diffdist 0.1 diffusers 0.11.1 einops 0.6.1 exceptiongroup 1.1.1 fastapi 0.95.2 ffmpy 0.3.0 filelock 3.12.0 flatbuffers 23.5.26 fonttools 4.39.4 frozenlist 1.3.3 fsspec 2023.5.0 ftfy 6.1.1 future 0.18.3 fvcore 0.1.5.post20221221 gast 0.4.0 google-auth 2.19.0 google-auth-oauthlib 0.4.6 google-pasta 0.2.0 gradio 3.31.0 gradio_client 0.2.5 grpcio 1.54.2 h11 0.14.0 h5py 3.8.0 httpcore 0.17.2 httpx 0.24.1 huggingface-hub 0.14.1 humanfriendly 10.0 hydra-core 1.3.2 idna 3.4 imageio 2.30.0 importlib-metadata 6.6.0 importlib-resources 5.12.0 invisible-watermark 0.1.5 iopath 0.1.9 Jinja2 3.1.2 joblib 1.2.0 json-tricks 3.17.0 jsonschema 4.17.3 keras 2.11.0 kiwisolver 1.4.4 kornia 0.6.4 lazy_loader 0.2 libclang 16.0.0 linkify-it-py 2.0.2 lit 16.0.5 llvmlite 0.40.0 Markdown 3.4.3 markdown-it-py 2.2.0 MarkupSafe 2.1.2 matplotlib 3.7.1 mdit-py-plugins 0.3.3 mdurl 0.1.2 more-itertools 9.1.0 mpmath 1.3.0 multidict 6.0.4 mup 1.0.0 mypy-extensions 1.0.0 networkx 3.1 nltk 3.8.1 numba 0.57.0 numpy 1.23.5 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.2 omegaconf 2.3.0 onnx 1.12.0 onnxruntime 1.15.0 openai 0.27.7 openai-whisper 20230314 opencv-python 4.7.0.72 opt-einsum 3.3.0 orjson 3.8.14 packaging 23.1 pandas 2.0.2 pathspec 0.11.1 Pillow 9.5.0 pip 23.0.1 pkgutil_resolve_name 1.3.10 portalocker 2.7.0 protobuf 3.19.6 psutil 5.9.5 pyarrow 12.0.0 pyasn1 0.5.0 pyasn1-modules 0.3.0 pycocotools 2.0.4 pydantic 1.10.8 pydot 1.4.2 pydub 0.25.1 Pygments 2.15.1 pyparsing 3.0.9 pyquaternion 0.9.9 pyrsistent 0.19.3 python-dateutil 2.8.2 python-multipart 0.0.6 pytz 2023.3 PyWavelets 1.4.1 PyYAML 6.0 regex 2023.5.5 requests 2.31.0 requests-oauthlib 1.3.1 rsa 4.9 scann 1.2.9 scikit-image 0.20.0 scikit-learn 1.2.2 scipy 1.9.1 seaborn 0.12.2 semantic-version 2.10.0 sentencepiece 0.1.99 setuptools 67.8.0 shapely 2.0.1 six 1.16.0 sniffio 1.3.0 starlette 0.27.0 sympy 1.12 tabulate 0.9.0 tenacity 8.2.2 tensorboard 2.11.2 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow 2.11.1 tensorflow-estimator 2.11.0 tensorflow-io-gcs-filesystem 0.32.0 termcolor 2.3.0 threadpoolctl 3.1.0 tifffile 2023.4.12 tiktoken 0.3.3 timm 0.4.12 tokenizers 0.12.1 toml 0.10.2 toolz 0.12.0 torch 2.0.1 torchmetrics 0.6.0 torchvision 0.15.2 tqdm 4.65.0 transformers 4.19.2 triton 2.0.0 typing 3.7.4.3 typing_extensions 4.6.2 tzdata 2023.3 uc-micro-py 1.0.2 urllib3 1.26.16 uvicorn 0.22.0 vision-datasets 0.2.2 wcwidth 0.2.6 websockets 11.0.3 Werkzeug 2.3.4 wheel 0.38.4 wrapt 1.15.0 yacs 0.1.8 yarl 1.9.2 zipp 3.15.0
```

MaureenZOU commented 1 year ago

What GPU environment are you using?

Mahmoud-Ali-FCIS commented 1 year ago

Same problem here! NVIDIA driver 510.108.03, CUDA version 11.6.

arjunbijanki commented 8 months ago

I found that on my GPU (NVIDIA T1200 Laptop GPU), the autocast to fp16 was causing the model to return garbage. Consider disabling it at this line:

```python
torch.autocast(device_type='cuda', dtype=torch.float16)
```
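A minimal sketch of the suggested workaround: rather than deleting the context manager, you can pass `enabled=False` to `torch.autocast` so the surrounding code is unchanged but everything runs in full fp32. The `USE_FP16` flag and the matmul are illustrative, not part of the repo's code; the device/dtype fallback just lets the snippet run on CPU too.

```python
import torch

# Illustrative flag: set False on GPUs where fp16 autocast produces garbage masks.
USE_FP16 = False

# Fall back to CPU/bfloat16 so the sketch runs without a GPU.
device_type = 'cuda' if torch.cuda.is_available() else 'cpu'
dtype = torch.float16 if device_type == 'cuda' else torch.bfloat16

# With enabled=False, autocast is a no-op and ops keep their fp32 inputs/outputs.
with torch.autocast(device_type=device_type, dtype=dtype, enabled=USE_FP16):
    x = torch.randn(4, 4)
    y = x @ x  # stays float32 when autocast is disabled
    print(y.dtype)  # torch.float32
```

This keeps the code path identical in both modes, so you can flip a single flag (or a config option) to compare fp16 and fp32 behavior on a given GPU.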