shenyunhang / APE

[CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception
https://arxiv.org/abs/2312.02153
Apache License 2.0

Docker environment problem #14

Open zhiwenhou1227 opened 6 months ago

zhiwenhou1227 commented 6 months ago
  1. I pulled the image provided in the earlier issue with docker pull keyk13/ape_image:v1.
  2. However, the xformers library is not present in the container. pip install xformers installs version 0.0.23 and automatically upgrades torch; installing version 0.0.17 instead fails with the following error:

     NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
         query : shape=(4, 1024, 16, 64) (torch.float32)
         key : shape=(4, 1024, 16, 64) (torch.float32)
         value : shape=(4, 1024, 16, 64) (torch.float32)
         attn_bias : <class 'NoneType'>
         p : 0.0
     flshattF is not supported because:
         xFormers wasn't build with CUDA support
         dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
         requires a GPU with compute capability > 7.5
     tritonflashattF is not supported because:
         xFormers wasn't build with CUDA support
         dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
         triton is not available
         requires A100 GPU
     cutlassF is not supported because:
         xFormers wasn't build with CUDA support
     smallkF is not supported because:
         xFormers wasn't build with CUDA support
         max(query.shape[-1] != value.shape[-1]) > 32
         has custom scale
         unsupported embed per head: 64

The GPU is a V100. I am not sure what the problem is. Can a working xformers be installed in the Docker image? (A diagnostic sketch follows below.)
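A minimal diagnostic sketch for narrowing this down (my own, not an APE script; it assumes torch and xformers are importable inside the container): on a V100 the compute capability is 7.0, so the flash-attention backends are expected to be unavailable, and the cutlass kernel with fp16/bf16 inputs is the one that has to work. If even that raises NotImplementedError, the installed wheel simply has no CUDA kernels for this setup.

```python
# Diagnostic sketch: does the installed xformers wheel ship CUDA kernels usable here?
# Assumes torch and xformers are importable inside the container (not an APE script).
import torch
import xformers
import xformers.ops as xops

print("torch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
print("xformers:", xformers.__version__)
print("compute capability:", torch.cuda.get_device_capability(0))  # V100 reports (7, 0)

# Flash-attention backends need fp16/bf16 and a newer architecture than the V100,
# so this call should fall back to the cutlass kernel. If it still raises
# NotImplementedError, the wheel was built without CUDA support for this setup.
q = torch.randn(4, 1024, 16, 64, device="cuda", dtype=torch.float16)
out = xops.memory_efficient_attention(q, q, q)
print("memory_efficient_attention OK:", tuple(out.shape))
```

xformers also provides python -m xformers.info, which prints the same per-backend availability information and can be used instead of the snippet above.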

shenyunhang commented 6 months ago

@zhiwenhou1227

This image was not provided by us. The xformers and other dependencies we use are compiled from source; because of CUDA version constraints, installing them directly with pip may be incompatible.

With pytorch 1.12.1, cuda 11.3, python 3.9 and a V100, following the official instructions, we can compile and run the following dependencies normally: xformers==0.0.23, apex==22.03, bitsandbytes==0.41.0

Alternatively, using a higher torch version with the corresponding dependencies also works (a version-matching sanity check is sketched below).
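For reference, a rough pre-build sanity check (a sketch, not from the APE docs; it assumes nvcc is on PATH in the build environment): the CUDA toolkit used to compile xformers and apex from source should match the CUDA version torch was built against, which is the usual reason a mismatched build or pip wheel ends up without usable kernels.

```python
# Pre-build sanity check (sketch): the local nvcc should match torch.version.cuda
# (e.g. release 11.3 for the pytorch 1.12.1 + cuda 11.3 setup mentioned above).
# Assumes nvcc is on PATH in the build environment.
import subprocess
import torch

print("torch", torch.__version__, "was built against CUDA", torch.version.cuda)

nvcc_out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
release_line = next(line for line in nvcc_out.splitlines() if "release" in line)
print("local CUDA toolkit:", release_line.strip())
```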

kongdebug commented 6 months ago
(quoted from the original issue report above)

Hi, the Docker image indeed did not include xformers: the author's config files do not mention it, and the code can run without it. I also ran into problems installing xformers in that environment. I have since set up xformers and APE's dependencies in a torch 2.1.2 + cu118 environment and uploaded that image as ape_cu118.

Cristhine commented 6 months ago
(quoted from the original issue report and kongdebug's reply above)

Hi, I downloaded and used your image, but after entering the container and running python, not even torch is available in the environment. Which Python environment in this image is the working one?

kongdebug commented 6 months ago

The environment name is ape.


Cristhine commented 6 months ago

(quoted from kongdebug's reply above)

Which GPU did you use for inference or training? When I run inference on an A100 I get: error in ms_deformable_im2col_cuda: no kernel image is available for execution on the device. It looks like an architecture mismatch (a quick capability check is sketched below).
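For what it's worth, "no kernel image is available" usually means the compiled CUDA extensions do not include the running card's architecture (sm_80 for an A100). A quick check, offered as a sketch rather than an APE utility:

```python
# Sketch: diagnose "no kernel image is available for execution on the device".
# It usually means the compiled extensions (here ms_deformable_im2col_cuda) were
# built without the running GPU's architecture, e.g. sm_80 for an A100.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"running on sm_{major}{minor}")              # A100 -> sm_80, 4090 -> sm_89, V100 -> sm_70
print("torch was built for:", torch.cuda.get_arch_list())

# If APE's extension predates this card, rebuilding it inside the container
# (e.g. with TORCH_CUDA_ARCH_LIST="8.0" set before the editable install) should help.
```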

kongdebug commented 6 months ago

It was run on a 4090 GPU.


shutu-1 commented 5 months ago

Try running python3 -m pip install -e . inside the container; it worked for me after doing that.

(quoted from Cristhine's A100 question above)

wdc233 commented 1 month ago

How did you manage to run it without xformers before? For me it does not run at all without xformers.

(quoted from the original issue report and kongdebug's reply above)

Without xformers it simply does not run for me.

shutu-1 commented 1 month ago

keyk13/ape_cu118:v1

(quoted from wdc233's reply and the original issue report above)

I am using the keyk13/ape_cu118:v1 image, and it does have an "xformers 0.0.23.post1+cu118 pypi_0 pypi" entry in the environment. You may need to update to the latest image: open https://hub.docker.com/repository/docker/keyk13/ape_cu118, click Tags, and pull the latest v1 image (a quick in-container check is sketched below).
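A quick in-container check (a sketch, not an official script), run inside the conda environment named ape mentioned earlier, to confirm the pulled image is the cu118 build with xformers installed:

```python
# In-container check (sketch): expected values are taken from this thread,
# i.e. torch 2.1.2+cu118 and xformers 0.0.23.post1+cu118 inside the "ape" conda env.
import torch
import xformers

print("torch:   ", torch.__version__)       # expected: 2.1.2+cu118
print("cuda:    ", torch.version.cuda)      # expected: 11.8
print("xformers:", xformers.__version__)    # expected: 0.0.23.post1+cu118
print("GPU:     ", torch.cuda.get_device_name(0))
```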

wdc233 commented 1 month ago

(quoted from shutu-1's reply above)

(attached screenshot) The link is not correct.

shutu-1 commented 1 month ago

Pull it directly with: docker pull keyk13/ape_cu118:v1