No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(40, 6768, 1, 64) (torch.bfloat16)
     key         : shape=(40, 6768, 1, 64) (torch.bfloat16)
     value       : shape=(40, 6768, 1, 64) (torch.bfloat16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.bfloat16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 64
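Every candidate backend listed (`decoderF`, `flshattF`, `cutlassF`, `smallkF`) is rejected for the same core reason: this xFormers installation was built without CUDA support, so `memory_efficient_attention_forward` has no kernel it can dispatch to (the extra `smallkF` lines are secondary: that kernel also only supports float32 and head dimensions up to 32, while the inputs here are bfloat16 with head dimension 64). The error itself points at `python -m xformers.info`, which lists which operators were actually built; the usual remedy is to reinstall an xFormers wheel that matches your PyTorch and CUDA versions, or to fall back to a plain attention implementation. As a reference for what the missing operator computes, here is a minimal pure-Python sketch of softmax(QKᵀ/√d)·V for a single head. This is illustrative only: the function name and list-based layout are mine, not the xFormers API, and a real fallback would use PyTorch tensors instead.

```python
import math

def attention(q, k, v):
    """Scaled dot-product attention on plain Python lists.

    q is a (seq_q x d) list of query vectors; k and v are (seq_k x d)
    lists of key and value vectors. Returns a (seq_q x d) list.
    """
    scale = 1.0 / math.sqrt(len(q[0]))  # 1/sqrt(head_dim)
    out = []
    for qi in q:
        # scaled dot-product scores of this query against every key
        scores = [scale * sum(a * b for a, b in zip(qi, kj)) for kj in k]
        # numerically stable softmax over the scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # output is the attention-weighted sum of the value vectors
        out.append([sum(w * vj[d] for w, vj in zip(weights, v))
                    for d in range(len(v[0]))])
    return out
```

With a single key, the softmax weight is 1 and the output is just that key's value vector, which makes the behaviour easy to sanity-check by hand.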