
Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype. #19

Open
ZeeMenng opened this issue Sep 28, 2024 · 8 comments

Comments

@ZeeMenng

I tried all kinds of weight_type settings and none of them work.

@wailovet
Contributor

Can you try turning off fp8_fast_mode?

@ZeeMenng
Author

No, that doesn't help. After turning it off I get RuntimeError: unsupported scalarType:
got prompt
transformer type: 5b
GGUF: False
model weight dtype: torch.float8_e4m3fn manual cast dtype: torch.float16
Encoded latents shape: torch.Size([1, 1, 16, 60, 90])
/opt/homebrew/Caskroom/miniconda/base/lib/python3.12/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: huggingface/transformers#31884
warnings.warn(
Requested to load SD3ClipModel_
Loading 1 new model
loaded completely 0.0 4541.693359375 True
!!! Exception during processing !!! unsupported scalarType
Traceback (most recent call last):
File "/Users/ZeeMenng/Project/ComfyUI/execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ZeeMenng/Project/ComfyUI/execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ZeeMenng/Project/ComfyUI/execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "/Users/ZeeMenng/Project/ComfyUI/execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ZeeMenng/Project/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/nodes.py", line 841, in process
autocast_context = torch.autocast(mm.get_autocast_device(device)) if autocastcondition else nullcontext()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 229, in init
dtype = torch.get_autocast_dtype(device_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: unsupported scalarType
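
(For context, the failure comes from the torch.autocast context created in nodes.py. As a rough illustration only, not the wrapper's actual code, guarding that call so it falls back to a no-op context on backends where autocast raises would look something like this; make_autocast_context is a hypothetical helper:

from contextlib import nullcontext
import torch

def make_autocast_context(device, dtype=torch.float16):
    # Fall back to a no-op context on backends where constructing the
    # autocast context raises, e.g. "unsupported scalarType" on some MPS builds.
    try:
        return torch.autocast(device_type=device.type, dtype=dtype)
    except RuntimeError:
        return nullcontext()

# hypothetical usage: with make_autocast_context(torch.device("mps")): ...
)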

@wailovet
Contributor

Try updating ComfyUI-CogVideoXWrapper.

@ZeeMenng
Author

I've already updated everything and still get the same error. I've tried all sorts of things with no luck; it's very strange.

I'm on an M1 Pro. I tried Python 3.12 and 3.11 separately and neither works. I don't know whether one of my dependency versions is the problem or something else.


@wailovet
Contributor

At the moment it looks like there are some problems with the support; see kijai/ComfyUI-CogVideoXWrapper#59.

@YAY-3M-TA3

For MPS, you can try:
in the CogVideoXLoader node in ComfyUI, set
weight_type to bf16
enable_vae_encode_tiling to true

then a code change:
custom_cogvideox_transformer_3d.py

Line 103, change to this:
query.to(torch.bfloat16), key.to(torch.bfloat16), value.to(torch.bfloat16), attn_mask=attention_mask, dropout_p=0.0, is_causal=False
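
(Assuming line 103 is the scaled_dot_product_attention call in the wrapper's attention processor, which the argument list suggests but which may differ between versions, the edited call would look roughly like this; query, key, value and attention_mask are assumed to already exist at that point in the file:

import torch
import torch.nn.functional as F

# Sketch of the suggested edit: cast q/k/v to bfloat16 so the MPS kernel
# receives a dtype it supports, leaving the rest of the call unchanged.
hidden_states = F.scaled_dot_product_attention(
    query.to(torch.bfloat16),
    key.to(torch.bfloat16),
    value.to(torch.bfloat16),
    attn_mask=attention_mask,
    dropout_p=0.0,
    is_causal=False,
)
)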

@LONGG1126

Did you manage to solve this problem?

@lvboy1

lvboy1 commented Dec 4, 2024

I ran into this problem on an M1 Max while running a Flux model. My workaround was to launch ComfyUI in CPU mode:
python main.py --force-fp16 --use-split-cross-attention --cpu
After switching to CPU the whole workflow runs to completion, but it is very, very slow =。=
It seems to be a Mac support issue; if you want speed, the only real option is a different machine.
