OOM on two 80GB GPUs #49
Comments
+1
+1
I also encountered this problem. Have you solved it yet? @kyleliang919 @edisonzf2020
Unfortunately no. I think you probably need at least 320 GB to handle this run.
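For a rough sense of where a number like 320 GB could come from, here is a back-of-envelope sketch. It assumes full fine-tuning of a 7B-parameter model with Adam in bf16 mixed precision; the model size and dtypes are illustrative assumptions, not stated in this thread.

```python
# Back-of-envelope for full fine-tuning memory.
# Assumptions (not from this thread): 7B params, Adam, bf16 mixed precision.
params = 7e9

weights_bf16 = params * 2  # bf16 parameter copy
grads_bf16   = params * 2  # bf16 gradients
master_fp32  = params * 4  # fp32 master weights kept by mixed precision
adam_fp32    = params * 8  # fp32 exp_avg + exp_avg_sq optimizer states

total_gb = (weights_bf16 + grads_bf16 + master_fp32 + adam_fp32) / 1e9
print(f"~{total_gb:.0f} GB before activations")  # -> ~112 GB
```

Activation memory at 8K-16K sequence length adds substantially on top of that, so several hundred GB in aggregate is plausible even when FSDP shards the weights and optimizer states across GPUs.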
Thank you for your reply.
I used 8 A800 (80 GB) GPUs to run the following, but it still hits an OOM error. Am I missing something, or did I set something incorrectly? accelerate launch finetune.py
Both with and without LoRA hit the OOM error. This is at only 8K sequence length, so memory consumption should be roughly 4x smaller than when training at 16K sequence length.
Accelerate is configured to use two GPUs and FSDP.
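For anyone who wants knobs to try before adding hardware, below is a minimal sketch of common memory-reduction settings one might apply inside finetune.py. It assumes the script uses Hugging Face Transformers and PEFT; the base model name and these exact APIs being what this repo's script uses are assumptions, not confirmed by the thread.

```python
# Minimal sketch of memory-reduction knobs (assumes Transformers + PEFT;
# the base model name is a placeholder, not this repo's actual default).
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # hypothetical base model
    torch_dtype=torch.bfloat16,             # half the memory of fp32 weights
)
model.gradient_checkpointing_enable()       # recompute activations in backward
model.config.use_cache = False              # KV cache is unused during training

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],    # attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # train only the LoRA adapters
model.print_trainable_parameters()
```

With FSDP, enabling CPU offload of parameters and optimizer state in the accelerate config is another common lever, at the cost of throughput.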