Hello everyone,

I wanted to flag a recent change to the DeepSeek-R1 models on the Hugging Face Hub (including the distilled versions): the chat template in `tokenizer_config.json` was updated to append `<think>\n` at the end of the rendered prompt. This modification appears intended to ensure the model always begins with the "thinking" step rather than skipping it entirely.
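For context, a quick way to see the new behavior (using the distilled 7B variant as an example; the exact prompt tail may differ slightly between checkpoints):

```python
from transformers import AutoTokenizer

# Render a prompt with the updated chat template; with the recent change the
# generation prompt itself should now end with "<think>\n", so the model's
# completion starts inside the thinking block instead of emitting "<think>".
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What is 2 + 2?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(repr(prompt[-20:]))  # expect the tail to include "<think>\n"
```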
However, this change has unfortunately broken the `deepseek_r1` reasoning parser in vLLM. I'm not sure who maintains the `deepseek_r1` reasoning parser within vLLM, but I would greatly appreciate any help in resolving this issue as soon as possible.
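To make the failure mode concrete, here is a minimal sketch (a hypothetical extractor, not the actual vLLM `deepseek_r1` parser): when the opening tag moves into the prompt, the completion only contains `</think>`, so any logic that requires a leading `<think>` finds no reasoning block.

```python
def extract_reasoning(completion: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer).

    Old assumption: the completion itself starts with "<think>".
    New reality: the prompt already ends with "<think>\n", so only the
    closing "</think>" appears in the completion.
    """
    start, end = "<think>", "</think>"
    if start in completion and end in completion:
        # Old-style completion that still carries both tags.
        reasoning = completion.split(start, 1)[1].split(end, 1)[0]
        answer = completion.split(end, 1)[1]
        return reasoning.strip(), answer.strip()
    if end in completion:
        # New-style completion: treat everything before "</think>" as the
        # reasoning instead of reporting "no reasoning found".
        reasoning, answer = completion.split(end, 1)
        return reasoning.strip(), answer.strip()
    return "", completion.strip()


old_style = "<think>Work through the sum...</think>The answer is 4."
new_style = "Work through the sum...</think>The answer is 4."  # no opening tag
print(extract_reasoning(old_style))
print(extract_reasoning(new_style))
```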
Thank you!