Open In Colab

DeepSeek R1 Overthinker

Using this app, you can force DeepSeek R1 models to think more deeply by extending their reasoning process. It runs unsloth-optimized models for better performance and supports context lengths limited only by available VRAM.
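
The models are loaded through unsloth. Here is a minimal loading sketch, assuming the unsloth Python package; the model id, context length, and quantization flag are illustrative and may differ from what the app actually uses:

```python
# Sketch only: model id, context length, and quantization flag are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Qwen-7B",  # any of the distills listed below
    max_seq_length=8192,   # raise as far as your VRAM allows
    load_in_4bit=True,     # 4-bit quantization to fit consumer GPUs
    dtype=None,            # let unsloth pick the best dtype for the GPU
)
FastLanguageModel.for_inference(model)  # enable unsloth's fast inference path
```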

The app works by detecting when the model tries to conclude its thoughts too early, replacing those premature conclusions with prompts that encourage further reasoning, and continuing until the minimum thinking threshold you set is reached.
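
A minimal sketch of that loop, assuming a Hugging Face-style model/tokenizer pair (e.g. the one loaded above) and DeepSeek R1's `<think> ... </think>` format; the threshold, continuation phrases, and sampling settings are illustrative, not the app's exact values:

```python
import random
import torch

# Illustrative continuation phrases spliced in whenever the model stops too early.
CONTINUATIONS = ["\nWait, let me reconsider.", "\nHmm, let me double-check that step."]

def overthink(model, tokenizer, question, min_thinking_tokens=1024):
    # Token id of the tag R1 emits when it wants to stop thinking
    # (taking [-1] guards against a possible leading token).
    end_think = tokenizer.encode("</think>", add_special_tokens=False)[-1]
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": question}],
        tokenize=False, add_generation_prompt=True,
    )
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    thought = 0
    while True:
        out = model.generate(ids, max_new_tokens=min_thinking_tokens,
                             eos_token_id=end_think, do_sample=True, temperature=0.6)
        new = out[0, ids.shape[1]:]
        thought += new.shape[0]
        if thought >= min_thinking_tokens or end_think not in new:
            # Threshold reached (or the model stopped on its own):
            # let it close the thought and write the final answer.
            ids = out
            break
        # The model tried to conclude too early: drop the closing tag and
        # splice in a phrase that nudges it to keep reasoning.
        text = tokenizer.decode(new, skip_special_tokens=False).replace("</think>", "")
        text += random.choice(CONTINUATIONS)
        extra = tokenizer(text, return_tensors="pt", add_special_tokens=False)
        ids = torch.cat([ids, extra.input_ids.to(model.device)], dim=-1)
    final = model.generate(ids, max_new_tokens=2048)
    return tokenizer.decode(final[0, ids.shape[1]:], skip_special_tokens=True)
```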



App by anzorq. If you like it, please consider supporting me:

Buy Me A Coffee



Features

  • 🤔 Force models to think longer and more thoroughly
  • 🔄 Customizable reasoning extensions and thinking thresholds
  • 🎯 Fine-grained control over model parameters (temperature, top-p, etc.)
  • 💭 Visible thinking process with token count tracking
  • 📝 LaTeX support for mathematical expressions
  • 🖥️ Optimized for various VRAM configurations
  • ♾️ Unlimited context length (VRAM-dependent)
  • 🔄 Choose from multiple model sizes (1.5B to 70B parameters)

Available Models

You can choose from any of the unsloth-optimized distilled DeepSeek R1 models:

Qwen-based Models

  • DeepSeek-R1-Distill-Qwen-1.5B
  • DeepSeek-R1-Distill-Qwen-7B
  • DeepSeek-R1-Distill-Qwen-14B
  • DeepSeek-R1-Distill-Qwen-32B

LLaMA-based Models

  • DeepSeek-R1-Distill-Llama-8B
  • DeepSeek-R1-Distill-Llama-70B

Choose the model size based on your available VRAM and performance requirements. Larger models generally produce better-quality responses but require more VRAM, and the Qwen- and LLaMA-based variants may perform differently on different tasks.

Note: You can run models up to 14B parameters on a free Google Colab T4 GPU.

Related Work

s1: Simple test-time scaling

The paper "s1: Simple test-time scaling" is an independent work by Niklas Muennighoff et al. that tests and validates the approach used in this repository. The key contributions of the paper include:

  • Developing budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending “Wait” multiple times to the model’s generation.
  • Curating a small dataset s1K of 1,000 questions paired with reasoning traces.
  • Achieving strong reasoning performance and test-time scaling with the Qwen2.5-32B-Instruct language model.

For more details, see the paper's repository.

Credits

License

This project is licensed under the MIT License. See the LICENSE file for details.

