Python Tokenize

This repo contains a Jupyter notebook to calculate the number of tokens in text, files, and folders using tokenizers from Hugging Face and OpenAI.

Installation

uv sync

Usage

Select the model to use for tokenization in the Jupyter notebook. You can choose a model from either the Hugging Face model hub or OpenAI. Set the model's name in the model_name variable.

  • For Hugging Face models, use the user/model name from the Hugging Face model hub, e.g. mixedbread-ai/mxbai-embed-large-v1
  • For OpenAI models, use the model name from the OpenAI API, e.g. gpt-4o. See the OpenAI API documentation for the list of available models.
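The notebook's exact cells aren't shown here, but the model_name convention above suggests a simple dispatch: IDs containing a slash are Hugging Face models, plain names are OpenAI models. A minimal sketch, assuming the `transformers` and `tiktoken` packages are installed (as `uv sync` would provide):

```python
def is_hf_model(model_name):
    # Hugging Face hub IDs have the form "user/model"; OpenAI model
    # names (e.g. "gpt-4o") contain no slash.
    return "/" in model_name

def get_encode(model_name):
    """Return an encode(text) -> list-of-token-ids callable for model_name."""
    if is_hf_model(model_name):
        from transformers import AutoTokenizer  # requires `transformers`
        return AutoTokenizer.from_pretrained(model_name).encode
    import tiktoken  # requires `tiktoken`
    return tiktoken.encoding_for_model(model_name).encode

# Hypothetical usage (downloads tokenizer data on first run):
# encode = get_encode("gpt-4o")
# print(len(encode("Hello world")))
```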

Calculate tokens in a text

  1. Set the text variable to your text.
  2. Run all cells.
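Whatever tokenizer is selected, counting tokens in a text reduces to the length of its encoding. A sketch, using a whitespace-splitting stand-in encoder for illustration (the notebook would pass the real tokenizer's encode function instead):

```python
def count_tokens(encode, text):
    # encode: callable mapping a string to a list of token ids
    return len(encode(text))

# Stand-in encoder: splits on whitespace instead of real BPE tokenization
words = lambda s: s.split()
print(count_tokens(words, "one two three"))  # 3 "tokens" under the stand-in
```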

Calculate tokens in a file

  1. Set the file_path variable to the path of your file.
  2. Run all cells.
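Counting tokens in a file is the same operation applied to the file's contents. A sketch of the step, again with the encoder passed in as a callable; the UTF-8 encoding is an assumption:

```python
from pathlib import Path

def count_file_tokens(encode, file_path):
    # Read the whole file as text, then count its tokens
    text = Path(file_path).read_text(encoding="utf-8")
    return len(encode(text))
```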

Calculate tokens in files in a folder

  1. Set the folder_path variable to the path of your folder.
  2. Optionally, specify a filter for which files to include.
  3. Run all cells.
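The folder case extends the file case with a glob-style filter. A sketch of how the steps above might look in code, assuming the filter is a glob pattern such as "*.md" (the notebook's actual filter mechanism may differ):

```python
from pathlib import Path

def count_folder_tokens(encode, folder_path, pattern="*"):
    # Return {file path: token count} for files matching pattern,
    # searching the folder recursively
    counts = {}
    for path in sorted(Path(folder_path).rglob(pattern)):
        if path.is_file():
            text = path.read_text(encoding="utf-8")
            counts[str(path)] = len(encode(text))
    return counts
```

Summing the returned values gives the total token count across the folder.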