Add Whisper language detection #1097
base: main
Conversation
Thanks for the PR! This will certainly be a useful feature. Regarding the implementation, I think it can be greatly simplified as follows:
Currently, the implementation seems to perform a full generation step (could be hundreds of forward passes).
Sorry about that — it was simpler to code, and the performance impact for my app was minimal! I've reworked things now to run only one pass for language detection. Thanks for all the work on this library.
Force-pushed from 7bbc92f to db84540
Hey there, please approve this feature, it's quite useful :)
const output = await this.generate({
    ...options,
    generation_config: {
        ...generation_config,
        good_words_ids,
        num_beams: 1,
        do_sample: false,
    },
    stopping_criteria,
    decoder_input_ids,
});
We should be able to replace this with a single forward pass (by calling `this.forward(...)` instead of running a full generation step).
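Conceptually, the single-forward-pass approach boils down to one step: run the decoder once and take the most probable token among the language tokens. The sketch below illustrates just that selection step (it is not the PR's actual code; `logits` stands in for the next-token logits from one forward pass, and `langTokenIds` are assumed IDs for tokens like `<|en|>`, `<|fr|>`):

```javascript
// Hypothetical sketch: given next-token logits from a single decoder
// forward pass, pick the most probable token restricted to the set of
// Whisper language tokens. Token IDs index directly into `logits`.
function detectLanguageFromLogits(logits, langTokenIds) {
  let best = langTokenIds[0];
  for (const id of langTokenIds) {
    if (logits[id] > logits[best]) best = id;
  }
  return best;
}
```

Note the argmax is restricted to `langTokenIds`; any other token, however probable, is ignored, which is what makes one pass sufficient for detection.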
There are a lot of user options for (and logic in) `generate`, and I wanted to respect them while running language detection. It was simpler to extend `generate` to stop after one pass than to duplicate that logic and use `forward` directly.
For example, suppose a user adds a logits processor that suppresses the first 10 seconds' worth of tokens, and a 15-second audio clip contains two languages with the context switching at 10 seconds. Language detection should then detect the second language, not the first.
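The "extend `generate` to stop after one pass" idea can be sketched as a stopping criterion that halts as soon as a single new token has been produced beyond the decoder prompt. This is a hypothetical illustration, not the PR's `stopping_criteria` implementation; `promptLength` is an assumed parameter for the decoder prompt size:

```javascript
// Hypothetical sketch: build a stopping criterion that returns true
// (stop generating) once the sequence contains one token more than
// the original prompt, i.e. after exactly one generation step.
function makeOneStepCriteria(promptLength) {
  return (inputIds) => inputIds.length >= promptLength + 1;
}
```

Because the criterion is evaluated inside the normal generation loop, all user-supplied options (logits processors included) still apply to that single step.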
 * @returns {Promise<number[]>} A list of language token IDs detected.
 */
async _detect_language(options) {
    const inputs = options.inputs
When testing this PR, `inputs` was not present in my case; instead I had `input_features`.
I noticed the type returns: "(Tensor of varying shape depending on the modality, optional): The sequence used as a prompt for the generation or as model inputs to the encoder. If null the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values."
By the way thanks for adding language detection, hope it will be merged soon :)
Sorry, I can't reproduce. And reading the typing, it sounds like `input_ids`/`input_values`/`input_features` should always be stored as `inputs`.
And even if the typing is sometimes wrong, patching `_detect_language()` to use e.g. `options?.inputs ?? options?.input_features` still won't fix the `generate()` function currently in main. So it sounds like it may be worth filing a separate issue and/or PR.
But if you're interested in just trying an alternative build, my "develop" branch is a fork of v3.0.2 with the language detection patch applied that works for me in a real app. Hope it helps!
I think it depends on the model used; you can probably reproduce it with https://huggingface.co/onnx-community/whisper-large-v3-turbo. I was able to fix it on my side with `const inputs = options.inputs ?? options.input_features;` in `_detect_language`.
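The suggested fallback can be factored into a small helper. This is a hypothetical illustration of the one-line fix discussed above (the helper name `resolveInputs` is not from the PR); it relies on the nullish coalescing operator `??`, which only falls through on `null`/`undefined`:

```javascript
// Hypothetical helper: prefer `options.inputs`, but fall back to
// `options.input_features` when only that key is present (as reported
// with the whisper-large-v3-turbo model), and to null otherwise.
function resolveInputs(options) {
  return options?.inputs ?? options?.input_features ?? null;
}
```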
I've already used turbo and it works fine for me, sorry! (I do get an unrelated error when using turbo instead of small in the test suite.)
I guess it's up to the maintainer to decide what to do with this PR, and edits are enabled.
But I don't understand why you're not also getting issues with the `generate()` code that's currently in main. And if so, that's worth a separate issue and PR.
See #302
Adds support for automatically detecting language to Whisper tasks.
The existing HuggingFace and Whisper implementations in Python were used as reference:
Hugging Face Transformers
Original Whisper
Also updates the existing Whisper test suites, including adding a string-similarity check on actual model output (as opposed to just output length). Please note that the "new" development dependency for these tests, "fastest-levenshtein", is already used by "webpack-cli".