HuggingFace models in NeMo #7365
Closed · joaocp98662 started this conversation in General
Replies: 1 comment, 7 replies
-
We're working to add more community models to NeMo. Currently, Falcon is not supported because of its non-standard architecture. We recently added support for Llama 2.
-
I would like to use several Transformer models provided by HuggingFace (e.g. tiiuae/falcon-7b) for inference with NeMo, but I cannot figure out how to do it. I found the method get_huggingface_lm_model() in nemo.collections.nlp.modules.common.huggingface.huggingface_utils, which imports the HF model using the AutoModel class from Transformers. If the model is a BERT it works fine, but the Falcon LLM is detected as an RW model and fails to match any of the model types defined in HUGGINGFACE_MODELS (BertModel, DistilBertModel, CamembertModel, RobertaModel, AlbertModel, GPT2Model) in huggingface_utils.py, so it raises an exception saying "Use HuggingFace API directly in NeMo".
How can I convert HF models to NeMo for inference and training?
I've searched the NeMo documentation and the internet, but I'm not finding any relevant information or examples. Can someone help me or point me in the right direction, please?
Thank you very much!
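For context, here is a minimal sketch of the dispatch behavior described above. The set of class names is copied from the HUGGINGFACE_MODELS list quoted in the question; the helper function name and exact error text are illustrative assumptions, not NeMo's actual implementation:

```python
# Hypothetical sketch of the model-type check in huggingface_utils.py
# (not NeMo's real code): only these architectures are wrapped by NeMo;
# anything else (e.g. Falcon, detected as an "RWModel") is rejected.
SUPPORTED_HF_MODELS = {
    "BertModel",
    "DistilBertModel",
    "CamembertModel",
    "RobertaModel",
    "AlbertModel",
    "GPT2Model",
}

def dispatch_hf_model(model_class_name: str) -> str:
    """Return the class name if NeMo can wrap it; otherwise raise."""
    if model_class_name not in SUPPORTED_HF_MODELS:
        # Mirrors the exception the question reports seeing.
        raise ValueError(
            f"{model_class_name}: Use HuggingFace API directly in NeMo"
        )
    return model_class_name
```

Under this sketch, tiiuae/falcon-7b (detected as an RW model) fails the check while a BERT checkpoint passes, which matches the behavior the question describes.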