Hello team,
I'm trying to translate long texts of 6-8 sentences each, kept under roughly 100 tokens overall (to stay within the model's memory limits), using the pretrained Helsinki-NLP Opus models. My input file contains one such text per line. Following the recommended workflow, I first run preprocess.sh; the preprocessed texts still contain all the terms present in the originals. I then translate with marian-decoder, but its output contains translations of only the first one or two sentences of each original text. Why does this happen? Is there any workaround?
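In case it helps frame the question: since each input line holds several sentences, one workaround I have been considering (this is my own assumption, not something confirmed by the model documentation) is to split each text into one sentence per line before preprocessing, translate, and then rejoin the translated sentences per original line. A minimal sketch, using a naive regex splitter (a real pipeline would use a proper sentence splitter) and hypothetical helper names:

```python
import re

def split_sentences(text):
    # Naive regex-based sentence splitter (illustrative only; splits after
    # sentence-final punctuation followed by whitespace).
    return [s for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]

def explode(lines):
    # Emit one sentence per output line, remembering how many sentences
    # each original text produced so the translations can be rejoined.
    counts, out = [], []
    for line in lines:
        sents = split_sentences(line)
        counts.append(len(sents))
        out.extend(sents)
    return out, counts

def rejoin(translated_lines, counts):
    # Reassemble translated sentences back into one text per line,
    # preserving the original line boundaries.
    result, i = [], 0
    for n in counts:
        result.append(' '.join(translated_lines[i:i + n]))
        i += n
    return result
```

The exploded lines would then go through preprocess.sh and marian-decoder as usual, and rejoin would restore the one-text-per-line layout. Is something like this the intended usage, or should marian-decoder handle multi-sentence lines directly?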