In the paper, you say the image/text is first passed through VGG19/TextCNN. However, I cannot find VGG19 or TextCNN in train.lua, so I guess the data used here is not the original data but features generated offline by VGG19 and TextCNN. Is that true?
Yes, we use pre-trained features. You can easily extract VGG19 and TextCNN features for your custom datasets, as there are multiple implementations of both on GitHub.
I have the raw Pascal Sentence dataset, with images and text spanning 20 classes.
I am having difficulty extracting feature vectors for the images (VGG19) and text (sentence CNN). Could you please share some insights or point me to any implementations for that part? I would also like to extract image features using ResNet50 etc. and text features using an LSTM etc.