Hi,
Thanks for the great work; we appreciate it. I have several questions/suggestions about it, though:
Do you plan to publish the explicit network architecture code soon? It would be beneficial for further research.
There are many absolute-path and import issues throughout the codebase; some paths are even hardcoded, which makes it more cumbersome to work with than necessary. I suggest fixing those for ease of use.
Do you have a standalone .py file to generate novel views from real-life images? As I understand it, the inversion pipeline is as follows: first, apps/infer_hybrid_encoder.py generates a latent w; then inversion/scripts/run_pti.py fine-tunes that w. Can I use the fine-tuned w and your hybrid encoder, feeding different angles to the generator and renderer, to produce the novel views? Did you apply any further adjustments when generating new views (to obtain the images in the last row of Fig. 7 of the SIGGRAPH paper)?
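Concretely, the angle sweep I have in mind looks roughly like the sketch below: build a camera pose per yaw angle and pass each pose to the renderer together with the PTI-tuned w. The function name, pose convention, and orbit radius here are my own guesses for illustration, not your repo's actual API:

```python
import math

def lookat_pose(yaw, pitch=0.0, radius=2.7):
    """Build a 4x4 camera-to-world matrix orbiting the origin.

    The camera sits on a sphere of the given radius and looks at the
    origin; yaw and pitch are in radians. Each pose would be fed to the
    generator/renderer along with the fine-tuned latent w (this is a
    guess at the intended usage, not the repo's interface).
    """
    # Camera position on the sphere.
    cx = radius * math.sin(yaw) * math.cos(pitch)
    cy = radius * math.sin(pitch)
    cz = radius * math.cos(yaw) * math.cos(pitch)

    # Forward vector: from the camera toward the origin (unit length).
    norm = math.sqrt(cx * cx + cy * cy + cz * cz)
    f = (-cx / norm, -cy / norm, -cz / norm)

    # Right = forward x world-up, then true up = right x forward.
    up = (0.0, 1.0, 0.0)
    r = (f[1] * up[2] - f[2] * up[1],
         f[2] * up[0] - f[0] * up[2],
         f[0] * up[1] - f[1] * up[0])
    rn = math.sqrt(sum(v * v for v in r)) or 1.0
    r = (r[0] / rn, r[1] / rn, r[2] / rn)
    u = (r[1] * f[2] - r[2] * f[1],
         r[2] * f[0] - r[0] * f[2],
         r[0] * f[1] - r[1] * f[0])

    # Rotation columns (right, up, -forward) plus the translation column.
    return [
        [r[0], u[0], -f[0], cx],
        [r[1], u[1], -f[1], cy],
        [r[2], u[2], -f[2], cz],
        [0.0,  0.0,  0.0,   1.0],
    ]

# Sweep yaw for a handful of novel views; each pose would be paired
# with the same fine-tuned w when calling the generator & renderer.
poses = [lookat_pose(math.radians(a)) for a in (-30, -15, 0, 15, 30)]
```

Is this the right mental model, or does your renderer expect a different camera parameterization?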
Best,
Batuhan