Hello! I have a question about how cross-fade works in the realtime GUI. This is more of a question that could become either a feature request or a bug report.
Theoretically it should blend consecutive chunks into each other, which should incur nearly no latency, since the slider presumably controls only how much of each chunk overlaps with the next.
But cross-fade seems to add a flat amount of latency to the realtime output. For example, I measured my latency at multiple cross-fade values, and a cross-fade of 0.15 has almost exactly 0.1 s (100 ms) more latency than 0.05, which is quite a bit for real-time communication.
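For reference, these numbers are consistent with a model in which each chunk's output is delayed by the full cross-fade duration. This is a guess on my part, not something confirmed from the code:

```python
# Hypothetical model (not verified against the actual code): if output
# is delayed by the full cross-fade length, then the latency difference
# between two settings should equal the difference between the slider
# values themselves.
setting_low = 0.05   # cross-fade, seconds
setting_high = 0.15  # cross-fade, seconds
predicted_extra_latency = setting_high - setting_low
print(round(predicted_extra_latency, 3))  # 0.1, matching the ~0.1 s I measured
```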
But I was measuring from a cold start: I would stay silent, then speak, and measure the delta between my real voice and the converted voice.
So my question is: is cross-fade attempting to process silence? That is, does it prepend (cross-fade length) milliseconds of silence to the beginning of a converted chunk so it can cross-fade that silence into the output? If so, is this intended? If not, what causes the extra latency from cross-fade?
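To make concrete what I mean by "should have nearly no latency hit", here is a rough sketch of cross-fading as overlap-add. All names here are illustrative, not taken from the project's code:

```python
# Hypothetical overlap-add cross-fade sketch (illustrative names, not
# the actual realtime GUI implementation).

def crossfade_chunks(prev_tail, next_chunk):
    """Blend the tail of the previous chunk into the head of the next.

    prev_tail: last n samples of the previous converted chunk.
    next_chunk: the new chunk; its first len(prev_tail) samples overlap
    with prev_tail.
    """
    n = len(prev_tail)
    out = list(next_chunk)
    for i in range(n):
        w = i / n  # linear fade weight: 0 -> 1 across the overlap
        out[i] = prev_tail[i] * (1.0 - w) + next_chunk[i] * w
    return out

# If the overlap region is carved out of audio that is already buffered,
# the blend itself adds no delay. Latency only appears if the pipeline
# instead waits for (or pads with silence) an extra n samples before
# emitting each chunk.
prev = [1.0, 1.0, 1.0, 1.0]           # tail of previous chunk
nxt = [0.0, 0.0, 0.0, 0.0, 0.5, 0.5]  # new chunk; first 4 samples overlap
print(crossfade_chunks(prev, nxt))    # [1.0, 0.75, 0.5, 0.25, 0.5, 0.5]
```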
Thank you so much for any insight! ❤️
I also noticed that real-time inference uses the GPU even during silence, which honestly seems like a waste of resources. I believe there is an opportunity for performance gains here by implementing some sort of noise gate in a function that honors a "response threshold".
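A minimal sketch of what I have in mind, assuming a simple RMS-energy gate in front of the model call. All function names here are hypothetical:

```python
# Hypothetical noise-gate sketch (not from the project code): skip the
# GPU inference call entirely when the input chunk is below a
# "response threshold" energy level.
import math

def rms(chunk):
    """Root-mean-square level of a chunk of float samples."""
    return math.sqrt(sum(s * s for s in chunk) / len(chunk))

def convert(chunk):
    """Stand-in for the GPU voice-conversion call, so the sketch runs."""
    return chunk

def process_chunk(chunk, threshold=0.01):
    """Run conversion only on chunks that exceed the gate threshold."""
    if rms(chunk) < threshold:
        return [0.0] * len(chunk)  # emit silence, no GPU work
    return convert(chunk)

print(process_chunk([0.0] * 4))               # gated: [0.0, 0.0, 0.0, 0.0]
print(process_chunk([0.5, -0.5, 0.5, -0.5]))  # passes the gate, converted
```

A real gate would also want a little hysteresis (hold the gate open briefly after speech ends) so word endings are not clipped.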