Open DVLP opened 2 months ago
Fair point, I have suffered from this myself, so it will be implemented in one of the upcoming releases.
Note for future me: the easy part is adding a `stream` property to the `AssistantSettings` struct; the hard part is adding non-streaming response handling, which is missing from the current plugin implementation.
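To make the two parts concrete, here is a rough sketch of what the split could look like. The field names besides `stream`, and the `handle_response` helper, are illustrative assumptions, not the plugin's actual code:

```python
from dataclasses import dataclass

# Hypothetical mirror of the plugin's AssistantSettings struct;
# everything except the `stream` field is an illustrative guess.
@dataclass
class AssistantSettings:
    model: str = "gpt-4o"
    stream: bool = True  # the easy part: a per-assistant toggle

def handle_response(settings, events):
    """Consume an already-decoded response.

    `events` is an iterable of text chunks when streaming, or a
    single-element iterable holding the full completion otherwise.
    """
    if settings.stream:
        # Streaming path: paint each chunk as it arrives
        # (in the plugin, each chunk would become a view edit).
        out = []
        for chunk in events:
            out.append(chunk)
        return "".join(out)
    # Non-streaming path (the missing piece): wait for the whole
    # payload, then apply it to the view as a single edit, which
    # also means a single undo step for the user.
    return "".join(events)

settings = AssistantSettings(stream=False)
print(handle_response(settings, ["Hello, ", "world"]))  # prints "Hello, world"
```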
Some thoughts as an update to this issue. I'm inclined to consider the non-streaming approach a bad one: if a network lag occurs (there are very few of them for gpt-4o at the moment, but that isn't the case for most third-party services, where both the model's prompting pace and the network delay can be significant), it leaves the user in a quite uncomfortable state, unable to determine whether it's safe to use the view they are currently focused on. It's completely opaque whether the request has already failed or is still pending, and changing the view state would break things.
This could be acceptable for a read-only view (an output view, or a separate read-only chat tab), but it's far from convenient for an ordinary view where the user's own content is presented.
The compromise solution for this issue (more of a workaround, to be honest) is to set a view checkpoint before the streaming run and to provide a separate contextual cmd/ctrl+z binding that undoes all the streamed edits at once (contextual meaning the command would only fire under those very strict conditions). But this is a relatively effort-intensive solution to implement, so it's unlikely I'll take it on soon.
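The checkpoint idea can be sketched in a few lines. This is a simulation, not plugin code: `FakeView` stands in for `sublime.View`, whose real `change_count()` and `run_command("undo")` methods the comments refer to; how `insert` maps onto actual view edits is glossed over:

```python
class FakeView:
    """Stand-in for sublime.View: tracks edits and supports undo."""
    def __init__(self):
        self._edits = []
    def change_count(self):
        # Mirrors sublime.View.change_count(): a revision counter.
        return len(self._edits)
    def insert(self, text):
        self._edits.append(text)
    def run_command(self, name):
        # Mirrors view.run_command("undo") for this toy model.
        if name == "undo" and self._edits:
            self._edits.pop()
    def text(self):
        return "".join(self._edits)

def checkpoint(view):
    # Record the revision just before streaming starts.
    return view.change_count()

def undo_to_checkpoint(view, mark):
    # The contextual cmd/ctrl+z handler: pop streamed edits until
    # the view is back at the pre-streaming revision.
    while view.change_count() > mark:
        view.run_command("undo")

view = FakeView()
view.insert("user text ")
mark = checkpoint(view)
for chunk in ("streamed ", "answer"):
    view.insert(chunk)
undo_to_checkpoint(view, mark)
print(view.text())  # prints "user text "
```

In a real implementation the "contextual" part would hinge on an `on_query_context` handler, so the binding only shadows the default undo while a stream's edits are the most recent ones.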
I think some sort of Sublime Text loading indicator along the bottom bar would be sufficient to tell you "I'm working on it"?
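For what it's worth, the usual way to surface this in Sublime Text is `view.set_status()` refreshed on a `sublime.set_timeout` loop. A minimal frame-cycling helper (the status key and label text are made up for illustration):

```python
# Frames for a simple back-and-forth status-bar animation.
FRAMES = ["[=  ]", "[ = ]", "[  =]", "[ = ]"]

def spinner_frame(tick):
    """Return the status-bar text for a given timer tick."""
    return "OpenAI: working " + FRAMES[tick % len(FRAMES)]

# In the plugin this would run on a sublime.set_timeout loop, e.g.:
#   view.set_status("openai_progress", spinner_frame(tick))
# and view.erase_status("openai_progress") once the request
# completes or fails, which also answers the "is it still
# pending?" question raised above.
for tick in range(3):
    print(spinner_frame(tick))
```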
Currently, once the answer comes back it takes a lot of undo operations just to revert it. There should be an option to disable streaming globally or for certain modes.