Closed: joeduffy closed this issue 1 year ago
We've seen a few cases where tokens appear to be dropped between the Pulumi AI backend and frontend. These may be cases where the OpenAI API returns multiple tokens per "chunk" in its streaming results. The problem seems to have become more common recently, perhaps because performance improvements in the GPT-4 API are triggering a race condition in our streaming implementation.
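For context, here's a minimal TypeScript sketch of the suspected failure mode (hypothetical code, not our actual implementation; the `data:` framing and `choices[0].delta.content` field are just the standard OpenAI chat-completions SSE shape). A reader that assumes each network read carries exactly one event silently drops tokens whenever the API packs several events into one chunk; buffering on event boundaries avoids that:

```typescript
// Sketch: parse an OpenAI-style SSE stream without assuming one event per read.
async function readTokens(
    stream: ReadableStream<Uint8Array>,
    onToken: (token: string) => void,
): Promise<void> {
    const decoder = new TextDecoder();
    const reader = stream.getReader();
    let buffer = "";
    for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        buffer += decoder.decode(value, { stream: true });
        // A single read may carry zero, one, or many complete events;
        // split on the SSE delimiter and keep any trailing partial event.
        const events = buffer.split("\n\n");
        buffer = events.pop() ?? "";
        for (const event of events) {
            for (const line of event.split("\n")) {
                if (!line.startsWith("data: ")) continue;
                const payload = line.slice("data: ".length);
                if (payload === "[DONE]") return;
                const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
                if (delta) onToken(delta);
            }
        }
    }
}
```

A naive version that parses each raw chunk as a single event works most of the time and then drops tokens exactly when the API batches events, which would match the "more common as GPT-4 got faster" observation.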
I just saw what I think is this bug again, in a weird way; notice the "Google Cloud PlatformG)" bit:
> The pattern you're asking about describes setting up a VPC (Virtual Private Cloud) within Google Cloud PlatformG), creating subnetworks within that VPC, and applying firewall rules to control the traffic within and between these subnetworks.
Query was: https://www.pulumi.com/ai/?convid=863b57e4-0565-48d5-b200-cb6fa645dddd.
@AaronFriel Am I correct that the imminent release (today) will address this problem?
Yeah, that looks like it dropped tokens around "(GCP)": "Google Cloud PlatformG)" is presumably "Google Cloud Platform (GCP)" with the middle characters missing.
It looks like this was addressed with the new site rollout; we've seen no reports since.
See below. Is there perhaps a bug in how we detect and format code blocks? It could very well be that ChatGPT gave us a garbage response, but I've never seen this before, including from ChatGPT itself.
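One way a streaming renderer could produce that kind of garbage (purely illustrative; `completedLines` is a hypothetical helper, not our actual code): if each raw chunk is scanned for a triple-backtick fence, a fence split across two chunks goes undetected. Accumulating chunks into complete lines before scanning avoids it:

```typescript
// Sketch: re-chunk a token stream into complete lines so fence detection
// never sees a code fence split across chunk boundaries.
function* completedLines(chunks: Iterable<string>): Generator<string> {
    let partial = "";
    for (const chunk of chunks) {
        partial += chunk;
        const lines = partial.split("\n");
        partial = lines.pop() ?? ""; // keep the unterminated tail for the next chunk
        yield* lines;
    }
    if (partial) yield partial;
}

// Usage: toggle code-block state only on complete lines. Note the fence
// here arrives split as "``" + "`ts" across two chunks.
let inCodeBlock = false;
for (const line of completedLines(["Here is code:\n``", "`ts\nconst x = 1;\n``", "`\n"])) {
    if (line.trim().startsWith("```")) inCodeBlock = !inCodeBlock;
    else console.log(inCodeBlock ? `code: ${line}` : `prose: ${line}`);
}
```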