@anth-volk It seems like in Analysis.jsx we are not using the openai package at all. I can make the prompt show, but the data is hard-coded. How is the OpenAI API supposed to be used with this component?
@czhou578 Thanks for bringing this up. We're actually going to switch over to a competitor, Claude (per this discussion), so could you hold off until then? We're looking to have Claude implemented by Friday, as it's a high-priority fix. If you'd like to take that on (or do part of it with assistance), you could open an issue and start once we have a subscription, which should be by 5 PM Eastern US today. Are you interested?
Sure, I'll do part of it with assistance then. Thanks for letting me know!
I think the prompt can be fixed independently of the Claude switch. Viewing the prompt occurs before the LLM query.
@czhou578 In that case, let me look into it.
I now better understand the issue at hand. The prompt itself is hard-coded, which is correct. The API is what interacts with OpenAI: where you find the call to the /analysis endpoint in the code, that is what generates the output. The problem identified in this issue is that the prompt itself isn't rendered in the CodeEditor block.
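To illustrate the flow (a minimal sketch, assuming the component posts the prompt to the /analysis endpoint; the names below are hypothetical and not from the actual Analysis.jsx source):

```jsx
// Hypothetical sketch of the flow described above, not the actual
// Analysis.jsx code. The prompt is hard-coded on the client; only
// the POST to the /analysis endpoint involves the LLM backend.
const prompt = "Write an analysis of this policy reform..."; // hard-coded text

async function fetchAnalysis() {
  // The backend, not the client, talks to OpenAI (soon Claude);
  // the client just sends the prompt and receives generated output.
  const response = await fetch("/analysis", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return response.json();
}
```

Since the prompt exists client-side before this request is ever made, it can be displayed regardless of whether the LLM call succeeds.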
Makes sense. I'll make the hard-coded prompt show up in the CodeBlock.jsx component.
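Roughly something like this (a sketch only; the prop names on CodeBlock are assumptions, not its actual interface):

```jsx
import React from "react";
import CodeBlock from "./CodeBlock";

// Hypothetical fix: render the hard-coded prompt immediately,
// independently of the LLM response. Prop names are assumed.
export default function Analysis() {
  const prompt = "Write an analysis of this policy reform..."; // hard-coded
  return (
    <>
      <CodeBlock language="markdown" data={prompt} />
      {/* the LLM-generated analysis renders separately once /analysis responds */}
    </>
  );
}
```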
Example: this shows even when the microsim completes:
https://policyengine.org/uk/policy?focus=policyOutput.analysis&reform=3177&region=uk&timePeriod=2024&baseline=1