Closed: jryu01 closed this 5 months ago
Do we have examples that this works as intended? Do we have examples of successful query -> response pairs? Where is this documented?
Here's one of the working examples I've tried:
```python
from gollm.openai.tool_utils import generate_response

query = """
The following sections describe the input and output of an optimization process for an epidemiology model.

INPUT:
{
  "id": "a28314b2-c000-4c60-9bee-ce72eb4aef22",
  "updatedOn": "2024-05-28T13:07:42.875+00:00",
  "name": "5c9f3355-986c-4b96-9dc4-c6661a4b9772",
  "fileNames": [],
  "temporary": false,
  "publicAsset": false,
  "executionPayload": {
    "engine": "ciemss",
    "user_id": "no_user_provided",
    "model_config_id": "2a44ace9-26ab-44ba-9de8-7644edc5ace5",
    "timespan": {
      "start": 0,
      "end": 10
    },
    "policy_interventions": {
      "selection": "param_value",
      "param_names": ["beta"],
      "param_values": [0],
      "start_time": [2]
    },
    "step_size": 1,
    "qoi": {
      "method": "max",
      "contexts": ["Infected"]
    },
    "risk_bound": 1,
    "initial_guess_interventions": [0.0001],
    "bounds_interventions": [[0], [1]],
    "extra": {
      "num_samples": 100,
      "inferred_parameters": null,
      "maxiter": 5,
      "maxfeval": 25,
      "is_minimized": true,
      "alpha": 0.95,
      "solver_method": "dopri5",
      "solver_options": {}
    },
    "fixed_static_parameter_interventions": [
      {
        "timestep": 5,
        "name": "beta",
        "value": 0.01
      }
    ]
  },
  "resultFiles": [
    "policy.json",
    "optimize_results.json",
    "optimize_results.dill"
  ],
  "type": "OPTIMIZATION",
  "status": "COMPLETE",
  "progress": 0,
  "startTime": "2024-05-28T13:23:46.962",
  "completedTime": "2024-05-28T13:20:12.956",
  "engine": "CIEMSS",
  "updates": []
}

OUTPUT:
{
  "minimization_failures": 0,
  "nfev": 99,
  "lowest_optimization_result": {
    "x": "[1.e-05]",
    "status": 1,
    "success": "True",
    "message": "Optimization terminated successfully.",
    "nfev": 15,
    "fun": 1e-05,
    "maxcv": 0.0
  },
  "x": "[0.0001]",
  "fun": 1e-05,
  "message": [
    "requested number of basinhopping iterations completed successfully"
  ],
  "nit": 5,
  "success": "True"
}

Provide a summary in 100 words or less."""

generate_response(query)
```

which returned:

```
'The optimization process for the epidemiology model, identified by "a28314b2-c000-4c60-9bee-ce72eb4aef22," successfully completed with no minimization failures. The process involved adjusting the parameter "beta" over a timespan of 10 units, with a focus on minimizing the number of infected individuals. The final result achieved a function value of 1e-05 after 99 function evaluations and 5 iterations. The optimization was successful, with the best parameter value found being 0.0001. The process utilized the CIEMSS engine and various solver options, completing on May 28, 2024.'
```
As I mentioned, though, I'm expecting this function to be used for very generic LLM queries from the user. Also, here's the reference to the related issue: https://github.com/orgs/DARPA-ASKEM/projects/3/views/2?pane=issue&itemId=66836041
Description
As part of https://github.com/orgs/DARPA-ASKEM/projects/3/views/2?pane=issue&itemId=66836041
Added a function for a completion task that sends a request to an OpenAI model to generate a response based on a given instruction.
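For context, a completion helper along these lines might look like the following minimal sketch. This is an assumption about the shape of `generate_response`, not the actual gollm implementation; the `client` and `model` parameters here are hypothetical, added so the function can be exercised without a live API key:

```python
def generate_response(instruction: str, client=None, model: str = "gpt-4o") -> str:
    """Send a single completion-style request to an OpenAI chat model
    and return the generated text.

    `client` is injectable for testing; when omitted, a real OpenAI
    client is constructed (hypothetical wiring, not gollm's actual code).
    """
    if client is None:
        # Imported lazily; the real client reads OPENAI_API_KEY
        # from the environment.
        from openai import OpenAI
        client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": instruction}],
    )
    # The first choice holds the generated completion text.
    return response.choices[0].message.content
```

With this shape, the example in the thread reduces to `generate_response(query)`, where `query` is the full prompt string including the INPUT/OUTPUT JSON and the summary instruction.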
Resolves #(issue)