This PR adds a CLI command to call out to the new Apollo Server.
This is how I expect most consumers to use the AI services being developed over in `gen`.
Some services, like adaptor generation, are likely to get a dedicated command, with a simpler interface and more help.
Apollo logs are streamed into the CLI via a websocket connection.
The CLI recognises the `{ files }` key in results and will either nicely log the files to stdout or write the files to disk, according to input parameters.
Since this is bleeding-edge functionality, I only intend to add very light test coverage.
## Related issue
Closes #680
## Basic Usage
The basic usage format is:
```
openfn apollo <service-name> path/to/input.json
```
By default, the CLI will call out to the staging server (which obviously doesn't exist yet). But you can pass `--local` to use a local server on the default port, or set `OPENFN_APOLLO_DEFAULT_ENV=local` to default to using a local server.
Output will be logged to stdout by default, or you can pass `-o` and set the output path, just like other services.
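The server-resolution logic above can be sketched as follows. The staging URL and local port are placeholders, not the real values:

```javascript
// Illustrative sketch of resolving the Apollo server URL from the
// --local flag or the OPENFN_APOLLO_DEFAULT_ENV environment variable.
const STAGING_URL = 'https://apollo-staging.example.org'; // placeholder
const LOCAL_URL = 'http://localhost:3000'; // assumed default port

function resolveApolloUrl(opts = {}, env = process.env) {
  if (opts.local) return LOCAL_URL;
  if (env.OPENFN_APOLLO_DEFAULT_ENV === 'local') return LOCAL_URL;
  return STAGING_URL;
}
```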
## Sample output
$ openfn apollo adaptor_gen tmp/adaptor-cat-fact.json --local
[CLI] ♦ Calling Apollo service: adaptor_gen
[CLI] ✔ Using apollo server at http://localhost:3000
[CLI] ♦ Calling apollo: ws://localhost:3000/services/adaptor_gen
[APO] ℹ None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
[APO] ℹ INFO:adaptor_gen:Generating adaptor template for /facts
[APO] ℹ INFO:adaptor_gen:prompt: Create an OpenFn function that accesses the //facts endpoint
[APO] ℹ INFO:signature_generator:Generating signature for model gpt3_turbo
[APO] ℹ INFO:signature_generator:Parsing OpenAPI specification
[APO] ℹ INFO:signature_generator:Extracting API information from parsed spec with provided instruction
[APO] ℹ INFO:signature_generator:Generating gpt3_turbo prompt for signature generation
[APO] ℹ INFO:inference.gpt3_turbo:OpenAI GPT-3.5 Turbo client loaded.
[APO] ℹ INFO:inference.gpt3_turbo:Generating
[APO] ℹ INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
[APO] ℹ INFO:inference.gpt3_turbo:done
[APO] ℹ INFO:signature_generator:Signature generation complete
[APO] ℹ INFO:code_generator.prompts:Generating prompt for: code
[APO] ℹ INFO:code_generator.prompts:Prompt generation complete for: code
[APO] ℹ INFO:inference.gpt3_turbo:OpenAI GPT-3.5 Turbo client loaded.
[APO] ℹ INFO:inference.gpt3_turbo:Generating
[APO] ℹ INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
[APO] ℹ INFO:inference.gpt3_turbo:done
[CLI] ✔ Result:
-------------
Adaptor.d.ts
-------------
```typescript
/**
 * Retrieves facts from the //facts endpoint and includes them in the state data.
 * Sends a GET request to the //facts endpoint.
 * @example
 * getFacts(callback)
 * @function
 * @param {Function} callback - A callback which is invoked with the resulting state at the end of this operation. Allows users to customize the resulting state. State.data includes the response from the //facts endpoint.
 * @example <caption>Get facts from the //facts endpoint</caption>
 * getFacts()
 * @returns {Function} A function that updates the state with the retrieved facts.
 */
export function getFacts(callback?: Function): Operation;
```
-----------
Adaptor.js
-----------
```javascript
import { http } from '@openfn/language-common';
/**
* Retrieves facts from the //facts endpoint and includes them in the state data.
* Sends a GET request to the //facts endpoint.
* @example
* getFacts(callback)
* @function
* @param {Function} callback - A callback which is invoked with the resulting state at the end of this operation. Allows users to customize the resulting state. State.data includes the response from the //facts endpoint.
* @example <caption>Get facts from the //facts endpoint</caption>
* getFacts()
* @returns {Function} A function that updates the state with the retrieved facts.
*/
export function getFacts(callback?: Function): Operation {
return async (state) => {
try {
const response = await http.get('//facts');
const data = response.data;
const newState = { ...state, data };
if (callback) {
return callback(newState);
}
return newState;
} catch (error) {
console.error('Error retrieving facts:', error);
return state;
}
};
}
```
## Future Work
Maybe in this PR, maybe not?
* [ ] `openfn apollo list` should list available services
* [ ] `openfn apollo help <service-name>` should print help for a specific service (probably the readme)
* [x] Stream logs back to the CLI via websocket