jozu-ai / kitops

An open source DevOps tool for packaging and versioning AI/ML models, datasets, code, and configuration into an OCI artifact.
https://KitOps.ml
Apache License 2.0
510 stars 56 forks

Updates to dev UI #300

Open gorkem opened 6 months ago

gorkem commented 6 months ago

Describe the problem you're trying to solve
The dev UI should do more to help application developers integrate with the model.

Describe the solution you'd like

annigro commented 6 months ago

https://build.nvidia.com/explore/discover

annigro commented 6 months ago

UX Design

gorkem commented 6 months ago

Here is some more clarification on the requests.

POST /completion is an API specific to the llama.cpp server. Example code for its usage is available, which we will adapt for the code-generation feature.
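For reference, a request body for llama.cpp's `/completion` endpoint can be sketched as below. The field names follow the llama.cpp server API; the server URL and option values are illustrative assumptions, and `buildCompletionPayload` is a hypothetical helper, not part of the codebase:

```typescript
// Minimal sketch of a llama.cpp `/completion` request body.
interface CompletionRequest {
  prompt: string;
  n_predict?: number;   // max tokens to generate
  temperature?: number; // sampling temperature
  stop?: string[];      // stop sequences
}

function buildCompletionPayload(prompt: string): CompletionRequest {
  // Option values here are placeholder assumptions.
  return { prompt, n_predict: 128, temperature: 0.7, stop: ["</s>"] };
}

// Usage (assumes a llama.cpp server running on localhost:8080):
// const res = await fetch("http://localhost:8080/completion", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildCompletionPayload("Hello")),
// });
// const { content } = await res.json();
```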

POST /v1/chat/completions The chat completion endpoint is compatible with the OpenAI API, so we can use existing OpenAPI libraries for code generation.
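For comparison, the OpenAI-compatible shape wraps the conversation in a `messages` array. This is a sketch only; the model name and temperature are placeholder assumptions, and `buildChatPayload` is a hypothetical helper:

```typescript
// Minimal sketch of an OpenAI-compatible /v1/chat/completions request body.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  temperature?: number;
}

function buildChatPayload(userText: string): ChatCompletionRequest {
  return {
    model: "local-model", // placeholder; a local server may ignore this field
    messages: [{ role: "user", content: userText }],
    temperature: 0.7,
  };
}
```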

Options

The initial thought was to reduce the options to match OpenAI's. However, since both endpoints support the full set of options, we should continue to expose all of them but categorize them better.

Categories

Text Generation Controls

Sampling and Diversity

Advanced Settings and Customization

Probability and Statistical Controls
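One way to wire these categories into the UI is a simple lookup table. The assignment of each llama.cpp option to a category below is an illustrative assumption, not a final grouping:

```typescript
// Hypothetical grouping of llama.cpp sampling options into the four
// categories above; the exact assignments are assumptions for illustration.
type Category =
  | "Text Generation Controls"
  | "Sampling and Diversity"
  | "Advanced Settings and Customization"
  | "Probability and Statistical Controls";

const OPTION_CATEGORIES: Record<string, Category> = {
  n_predict: "Text Generation Controls",
  stop: "Text Generation Controls",
  temperature: "Sampling and Diversity",
  top_k: "Sampling and Diversity",
  top_p: "Sampling and Diversity",
  repeat_penalty: "Advanced Settings and Customization",
  seed: "Advanced Settings and Customization",
  min_p: "Probability and Statistical Controls",
  typical_p: "Probability and Statistical Controls",
};

function categoryOf(option: string): Category | undefined {
  return OPTION_CATEGORIES[option];
}
```

A flat table like this keeps the grouping in one place, so reordering or renaming categories only touches data, not component code.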

Single or multiple pages

I have not found a good reason to keep the multi-page layout. Going back and forth to change parameters is unintuitive, so I suggest we do a single-page implementation.

annigro commented 6 months ago

We talked about how the generated code for chat mode can get really long and therefore messy. @gorkem I assume the first two lines are your solution. How would it look in the UI?

javisperez commented 6 months ago

@annigro no, Gorkem's first two lines are related to internal code usage and should be transparent to the UI apart from one line. We talked about doing something like this for it:

message: [{ actor: 'user', content: 'foo bar fooz' }]

and when it is too long:

message: [{ actor: 'user', content: 'foo bar ...' }] <-- ellipsis, but only for the message's content, and only when it is too long.

but clicking on "copy code" would always copy the whole thing, regardless of the ellipsis.
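That behavior can be sketched as below: truncate only the displayed content, never the copied payload. The length threshold and helper names are assumptions for illustration:

```typescript
// Display helper: shorten a message's content with an ellipsis when it
// exceeds a threshold. The "copy code" action always uses the full content.
interface Message {
  actor: string;
  content: string;
}

const MAX_DISPLAY_LENGTH = 40; // arbitrary threshold, an assumption

function displayContent(msg: Message): string {
  return msg.content.length > MAX_DISPLAY_LENGTH
    ? msg.content.slice(0, MAX_DISPLAY_LENGTH) + "..."
    : msg.content;
}

function copyContent(msg: Message): string {
  return msg.content; // ignores the display ellipsis entirely
}
```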

javisperez commented 6 months ago

@gorkem I gave the OpenAI API a try, but both the payload and the response differ from what we already have for llama.cpp (which makes sense). I vote to keep using the llama.cpp completion endpoint instead of redoing everything, including the API layer, just to support the OpenAI v1 endpoints. Thoughts?

annigro commented 5 months ago

@gorkem Figma file to review

gorkem commented 5 months ago

I did a pass and left a few comments.

annigro commented 4 months ago

Categories and values for devmode