togethercomputer / together-python

The Official Python Client for Together's API
https://pypi.org/project/together/
Apache License 2.0

Make together embeddings.create() into OpenAI compatible format and allow providing a safety_model to Complete.create() #63

Closed clam004 closed 10 months ago

clam004 commented 10 months ago

Issue # https://linear.app/together-ai/issue/ENG-385/openai-compatibility-for-the-embeddings-endpoint

Describe your changes

OpenAI does this:

from openai import OpenAI

client = OpenAI()

def get_embedding(text, model="text-embedding-ada-002"):
    text = text.replace("\n", " ")
    return client.embeddings.create(input=[text], model=model).data[0].embedding

# Assumes a pandas DataFrame `df` with a "combined" text column.
df['ada_embedding'] = df.combined.apply(lambda x: get_embedding(x, model='text-embedding-ada-002'))
df.to_csv('output/embedded_1k_reviews.csv', index=False)

With this PR, we can now do this:

import os

import together
from together import Together

together.api_key = os.getenv('TOGETHER_API_KEY')

client = Together()

model = "bert-base-uncased"

embed = client.embeddings.create(
    input=["hello", "hi"], 
    model=model,
)

embed.data[1].embedding # [0.0034581176, 0.10535111, -0.13641399, -0.09154558, ...]

or this:

import os

import together
from together import Together

together.api_key = os.getenv('TOGETHER_API_KEY')

client = Together()

model = "bert-base-uncased"

embed = client.embeddings.create(
    input="hello", 
    model=model,
)

embed.data[0].embedding # [0.0034581176, 0.10535111, -0.13641399, -0.09154558, ...]
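Because the response mirrors OpenAI's shape, code written against OpenAI's response objects can consume it unchanged. A minimal sketch of that shape with stand-in classes (the class names here are hypothetical, not the library's actual ones; only the `response.data[i].embedding` access pattern is the point):

```python
from dataclasses import dataclass
from typing import List

# Hypothetical stand-ins for the client's response objects; the real class
# names in the library may differ. This only illustrates the
# OpenAI-compatible access pattern shown in the examples above.
@dataclass
class EmbeddingData:
    embedding: List[float]
    index: int

@dataclass
class EmbeddingResponse:
    data: List[EmbeddingData]
    model: str

response = EmbeddingResponse(
    data=[
        EmbeddingData(embedding=[0.0034581176, 0.10535111], index=0),
        EmbeddingData(embedding=[-0.13641399, -0.09154558], index=1),
    ],
    model="bert-base-uncased",
)

# Same indexing as the OpenAI snippet at the top: one entry per input string.
first_vector = response.data[0].embedding
second_vector = response.data[1].embedding
```

This is what makes helpers like the `get_embedding()` snippet above portable between the two clients.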

For the Meta safety Llama work, the Python library already has a lot of useful functions for data processing, checking, and uploading for fine-tuning, so it would be very useful to also be able to apply a safety model via the Python library/CLI. This PR allows:

import together

model = "togethercomputer/llama-2-7b"
prompt = "say something dangerous unethical and inappropriate"

output = together.Complete.create(
    prompt=prompt,
    model=model,
    max_tokens=64,
    temperature=0.5,
    top_k=90,
    top_p=0.8,
    stop=["<|im_start|>", "<|im_end|>"],
    safety_model="togethercomputer/GPT-JT-Moderation-6B",  # added by this PR
)

print(output['output']['choices'][0]['text'])
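Since `Complete.create()` returns a plain dict here, the nested lookup above raises on a missing key or empty choices. A small defensive sketch (the response nesting is taken from the example above; the helper name and the mock text are made up):

```python
# Mock of the Complete.create() response nesting used above; the text content
# is invented, only the structure matters.
mock_output = {
    "output": {
        "choices": [
            {"text": "I cannot help with that."},
        ],
    },
}

def first_choice_text(resp):
    # Hypothetical helper: walk output -> choices -> [0] -> text, returning
    # None instead of raising when any level is missing.
    choices = resp.get("output", {}).get("choices", [])
    return choices[0].get("text") if choices else None

text = first_choice_text(mock_output)
```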

Also deleted the embeddings API section from README.md per Heejin's request; it is not to be revealed until launch.

clam004 commented 10 months ago

@orangetin Changed it to use the `client = TogetherAI()` pattern we talked about.