Issue # https://linear.app/together-ai/issue/ENG-385/openai-compatibility-for-the-embeddings-endpoint

Describe your changes

OpenAI does this:
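(A minimal sketch of the OpenAI-style call being mirrored; the model name below is illustrative.)

import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

embed = client.embeddings.create(
    input=["hello", "hi"],
    model="text-embedding-ada-002",  # illustrative model name
)
embed.data[1].embedding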
With this PR, we can now do this:

import os
import together
from together import Together

together.api_key = os.getenv('TOGETHER_API_KEY')
client = Together()

model = "bert-base-uncased"
embed = client.embeddings.create(
    input=["hello", "hi"],
    model=model,
)
embed.data[1].embedding  # [0.0034581176, 0.10535111, -0.13641399, -0.09154558 . . .]
or this:
import os
import together
from together import Together

together.api_key = os.getenv('TOGETHER_API_KEY')
client = Together()

model = "bert-base-uncased"
embed = client.embeddings.create(
    input="hello",
    model=model,
)
embed.data[0].embedding  # [0.0034581176, 0.10535111, -0.13641399, -0.09154558 . . .]
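As a quick usage sketch (not part of this PR), the returned vectors can be compared with cosine similarity, e.g. with numpy, continuing from the snippet above:

import numpy as np

# embedding for "hello" from the single-input example above
a = np.array(embed.data[0].embedding)
# fetch an embedding for "hi" with the same model and compare
b = np.array(client.embeddings.create(input="hi", model=model).data[0].embedding)

cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)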
In the Meta safety Llama work, we have a lot of useful Python library functions for data processing, checking, and uploading for fine-tuning, so it would be very good to also be able to use a safety model via the Python library/CLI. This PR allows:
model = "togethercomputer/llama-2-7b"
prompt = "say something dangerous unethical and inappropriate"
output = together.Complete.create(
prompt = prompt,
model = model,
max_tokens = 64,
temperature = 0.5,
top_k = 90,
top_p = 0.8,
stop = ["<|im_start|>","<|im_end|>"],
safety_model = "togethercomputer/GPT-JT-Moderation-6B",
)
print(output['output']['choices'][0]['text'])
Also deleted the embeddings API from README.md per Heejin's request; it is not to be revealed until launch.