[Closed] ParseDark closed this issue 1 month ago
Code:

import ell

@ell.simple(model="gpt-4o")
def hello(name: str):
    """You are a helpful assistant."""  # System prompt
    return f"Say hello to {name}!"  # User prompt

greeting = hello("Sam Altman")
print(greeting)
What version of ell are you on? Can you run pip install --upgrade ell?
Thanks, ell dev team. My current ell-ai version is 0.0.12:
# pip list | grep ell
ell-ai 0.0.12
So I tried to reinstall:
pip install ell-ai
# Successfully installed ell-ai-0.0.12
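One note on the commands above: a plain pip install leaves an already-installed package untouched, so the upgrade the dev team asked for needs the flag:

pip install --upgrade ell-ai
pip show ell-ai   # confirm which version ended up installed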
I re-ran the Hello World example, but I still get this error:
 1 │ import ell
 2 │ import logging
 3 │ from typing import List
 4 │
 5 │ # Configure logging
 6 │ logging.basicConfig(level=logging.DEBUG)
 7 │
 8 │ @ell.simple(model="gpt-4o")
 9 │ def hello(name: str) -> str:
10 │     """You are a helpful assistant."""  # System prompt
11 │     try:
12 │         greeting = f"Say hello to {name}!"  # User prompt
13 │         logging.debug(f"Greeting generated for {name}: {greeting}")
14 │         return greeting
15 │     except Exception as e:
16 │         logging.error(f"Error generating greeting for {name}: {e}")
17 │         return "Error generating greeting."
18 │
19 │ def greet_multiple(names: List[str]) -> List[str]:
20 │     """Greet multiple people."""
21 │     greetings = []
22 │     for name in names:
23 │         greeting = hello(name)
24 │         greetings.append(greeting)
25 │     logging.debug(f"All greetings generated: {greetings}")
26 │     return greetings
27 │
28 │ # Example usage of the new function
29 │ if __name__ == "__main__":
30 │     greeting = hello("Sam Altman")
31 │     print(greeting)  # Debug information
32 │     greetings_list = greet_multiple(["Alice", "Bob", "Charlie"])
33 │     print(greetings_list)  # Debug information
DEBUG:httpcore.connection:connect_tcp.started host='127.0.0.1' port=8083 local_address=None timeout=5.0 socket_options=None
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x7f7df8010460>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'CONNECT']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'CONNECT']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'CONNECT']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'Connection established', [])
DEBUG:httpcore.proxy:start_tls.started ssl_context=<ssl.SSLContext object at 0x7f7e0b042cc0> server_hostname='api.openai.com' timeout=5.0
DEBUG:httpcore.proxy:start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x7f7df80106a0>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sun, 29 Sep 2024 05:13:19 GMT'), (b'Content-Type', b'text/event-stream'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Cache-Control', b'no-cache'), (b'Cache-Control', b'no-cache'), (b'X-Oneapi-Request-Id', b'202409291313166292105581082218'), (b'CF-Cache-Status', b'DYNAMIC'), (b'Report-To', b'{"endpoints":[{"url":"https:\\/\\/a.nel.cloudflare.com\\/report\\/v4?s=rLzSbiz%2BexC5ZYfdJM2s8%2FsyMxXpKjPV6dKqSsMGnv%2BGWQOJt83nGqbwI9v3cTEzINjRvtYGwzsjblXUaO5grClnOeoQbJMtHXLZLV679e50HXSYkZ08UV7vOOrl2%2B%2F8"}],"group":"cf-nel","max_age":604800}'), (b'NEL', b'{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'), (b'Strict-Transport-Security', b'max-age=31536000; includeSubDomains; preload'), (b'X-Content-Type-Options', b'nosniff'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8ca97ba2bf16a343-TPE')])
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
DEBUG:openai._base_client:HTTP Response: POST https://api.openai.com/v1/chat/completions "200 OK" Headers([('date', 'Sun, 29 Sep 2024 05:13:19 GMT'), ('content-type', 'text/event-stream'), ('transfer-encoding', 'chunked'), ('connection', 'keep-alive'), ('cache-control', 'no-cache'), ('cache-control', 'no-cache'), ('x-oneapi-request-id', '202409291313166292105581082218'), ('cf-cache-status', 'DYNAMIC'), ('report-to', '{"endpoints":[{"url":"https:\\/\\/a.nel.cloudflare.com\\/report\\/v4?s=rLzSbiz%2BexC5ZYfdJM2s8%2FsyMxXpKjPV6dKqSsMGnv%2BGWQOJt83nGqbwI9v3cTEzINjRvtYGwzsjblXUaO5grClnOeoQbJMtHXLZLV679e50HXSYkZ08UV7vOOrl2%2B%2F8"}],"group":"cf-nel","max_age":604800}'), ('nel', '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'), ('strict-transport-security', 'max-age=31536000; includeSubDomains; preload'), ('x-content-type-options', 'nosniff'), ('server', 'cloudflare'), ('cf-ray', '8ca97ba2bf16a343-TPE')])
DEBUG:openai._base_client:request_id: None
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
Traceback (most recent call last):
  File "/Volumes/workspace/SWorkspace/ell_dir/main.py", line 30, in <module>
    greeting = hello("Sam Altman")
  File "/Users/hawei/miniconda3/envs/ell/lib/python3.10/site-packages/ell/lmp/_track.py", line 64, in tracked_func
    return func_to_track(*fn_args, **fn_kwargs, _invocation_origin=invocation_id)[0]
  File "/Users/hawei/miniconda3/envs/ell/lib/python3.10/site-packages/ell/lmp/complex.py", line 68, in model_call
    (result, final_api_params, metadata) = provider.call(ell_call, origin_id=_invocation_origin, logger=_logger if should_log else None)
  File "/Users/hawei/miniconda3/envs/ell/lib/python3.10/site-packages/ell/provider.py", line 127, in call
    messages, metadata = self.translate_from_provider(
  File "/Users/hawei/miniconda3/envs/ell/lib/python3.10/site-packages/ell/providers/openai.py", line 126, in translate_from_provider
    Message(role=role,
  File "/Users/hawei/miniconda3/envs/ell/lib/python3.10/site-packages/ell/types/message.py", line 319, in __init__
    super().__init__(content=content, role=role)
  File "/Users/hawei/miniconda3/envs/ell/lib/python3.10/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Message
role
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.9/v/string_type
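Reading the traceback, the validation error itself is easy to reproduce outside of ell. A minimal sketch, using a simplified stand-in for ell.types.message.Message (the real class is richer; this only shows the failing constraint):

from pydantic import BaseModel, ValidationError

class Message(BaseModel):  # simplified stand-in for ell.types.message.Message
    role: str              # required string: None is rejected
    content: str

try:
    Message(role=None, content="Hello!")
except ValidationError as e:
    print(e)  # role: Input should be a valid string [type=string_type, ...]

So whatever the server streams back is reaching Message.__init__ with role=None.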
OH! What is "gptapi.us"?
Thanks, @MadcowD . well. you know i am from China. if follow the policy of the openai. we can't use the openai service. But my country also has a third-party company that does the proxy things. basically is creating an outset server to do the proxy. I will keep trying another tool. Thanks for your help.
I think I can still fix this for you! It's just that the proxy forgot to send you a role :)
Oooooooi!!! Thanks! I also think the proxy server not sending the role field is the root cause. Here is what I changed in the source code.
(The openai client call happens here.)
So my thinking is: since the calling side can receive a None here, maybe the Message type should handle that condition as well. Should I create a PR for it? I don't want to offend, I just want to discuss whether this situation needs to be handled, because the proxy is not part of OpenAI, but I really want to use this library haha.
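Roughly, the change I have in mind is the sketch below. It is not ell's actual code; resolve_role is a hypothetical helper, and I am assuming the streamed chunk's role arrives as Optional[str]:

from typing import Optional

def resolve_role(chunk_role: Optional[str]) -> str:
    """Fall back to "assistant" when the provider omits the role.

    Chat completions only ever stream assistant turns back, so this
    default should be safe even for compliant providers.
    """
    return chunk_role if chunk_role is not None else "assistant"

assert resolve_role(None) == "assistant"         # proxy dropped the field
assert resolve_role("assistant") == "assistant"  # compliant provider unchanged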