I have followed the tutorial up to the "querying unstructured data" section, and changed the imports from `llama_index` to the new `llama_index.core` / `llama_index.experimental` packages. Running `main.py` then fails with the output below.
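For reference, the import change was along these lines (module paths taken from the traceback below, e.g. `llama_index/experimental/query_engine/pandas/...`; exact names may differ across llama-index versions):

```python
# The import change, roughly; guarded so the snippet also runs in an
# environment where llama-index is not installed.
try:
    # New (post package-split) locations:
    from llama_index.experimental.query_engine import PandasQueryEngine
    from llama_index.core import Settings
    imports_ok = True
except ImportError:
    # Previously these lived directly under `llama_index`.
    imports_ok = False
```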
`& : File C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\Scripts\Activate.ps1 cannot be loaded because running scripts is disabled on this
system. For more information, see about_Execution_Policies at https://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:3
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
PS C:\Users\User\Desktop\Software_AI_Coding\Agent AI> & "c:/Users/User/Desktop/Software_AI_Coding/Agent AI/ai/Scripts/python.exe" "c:/Users/User/Desktop/Software_AI_Coding/Agent AI/main.py"
Country Population 2024 Population 2023 Area (km2) Density (/km2) Growth Rate World % World Rank
0 India 1441719852 1428627663 3M 485.0 0.0092 0.1801 1
1 China 1425178782 1425671352 9.4M 151.0 -0.0003 0.1780 2
2 United States 341814420 339996563 9.1M 37.0 0.0053 0.0427 3
3 Indonesia 279798049 277534122 1.9M 149.0 0.0082 0.0350 4
4 Pakistan 245209815 240485658 770.9K 318.0 0.0196 0.0306 5
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 0.402442 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 0.803746 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 1.705472 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
WARNING:llama_index.llms.openai.utils:Retrying llama_index.llms.openai.base.OpenAI._chat in 0.5640260803215358 seconds as it raised RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}.
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 0.449893 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 0.986588 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 1.740227 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
WARNING:llama_index.llms.openai.utils:Retrying llama_index.llms.openai.base.OpenAI._chat in 0.45345187770399686 seconds as it raised RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}.
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 0.428594 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 0.980027 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 1.687376 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
Traceback (most recent call last):
File "c:\Users\User\Desktop\Software_AI_Coding\Agent AI\main.py", line 22, in <module>
population_query_engine.query("whats the population of canada")
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 311, in wrapper
result = func(*args, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\core\base\base_query_engine.py", line 52, in query
query_result = self._query(str_or_query_bundle)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 311, in wrapper
result = func(*args, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\experimental\query_engine\pandas\pandas_query_engine.py", line 165, in _query
pandas_response_str = self._llm.predict(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 311, in wrapper
result = func(*args, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\core\llms\llm.py", line 596, in predict
chat_response = self.chat(messages)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 311, in wrapper
result = func(*args, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\core\llms\callbacks.py", line 173, in wrapped_llm_chat
f_return_val = f(_self, messages, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\llms\openai\base.py", line 355, in chat
return chat_fn(messages, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\llms\openai\base.py", line 106, in wrapper
return retry(f)(self, *args, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\tenacity\__init__.py", line 336, in wrapped_f
return copy(f, *args, **kw)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\tenacity\__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\tenacity\__init__.py", line 376, in iter
result = action(retry_state)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\tenacity\__init__.py", line 418, in exc_check
raise retry_exc.reraise()
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\tenacity\__init__.py", line 185, in reraise
raise self.last_attempt.result()
File "C:\Python310\lib\concurrent\futures\_base.py", line 439, in result
return self.__get_result()
File "C:\Python310\lib\concurrent\futures\_base.py", line 391, in __get_result
raise self._exception
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\tenacity\__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\llama_index\llms\openai\base.py", line 429, in _chat
response = client.chat.completions.create(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_utils\_utils.py", line 274, in wrapper
return func(*args, **kwargs)
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\resources\chat\completions.py", line 815, in create
return self._post(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 1277, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 954, in request
return self._request(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 1043, in _request
return self._retry_request(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 1092, in _retry_request
return self._request(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 1043, in _request
return self._retry_request(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 1092, in _retry_request
return self._request(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 1043, in _request
return self._retry_request(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 1092, in _retry_request
return self._request(
File "C:\Users\User\Desktop\Software_AI_Coding\Agent AI\ai\lib\site-packages\openai\_base_client.py", line 1058, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}`
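Reading the error body, the `code` is `insufficient_quota` rather than a transient rate limit, so as I understand it the client-side retries can never succeed until the plan/billing changes. A plain-Python sketch of the distinction I mean (no SDK needed; the dict literal just mirrors the JSON body in the log above):

```python
# Tell an exhausted quota apart from a transient rate limit, based on
# the JSON body that comes back with the HTTP 429.
error_body = {
    "error": {
        "message": "You exceeded your current quota, please check your plan and billing details.",
        "type": "insufficient_quota",
        "param": None,
        "code": "insufficient_quota",
    }
}

def should_retry(body: dict) -> bool:
    # Retrying only makes sense for transient rate limits; an
    # insufficient_quota 429 keeps failing until billing is fixed.
    return body.get("error", {}).get("code") != "insufficient_quota"

print(should_retry(error_body))  # -> False: stop retrying, check billing
```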
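Separately, the `Activate.ps1` error at the top looks like the stock PowerShell execution-policy restriction rather than anything to do with the 429s; as far as I know, the documented workaround is to loosen the policy for the current user once:

```powershell
# Allow locally created and remotely signed scripts for this user only.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
```

After that, activating the venv with `.\ai\Scripts\Activate.ps1` should load normally.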