AIHawk-co / Auto_Jobs_Applier_AI_Agent

Auto_Jobs_Applier_AI_Agent by AIHawk is an AI agent that automates the job application process. Utilizing artificial intelligence, it enables users to apply for multiple jobs in an automated and personalized way.
https://aihawk.co/

[BUG]: Uses Gpt4o when set to run Ollama with Llama 3.2 in config.yaml file #591

Open sonicnerd14 opened 3 weeks ago

sonicnerd14 commented 3 weeks ago

Describe the bug

The program seems to have a bug where it doesn't respect which model you've configured. I've tried Llama 3.2 with Ollama, and also Gemini, but when running either of them AIHawk insists on using GPT-4o. You cannot omit an API key or the program will not run, even with Ollama, which requires no API key, only a URL. Entering a Gemini key doesn't help either: the script rejects every Gemini key as invalid. I would like to use Ollama when possible, and I'd like to find a fix so that it runs Ollama and not GPT-4o.

Steps to reproduce

  1. Modify the config.yaml file with the following:

     llm_model_type: ollama
     llm_model: 'llama3.2'
     llm_api_url: http://127.0.0.1:11434/

  2. Save and run AIHawk

Expected behavior

Running Llama 3.2 with Ollama

Actual behavior

Running GPT 4o mini

Branch

None

Branch name

No response

Python version

No response

LLM Used

No response

Model used

No response

Additional context

No response

sarob commented 2 weeks ago

I did not experience this behavior.

I can use Ollama and Gemini; tested and verified.

Try pulling the latest main updates and try again.

sonicnerd14 commented 2 weeks ago

> I did not experience this behavior.
>
> I can use ollama and gemini. tested and verified.
>
> try pulling the latest main updates and try again.

I've attempted this several times and get the same results, even after pulling the latest updates. Here is an example of the error I receive when the bot attempts to build a resume with Gemini:

2024-10-26 11:42:39.190 | ERROR    | src.aihawk_easy_applier:_create_and_upload_resume:488 - Failed to generate resume: 'LoggerChatModel' object has no attribute 'logger'
2024-10-26 11:42:39.198 | ERROR    | src.aihawk_easy_applier:_create_and_upload_resume:490 - Traceback: Traceback (most recent call last):
  File "C:\Python312\Lib\site-packages\lib_resume_builder_AIHawk\gpt_resume_job_description.py", line 119, in __call__
    reply = self.llm(messages)
            ^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_core\_api\deprecation.py", line 180, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 1016, in __call__
    generation = self.generate(
                 ^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 634, in generate
    raise e
  File "C:\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 624, in generate
    self._generate_with_cache(
  File "C:\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 846, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 589, in _generate
    response = self.client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\openai\resources\chat\completions.py", line 646, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\openai\_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\openai\_base_client.py", line 942, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\openai\_base_client.py", line 1046, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: AIzaSyCl***************************zH7U. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Auto_Jobs_Applier_AIHawk\src\aihawk_easy_applier.py", line 460, in _create_and_upload_resume
    resume_pdf_base64 = self.resume_generator_manager.pdf_base64(job_description_text=job.description)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\lib_resume_builder_AIHawk\manager_facade.py", line 78, in pdf_base64
    self.resume_generator.create_resume_job_description_text(style_path, job_description_text, temp_html_path)
  File "C:\Python312\Lib\site-packages\lib_resume_builder_AIHawk\resume_generator.py", line 37, in create_resume_job_description_text
    gpt_answerer.set_job_description_from_text(job_description_text)
  File "C:\Python312\Lib\site-packages\lib_resume_builder_AIHawk\gpt_resume_job_description.py", line 247, in set_job_description_from_text
    output = chain.invoke({"text": job_description_text})
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2878, in invoke
    input = context.run(step.invoke, input, config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 4474, in invoke
    return self._call_with_config(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1785, in _call_with_config
    context.run(
  File "C:\Python312\Lib\site-packages\langchain_core\runnables\config.py", line 398, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 4330, in _invoke
    output = call_func_with_variable_args(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\langchain_core\runnables\config.py", line 398, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\lib_resume_builder_AIHawk\gpt_resume_job_description.py", line 133, in __call__
    self.logger.error(f"Unexpected error occurred: {str(e)}, retrying in {retry_delay} seconds... (Attempt {attempt + 1}/{max_retries})")
    ^^^^^^^^^^^
AttributeError: 'LoggerChatModel' object has no attribute 'logger'

2024-10-26 11:42:39.199 | ERROR    | src.aihawk_easy_applier:fill_up:346 - Failed to find form elements: 'LoggerChatModel' object has no attribute 'logger'
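For what it's worth, the traceback shows two stacked problems: the 401 indicates a Gemini-format key (AIzaSy...) being sent to the OpenAI endpoint, and the retry handler then crashes because LoggerChatModel has no logger attribute, masking the real error. A minimal sketch of that second failure pattern follows; the class name and error message mirror the traceback, but the body is my guess, not the library's actual code:

```python
import logging


class LoggerChatModel:
    """Sketch of a chat-model wrapper whose error handler logs failures."""

    def __init__(self, llm):
        self.llm = llm
        # If this assignment is missing, any exception raised in __call__
        # turns into "AttributeError: 'LoggerChatModel' object has no
        # attribute 'logger'", hiding the original error (here, the 401).
        self.logger = logging.getLogger(__name__)

    def __call__(self, messages):
        try:
            return self.llm(messages)
        except Exception as e:
            # With self.logger initialized, the real exception is logged
            # and re-raised instead of being masked by an AttributeError.
            self.logger.error(f"Unexpected error occurred: {e}")
            raise
```

With the attribute initialized, the underlying AuthenticationError would surface directly, which would make the misrouted API key much easier to spot.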

(Screenshots attached: Screenshot 2024-10-26 114725, Screenshot 2024-10-26 114713)

From my experience, the only way I've managed to get this bot to work is with GPT-4o; when I change the model type and provider, nothing changes and it insists on using GPT-4o anyway. Maybe I'm missing something, or the setup instructions aren't fully clear when it comes to customizing your LLM and provider of choice.

shivatejesh commented 2 weeks ago

I too face the same issue.

varun2948 commented 2 weeks ago

I managed to fix all the issues with Ollama, but with a hack.

sonicnerd14 commented 2 weeks ago

> i managed to fix all the issue with the ollama but with a hack

You mean like a bug fix? Could you explain what you did to get it working with Ollama, at least?

49Simon commented 2 weeks ago

Refer to #649.

Vrushank796 commented 1 week ago

Just curious, what are you setting as llm_api_key while using Ollama locally? Or are you leaving it empty?

Vrushank796 commented 1 week ago

Stuck at "DEBUG | src.llm.llm_manager:__call__:253 - Attempting to call the LLM with messages" while using Llama 2.

sonicnerd14 commented 1 week ago

> Just curious, what are you setting as llm_api_key while using ollama locally or you are leaving it empty ?

When I use Gemini I enter the Gemini API key; if I use Ollama, I keep whichever key is already in there. However, it seems this error only occurs when the bot is allowed to create a custom resume for each application. Only ChatGPT works with that for some reason; Ollama and Gemini will only work if you use your own resume via --resume and the path to the resume's location. I'm trying to see if I can fix this problem myself so that all options are usable together, and perhaps even add Claude, LM Studio, and Groq support.
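If anyone digs into this, the symptom (a Gemini key reaching the OpenAI endpoint) suggests the resume builder constructs its client without consulting llm_model_type. A hypothetical sketch of config-driven dispatch follows; the config keys (llm_model_type, llm_model, llm_api_url) come from this issue, while the client classes are stand-ins for the real langchain wrappers, not the project's actual code:

```python
class OpenAIClient:
    """Stand-in for the OpenAI chat wrapper."""
    def __init__(self, model, api_key):
        # This mimics the failure mode in the traceback: a Gemini key
        # (AIzaSy...) passed straight to the OpenAI endpoint is rejected.
        if not api_key or not api_key.startswith("sk-"):
            raise ValueError(f"Incorrect API key provided: {api_key[:8]}...")
        self.model = model


class OllamaClient:
    """Stand-in for a local Ollama wrapper; needs a URL, no API key."""
    def __init__(self, model, api_url):
        self.model = model
        self.api_url = api_url  # e.g. http://127.0.0.1:11434/


class GeminiClient:
    """Stand-in for a Gemini wrapper; takes a Google API key."""
    def __init__(self, model, api_key):
        self.model = model
        self.api_key = api_key


def make_llm(config):
    """Pick the client from llm_model_type instead of defaulting to OpenAI."""
    model_type = config["llm_model_type"].lower()
    if model_type == "ollama":
        return OllamaClient(config["llm_model"], config["llm_api_url"])
    if model_type == "gemini":
        return GeminiClient(config["llm_model"], config["llm_api_key"])
    if model_type == "openai":
        return OpenAIClient(config["llm_model"], config["llm_api_key"])
    raise ValueError(f"Unknown llm_model_type: {model_type}")
```

The point of the sketch is that every code path that builds a model (including the resume-builder library) would have to go through one factory like this; if any path constructs an OpenAI client directly, the config setting is silently ignored.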