abi / screenshot-to-code

Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)
https://screenshottocode.com
MIT License
56.35k stars 6.94k forks

Ollama Support #354

Open k2an opened 3 months ago

k2an commented 3 months ago

I love your project. I want to use it with a local Ollama + LLaVA setup, and I have tried many ways, including asking ChatGPT. I am on Windows 11; I tried Docker with no luck. I also changed the API address in the frontend settings dialog:

API key: "ollama"
API URL: http://localhost:11434/v1/

I tested with Postman that my local Ollama + LLaVA instance is up and answering.
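For reference, the same check can be made from Python with the official openai client pointed at the Ollama endpoint (a minimal sketch; it assumes llava has already been pulled with ollama pull llava):

# Minimal sketch: verify that Ollama's OpenAI-compatible endpoint responds.
# Assumes `ollama pull llava` has been run and Ollama is listening on port 11434.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # same URL as in the settings dialog
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llava",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)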

I changed frontend\src\lib\models.ts:

// Keep in sync with backend (llm.py)
// Order here matches dropdown order
export enum CodeGenerationModel {
  GPT_4O_2024_05_13 = "llava", // <-- changed here
  GPT_4_TURBO_2024_04_09 = "gpt-4-turbo-2024-04-09",
  GPT_4_VISION = "gpt_4_vision",
  CLAUDE_3_SONNET = "claude_3_sonnet",
}

// Will generate a static error if a model in the enum above is not in the descriptions
export const CODE_GENERATION_MODEL_DESCRIPTIONS: {
  [key in CodeGenerationModel]: { name: string; inBeta: boolean };
} = {
  "llava": { name: "LLava", inBeta: false }, --> and here
  "gpt-4-turbo-2024-04-09": { name: "GPT-4 Turbo (Apr 2024)", inBeta: false },
  gpt_4_vision: { name: "GPT-4 Vision (Nov 2023)", inBeta: false },
  claude_3_sonnet: { name: "Claude 3 Sonnet", inBeta: false },
};

I also changed backend\llm.py:

# Actual model versions that are passed to the LLMs and stored in our logs

class Llm(Enum):
    GPT_4_VISION = "gpt-4-vision-preview"
    GPT_4_TURBO_2024_04_09 = "gpt-4-turbo-2024-04-09"
    GPT_4O_2024_05_13 = "llava"  # <-- changed here
    CLAUDE_3_SONNET = "claude-3-sonnet-20240229"
    CLAUDE_3_OPUS = "claude-3-opus-20240229"
    CLAUDE_3_HAIKU = "claude-3-haiku-20240307"

# Will throw errors if you send a garbage string
def convert_frontend_str_to_llm(frontend_str: str) -> Llm:
    if frontend_str == "gpt_4_vision":
        return Llm.GPT_4_VISION
    elif frontend_str == "claude_3_sonnet":
        return Llm.CLAUDE_3_SONNET
    elif frontend_str == "llava":
        return Llm.GPT_4O_2024_05_13
    else:
        return Llm(frontend_str)
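Roughly, the call that ends up failing looks like the sketch below (simplified and illustrative only; the function and parameter names are my assumptions, not the project's actual stream_openai_response): the OpenAI client is created with the base URL and key from the settings dialog, and the resolved model string ("llava" after the change above) is used for a streamed chat completion. For vision prompts, each message's content is an array of parts (text plus an image_url entry carrying the screenshot), which matters for the error below.

# Simplified, illustrative sketch of the failing call; names are assumptions.
from openai import AsyncOpenAI

async def stream_response_sketch(messages, api_key, base_url, model):
    # messages: OpenAI-style chat messages; for vision prompts the user
    # message's "content" is a list of parts (text + image_url data URL).
    client = AsyncOpenAI(api_key=api_key, base_url=base_url)
    stream = await client.chat.completions.create(
        model=model,  # e.g. Llm.GPT_4O_2024_05_13.value == "llava"
        messages=messages,
        stream=True,
    )
    full_response = ""
    async for chunk in stream:
        full_response += chunk.choices[0].delta.content or ""
    return full_response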

Console and backend errors are below:

INFO:     ('127.0.0.1', 57364) - "WebSocket /generate-code" [accepted]
Incoming websocket connection...        
INFO:     connection open
Received params
Generating html_tailwind code for uploaded image using Llm.GPT_4O_2024_05_13 model...
Using OpenAI API key from client-side settings dialog
Using OpenAI Base URL from client-side settings dialog
generating code...
ERROR:    Exception in ASGI application
Traceback (most recent call last):      
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 250, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)      
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^      
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\fastapi\applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\starlette\middleware\errors.py", line 149, in __call__    
    await self.app(scope, receive, send)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\starlette\middleware\cors.py", line 75, in __call__       
    await self.app(scope, receive, send)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__ 
    raise exc
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__ 
    await self.app(scope, receive, sender)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\starlette\routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\starlette\routing.py", line 82, in app
    await func(session)
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\fastapi\routing.py", line 289, in app
    await dependant.call(**values)      
  File "C:\Users\k\screenshot-to-code\backend\routes\generate_code.py", line 262, in stream_code
    completion = await stream_openai_response(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\k\screenshot-to-code\backend\llm.py", line 60, in stream_openai_response
    stream = await client.chat.completions.create(**params)  # type: ignore     
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\k\AppData\Local\pypoetry\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\openai\resources\chat\completions.py", line 1334, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
y\Cache\virtualenvs\backend-bYKjg4sG-py3.11\Lib\site-packages\openai\_base_client.py", line 1532, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'json: cannot unmarshal array into Go struct field Message.messages.content of type string', 'type': 'invalid_request_error', 'param': None, 'code': None}}
INFO:     connection closed
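The 400 error is Ollama's Go server rejecting the request body: the app sends each message's content as an array of parts (text plus image_url), while Ollama's OpenAI-compatible endpoint expects content to be a plain string, hence "cannot unmarshal array ... of type string". As a possible workaround until proper support lands, the screenshot could be sent through Ollama's native /api/chat endpoint, which accepts base64-encoded images in a separate images field (a hedged sketch; llava_describe and its parameters are mine, not part of the project):

# Hedged workaround sketch: call Ollama's native chat endpoint directly,
# which takes base64 images in an "images" field instead of the OpenAI-style
# image_url content parts that trigger the unmarshal error above.
import base64
import requests

def llava_describe(image_path: str, prompt: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llava",
            "stream": False,
            "messages": [
                {"role": "user", "content": prompt, "images": [image_b64]},
            ],
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]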

(Two screenshots attached, 2024-06-05.)

If it can be used with a local server, it will be awesome! Thanks for the consideration.

abi commented 3 months ago

We're definitely interested in adding Ollama support to this project. Thanks for opening this issue.

isaganijaen commented 2 months ago

I'm also looking forward to this feature! ✨

cognitivetech commented 1 month ago

👀

HuangKaibo2017 commented 3 weeks ago

Yeah, it would be great to support Ollama, LM Studio, llama.cpp, and more well-known open-source LLMs, like MiniCPM for vision.

Yitianw commented 2 weeks ago

(image attachment)