Closed — Jonhels closed this 1 month ago
Communication through the API endpoint is now working when sending a message to the Llama 3.1 model running in Ollama. This is great. The response object is shown in Postman.
Keep in mind that the response time measured is on my MacBook. In a production environment it would be much quicker, but as always with LLMs, latency largely comes down to the hardware the model runs on.
This adds a backend service API endpoint that POSTs messages to the local Llama 3.1 model served by Ollama. It is a crucial step toward building a RAG chatbot: I need to be able to communicate with the local Ollama LLM before moving on to the next step.
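A minimal sketch of what that backend call can look like, using only the Python standard library and Ollama's documented `/api/generate` endpoint on its default local port. The function names (`build_payload`, `ask_llama`) are my own illustrations, not the project's actual code:

```python
import json
import urllib.request

# Ollama listens on port 11434 by default
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3.1") -> dict:
    # stream=False asks Ollama to return one complete JSON object
    # instead of a stream of partial responses
    return {"model": model, "prompt": prompt, "stream": False}

def ask_llama(prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # the generated text lives in the "response" field
        return json.loads(resp.read())["response"]
```

In a real backend this call would sit behind the service's own POST route, so Postman (or the frontend) talks to the service, and the service talks to Ollama.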
This does not mean the model will remember any previous message; it only means it returns a response to each request. Memory can be added later, when it is time to make the chatbot more complex.
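To make the statelessness concrete: each request only sees what you send it, so adding memory later means the caller keeps the conversation history and resends it, for example via Ollama's `/api/chat` endpoint, which accepts a list of messages. The helper below is a hypothetical sketch, not project code:

```python
# Each /api/generate call is stateless: the model sees only the prompt sent.
# With /api/chat, the caller accumulates the message history itself and
# includes it in every request; that is where "memory" would come from.

def chat_payload(history: list, user_message: str) -> dict:
    # build a /api/chat payload from prior turns plus the new user turn;
    # the original history list is left unmodified
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": "llama3.1", "messages": messages, "stream": False}
```

For now the endpoint skips all of this and handles one prompt at a time.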