austinmw opened this issue 9 months ago
We have a blog! (This assumes you're using API-based models like OpenAI, though; hosting your own models is a beast of its own.)
Thanks! I see that it's using OpenAI and deployed to Vercel. I think you'd gain a large audience if you had another blog on deploying with Amazon Bedrock+Claude to AWS! :)
@austinmw that's as simple as changing the LLM and embeddings in the service context to use Bedrock (which we support! In Python, at least):
https://docs.llamaindex.ai/en/stable/examples/embeddings/bedrock.html
https://docs.llamaindex.ai/en/stable/examples/llm/bedrock.html#bedrock
```python
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import Bedrock
from llama_index.embeddings import BedrockEmbedding

# Model IDs are examples; see the Bedrock docs linked above for options
llm = Bedrock(model="anthropic.claude-v2")
embed_model = BedrockEmbedding(model_name="amazon.titan-embed-text-v1")

service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
set_global_service_context(service_context)
```
(Just realized this is the TS repo, not Python, lol)
Thanks! Do you happen to know the best way to deploy Next.js apps on AWS? I'm not very familiar with React frameworks 😅
Hmm, I'm not sure. I've only ever used Vercel.
@austinmw, here are your deployment options for NextJS: https://nextjs.org/docs/pages/building-your-application/deploying#self-hosting You can even do a static HTML export if you're using just the NextJS frontend (and Python or Express as the backend).
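If you go the static-export route, the relevant switch is a single flag in next.config.js (on Next.js 13.3+; older versions use the separate `next export` command instead). A minimal sketch:

```js
// next.config.js — minimal sketch of a static HTML export
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "export", // `next build` then emits plain HTML/CSS/JS into `out/`
};

module.exports = nextConfig;
```

The resulting `out/` folder is just static files, so it can be served from something like S3 behind CloudFront, with the Python or Express backend deployed separately.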
LlamaIndexTS already supports using Claude models (see https://github.com/run-llama/LlamaIndexTS/blob/dd054137bf16a043d9581bc70432ff1129640516/packages/core/src/llm/LLM.ts#L617); you would just have to modify the code generated by create-llama.
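A minimal sketch of that change, assuming the `serviceContextFromDefaults` API from that era of LlamaIndexTS and an `ANTHROPIC_API_KEY` in the environment (the model name is illustrative; embeddings still default to OpenAI, so `OPENAI_API_KEY` is needed as well):

```ts
import { Anthropic, Document, serviceContextFromDefaults, VectorStoreIndex } from "llamaindex";

// Swap the generated OpenAI LLM for Claude
const llm = new Anthropic({ model: "claude-2" });
const serviceContext = serviceContextFromDefaults({ llm });

// The rest of the create-llama generated code stays the same, e.g.:
const document = new Document({ text: "Some text to index." });
const index = await VectorStoreIndex.fromDocuments([document], { serviceContext });
```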
This also looks like an interesting option to deploy NextJS to AWS: https://docs.sst.dev/start/nextjs
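For reference, SST wraps the whole Lambda + CloudFront setup in a single construct. A minimal sketch of its `sst.config.ts` (app and stack names are placeholders; see the linked guide for the full setup):

```ts
import { SSTConfig } from "sst";
import { NextjsSite } from "sst/constructs";

export default {
  config(_input) {
    return {
      name: "create-llama-app",
      region: "us-east-1",
    };
  },
  stacks(app) {
    app.stack(function Site({ stack }) {
      // NextjsSite builds the Next.js app and deploys it to Lambda + CloudFront
      const site = new NextjsSite(stack, "site");
      stack.addOutputs({ SiteUrl: site.url });
    });
  },
} satisfies SSTConfig;
```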
Hi, are there any directions available for cloud deployment of `create-llama` applications?