pluralsh / plural

Deploy open source software on Kubernetes in record time. πŸš€
https://www.plural.sh

chore(deps): update dependency llama-index to v0.10.13 [security] #1296

Open plural-renovate[bot] opened 6 months ago


This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| llama-index (source) | minor | `==0.7.4` -> `==0.10.13` |
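Since both advisories below are fixed in 0.10.13, a quick runtime check can confirm an environment is on a patched release. A minimal sketch (the `is_patched` helper and the plain dotted-version parsing are illustrative assumptions, not part of llama-index; it ignores pre/post-release suffixes):

```python
# Illustrative helper (not part of llama-index): compare a plain
# dotted version string against the first fully patched release.
def is_patched(version: str, floor: str = "0.10.13") -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(floor)

print(is_patched("0.7.4"))    # the pinned version this PR replaces
print(is_patched("0.10.13"))  # the target of this update
```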

GitHub Vulnerability Alerts

CVE-2023-39662

An issue in llama_index versions 0.7.13 and earlier allows a remote attacker to execute arbitrary code via the `exec` parameter in the `PandasQueryEngine` function.
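The root cause is the classic eval-of-untrusted-input pattern: model-generated "pandas instructions" are handed to Python's `eval`. A minimal sketch of the pattern (the `run_instruction` function and its naive `__` filter are illustrative assumptions, not llama-index code; string filtering like this is known to be bypassable and is not a real sandbox):

```python
# Illustrative sketch of the vulnerable pattern: eval() of
# model-generated text executes arbitrary Python, so "df.head()"
# and "__import__('os').system(...)" are equally runnable.
def run_instruction(instruction: str, namespace: dict) -> object:
    # Demonstration-only guard; rejecting '__' is NOT a real
    # sandbox and well-known bypasses exist.
    if "__" in instruction:
        raise ValueError("rejected suspicious instruction")
    return eval(instruction, {"__builtins__": {}}, dict(namespace))
```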

CVE-2024-4181

A command injection vulnerability exists in the `RunGptLLM` class of the llama_index library, version 0.9.47, used by the RunGpt framework from JinaAI to connect to large language models (LLMs). The vulnerability arises from improper use of the `eval` function, allowing a malicious or compromised LLM hosting provider to execute arbitrary commands on the client's machine. The issue was fixed in version 0.10.13; exploitation could give a hosting provider full control over client machines.
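The underlying mistake here is the same: parsing a server response with `eval` rather than a data-only parser. A hedged sketch of the safer pattern (the `parse_response` helper and the response shape are assumptions for illustration, not the actual RunGptLLM code):

```python
import json

# Parse a model server's reply strictly as data. json.loads can only
# produce values (dicts, lists, strings, ...); unlike eval(), a
# malicious body such as "__import__('os').getcwd()" raises an error
# instead of executing.
def parse_response(body: str) -> dict:
    return json.loads(body)
```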


Release Notes

run-llama/llama_index (llama-index)

### [`v0.10.13`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#01013---2024-02-26)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.12...v0.10.13)

##### New Features

- Added a llama-pack for KodaRetriever, for on-the-fly alpha tuning ([#11311](https://togithub.com/run-llama/llama_index/issues/11311))
- Added support for `mistral-large` ([#11398](https://togithub.com/run-llama/llama_index/issues/11398))
- Last token pooling mode for huggingface embeddings models like SFR-Embedding-Mistral ([#11373](https://togithub.com/run-llama/llama_index/issues/11373))
- Added fsspec support to SimpleDirectoryReader ([#11303](https://togithub.com/run-llama/llama_index/issues/11303))

##### Bug Fixes / Nits

- Fixed an issue with context window + prompt helper ([#11379](https://togithub.com/run-llama/llama_index/issues/11379))
- Moved OpenSearch vector store to BasePydanticVectorStore ([#11400](https://togithub.com/run-llama/llama_index/issues/11400))
- Fixed function calling in fireworks LLM ([#11363](https://togithub.com/run-llama/llama_index/issues/11363))
- Made cohere embedding types more automatic ([#11288](https://togithub.com/run-llama/llama_index/issues/11288))
- Improve function calling in react agent ([#11280](https://togithub.com/run-llama/llama_index/issues/11280))
- Fixed MockLLM imports ([#11376](https://togithub.com/run-llama/llama_index/issues/11376))

### [`v0.10.12`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#01012---2024-02-22)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.11...v0.10.12)

##### New Features

- Added `llama-index-postprocessor-colbert-rerank` package ([#11057](https://togithub.com/run-llama/llama_index/issues/11057))
- `MyMagicAI` LLM ([#11263](https://togithub.com/run-llama/llama_index/issues/11263))
- `MariaTalk` LLM ([#10925](https://togithub.com/run-llama/llama_index/issues/10925))
- Add retries to github reader ([#10980](https://togithub.com/run-llama/llama_index/issues/10980))
- Added FireworksAI embedding and LLM modules ([#10959](https://togithub.com/run-llama/llama_index/issues/10959))

##### Bug Fixes / Nits

- Fixed string formatting in weaviate ([#11294](https://togithub.com/run-llama/llama_index/issues/11294))
- Fixed off-by-one error in semantic splitter ([#11295](https://togithub.com/run-llama/llama_index/issues/11295))
- Fixed `download_llama_pack` for multiple files ([#11272](https://togithub.com/run-llama/llama_index/issues/11272))
- Removed `BUILD` files from packages ([#11267](https://togithub.com/run-llama/llama_index/issues/11267))
- Loosened python version reqs for all packages ([#11267](https://togithub.com/run-llama/llama_index/issues/11267))
- Fixed args issue with chromadb ([#11104](https://togithub.com/run-llama/llama_index/issues/11104))

### [`v0.10.11`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#01011---2024-02-21)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.10...v0.10.11)

##### Bug Fixes / Nits

- Fixed multi-modal LLM for async acomplete ([#11064](https://togithub.com/run-llama/llama_index/issues/11064))
- Fixed issue with llamaindex-cli imports ([#11068](https://togithub.com/run-llama/llama_index/issues/11068))

### [`v0.10.10`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#01010---2024-02-20)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.9...v0.10.10)

I'm still a bit wonky with our publishing process -- apologies. This is just a version bump to ensure the changes that were supposed to happen in 0.10.9 actually did get published.
(AF)

### [`v0.10.9`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0109---2024-02-20)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.8...v0.10.9)

- add llama-index-cli dependency

### [`v0.10.8`](https://togithub.com/run-llama/llama_index/compare/v0.10.7...v0.10.8)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.7...v0.10.8)

### [`v0.10.7`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0107---2024-02-19)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.6...v0.10.7)

##### New Features

- Added Self-Discover llamapack ([#10951](https://togithub.com/run-llama/llama_index/issues/10951))

##### Bug Fixes / Nits

- Fixed linting in CICD ([#10945](https://togithub.com/run-llama/llama_index/issues/10945))
- Fixed using remote graph stores ([#10971](https://togithub.com/run-llama/llama_index/issues/10971))
- Added missing LLM kwarg in NoText response synthesizer ([#10971](https://togithub.com/run-llama/llama_index/issues/10971))
- Fixed openai import in rankgpt ([#10971](https://togithub.com/run-llama/llama_index/issues/10971))
- Fixed resolving model name to string in openai embeddings ([#10971](https://togithub.com/run-llama/llama_index/issues/10971))
- Off by one error in sentence window node parser ([#10971](https://togithub.com/run-llama/llama_index/issues/10971))

### [`v0.10.6`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0106---2024-02-17)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.5...v0.10.6)

First, apologies for missing the changelog the last few versions. Trying to figure out the best process with 400+ packages. At some point, each package will have a dedicated changelog. But for now, onto the "master" changelog.

##### New Features

- Added `NomicHFEmbedding` ([#10762](https://togithub.com/run-llama/llama_index/issues/10762))
- Added `MinioReader` ([#10744](https://togithub.com/run-llama/llama_index/issues/10744))

##### Bug Fixes / Nits

- Various fixes for clickhouse vector store ([#10799](https://togithub.com/run-llama/llama_index/issues/10799))
- Fix index name in neo4j vector store ([#10749](https://togithub.com/run-llama/llama_index/issues/10749))
- Fixes to sagemaker embeddings ([#10778](https://togithub.com/run-llama/llama_index/issues/10778))
- Fixed performance issues when splitting nodes ([#10766](https://togithub.com/run-llama/llama_index/issues/10766))
- Fix non-float values in reranker + b25 ([#10930](https://togithub.com/run-llama/llama_index/issues/10930))
- OpenAI-agent should be a dep of openai program ([#10930](https://togithub.com/run-llama/llama_index/issues/10930))
- Add missing shortcut imports for query pipeline components ([#10930](https://togithub.com/run-llama/llama_index/issues/10930))
- Fix NLTK and tiktoken not being bundled properly with core ([#10930](https://togithub.com/run-llama/llama_index/issues/10930))
- Add back `llama_index.core.__version__` ([#10930](https://togithub.com/run-llama/llama_index/issues/10930))

### [`v0.10.5`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#llama-index-core-01050)

- added dead simple `FnAgentWorker` for custom agents ([#14329](https://togithub.com/run-llama/llama_index/issues/14329))
- Pass the kwargs on when build_index_from_nodes ([#14341](https://togithub.com/run-llama/llama_index/issues/14341))
- make async utils a bit more robust to nested async ([#14356](https://togithub.com/run-llama/llama_index/issues/14356))

### [`v0.10.4`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#llama-index-core-01040)

- Added `PropertyGraphIndex` and other supporting abstractions.
  See the [full guide](https://docs.llamaindex.ai/en/latest/module_guides/indexing/lpg_index_guide/) for more details ([#13747](https://togithub.com/run-llama/llama_index/issues/13747))
- Updated `AutoPrevNextNodePostprocessor` to allow passing in response mode and LLM ([#13771](https://togithub.com/run-llama/llama_index/issues/13771))
- fix type handling with return direct ([#13776](https://togithub.com/run-llama/llama_index/issues/13776))
- Correct the method name to `_aget_retrieved_ids_and_texts` in retrieval evaluator ([#13765](https://togithub.com/run-llama/llama_index/issues/13765))
- fix: QueryTransformComponent incorrect call `self._query_transform` ([#13756](https://togithub.com/run-llama/llama_index/issues/13756))
- implement more filters for `SimpleVectorStoreIndex` ([#13365](https://togithub.com/run-llama/llama_index/issues/13365))

### [`v0.10.3`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0103---2024-02-13)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.1...v0.10.3)

##### Bug Fixes / Nits

- Fixed passing in LLM to `as_chat_engine` ([#10605](https://togithub.com/run-llama/llama_index/issues/10605))
- Fixed system prompt formatting for anthropic ([#10603](https://togithub.com/run-llama/llama_index/issues/10603))
- Fixed elasticsearch vector store error on `__version__` ([#10656](https://togithub.com/run-llama/llama_index/issues/10656))
- Fixed import on openai pydantic program ([#10657](https://togithub.com/run-llama/llama_index/issues/10657))
- Added client back to opensearch vector store exports ([#10660](https://togithub.com/run-llama/llama_index/issues/10660))
- Fixed bug in SimpleDirectoryReader not using file loaders properly ([#10655](https://togithub.com/run-llama/llama_index/issues/10655))
- Added lazy LLM initialization to RankGPT ([#10648](https://togithub.com/run-llama/llama_index/issues/10648))
- Fixed bedrock embedding `from_credentials` passing in the model name ([#10640](https://togithub.com/run-llama/llama_index/issues/10640))
- Added back recent changes to TelegramReader ([#10625](https://togithub.com/run-llama/llama_index/issues/10625))

### [`v0.10.1`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#01016---2024-03-05)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.10.0...v0.10.1)

##### New Features

- Anthropic support for new models ([#11623](https://togithub.com/run-llama/llama_index/issues/11623), [#11612](https://togithub.com/run-llama/llama_index/issues/11612))
- Easier creation of chat prompts ([#11583](https://togithub.com/run-llama/llama_index/issues/11583))
- Added a raptor retriever llama-pack ([#11527](https://togithub.com/run-llama/llama_index/issues/11527))
- Improve batch cohere embeddings through bedrock ([#11572](https://togithub.com/run-llama/llama_index/issues/11572))
- Added support for vertex AI embeddings ([#11561](https://togithub.com/run-llama/llama_index/issues/11561))

##### Bug Fixes / Nits

- Ensure order in async embeddings generation ([#11562](https://togithub.com/run-llama/llama_index/issues/11562))
- Fixed empty metadata for csv reader ([#11563](https://togithub.com/run-llama/llama_index/issues/11563))
- Serializable fix for composable retrievers ([#11617](https://togithub.com/run-llama/llama_index/issues/11617))
- Fixed milvus metadata filter support ([#11566](https://togithub.com/run-llama/llama_index/issues/11566))
- Fixed pydantic import in clickhouse vector store ([#11631](https://togithub.com/run-llama/llama_index/issues/11631))
- Fixed system prompts for gemini/vertex-gemini ([#11511](https://togithub.com/run-llama/llama_index/issues/11511))

### [`v0.10.0`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0100-0101---2024-02-12)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.48...v0.10.0)

##### Breaking Changes

- Several changes are introduced.
  See the [full blog post](https://blog.llamaindex.ai/llamaindex-v0-10-838e735948f8) for complete details.

### [`v0.9.48`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0948---2024-02-12)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.47...v0.9.48)

##### Bug Fixes / Nits

- Add back deprecated API for BedrockEmbdding ([#10581](https://togithub.com/run-llama/llama_index/issues/10581))

### [`v0.9.47`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0947---2024-02-11)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.46...v0.9.47)

Last patch before v0.10!

##### New Features

- add conditional links to query pipeline ([#10520](https://togithub.com/run-llama/llama_index/issues/10520))
- refactor conditional links + add to cookbook ([#10544](https://togithub.com/run-llama/llama_index/issues/10544))
- agent + query pipeline cleanups ([#10563](https://togithub.com/run-llama/llama_index/issues/10563))

##### Bug Fixes / Nits

- Add sleep to fix lag in chat stream ([#10339](https://togithub.com/run-llama/llama_index/issues/10339))
- OllamaMultiModal kwargs ([#10541](https://togithub.com/run-llama/llama_index/issues/10541))
- Update Ingestion Pipeline to handle empty documents ([#10543](https://togithub.com/run-llama/llama_index/issues/10543))
- Fixing minor spelling error ([#10516](https://togithub.com/run-llama/llama_index/issues/10516))
- fix elasticsearch async check ([#10549](https://togithub.com/run-llama/llama_index/issues/10549))
- Docs/update slack demo colab ([#10534](https://togithub.com/run-llama/llama_index/issues/10534))
- Adding the possibility to use the IN operator for PGVectorStore ([#10547](https://togithub.com/run-llama/llama_index/issues/10547))
- fix agent reset ([#10562](https://togithub.com/run-llama/llama_index/issues/10562))
- Fix MD duplicated Node id from multiple docs ([#10564](https://togithub.com/run-llama/llama_index/issues/10564))

### [`v0.9.46`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0946---2024-02-08)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.45.post1...v0.9.46)

##### New Features

- Update pooling strategy for embedding models ([#10536](https://togithub.com/run-llama/llama_index/issues/10536))
- Add Multimodal Video RAG example ([#10530](https://togithub.com/run-llama/llama_index/issues/10530))
- Add SECURITY.md ([#10531](https://togithub.com/run-llama/llama_index/issues/10531))
- Move agent module guide up one-level ([#10519](https://togithub.com/run-llama/llama_index/issues/10519))

##### Bug Fixes / Nits

- Deeplake fixes ([#10529](https://togithub.com/run-llama/llama_index/issues/10529))
- Add Cohere section for llamaindex ([#10523](https://togithub.com/run-llama/llama_index/issues/10523))
- Fix md element ([#10510](https://togithub.com/run-llama/llama_index/issues/10510))

### [`v0.9.45.post1`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0945post1---2024-02-07)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.45...v0.9.45.post1)

##### New Features

- Upgraded deeplake vector database to use BasePydanticVectorStore ([#10504](https://togithub.com/run-llama/llama_index/issues/10504))

##### Bug Fixes / Nits

- Fix MD parser for inconsistency tables ([#10488](https://togithub.com/run-llama/llama_index/issues/10488))
- Fix ImportError for pypdf in MetadataExtractionSEC.ipynb ([#10491](https://togithub.com/run-llama/llama_index/issues/10491))

### [`v0.9.45`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0945post1---2024-02-07)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.44...v0.9.45)

##### New Features

- Upgraded deeplake vector database to use BasePydanticVectorStore ([#10504](https://togithub.com/run-llama/llama_index/issues/10504))

##### Bug Fixes / Nits

- Fix MD parser for inconsistency tables ([#10488](https://togithub.com/run-llama/llama_index/issues/10488))
- Fix ImportError for pypdf in MetadataExtractionSEC.ipynb ([#10491](https://togithub.com/run-llama/llama_index/issues/10491))

### [`v0.9.44`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0944---2024-02-05)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.43...v0.9.44)

##### New Features

- ollama vision cookbook ([#10438](https://togithub.com/run-llama/llama_index/issues/10438))
- Support Gemini "transport" configuration ([#10457](https://togithub.com/run-llama/llama_index/issues/10457))
- Add Upstash Vector ([#10451](https://togithub.com/run-llama/llama_index/issues/10451))

### [`v0.9.43`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0943---2024-02-03)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.42.post2...v0.9.43)

##### New Features

- Add multi-modal ollama ([#10434](https://togithub.com/run-llama/llama_index/issues/10434))

##### Bug Fixes / Nits

- update base class for astradb ([#10435](https://togithub.com/run-llama/llama_index/issues/10435))

### [`v0.9.42.post2`](https://togithub.com/run-llama/llama_index/compare/v0.9.42.post1...v0.9.42.post2)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.42.post1...v0.9.42.post2)

### [`v0.9.42.post1`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0942post1---2024-02-02)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.42...v0.9.42.post1)

##### New Features

- Add Async support for Base nodes parser ([#10418](https://togithub.com/run-llama/llama_index/issues/10418))

### [`v0.9.42`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0942post1---2024-02-02)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.41...v0.9.42)

##### New Features

- Add Async support for Base nodes parser ([#10418](https://togithub.com/run-llama/llama_index/issues/10418))
### [`v0.9.41`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0941---2024-02-01)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.40...v0.9.41)

##### New Features

- Nomic Embedding ([#10388](https://togithub.com/run-llama/llama_index/issues/10388))
- Dashvector support sparse vector ([#10386](https://togithub.com/run-llama/llama_index/issues/10386))
- Table QA with MarkDownParser and Benchmarking ([#10382](https://togithub.com/run-llama/llama_index/issues/10382))
- Simple web page reader ([#10395](https://togithub.com/run-llama/llama_index/issues/10395))

##### Bug Fixes / Nits

- fix full node content in KeywordExtractor ([#10398](https://togithub.com/run-llama/llama_index/issues/10398))

### [`v0.9.40`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0940---2024-01-30)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.39...v0.9.40)

##### New Features

- Improve and fix bugs for MarkdownElementNodeParser ([#10340](https://togithub.com/run-llama/llama_index/issues/10340))
- Fixed and improve Perplexity support for new models ([#10319](https://togithub.com/run-llama/llama_index/issues/10319))
- Ensure system_prompt is passed to Perplexity LLM ([#10326](https://togithub.com/run-llama/llama_index/issues/10326))
- Extended BaseRetrievalEvaluator to include an optional PostProcessor ([#10321](https://togithub.com/run-llama/llama_index/issues/10321))

### [`v0.9.39`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0939---2024-01-26)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.38...v0.9.39)

##### New Features

- Support for new GPT Turbo Models ([#10291](https://togithub.com/run-llama/llama_index/issues/10291))
- Support Multiple docs for Sentence Transformer Fine tuning ([#10297](https://togithub.com/run-llama/llama_index/issues/10297))

##### Bug Fixes / Nits

- Marvin imports fixed ([#9864](https://togithub.com/run-llama/llama_index/issues/9864))

### [`v0.9.38`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0938---2024-01-25)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.37.post1...v0.9.38)

##### New Features

- Support for new OpenAI v3 embedding models ([#10279](https://togithub.com/run-llama/llama_index/issues/10279))

##### Bug Fixes / Nits

- Extra checks on sparse embeddings for qdrant ([#10275](https://togithub.com/run-llama/llama_index/issues/10275))

### [`v0.9.37.post1`](https://togithub.com/run-llama/llama_index/compare/v0.9.37...v0.9.37.post1)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.37...v0.9.37.post1)

### [`v0.9.37`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0937---2024-01-24)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.36...v0.9.37)

##### New Features

- Added a RAG CLI utility ([#10193](https://togithub.com/run-llama/llama_index/issues/10193))
- Added a textai vector store ([#10240](https://togithub.com/run-llama/llama_index/issues/10240))
- Added a Postgresql based docstore and index store ([#10233](https://togithub.com/run-llama/llama_index/issues/10233))
- specify tool spec in tool specs ([#10263](https://togithub.com/run-llama/llama_index/issues/10263))

##### Bug Fixes / Nits

- Fixed serialization error in ollama chat ([#10230](https://togithub.com/run-llama/llama_index/issues/10230))
- Added missing fields to `SentenceTransformerRerank` ([#10225](https://togithub.com/run-llama/llama_index/issues/10225))
- Fixed title extraction ([#10209](https://togithub.com/run-llama/llama_index/issues/10209), [#10226](https://togithub.com/run-llama/llama_index/issues/10226))
- nit: make chainable output parser more exposed in library/docs ([#10262](https://togithub.com/run-llama/llama_index/issues/10262))
- :bug: summary index not carrying over excluded metadata keys ([#10259](https://togithub.com/run-llama/llama_index/issues/10259))

### [`v0.9.36`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0936---2024-01-23)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.35...v0.9.36)

##### New Features

- Added support for `SageMakerEmbedding` ([#10207](https://togithub.com/run-llama/llama_index/issues/10207))

##### Bug Fixes / Nits

- Fix duplicated `file_id` on openai assistant ([#10223](https://togithub.com/run-llama/llama_index/issues/10223))
- Fix circular dependencies for programs ([#10222](https://togithub.com/run-llama/llama_index/issues/10222))
- Run `TitleExtractor` on groups of nodes from the same parent document ([#10209](https://togithub.com/run-llama/llama_index/issues/10209))
- Improve vectara auto-retrieval ([#10195](https://togithub.com/run-llama/llama_index/issues/10195))

### [`v0.9.35`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0935---2024-01-22)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.34...v0.9.35)

##### New Features

- `beautifulsoup4` dependency to new optional extra `html` ([#10156](https://togithub.com/run-llama/llama_index/issues/10156))
- make `BaseNode.hash` an `@property` ([#10163](https://togithub.com/run-llama/llama_index/issues/10163))
- Neutrino ([#10150](https://togithub.com/run-llama/llama_index/issues/10150))
- feat: JSONalyze Query Engine ([#10067](https://togithub.com/run-llama/llama_index/issues/10067))
- [wip] add custom hybrid retriever notebook ([#10164](https://togithub.com/run-llama/llama_index/issues/10164))
- add from_collection method to ChromaVectorStore class ([#10167](https://togithub.com/run-llama/llama_index/issues/10167))
- CLI experiment v0: ask ([#10168](https://togithub.com/run-llama/llama_index/issues/10168))
- make react agent prompts more editable ([#10154](https://togithub.com/run-llama/llama_index/issues/10154))
- Add agent query pipeline ([#10180](https://togithub.com/run-llama/llama_index/issues/10180))

##### Bug Fixes / Nits

- Update supabase vecs metadata filter function to support multiple fields ([#10133](https://togithub.com/run-llama/llama_index/issues/10133))
- Bugfix/code improvement for LanceDB integration ([#10144](https://togithub.com/run-llama/llama_index/issues/10144))
- `beautifulsoup4` optional dependency ([#10156](https://togithub.com/run-llama/llama_index/issues/10156))
- Fix qdrant aquery hybrid search ([#10159](https://togithub.com/run-llama/llama_index/issues/10159))
- make hash a `@property` ([#10163](https://togithub.com/run-llama/llama_index/issues/10163))
- fix: bug on poetry install of `llama-index[postgres]` ([#10171](https://togithub.com/run-llama/llama_index/issues/10171))
- [doc] update jaguar vector store documentation ([#10179](https://togithub.com/run-llama/llama_index/issues/10179))
- Remove use of not-launched finish_message ([#10188](https://togithub.com/run-llama/llama_index/issues/10188))
- Updates to Lantern vector stores docs ([#10192](https://togithub.com/run-llama/llama_index/issues/10192))
- fix typo in multi_document_agents.ipynb ([#10196](https://togithub.com/run-llama/llama_index/issues/10196))

### [`v0.9.34`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0934---2024-01-19)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.33...v0.9.34)

##### New Features

- Added SageMakerEndpointLLM ([#10140](https://togithub.com/run-llama/llama_index/issues/10140))
- Added support for Qdrant filters ([#10136](https://togithub.com/run-llama/llama_index/issues/10136))

##### Bug Fixes / Nits

- Update bedrock utils for Claude 2:1 ([#10139](https://togithub.com/run-llama/llama_index/issues/10139))
- BugFix: deadlocks using multiprocessing ([#10125](https://togithub.com/run-llama/llama_index/issues/10125))
### [`v0.9.33`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0933---2024-01-17)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.32...v0.9.33)

##### New Features

- Added RankGPT as a postprocessor ([#10054](https://togithub.com/run-llama/llama_index/issues/10054))
- Ensure backwards compatibility with new Pinecone client version bifurcation ([#9995](https://togithub.com/run-llama/llama_index/issues/9995))
- Recursive retriever all the things ([#10019](https://togithub.com/run-llama/llama_index/issues/10019))

##### Bug Fixes / Nits

- BugFix: When using markdown element parser on a table containing comma ([#9926](https://togithub.com/run-llama/llama_index/issues/9926))
- extend auto-retrieval notebook ([#10065](https://togithub.com/run-llama/llama_index/issues/10065))
- Updated the Attribute name in llm_generators ([#10070](https://togithub.com/run-llama/llama_index/issues/10070))
- jaguar vector store add text_tag to add_kwargs in add() ([#10057](https://togithub.com/run-llama/llama_index/issues/10057))

### [`v0.9.32`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0932---2024-01-16)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.31...v0.9.32)

##### New Features

- added query-time row retrieval + fix nits with query pipeline over structured data ([#10061](https://togithub.com/run-llama/llama_index/issues/10061))
- ReActive Agents w/ Context + updated stale link ([#10058](https://togithub.com/run-llama/llama_index/issues/10058))

### [`v0.9.31`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0931---2024-01-15)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.30...v0.9.31)

##### New Features

- Added selectors and routers to query pipeline ([#9979](https://togithub.com/run-llama/llama_index/issues/9979))
- Added sparse-only search to qdrant vector store ([#10041](https://togithub.com/run-llama/llama_index/issues/10041))
- Added Tonic evaluators ([#10000](https://togithub.com/run-llama/llama_index/issues/10000))
- Adding async support to firestore docstore ([#9983](https://togithub.com/run-llama/llama_index/issues/9983))
- Implement mongodb docstore `put_all` method ([#10014](https://togithub.com/run-llama/llama_index/issues/10014))

##### Bug Fixes / Nits

- Properly truncate sql results based on `max_string_length` ([#10015](https://togithub.com/run-llama/llama_index/issues/10015))
- Fixed `node.resolve_image()` for base64 strings ([#10026](https://togithub.com/run-llama/llama_index/issues/10026))
- Fixed cohere system prompt role ([#10020](https://togithub.com/run-llama/llama_index/issues/10020))
- Remove redundant token counting operation in SentenceSplitter ([#10053](https://togithub.com/run-llama/llama_index/issues/10053))

### [`v0.9.30`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0930---2024-01-11)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.29...v0.9.30)

##### New Features

- Implements a Node Parser using embeddings for Semantic Splitting ([#9988](https://togithub.com/run-llama/llama_index/issues/9988))
- Add Anyscale Embedding model support ([#9470](https://togithub.com/run-llama/llama_index/issues/9470))

##### Bug Fixes / Nits

- nit: fix pandas get prompt ([#10001](https://togithub.com/run-llama/llama_index/issues/10001))
- Fix: Token counting bug ([#9912](https://togithub.com/run-llama/llama_index/issues/9912))
- Bump jinja2 from 3.1.2 to 3.1.3 ([#9997](https://togithub.com/run-llama/llama_index/issues/9997))
- Fix corner case for qdrant hybrid search ([#9993](https://togithub.com/run-llama/llama_index/issues/9993))
- Bugfix: sphinx generation errors ([#9944](https://togithub.com/run-llama/llama_index/issues/9944))
- Fix: `language` used before assignment in `CodeSplitter` ([#9987](https://togithub.com/run-llama/llama_index/issues/9987))
- fix inconsistent name "text_parser" in section "Use a Text Splitter… ([#9980](https://togithub.com/run-llama/llama_index/issues/9980))
- :bug: fixing batch size ([#9982](https://togithub.com/run-llama/llama_index/issues/9982))
- add auto-async execution to query pipelines ([#9967](https://togithub.com/run-llama/llama_index/issues/9967))
- :bug: fixing init ([#9977](https://togithub.com/run-llama/llama_index/issues/9977))
- Parallel Loading with SimpleDirectoryReader ([#9965](https://togithub.com/run-llama/llama_index/issues/9965))
- do not force delete an index in milvus ([#9974](https://togithub.com/run-llama/llama_index/issues/9974))

### [`v0.9.29`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0929---2024-01-10)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.28.post2...v0.9.29)

##### New Features

- Added support for together.ai models ([#9962](https://togithub.com/run-llama/llama_index/issues/9962))
- Added support for batch redis/firestore kvstores, async firestore kvstore ([#9827](https://togithub.com/run-llama/llama_index/issues/9827))
- Parallelize `IngestionPipeline.run()` ([#9920](https://togithub.com/run-llama/llama_index/issues/9920))
- Added new query pipeline components: function, argpack, kwargpack ([#9952](https://togithub.com/run-llama/llama_index/issues/9952))

##### Bug Fixes / Nits

- Updated optional langchain imports to avoid warnings ([#9964](https://togithub.com/run-llama/llama_index/issues/9964))
- Raise an error if empty nodes are embedded ([#9953](https://togithub.com/run-llama/llama_index/issues/9953))

### [`v0.9.28.post2`](https://togithub.com/run-llama/llama_index/compare/v0.9.28.post1...v0.9.28.post2)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.28.post1...v0.9.28.post2)

### [`v0.9.28.post1`](https://togithub.com/run-llama/llama_index/compare/v0.9.28...v0.9.28.post1)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.28...v0.9.28.post1)

### [`v0.9.28`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0928---2024-01-09)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.27...v0.9.28)

##### New Features

- Added support for Nvidia TensorRT LLM ([#9842](https://togithub.com/run-llama/llama_index/issues/9842))
- Allow `tool_choice` to be set during agent construction ([#9924](https://togithub.com/run-llama/llama_index/issues/9924))
- Added streaming support for `QueryPipeline` ([#9919](https://togithub.com/run-llama/llama_index/issues/9919))

##### Bug Fixes / Nits

- Set consistent doc-ids for llama-index readers ([#9923](https://togithub.com/run-llama/llama_index/issues/9923), [#9916](https://togithub.com/run-llama/llama_index/issues/9916))
- Remove unneeded model inputs for HuggingFaceEmbedding ([#9922](https://togithub.com/run-llama/llama_index/issues/9922))
- Propagate `tool_choice` flag to downstream APIs ([#9901](https://togithub.com/run-llama/llama_index/issues/9901))
- Add `chat_store_key` to chat memory `from_defaults()` ([#9928](https://togithub.com/run-llama/llama_index/issues/9928))

### [`v0.9.27`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0927---2024-01-08)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.26...v0.9.27)

##### New Features

- add query pipeline ([#9908](https://togithub.com/run-llama/llama_index/issues/9908))
- Feature: Azure Multi Modal (fixes: [#9471](https://togithub.com/run-llama/llama_index/issues/9471)) ([#9843](https://togithub.com/run-llama/llama_index/issues/9843))
- add postgres docker ([#9906](https://togithub.com/run-llama/llama_index/issues/9906))
- Vectara auto_retriever ([#9865](https://togithub.com/run-llama/llama_index/issues/9865))
- Redis Chat Store support ([#9880](https://togithub.com/run-llama/llama_index/issues/9880))
- move more classes to core ([#9871](https://togithub.com/run-llama/llama_index/issues/9871))

##### Bug Fixes / Nits / Smaller Features

- Propagate `tool_choice` flag to downstream APIs ([#9901](https://togithub.com/run-llama/llama_index/issues/9901))
- filter out negative indexes from faiss query ([#9907](https://togithub.com/run-llama/llama_index/issues/9907))
- added NE filter for qdrant payloads ([#9897](https://togithub.com/run-llama/llama_index/issues/9897))
- Fix incorrect id assignment in MyScale query result ([#9900](https://togithub.com/run-llama/llama_index/issues/9900))
- Qdrant Text Match Filter ([#9895](https://togithub.com/run-llama/llama_index/issues/9895))
- Fusion top k for hybrid search ([#9894](https://togithub.com/run-llama/llama_index/issues/9894))
- Fix ([#9867](https://togithub.com/run-llama/llama_index/issues/9867)) sync_to_async to avoid blocking during asynchronous calls ([#9869](https://togithub.com/run-llama/llama_index/issues/9869))
- A single node passed into compute_scores returns as a float ([#9866](https://togithub.com/run-llama/llama_index/issues/9866))
- Remove extra linting steps ([#9878](https://togithub.com/run-llama/llama_index/issues/9878))
- add vectara links ([#9886](https://togithub.com/run-llama/llama_index/issues/9886))

### [`v0.9.26`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0926---2024-01-05)

[Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.25.post1...v0.9.26)

##### New Features

- Added a `BaseChatStore` and `SimpleChatStore` abstraction for dedicated chat memory storage ([#9863](https://togithub.com/run-llama/llama_index/issues/9863))
- Enable custom `tree_sitter` parser to be passed into `CodeSplitter` ([#9845](https://togithub.com/run-llama/llama_index/issues/9845))
- Created a `BaseAutoRetriever` base class, to allow other retrievers to extend to auto modes ([#9846](https://togithub.com/run-llama/llama_index/issues/9846))
- Added support for Nvidia Triton LLM ([#9488](https://togithub.com/run-llama/llama_index/issues/9488))
- Added `DeepEval` one-click observability
([#​9801](https://togithub.com/run-llama/llama_index/issues/9801)) ##### Bug Fixes / Nits - Updated the guidance integration to work with the latest version ([#​9830](https://togithub.com/run-llama/llama_index/issues/9830)) - Made text storage optional for doctores/ingestion pipeline ([#​9847](https://togithub.com/run-llama/llama_index/issues/9847)) - Added missing `sphinx-automodapi` dependency for docs ([#​9852](https://togithub.com/run-llama/llama_index/issues/9852)) - Return actual node ids in weaviate query results ([#​9854](https://togithub.com/run-llama/llama_index/issues/9854)) - Added prompt formatting to LangChainLLM ([#​9844](https://togithub.com/run-llama/llama_index/issues/9844)) ### [`v0.9.25.post1`](https://togithub.com/run-llama/llama_index/compare/v0.9.25...v0.9.25.post1) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.25...v0.9.25.post1) ### [`v0.9.25`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0925---2024-01-03) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.24...v0.9.25) ##### New Features - Added concurrancy limits for dataset generation ([#​9779](https://togithub.com/run-llama/llama_index/issues/9779)) - New `deepeval` one-click observability handler ([#​9801](https://togithub.com/run-llama/llama_index/issues/9801)) - Added jaguar vector store ([#​9754](https://togithub.com/run-llama/llama_index/issues/9754)) - Add beta multimodal ReAct agent ([#​9807](https://togithub.com/run-llama/llama_index/issues/9807)) ##### Bug Fixes / Nits - Changed default batch size for OpenAI embeddings to 100 ([#​9805](https://togithub.com/run-llama/llama_index/issues/9805)) - Use batch size properly for qdrant upserts ([#​9814](https://togithub.com/run-llama/llama_index/issues/9814)) - `_verify_source_safety` uses AST, not regexes, for proper safety checks ([#​9789](https://togithub.com/run-llama/llama_index/issues/9789)) - use provided LLM in element node parsers 
([#​9776](https://togithub.com/run-llama/llama_index/issues/9776)) - updated legacy vectordb loading function to be more robust ([#​9773](https://togithub.com/run-llama/llama_index/issues/9773)) - Use provided http client in AzureOpenAI ([#​9772](https://togithub.com/run-llama/llama_index/issues/9772)) ### [`v0.9.24`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0924---2023-12-30) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.23...v0.9.24) ##### New Features - Add reranker for BEIR evaluation ([#​9743](https://togithub.com/run-llama/llama_index/issues/9743)) - Add Pathway integration. ([#​9719](https://togithub.com/run-llama/llama_index/issues/9719)) - custom agents implementation + notebook ([#​9746](https://togithub.com/run-llama/llama_index/issues/9746)) ##### Bug Fixes / Nits - fix beam search for vllm: add missing parameter ([#​9741](https://togithub.com/run-llama/llama_index/issues/9741)) - Fix alpha for hrbrid search ([#​9742](https://togithub.com/run-llama/llama_index/issues/9742)) - fix token counter ([#​9744](https://togithub.com/run-llama/llama_index/issues/9744)) - BM25 tokenizer lowercase ([#​9745](https://togithub.com/run-llama/llama_index/issues/9745)) ### [`v0.9.23`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0923---2023-12-28) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.22...v0.9.23) ##### Bug Fixes / Nits - docs: fixes qdrant_hybrid.ipynb typos ([#​9729](https://togithub.com/run-llama/llama_index/issues/9729)) - make llm completion program more general ([#​9731](https://togithub.com/run-llama/llama_index/issues/9731)) - Refactor MM Vector store and Index for empty collection ([#​9717](https://togithub.com/run-llama/llama_index/issues/9717)) - Adding IF statement to check for Schema using "Select" ([#​9712](https://togithub.com/run-llama/llama_index/issues/9712)) - allow skipping module loading in `download_module` and `download_llama_pack` 
([#​9734](https://togithub.com/run-llama/llama_index/issues/9734)) ### [`v0.9.22`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0922---2023-12-26) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.21...v0.9.22) ##### New Features - Added `.iter_data()` method to `SimpleDirectoryReader` ([#​9658](https://togithub.com/run-llama/llama_index/issues/9658)) - Added async support to `Ollama` LLM ([#​9689](https://togithub.com/run-llama/llama_index/issues/9689)) - Expanding pinecone filter support for `in` and `not in` ([#​9683](https://togithub.com/run-llama/llama_index/issues/9683)) ##### Bug Fixes / Nits - Improve BM25Retriever performance ([#​9675](https://togithub.com/run-llama/llama_index/issues/9675)) - Improved qdrant hybrid search error handling ([#​9707](https://togithub.com/run-llama/llama_index/issues/9707)) - Fixed `None` handling in `ChromaVectorStore` ([#​9697](https://togithub.com/run-llama/llama_index/issues/9697)) - Fixed postgres schema creation if not existing ([#​9712](https://togithub.com/run-llama/llama_index/issues/9712)) ### [`v0.9.21`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0921---2023-12-23) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.20...v0.9.21) ##### New Features - Added zilliz cloud as a managed index ([#​9605](https://togithub.com/run-llama/llama_index/issues/9605)) ##### Bug Fixes / Nits - Bedrock client and LLM fixes ([#​9671](https://togithub.com/run-llama/llama_index/issues/9671), [#​9646](https://togithub.com/run-llama/llama_index/issues/9646)) ### [`v0.9.20`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0920---2023-12-21) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.19...v0.9.20) ##### New Features - Added `insert_batch_size` to limit number of embeddings held in memory when creating an index, defaults to 2048 ([#​9630](https://togithub.com/run-llama/llama_index/issues/9630)) - Improve 
auto-retrieval ([#​9647](https://togithub.com/run-llama/llama_index/issues/9647)) - Configurable Node ID Generating Function ([#​9574](https://togithub.com/run-llama/llama_index/issues/9574)) - Introduced action input parser ([#​9575](https://togithub.com/run-llama/llama_index/issues/9575)) - qdrant sparse vector support ([#​9644](https://togithub.com/run-llama/llama_index/issues/9644)) - Introduced upserts and delete in ingestion pipeline ([#​9643](https://togithub.com/run-llama/llama_index/issues/9643)) - Add Zilliz Cloud Pipeline as a Managed Index ([#​9605](https://togithub.com/run-llama/llama_index/issues/9605)) - Add support for Google Gemini models via VertexAI ([#​9624](https://togithub.com/run-llama/llama_index/issues/9624)) - support allowing additional metadata filters on autoretriever ([#​9662](https://togithub.com/run-llama/llama_index/issues/9662)) ##### Bug Fixes / Nits - Fix pip install commands in LM Format Enforcer notebooks ([#​9648](https://togithub.com/run-llama/llama_index/issues/9648)) - Fixing some more links and documentations ([#​9633](https://togithub.com/run-llama/llama_index/issues/9633)) - some bedrock nits and fixes ([#​9646](https://togithub.com/run-llama/llama_index/issues/9646)) ### [`v0.9.19`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0919---2023-12-20) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.18...v0.9.19) ##### New Features - new llama datasets `LabelledEvaluatorDataset` & `LabelledPairwiseEvaluatorDataset` ([#​9531](https://togithub.com/run-llama/llama_index/issues/9531)) ### [`v0.9.18`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0918---2023-12-20) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.17...v0.9.18) ##### New Features - multi-doc auto-retrieval guide ([#​9631](https://togithub.com/run-llama/llama_index/issues/9631)) ##### Bug Fixes / Nits - fix(vllm): make Vllm's 'complete' method behave the same as other LLM class 
([#​9634](https://togithub.com/run-llama/llama_index/issues/9634)) - FIx Doc links and other documentation issue ([#​9632](https://togithub.com/run-llama/llama_index/issues/9632)) ### [`v0.9.17`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0917---2023-12-19) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.16.post1...v0.9.17) ##### New Features - \[example] adding user feedback ([#​9601](https://togithub.com/run-llama/llama_index/issues/9601)) - FEATURE: Cohere ReRank Relevancy Metric for Retrieval Eval ([#​9495](https://togithub.com/run-llama/llama_index/issues/9495)) ##### Bug Fixes / Nits - Fix Gemini Chat Mode ([#​9599](https://togithub.com/run-llama/llama_index/issues/9599)) - Fixed `types-protobuf` from being a primary dependency ([#​9595](https://togithub.com/run-llama/llama_index/issues/9595)) - Adding an optional auth token to the TextEmbeddingInference class ([#​9606](https://togithub.com/run-llama/llama_index/issues/9606)) - fix: out of index get latest tool call ([#​9608](https://togithub.com/run-llama/llama_index/issues/9608)) - fix(azure_openai.py): add missing return to subclass override ([#​9598](https://togithub.com/run-llama/llama_index/issues/9598)) - fix mix up b/w 'formatted' and 'format' params for ollama api call ([#​9594](https://togithub.com/run-llama/llama_index/issues/9594)) ### [`v0.9.16.post1`](https://togithub.com/run-llama/llama_index/compare/v0.9.16...v0.9.16.post1) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.16...v0.9.16.post1) ### [`v0.9.16`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0916---2023-12-18) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.15.post2...v0.9.16) ##### New Features - agent refactor: step-wise execution ([#​9584](https://togithub.com/run-llama/llama_index/issues/9584)) - Add OpenRouter, with Mixtral demo ([#​9464](https://togithub.com/run-llama/llama_index/issues/9464)) - Add hybrid 
search to neo4j vector store ([#​9530](https://togithub.com/run-llama/llama_index/issues/9530)) - Add support for auth service accounts for Google Semantic Retriever ([#​9545](https://togithub.com/run-llama/llama_index/issues/9545)) ##### Bug Fixes / Nits - Fixed missing `default=None` for `LLM.system_prompt` ([#​9504](https://togithub.com/run-llama/llama_index/issues/9504)) - Fix [#​9580](https://togithub.com/run-llama/llama_index/issues/9580) : Incorporate metadata properly ([#​9582](https://togithub.com/run-llama/llama_index/issues/9582)) - Integrations: Gradient\[Embeddings,LLM] - sdk-upgrade ([#​9528](https://togithub.com/run-llama/llama_index/issues/9528)) - Add mixtral 8x7b model to anyscale available models ([#​9573](https://togithub.com/run-llama/llama_index/issues/9573)) - Gemini Model Checks ([#​9563](https://togithub.com/run-llama/llama_index/issues/9563)) - Update OpenAI fine-tuning with latest changes ([#​9564](https://togithub.com/run-llama/llama_index/issues/9564)) - fix/Reintroduce `WHERE` filter to the Sparse Query for PgVectorStore ([#​9529](https://togithub.com/run-llama/llama_index/issues/9529)) - Update Ollama API to ollama v0.1.16 ([#​9558](https://togithub.com/run-llama/llama_index/issues/9558)) - ollama: strip invalid `formatted` option ([#​9555](https://togithub.com/run-llama/llama_index/issues/9555)) - add a device in optimum push [#​9541](https://togithub.com/run-llama/llama_index/issues/9541) ([#​9554](https://togithub.com/run-llama/llama_index/issues/9554)) - Title vs content difference for Gemini Embedding ([#​9547](https://togithub.com/run-llama/llama_index/issues/9547)) - fix pydantic fields to float ([#​9542](https://togithub.com/run-llama/llama_index/issues/9542)) ### [`v0.9.15.post2`](https://togithub.com/run-llama/llama_index/compare/v0.9.15.post1...v0.9.15.post2) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.15.post1...v0.9.15.post2) ### 
[`v0.9.15.post1`](https://togithub.com/run-llama/llama_index/compare/v0.9.15...v0.9.15.post1) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.15...v0.9.15.post1) ### [`v0.9.15`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0915---2023-12-13) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.14.post3...v0.9.15) ##### New Features - Added full support for Google Gemini text+vision models ([#​9452](https://togithub.com/run-llama/llama_index/issues/9452)) - Added new Google Semantic Retriever ([#​9440](https://togithub.com/run-llama/llama_index/issues/9440)) - added `from_existing()` method + async support to OpenAI assistants ([#​9367](https://togithub.com/run-llama/llama_index/issues/9367)) ##### Bug Fixes / Nits - Fixed huggingface LLM system prompt and messages to prompt ([#​9463](https://togithub.com/run-llama/llama_index/issues/9463)) - Fixed ollama additional kwargs usage ([#​9455](https://togithub.com/run-llama/llama_index/issues/9455)) ### [`v0.9.14.post3`](https://togithub.com/run-llama/llama_index/compare/v0.9.14.post2...v0.9.14.post3) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.14.post2...v0.9.14.post3) ### [`v0.9.14.post2`](https://togithub.com/run-llama/llama_index/compare/v0.9.14.post1...v0.9.14.post2) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.14.post1...v0.9.14.post2) ### [`v0.9.14.post1`](https://togithub.com/run-llama/llama_index/compare/v0.9.14...v0.9.14.post1) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.14...v0.9.14.post1) ### [`v0.9.14`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0914---2023-12-11) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.13...v0.9.14) ##### New Features - Add MistralAI LLM ([#​9444](https://togithub.com/run-llama/llama_index/issues/9444)) - Add MistralAI Embeddings 
([#​9441](https://togithub.com/run-llama/llama_index/issues/9441)) - Add `Ollama` Embedding class ([#​9341](https://togithub.com/run-llama/llama_index/issues/9341)) - Add `FlagEmbeddingReranker` for reranking ([#​9285](https://togithub.com/run-llama/llama_index/issues/9285)) - feat: PgVectorStore support advanced metadata filtering ([#​9377](https://togithub.com/run-llama/llama_index/issues/9377)) - Added `sql_only` parameter to SQL query engines to avoid executing SQL ([#​9422](https://togithub.com/run-llama/llama_index/issues/9422)) ##### Bug Fixes / Nits - Feat/PgVector Support custom hnsw.ef_search and ivfflat.probes ([#​9420](https://togithub.com/run-llama/llama_index/issues/9420)) - fix F1 score definition, update copyright year ([#​9424](https://togithub.com/run-llama/llama_index/issues/9424)) - Change more than one image input for Replicate Multi-modal models from error to warning ([#​9360](https://togithub.com/run-llama/llama_index/issues/9360)) - Removed GPT-Licensed `aiostream` dependency ([#​9403](https://togithub.com/run-llama/llama_index/issues/9403)) - Fix result of BedrockEmbedding with Cohere model ([#​9396](https://togithub.com/run-llama/llama_index/issues/9396)) - Only capture valid tool names in react agent ([#​9412](https://togithub.com/run-llama/llama_index/issues/9412)) - Fixed `top_k` being multiplied by 10 in azure cosmos ([#​9438](https://togithub.com/run-llama/llama_index/issues/9438)) - Fixed hybrid search for OpenSearch ([#​9430](https://togithub.com/run-llama/llama_index/issues/9430)) ##### Breaking Changes - Updated the base `LLM` interface to match `LLMPredictor` ([#​9388](https://togithub.com/run-llama/llama_index/issues/9388)) - Deprecated `LLMPredictor` ([#​9388](https://togithub.com/run-llama/llama_index/issues/9388)) ### [`v0.9.13`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0913---2023-12-06) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.12...v0.9.13) ##### New Features - Added 
batch prediction support for `LabelledRagDataset` ([#​9332](https://togithub.com/run-llama/llama_index/issues/9332)) ##### Bug Fixes / Nits - Fixed save and load for faiss vector store ([#​9330](https://togithub.com/run-llama/llama_index/issues/9330)) ### [`v0.9.12`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0912---2023-12-05) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.11.post1...v0.9.12) ##### New Features - Added an option `reuse_client` to openai/azure to help with async timeouts. Set to `False` to see improvements ([#​9301](https://togithub.com/run-llama/llama_index/issues/9301)) - Added support for `vLLM` llm ([#​9257](https://togithub.com/run-llama/llama_index/issues/9257)) - Add support for python 3.12 ([#​9304](https://togithub.com/run-llama/llama_index/issues/9304)) - Support for `claude-2.1` model name ([#​9275](https://togithub.com/run-llama/llama_index/issues/9275)) ##### Bug Fixes / Nits - Fix embedding format for bedrock cohere embeddings ([#​9265](https://togithub.com/run-llama/llama_index/issues/9265)) - Use `delete_kwargs` for filtering in weaviate vector store ([#​9300](https://togithub.com/run-llama/llama_index/issues/9300)) - Fixed automatic qdrant client construction ([#​9267](https://togithub.com/run-llama/llama_index/issues/9267)) ### [`v0.9.11.post1`](https://togithub.com/run-llama/llama_index/compare/v0.9.11...v0.9.11.post1) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.11...v0.9.11.post1) ### [`v0.9.11`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0911---2023-12-03) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.10...v0.9.11) ##### New Features - Make `reference_contexts` optional in `LabelledRagDataset` ([#​9266](https://togithub.com/run-llama/llama_index/issues/9266)) - Re-organize `download` module ([#​9253](https://togithub.com/run-llama/llama_index/issues/9253)) - Added document management to ingestion 
pipeline ([#​9135](https://togithub.com/run-llama/llama_index/issues/9135)) - Add docs for `LabelledRagDataset` ([#​9228](https://togithub.com/run-llama/llama_index/issues/9228)) - Add submission template notebook and other doc updates for `LabelledRagDataset` ([#​9273](https://togithub.com/run-llama/llama_index/issues/9273)) ##### Bug Fixes / Nits - Convert numpy to list for `InstructorEmbedding` ([#​9255](https://togithub.com/run-llama/llama_index/issues/9255)) ### [`v0.9.10`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#0910---2023-11-30) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.9...v0.9.10) ##### New Features - Advanced Metadata filter for vector stores ([#​9216](https://togithub.com/run-llama/llama_index/issues/9216)) - Amazon Bedrock Embeddings New models ([#​9222](https://togithub.com/run-llama/llama_index/issues/9222)) - Added PromptLayer callback integration ([#​9190](https://togithub.com/run-llama/llama_index/issues/9190)) - Reuse file ids for `OpenAIAssistant` ([#​9125](https://togithub.com/run-llama/llama_index/issues/9125)) ##### Breaking Changes / Deprecations - Deprecate ExactMatchFilter ([#​9216](https://togithub.com/run-llama/llama_index/issues/9216)) ### [`v0.9.9`](https://togithub.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#099---2023-11-29) [Compare Source](https://togithub.com/run-llama/llama_index/compare/v0.9.8.post1...v0.9.9) ##### New Features - Add new abstractions for `LlamaDataset`'s ([#​9165](https://togithub.com/run-llama/llama_index/issues/9165)) - Add metadata filtering and MMR mode support for `AstraDBVectorStore` ([#​9193](https://togithub.com/run-llama/llama_index/issues/9193)) - Allowing newest `scikit-learn` versions ([#​9213](https://togithub.com/run-llama/llama_index/issues/9213)) ##### Breaking Changes / Deprecations - Added `LocalAI` demo and began deprecation cycle ([#​9151](https://togithub.com/run-llama/llama_index/issues/9151)) - Deprecate `QueryResponseDataset` 
and `DatasetGenerator` of `evaluation` module ([#​9165](https://togithub.com/run-llama/llama_index/issues/9165)) ##### Bug Fixes / Nits - Fix bug in `download_utils.py` with pointing to wrong repo ([#​9215](https://togithub.com/run-llama/l
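
The security fixes motivating this update (CVE-2023-39662's `exec` in PandasQueryEngine, CVE-2024-4181's `eval` in RunGptLLM) and the v0.9.25 changelog entry "`_verify_source_safety` uses AST, not regexes" all concern the same failure mode: untrusted strings reaching `eval`/`exec`. The sketch below is a hypothetical, standalone illustration of why AST-based checking is more robust than pattern matching; it is not llama-index's actual `_verify_source_safety` implementation.

```python
import ast

# Names that should never be callable in model-generated source.
BANNED_CALLS = {"eval", "exec", "__import__"}

def is_source_safe(source: str) -> bool:
    """Return False if the source calls eval/exec/__import__ or touches dunders.

    A naive substring or regex check for "eval(" misses trivial variants
    such as "eval (x)"; inspecting the parsed AST catches every direct
    call regardless of spacing or formatting.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # unparseable input is rejected outright
    for node in ast.walk(tree):
        # direct calls to banned builtins, e.g. eval("...")
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return False
        # dunder attribute access, a common sandbox-escape building block
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            return False
    return True

print(is_source_safe("df['a'].sum()"))  # True: a harmless expression
print(is_source_safe("eval ('2+2')"))   # False: the AST sees the call despite the space
print(is_source_safe("().__class__"))   # False: dunder access rejected
```

Note that denylist checks like this are best-effort hardening, which is why the upstream advisories recommend upgrading rather than filtering alone.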

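Among the changelog entries, v0.9.20's `insert_batch_size` caps how many embeddings are held in memory while an index is built (default 2048). The underlying pattern is plain batching. This sketch uses a toy `embed` callable and a list-based "index" purely for illustration; it is not llama-index code.

```python
from typing import Callable, Iterable, Iterator, List, Sequence

def batched(items: Iterable[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive batches so at most batch_size items are held at once."""
    batch: List[str] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

def build_index(nodes: Iterable[str],
                embed: Callable[[Sequence[str]], List[List[float]]],
                insert_batch_size: int = 2048) -> List[List[float]]:
    """Embed and insert nodes batch by batch instead of all at once,
    so peak memory is roughly one batch of embeddings."""
    index: List[List[float]] = []
    for batch in batched(nodes, insert_batch_size):
        index.extend(embed(batch))
    return index

# Toy embedder: a one-dimensional "embedding" equal to the text length.
vectors = build_index((f"node-{i}" for i in range(5)),
                      lambda texts: [[float(len(t))] for t in texts],
                      insert_batch_size=2)
print(vectors)  # five 1-d vectors, produced in batches of 2, 2, and 1
```

The same trade-off applies anywhere embeddings are bulk-inserted: a smaller batch size lowers peak memory at the cost of more round trips to the embedding backend.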
Configuration

πŸ“… Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

β™» Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

πŸ”• Ignore: Close this PR and you won't be reminded about this update again.



This PR has been generated by Renovate Bot.

stoat-app[bot] commented 6 months ago

Easy and customizable dashboards for your build system. Learn more about Stoat.

Static Hosting

| Name | Link | Commit | Status |
| --- | --- | --- | --- |
| api-coverage | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| rtc-coverage | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| core-coverage | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| cron-coverage | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| email-coverage | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| worker-coverage | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| api-test-results | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| graphql-coverage | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| rtc-test-results | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| core-test-results | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| cron-test-results | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| email-test-results | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| worker-test-results | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |
| graphql-test-results | Visit | e4268c9063ff0d0d2b92237416c3631924f86f7c | βœ… |

Job Runtime

*(job runtime chart image)*
