Open dasheffie opened 7 months ago
@dasheffie, linking the PR for this - #2125
I ran into this today. Is there a workaround to avoid this behavior? I wanted to use the pageContent (JavaScript API via LangChain) as an actual content store and present the data as it was stored (think a user's notes on a topic/thing), but the newlines are getting crushed on insert.
I could dump another copy into the metadata, but that seems wasteful?
@gbarton, you can use Langchain's Embeddings for this. Chroma has an adapter for it:
# pip install chromadb==0.5.13 langchain langchain-openai langchain-chroma
import os

import chromadb
from chromadb.utils.embedding_functions import create_langchain_embedding
from langchain_openai import OpenAIEmbeddings

langchain_embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    api_key=os.environ["OPENAI_API_KEY"],
)
ef = create_langchain_embedding(langchain_embeddings)
client = chromadb.PersistentClient(path="/chroma-data")
collection = client.get_or_create_collection(name="my_collection", embedding_function=ef)
collection.add(ids=["1"], documents=["test document goes here"])
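If you want to confirm exactly what text reaches the embedder, one quick check is to wire in a throwaway embedding function that records its input. A minimal sketch, assuming chromadb 0.5.x's EmbeddingFunction protocol (RecordingEF and the dummy vectors are just for illustration):

import chromadb
from chromadb import Documents, EmbeddingFunction, Embeddings

class RecordingEF(EmbeddingFunction):
    """Records exactly what text reaches the embedder; returns dummy vectors."""
    def __init__(self):
        self.seen = []
    def __call__(self, input: Documents) -> Embeddings:
        self.seen.extend(input)
        return [[0.0, 0.0, 0.0] for _ in input]

recording_ef = RecordingEF()
client = chromadb.EphemeralClient()
probe = client.get_or_create_collection(name="probe", embedding_function=recording_ef)
probe.add(ids=["1"], documents=["line one\nline two"])
print(recording_ef.seen)  # the repr shows whether the \n survived the trip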
Thank you for your reply! I do currently use my own embeddings; is it meant to bypass the newline stripping? I forgot to clarify that I'm using langchainjs in a web service. It's pretty similar, something like:
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
import { HtmlToTextTransformer } from "@langchain/community/document_transformers/html_to_text";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { Document } from "@langchain/core/documents";
import { v4 as uuidv4 } from "uuid";

const embeddings = new OllamaEmbeddings({
  model: embeddingModel, // model name and base URL come from our config
  baseUrl,
});
const store = new Chroma(embeddings, {
  collectionName: "store",
  url: endpoint,
});
/**
* splits the document into smaller pieces
*/
private async split(document: Document) {
  const transformer = new HtmlToTextTransformer();
  const sequence = transformer.pipe(
    new RecursiveCharacterTextSplitter({
      chunkSize: 1000,
      chunkOverlap: 0,
    }),
  );
  return await sequence.invoke([document]);
}
/**
 * Splits the document into smaller pieces and writes them to the document store.
 */
private async write(document: Document) {
  const docs = await this.split(document);
  const ids = docs.map(() => uuidv4());
  await store.addDocuments(docs, { ids });
  return docs;
}
async create(doc: CRMDoc) {
  const startTime = new Date().getTime();
  const content = doc.text;
  if (content.length === 0) {
    return; // TODO: notify error
  }
  const { text, ...rest } = doc;
  const document: Document = {
    pageContent: content,
    metadata: rest,
  };
  const docs = await this.write(document);
}
Actually, using my own embeddings does work. The answer was in my split function: HtmlToTextTransformer or the RecursiveCharacterTextSplitter was also stripping out newlines. Thanks for pointing me in the right direction :)
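In case anyone else hits this: the thing that pinpointed it for me was printing the repr() of each chunk after the splitter stage (and again after the HTML transformer stage). A rough sketch in Python, on the assumption that the Python RecursiveCharacterTextSplitter behaves closely enough to the JS one to show where the newlines go:

from langchain_text_splitters import RecursiveCharacterTextSplitter

text = "Title\n\nFirst paragraph.\nSecond line of it.\n\nSecond paragraph."
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
for chunk in splitter.split_text(text):
    print(repr(chunk))  # repr makes surviving (or missing) \n characters visible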
@gbarton, I've now added a LangchainJS Embeddings integration that will help with your case: #2945
thank you! Much appreciated :)
I just want to add that changing newlines to spaces also affects the number of tokens, and thus makes it impossible to compute whether the original text still fits into the context length of the OpenAI model.
In my case this caused a bug when switching from a different embedding function to the OpenAI one.
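To make that concrete, a small illustration (assuming tiktoken's cl100k_base encoding, which OpenAI's recent embedding models use; the substitution can change the token count, so a budget computed on the original text no longer matches what is embedded):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
original = "first line\nsecond line\n\nthird line"
preprocessed = original.replace("\n", " ")  # what the embedding function actually sends

print(len(enc.encode(original)))      # the count you budgeted against the context length
print(len(enc.encode(preprocessed)))  # the count the model actually sees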
What happened?
Chroma (as of v0.5.0) removes newline characters before generating embeddings, even though this is no longer necessary for post-V1 models; it negatively impacts similarity search results and makes outputs harder to predict (openai issue 418, langchain issue 3853).
In openai issue 418, BorisPower explains that the preprocessing of newline characters should be removed because it is no longer needed for models like "text-embedding-ada-002". However, if you run the code below, you will see that chroma is still replacing newline characters with spaces before generating embeddings, leading to embeddings that differ from the embeddings generated by the openai package.
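A minimal sketch of that comparison (not the verbatim snippet; assumes chromadb 0.5.x and the openai>=1.0 client, and since identical API calls can return slightly different vectors, compare similarity rather than exact equality):

import os
import numpy as np
from openai import OpenAI
from chromadb.utils import embedding_functions

text = "line one\nline two"

# Direct call through the openai package: the text goes through unchanged.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
direct = client.embeddings.create(
    model="text-embedding-ada-002", input=[text]
).data[0].embedding

# Same text through chroma's OpenAI embedding function, which (per this
# report) replaces "\n" with " " before calling the API.
ef = embedding_functions.OpenAIEmbeddingFunction(
    api_key=os.environ["OPENAI_API_KEY"], model_name="text-embedding-ada-002"
)
via_chroma = ef([text])[0]

cosine = np.dot(direct, via_chroma) / (np.linalg.norm(direct) * np.linalg.norm(via_chroma))
print(cosine)  # noticeably below 1.0 when the preprocessing changed the input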
Also, could someone please confirm that the only pre-processing of text before embedding that happens in chroma is the replacement of newline characters? We do not feel comfortable using a chroma embedding function for our DB unless the preprocessing is transparent.
Versions
Chroma v0.5.0, Python 3.11.7, Debian 12