parthsarthi03 / raptor

The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
https://arxiv.org/abs/2401.18059
MIT License

UMAP n_neighbors must be greater than 1 #30

Open jeffreyzhanghc opened 6 months ago

jeffreyzhanghc commented 6 months ago

Hi team, I am currently building on RAPTOR for open-domain QA as follows: my data is stored as question-answer pairs, and when a user submits a query, I match it against the top-k most related questions in the data, concatenate their answers, and then use RAPTOR to produce an answer to the input query. However, as the documents passed to RA.add_documents(docs) get longer, I get an "n_neighbors must be greater than 1" error from the UMAP part, at fit_transform in this code chunk:

```python
def global_cluster_embeddings(
    embeddings: np.ndarray,
    dim: int,
    n_neighbors: Optional[int] = None,
    metric: str = "cosine",
) -> np.ndarray:
    if n_neighbors is None:
        n_neighbors = int((len(embeddings) - 1) ** 0.5)
    reduced_embeddings = umap.UMAP(
        n_neighbors=n_neighbors, n_components=dim, metric=metric
    ).fit_transform(embeddings)
    return reduced_embeddings
```

Is there any way to resolve the UMAP issue in this case?
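For context (an observation on the arithmetic, not from the original report): the heuristic collapses whenever a cluster contains four or fewer embeddings, since `int((len(embeddings) - 1) ** 0.5)` then evaluates to 1 (or 0 for a single embedding), and UMAP rejects any `n_neighbors <= 1`:

```python
# Sketch of the failure mode: the square-root heuristic vs. cluster size.
for n_points in (2, 3, 4, 5, 10, 26):
    print(n_points, "->", int((n_points - 1) ** 0.5))
# 2 -> 1, 3 -> 1, 4 -> 1, 5 -> 2, 10 -> 3, 26 -> 5
```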

cuichenxu commented 5 months ago

I hit the same case. Have you solved it?

fatlism commented 5 months ago

I also encountered the same problem. Is there any solution?

isConic commented 5 months ago

@jeffreyzhanghc
can you pinpoint where in the repo this line of code is?

jeffreyzhanghc commented 5 months ago

@cuichenxu @fatlism Hi, I haven't fully understood the cause yet, but my initial guess is that during the embedding process I used the original RAPTOR models on Chinese content, which with longer contexts triggers this bug very often; once I customized my embedding/summarization model for Chinese, it stopped showing up for a while. My suggestion: if you are processing longer text in a different language, consider trying an embedding method customized for that language, though I am not sure that solves the underlying issue.
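For illustration (a minimal sketch, not code from this thread): assuming RAPTOR's BaseEmbeddingModel interface with a create_embedding method, as used by SBertEmbeddingModel in raptor/EmbeddingModels.py, a language-specific embedding model could look like this; the multilingual model name is just an example:

```python
from raptor import BaseEmbeddingModel  # assumed export, mirroring SBertEmbeddingModel
from sentence_transformers import SentenceTransformer

class MultilingualEmbeddingModel(BaseEmbeddingModel):
    """Hypothetical drop-in embedding model for non-English (e.g. Chinese) text."""

    def __init__(self, model_name: str = "paraphrase-multilingual-mpnet-base-v2"):
        self.model = SentenceTransformer(model_name)

    def create_embedding(self, text):
        # encode() returns a single numpy vector for a single input string
        return self.model.encode(text)
```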

jeffreyzhanghc commented 5 months ago

> @jeffreyzhanghc can you pinpoint where in the repo this line of code is?

It is under raptor/cluster_utils.py, line 33.

jeffreyzhanghc commented 5 months ago

> @jeffreyzhanghc can you pinpoint where in the repo this line of code is?

And in the umap package it is in umap_.py, line 2379, in .fit, which leads to the error at line 1777 in _validate_parameters().

cuichenxu commented 5 months ago

> Hi, I haven't fully understood the cause yet, but my initial guess is that during the embedding process I used the original RAPTOR models on Chinese content, which with longer contexts triggers this bug very often; once I customized my embedding/summarization model for Chinese, it stopped showing up for a while. My suggestion: if you are processing longer text in a different language, consider trying an embedding method customized for that language, though I am not sure that solves the underlying issue.

Hi, thanks for your insights! I use texts that are English only, and the embedding model is SBertEmbeddingModel from raptor/EmbeddingModels.py, yet it still hits this error. I really don't understand why.

By the way, were you able to run this successfully for your purposes? Could you please share your custom embedding model code? I tried to implement one, but an error occurred.

fatlism commented 5 months ago
```python
if n_neighbors is None:
    # n_neighbors = int((len(embeddings) - 1) ** 0.5)
    n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))
```

I found that the length of the aggregated vector array can be as small as 2. The error only occurs when dimensionality reduction is called without an explicit n_neighbors, because the default heuristic then produces a value below 2. I temporarily solved it with the code above.
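Applied to the function from the issue description, the clamp would look like this (a sketch only: it floors n_neighbors at 2 but does not touch the separate n_components constraints that come up later in this thread):

```python
import numpy as np
import umap
from typing import Optional

def global_cluster_embeddings(
    embeddings: np.ndarray,
    dim: int,
    n_neighbors: Optional[int] = None,
    metric: str = "cosine",
) -> np.ndarray:
    if n_neighbors is None:
        # Floor at 2: UMAP rejects n_neighbors <= 1, and the square-root
        # heuristic yields 1 whenever len(embeddings) <= 4.
        n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))
    return umap.UMAP(
        n_neighbors=n_neighbors, n_components=dim, metric=metric
    ).fit_transform(embeddings)
```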

cuichenxu commented 5 months ago
> ```python
> if n_neighbors is None:
>     # n_neighbors = int((len(embeddings) - 1) ** 0.5)
>     n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))
> ```
>
> I found that the length of the aggregated vector array can be as small as 2. The error only occurs when dimensionality reduction is called without an explicit n_neighbors, because the default heuristic then produces a value below 2. I temporarily solved it with the code above.

How long does it take when the context is long?

fatlism commented 5 months ago
> > ```python
> > if n_neighbors is None:
> >     # n_neighbors = int((len(embeddings) - 1) ** 0.5)
> >     n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))
> > ```
> >
> > I found that the length of the aggregated vector array can be as small as 2. The error only occurs when dimensionality reduction is called without an explicit n_neighbors, because the default heuristic then produces a value below 2. I temporarily solved it with the code above.
>
> How long does it take when the context is long?

A single-threaded run might take several hours.

lixinze777 commented 4 months ago
> ```python
> if n_neighbors is None:
>     # n_neighbors = int((len(embeddings) - 1) ** 0.5)
>     n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))
> ```
>
> I found that the length of the aggregated vector array can be as small as 2. The error only occurs when dimensionality reduction is called without an explicit n_neighbors, because the default heuristic then produces a value below 2. I temporarily solved it with the code above.

I tried this solution and this is what I got:

```
File "/home/miniconda3/envs/lib/python3.8/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py", line 1605, in eigsh
    raise TypeError("Cannot use scipy.linalg.eigh for sparse A with "
TypeError: Cannot use scipy.linalg.eigh for sparse A with k >= N. Use scipy.linalg.eigh(A.toarray()) or reduce k.
```

Wu-tn commented 2 months ago
> > ```python
> > if n_neighbors is None:
> >     # n_neighbors = int((len(embeddings) - 1) ** 0.5)
> >     n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))
> > ```
> >
> > I found that the length of the aggregated vector array can be as small as 2. The error only occurs when dimensionality reduction is called without an explicit n_neighbors, because the default heuristic then produces a value below 2. I temporarily solved it with the code above.
>
> I tried this solution and this is what I got:
>
> ```
> File "/home/miniconda3/envs/lib/python3.8/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py", line 1605, in eigsh
>     raise TypeError("Cannot use scipy.linalg.eigh for sparse A with "
> TypeError: Cannot use scipy.linalg.eigh for sparse A with k >= N. Use scipy.linalg.eigh(A.toarray()) or reduce k.
> ```

I ran into the same error. How did you handle it?

jsvan commented 2 months ago

I don't know about this bug specifically, but I found that updating the requirements list to pull the current versions of all dependencies, instead of the pinned legacy ones, solved most of my problems.

Wu-tn commented 2 months ago

> I don't know about this bug specifically, but I found that updating the requirements list to pull the current versions of all dependencies, instead of the pinned legacy ones, solved most of my problems.

Can you list your requirements versions and your Python version?

Wu-tn commented 2 months ago
> ```python
> if n_neighbors is None:
>     # n_neighbors = int((len(embeddings) - 1) ** 0.5)
>     n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))
> ```
>
> I found that the length of the aggregated vector array can be as small as 2. The error only occurs when dimensionality reduction is called without an explicit n_neighbors, because the default heuristic then produces a value below 2. I temporarily solved it with the code above.

It seems that using the code above triggers another error: ValueError: n_components must be greater than 0.
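A plausible explanation (an assumption, not verified against the repo): flooring n_neighbors lets very small clusters through to UMAP, and if the target dimensionality is then derived from the cluster size, for example as min(dim, len(embeddings) - 2), two embeddings yield n_components = 0, which UMAP also rejects. Flooring that value as well, as in the safe_reduce sketch above, avoids it:

```python
# Hypothetical companion guard: never ask UMAP for zero output dimensions.
n_components = max(1, min(dim, len(embeddings) - 2))
```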

AnhLD2610 commented 1 month ago

I have the same problem. Can anyone fix it?