Hi!
Thanks for publishing the code and the data from your paper! I am trying to experiment with the example code provided in the README, but I get an error when I run `describe_communities_with_plots_complex(G, N=6, data_dir=data_dir_output)`. Here is the traceback:
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[19], line 23
     17 node_embeddings = load_embeddings(f'{data_dir}/{embedding_file}')
     19 visualize_embeddings_2d_pretty_and_sample(node_embeddings,
     20                                           n_clusters=10, n_samples=10,
     21                                           data_dir=data_dir_output, alpha=.7)
---> 23 describe_communities_with_plots_complex(G, N=6, data_dir=data_dir_output)

File ~/anaconda3/envs/repox-llm-new/lib/python3.11/site-packages/GraphReasoning/graph_analysis.py:470, in describe_communities_with_plots_complex(G, N, N_nodes, data_dir)
    459 """
    460 Detect and describe the top N communities in graph G based on key nodes, with integrated plots.
    461 Adds separate plots for average node degree, average clustering coefficient, and betweenness centrality over all communities.
    (...)
    467 - data_dir (str): Directory to save the plots.
    468 """
    469 # Detect communities using the Louvain method
--> 470 partition = community_louvain.best_partition(G)

    472 # Invert the partition to get nodes per community
    473 communities = {}

AttributeError: module 'community' has no attribute 'best_partition'
```
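If it helps, I suspect this is a package name clash on my side rather than a bug in your code: the unrelated `community` package on PyPI shadows `python-louvain`, which is the one that actually provides `best_partition`. Reinstalling (`pip uninstall community python-louvain` followed by `pip install python-louvain`) may be the fix. Here is a minimal check I used to confirm which Louvain implementation is available (the karate-club graph is just a stand-in for `G`; the NetworkX fallback is my assumption, available in NetworkX >= 3.0):

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in for the real graph G

try:
    # python-louvain installs under the name `community` and provides
    # best_partition(); the similarly named PyPI `community` package
    # does not, which triggers the AttributeError above.
    import community as community_louvain
    partition = community_louvain.best_partition(G)  # node -> community id
    n_communities = len(set(partition.values()))
except (ImportError, AttributeError):
    # Fallback: NetworkX >= 3.0 bundles its own Louvain implementation.
    communities = nx.community.louvain_communities(G, seed=42)
    n_communities = len(communities)

print(n_communities)  # number of detected communities
```

If this prints a sensible community count via the `try` branch, the correct `python-louvain` package is installed and the README example should run.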
Also, in the same example code, no name is provided for the `tokenizer_model` used in `embedding_tokenizer = AutoTokenizer.from_pretrained(tokenizer_model)`. I used `tokenizer_model = 'BAAI/bge-large-en-v1.5'`, since your paper says this is the model you used. I just wanted to mention it so you might consider adding it to the example code.
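Concretely, this is what I ran (with the assumption that `BAAI/bge-large-en-v1.5` from the paper is indeed the intended tokenizer):

```python
from transformers import AutoTokenizer

# Assumption: 'BAAI/bge-large-en-v1.5' is the embedding model named in the paper.
tokenizer_model = 'BAAI/bge-large-en-v1.5'
embedding_tokenizer = AutoTokenizer.from_pretrained(tokenizer_model)

# Quick sanity check that the tokenizer loads and tokenizes text
tokens = embedding_tokenizer.tokenize("graph reasoning")
print(tokens)
```

With this in place, the rest of the README example ran for me up to the community-detection call above.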