Open jw9603 opened 4 months ago
Hello,
In the file you're referring to, you can specify the target and source datasets and compute the similarity matrix between all pairs; that matrix is used later during training to retrieve the most similar cases. However, you may need to make slight adaptations to the Python or shell script, depending on your specific file format, to make sure you are computing the right matrices and using them the right way.
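Not the repo's exact code, just a minimal sketch of the idea, assuming you already have sentence embeddings (e.g., from SimCSE) for the target and source splits; the function and variable names here are hypothetical:

```python
import numpy as np

def similarity_matrix(target_emb: np.ndarray, source_emb: np.ndarray) -> np.ndarray:
    """Cosine similarity between every (target, source) pair.

    Rows correspond to target cases, columns to source cases.
    """
    # L2-normalize each embedding so the dot product equals cosine similarity
    tgt = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    src = source_emb / np.linalg.norm(source_emb, axis=1, keepdims=True)
    return tgt @ src.T

def top_k_cases(sim: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k most similar source cases for each target case."""
    # negate so argsort returns highest-similarity indices first
    return np.argsort(-sim, axis=1)[:, :k]
```

The retrieved indices would then be used to look up the corresponding source cases for training.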
Where is the code for finding similar cases located? Is it in cbr_analyser/case_retriever?
How did you proceed from the data in data_without_augmentation to create the data in final_data, with fields like 'counterargument', 'explanation', 'structure', and 'goal' added in final_data?
I solved this problem.
Also, I have one question.
How can I solve the problem below?
-> I removed the wandb library and reworked the code so I could run the experiment.
@jw9603 Can you tell me how to reproduce the error? Also, you don't need the wandb library if you don't need the sweep for finding the best hyperparameters, as we did. You just have to adjust the code to skip that part.
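One way to skip that part without removing wandb entirely is to guard it behind a flag, so wandb is only imported and initialized when the sweep is actually wanted. This is just a hypothetical sketch, not the repo's code; `train_one_epoch` and the project name are stand-ins:

```python
import argparse

def train_one_epoch(epoch: int) -> float:
    # placeholder for the repo's real training step; returns a dummy loss
    return 1.0 / (epoch + 1)

def main(use_wandb: bool = False, epochs: int = 3) -> list:
    run = None
    if use_wandb:
        import wandb  # only imported when logging is requested
        run = wandb.init(project="cbr-fallacy")  # hypothetical project name
    losses = []
    for epoch in range(epochs):
        loss = train_one_epoch(epoch)
        losses.append(loss)
        if run is not None:
            run.log({"epoch": epoch, "loss": loss})
    if run is not None:
        run.finish()
    return losses

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--use-wandb", action="store_true")
    args = parser.parse_args()
    main(use_wandb=args.use_wandb)
```

Running without `--use-wandb`, the training loop executes normally and wandb is never imported, so the experiment works even with the library uninstalled.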
Are there other logical fallacy datasets besides the LOGIC dataset on which I can experiment with your algorithm?
Can I use job_scripts/simcse_similarity_calculations.sh for preprocessing data?
https://github.com/UKPLab/argotario/blob/master/data/arguments-en-2018-01-15.tsv