Open · ko120 opened 1 year ago
@jinyongyoo mind taking a look?
@ko120 @qiyanjun
Looks like the issue is with the `truncate_words_to` keyword argument, which isn't part of `GreedyWordSwapWIR`. The argument was added in PR #747. @qiyanjun, could you share the background behind the PR and why that argument might have been added?
I came across the same issue yesterday and was surprised to see this hanging here since November. Any updates?
Sorry for the delay... will take a careful look.
PR #747 added a max-length constraint:

```python
max_len = getattr(model_wrapper, "max_length", None) or min(
    1024,
    model_wrapper.tokenizer.model_max_length,
    model_wrapper.model.config.max_position_embeddings - 2,
)
search_method = GreedyWordSwapWIR(wir_method="gradient", truncate_words_to=max_len)
```
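For reference, here is how that expression resolves: an explicit `max_length` attribute on the wrapper wins; otherwise the minimum of the three limits is used. The concrete numbers below are illustrative assumptions, not values from the thread:

```python
# Illustrative values only -- a BERT-style model typically reports
# model_max_length=512 and max_position_embeddings=512.
explicit_max_length = None  # model_wrapper has no usable "max_length" attribute
model_max_length = 512
max_position_embeddings = 512

max_len = explicit_max_length or min(
    1024, model_max_length, max_position_embeddings - 2
)
print(max_len)  # 510: the position-embedding limit minus 2, presumably
                # reserving room for special tokens like [CLS]/[SEP]
```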
In the `GreedyWordSwapWIR` class from the PR here, I don't see any `truncate_words_to` argument in the `__init__` method of the class. Maybe an oversight? (I am no contributor, though.)
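That would explain the crash: in Python, passing a keyword argument that `__init__` does not declare raises a `TypeError` immediately. A minimal sketch of the failure mode, using a stub class rather than TextAttack's actual implementation:

```python
# Stub standing in for the real GreedyWordSwapWIR; its __init__ is
# deliberately missing truncate_words_to, mirroring the class in the PR.
class GreedyWordSwapWIR:
    def __init__(self, wir_method="unk"):
        self.wir_method = wir_method

try:
    # This mirrors the call added in PR #747.
    GreedyWordSwapWIR(wir_method="gradient", truncate_words_to=510)
except TypeError as err:
    # e.g. __init__() got an unexpected keyword argument 'truncate_words_to'
    print(err)
```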
Please feel free to submit a PR to fix this.
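One possible shape for such a PR, sketched as a guess rather than TextAttack's actual code: accept the argument in `__init__`, store it, and cap the word list before ranking. The `_truncate` helper below is hypothetical.

```python
class GreedyWordSwapWIR:
    def __init__(self, wir_method="unk", truncate_words_to=None):
        self.wir_method = wir_method
        # None means "do not truncate"; otherwise cap the number of words
        # considered, matching the max_len computed in PR #747.
        self.truncate_words_to = truncate_words_to

    def _truncate(self, words):
        # Hypothetical helper: apply the optional cap before computing
        # word-importance rankings.
        if self.truncate_words_to is not None:
            return words[: self.truncate_words_to]
        return words
```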
**Describe the bug**
`GreedyWordSwapWIR` is passed a `truncate_words_to` keyword argument that its `__init__` does not accept.

**To Reproduce**
Steps to reproduce the behavior:

```
textattack ...
```

**Expected behavior**
The attack should run; instead it asks us to input `truncate_words_to`.

**System Information**
torch==1.7.0, transformers==3.3.0