Open JViggiani opened 9 months ago
Hi, thank you for your interest in ZETT! Currently, we don't have any published models online, but training a model takes less than 10 minutes per split (on a TITAN RTX, 24 GB). Hope this helps!
That's okay, I managed to get it working. I'm very impressed by how quickly it trains.
I have a somewhat unrelated question about the automatic template generation. I see in wrapper.py there is a function:
def load_test_auto_templ(self):
    raise NotImplementedError("Use this class: AutoTemplExtractor")
And this is called in:
def generate_entity_span(self,
                         data: Sentence,
                         generator: TextGenerator,
                         tokenizer: Union[PreTrainedTokenizerFast, AutoTokenizer],
                         target_labels: List[str],
                         mode: str,
                         task_type: str,
                         use_label_constraint: bool) -> List[dict]:
    # ...
    if mode in ["auto_templ_single", "auto_templ_multi"]:
        test_auto_templs = self.load_test_auto_templ()
    # ...
    for k in final_target_labels:
        if mode in ["auto_templ_single", "auto_templ_multi"]:
            input_text = []
            for nt in range(min(self.top_n_templ, len(test_auto_templs[k]))):
                _templ = test_auto_templs[k][nt]
                # ...
        else:
            _templ = self.relname2template[k]
            # ...
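For context, here is a minimal sketch of how the missing piece might plug in. This is purely an assumption on my part: the class name `AutoTemplExtractor` comes from the error message above, but the JSON file format, the `templ_path` argument, and the label-to-template mapping shown here are all hypothetical, not part of the actual ZETT codebase.

```python
import json

class AutoTemplExtractor:
    """Hypothetical sketch of the class named in the NotImplementedError.

    Assumes LLM-generated templates are stored as a JSON file mapping each
    label to a ranked list of template strings, e.g.
    {"person": ["X is a person", "X, a person"], ...}. All names and the
    file format are assumptions for illustration only.
    """

    def __init__(self, templ_path: str, top_n_templ: int = 3):
        self.templ_path = templ_path
        self.top_n_templ = top_n_templ  # matches min(self.top_n_templ, ...) above

    def load_test_auto_templ(self) -> dict:
        # Load the label -> [template, ...] mapping consumed by
        # generate_entity_span in the "auto_templ_*" modes.
        with open(self.templ_path, encoding="utf-8") as f:
            return json.load(f)
```

Under this assumption, `test_auto_templs[k][nt]` in the loop above would then index the `nt`-th ranked template for label `k`.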
Is this automatic template generation implemented and hosted somewhere else? Thank you! :)
This is the code for templates auto-generated by an LLM, which was one of our experiments, but it was not included in the paper since the performance was not good enough. Maybe I should've deleted this part -- sorry for the confusion!
Hello! Very interesting implementation. Do you have a trained model available for download anywhere?
Thank you