federicotorrielli closed this issue 1 year ago.
Odd, can you share the prompt and surrounding code?
Sure — this is the pipe, and this is the call.
`lemmas` is just a list of string prompts. For the generator I used:

```python
def initialize_generator(self):
    gen = torch.Generator(device='cuda')
    seed = 26111998
    return gen.manual_seed(seed)
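For readers following along, the seeded-generator pattern above can be checked on its own. This is a minimal sketch (using `device="cpu"` so it runs anywhere; the original uses `'cuda'`, and `make_generator` is a hypothetical standalone version of the method):

```python
import torch

def make_generator(seed: int = 26111998, device: str = "cpu") -> torch.Generator:
    # Same pattern as initialize_generator above, made standalone for illustration.
    gen = torch.Generator(device=device)
    gen.manual_seed(seed)
    return gen

# Two generators seeded identically draw identical samples,
# which is what makes the pipeline runs reproducible.
a = torch.randn(4, generator=make_generator())
b = torch.randn(4, generator=make_generator())
print(torch.equal(a, b))  # True
```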
Ah, I think I see the problem. DAAM doesn't support sliced attention. I'll release an update for that.
Fixed in latest release.
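As a side note for anyone pinned to an older DAAM release: since the error came from sliced attention, a possible workaround is to turn slicing off before tracing. This is a setup sketch, not a tested recipe — it assumes a diffusers `StableDiffusionPipeline` (the `runwayml/stable-diffusion-v1-5` checkpoint is only an example) and requires downloading model weights to actually run:

```python
from diffusers import StableDiffusionPipeline

# Assumed setup: any StableDiffusionPipeline checkpoint works here.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Disable sliced attention so the standard attention path is used;
# re-enable later with pipe.enable_attention_slicing() if memory is tight.
pipe.disable_attention_slicing()
```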
When calling `heat_map_lemma = tc.compute_global_heat_map(prompt)`