togethercomputer / MoA

Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models
Apache License 2.0

questions about the intermediate layers #9

Closed — yananchen1989 closed this issue 1 week ago

yananchen1989 commented 2 weeks ago

Hello, I can't follow the mechanism of the intermediate layers from the paper. It's easy to understand the first layer, where each LLM takes the same prompt and generates a response separately. But how does it work in the following layers? For example, in the second layer, does each LLM take the original prompt plus the concatenation of the first layer's outputs? If so, what is the output of each LLM in the 2nd layer?

Thanks.

IsThatYou commented 2 weeks ago

Your understanding is correct: each LLM takes the original prompt and the concatenation of the first layer's outputs. We additionally use an "aggregate" prompt template to combine all of them. The output of each LLM in the 2nd layer is a refined and aggregated version of the 1st-layer responses!
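To make the flow concrete, here is a minimal sketch of the layering described above. The `AGGREGATE_TEMPLATE` wording and the `query_llm` stub are illustrative assumptions, not the repo's actual template or API calls:

```python
# Hypothetical sketch of Mixture-of-Agents layering.
# The template text and query_llm stub are assumptions for illustration;
# the real repo uses its own aggregate prompt and model API.

AGGREGATE_TEMPLATE = (
    "You have been provided with responses from various models to the user "
    "query below. Synthesize them into a single refined answer.\n\n"
    "Responses:\n{responses}\n\nQuery: {query}"
)

def query_llm(model: str, prompt: str) -> str:
    # Stand-in for a real model call; returns a canned string here.
    return f"[{model}] answer to: {prompt[:40]}"

def moa(query: str, layers: list[list[str]]) -> list[str]:
    """layers: one inner list of model names per MoA layer."""
    prev_outputs: list[str] = []
    for layer in layers:
        if not prev_outputs:
            # Layer 1: every model answers the original query directly.
            prompts = [query] * len(layer)
        else:
            # Later layers: original query plus the concatenated
            # outputs of the previous layer, via the aggregate template.
            joined = "\n".join(prev_outputs)
            prompts = [
                AGGREGATE_TEMPLATE.format(responses=joined, query=query)
            ] * len(layer)
        prev_outputs = [query_llm(m, p) for m, p in zip(layer, prompts)]
    # One refined response per model in the final layer.
    return prev_outputs

outputs = moa("What is MoA?", [["m1", "m2", "m3"], ["m1", "m2"]])
print(len(outputs))  # one response per layer-2 model
```

So with 3 models in layer 1 and 2 in layer 2, you get 2 final responses, each produced by aggregating all 3 layer-1 answers together with the original query.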

yananchen1989 commented 1 week ago

OK, I see. So if, for example, there are 5 LLMs in layer 2, there will be 5 refined/aggregated responses, each of which is a direct answer to the initial prompt.

IsThatYou commented 1 week ago

That's correct!