Hi Team,
Can you please clarify the dimensions of $V$ and $H_i$? Assuming the hidden dimension of the LLM is $d$ and the number of privacy neurons is $m$, how is the dimension of $V$ shown as $m \times d$? It is not clear why $d$ is involved in the notation. I may be misunderstanding the meaning of "neuron" here.

For example, at the end of each transformer layer, a token is represented by an embedding of dimension $d$, and we choose $m$ privacy-sensitive positions in this embedding. So shouldn't $V$ or $H$ be a vector of dimension $m$?
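To make my reading concrete, here is a minimal numpy sketch of the interpretation I have in mind (all sizes and indices below are hypothetical, just for illustration):

```python
import numpy as np

# Hypothetical sizes, purely for illustration.
d = 8  # hidden/embedding dimension of the model
m = 3  # number of privacy-sensitive positions ("neurons")

# Token embedding at the end of a transformer layer: shape (d,).
h = np.random.randn(d)

# Selecting m privacy-sensitive coordinates of this embedding
# yields a vector of dimension m, not a matrix of shape (m, d).
idx = np.array([1, 4, 6])  # hypothetical indices of the m chosen positions
v = h[idx]

print(v.shape)  # (m,) under this reading, not (m, d)
```

Under this reading, $V$ would have shape $(m,)$; is there an extra per-neuron axis of size $d$ that I am missing?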
Thanks