ContextualAI / gritlm

Generative Representational Instruction Tuning
https://arxiv.org/abs/2402.09906
MIT License

Is there hope Grit-Embedding beats this task? #22

Open marioeljuga opened 3 months ago

marioeljuga commented 3 months ago

Thank you for this great model and the corresponding paper. I will definitely cite you in my thesis :)

In the attached experiment, I try to "trick" the model by inserting lexically identical words into the less desired document.

GritLM passed the first run, but Instructor did not.

However, on the second run, where I changed "advanced" to "in-depth" to create yet another lexical match, the model was finally tricked.

Can embeddings be strong enough to beat this test, or is understanding such nuances a task only cross-encoders can solve? Since this is a recruitment-like scenario, do you think additional fine-tuning on a recruitment-domain dataset would help?

[Attached image: tricky_experiment]
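
For readers without the attachment, here is a minimal sketch of how an experiment like this can be scored with the `gritlm` package, following the embedding usage pattern from this repo's README. The query and candidate strings below are hypothetical placeholders, not the actual experiment data:

```python
from scipy.spatial.distance import cosine
from gritlm import GritLM

# Load in embedding-only mode (no LM head needed for retrieval scoring).
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto", mode="embedding")

def gritlm_instruction(instruction):
    # GritLM's embedding prompt format; documents are embedded without an instruction.
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

instruction = "Given a project description, identify the most suitable candidates that fit the project criteria"

# Hypothetical placeholders for the experiment's query and candidate documents.
query = ["Project: build a data pipeline; requires advanced Python and SQL skills."]
documents = [
    "Candidate A: five years of advanced Python and SQL work on data pipelines.",  # intended match
    "Candidate B: wrote an advanced blog post about Python; no SQL experience.",   # lexical decoy
]

q_rep = model.encode(query, instruction=gritlm_instruction(instruction))
d_rep = model.encode(documents, instruction=gritlm_instruction(""))

# Cosine similarity between the query and each candidate document.
for doc, rep in zip(documents, d_rep):
    print(f"{1 - cosine(q_rep[0], rep):.4f}  {doc[:45]}")
```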

Muennighoff commented 3 months ago

This is very interesting! I think it should be possible to get there with pure embeddings. GritLM is pretty close - what instruction are you using? Maybe optimizing the instruction a bit gets the model there.

If you have many tricky examples like this, then fine-tuning on them should help, I think. However, if it's just generic hiring data, I'm less sure.

marioeljuga commented 3 months ago

I tried two different instructions:

  1. instruction = Given a project description, retrieve relevant candidates who fulfill the project criteria

  2. instruction = Given a project description, identify the most suitable candidates that fit the project criteria

The second one performed better; the scores in the table are from it.
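
A small extension of the sketch above scores both instructions side by side (reusing `model`, `query`, `documents`, and `gritlm_instruction` defined there, still on placeholder data), which is one way to try the instruction optimization suggested earlier:

```python
# Reuses model, query, documents, and gritlm_instruction from the sketch above.
from scipy.spatial.distance import cosine

instructions = [
    "Given a project description, retrieve relevant candidates who fulfill the project criteria",
    "Given a project description, identify the most suitable candidates that fit the project criteria",
]

# Documents are embedded once, without an instruction.
d_rep = model.encode(documents, instruction=gritlm_instruction(""))

for inst in instructions:
    q_rep = model.encode(query, instruction=gritlm_instruction(inst))
    scores = [1 - cosine(q_rep[0], d) for d in d_rep]
    print([f"{s:.4f}" for s in scores], "<-", inst)
```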