Hellisotherpeople / llm_steer-oobabooga

Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vectors, now in oobabooga text generation webui!
MIT License
42 stars · 2 forks
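The description above refers to steering a model by adding steering vectors to its activations. A minimal sketch of that idea using a PyTorch forward hook, with a toy linear layer standing in for a real transformer layer (the layer, vector, and coefficient below are invented for illustration and are not the extension's actual API):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for one decoder layer of an LLM (hypothetical; the real
# extension hooks layers of a model loaded via the "transformers" backend).
layer = nn.Linear(8, 8)

steering_vector = torch.randn(8)  # in practice, derived from contrastive prompts
coeff = 4.0                       # steering strength

def steer_hook(module, inputs, output):
    # Add the scaled steering vector to the layer's output activations.
    return output + coeff * steering_vector

x = torch.randn(2, 8)
baseline = layer(x)

handle = layer.register_forward_hook(steer_hook)
steered = layer(x)
handle.remove()

# Every activation is now offset by coeff * steering_vector, which biases
# all downstream computation (and hence generation) in that direction.
```

Removing the hook restores the unmodified model, which is why this kind of steering can be toggled per-request at inference time.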

Is this still working? #2

Closed MotherSoraka closed 4 weeks ago

MotherSoraka commented 3 months ago

I can't get it to work with any Llama 3 or Gemma 2 models; they all throw errors. I tried both llamacpp_HF and llamacpp.

MotherSoraka commented 3 months ago

and why has no one ever talked about this?

MotherSoraka commented 3 months ago

Never mind, my mistake. The README says: "Note: This extension only works for models loaded using the "transformers" backend."

Hellisotherpeople commented 2 months ago

> and why has no one ever talked about this?

I think the LLM crowd has institutional blindness to the value of techniques like the ones available here: https://github.com/ljleb/prompt-fusion-extension

I laugh when folks in the LLM world talk about LoRAs like they're a new thing. The Stable Diffusion crowd had normalized LoRA adapter usage and fine-tuning back in 2022, and even today it's hard to find good LoRAs for LLMs. Same dynamic.