genaforvena / skiffs

Modular Language Model Architecture that makes Bobeobi Sing The Zangezi

Papers to read #7

Open genaforvena opened 11 months ago

genaforvena commented 11 months ago

https://arxiv.org/abs/2305.07759
https://aclanthology.org/2021.naacl-main.185/

"We show that performance similar to GPT-3 can be obtained with language models that are much “greener” in that their parameter count is several orders of magnitude smaller. This is achieved by converting textual inputs into cloze questions that contain a task description, combined with gradient-based optimization; exploiting unlabeled data gives further improvements. We identify key factors required for successful natural language understanding with small language models."

It might already be done :))))

https://www.lesswrong.com/posts/dMoaBvcxpBE7LcES4/tinystories-small-language-models-that-still-speak-coherent
https://aclanthology.org/2021.mrl-1.11/
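The cloze trick described in the quote is easy to prototype. A minimal sketch, assuming the Hugging Face transformers fill-mask pipeline with roberta-base; the pattern and verbalizer words below are invented for illustration, and the paper's actual gradient-based training step is skipped entirely:

```python
# Minimal sketch of a cloze-question reformulation (in the spirit of the
# NAACL 2021 paper above), assuming the Hugging Face `transformers`
# fill-mask pipeline. Pattern and verbalizer are invented examples.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

def to_cloze(review: str) -> str:
    # Pattern: wrap the input in a cloze question carrying the task description.
    return f"{review} All in all, it was <mask>."

# Verbalizer: map label words back to task labels.
VERBALIZER = {"great": "positive", "terrible": "negative"}

def classify(review: str) -> str:
    # Restrict mask predictions to the verbalizer words and pick the best one.
    scores = fill(to_cloze(review), targets=list(VERBALIZER))
    best = max(scores, key=lambda s: s["score"])
    return VERBALIZER[best["token_str"].strip()]

print(classify("Best pizza I have ever eaten!"))  # expected: positive
```

Even zero-shot, restricting the mask to the verbalizer words gives a usable signal; the paper's contribution is the training done on top of such patterns.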

genaforvena commented 10 months ago

There are several similar projects in the literary direction:

https://arxiv.org/abs/2307.01827
https://www.researchgate.net/publication/367322745_What_We_Can_Do_and_Cannot_Do_with_Topic_Modeling_A_Systematic_Review
https://www.researchgate.net/publication/280305474_Narratology_and_Deconstruction
https://link.springer.com/chapter/10.1007/978-981-19-9056-4_1

A big part of me is curious how many results (if any?) these beautiful-sounding ideas could actually produce.
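Of the ideas above, topic modeling is at least trivial to try. A toy sketch with gensim's LdaModel, just to make it concrete; the three-line corpus (riffing on Khlebnikov's "Bobeobi") and all parameters are invented:

```python
# Toy topic-modeling sketch with gensim's LdaModel; corpus and
# hyperparameters are invented for illustration only.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [
    "bobeobi sang the lips".split(),
    "veeomi sang the gazes".split(),
    "pieeo sang the brows".split(),
]
dictionary = Dictionary(docs)                       # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words vectors
lda = LdaModel(corpus=corpus, num_topics=2, id2word=dictionary, passes=10)

for topic_id, words in lda.print_topics():
    print(topic_id, words)
```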

genaforvena commented 10 months ago

Some books cost 140 euros. Oh my gosh!

genaforvena commented 10 months ago

https://en.wikipedia.org/wiki/Neuro-symbolic_AI

genaforvena commented 10 months ago

https://arxiv.org/abs/2401.03038

SPADE: Synthesizing Assertions for Large Language Model Pipelines
Shreya Shankar, Haotian Li, Parth Asawa, Madelon Hulsebos, Yiming Lin, J.D. Zamfirescu-Pereira, Harrison Chase, Will Fu-Hinthorn, Aditya G. Parameswaran, Eugene Wu

"Operationalizing large language models (LLMs) for custom, repetitive data pipelines is challenging, particularly due to their unpredictable and potentially catastrophic failures. Acknowledging the inevitability of these errors, we focus on identifying when LLMs may be generating incorrect responses when used repeatedly as part of data generation pipelines. We present SPADE, a method for automatically synthesizing assertions that identify bad LLM outputs. SPADE analyzes prompt version histories to create candidate assertion functions and then selects a minimal set that fulfills both coverage and accuracy requirements. In testing across nine different real-world LLM pipelines, SPADE efficiently reduces the number of assertions by 14% and decreases false failures by 21% when compared to simpler baselines."
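The abstract describes a two-step recipe: synthesize candidate assertion functions over LLM outputs, then select a minimal subset meeting coverage and accuracy requirements. A hedged sketch of what that could look like, assuming greedy set cover as the selection rule; the candidate assertions and bad outputs are invented for illustration and are not SPADE's actual generated code or criteria:

```python
# Hedged sketch of SPADE's two steps as described in the abstract:
# candidate assertion functions, then a greedy pick of a small subset
# that still catches every known bad output. All names and data below
# are invented examples.

# Candidate assertions: each returns True when an output looks acceptable.
CANDIDATES = {
    "fits_length":   lambda out: len(out.split()) <= 100,
    "no_refusal":    lambda out: not out.lower().startswith("as an ai"),
    "one_paragraph": lambda out: "\n\n" not in out,
}

def select_assertions(bad_outputs):
    """Greedy cover: repeatedly add the assertion that flags the most
    still-uncovered bad outputs until every bad output is caught."""
    uncovered, chosen = set(bad_outputs), []
    while uncovered:
        name, fn = max(
            CANDIDATES.items(),
            key=lambda kv: sum(not kv[1](out) for out in uncovered),
        )
        flagged = {out for out in uncovered if not fn(out)}
        if not flagged:  # no remaining assertion catches anything; give up
            break
        chosen.append(name)
        uncovered -= flagged
    return chosen

bad = ["As an AI, I cannot summarize this.", "para one\n\npara two"]
print(select_assertions(bad))  # -> ['no_refusal', 'one_paragraph']
```

Greedy set cover is only a plausible stand-in for the selection step; the paper's criterion also weighs accuracy (false failures), which this sketch ignores.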