
LMOps/README.md at main · microsoft/LMOps #706

Open irthomasthomas opened 8 months ago

irthomasthomas commented 8 months ago


LMOps

LMOps is a research initiative on fundamental research and technology for building AI products with foundation models, focusing especially on general techniques for enabling AI capabilities with LLMs and generative AI models.


Prompt Intelligence

Advanced techniques that facilitate prompting language models.

Promptist: reinforcement learning for automatic prompt optimization

  • A language model serves as the prompt interface, rewriting user input into model-preferred prompts.

  • The prompt optimizer is itself a language model, trained via reinforcement learning for automatic prompt optimization (a usage sketch follows below).

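As a concrete illustration, here is a minimal usage sketch with Hugging Face transformers. The checkpoint name microsoft/Promptist and the " Rephrase:" separator follow the public demo and are assumptions, not details stated in this README.

```python
# Minimal sketch: rewriting a user prompt with a Promptist-style optimizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; swap in whatever prompt optimizer you actually use.
tokenizer = AutoTokenizer.from_pretrained("microsoft/Promptist")
model = AutoModelForCausalLM.from_pretrained("microsoft/Promptist")

user_prompt = "a cat sleeping on a sofa"
# The " Rephrase:" suffix is the input format used by the public demo.
inputs = tokenizer(user_prompt + " Rephrase:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=75,
    do_sample=False,
    num_beams=8,
    length_penalty=-1.0,  # favor concise rewrites
    pad_token_id=tokenizer.eos_token_id,
)
# Keep only the newly generated, model-preferred prompt.
optimized = tokenizer.decode(
    outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True
).strip()
print(optimized)
```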

Structured Prompting: consume long prompts efficiently

1) Prepend (many) retrieved (long) documents as context for GPT.

2) Scale in-context learning to many demonstration examples (a grouping sketch follows below).

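A minimal sketch of the grouping idea with Hugging Face transformers: each demonstration group is encoded independently (cost linear in the number of groups), and the test input then attends to the concatenated key/value caches of all groups. This is an engineering approximation; the paper additionally right-aligns position embeddings and rescales attention across groups, which the sketch omits.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

groups = [
    "Review: great film. Sentiment: positive\n",     # demonstration group 1
    "Review: dull and slow. Sentiment: negative\n",  # demonstration group 2
]

# Encode each group separately and keep its key/value cache.
caches = []
with torch.no_grad():
    for text in groups:
        ids = tokenizer(text, return_tensors="pt").input_ids
        caches.append(model(ids, use_cache=True).past_key_values)

# Concatenate per-layer keys and values along the sequence axis
# (shape: batch x heads x seq x head_dim, so dim=2).
merged = tuple(
    (torch.cat([c[layer][0] for c in caches], dim=2),
     torch.cat([c[layer][1] for c in caches], dim=2))
    for layer in range(len(caches[0]))
)

# The test input now attends to all groups at once.
query = tokenizer("Review: a joyous surprise. Sentiment:", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(query, past_key_values=merged).logits
print(tokenizer.decode(logits[0, -1].argmax()))
```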

X-Prompt: extensible prompts beyond natural language for descriptive instructions

  • An extensible interface that allows prompting LLMs beyond natural language for fine-grained specifications.

  • Context-guided imaginary-word learning for general usability (a sketch follows below).

Extensible Prompts for Language Models
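A minimal sketch of the imaginary-word idea, assuming an imaginary word can be approximated by a new token whose embedding is the only trained parameter; the token name and training recipe here are illustrative, not the paper's exact method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Register one imaginary word, e.g. standing in for a hard-to-verbalize style.
new_tokens = ["<x-style-1>"]
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Freeze the LM; only the input embedding matrix keeps gradients.
for p in model.parameters():
    p.requires_grad = False
emb = model.get_input_embeddings()
emb.weight.requires_grad = True

# Zero the gradient of every row except the imaginary words, so that
# training updates nothing but their embeddings.
new_ids = tokenizer.convert_tokens_to_ids(new_tokens)

def keep_only_new_rows(grad):
    mask = torch.zeros_like(grad)
    mask[new_ids] = 1.0
    return grad * mask

emb.weight.register_hook(keep_only_new_rows)

# Training then proceeds as ordinary LM finetuning on text that uses the
# imaginary word in context, e.g. "Write it like <x-style-1>: ..."
```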

LLMA: LLM Accelerators

Accelerate LLM Inference with References

  • Outputs of LLMs often overlap significantly with references available in context (e.g., retrieved documents).

  • LLMA losslessly accelerates LLM inference by copying text spans from the references into the LLM inputs and verifying them in parallel.

  • Applicable to important LLM scenarios such as retrieval-augmented generation and multi-turn conversations.

  • Achieves a 2~3x speed-up without any additional models (a simplified sketch of the decoding loop follows below).

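A simplified sketch of the copy-and-verify loop under greedy decoding with any Hugging Face-style causal LM: when the current suffix n-gram appears in a reference, a span is copied from it and all copied tokens are scored in one forward pass, keeping the longest prefix that matches the model's own greedy choices. The function and parameter names (llma_generate, ngram, copy_len) are illustrative, not from the released code.

```python
import torch

@torch.no_grad()
def llma_generate(model, input_ids, ref_ids, max_new_tokens=64, ngram=4, copy_len=8):
    ids = input_ids  # shape (1, seq_len); ref_ids is a plain list of token ids
    while ids.shape[1] - input_ids.shape[1] < max_new_tokens:
        suffix = ids[0, -ngram:].tolist()
        draft = []
        # Look for the current n-gram suffix inside the reference token list.
        for i in range(len(ref_ids) - ngram + 1):
            if ref_ids[i:i + ngram] == suffix:
                draft = ref_ids[i + ngram:i + ngram + copy_len]
                break
        if draft:
            # One forward pass verifies the whole copied span in parallel.
            cand = torch.cat([ids, torch.tensor([draft])], dim=1)
            logits = model(cand).logits
            accepted = 0
            for j, tok in enumerate(draft):
                if logits[0, ids.shape[1] - 1 + j].argmax().item() != tok:
                    break
                accepted += 1
            # Keep the verified tokens plus the model's own next token,
            # so every iteration makes progress even on a mismatch.
            next_tok = logits[0, ids.shape[1] - 1 + accepted].argmax().item()
            new = torch.tensor([draft[:accepted] + [next_tok]])
        else:
            # No reference match: fall back to ordinary greedy decoding.
            logits = model(ids).logits
            new = torch.tensor([[logits[0, -1].argmax().item()]])
        ids = torch.cat([ids, new], dim=1)
    return ids
```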

Fundamental Understanding of LLMs

Understanding In-Context Learning

  • GPT produces meta-gradients for In-Context Learning (ICL) from the demonstration examples through forward computation; ICL then works by implicitly applying these meta-gradients to the model through attention.

  • This meta-optimization view of ICL is dual to finetuning, which explicitly updates the model parameters with back-propagated gradients.

  • We can translate optimization algorithms (such as SGD with momentum) into their corresponding Transformer architectures (a toy numerical check of the dual view follows below).

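A toy numerical check of the dual view, restricted to linear attention (ignoring the softmax): attending to demonstration key/value pairs equals applying an implicit weight update delta_W = sum_i v_i k_i^T to the query, which is exactly the outer-product form of a gradient-descent update.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 5                    # hidden size, number of demonstration tokens
K = rng.normal(size=(n, d))    # keys computed from the demonstrations
V = rng.normal(size=(n, d))    # values computed from the demonstrations
q = rng.normal(size=d)         # query computed from the test input

# Linear attention of the query over the demonstrations: sum_i (k_i . q) v_i
attn_out = V.T @ (K @ q)

# The same result, read as an implicit weight update applied to the query:
# delta_W = sum_i v_i k_i^T, i.e. accumulated outer products, the same form
# a gradient-descent step would add to a linear layer's weights.
delta_W = sum(np.outer(V[i], K[i]) for i in range(n))
assert np.allclose(attn_out, delta_W @ q)
```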

Hiring: aka.ms/GeneralAI

We are hiring at all levels (including FTE researchers and interns)! If you are interested in working with us on Foundation Models (aka large-scale pre-trained models) and AGI, NLP, MT, Speech, Document AI and Multimodal AI, please send your resume to fuwei@microsoft.com.

License

This project is licensed under the license found in the LICENSE file in the root directory of this source tree.

Microsoft Open Source Code of Conduct

Contact Information

For help or issues using the pre-trained models, please submit a GitHub issue. For other communications, please contact Furu Wei (fuwei@microsoft.com).


irthomasthomas commented 8 months ago

Related content

626 - Similarity score: 0.9

628 - Similarity score: 0.9

681 - Similarity score: 0.89

317 - Similarity score: 0.89

333 - Similarity score: 0.89

546 - Similarity score: 0.89