microsoft / deep-language-networks

We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts at each layer. We stack two such layers, feeding the output of one layer to the next. We call the stacked architecture a Deep Language Network (DLN).
MIT License

Be more robust to errors #16

Closed: MarcCote closed this issue 11 months ago

MarcCote commented 11 months ago

Handle exceptions and add minibatching for large calls to GPT-4.
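
A minimal sketch of what this could look like, not the repository's actual implementation: the function names and parameters (`robust_batched_call`, `call_model`, `minibatch_size`, `max_retries`) are hypothetical. It splits a large batch of prompts into minibatches and retries each minibatch with exponential backoff when the model call raises an exception.

```python
import time
from typing import Callable, List


def robust_batched_call(
    prompts: List[str],
    call_model: Callable[[List[str]], List[str]],  # hypothetical callable that sends a batch of prompts to GPT-4
    minibatch_size: int = 20,
    max_retries: int = 3,
    backoff_seconds: float = 2.0,
) -> List[str]:
    """Split a large prompt batch into minibatches and retry each minibatch on failure."""
    outputs: List[str] = []
    for start in range(0, len(prompts), minibatch_size):
        minibatch = prompts[start:start + minibatch_size]
        for attempt in range(max_retries):
            try:
                outputs.extend(call_model(minibatch))
                break  # minibatch succeeded, move on to the next one
            except Exception:  # e.g. rate limits, timeouts, transient API errors
                if attempt == max_retries - 1:
                    raise  # give up after the last retry
                time.sleep(backoff_seconds * (2 ** attempt))  # exponential backoff before retrying
    return outputs
```

Keeping minibatches small limits how much work is lost when a single call fails, and the backoff gives rate-limited requests time to recover before retrying.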