I've noticed that the outputs are rather short/truncated compared to those of the usual models trained on OpenAssistant. Is this expected?
'### Human: Give me a hello world in python? ### Assistant:' 'Sure, here is a simple "hello world" program in Python:\n\n'
'### Human: Give me a hello world in python? ### Assistant:' 'Sure! Here\'s a simple Python program that says "Hello, world!"'
'### Human: Give me a hello world in python? ### Assistant:' 'Here\'s a simple "hello world" program in Python:\n\n```'
'### Human: Give me a hello world in python? ### Assistant:' 'Sure! Here is a sample code in Python:\n```python\nprint("'
'### Human: Give me a hello world in python? ### Assistant:' "Sure, here's a simple `print()` statement:\n```python\n"
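A rough length check on the completions above (a sketch; whitespace word counts underestimate BPE token counts) shows every output stops within about 16 tokens, which is consistent with a small generation-length cap. Note that vLLM's `SamplingParams` defaults `max_tokens` to 16, which would produce exactly this kind of cutoff:

```python
# The five truncated completions quoted above.
completions = [
    'Sure, here is a simple "hello world" program in Python:\n\n',
    'Sure! Here\'s a simple Python program that says "Hello, world!"',
    'Here\'s a simple "hello world" program in Python:\n\n```',
    'Sure! Here is a sample code in Python:\n```python\nprint("',
    "Sure, here's a simple `print()` statement:\n```python\n",
]

# Whitespace split is only a proxy for the model's tokenizer,
# but all counts land at or below 16 -- matching vLLM's
# SamplingParams default of max_tokens=16.
counts = [len(text.split()) for text in completions]
for text, count in zip(completions, counts):
    print(count, repr(text[:40]))
```

If that is the cause, passing a larger limit when generating (e.g. `SamplingParams(max_tokens=512)`, value chosen arbitrarily here) should give full-length answers.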
From vLLM Colab --> https://colab.research.google.com/drive/1HOxyJVxo0NeVk8oidvR3dvouGBTYO60X?usp=sharing