I can't seem to find any documentation on how to specify parameters such as maximum generation length, stop tokens, temperature, etc., for decoder-based models like GPT-2. Currently my API requests only generate a single token, and I'd obviously like to generate more (ideally up to a specified stop token).
@tanmayb123 Currently, we are not planning to expose those parameters. You can try either adding the parameters through Triton, or passing the desired parameters as part of the JSON request payload.
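As a minimal sketch of the JSON approach: the payload below follows the parameter names used by the Hugging Face text-generation convention (`max_new_tokens`, `temperature`), but whether this particular endpoint honors them, and the exact field names it expects (especially the stop-sequence key), are assumptions, not confirmed behavior.

```python
import json

# Hypothetical request body. Parameter names (max_new_tokens, temperature,
# stop) are assumptions borrowed from common text-generation APIs; the
# endpoint may ignore or reject them.
payload = {
    "inputs": "Once upon a time",
    "parameters": {
        "max_new_tokens": 50,   # ask for up to 50 tokens instead of 1
        "temperature": 0.7,     # sampling temperature
        "stop": ["\n\n"],       # candidate stop sequences (name is a guess)
    },
}

body = json.dumps(payload)
print(body)

# The request itself would then be sent as usual, e.g. with `requests`:
# requests.post(ENDPOINT_URL,
#               headers={"Authorization": f"Bearer {API_TOKEN}"},
#               data=body)
```

If the endpoint silently ignores unknown fields, you would still get single-token outputs, which is one way to check whether the parameters are actually being applied.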