c0sogi / llama-api

An OpenAI-like LLaMA inference API

Dumb question: definitions.py model parameters #10

Closed. Dougie777 closed this issue 1 year ago.

Dougie777 commented 1 year ago

I am very sorry for this newbie question. In definitions.py there are a number of parameters for each model. I assume these correspond to the settings given on the model page. My question is: how do I know the variable names you have used for each setting? For example:

    airoboros_l2_13b_gguf = LlamaCppModel(
        model_path="TheBloke/Airoboros-L2-13B-2.1-GGUF",  # automatic download
        max_total_tokens=8192,
        rope_freq_base=26000,
        rope_freq_scale=0.5,
        n_gpu_layers=30,
        n_batch=8192,
    )

rope_freq_base: it doesn't appear in any of your other examples, so I assume your examples are a non-exhaustive sample of all the possible parameters. How can I know the variable names you used? Is there a mapping chart somewhere?

Again I apologize for the newbie question that is probably painfully obvious to others.

Thanks, Doug

c0sogi commented 1 year ago

RoPE is a technique that allows a model to operate at a longer context (max_total_tokens) than the context length it was trained on (typically 4096). That's what you're seeing here. By default, if you do not set the parameter, it is calculated automatically. In other words, you can just delete the lines whose parameter names start with rope.
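For example, the definition from your question then becomes:

    airoboros_l2_13b_gguf = LlamaCppModel(
        model_path="TheBloke/Airoboros-L2-13B-2.1-GGUF",  # automatic download
        max_total_tokens=8192,
        n_gpu_layers=30,
        n_batch=8192,
    )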

If you do want to set rope_freq_base yourself, there's no obvious golden rule, but I recommend using these calculations:

    def calculate_rope_alpha(self) -> float:
        """Calculate the RoPE alpha based on the n_ctx.
        Assume that the trained token length is 4096."""
        # The following formula is obtained by fitting the data points
        # (comp, alpha): [(1.0, 1.0), (1.75, 2.0), (2.75, 4.0), (4.1, 8.0)]
        compress_ratio = self.calculate_rope_compress_ratio()
        return (
            -0.00285883 * compress_ratio**4
            + 0.03674126 * compress_ratio**3
            + 0.23873223 * compress_ratio**2
            + 0.49519964 * compress_ratio
            + 0.23218571
        )

    def calculate_rope_freq(self) -> float:
        """Calculate the RoPE frequency based on the n_ctx.
        Assume that the trained token length is 4096."""
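        # The exponent 64 / 63 matches NTK-aware RoPE scaling,
        # base * alpha ** (d / (d - 2)), with head dimension d = 128.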
        return 10000.0 * self.calculate_rope_alpha() ** (64 / 63)

    def calculate_rope_compress_ratio(self) -> float:
        """Calculate the RoPE embedding compression ratio based on the n_ctx.
        Assume that the trained token length is 4096."""
        return max(self.max_total_tokens / Config.trained_tokens, 1.0)

    def calculate_rope_scale(self) -> float:
        """Calculate the RoPE scaling factor based on the n_ctx.
        Assume that the trained token length is 4096."""
        return 1 / self.calculate_rope_compress_ratio()

Note that these auto-calculation methods are in the dev branch now and will be merged soon.
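As a sanity check, here is a minimal standalone sketch of the same formulas (the function names and the TRAINED_TOKENS constant are illustrative, not from the repo), showing what they produce for the max_total_tokens=8192 definition above:

    # Illustrative standalone versions of the methods above;
    # assumes a trained context length of 4096, as in the docstrings.
    TRAINED_TOKENS = 4096

    def rope_compress_ratio(max_total_tokens: int) -> float:
        """How far the target context exceeds the trained context."""
        return max(max_total_tokens / TRAINED_TOKENS, 1.0)

    def rope_alpha(max_total_tokens: int) -> float:
        """Polynomial fit through the (compress_ratio, alpha) data points."""
        r = rope_compress_ratio(max_total_tokens)
        return (
            -0.00285883 * r**4
            + 0.03674126 * r**3
            + 0.23873223 * r**2
            + 0.49519964 * r
            + 0.23218571
        )

    def rope_freq_base(max_total_tokens: int) -> float:
        """RoPE base frequency derived from alpha."""
        return 10000.0 * rope_alpha(max_total_tokens) ** (64 / 63)

    def rope_freq_scale(max_total_tokens: int) -> float:
        """RoPE scaling factor: inverse of the compression ratio."""
        return 1 / rope_compress_ratio(max_total_tokens)

    print(rope_freq_base(8192))   # ~24600, close to the 26000 used above
    print(rope_freq_scale(8192))  # 0.5, matching rope_freq_scale=0.5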

You can find the other parameters in this file: llama_api/schemas/models.py. I recommend using an IDE such as VSCode, as it will show you hints for the hidden parameters.
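If you prefer not to rely on the IDE, the sketch below lists the fields programmatically. It assumes LlamaCppModel is a dataclass, which you should verify against the actual class in llama_api/schemas/models.py:

    # Assumption: LlamaCppModel is a dataclass; adapt this if it is
    # another kind of class (e.g. a pydantic model).
    from dataclasses import fields, is_dataclass

    from llama_api.schemas.models import LlamaCppModel

    if is_dataclass(LlamaCppModel):
        for f in fields(LlamaCppModel):
            print(f.name, f.type)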

Dougie777 commented 1 year ago

Thank you so much!