argilla-io / distilabel

Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verified research papers.
https://distilabel.argilla.io
Apache License 2.0

there should be an option to pass `n_ctx` to Llama from llama_cpp #593

Closed amritsingh183 closed 6 months ago

amritsingh183 commented 6 months ago

Is your feature request related to a problem? Please describe. I appreciate the work distilabel is doing in making it easier for the community to produce high-quality datasets. Thank you!

There is a problem I have faced and a potential solution is outlined in this feature request.

Consider the following code, where a smaller LLM is used for response generation and a larger LLM is used for feedback:

import platform
from pathlib import Path

from distilabel.llms.llamacpp import LlamaCppLLM
from distilabel.pipeline import Pipeline
from distilabel.steps import CombineColumns, KeepColumns, LoadDataFromDicts
from distilabel.steps.tasks import UltraFeedback
from distilabel.steps.tasks.text_generation import TextGeneration

from datasets import load_dataset

usrHome = str(Path.home())

print(f"Python Platform: {platform.platform()}")

modelPaths = {
    "metaLlama3": f"{usrHome}/models/Meta-Llama-3-8B-Instruct-Q8_0.gguf",
    "phi3mini": f"{usrHome}/models/Phi-3-mini-4k-instruct-q4.gguf",
    "tinyLlama":f"{usrHome}/models/tinyllama-1.1b-chat-v1.0.Q8_0.gguf"
}
datasts = load_dataset(
    f"{usrHome}/datasets/10k_prompts_ranked", split="train"
).filter(lambda r: r["avg_rating"] >= 4 and r["num_responses"] >= 2)
datastsLst = datasts.to_list()
datastsLstSLim = datastsLst[0:4]

with Pipeline("pipeName", description="my example pipe") as pipeline:
    # Generator step: feed the selected prompts, mapping "prompt" -> "instruction"
    loaded_dataset = LoadDataFromDicts(
        name="load_my_dataset",
        data=datastsLstSLim,
        output_mappings={"prompt": "instruction"},
        batch_size=4,
    )
    # Smaller LLM (TinyLlama) used for response generation
    smallerLLM = LlamaCppLLM(
        model_path=modelPaths["tinyLlama"], n_gpu_layers=-1, verbose=True
    )
    genWithSmallerLLM = TextGeneration(
        name="genWithSmallerLLM",
        llm=smallerLLM,
        input_batch_size=2,
    )
    #
    loaded_dataset.connect(genWithSmallerLLM)
    # Collect the generations and their model names into list columns
    combine_columns = CombineColumns(
        name="combine_columns",
        columns=["generation", "model_name"],
        output_columns=["generations", "generation_models"],
    )
    #
    genWithSmallerLLM.connect(combine_columns)
    # Larger LLM (Llama 3 8B Instruct) used to rate the generations with UltraFeedback
    llama3LLMCPP = LlamaCppLLM(
        model_path=modelPaths["metaLlama3"], n_gpu_layers=-1, verbose=True
    )
    ultrfeedbck = UltraFeedback(
        name="ultrafeedback_meta3",
        llm=llama3LLMCPP,
        aspect="overall-rating",
        output_mappings={"model_name": "ultrafeedback_model"},
        input_batch_size=2,
    )
    #
    combine_columns.connect(ultrfeedbck)
    # Keep only the columns needed in the final dataset
    keep_columns = KeepColumns(
        name="keep_columns",
        columns=[
            "instruction",
            "generations",
            "generation_models",
            "ratings",
            "rationales",
            "ultrafeedback_model",
        ],
    )
    ultrfeedbck.connect(keep_columns)

if __name__ == "__main__":
    distiset = pipeline.run(
        use_cache=False,
        parameters={
            "load_my_dataset": {
                "split": "test",
            },
            "genWithSmallerLLM": {
                "llm": {
                    "generation_kwargs": {"max_new_tokens": 4096, "temperature": 0.8}
                }
            },
            "ultrafeedback_meta3": {
                "llm": {
                    "generation_kwargs": {"max_new_tokens": 4096, "temperature": 0.8}
                }
            },
        },
    )

When this is run, the following error pops up:

Requested tokens (817) exceed context window of 512
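For reference, the 512 here is llama-cpp-python's own default: when `n_ctx` is not passed to `Llama`, the context window stays at 512 tokens regardless of the model's training context (2048 for TinyLlama, 8192 for Llama 3). A minimal sketch showing the default, with an illustrative model path:

from llama_cpp import Llama

# Without n_ctx, llama-cpp-python creates a 512-token context window,
# which is what LlamaCppLLM currently ends up with.
llm = Llama(model_path="models/tinyllama-1.1b-chat-v1.0.Q8_0.gguf", n_gpu_layers=-1)
print(llm.n_ctx())  # 512, so a request needing 817 tokens raises ValueError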

Describe the solution you'd like. Allow `n_ctx` in https://github.com/argilla-io/distilabel/blob/9f38b4931398f626e07cbe2a83ef393de661f428/src/distilabel/llms/llamacpp.py#L72 so that we have the ability to do this:

self._model = Llama(
    model_path=self.model_path.as_posix(),
    chat_format=self.chat_format,
    n_gpu_layers=self.n_gpu_layers,
    verbose=self.verbose,
    n_ctx=4096,
)

By adding `n_ctx` to https://github.com/argilla-io/distilabel/blob/9f38b4931398f626e07cbe2a83ef393de661f428/src/distilabel/llms/llamacpp.py#L76, the code works perfectly.
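Until such an option exists upstream, the same idea can be sketched on the user side by subclassing LlamaCppLLM; the `n_ctx` field below is an assumption (it is not part of the released distilabel API), and the `load()` body is only assumed to mirror the source linked above:

from llama_cpp import Llama

from distilabel.llms.llamacpp import LlamaCppLLM


class LlamaCppLLMWithCtx(LlamaCppLLM):
    # Hypothetical sketch: expose n_ctx instead of relying on llama_cpp's 512-token default.
    n_ctx: int = 4096

    def load(self) -> None:
        # Assumed to mirror LlamaCppLLM.load(), with n_ctx forwarded to llama_cpp.Llama.
        self._model = Llama(
            model_path=self.model_path.as_posix(),
            chat_format=self.chat_format,
            n_gpu_layers=self.n_gpu_layers,
            verbose=self.verbose,
            n_ctx=self.n_ctx,
        )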

Describe alternatives you've considered. No other option is available.

Additional context. Here is some more data from the trace:

Python Platform: macOS-14.4.1-arm64-arm-64bit
[04/28/24 15:03:29] INFO     ['distilabel.pipeline.local'] 📝 Pipeline data will be written to                                                             local.py:113
                             '/Users/amritsingh/.cache/distilabel/pipelines/70f51bfed7ea5f460aaab8e2930af867ef54fbee/data'
Python Platform: macOS-14.4.1-arm64-arm-64bit
[04/28/24 15:03:30] INFO     ['distilabel.pipeline.local'] ⏳ Waiting for all the steps to load...                                                         local.py:366
Python Platform: macOS-14.4.1-arm64-arm-64bit
Python Platform: macOS-14.4.1-arm64-arm-64bit
Python Platform: macOS-14.4.1-arm64-arm-64bit
Python Platform: macOS-14.4.1-arm64-arm-64bit
Python Platform: macOS-14.4.1-arm64-arm-64bit
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /Users/amritsingh/.cache/lm-studio/models/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/tinyllama-1.1b-chat-v1.0.Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = tinyllama_tinyllama-1.1b-chat-v1.0
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 22
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5632
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 64
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 4
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 7
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q8_0:  156 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_layer          = 22
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 256
llm_load_print_meta: n_embd_v_gqa     = 256
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 5632
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 1B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 1.10 B
llm_load_print_meta: model size       = 1.09 GiB (8.50 BPW)
llm_load_print_meta: general.name     = tinyllama_tinyllama-1.1b-chat-v1.0
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 2 '</s>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.20 MiB
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /Users/amritsingh/.cache/lm-studio/models/lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct-Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct-imatrix
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 7
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
ggml_backend_metal_buffer_from_ptr: allocated buffer, size =  1114.92 MiB, ( 1114.98 / 10922.67)
llm_load_tensors: offloading 22 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 23/23 layers to GPU
llm_load_tensors:        CPU buffer size =    66.41 MiB
llm_load_tensors:      Metal buffer size =  1114.92 MiB
..........................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M1 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple7  (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 11453.25 MB
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =    11.00 MiB, ( 1127.80 / 10922.67)
llama_kv_cache_init:      Metal KV buffer size =    11.00 MiB
llama_new_context_with_model: KV self size  =   11.00 MiB, K (f16):    5.50 MiB, V (f16):    5.50 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.12 MiB
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =    66.50 MiB, ( 1194.30 / 10922.67)
llama_new_context_with_model:      Metal compute buffer size =    66.50 MiB
llama_new_context_with_model:        CPU compute buffer size =     5.01 MiB
llama_new_context_with_model: graph nodes  = 710
llama_new_context_with_model: graph splits = 2
AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LAMMAFILE = 1 |
Model metadata: {'general.quantization_version': '2', 'tokenizer.chat_template': "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n'  + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}", 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'tokenizer.ggml.bos_token_id': '1', 'tokenizer.ggml.model': 'llama', 'llama.attention.head_count_kv': '4', 'llama.context_length': '2048', 'llama.attention.head_count': '32', 'llama.rope.freq_base': '10000.000000', 'llama.rope.dimension_count': '64', 'general.file_type': '7', 'llama.feed_forward_length': '5632', 'llama.embedding_length': '2048', 'llama.block_count': '22', 'general.architecture': 'llama', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'general.name': 'tinyllama_tinyllama-1.1b-chat-v1.0'}
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q8_0:  226 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 7.95 GiB (8.50 BPW)
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct-imatrix
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_tensors: ggml ctx size =    0.30 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size =  7605.34 MiB, ( 7605.41 / 10922.67)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =   532.31 MiB
llm_load_tensors:      Metal buffer size =  7605.33 MiB
.........................................................................................
[04/28/24 15:03:32] INFO     ['distilabel.pipeline.local'] ⏳ Steps loaded: 4/5                                                                            local.py:380
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M1 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple7  (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 11453.25 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =    64.00 MiB, ( 7671.22 / 10922.67)
llama_kv_cache_init:      Metal KV buffer size =    64.00 MiB
llama_new_context_with_model: KV self size  =   64.00 MiB, K (f16):   32.00 MiB, V (f16):   32.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.49 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   258.50 MiB, ( 7929.72 / 10922.67)
llama_new_context_with_model:      Metal compute buffer size =   258.50 MiB
llama_new_context_with_model:        CPU compute buffer size =     9.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LAMMAFILE = 1 |
Model metadata: {'general.quantization_version': '2', 'tokenizer.chat_template': "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}", 'tokenizer.ggml.eos_token_id': '128001', 'tokenizer.ggml.bos_token_id': '128000', 'tokenizer.ggml.model': 'gpt2', 'llama.vocab_size': '128256', 'llama.attention.head_count_kv': '8', 'llama.context_length': '8192', 'llama.attention.head_count': '32', 'general.file_type': '7', 'llama.feed_forward_length': '14336', 'llama.rope.dimension_count': '128', 'llama.rope.freq_base': '500000.000000', 'llama.embedding_length': '4096', 'general.architecture': 'llama', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'general.name': 'Meta-Llama-3-8B-Instruct-imatrix', 'llama.block_count': '32'}
[04/28/24 15:03:35] INFO     ['distilabel.pipeline.local'] ⏳ Steps loaded: 5/5                                                                            local.py:380
                    INFO     ['distilabel.pipeline.local'] ✅ All the steps have been loaded!                                                              local.py:384
                    INFO     ['distilabel.step.loadamrit_dataset'] 🧬 Starting yielding batches from generator step 'loadamrit_dataset'. Offset: 0         local.py:747
                    INFO     ['distilabel.step.loadamrit_dataset'] 📨 Step 'loadamrit_dataset' sending batch 0 to output queue                             local.py:825
                    INFO     ['distilabel.step.loadamrit_dataset'] 🏁 Finished running step 'loadamrit_dataset'                                            local.py:715
                    INFO     ['distilabel.step.genWithSmallerLLM'] 📦 Processing batch 0 in 'genWithSmallerLLM'                                            local.py:792

llama_print_timings:        load time =     726.45 ms
llama_print_timings:      sample time =      28.59 ms /   385 runs   (    0.07 ms per token, 13466.25 tokens per second)
llama_print_timings: prompt eval time =     726.31 ms /   127 tokens (    5.72 ms per token,   174.86 tokens per second)
llama_print_timings:        eval time =    4213.97 ms /   384 runs   (   10.97 ms per token,    91.13 tokens per second)
llama_print_timings:       total time =    5438.51 ms /   511 tokens
Llama.generate: prefix-match hit

llama_print_timings:        load time =     726.45 ms
llama_print_timings:      sample time =       6.89 ms /    83 runs   (    0.08 ms per token, 12041.20 tokens per second)
llama_print_timings: prompt eval time =     244.71 ms /   345 tokens (    0.71 ms per token,  1409.81 tokens per second)
llama_print_timings:        eval time =     930.67 ms /    82 runs   (   11.35 ms per token,    88.11 tokens per second)
llama_print_timings:       total time =    1273.86 ms /   427 tokens
[04/28/24 15:03:42] INFO     ['distilabel.step.genWithSmallerLLM'] 📨 Step 'genWithSmallerLLM' sending batch 0 to output queue                             local.py:825
Llama.generate: prefix-match hit
                    INFO     ['distilabel.step.genWithSmallerLLM'] 📦 Processing batch 1 in 'genWithSmallerLLM'                                            local.py:792

llama_print_timings:        load time =     726.45 ms
llama_print_timings:      sample time =       5.20 ms /    61 runs   (    0.09 ms per token, 11726.26 tokens per second)
llama_print_timings: prompt eval time =      76.20 ms /    93 tokens (    0.82 ms per token,  1220.52 tokens per second)
llama_print_timings:        eval time =     627.89 ms /    60 runs   (   10.46 ms per token,    95.56 tokens per second)
llama_print_timings:       total time =     775.86 ms /   153 tokens
Llama.generate: prefix-match hit

llama_print_timings:        load time =     726.45 ms
llama_print_timings:      sample time =      30.90 ms /   399 runs   (    0.08 ms per token, 12911.37 tokens per second)
llama_print_timings: prompt eval time =      74.76 ms /    86 tokens (    0.87 ms per token,  1150.30 tokens per second)
llama_print_timings:        eval time =    4357.29 ms /   398 runs   (   10.95 ms per token,    91.34 tokens per second)
llama_print_timings:       total time =    4943.47 ms /   484 tokens
[04/28/24 15:03:47] INFO     ['distilabel.step.genWithSmallerLLM'] 📨 Step 'genWithSmallerLLM' sending batch 1 to output queue                             local.py:825
                    INFO     ['distilabel.step.genWithSmallerLLM'] 🏁 Finished running step 'genWithSmallerLLM'                                            local.py:715
                    INFO     ['distilabel.step.combine_columns'] 📦 Processing batch 0 in 'combine_columns'                                                local.py:792
                    INFO     ['distilabel.step.combine_columns'] 📨 Step 'combine_columns' sending batch 0 to output queue                                 local.py:825
                    INFO     ['distilabel.step.combine_columns'] 🏁 Finished running step 'combine_columns'                                                local.py:715
                    INFO     ['distilabel.step.ultrafeedback_meta3'] 📦 Processing batch 0 in 'ultrafeedback_meta3'                                        local.py:792
                    WARNING  ['distilabel.step.ultrafeedback_meta3'] ⚠️ Processing batch 0 with step 'ultrafeedback_meta3' failed. Sending empty batch...   local.py:809
                    WARNING  ['distilabel.step.ultrafeedback_meta3'] Subprocess traceback:                                                                 local.py:813

                             Traceback (most recent call last):
                               File "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/distilabel/pipeline/local.py",
                             line 800, in _non_generator_process_loop
                                 result = next(self.step.process_applying_mappings(*batch.data))
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                               File "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/distilabel/steps/base.py", line
                             391, in process_applying_mappings
                                 for output_rows in generator:
                               File "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/distilabel/steps/tasks/base.py",
                             line 141, in process
                                 outputs = self.llm.generate(
                                           ^^^^^^^^^^^^^^^^^^
                               File
                             "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/pydantic/validate_call_decorator.py",
                             line 59, in wrapper_function
                                 return validate_call_wrapper(*args, **kwargs)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                               File
                             "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/pydantic/_internal/_validate_call.py",
                             line 81, in __call__
                                 res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                               File "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/distilabel/llms/llamacpp.py",
                             line 121, in generate
                                 self._model.create_chat_completion(  # type: ignore
                               File "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/llama_cpp/llama.py", line 1675,
                             in create_chat_completion
                                 return handler(
                                        ^^^^^^^^
                               File "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/llama_cpp/llama_chat_format.py",
                             line 602, in chat_completion_handler
                                 completion_or_chunks = llama.create_completion(
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^
                               File "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/llama_cpp/llama.py", line 1511,
                             in create_completion
                                 completion: Completion = next(completion_or_chunks)  # type: ignore
                                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^
                               File "/Users/amritsingh/pythonpackages/anaconda3/envs/distil/lib/python3.11/site-packages/llama_cpp/llama.py", line 989, in
                             _create_completion
                                 raise ValueError(
                             ValueError: Requested tokens (817) exceed context window of 512

                    INFO     ['distilabel.step.ultrafeedback_meta3'] 📨 Step 'ultrafeedback_meta3' sending batch 0 to output queue                         local.py:825
                    INFO     ['distilabel.step.ultrafeedback_meta3'] 📦 Processing batch 1 in 'ultrafeedback_meta3'                                        local.py:792
                    WARNING  ['distilabel.step.ultrafeedback_meta3'] ⚠️ Processing batch 1 with step 'ultrafeedback_meta3' failed. Sending empty batch...   local.py:809
                    WARNING  ['distilabel.step.ultrafeedback_meta3'] Subprocess traceback:

alvarobartt commented 6 months ago

Hi @amritsingh183! Thanks for opening the issue. Indeed, we're already working on this, as well as aligning the supported params across the other LLM providers. I'll link the PR here once it's created so that you can use distilabel from that branch until v1.1.0 is released!

amritsingh183 commented 6 months ago

Thanks @alvarobartt !!

alvarobartt commented 6 months ago

Hi @amritsingh183, the PR is still a draft, but you can already use it for `n_ctx` with no issues! Install it from the branch with `pip install git+https://github.com/argilla-io/distilabel.git@align-llm-params` 👍🏻

Also, expect it to be released in ~2 weeks. Follow the open roadmap to stay tuned on all the features, fixes, and improvements coming in distilabel v1.1.0:

https://github.com/orgs/argilla-io/projects/15

alvarobartt commented 6 months ago

Indeed, this has just been merged into `develop`, so feel free to install it from `develop` instead 👍🏻

https://github.com/argilla-io/distilabel/pull/594
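Once installed from `develop`, and assuming the new argument is named `n_ctx` as requested in this issue (not verified here against the merged PR), the pipeline above should only need its LLM definitions updated, for example:

llama3LLMCPP = LlamaCppLLM(
    model_path=modelPaths["metaLlama3"],  # modelPaths as defined in the snippet above
    n_gpu_layers=-1,
    n_ctx=4096,  # assumed new argument; large enough for the 817-token requests
    verbose=True,
)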

amritsingh183 commented 6 months ago

I tried the `develop` branch and it works... Thanks!! :-)