meta-llama / llama

Inference code for Llama models

Feasibility of using Llama2 LLM on AWS EC2 G4dn.8xLarge and Inferentia 2.8xlarge Instances #566

Open AmlanSamanta opened 1 year ago

AmlanSamanta commented 1 year ago

Hi all,

Is it possible to run inference on the aforementioned instances? We are hitting many issues on Inf2 with the Falcon model.

Context:

We are facing issues while running Falcon/Falcoder on an Inf2.8xlarge instance. The same experiment ran successfully on a G5.8xlarge instance, but the identical code does not work on the Inf2 instance. We are aware that Inf2 uses AWS Inferentia accelerators instead of NVIDIA GPUs, so we added helper code to run on the instance's NeuronCores via the torch-neuronx library. The code changes and the corresponding error screenshots are provided below for reference:

Code without any torch-neuronx usage - Generation code snippet:

```python
generation_output = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=False,
    max_new_tokens=max_new_tokens,
    early_stopping=True,
)

print("generation_output")
print(generation_output)

s = generation_output.sequences[0]
output = tokenizer.decode(s)
```

[Screenshot: error output without any changes]
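For context, the generate() call above does not show how the model, tokenizer, inputs, or generation config were constructed. A plausible reconstruction of that setup, with the checkpoint name, prompt, and sampling parameters as placeholders rather than values from the original post:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Placeholder checkpoint; the original post does not name the exact Falcon variant.
model_name = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Write a Python function that reverses a string."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]

# Illustrative sampling settings, not taken from the original post.
generation_config = GenerationConfig(do_sample=True, temperature=0.7, top_p=0.9)
max_new_tokens = 128
```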

Code using torch-neuronx - helper function code snippet:

```python
import os

def generate_sample_inputs(tokenizer, sequence_length):
    dummy_input = "dummy"
    embeddings = tokenizer(
        dummy_input, max_length=sequence_length,
        padding="max_length", return_tensors="pt",
    )
    return tuple(embeddings.values())

def compile_model_inf2(model, tokenizer, sequence_length, num_neuron_cores):
    # use only one neuron core
    os.environ["NEURON_RT_NUM_CORES"] = str(num_neuron_cores)
    import torch_neuronx
    payload = generate_sample_inputs(tokenizer, sequence_length)
    return torch_neuronx.trace(model, payload)

model = compile_model_inf2(model, tokenizer, sequence_length=512, num_neuron_cores=1)
```

[Screenshot: error with torch-neuronx related code (1)]

[Screenshot: error with torch-neuronx related code (2)]
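One likely reason the traced approach fails: torch_neuronx.trace compiles a model for fixed input shapes and no data-dependent control flow, while generate() is an autoregressive loop whose sequence length grows each step, so tracing the whole generation path tends to break on Inf2. AWS's documented route for decoder LLMs on Inferentia2 is the transformers-neuronx package, which provides tensor-parallel, generation-aware model classes. A minimal sketch, with the checkpoint path as a placeholder and the class/argument names to be verified against the Neuron SDK docs for your installed version:

```python
# Sketch of the transformers-neuronx path on Inf2; verify names and arguments
# against the Neuron SDK documentation for your installed version.
from transformers import AutoTokenizer
from transformers_neuronx.llama.model import LlamaForSampling

model_dir = "path/to/llama-2-7b-hf"  # placeholder: local checkpoint directory
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# tp_degree shards the weights across NeuronCores; an inf2.8xlarge has one
# Inferentia2 chip with 2 NeuronCores, so tp_degree=2 is the natural choice.
model = LlamaForSampling.from_pretrained(model_dir, batch_size=1, tp_degree=2, amp="f16")
model.to_neuron()  # compile the model for the Neuron runtime

input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids
generated = model.sample(input_ids, sequence_length=256)
print(tokenizer.decode(generated[0]))
```

Whether a given Falcon variant is covered depends on the transformers-neuronx version, so check the package's model support list before going down this route.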

Could this GitHub issue be related to our specific problems described above? https://github.com/oobabooga/text-generation-webui/issues/2260

My questions are:

  1. Can we try Llama 2 on G4dn.8xlarge and Inf2.8xlarge instances, or is it not supported yet? If not, which instance type should we try, considering cost-effectiveness? (See the GPU sketch after this list.)
  2. Is it feasible to run Falcon inference on Inf2, or should we go for G4dn.8xlarge given the many issues we are hitting on Inf2?
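On the GPU question: a g4dn.8xlarge carries a single NVIDIA T4 with 16 GB of memory, which is just enough for Llama-2-7B in fp16 (the 13B and 70B variants will not fit on one T4 without quantization or offloading). A minimal sketch, assuming access to the gated meta-llama weights on the Hugging Face Hub:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes access to the gated meta-llama repo has been granted and
# `huggingface-cli login` has been run on the instance.
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~14 GB of weights; a tight fit on a 16 GB T4
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```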
mallapraveen commented 1 year ago

@AmlanSamanta We have the same question. Any answers you have received, or any insights you can share, would be much appreciated.