aws-neuron / aws-neuron-samples

Example code for AWS Neuron SDK developers building inference and training applications

After compiling successfully, running inference fails with `TypeError: NeuronUNet.forward() got an unexpected keyword argument 'timestep_cond'` #72

Open SmallStom opened 2 months ago

SmallStom commented 2 months ago

```
TypeError                                 Traceback (most recent call last)
Cell In[5], line 35
     24 prompt = ["a photo of an astronaut riding a horse on mars",
     25           "sonic on the moon",
     26           "elvis playing guitar while eating a hotdog",
    (...)
     31           "kids playing soccer at the FIFA World Cup"
     32           ]
     34 # First do a warmup run so all the asynchronous loads can finish
---> 35 image_warmup = pipe(prompt[0]).images[0]
     37 plt.title("Image")
     38 plt.xlabel("X pixel scaling")

File /opt/aws_neuronx_venv_pytorch_2_1/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File /opt/aws_neuronx_venv_pytorch_2_1/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py:1174, in StableDiffusionXLPipeline.__call__(self, prompt, prompt_2, height, width, num_inference_steps, timesteps, denoising_end, guidance_scale, negative_prompt, negative_prompt_2, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, original_size, crops_coords_top_left, target_size, negative_original_size, negative_crops_coords_top_left, negative_target_size, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)
   1172 if ip_adapter_image is not None or ip_adapter_image_embeds is not None:
   1173     added_cond_kwargs["image_embeds"] = image_embeds
-> 1174 noise_pred = self.unet(
   1175     latent_model_input,
   1176     t,
   1177     encoder_hidden_states=prompt_embeds,
   1178     timestep_cond=timestep_cond,
   1179     cross_attention_kwargs=self.cross_attention_kwargs,
   1180     added_cond_kwargs=added_cond_kwargs,
   1181     return_dict=False,
   1182 )[0]
   1184 # perform guidance
   1185 if self.do_classifier_free_guidance:

File /opt/aws_neuronx_venv_pytorch_2_1/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File /opt/aws_neuronx_venv_pytorch_2_1/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

TypeError: NeuronUNet.forward() got an unexpected keyword argument 'timestep_cond'
```
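The traceback points to a signature mismatch: the installed diffusers pipeline passes extra keyword arguments (here `timestep_cond`) to the UNet, but the sample's `NeuronUNet` wrapper does not declare them. A minimal, dependency-free sketch of the pattern (class names and the `**kwargs` workaround are illustrative, not the sample's actual code, and silently dropping arguments is only safe if the traced model truly does not use them):

```python
# Plain-Python illustration of the TypeError above; no Neuron or
# diffusers dependencies. Class names here are hypothetical.

class StrictUNetWrapper:
    # forward() declares only the arguments it knew about when written,
    # so a newer caller passing `timestep_cond` triggers a TypeError.
    def forward(self, sample, timestep, encoder_hidden_states,
                added_cond_kwargs=None, return_dict=False):
        return (sample,)

class TolerantUNetWrapper:
    # Accepting **kwargs lets a newer pipeline pass extras such as
    # `timestep_cond`; they are ignored here, which is only safe if
    # the compiled model does not actually need them.
    def forward(self, sample, timestep, encoder_hidden_states,
                added_cond_kwargs=None, return_dict=False, **kwargs):
        return (sample,)

strict, tolerant = StrictUNetWrapper(), TolerantUNetWrapper()

try:
    strict.forward("latents", 0, "embeds", timestep_cond=None)
except TypeError as e:
    print("strict wrapper:", e)

print("tolerant wrapper:",
      tolerant.forward("latents", 0, "embeds", timestep_cond=None))
```

The more robust fix is usually to match the diffusers version the sample was written against, rather than patching the wrapper.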

jluntamazon commented 2 months ago

Hi @SmallStom,

Would you be able to specify which sample you were executing and the versions of the software you were using?
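(For anyone reporting the same issue: the installed versions can be gathered with the stdlib `importlib.metadata` module. The package names below are a guess at the relevant ones for this notebook, not an authoritative list.)

```python
# Print versions of the packages most likely to explain a
# pipeline/wrapper signature mismatch. Package names are assumptions.
from importlib import metadata

for pkg in ("torch", "torch-neuronx", "neuronx-cc",
            "diffusers", "transformers"):
    try:
        print(f"{pkg}=={metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```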

SmallStom commented 2 months ago

Yes, I used this notebook: https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/inference/hf_pretrained_sdxl_base_1024_inference.ipynb

env: aws_neuronx_pytorch_2**

ec2: inf2.8xlarge
(screenshot attached)

hannanjgaws commented 2 months ago

Hi @SmallStom, thank you for providing the link to the notebook. We are able to reproduce a compiler issue that we are currently debugging. We will keep this ticket updated as we make progress on a fix.