I am using the Docker image as described in the README.
On executing the inference command, the config loads fine and the process starts, but it then gets killed at some point.
I believe it happens when it tries to build the model:
```python
model = (
    build_module(
        cfg.model,
        MODELS,
        input_size=latent_size,
        in_channels=vae.out_channels,
        caption_channels=text_encoder.output_dim,
        model_max_length=text_encoder.model_max_length,
        enable_sequence_parallelism=enable_sequence_parallelism,
    )
    .to(device, dtype)
    .eval()
)
```
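A process that is "killed" with no traceback is often the OS out-of-memory killer. To check whether memory is the cause, one could log peak RSS around the build step; a minimal standard-library sketch (the helper `log_peak_rss` and the call sites are hypothetical, not part of the repo):

```python
import resource
import sys

def log_peak_rss(tag: str) -> float:
    """Print and return the process's peak resident set size in GiB."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in bytes on macOS, in kilobytes on Linux
    scale = 1 if sys.platform == "darwin" else 1024
    gib = peak * scale / (1024 ** 3)
    print(f"[{tag}] peak RSS: {gib:.2f} GiB")
    return gib

# Hypothetical usage around the suspected step:
# log_peak_rss("before build_module")
# model = build_module(...)
# log_peak_rss("after build_module")
log_peak_rss("demo")
```

If the "after" value approaches the container's memory limit just before the kill, that would confirm an OOM rather than a crash in the code itself.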
My machine is a Mac M2 Pro.