Open radna0 opened 2 weeks ago
Padding is not the issue; the issue is:

```
1. Size: 29.66M
XLA label: register allocator spill slots call depth 2
Allocation type: scoped
==========================
```
If you can dump the HLO following https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#common-debugging-environment-variables-combinations, we can open a bug with the XLA team. @will-cromar, since you are on call this week.
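For reference, a minimal sketch of the debugging combination from that guide, assuming the variables are set before `torch_xla` is imported (the output path is an example; the runtime appends an ordinal suffix, which matches the attached `save1.hlo.0`):

```python
import os

# Env vars from TROUBLESHOOTING.md; must be set before torch_xla initializes.
os.environ["XLA_IR_DEBUG"] = "1"            # annotate IR nodes with Python frame info
os.environ["XLA_HLO_DEBUG"] = "1"           # carry those annotations into HLO metadata
os.environ["XLA_SAVE_TENSORS_FMT"] = "hlo"  # dump graphs as HLO text
os.environ["XLA_SAVE_TENSORS_FILE"] = "/tmp/save1.hlo"  # example path
```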
I followed the guide, and here is the HLO file. @JackCaoG @will-cromar *Renamed to .txt because GitHub doesn't allow the .hlo format: save1.hlo.0.txt
🐛 Bug
The error seems to be related to `pixel_values` being padded.
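If that is right, the likely mechanism (my inference from the repro below, not stated explicitly in the report) is that with `max_num=1` the loader returns a single tile, so `pixel_values` has shape `(1, 3, 448, 448)`, and a maximally sharded dimension gets padded up to a multiple of the mesh-axis size:

```python
# Illustrative arithmetic only, with a hypothetical device count: a sharded
# dimension is padded up to the next multiple of the mesh-axis size.
tiles, num_devices = 1, 4
padded = -(-tiles // num_devices) * num_devices  # ceil(tiles / num_devices) * num_devices
print(f"dim 0: {tiles} -> {padded} ({padded - tiles} padded rows)")
```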
To Reproduce
Steps to reproduce the behavior:
```python
# Imports reconstructed from the usage below
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms import InterpolationMode
from transformers import AutoModel, AutoTokenizer

import torch_xla.core.xla_model as xm
import torch_xla.distributed.spmd as xs
import torch_xla.runtime as xr
from torch_xla.experimental.spmd_fully_sharded_data_parallel import (
    _prepare_spmd_partition_spec,
    SpmdFullyShardedDataParallel as FSDPv2,
)

xr.use_spmd(auto=False)

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)


def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform


def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio


def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height
    # ... (rest of the function body was truncated in the original post)


def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values


path = 'radna/XLA-InternVL2-8B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
).eval()
```
Define the mesh and `partition_spec`:
```python
num_devices = xr.global_runtime_device_count()
mesh_shape = (num_devices, 1)
device_ids = np.array(range(num_devices))
```
Note that the mesh must have an axis named `'fsdp'`; the weights and activations will be sharded along it.
```python
mesh = xs.Mesh(device_ids, mesh_shape, ("fsdp", "model"))
xs.set_global_mesh(mesh)

model = FSDPv2(model)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```
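A sanity check I would add here (not part of the original report) to confirm the axis layout before sharding any inputs:

```python
# Assumes xs.Mesh keeps the constructor arguments as attributes, which recent
# torch_xla versions do.
print(mesh.mesh_shape, mesh.axis_names)  # e.g. (4, 1) ('fsdp', 'model') on 4 chips
```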
Set the max number of tiles in `max_num`:
```python
pixel_values = load_image('./image1.jpg', max_num=1).to(torch.bfloat16).to(xm.xla_device())
generation_config = dict(max_new_tokens=1024, do_sample=True)

xs.mark_sharding(pixel_values, xs.get_global_mesh(),
                 _prepare_spmd_partition_spec(pixel_values, shard_maximal=True))
```
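To see which annotation actually lands on `pixel_values`, here is a probe using a private helper (it exists in torch_xla's test suite as far as I know, so treat it as an assumption):

```python
import torch_xla

# Print the HLO sharding spec applied to pixel_values; padding is expected
# when the sharded dimension is not divisible by the mesh-axis size.
print(pixel_values.shape)
print(torch_xla._XLAC._get_xla_sharding_spec(pixel_values))
```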
Single-image single-round conversation:
```python
# The '<image>' placeholder is an assumption: InternVL prompts normally start
# with it, and it was likely stripped as an HTML tag in the original post.
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
```
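One experiment that could rule padding in or out (my suggestion, not from this thread): replicate `pixel_values` instead of sharding it maximally, i.e. replace the `mark_sharding` call above with the variant below, and check whether the spill-slot allocation persists.

```python
# A partition spec of all None replicates the tensor on every device,
# so dimension 0 needs no padding.
xs.mark_sharding(pixel_values, xs.get_global_mesh(), (None, None, None, None))
```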