kyegomez / Kosmos-X

The Next Generation Multi-Modality Superintelligence
https://discord.gg/GYbXvDGevY
Apache License 2.0

example is not working, alternatives? #5

Open fblgit opened 8 months ago

fblgit commented 8 months ago

The example does not seem to work. Does anyone have working code for this model?

Cell In[22], line 10
      7 images.long()
      9 # Pass the sample tensors to the model's forward function
---> 10 output = model.forward(text_tokens=text_tokens, images=images)
     12 # Print the output from the model
     13 print(f"Output: {output}")

File /data/conda/envs/eqbench/lib/python3.10/site-packages/kosmosx/model.py:251, in Kosmos.forward(self, text_tokens, images, **kwargs)
    248     raise
    250 try:
--> 251     return self.decoder(model_input, passed_x=model_input)[0]
    252 except Exception as e:
    253     logging.error(f"Failed during model forward pass: {e}")

File /data/conda/envs/eqbench/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File /data/conda/envs/eqbench/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File /data/conda/envs/eqbench/lib/python3.10/site-packages/torchscale/architecture/decoder.py:399, in Decoder.forward(self, prev_output_tokens, self_attn_padding_mask, encoder_out, incremental_state, features_only, return_all_hiddens, token_embeddings, **kwargs)
    387 def forward(
    388     self,
    389     prev_output_tokens,
   (...)
    397 ):
    398     # embed tokens and positions
--> 399     x, _ = self.forward_embedding(
    400         prev_output_tokens, token_embeddings, incremental_state
    401     )
    402     is_first_step = self.is_first_step(incremental_state)
    404     # relative position

File /data/conda/envs/eqbench/lib/python3.10/site-packages/torchscale/architecture/decoder.py:368, in Decoder.forward_embedding(self, tokens, token_embedding, incremental_state)
    365         positions = positions[:, -1:]
    367 if token_embedding is None:
--> 368     token_embedding = self.embed_tokens(tokens)
    370 x = embed = self.embed_scale * token_embedding
    372 if positions is not None:

File /data/conda/envs/eqbench/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File /data/conda/envs/eqbench/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File /data/conda/envs/eqbench/lib/python3.10/site-packages/bitsandbytes/nn/modules.py:127, in Embedding.forward(self, input)
    126 def forward(self, input: Tensor) -> Tensor:
--> 127     emb = F.embedding(
    128         input,
    129         self.weight,
    130         self.padding_idx,
    131         self.max_norm,
    132         self.norm_type,
    133         self.scale_grad_by_freq,
    134         self.sparse,
    135     )
    137     return emb

File /data/conda/envs/eqbench/lib/python3.10/site-packages/torch/nn/functional.py:2233, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2227     # Note [embedding_renorm set_grad_enabled]
   2228     # XXX: equivalent to
   2229     # with torch.no_grad():
   2230     #   torch.embedding_renorm_
   2231     # remove once script supports set_grad_enabled
   2232     _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2233 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.FloatTensor instead (while checking arguments for embedding)
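
For what it's worth, the error itself is a dtype problem: `F.embedding` needs integer (Long/Int) indices but is receiving a float tensor. On the caller's side, note that `Tensor.long()` returns a new tensor rather than converting in place, so the bare `images.long()` call in the snippet above has no effect. Below is a minimal sketch of how the inputs might be prepared; the default `Kosmos()` constructor, the vocab size, sequence length, and image resolution are assumptions, not values taken from the repo:

```python
import torch
from kosmosx.model import Kosmos

# Assumption: the model can be built with default constructor arguments.
model = Kosmos()

# Token IDs must be an integer dtype (Long/Int).
# Vocab size 32002 and sequence length 50 are placeholder values.
text_tokens = torch.randint(0, 32002, (1, 50), dtype=torch.long)

# Images stay floating point; 3x224x224 is an assumed input resolution.
images = torch.randn(1, 3, 224, 224)

output = model(text_tokens=text_tokens, images=images)
print(f"Output: {output}")
```

That said, the traceback shows the failure inside `Kosmos.forward`, where `model_input` is handed to the decoder as `prev_output_tokens` and re-embedded via `embed_tokens`. If `model_input` at that point is already a float embedding rather than token IDs, fixing the calling code alone won't help and the bug would be in `kosmosx/model.py` itself.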


linux-leo commented 6 months ago

@fblgit

@kyegomez is known for creating fake repos, hallucinated by LLMs, that claim to implement other people's published research. Most of his projects are flawed, are inferior copies of other research papers, or do not function at all. Watch out.

This also calls his Agora collective into question.
