Closed rajveer43 closed 2 months ago
This issue is stale because it has been open for 60 days with no activity.
@AlphaHinex @solrex @tonyanhq @ZeyuChen can anyone look at this issue?
We would be thankful if you could contribute it and then submit a PR to PaddleNLP, since we don't have enough time to work on this model.
Okay, I am willing to submit it.
This issue was closed because it has been inactive for 14 days since being marked as stale.
Feature request
Implement this in PaddlePaddle.

Multimodal learning aims to build models that can process and relate information from multiple modalities. Despite years of development in this field, it remains challenging to design a unified network for processing various modalities (e.g. natural language, 2D images, 3D point clouds, audio, video, time series, tabular data) due to the inherent gaps among them. In this work, we propose a framework, named Meta-Transformer, that leverages a frozen encoder to perform multimodal perception without any paired multimodal training data. In Meta-Transformer, the raw input data from various modalities are mapped into a shared token space, allowing a subsequent encoder with frozen parameters to extract high-level semantic features of the input data. Composed of three main components: a unified data tokenizer, a modality-shared encoder, and task-specific heads for downstream tasks, Meta-Transformer is the first framework to perform unified learning across 12 modalities with unpaired data. Experiments on different benchmarks reveal that Meta-Transformer can handle a wide range of tasks including fundamental perception (text, image, point cloud, audio, video), practical applications (X-ray, infrared, hyperspectral, and IMU), and data mining (graph, tabular, and time-series). Meta-Transformer indicates a promising future for developing unified multimodal intelligence with transformers.
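To make the requested architecture concrete, here is a minimal sketch of the three components the abstract describes (unified tokenizer, modality-shared frozen encoder, task-specific heads). It uses NumPy instead of a real Paddle implementation, and all class and function names are illustrative assumptions, not the paper's or PaddleNLP's actual API:

```python
import numpy as np

EMBED_DIM = 8  # shared token dimension (illustrative, not from the paper)

def tokenize_text(token_ids, vocab_size=100):
    # Unified data tokenizer: map raw text input into the shared token space.
    rng = np.random.default_rng(0)
    embedding = rng.standard_normal((vocab_size, EMBED_DIM))
    return embedding[token_ids]                 # (seq_len, EMBED_DIM)

def tokenize_image(patches):
    # Images: project flattened patches into the same shared token space.
    rng = np.random.default_rng(1)
    proj = rng.standard_normal((patches.shape[1], EMBED_DIM))
    return patches @ proj                       # (num_patches, EMBED_DIM)

class FrozenEncoder:
    # Modality-shared encoder with frozen parameters: a single linear layer
    # stands in for the pretrained transformer whose weights are not updated.
    def __init__(self, dim=EMBED_DIM, seed=2):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((dim, dim))  # frozen weights

    def __call__(self, tokens):
        return np.tanh(tokens @ self.w)         # high-level semantic features

def classification_head(features, num_classes=3, seed=3):
    # Task-specific head: the only part trained per downstream task.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((features.shape[1], num_classes))
    return features.mean(axis=0) @ w            # pooled logits, (num_classes,)

encoder = FrozenEncoder()

# The same frozen encoder serves both modalities; only the tokenizers
# and heads are modality- or task-specific.
text_logits = classification_head(encoder(tokenize_text(np.array([1, 5, 9]))))
image_logits = classification_head(encoder(tokenize_image(np.ones((4, 16)))))
print(text_logits.shape, image_logits.shape)
```

A real port would replace the NumPy pieces with `paddle.nn` layers and load pretrained encoder weights with gradients disabled; this sketch only shows how the data flow of the three components fits together.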
Motivation
An implementation is available.
Your contribution
-