valer1435 / RepoPilot

Open-source AI-powered tool for smart repository maintenance

Multimodal #18

Open valer1435 opened 2 months ago

valer1435 commented 2 months ago

How to use Fedot for a multimodal case?

valer1435 commented 2 months ago

@RepoPilotAssistant help

RepoPilotAssistant commented 2 months ago

Here is how you can use Fedot for a multimodal case:

  1. Installation: First, make sure you have FEDOT installed. If not, install it using pip:

    pip install fedot
  2. Data Preparation: Prepare your multimodal data. Multimodal data can include different types of data like tabular, image, text, etc. Ensure that all the data is properly preprocessed and ready for use.

  3. Initialize FEDOT: Import the necessary libraries and initialize a Fedot object with the appropriate parameters. For example:

    from fedot import Fedot
    
    # Initialize Fedot with classification problem type, timeout of 5 minutes, and 'best_quality' preset
    model = Fedot(problem='classification', timeout=5, preset='best_quality', n_jobs=-1)
  4. Fit the Model: Fit the initialized Fedot object to your training data using the fit() method. Pass the features (x_train) and target variable (y_train) as arguments:

    model.fit(features=x_train, target=y_train)
  5. Prediction: Once the model is trained, you can use it to make predictions on new, unseen data using the predict() method:

    prediction = model.predict(features=x_test)
  6. Evaluate the Model: After making predictions, evaluate the performance of your model using appropriate metrics for your task (classification or regression). You can use FEDOT's built-in metric calculation function:

    metrics = model.get_metrics(target=y_test)
  7. Save and Load Models: To persist a trained model, use the save() method; to load a saved model, use the load() method. Note that the exact persistence API has changed between FEDOT releases, so check the documentation for your installed version:

    # Save the model
    model.save('my_model.pkl')
    
    # Load the model
    loaded_model = Fedot.load('my_model.pkl')
  8. Advanced Usage: For more advanced usage of FEDOT, such as custom tasks or composite models, refer to the "Advanced usage" section in the documentation.

By following these steps, you should be able to effectively use FEDOT for your multimodal cases.
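As a concrete illustration of step 2 (data preparation), the sketch below builds a small aligned multimodal dataset and splits it into train and test parts. All names here (table, texts, target) are invented for the example, and only numpy is used so the snippet runs without FEDOT installed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two aligned modalities: row i of the table corresponds to texts[i]
n_samples = 10
table = rng.normal(size=(n_samples, 3))                    # numerical features
texts = np.array([f"sample document {i}" for i in range(n_samples)])
target = rng.integers(0, 2, size=n_samples)                # binary class labels

# An index-based split keeps the modalities aligned with each other
idx = rng.permutation(n_samples)
train_idx, test_idx = idx[:8], idx[8:]

x_train = {"table": table[train_idx], "text": texts[train_idx]}
x_test = {"table": table[test_idx], "text": texts[test_idx]}
y_train, y_test = target[train_idx], target[test_idx]
```

The key point is that every modality must describe the same samples in the same order; splitting by a shared index array is one simple way to guarantee that.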

Please follow links below:

  1. Multi-Modal Tasks — FEDOT 0.7.4 documentation
  2. Note
  3. What is FEDOT — FEDOT 0.7.4 documentation
  4. Multimodal data — FEDOT 0.7.4 documentation
  5. Welcome to FEDOT’s documentation! — FEDOT 0.7.4 documentation
valer1435 commented 2 months ago

Thanks @RepoPilotAssistant! Please explain what the data should look like.

RepoPilotAssistant commented 2 months ago

Based on the provided information, multimodal data in FEDOT is built from the InputData class (features plus target for one modality) and the OutputData class (predictions returned by a fitted pipeline). Text is a data type of its own in FEDOT (DataTypesEnum.text), so a text modality is normally represented as a separate InputData object rather than packed into the tabular one. Here are examples of how you can structure your multimodal data:

  1. Input Data: In this example, we have two modalities: numerical features (a 2D numpy array with data_type=DataTypesEnum.table) and text features (a separate InputData with data_type=DataTypesEnum.text).
import numpy as np

from fedot.core.data.data import InputData
from fedot.core.repository.dataset_types import DataTypesEnum
from fedot.core.repository.tasks import Task, TaskTypesEnum

task = Task(TaskTypesEnum.classification)
target = np.array(['cat', 'dog', 'cat'])

# Tabular modality: numerical features
table_data = InputData(
    idx=np.array([0, 1, 2]),
    task=task,
    data_type=DataTypesEnum.table,
    features=np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]),
    target=target
)

# Text modality: one document per sample, aligned with the table by idx
text_data = InputData(
    idx=np.array([0, 1, 2]),
    task=task,
    data_type=DataTypesEnum.text,
    features=np.array(['This is a sample text.', 'Another sample text.', 'Yet another sample text.']),
    target=target
)
  2. Output Data: In this example, we have a single target variable represented as a numpy array.
from fedot.core.data.data import OutputData

# Output data with a single target variable (e.g., class label)
output_data = OutputData(
    idx=np.array([0, 1, 2]),
    task=Task(TaskTypesEnum.classification),
    data_type=DataTypesEnum.table,
    target=np.array(['cat', 'dog', 'cat'])
)

You can then combine the per-modality InputData objects into a MultiModalData object and use it to train and evaluate machine learning models in FEDOT.
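Conceptually, FEDOT groups per-modality data under data-source keys (MultiModalData behaves like a mapping from source name to data). The sketch below mimics that idea with a plain Python dict so it runs without FEDOT installed; the key names are illustrative, not necessarily the library's exact identifiers:

```python
import numpy as np

# Per-modality feature blocks, aligned row-by-row (same sample order)
table_features = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
text_features = np.array(["This is a sample text.",
                          "Another sample text.",
                          "Yet another sample text."])
target = np.array(["cat", "dog", "cat"])

# A MultiModalData-like mapping: data-source name -> per-modality features.
# Key names here are illustrative placeholders.
multimodal = {
    "data_source_table": table_features,
    "data_source_text": text_features,
}

# Every modality must describe the same set of samples
assert all(len(block) == len(target) for block in multimodal.values())
```

This mapping view explains why each modality carries its own idx array: it is what lets FEDOT keep the sources aligned when fitting a composite pipeline.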

Please follow links below:

  1. Multi-Modal Tasks — FEDOT 0.7.4 documentation
  2. Note
  3. Returns
  4. Multimodal data — FEDOT 0.7.4 documentation
  5. What is FEDOT — FEDOT 0.7.4 documentation
valer1435 commented 1 week ago

Thank you @RepoPilotAssistant

RepoPilotAssistant commented 1 week ago

To structure multimodal data in FEDOT, you can use the MultiModalData class. This class allows you to load data from a CSV file and specify the task type, target columns, and other configurations such as text columns and columns to drop. Here’s a basic example:

from fedot.core.data.multi_modal import MultiModalData

data = MultiModalData.from_csv(
    file_path='multimodal_dataset.csv',
    task='classification',
    target_columns='target_column',
    text_columns=['text_col1', 'text_col2'],
    columns_to_drop=['col_to_drop1', 'col_to_drop2'],
    index_col=None
)

In this example, you define the task type as 'classification', specify the target column, identify text columns, and select columns to drop from the dataset. The index_col parameter is set to None if there is no index column in the dataset. This structure allows FEDOT to handle multimodal data effectively.
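To make the expected CSV layout concrete, here is a hypothetical multimodal_dataset.csv built with pandas. The column names match the invented names from the snippet above (text_col1, target_column, and so on), and FEDOT itself is not needed to run it:

```python
import pandas as pd

# Hypothetical multimodal dataset: one numeric column, two text columns,
# a target column, and two columns intended to be dropped
df = pd.DataFrame({
    "num_feature": [0.1, 0.3, 0.5],
    "text_col1": ["short review", "another review", "third review"],
    "text_col2": ["extra note", "more notes", "final note"],
    "col_to_drop1": ["id-1", "id-2", "id-3"],
    "col_to_drop2": [0, 0, 0],
    "target_column": ["cat", "dog", "cat"],
})
df.to_csv("multimodal_dataset.csv", index=False)
```

MultiModalData.from_csv would then read such a file, treat text_col1 and text_col2 as text sources, discard the drop-columns, and use target_column as the label.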

Please follow links below:

  1. Multi-Modal Tasks — FEDOT 0.7.4 documentation
  2. After the fitting is completed, you can look at the structure of the resulting pipeline.
  3. Multimodal data — FEDOT 0.7.4 documentation
  4. Data — FEDOT 0.7.4 documentation
  5. What is FEDOT — FEDOT 0.7.4 documentation