Lin-Yijie / Graph-Matching-Networks

PyTorch implementation of Graph Matching Networks, e.g., Graph Matching with Bi-level Noisy Correspondence (COMMON, ICCV 2023), Graph Matching Networks for Learning the Similarity of Graph Structured Objects (GMN, ICML 2019).
Other

Thanks for your excellent work, but why can't I find any work related to adding noise in the code? #11

Open Tp-mi opened 2 months ago

Lin-Yijie commented 2 months ago

Since noise is inherent in datasets like Pascal VOC, we compared directly against other methods using the ThinkMatch framework. Here we provide the code for generating noisy keypoints in the Synthetic Noise Experiments described in our paper. Once the 'data_noisexxx.json' file is generated, you need to manually update the annotation file loaded by pygmtools to the new filename, at this line.

import math
import json
import numpy as np

# The displacement (s, θ) is generated from uniform distribution: s ∼ U (0.1, 0.2), θ ∼ U (0, 360)
# where s is the displacement value and θ is the angle.
scope = 0.2
noise_rate = 0.2  # noise rate \eta

def generate_noise_theta(keypoint, scope=0.2):
    x = keypoint['x']
    y = keypoint['y']
    if x > 256 or y > 256 or x < 0 or y < 0:
        # mark keypoints already outside the 256x256 bounding box as noise
        print((x, y))
        keypoint['noise'] = True
        return keypoint

    # rejection sampling: resample until the displaced point stays inside the box
    while True:
        distance = np.random.uniform(0.5, 1) * scope * 256  # s ~ U(0.5, 1) * scope, i.e. U(0.1, 0.2) of the image size when scope = 0.2
        theta = np.random.uniform() * 2 * math.pi           # θ ~ U(0, 2π)
        x_add = np.cos(theta) * distance
        y_add = np.sin(theta) * distance
        if 0 <= x + x_add <= 256 and 0 <= y + y_add <= 256:
            break

    keypoint['x'] = x + x_add
    keypoint['y'] = y + y_add
    keypoint['noise'] = True
    return keypoint

# get annotation data
data_path = './data/SPair-71k/data.json'
with open(data_path) as f:
    data_dict = json.load(f)

# get training data list
train_data_path = './data/SPair-71k/train.json'
with open(train_data_path) as f:
    train_data_dict = json.load(f)
train_data_list = []
for pair in train_data_dict:  # each entry is a pair of image ids
    train_data_list.append(pair[0])
    train_data_list.append(pair[1])
train_data_list = set(train_data_list)

for k in train_data_list:
    keypoint_size = len(data_dict[k]['kpts'])  # number of keypoints in this image
    noise_num = math.ceil(noise_rate * keypoint_size)  # number of keypoints selected as noise
    window = np.random.choice(keypoint_size, noise_num, replace=False)  # indices of noisy keypoints
    for i in range(keypoint_size):
        if i in window:
            # displace the selected keypoints
            data_dict[k]['kpts'][i] = generate_noise_theta(data_dict[k]['kpts'][i], scope)
        else:
            data_dict[k]['kpts'][i]['noise'] = False

noise_name = f'data_noise{noise_rate}_scope{scope}'
noise_data_path = './data/SPair-71k/' + noise_name + '.json'
with open(noise_data_path, "w", encoding="utf-8") as fp:
    json.dump(data_dict, fp, ensure_ascii=False, indent=4)
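As a quick sanity check on the selection logic above (a sketch, not part of the original script — the toy `data_dict` below stands in for the real SPair-71k annotations), each image should end up with exactly ceil(η · n) keypoints flagged as noise:

```python
import math
import numpy as np

noise_rate = 0.2  # noise rate \eta, as in the script above

# toy stand-in for the SPair-71k annotation dict: one image with 10 keypoints
data_dict = {
    'img_0': {'kpts': [{'x': 10.0, 'y': 20.0} for _ in range(10)]},
}

for k in data_dict:
    keypoint_size = len(data_dict[k]['kpts'])
    noise_num = math.ceil(noise_rate * keypoint_size)
    window = np.random.choice(keypoint_size, noise_num, replace=False)
    for i in range(keypoint_size):
        # here we only set the flag; the real script also displaces the keypoint
        data_dict[k]['kpts'][i]['noise'] = i in window

flagged = sum(kp['noise'] for kp in data_dict['img_0']['kpts'])
print(flagged)  # ceil(0.2 * 10) = 2 keypoints flagged as noise
```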
Tp-mi commented 2 months ago

Thank you very much for your reply. Have you tried using pixel-level noise?

Lin-Yijie commented 2 months ago

In COMMON we did not evaluate this choice, but it is worth exploring. "Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond" has attempted this setting with adversarial training.
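For reference, a minimal sketch of one common form of pixel-level noise (additive Gaussian noise on the image array, clipped back to valid pixel range). This is only an illustration; the function name and the sigma value are my own choices, not from either paper:

```python
import numpy as np

def add_pixel_noise(image, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise to a uint8 image and clip back to [0, 255]."""
    rng = rng or np.random.default_rng()
    noisy = image.astype(np.float32) + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# example: perturb a 256x256 mid-gray RGB image
img = np.full((256, 256, 3), 128, dtype=np.uint8)
noisy_img = add_pixel_noise(img, sigma=10.0)
```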