KevinMusgrave / pytorch-adapt

Domain adaptation made easy. Fully featured, modular, and customizable.
https://kevinmusgrave.github.io/pytorch-adapt/
MIT License

Specific Architecture #93

Closed rtaiello closed 1 year ago

rtaiello commented 1 year ago

Hi @KevinMusgrave,

I have a question: I'm experimenting with the library, and I think what I want to do should be easily achievable with its features.

I would like to implement the following architecture: given two separate sources (src_1, src_2), two independent generators (G_1, G_2), and two independent classifiers (C_1, C_2), features_1 = G_1(src_1) is the input of C_1, and likewise features_2 = G_2(src_2) is the input of C_2. Both features_1 and features_2 are then passed to a shared discriminator D (DANN's discriminator).
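In plain PyTorch, the architecture I have in mind looks roughly like this (the nn.Linear modules and all dimensions are just placeholder assumptions for illustration):

```python
import torch
import torch.nn as nn

feat_dim, num_classes = 16, 3

# two independent generators and classifiers (stand-in modules)
G_1, G_2 = nn.Linear(8, feat_dim), nn.Linear(8, feat_dim)
C_1, C_2 = nn.Linear(feat_dim, num_classes), nn.Linear(feat_dim, num_classes)
D = nn.Linear(feat_dim, 1)  # single discriminator shared by both streams

src_1, src_2 = torch.randn(4, 8), torch.randn(4, 8)
features_1, features_2 = G_1(src_1), G_2(src_2)
logits_1, logits_2 = C_1(features_1), C_2(features_2)

# both feature sets pass through the same D
d_out = D(torch.cat([features_1, features_2], dim=0))
```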

Many thanks in advance!


KevinMusgrave commented 1 year ago

It probably depends on the format your dataset returns. For example, if you know that the first half of each batch contains src_1 and the second half contains src_2, then you can just wrap your G and C models:

class WrappedG(torch.nn.Module):
    def __init__(self, G1, G2):
        super().__init__()  # required so G1/G2 are registered as submodules
        self.G1 = G1
        self.G2 = G2

    def forward(self, x):
        # assumes the first half of the batch is src_1, the second half src_2
        bs = len(x)
        g1_output = self.G1(x[:bs // 2])
        g2_output = self.G2(x[bs // 2:])
        return torch.cat([g1_output, g2_output], dim=0)

class WrappedC(torch.nn.Module):
    # same idea as WrappedG
    def __init__(self, C1, C2):
        super().__init__()
        self.C1 = C1
        self.C2 = C2

    def forward(self, x):
        bs = len(x)
        return torch.cat([self.C1(x[:bs // 2]), self.C2(x[bs // 2:])], dim=0)

rtaiello commented 1 year ago

Many thanks, you helped me a lot!