boschresearch / torchphysics

https://boschresearch.github.io/torchphysics/
Apache License 2.0

Randomly choose the trunknet input points #18

Closed JianfengNing closed 2 years ago

JianfengNing commented 2 years ago

```python
import numpy as np
import torch

def _slice_points(self, points, out_points, out_axis, batch_size, idx):
    # Boundaries of the current batch inside the full point list.
    a = (idx * batch_size) % len(points)
    b = ((idx + 1) * batch_size) % len(points)
    if a < b:
        if out_axis == 0:
            points = points[a:b]
            out_points = out_points[a:b, :]
        elif out_axis == 1:
            # Proposed change: sample batch_size trunk points at random
            # instead of taking the contiguous slice [a:b].
            INDEX = np.random.choice(len(points), batch_size, replace=False)
            points = points[INDEX]
            out_points = out_points[:, INDEX]
            # out_points = out_points[:, a:b]
        else:
            raise ValueError
    else:
        # The batch wraps around the end of the point list.
        points = torch.cat([points[a:], points[:b]], dim=0)
        if out_axis == 0:
            out_points = torch.cat([out_points[a:, :], out_points[:b, :]], dim=0)
        elif out_axis == 1:
            out_points = torch.cat([out_points[:, a:], out_points[:, :b]], dim=1)
        else:
            raise ValueError
    return points, out_points
```
nheilenkoetter commented 2 years ago

Hi, thanks for your contribution. Why would you want to choose the points randomly in every step? Wouldn't it be more appropriate to choose them sequentially from a shuffled list, similar to how it is currently done when you set shuffle=True? To obtain this feature, I would suggest dividing the shuffle flag into two flags, shuffle_trunk and shuffle_branch, which can then be set separately. What do you think?
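A minimal sketch of what the two separate flags could look like (the helper `make_epoch_indices` and its signature are hypothetical, not the library's API):

```python
import torch

# Hypothetical sketch: shuffle the branch and trunk index orders
# independently once per epoch, then draw consecutive batches from
# those orders. Unlike fully random sampling per step, every point
# is still visited exactly once per epoch.
def make_epoch_indices(n_branch, n_trunk, shuffle_branch=True, shuffle_trunk=True):
    branch_idx = torch.randperm(n_branch) if shuffle_branch else torch.arange(n_branch)
    trunk_idx = torch.randperm(n_trunk) if shuffle_trunk else torch.arange(n_trunk)
    return branch_idx, trunk_idx
```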

In general, I'd suggest moving this discussion to the tomf98 fork and opening the pull request (maybe also including a commit) there; in this repo, we can set up a pull request when everything is ready.

Best regards, Nick

JianfengNing commented 2 years ago

Hello,

In your version, the points for each iteration are points[a:b], i.e. a contiguous block of points. If the batch size is 33 for a 33*33 data set, then the sampled points in each iteration all lie on a single line, so they are not dense in the domain (see the small illustration below). The diversity of the points is important. Similar ideas can be seen in PINNs and DeepRitz, where points are randomly resampled in each iteration. I have tested this for a Poisson problem: without choosing the points randomly, training almost does not converge, while choosing them randomly gives better results. Dividing the shuffle flag into two flags seems good, as long as we obtain this feature. I am not very good at code, so I can only give some suggestions for the algorithm.
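An illustration of the line-sampling effect, assuming the trunk points come from a 33x33 tensor-product grid stored row by row (the variable names here are made up for the example):

```python
import numpy as np

# On a 33x33 grid flattened row by row, a contiguous slice of
# batch_size=33 picks exactly one grid line, while random sampling
# spreads the batch over the whole unit square.
n = 33
grid = np.stack(np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n)), -1).reshape(-1, 2)

slice_batch = grid[0:n]                       # all points share one y-coordinate
rand_idx = np.random.choice(len(grid), n, replace=False)
random_batch = grid[rand_idx]                 # points spread over the domain

print(len(np.unique(slice_batch[:, 1])))      # 1 -> a single horizontal line
print(len(np.unique(random_batch[:, 1])))     # typically many distinct values
```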

Best regards, Jianfeng

nheilenkoetter commented 2 years ago

Hi, could you try using the changes I committed in the fork? Just let me know whether this solves the mentioned problem. Otherwise, we could also take your version, but I believe its downside is that it does not guarantee that every point will be used.

JianfengNing commented 2 years ago

Hi,

When I use the new version, the following error occurred:

```
Traceback (most recent call last):
  File "D:/Users/20191/Desktop/SurveyPaper/Poisson/jianfeng_deeponet.py", line 156, in <module>
    dataloader = tp.utils.DeepONetDataLoader(branch_data=Input, trunk_data=X,
TypeError: __init__() got an unexpected keyword argument 'branch_space'
```

nheilenkoetter commented 2 years ago

Sorry, for some reason not all of my changes were included in the last commit. Please pull and try again (I hope this is solved by now; I will also run some small tests). Sorry!

JianfengNing commented 2 years ago

Hello,

Can you add a loss function defined as Loss = loss ** 0.5, where loss is the loss function we are using now? Using Loss can sometimes achieve better results.
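A minimal sketch of such a root loss as a wrapper around an existing loss function (the helper `root_loss` is hypothetical, not part of the library):

```python
import torch

# Hypothetical sketch: minimize the root of an existing loss instead of
# the loss itself. For root=2 the gradient is rescaled by 1/(2*sqrt(loss)),
# which can help when the raw loss value becomes very small.
def root_loss(loss_fn, root=2):
    def wrapped(*args, **kwargs):
        return loss_fn(*args, **kwargs) ** (1.0 / root)
    return wrapped

# Example: a root-MSE loss.
rmse = root_loss(torch.nn.functional.mse_loss)
```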

JianfengNing commented 2 years ago

Hello

It seems that the learning rate is actually fixed in solver.py? I found that the learning rate used for training only changes when I change it directly in solver.py.

nheilenkoetter commented 2 years ago

Hello,

in my experiments, setting the lr seems to work properly. Could you provide the code you use to set the learning rate, or some minimal non-working example?

JianfengNing commented 2 years ago

Hello,

I have fixed the problem. In my previous code, I used `solver = tp.solver.Solver([cond])`, so the default optimizer was used.

Now I have changed it to `solver = tp.solver.Solver([cond], val_conditions=(), optimizer_setting=optim)`.
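For context, a sketch of how `optim` can be built, following the pattern from the torchphysics tutorials (check your installed version for the exact signature):

```python
import torch
import torchphysics as tp

# Build an explicit optimizer setting so the chosen learning rate is
# actually used by the Solver, instead of the default optimizer.
# `cond` is the training condition defined earlier in the script.
optim = tp.OptimizerSetting(optimizer_class=torch.optim.Adam, lr=1e-3)
solver = tp.solver.Solver([cond], val_conditions=(), optimizer_setting=optim)
```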

nheilenkoetter commented 2 years ago

Hi,

> Hello,
>
> Can you add a loss function defined as Loss = loss ** 0.5, where loss is the loss function we are using now? Using Loss can sometimes achieve better results.

I've added this option now in the fork; you can set the parameter `root=2`. Please also keep in mind to run with `use_full_dataset=False` in the DataCondition.