gicLAB / tvm-GSPC

Apache License 2.0

where can I find the pytorch version of optimizing grouped convolutions #1

Open lmomoy opened 4 years ago

lmomoy commented 4 years ago

Thanks for participating in the TVM community! We use https://discuss.tvm.ai for any general usage questions and discussions. The issue tracker is used for actionable items such as feature proposal discussions, roadmaps, and bug tracking. You are always welcome to post on the forum first :)

Issues that are inactive for a period of time may get closed. We adopt this policy so that we won't lose track of actionable issues that may fall to the bottom of the pile. Feel free to open a new one if you feel there is an additional problem that needs attention after an old one gets closed.

For bug reports, to help the developers act on the issue, please include a description of your environment, preferably with a minimal script to reproduce the problem.

For feature proposals, list clear, small actionable items so we can track the progress of the change.

Wheest commented 4 years ago

Hi there, thanks for the interest.

This codebase has the TVM implementation of the GSPC algorithm.

For PyTorch, we just use the stock implementation. You can find all of our models in ONNX format here.
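
In case it is unclear, the "stock implementation" just means PyTorch's built-in grouped convolution, i.e. nn.Conv2d with its groups argument. A minimal sketch (the channel counts, group size, and input shape below are made up for illustration, not taken from our models):

import torch
import torch.nn as nn

# Hypothetical sizes: 64 input channels and 128 output channels split into 8 groups,
# so each group maps 8 input channels to 16 output channels.
grouped_conv = nn.Conv2d(in_channels=64, out_channels=128,
                         kernel_size=3, padding=1, groups=8)

x = torch.rand(1, 64, 56, 56)   # NCHW input with an arbitrary spatial size
y = grouped_conv(x)
print(y.shape)                  # torch.Size([1, 128, 56, 56])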

You could have inference code that looks something like this (feeding in a dictionary of PyTorch models):

import time

import numpy as np
import torch


def pytorch_bench(models, input_shape, runs=100):
    """Time a dictionary of PyTorch models on random input and return per-model stats in ms."""
    model_times = {}
    for j, (name, model) in enumerate(models.items(), start=1):
        print(f"{name} ({j}/{len(models)})")
        times = []
        model.eval()
        with torch.no_grad():  # inference only, no need to track gradients
            for _ in range(runs):
                x = torch.rand(input_shape)  # plain tensor; torch.autograd.Variable is deprecated
                start = time.time()
                model(x)
                times.append(time.time() - start)

        med_ms = np.median(times) * 1000
        model_times[name] = {
            'median_inf_time': med_ms,
            'mean_inf_time': np.mean(times) * 1000,
            'std': np.std(times) * 1000,
            'runs': runs,
        }
        print(f"{name}: {med_ms:.2f}ms (std: {np.std(times) * 1000:.2f})")
    return model_times
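
As a usage sketch (the model names and input shape here are placeholders rather than the networks from this work):

import torchvision.models as tv_models

# Hypothetical call: benchmark two torchvision models at a 224x224 input.
models = {
    'resnet18': tv_models.resnet18(),
    'mobilenet_v2': tv_models.mobilenet_v2(),
}
results = pytorch_bench(models, input_shape=(1, 3, 224, 224), runs=10)
print(results['resnet18']['median_inf_time'])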

You can see a full PyTorch inference benchmark workflow here.