aime-team / pytorch-benchmarks

A benchmark framework for PyTorch
MIT License

Bugs in utils/utils.py #3

Open zkghost opened 1 month ago

zkghost commented 1 month ago

Running the benchmark, I've hit failures in two places:

utils/utils.py", line 685, in remove_punc
    exclude = set(string.punctuation)
                  ^^^^^^
NameError: name 'string' is not defined
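Presumably the fix for this first failure is just adding the missing stdlib import at the top of utils/utils.py. A minimal, self-contained sketch of what `remove_punc` likely does (only the `exclude = set(string.punctuation)` line is from the traceback; the rest is an assumption):

```python
import string  # the import whose absence triggers the NameError


def remove_punc(text):
    # Build the set of ASCII punctuation characters and strip them out.
    # Body beyond the quoted line is hypothetical.
    exclude = set(string.punctuation)
    return "".join(ch for ch in text if ch not in exclude)


print(remove_punc("Hello, world!"))  # -> Hello world
```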

and here:

    def evaluate_step(self, model_output, example_indices):
        for i, example_index in enumerate(example_indices[0]):
            if not self.args.synthetic_data:
                eval_features = self.data.preprocessed_data.eval_features[example_index.item()]
                unique_id = torch.tensor(eval_features.unique_id).to(args.device)
            else:
                unique_id = torch.randint(low=0, high=self.args.num_synth_data, size=[1], dtype=torch.long).to(args.device)

Both references to args.device should be prefixed with self.

Am I doing something wrong? How were these ever run successfully?

carlovogel commented 1 month ago

Hi zkghost, you're right, you did indeed find a bug. We hardly used our tool for evaluation benchmarks, especially for the BERT models, as we've been mainly focused on training benchmarks. Sorry for the inconvenience, and thank you for pointing it out. We've just committed a bugfix that resolves these issues.

Best regards

Carlo Vogel Software Developer @ AIME - HPC Cloud & Hardware