DeepRNN / image_captioning

Tensorflow implementation of "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"

AttributeError. Dataset attribute has no attribute: #41

Open AnwarUllahKhan opened 6 years ago

AnwarUllahKhan commented 6 years ago

def train(self, sess, train_data):
    """ Train the model using the COCO train2014 data. """
    print("Training the model...")
    config = self.config

    if not os.path.exists(config.summary_dir):
        os.mkdir(config.summary_dir)
    train_writer = tf.summary.FileWriter(config.summary_dir, sess.graph)

    for _ in tqdm(list(range(config.num_epochs)), desc='epoch'):
        for _ in tqdm(list(range(train_data.num_batches)), desc='batch'):
            batch = train_data.__next_batch()
            image_files, sentences, masks = batch
            images = self.image_loader.load_images('./train/images')
            feed_dict = {self.images: images,
                         self.sentences: sentences,
                         self.masks: masks}
            _, summary, global_step = sess.run([self.opt_op,
                                                self.summary,
                                                self.global_step],
                                               feed_dict=feed_dict)
            if (global_step + 1) % config.save_period == 0:
                self.save()
            train_writer.add_summary(summary, global_step)
        train_data.reset()

    self.save()
    train_writer.close()
    print("Training complete.")
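For reference, the loop above depends on only three members of `train_data`: `num_batches`, a method that returns the next `(image_files, sentences, masks)` tuple, and `reset()`. The sketch below is a minimal, hypothetical stand-in that satisfies that interface; it is not the repo's actual DataSet class, and all names in it are illustrative.

```python
import numpy as np

class MinimalDataSet:
    """Hypothetical stand-in exposing the interface train() expects."""

    def __init__(self, image_files, sentences, masks, batch_size=32):
        self.image_files = np.asarray(image_files)
        self.sentences = np.asarray(sentences)
        self.masks = np.asarray(masks)
        self.batch_size = batch_size
        # Batches per epoch, rounding up so the last partial batch is kept.
        self.num_batches = int(np.ceil(len(self.image_files) / batch_size))
        self.current_idx = 0

    def next_batch(self):
        # Return the next (image_files, sentences, masks) slice.
        start = self.current_idx * self.batch_size
        end = start + self.batch_size
        self.current_idx += 1
        return (self.image_files[start:end],
                self.sentences[start:end],
                self.masks[start:end])

    def reset(self):
        # Rewind to the first batch; train() calls this after every epoch.
        self.current_idx = 0
```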

File "H:\First Neural Network\image_captioning-master\base_model.py", line 44, in train batch = train_data.nextbatch()

AttributeError: 'DataSet' object has no attribute '_BaseModelnextbatch'
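The `_BaseModel` prefix in the error message points at Python name mangling rather than a missing dataset method: any attribute written with two leading underscores and looked up inside a class body (here, inside `BaseModel.train`) is rewritten to `_ClassName__attribute` before the lookup happens. A minimal, self-contained reproduction (the class and method names are illustrative, not taken from the repo):

```python
class DataSet:
    def next_batch(self):
        return "batch"

class BaseModel:
    def train(self, train_data):
        # Written inside the BaseModel class body, '__next_batch' is
        # compiled as '_BaseModel__next_batch', which DataSet never defines.
        return train_data.__next_batch()

BaseModel().train(DataSet())
# AttributeError: 'DataSet' object has no attribute '_BaseModel__next_batch'
```

If that is what is happening here, calling the dataset's public method instead, `batch = train_data.next_batch()`, avoids the mangled lookup.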