Hi,
What version are you using? Run pip show HugsVision and paste the output here.
Hi there.
Sorry, I forgot to mention the version. Here's the screenshot:
I am running into the same error.
I tried from a new environment:
conda create -y --name issue_41 python=3.6
conda activate issue_41
pip install hugsvision
cd "./binary_classification/"
pip install -r requirements.txt
pip install pandas
pip install seaborn
The code:
import argparse
from hugsvision.dataio.VisionDataset import VisionDataset
from hugsvision.nnet.VisionClassifierTrainer import VisionClassifierTrainer
from transformers import ViTFeatureExtractor, ViTForImageClassification
import pandas as pd
import seaborn as sn
from sklearn.metrics import confusion_matrix
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
parser = argparse.ArgumentParser(description='Image classifier')
parser.add_argument('--name', type=str, default="MyVitModel", help='The name of the model')
parser.add_argument('--imgs', type=str, default="/mnt/d/Projects/Datasets/IMAGE/Pneumothorax Binary Classification task/data/", help='The directory of the input images')
parser.add_argument('--output', type=str, default="./out/", help='The output directory of the model')
parser.add_argument('--epochs', type=int, default=1, help='Number of Epochs')
args = parser.parse_args()
# Load the dataset
train, test, id2label, label2id = VisionDataset.fromImageFolder(
    args.imgs,
    test_ratio = 0.15,
    balanced = False,
    augmentation = False,
)
huggingface_model = 'google/vit-base-patch16-224-in21k'
# Train the model
trainer = VisionClassifierTrainer(
    model_name = args.name,
    train = train,
    test = test,
    output_dir = args.output,
    max_epochs = args.epochs,
    cores = 4,
    batch_size = 12,
    model = ViTForImageClassification.from_pretrained(
        huggingface_model,
        num_labels = len(label2id),
        label2id = label2id,
        id2label = id2label,
    ),
    feature_extractor = ViTFeatureExtractor.from_pretrained(
        huggingface_model,
    ),
)
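# Evaluate on the test split and save the confusion matrix as a heatmap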
ref, hyp = trainer.evaluate_f1_score()
cm = confusion_matrix(ref, hyp)
labels = list(label2id.keys())
df_cm = pd.DataFrame(cm, index = labels, columns = labels)
plt.figure(figsize = (10,7))
sn.heatmap(df_cm, annot=True, annot_kws={"size": 8}, fmt="")
plt.savefig("./imgs/" + str(args.name) + "_conf_matrix_" + str(args.epochs) + ".jpg", bbox_inches = 'tight')
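For reference, a typical way to launch it looks like this (the script name train_classifier.py and the image path are just placeholders, adapt them to your setup):
python train_classifier.py --imgs ./data/ --name MyVitModel --output ./out/ --epochs 1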
It works perfectly:
My environment:
hugsvision - Version: 0.75.3
transformers - Version: 4.18.0
python - Python 3.6.13 :: Anaconda, Inc.
Is your environment created with Python 3.6?
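You can check it with:
python --version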
I was working with 3.7. Also, this error didn't occur until yesterday; as soon as I tried it again, this happened.
I am working on Colab, @qanastek. By default, hugsvision downloads transformers==4.19.0. Should I change it? If yes, where should I specify it?
I tried to force 4.19.0, but my environment doesn't want to install it. Can you try transformers==4.18.0, like you mentioned?
Should I pin it in requirements? Because if I just do pip install hugsvision, it pulls in transformers 4.19.0.
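For example, would it be enough to force the version right after the install, something like this (just a sketch of what I mean):
pip install hugsvision
pip install transformers==4.18.0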
It seems to be caused by Python 3.7.
I can try a Python 3.7 setup locally.
Ok, sure. Honestly, I panicked today (heheh) because the code was running smoothly until yesterday and then this suddenly happened.
My py3.7 setup gives me the following dependencies:
hugsvision - Version: 0.75.3
transformers - Version: 4.19.0
Python 3.7.13
And it crashed:
Got it. Then I will change the environment and get it running. Thank you so much for helping me out; your library is a real boon.
Thank you very much for your support!
I've been trying to fine-tune the vision transformer on a custom dataset. I followed the steps from one of the tutorial notebooks and ran into the following error:
AttributeError: 'tuple' object has no attribute 'keys'
I thought I did something wrong, so I decided to try the demo tutorial, but the same error shows up. It seems like a bug. Please see the attached screenshot.