marvision-ai opened this issue 4 years ago
Hi again,
I did something similar; what I had to do was write a parser/writer that converts the json into the corresponding objects in the package (in my case, bounding boxes represented as x1, x2, y1, y2 into BoundingBox objects). Then, after the augmentation, I would write the augmented data back to disk.
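A minimal sketch of what that converter looked like; the json keys x1, y1, x2, y2, label are hypothetical placeholders for whatever your schema uses:

```python
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage

def boxes_from_json(entries, img_shape):
    # `entries`: list of dicts with hypothetical keys x1, y1, x2, y2, label
    boxes = [
        BoundingBox(x1=e["x1"], y1=e["y1"], x2=e["x2"], y2=e["y2"],
                    label=e["label"])
        for e in entries
    ]
    # img_shape is the (height, width, channels) of the underlying image
    return BoundingBoxesOnImage(boxes, shape=img_shape)
```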
Let me know if you have an example of one of the json entries that you would like help with.
Best, Sebastian
Hi @jspaezp ,
Good to see you again :smile: . I did the same as you for object detection and found it not too hard. Segmentation seems like a different beast. I was hoping not to have to reinvent the wheel... but here we are.
I am currently training instance segmentation networks (MaskRCNN-resnet50) on a small dataset. This dataset is labelled with the program labelme.
If I use the following example image of a quokka and then annotate it, I get the following.
When saved, it will have a json file with polygons. I have attached a zip that has the original image and the json annotation.
Once I have this augmentation pipeline created, I train a new network with the augmented images.
Let me know if you have other questions.
Hey there, sorry for the late reply.
I noticed that the easiest way to handle this segmap is to use the Polygon augmentable. This function seems to work fine for me:
```python
def make_polys(json_file):
    with open(json_file, "r") as js:
        json_data = json.load(js)

    polys = []
    for shape in json_data['shapes']:
        # This assert might be overkill, but better safe than sorry ...
        assert shape['shape_type'] == "polygon"
        polys.append(Polygon(shape['points'], label=shape['label']))

    img_shape = (json_data['imageHeight'], json_data['imageWidth'], 3)
    polys_oi = PolygonsOnImage(polys, shape=img_shape)
    return polys_oi
```
and using it in an example:
```python
import json
import numpy as np
import cv2
import imgaug as ia
from imgaug.augmentables.polys import Polygon, PolygonsOnImage

def make_polys(json_file):
    with open(json_file, "r") as js:
        json_data = json.load(js)

    polys = []
    for shape in json_data['shapes']:
        # This assert might be overkill, but better safe than sorry ...
        assert shape['shape_type'] == "polygon"
        polys.append(Polygon(shape['points'], label=shape['label']))

    img_shape = (json_data['imageHeight'], json_data['imageWidth'], 3)
    polys_oi = PolygonsOnImage(polys, shape=img_shape)
    return polys_oi

quokka_img = cv2.imread("./quokka/quokka.jpg")
polys_oi = make_polys("./quokka/quokka.json")

# This is just to plot it ...
overlay_quokka = polys_oi.draw_on_image(quokka_img)
cv2.imwrite("overlaid_quokka.png", overlay_quokka)

for i, p in enumerate(polys_oi):
    overlay_quokka = p.draw_on_image(quokka_img, color=(0, 0, 255))
    cv2.imwrite(f"over_p{i}_quokka.png", overlay_quokka)
```
This would produce these 6 images (I made a montage for ease of upload):
Hope it helps!
Best, Sebastian
@jspaezp That's great!
One further question: to then apply augmentation, would I just augment the base image and then loop through each polygon and augment it the same way?
Hey @marvision-ai
I think the best way is to pass the image and the polygons to the augmenter at the same time. Here is a small example that would run after the code in the previous section.
```python
from imgaug import augmenters as iaa

my_augmenter = iaa.Sequential([
    iaa.GaussianBlur((0.1, 5)),
    iaa.Fliplr(0.5),
    iaa.Flipud(0.5),
    iaa.Rotate((-45, 45))])

# If you pass both arguments, it returns 2 elements:
# - the augmented image
# - the augmented polygons
augmented = my_augmenter(image=quokka_img, polygons=polys_oi)
[type(x) for x in augmented]
# [<class 'numpy.ndarray'>, <class 'imgaug.augmentables.polys.PolygonsOnImage'>]

# So you can make a bunch of augmented image/polygon pairs
augmented_list = [my_augmenter(image=quokka_img, polygons=polys_oi) for _ in range(10)]

# Now we just make the overlay for viz purposes
overlaid_images = [polys.draw_on_image(img) for img, polys in augmented_list]
cv2.imwrite("augmented_quokka_polys.png", cv2.hconcat(overlaid_images))
```
and as usual... the output of the overlays for visualization.
@jspaezp Alrighty, I think I get it. I will certainly try it tomorrow and let you know if I run into any other weird issues :smile:
Thanks a bunch! Your help is always well received and super clear. 👍
@jspaezp I have a similar issue. Your code already helped me augment my images and polygons.
How would I now save the augmented polygons back to the labelme json annotation format?
(Saving directly into COCO would achieve a similar result.)
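For what it's worth, a minimal sketch of the reverse direction (the counterpart to `make_polys` above). The field set written out is an assumption based on the labelme files shown in this thread; real labelme json also carries extra keys such as `version`, and `imageData` may need to be populated:

```python
import json

def polys_to_labelme(polys_oi, image_path, out_path):
    # Write a PolygonsOnImage back into a minimal labelme-style dict.
    # NOTE: this is a sketch; the exact set of labelme fields is an assumption.
    shapes = [
        {
            "label": p.label,
            # p.exterior is an (N, 2) array of xy polygon points
            "points": [[float(x), float(y)] for x, y in p.exterior],
            "group_id": None,
            "shape_type": "polygon",
            "flags": {},
        }
        for p in polys_oi.polygons
    ]
    data = {
        "shapes": shapes,
        "imagePath": image_path,
        "imageData": None,
        "imageHeight": int(polys_oi.shape[0]),
        "imageWidth": int(polys_oi.shape[1]),
    }
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)
```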
Ended up using https://github.com/joheras/CLoDSA , which allows data augmentation on polygon-annotated images and directly returns a COCO-format json.
> Ended up using https://github.com/joheras/CLoDSA , which allows data augmentation on polygon-annotated images and directly returns a COCO-format json.
There is nothing at your provided link. Could you please share info on how to do that? I'm facing the exact same problem; I would appreciate your help.
@Shawn94 I've corrected the link. I used the first tutorial, for instance segmentation. Let me know if it works.
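For context, the flow in that tutorial looks roughly like the following. This is reconstructed from the CLoDSA examples, so treat the factory arguments as approximate and check the notebook for the exact values:

```python
from clodsa.augmentors.augmentorFactory import createAugmentor
from clodsa.transformers.transformerFactory import transformerGenerator
from clodsa.techniques.techniqueFactory import createTechnique

# Problem/annotation/output modes as used in the instance-segmentation
# COCO notebook; "input_dir" holds the images plus annotations.json.
augmentor = createAugmentor("instance_segmentation", "coco", "coco", "linear",
                            "input_dir", {"outputPath": "output_dir"})
transformer = transformerGenerator("instance_segmentation")

rotate = createTechnique("rotate", {"angle": 90})
augmentor.addTransformer(transformer(rotate))
flip = createTechnique("flip", {"flip": 1})
augmentor.addTransformer(transformer(flip))

augmentor.applyAugmentation()
```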
I faced a small issue: the library expects the COCO dictionary to include an 'info' and a 'licenses' key. You can manually edit those in (I don't think it matters what values you use):
path = "PATH"
f = open(path, "r")
contents = f.readlines()
f.close()
text = '''"info": { \n
"description": "XXX",\n
"url": "XXX",\n
"version": "0.1.0",\n
"year": 2021,\n
"contributor": "Jazzzzie",\n
"date_created": "2021-03-31 03:25:06.134418"\n
},\n
"licenses": [\n
{\n
"id": 1,\n
"name": "Attribution-NonCommercial-ShareAlike License",\n
"url": "http://creativecommons.org/licenses/by-nc-sa/2.0/"\n
}\n
],\n'''
contents.insert(1, text)
f = open(path, "w")
contents = "".join(contents)
f.write(contents)
f.close()
Hi @Jazzzzie, have you tried getting the annotations for each image? I tried the notebook tutorial, but it gives the entire COCO format with the bounding boxes and the polygons.
For each frame I need an individual json file that can be uploaded into the labelme software for further modification. Have you tried that? If so, could you send me the piece of code?
Thanks
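No one posted code for this, but a rough sketch of splitting a COCO file into one labelme-style json per image could look like the following. It assumes polygon segmentations (not RLE masks), and the output field set is the same minimal assumption as in the sketch earlier in the thread:

```python
import json
from collections import defaultdict

def coco_to_labelme(coco_path):
    # Split one COCO annotation file into one labelme-style json per image.
    with open(coco_path) as f:
        coco = json.load(f)

    cats = {c["id"]: c["name"] for c in coco["categories"]}
    anns_by_img = defaultdict(list)
    for a in coco["annotations"]:
        anns_by_img[a["image_id"]].append(a)

    for img in coco["images"]:
        shapes = []
        for a in anns_by_img[img["id"]]:
            # Each segmentation is a flat [x1, y1, x2, y2, ...] polygon list
            for seg in a["segmentation"]:
                pts = [[seg[i], seg[i + 1]] for i in range(0, len(seg), 2)]
                shapes.append({"label": cats[a["category_id"]],
                               "points": pts, "group_id": None,
                               "shape_type": "polygon", "flags": {}})
        out = {"shapes": shapes, "imagePath": img["file_name"],
               "imageData": None, "imageHeight": img["height"],
               "imageWidth": img["width"]}
        with open(img["file_name"].rsplit(".", 1)[0] + ".json", "w") as f:
            json.dump(out, f, indent=2)
```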
Does anyone know how to save the polygons from the imgaug segmentation back to the points system that labelme expects?
How would I now be able to save the augmented polygons back to the labelme json annotation format? Have you solved the problem, @marvision-ai @Jazzzzie @Latha-13?
Thank you for your code. Do multiple labels get output as one? @jspaezp
> Hey @marvision-ai
> I think the best way is to pass the image and the polygons to the augmenter at the same time. [...]
https://github.com/rdt2992/Resnet/blob/master/data_augmentation_final
```python
from imgaug import augmenters as iaa
import numpy as np
import cv2
import os
import json
import copy
import math

iname = []
folder = 'C:\\Users\\com\\PycharmProjects\\untitled\\test\\auto_test\\aug'


def load_images_from_folder(folder):
    images = []
    global iname
    for root, dirs, fname in os.walk(folder):
        for j in fname:
            if 'jpg' in j:
                images.append(cv2.imread(os.path.join(root, j)))
                iname.append(j)
    return images


def write_images(images, iname, name, wpath):
    for i in range(len(images)):
        cv2.imwrite(wpath + "%d_%s" % (name, iname[i]), images[i])


def augmentations1(images):
    # Blur and brightness variants. The original had k=(2.7), which is not
    # a valid kernel size; k=(2, 7) is assumed here.
    seq1 = iaa.Sequential([iaa.AverageBlur(k=(2, 7)), iaa.MedianBlur(k=(3, 11))])
    # seq2 = iaa.AddToHueAndSaturation(15)
    # seq3 = iaa.Dropout((0.05, 0.1), per_channel=0.5)
    seq4 = iaa.Sequential([iaa.Add((-15, 15)), iaa.Multiply((0.3, 1.5))])
    # seq5 = iaa.AddToHueAndSaturation(-15)
    img1 = seq1.augment_images(images)
    img4 = seq4.augment_images(images)
    return [img1, img4]


def rotate(images):
    img1 = iaa.Rot90(1, False).augment_images(images)
    img2 = iaa.Rot90(2, False).augment_images(images)
    img3 = iaa.Rot90(3, False).augment_images(images)
    return [img1, img2, img3]


def augResize(images):
    img1 = iaa.Affine(scale=(0.5, 0.5)).augment_images(images)
    img2 = iaa.Affine(scale=(1.5, 1.5)).augment_images(images)
    return [img1, img2]


def flip(images):
    img1 = iaa.Fliplr(1.0).augment_images(images)
    img2 = iaa.Flipud(1.0).augment_images(images)
    return [img1, img2]


def jsonall(folder):
    # Duplicate each labelme json for the filter-augmented images; the points
    # do not move, only imagePath gets the same prefix write_images() used.
    # Only the '2_' (brightness) variant is active here; the '0_', '1_', '3_'
    # and '4_' variants were commented out in the original script.
    for root, dirs, fname in os.walk(folder):
        for j in fname:
            if 'json' in j:
                with open(os.path.join(folder, j), 'r') as f2:
                    data = json.load(f2)
                data2 = copy.deepcopy(data)
                data2['imagePath'] = '2_' + data['imagePath']
                out = os.path.join(folder, data2['imagePath'].replace('jpg', 'json'))
                with open(out, 'w') as f:
                    json.dump(data2, f)


def json_rotate90(folder):
    # Rotate the labelme polygon points by 90/180/270 degrees about the
    # origin, then translate back into the frame. The offsets assume
    # 1280x720 (width x height) images, as the original hard-coded values did.
    w, h = 1280, 720
    variants = [('90_', math.pi / 2, h, 0),
                ('180_', math.pi, w, h),
                ('270_', 3 * math.pi / 2, 0, w)]
    for root, dirs, fname in os.walk(folder):
        for j in fname:
            if 'json' in j:
                with open(os.path.join(folder, j), 'r') as f2:
                    data = json.load(f2)
                for prefix, angle, dx, dy in variants:
                    data2 = copy.deepcopy(data)
                    data2['imagePath'] = prefix + data['imagePath']
                    for shape, shape2 in zip(data['shapes'], data2['shapes']):
                        for p, p2 in zip(shape['points'], shape2['points']):
                            p2[0] = math.cos(angle) * p[0] - math.sin(angle) * p[1] + dx
                            p2[1] = math.sin(angle) * p[0] + math.cos(angle) * p[1] + dy
                    out = os.path.join(folder, data2['imagePath'].replace('jpg', 'json'))
                    with open(out, 'w') as f:
                        json.dump(data2, f)


def json_rescale(folder, scale1, scale2):
    # Scale the labelme polygon points about the image centre
    # (cx, cy) = (640, 360), i.e. again assuming 1280x720 images.
    cx, cy = 640, 360
    for root, dirs, fname in os.walk(folder):
        for j in fname:
            if 'json' in j:
                with open(os.path.join(folder, j), 'r') as f2:
                    data = json.load(f2)
                for scale in (scale1, scale2):
                    num = int(scale * 100)
                    data2 = copy.deepcopy(data)
                    data2['imagePath'] = str(num) + '_' + data['imagePath']
                    for shape, shape2 in zip(data['shapes'], data2['shapes']):
                        for p, p2 in zip(shape['points'], shape2['points']):
                            p2[0] = p[0] * scale + (cx - scale * cx)
                            p2[1] = p[1] * scale + (cy - scale * cy)
                    out = os.path.join(folder, data2['imagePath'].replace('jpg', 'json'))
                    with open(out, 'w') as f:
                        json.dump(data2, f)


def json_flip(folder):
    # Mirror the labelme polygon points about the image centre (assumes
    # 1280x720 images): '46_' = left-right, '28_' = up-down, matching the
    # prefixes used by aug_flip() below.
    cx, cy = 640, 360
    for root, dirs, fname in os.walk(folder):
        for j in fname:
            if 'json' in j:
                with open(os.path.join(folder, j), 'r') as f2:
                    data = json.load(f2)
                for prefix, axis, c in (('46_', 0, cx), ('28_', 1, cy)):
                    data2 = copy.deepcopy(data)
                    data2['imagePath'] = prefix + data['imagePath']
                    for shape, shape2 in zip(data['shapes'], data2['shapes']):
                        for p, p2 in zip(shape['points'], shape2['points']):
                            p2[axis] = p[axis] - 2 * (p[axis] - c)
                    out = os.path.join(folder, data2['imagePath'].replace('jpg', 'json'))
                    with open(out, 'w') as f:
                        json.dump(data2, f)


def aug_filter(photos1, wpath, iname):
    photo_aug = augmentations1(photos1)
    # write_images(photo_aug[0], iname, 0, wpath)  # blur
    write_images(photo_aug[1], iname, 2, wpath)  # brightness ('light')
    jsonall(folder)


def aug_rotate(photos1, wpath, iname):
    photo_aug = rotate(photos1)
    write_images(photo_aug[0], iname, 90, wpath)   # 90
    write_images(photo_aug[1], iname, 180, wpath)  # 180
    write_images(photo_aug[2], iname, 270, wpath)  # 270
    json_rotate90(folder)


def aug_rescale(photos1, wpath, iname, scale1, scale2):
    photo_aug = augResize(photos1)
    write_images(photo_aug[0], iname, 50, wpath)   # small
    write_images(photo_aug[1], iname, 130, wpath)  # large
    json_rescale(folder, scale1, scale2)


def aug_flip(photos1, wpath, iname):
    photo_aug = flip(photos1)
    write_images(photo_aug[0], iname, 46, wpath)  # left-right
    write_images(photo_aug[1], iname, 28, wpath)  # up-down
    json_flip(folder)


imgpath = folder
wpath = folder + "\\"
photos1 = load_images_from_folder(imgpath)

# aug_filter(photos1, wpath, iname)
# aug_rotate(photos1, wpath, iname)  # only works in 90-degree increments
# aug_rescale(photos1, wpath, iname, 0.5, 1.3)
aug_flip(photos1, wpath, iname)
```
@jspaezp Can this code be repaired?
- imgaug-based: https://github.com/guchengxi1994/mask2json (see https://github.com/guchengxi1994/mask2json/blob/master/test_scripts/test_imgAug.py)
- opencv-based: https://github.com/pureyangcry/tools (see https://github.com/pureyangcry/tools/blob/master/DataAugForObjectSegmentation/DataAugmentforLabelMe.py)
@monkeycc would you mind elaborating on your request?
Using imgaug for data augmentation and outputting LabelMe JSON would enable offline data augmentation.
I think you are very technical. If you have time and are interested, would you consider creating a new repository, Imgaug_LabelMe_Augmentation? I could advertise it. @jspaezp
The final effect looks like this:
Has anyone tried to augment segmentation json annotations from labelme?
If so, could you please show how this was done?
Thanks!