joheras / CLoDSA

How to augment an already annotated dataset #11

Closed VincentDuf closed 4 years ago

VincentDuf commented 4 years ago

Hi, I have a little trouble understanding how to augment my dataset. It is clear how to use the classes to create the augmentor, but I can't find anywhere how to apply the process to a dataset that has already been annotated (I have the images and a JSON file containing the boxes in COCO format). I definitely don't want to do another round of labelling because the process is only applied to the images and not to the JSON file.

Thanks

joheras commented 4 years ago

Hi, are you following the instructions provided in https://github.com/joheras/CLoDSA/blob/master/notebooks/CLODSA_Instance_Segmentation.ipynb?

You can interact with that notebook in https://colab.research.google.com/github/joheras/CLoDSA/blob/master/notebooks/CLODSA_Instance_Segmentation.ipynb

Do you have a single file with the annotations for all the images, or do you have an annotation file per image? Currently, CLoDSA only supports the former, so that might be the problem.

Best

VincentDuf commented 4 years ago

Hi, I have a single JSON file for all the images; it is in COCO format, so that should work, I guess.

If I understand correctly, I have to create a new JSON file containing the list of transformations to be applied to the images, and the "input_path" parameter has to point to a directory containing the images and the annotation JSON file? I also see that you use XML files; since there are several examples and ways to do it, it's a little confusing.

Thanks

joheras commented 4 years ago

Hi,

The link that I provided in my previous answer explains how to work with a JSON file in the COCO format.

The input folder must contain both the images and the JSON file with the annotations.

You can create a JSON file with the list of transformations, but it is easier to follow the steps provided in the notebook that I sent you.

Best, Jónathan

VincentDuf commented 4 years ago

I didn't follow that one because it seems to be about segmentation; my JSON file contains bbox annotations, not segmentation annotations. Does it work for bbox annotations? I really don't want to deal with segmentation annotations in COCO format because my training process needs bbox annotations. Thanks

VincentDuf commented 4 years ago

OK, from what I can tell, it seems it's not possible to use "coco" as a format for a PROBLEM defined as "detection". I suppose there is a way to implement the "coco" format so it can be used with "detection" problems, right?

joheras commented 4 years ago

I see now, so you are using the COCO format for detection. That feature is not implemented in CLoDSA, but it can easily be programmed. I will hopefully have it ready in a couple of days and will let you know.

Just to be sure: boxes in COCO format (bbox annotations) are a list with four elements that indicate the top-left corner and the bottom-right corner, am I right?

Best,

Jónathan

joheras commented 4 years ago

Yes, that is the problem. I will include that feature ASAP.

Jónathan

VincentDuf commented 4 years ago

Alright, we finally get it!

Here is a link to the official COCO website, which includes a description of their format: http://cocodataset.org/#format-data

Thank you very much for taking care of this so quickly.
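
Note: in the COCO detection format, each bbox is stored as [x, y, width, height], where x and y give the top-left corner, rather than as two corner points. A minimal annotation entry, written here as a Python dict with hypothetical values purely for illustration, looks roughly like this:

annotation = {
    "id": 1,
    "image_id": 1,
    "category_id": 1,
    "bbox": [100, 50, 40, 30],  # x, y of the top-left corner, then width and height
    "area": 1200,               # width * height for a box annotation
    "iscrowd": 0
}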

joheras commented 4 years ago

Hi,

I just checked, and the current implementation of CLoDSA can already be employed for detection using the COCO format.

This is explained in this new notebook: https://colab.research.google.com/github/joheras/CLoDSA/blob/master/notebooks/CLODSA_COCO_Detection.ipynb

Let me know if you have any questions.

Best, Jónathan

VincentDuf commented 4 years ago

Hi, thanks for the promptness, but I already tested putting PROBLEM = "detection" along with ANNOTATION_MODE = "coco", and it tells me there is nothing in the "segmentation" field of my JSON file, which is logical as my JSON file only contains bbox annotations.

What I need is to put PROBLEM = "detection" and ANNOTATION_MODE = "coco".

Thanks

joheras commented 4 years ago

Hi,

Just put PROBLEM = "instance_segmentation" and it should work.

Does your JSON file not include the annotations for instance segmentation? If that is the case, I need to change the code; otherwise, just putting PROBLEM = "instance_segmentation" should work.

Best

VincentDuf commented 4 years ago

Hey, no, and that's the problem I'm talking about: my segmentation annotations are empty, there is nothing in them; I only have bbox annotations. As I said, I already tried what you advised. That's why the "instance_segmentation" mode doesn't work for me; it needs to be implemented.

Thanks

joheras commented 4 years ago

Hi,

I finally understood the problem, sorry about that.

I have uploaded a new version of CLoDSA with the new functionality. You can try it in the following notebook: https://github.com/joheras/CLoDSA/blob/master/notebooks/CLODSA_COCO_Detection.ipynb

Let me know if that works for you or if there is any problem with it.

Best, Jónathan

VincentDuf commented 4 years ago

Thanks for the quick changes. It seems I have issues with certain techniques; here is my script:

from clodsa.augmentors.augmentorFactory import createAugmentor
from clodsa.transformers.transformerFactory import transformerGenerator
from clodsa.techniques.techniqueFactory import createTechnique
import xml.etree.ElementTree as ET
import cv2

PROBLEM = 'detection'
ANNOTATION_MODE = 'coco'
INPUT_PATH = 'train'
GENERATION_MODE = 'linear'
OUTPUT_MODE = 'coco'
OUTPUT_PATH = 'augment_train'

augmentor = createAugmentor(PROBLEM,ANNOTATION_MODE,OUTPUT_MODE,GENERATION_MODE,INPUT_PATH,{"outputPath":OUTPUT_PATH})

transformer = transformerGenerator(PROBLEM)

hflip = createTechnique("flip",{'flip':1})
hue = createTechnique("raise_hue",{"power":2})
sat = createTechnique("raise_saturation",{"power":2})
val = createTechnique("raise_value",{"power":2})
hue_less = createTechnique("raise_hue",{"power":0.5})
sat_less = createTechnique("raise_saturation",{"power":0.5})
val_less = createTechnique("raise_value",{"power":0.5})
avg_blur = createTechnique("average_blurring",{"kernel":5})
none = createTechnique("none",{})
gauss_noise = createTechnique('gaussian_noise',{"mean":0,"sigma":1})

list_transform = [hflip,hue,sat,val,hue_less,sat_less,val_less,avg_blur,none,gauss_noise]

for x in list_transform:
    augmentor.addTransformer(transformer(x))

augmentor.applyAugmentation()

The error is raised when the image shape length is different from 3, but I just checked some images and they have 3 channels... is there something wrong in my code?

[EDIT]: does your program create a JSON file with updated annotations for the augmented images, or does it only apply the augmentation techniques to the images?

Thank you

joheras commented 4 years ago

Hi,

The problem is that you are working with grayscale images, and the following techniques can only be applied to colour images:

hue = createTechnique("raise_hue",{"power":2})
sat = createTechnique("raise_saturation",{"power":2})
val = createTechnique("raise_value",{"power":2})
hue_less = createTechnique("raise_hue",{"power":0.5})
sat_less = createTechnique("raise_saturation",{"power":0.5})
val_less = createTechnique("raise_value",{"power":0.5})

Best, Jónathan
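
Note: if some of the images do turn out to be single-channel, one possible workaround (a sketch, assuming the images live in the train folder used in the script above) is to rewrite them as 3-channel BGR before running the augmentation:

import cv2
from imutils import paths

# Rewrite any grayscale image as 3-channel BGR so that the colour-based
# techniques (raise_hue, raise_saturation, raise_value) can be applied.
for image_path in paths.list_images('train'):
    im = cv2.imread(image_path, cv2.IMREAD_UNCHANGED)
    if im is not None and len(im.shape) != 3:
        cv2.imwrite(image_path, cv2.cvtColor(im, cv2.COLOR_GRAY2BGR))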

VincentDuf commented 4 years ago

No, it's not; I checked, and I have 3-channel images, I'm sure of it. Actually, it worked for some images, so I probably have an issue with some of them. Thanks for the work!

joheras commented 4 years ago

If you execute the following code in the directory that contains the images, it will show you the problematic ones.

import cv2
from imutils import paths

for image in paths.list_images('.'):
    im = cv2.imread(image)
    if len(im.shape) != 3:
        print(image)

Best, Jónathan

VincentDuf commented 4 years ago

Hi joheras, that's not the problem; I just checked, and my images are all color images. I'll keep investigating, and thank you again for the work!

joheras commented 4 years ago

Let me know if I can help you further. I will close this issue; you can contact me at joheras@gmail.com or open a new issue if needed. Best, Jónathan

VincentDuf commented 4 years ago

Here is the recurring error I get:

Traceback (most recent call last):
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/joblib/externals/loky/process_executor.py", line 418, in _process_worker
    r = call_item()
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/joblib/externals/loky/process_executor.py", line 272, in __call__
    return self.fn(*self.args, **self.kwargs)
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/joblib/_parallel_backends.py", line 608, in __call__
    return self.func(*args, **kwargs)
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/joblib/parallel.py", line 256, in __call__
    for func, args, kwargs in self.items]
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/joblib/parallel.py", line 256, in <listcomp>
    for func, args, kwargs in self.items]
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/clodsa/augmentors/cocoLinearDetectionAugmentor.py", line 22, in readAndGenerateInstanceSegmentation
    (newimage, newboxes) = transformer.transform(image, boxes,True)
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/clodsa/transformers/transformerForImageDetection.py", line 15, in transform
    newBoxes = detectBoxes(image.shape[:2], boxes, self.technique)
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/clodsa/transformers/detection.py", line 26, in detectBoxes
    return [detectBox(imageShape,box,technique) for box in boxes if detectBox(imageShape,box,technique) is not None]
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/clodsa/transformers/detection.py", line 26, in <listcomp>
    return [detectBox(imageShape,box,technique) for box in boxes if detectBox(imageShape,box,technique) is not None]
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/clodsa/transformers/detection.py", line 15, in detectBox
    newmask = technique.apply(*[mask])
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/clodsa/techniques/raiseHueAugmentationTechnique.py", line 23, in apply
    raise NameError("Not applicable technique")
NameError: Not applicable technique
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "augment_test.py", line 34, in <module>
    augmentor.applyAugmentation()
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/clodsa/augmentors/cocoLinearDetectionAugmentor.py", line 65, in applyAugmentation
    for x in self.dictImages.keys())
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/joblib/parallel.py", line 1017, in __call__
    self.retrieve()
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/joblib/parallel.py", line 909, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/home/greenshield/.virtualenvs/cv/lib/python3.5/site-packages/joblib/_parallel_backends.py", line 562, in wrap_future_result
    return future.result(timeout=timeout)
  File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
    return self.__get_result()
  File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
    raise self._exception
NameError: Not applicable technique

joheras commented 4 years ago

Could you send me the dataset of images and the annotation to reproduce the problem?

VincentDuf commented 4 years ago

Hey, here is the link to get the images and the associated JSON file: https://fromsmash.com/m4PD8VTIe_-ct

I tried to use the JSON annotations along with the command clodsa file.json, but I ran into the same error about "Not applicable technique", which usually comes from "raise_hue". Here is my config file:

{
  "augmentation_techniques":[
    [
      "flip",
      {
        "flip":1
      }
    ],
    [
      "raise_hue",
      {
        "power":2
      }
    ],
    [
      "raise_saturation",
      {
        "power":2
      }
    ],
    [
      "raise_value",
      {
        "power":2
      }
    ],
    [
      "raise_hue",
      {
        "power":0.5
      }
    ],
    [
      "raise_saturation",
      {
        "power":0.5
      }
    ],
    [
      "raise_value",
      {
        "power":0.5
      }
    ],
    [
      "average_blurring",
      {
        "kernel":5
      }
    ],
    [
      "none",
      {
      }
    ]
  ],
  "generation_mode":"linear",
  "problem":"detection",
  "output_mode":"coco",
  "parameters":{
    "outputPath":"augment_train/"
  },
  "annotation_mode":"coco",
  "input_path":"train"
}

Thanks

joheras commented 4 years ago

It seems that I introduced a bug when adding the functionality to support COCO detection. Sorry about that.

If you install the new version of CLoDSA (pip install clodsa==1.2.39), the code that you sent me yesterday works properly, and the new images and annotations are generated.

Be careful: one of your images seems to be missing (315.jpg), and this produces an error when applying the process.

Hope this helps, and let me know if you have any questions or if I can help you with your project.

Best, Jónathan
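
Note: missing files such as 315.jpg can be detected up front by checking every image referenced in the COCO annotation file against the disk. This is only a sketch; the file name annotation.json and the train folder are assumptions based on the paths used earlier in this thread:

import json
import os

# Load the COCO annotation file and report referenced images that are missing on disk.
with open(os.path.join('train', 'annotation.json')) as f:
    coco = json.load(f)

for img in coco['images']:
    path = os.path.join('train', img['file_name'])
    if not os.path.exists(path):
        print('missing:', path)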

VincentDuf commented 4 years ago

Alright, thanks a lot for all the effort you put into my problem. I just tested it and it worked perfectly! Cheers

joheras commented 4 years ago

Great. Good luck with your project. Jónathan
