piercus opened 4 years ago
Hello Pierre,
please see the results below:
Tested on 10 images: using Shift+Enter to submit the result for the 10 images is not very effective.
The GIF is not displayed correctly in the annotation tool; the template errored with:
error: We found some potential errors with your usage of Crowd HTML Elements. (Element type CROWD-IMAGE-CLASSIFIER): attributes.header should NOT be shorter than 1 characters
The annotator is able to see only the first frame of a GIF.
Hello @TaniaPash here are my remarks :
output
[{"datasetObjectId":"1","consolidatedAnnotation":{"content":{"test-4-metadata":{"job-name":"labeling-job/test-4","class-map":{"1":"player","0":"ball"},"human-annotated":"yes","objects":[{"confidence":0.09},{"confidence":0.09}],"creation-date":"2019-11-21T10:02:59.134427","type":"groundtruth/object-detection"},"test-4":{"annotations":[{"class_id":0,"width":17,"top":191,"height":15,"left":980},{"class_id":1,"width":60,"top":167,"height":175,"left":237}],"image_size":[{"width":1280,"depth":3,"height":720}]}}}}]
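As a side note, a minimal Python sketch of how the consolidated output manifest line above can be inspected (the record is abridged to the fields shown above; this is an illustration, not part of the Ground Truth tooling):

```python
# One (abridged) record from the consolidated output manifest above.
record = {
    "datasetObjectId": "1",
    "consolidatedAnnotation": {"content": {
        "test-4-metadata": {
            "class-map": {"1": "player", "0": "ball"},
            "objects": [{"confidence": 0.09}, {"confidence": 0.09}],
        },
        "test-4": {
            "annotations": [
                {"class_id": 0, "width": 17, "top": 191, "height": 15, "left": 980},
                {"class_id": 1, "width": 60, "top": 167, "height": 175, "left": 237},
            ],
        },
    }},
}

content = record["consolidatedAnnotation"]["content"]
class_map = content["test-4-metadata"]["class-map"]

# The "objects" list is parallel to "annotations": one confidence per box.
for box, obj in zip(content["test-4"]["annotations"],
                    content["test-4-metadata"]["objects"]):
    label = class_map[str(box["class_id"])]  # class-map keys are strings
    print(label, obj["confidence"], box["left"], box["top"])
```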
What does confidence: 0.09 mean?
Can you try to use the crowd-classifier SageMaker custom HTML tag to annotate GIFs? The updated template looks like:
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
<crowd-form>
<crowd-classifier name="crowd-classifier" header="gifs " categories="{{ task.input.labels | to_json | escape }}">
<classification-target>
<img src="{{task.input.taskObject | grant_read_access }}">
</classification-target>
<full-instructions header="Image classification instructions">
<ol>
<li><strong>Read</strong> the task carefully and inspect the image.</li>
<li><strong>Read</strong> the options and review the examples provided to understand more about the labels.</li>
<li><strong>Choose</strong> the appropriate label that best suits the image.</li>
</ol>
</full-instructions>
<short-instructions>
<h3><span style="color: rgb(0, 138, 0);">Good example</span></h3>
<p>Enter description to explain the correct label to the workers</p>
<p><img
src="https://dfgej61ul7ygg.cloudfront.net/d042d135-8a04-4e18-96d4-d635f5e826e1/src/images/quick-instructions-example-placeholder.png"
style="max-width:100%"></p>
<h3><span style="color: rgb(230, 0, 0);">Bad example</span></h3>
<p>Enter description of an incorrect label</p>
<p><img
src="https://dfgej61ul7ygg.cloudfront.net/d042d135-8a04-4e18-96d4-d635f5e826e1/src/images/quick-instructions-example-placeholder.png"
style="max-width:100%"></p>
</short-instructions>
</crowd-classifier>
</crowd-form>
Please try to understand what confidence: 0.09 means:
https://docs.aws.amazon.com/sagemaker/latest/dg/sms-data-output.html#sms-output-confidence
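Once the confidence values are understood, a hypothetical follow-up step could be to drop low-confidence boxes before reusing them. A minimal sketch (the threshold of 0.5 and the function name are assumptions, not from the thread):

```python
def filter_boxes(annotations, objects, min_confidence=0.5):
    """Keep only boxes whose per-object confidence meets the threshold.

    `annotations` and `objects` are parallel lists, as in the Ground Truth
    output manifest above (one confidence entry per bounding box).
    """
    return [box for box, obj in zip(annotations, objects)
            if obj["confidence"] >= min_confidence]

# Example with one low-confidence box (0.09) and one higher one (assumed value).
kept = filter_boxes(
    [{"class_id": 0, "left": 980}, {"class_id": 1, "left": 237}],
    [{"confidence": 0.09}, {"confidence": 0.8}],
    min_confidence=0.5,
)
```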
Can you test whether it's possible to start bounding-box annotation from a previously annotated image (for example, the result from the misclassification scripts)?
I'm able to load the created boxes from the output folder using:
initial-value="[{% for box in task.input.manifestLine.test-4.annotations %}
  {% capture class_id %}{{ box.class_id }}{% endcapture %}
  {% assign label = task.input.manifestLine.test-4-metadata.class-map[class_id] %}
  {
    label: {{ label | to_json }},
    left: {{ box.left }},
    top: {{ box.top }},
    width: {{ box.width }},
    height: {{ box.height }},
  },
{% endfor %}
]"
task.input
{"labels":["ball","player"],"manifestLine":{"source-ref":"s3://tra-bri-bucket-rush-eu-west-1/sagemaker-test/images2/seg-30-00000000-frame-00000010.png","test-4":{"annotations":[{"class_id":0,"height":13,"left":584,"top":128,"width":14},{"class_id":1,"height":183,"left":275,"top":183,"width":79},{"class_id":1,"height":130,"left":672,"top":139,"width":52}],"image_size":[{"depth":3,"height":720,"width":1280}]},"test-4-metadata":{"class-map":{"0":"ball","1":"player"},"creation-date":"2019-11-21T10:04:03.636716","human-annotated":"yes","job-name":"labeling-job/test-4","objects":[{"confidence":0.09},{"confidence":0.09},{"confidence":0.09}],"type":"groundtruth/object-detection"}},"taskObject":"s3://tra-bri-bucket-rush-eu-west-1/sagemaker-test/images2/seg-30-00000000-frame-00000010.png"}
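For reference, the Liquid loop above does roughly the following (a Python sketch over an abridged version of the task.input shown, with the test-4 job name hard-coded exactly as in the template):

```python
# Abridged manifestLine from the task.input above.
manifest_line = {
    "test-4": {"annotations": [
        {"class_id": 0, "height": 13, "left": 584, "top": 128, "width": 14},
        {"class_id": 1, "height": 183, "left": 275, "top": 183, "width": 79},
    ]},
    "test-4-metadata": {"class-map": {"0": "ball", "1": "player"}},
}

class_map = manifest_line["test-4-metadata"]["class-map"]

# Build the initial-value array: map each class_id to its label via class-map,
# keeping the box geometry as-is (mirrors the {% for box in ... %} loop).
initial_value = [
    {"label": class_map[str(box["class_id"])],
     "left": box["left"], "top": box["top"],
     "width": box["width"], "height": box["height"]}
    for box in manifest_line["test-4"]["annotations"]
]
```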
What happens if you put only custom HTML inside the Liquid template (and no form at all)?
I can add HTML, but only outside of crowd-form.
Hi @piercus,
I was able to add crowd-icon-button elements to the template with custom width and height (the two images above the Instructions section):
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
<crowd-form>
<crowd-icon-button src="https://tra-bri-bucket-rush-eu-west-1.s3-eu-west-1.amazonaws.com/sagemaker-test/images2/seg-30-00000000-frame-00000003.png" style="width:100px;height:100px !important;">TEST2</crowd-icon-button>
<crowd-icon-button src="https://tra-bri-bucket-rush-eu-west-1.s3-eu-west-1.amazonaws.com/sagemaker-test/images2/seg-30-00000000-frame-00000002.png" style="width:100px;height:100px !important;">TEST3</crowd-icon-button>
<crowd-bounding-box name="boundingBox" src="{{ task.input.taskObject | grant_read_access }}" header="test-4-vatic"
labels="{{ task.input.labels | to_json | escape }}">
<crowd-radio-button>TEST</crowd-radio-button>
<crowd-icon-button>TEST2</crowd-icon-button>
<crowd-radio-group>TEST3</crowd-radio-group>
<full-instructions header="Bounding box instructions">
<ol>
<li><strong>Inspect</strong> the image</li>
<li><strong>Determine</strong> if the specified label is/are visible in the picture.</li>
<li><strong>Outline</strong> each instance of the specified label in the image using the provided “Box” tool.
</li>
</ol>
..................
Hello Tania,
I would like you to test the AWS SageMaker labelling tool.
Can you test the following:
Please keep me updated on this point:
In the future we may try to use a custom task for this.