Hi,
You can use the "Merge" feature. On frame one, enable merging mode and click the polygon shape. Go to frame seven and click the polygon there. Finish merging by clicking the Merge button again. Then go to frame two and remove the outside keyframe there. (Outside keyframes are added automatically if there is no shape selected on frame N+1 after the selected shape on frame N.)
Now you get a polygon track on frames 1-7. To correct the interpolation, set a similar start point and direction using the corresponding buttons in the UI.
You can find more about merging here: https://opencv.github.io/cvat/docs/manual/basics/track-mode-basics/
Thank you very much for the fast answer. I tried it and it works ) I also tried it with bounding boxes and it works too. It may be good to add a small note to the docs that merging tracks works with tracking polygons and polygon shapes (and with boxes too).
An interesting thing: when exporting to MOT 1.1, I get tracks only from the tracking bounding boxes. I thought that tracking bounding boxes would be created from the existing tracking polygons and would end up in the gt.txt file.
When I export it to the COCO dataset format, I get bounding boxes automatically from the tracking polygons (the information about the tracks is lost because it is a detection dataset).
@zhiltsov-max
Do you have any suggestions about the different behaviour in MOT and COCO?
We had an idea of always trying to convert the annotations to a form compatible with the target format, but we decided that it would create more unexpected behavior than help. Currently, we keep subtle annotation conversions only to the cases where we are sure they always make sense. The COCO formats naturally support polygons and boxes simultaneously, which is why we use both there. MOT, in turn, is purely a bbox track format, so no conversions are done.
Please consider using Datumaro for extra annotation conversions. For instance, converting polygons to boxes can be done this way:
# export to the Datumaro format from CVAT, then download & unzip it first
pip install datumaro[default]
datum transform -t shapes_to_boxes -o output_dir extracted_dir/
datum convert -i output_dir/ -if datumaro -f mot_seq -o mot_dataset
information about the tracks is lost because it is a detection dataset

CVAT keeps the source track ids in the attributes section of the exported annotations.
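For example, a minimal sketch of how you could read those attributes back from a CVAT COCO export (the annotations/instances_default.json path is the usual CVAT "COCO 1.0" layout, but treat it as an assumption):

import json

# path assumes the standard CVAT COCO export layout; adjust if yours differs
with open('annotations/instances_default.json') as f:
    coco = json.load(f)

for ann in coco['annotations']:
    # CVAT stores extra data, including the source track id, under "attributes"
    track_id = ann.get('attributes', {}).get('track_id')
    print(ann['id'], ann['category_id'], track_id)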
Hope you got your answers. I will close the issue. Do not hesitate to reopen it if you still have questions.
Thank you for the fast answers and help )
I tried to follow your advice and ran into the following problems. I created a project and an annotation task for it from 10 images. I annotated one tracking box and one tracking polygon to test the export to the MOT dataset. I went to my project and exported it as a Datumaro dataset zip archive without images. I unzipped it into the same folder. I have an "annotations" folder with a "default.json" file inside.
I work with Anaconda Python. To guarantee a clean experiment, I created a new working environment specifically for the datumaro package. I ran this command inside my Python 3.10.8 virtual environment for datumaro:
pip install datumaro[default]
I have a virtual environment with these modules:
# packages in environment at /home/admin-gpu/anaconda3/envs/datumaro:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
aiohttp 3.8.3 pypi_0 pypi
aiohttp-retry 2.8.3 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
amqp 5.1.1 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
asyncssh 2.12.0 pypi_0 pypi
atpublic 3.1.1 pypi_0 pypi
attrs 22.1.0 pypi_0 pypi
billiard 3.6.4.0 pypi_0 pypi
bzip2 1.0.8 h7b6447c_0
ca-certificates 2022.10.11 h06a4308_0
celery 5.2.7 pypi_0 pypi
certifi 2022.9.24 py310h06a4308_0
cffi 1.15.1 pypi_0 pypi
charset-normalizer 2.1.1 pypi_0 pypi
click 8.1.3 pypi_0 pypi
click-didyoumean 0.3.0 pypi_0 pypi
click-plugins 1.1.1 pypi_0 pypi
click-repl 0.2.0 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
commonmark 0.9.1 pypi_0 pypi
configobj 5.0.6 pypi_0 pypi
contourpy 1.0.6 pypi_0 pypi
cryptography 38.0.4 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
datumaro 0.3.1 pypi_0 pypi
defusedxml 0.7.1 pypi_0 pypi
dictdiffer 0.9.0 pypi_0 pypi
diskcache 5.4.0 pypi_0 pypi
distro 1.8.0 pypi_0 pypi
dpath 2.1.1 pypi_0 pypi
dulwich 0.20.50 pypi_0 pypi
dvc 2.36.0 pypi_0 pypi
dvc-data 0.28.3 pypi_0 pypi
dvc-http 2.27.2 pypi_0 pypi
dvc-objects 0.14.0 pypi_0 pypi
dvc-render 0.0.14 pypi_0 pypi
dvc-task 0.1.6 pypi_0 pypi
dvclive 1.1.0 pypi_0 pypi
filelock 3.8.0 pypi_0 pypi
flatten-dict 0.4.2 pypi_0 pypi
flufl-lock 7.1.1 pypi_0 pypi
fonttools 4.38.0 pypi_0 pypi
frozenlist 1.3.3 pypi_0 pypi
fsspec 2022.11.0 pypi_0 pypi
funcy 1.17 pypi_0 pypi
future 0.18.2 pypi_0 pypi
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.29 pypi_0 pypi
grandalf 0.6 pypi_0 pypi
h5py 3.7.0 pypi_0 pypi
hydra-core 1.2.0 pypi_0 pypi
idna 3.4 pypi_0 pypi
iterative-telemetry 0.0.6 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
kombu 5.2.4 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
lxml 4.9.1 pypi_0 pypi
matplotlib 3.6.2 pypi_0 pypi
multidict 6.0.3 pypi_0 pypi
nanotime 0.5.2 pypi_0 pypi
ncurses 6.3 h5eee18b_3
networkx 2.8.8 pypi_0 pypi
nibabel 4.0.2 pypi_0 pypi
numpy 1.23.5 pypi_0 pypi
omegaconf 2.2.3 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openssl 1.1.1s h7f8727e_0
orjson 3.8.3 pypi_0 pypi
packaging 21.3 pypi_0 pypi
pandas 1.5.2 pypi_0 pypi
pathspec 0.9.0 pypi_0 pypi
pillow 9.3.0 pypi_0 pypi
pip 22.2.2 py310h06a4308_0
prompt-toolkit 3.0.33 pypi_0 pypi
protobuf 3.20.1 pypi_0 pypi
psutil 5.9.4 pypi_0 pypi
pycocotools 2.0.6 pypi_0 pypi
pycparser 2.21 pypi_0 pypi
pydot 1.4.2 pypi_0 pypi
pygit2 1.11.1 pypi_0 pypi
pygments 2.13.0 pypi_0 pypi
pygtrie 2.5.0 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
python 3.10.8 h7a1cb2a_1
python-dateutil 2.8.2 pypi_0 pypi
pytz 2022.6 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
readline 8.2 h5eee18b_0
requests 2.28.1 pypi_0 pypi
rich 12.6.0 pypi_0 pypi
ruamel-yaml 0.17.21 pypi_0 pypi
ruamel-yaml-clib 0.2.7 pypi_0 pypi
scipy 1.9.3 pypi_0 pypi
scmrepo 0.1.4 pypi_0 pypi
setuptools 65.5.0 py310h06a4308_0
shortuuid 1.0.11 pypi_0 pypi
shtab 1.5.8 pypi_0 pypi
six 1.16.0 pypi_0 pypi
smmap 5.0.0 pypi_0 pypi
sqlite 3.40.0 h5082296_0
tabulate 0.9.0 pypi_0 pypi
tensorboardx 2.5.1 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tomlkit 0.11.6 pypi_0 pypi
tqdm 4.64.1 pypi_0 pypi
typing-extensions 4.4.0 pypi_0 pypi
tzdata 2022f h04d1e81_0
urllib3 1.26.13 pypi_0 pypi
vine 5.0.0 pypi_0 pypi
voluptuous 0.13.1 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0
xz 5.2.8 h5eee18b_0
yarl 1.8.2 pypi_0 pypi
zc-lockfile 2.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_0
Then I ran the command
datum transform -t shapes_to_boxes -o output_dir annotations
# or datum transform -t shapes_to_boxes -o output_dir annotations/
And I got this error:
$ datum transform -t shapes_to_boxes -o output_dir annotations/
ERROR:root:'NoneType' object has no attribute 'commit'
Traceback (most recent call last):
File "/home/admin-gpu/anaconda3/envs/datumaro/bin/datum", line 8, in <module>
sys.exit(main())
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/cli/__main__.py", line 184, in main
retcode = args.command(args)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/util/scope.py", line 158, in wrapped_func
ret_val = func(*args, **kwargs)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/cli/contexts/project/__init__.py", line 645, in transform_command
dataset, _project = parse_full_revpath(args.target, project)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/cli/util/project.py", line 136, in parse_full_revpath
return parse_revspec(s, ctx_project=ctx_project)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/util/scope.py", line 158, in wrapped_func
ret_val = func(*args, **kwargs)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/cli/util/project.py", line 100, in parse_revspec
if project.is_ref(proj_path):
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/components/project.py", line 2555, in is_ref
return self._git.is_ref(ref)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/components/project.py", line 1174, in is_ref
self.repo.commit(rev)
AttributeError: 'NoneType' object has no attribute 'commit'
Why does this occur? =)
Seems strange; could you please check with the previous version of Datumaro (v0.3)? You can install it with pip install 'datumaro==0.3'. If that does not work, it deserves a bug report in the Datumaro repo. In both cases, you can use the following Python script as a workaround:
import datumaro as dm
dataset = dm.Dataset.import_from('annotations/', 'datumaro')
dataset.transform('shapes_to_boxes')
dataset.export('output_dataset', 'mot_seq') # add ", save_images=True" if you exported with images from CVAT
There seems to be some kind of error either in saving the file or in its path. For some reason, the script that you gave produces this strange error in my environment with datumaro==0.3.1:
Traceback (most recent call last):
File "/home/admin-gpu/Downloads/cvat_datasets/datumaro_convert.py", line 3, in <module>
dataset = dm.Dataset.import_from('annotations/', 'datumaro')
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/components/dataset.py", line 1165, in import_from
detected_sources = importer(path, **kwargs)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/components/extractor.py", line 440, in __call__
raise DatasetNotFoundError(path)
datumaro.components.errors.DatasetNotFoundError: Failed to find dataset at 'annotations/'
I have tried both relative and absolute paths. Seems strange. Just in case, I attached the "annotations/default.json" file to this message (default.zip). Just in case, I also saved my test dataset to the datumaro format again and extracted it, but nothing changed.
Now I will create a new virtual environment with 'datumaro==0.3' and try both ways with it.
Ah, sorry. The path to the dataset should either be a path to the json file, or a directory containing the annotations/ directory. Please try dataset = dm.Dataset.import_from('annotations/default.json', 'datumaro') or dataset = dm.Dataset.import_from('<the parent dir of annotations/>', 'datumaro').
I created a new virtual environment and installed datumaro==0.3. When running in console mode, I was asked to install GitPython, and then the same error returned as before. I also ran the Python script and got the same error as before. In general, the behavior of version 0.3 repeats the behavior of the newer version 0.3.1.
The Python script says that there is no dataset at 'annotations/'. Perhaps for the same reason, when reading the 'annotations/' folder through the console, the code gets a NoneType, which does not have the 'commit' method that should be called further on.
There is a suspicion that I am somehow getting the wrong zip archive of the datumaro dataset. Also, can I export from the CVAT video dataset format with the same mask-to-shapes conversion option?
Below is the list of installed packages in the new virtual environment with Python 3.10 and datumaro 0.3.
# packages in environment at /home/admin-gpu/anaconda3/envs/datumaro:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
aiohttp 3.8.3 pypi_0 pypi
aiohttp-retry 2.8.3 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
amqp 5.1.1 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
asyncssh 2.12.0 pypi_0 pypi
atpublic 3.1.1 pypi_0 pypi
attrs 22.1.0 pypi_0 pypi
billiard 3.6.4.0 pypi_0 pypi
bzip2 1.0.8 h7b6447c_0
ca-certificates 2022.10.11 h06a4308_0
celery 5.2.7 pypi_0 pypi
certifi 2022.9.24 py310h06a4308_0
cffi 1.15.1 pypi_0 pypi
charset-normalizer 2.1.1 pypi_0 pypi
click 8.1.3 pypi_0 pypi
click-didyoumean 0.3.0 pypi_0 pypi
click-plugins 1.1.1 pypi_0 pypi
click-repl 0.2.0 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
commonmark 0.9.1 pypi_0 pypi
configobj 5.0.6 pypi_0 pypi
contourpy 1.0.6 pypi_0 pypi
cryptography 38.0.4 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
datumaro 0.3 pypi_0 pypi
defusedxml 0.7.1 pypi_0 pypi
dictdiffer 0.9.0 pypi_0 pypi
diskcache 5.4.0 pypi_0 pypi
distro 1.8.0 pypi_0 pypi
dpath 2.1.2 pypi_0 pypi
dulwich 0.20.50 pypi_0 pypi
dvc 2.36.0 pypi_0 pypi
dvc-data 0.28.3 pypi_0 pypi
dvc-http 2.27.2 pypi_0 pypi
dvc-objects 0.14.0 pypi_0 pypi
dvc-render 0.0.14 pypi_0 pypi
dvc-task 0.1.6 pypi_0 pypi
dvclive 1.1.0 pypi_0 pypi
filelock 3.8.1 pypi_0 pypi
flatten-dict 0.4.2 pypi_0 pypi
flufl-lock 7.1.1 pypi_0 pypi
fonttools 4.38.0 pypi_0 pypi
frozenlist 1.3.3 pypi_0 pypi
fsspec 2022.11.0 pypi_0 pypi
funcy 1.17 pypi_0 pypi
future 0.18.2 pypi_0 pypi
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.29 pypi_0 pypi
grandalf 0.6 pypi_0 pypi
hydra-core 1.2.0 pypi_0 pypi
idna 3.4 pypi_0 pypi
iterative-telemetry 0.0.6 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
kombu 5.2.4 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
lxml 4.9.1 pypi_0 pypi
matplotlib 3.6.2 pypi_0 pypi
multidict 6.0.3 pypi_0 pypi
nanotime 0.5.2 pypi_0 pypi
ncurses 6.3 h5eee18b_3
networkx 2.8.8 pypi_0 pypi
numpy 1.23.5 pypi_0 pypi
omegaconf 2.2.3 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openssl 1.1.1s h7f8727e_0
orjson 3.8.3 pypi_0 pypi
packaging 21.3 pypi_0 pypi
pandas 1.5.2 pypi_0 pypi
pathspec 0.9.0 pypi_0 pypi
pillow 9.3.0 pypi_0 pypi
pip 22.2.2 py310h06a4308_0
prompt-toolkit 3.0.33 pypi_0 pypi
protobuf 3.20.1 pypi_0 pypi
psutil 5.9.4 pypi_0 pypi
pycocotools 2.0.6 pypi_0 pypi
pycparser 2.21 pypi_0 pypi
pydot 1.4.2 pypi_0 pypi
pygit2 1.11.1 pypi_0 pypi
pygments 2.13.0 pypi_0 pypi
pygtrie 2.5.0 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
python 3.10.8 h7a1cb2a_1
python-dateutil 2.8.2 pypi_0 pypi
pytz 2022.6 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
readline 8.2 h5eee18b_0
requests 2.28.1 pypi_0 pypi
rich 12.6.0 pypi_0 pypi
ruamel-yaml 0.17.21 pypi_0 pypi
ruamel-yaml-clib 0.2.7 pypi_0 pypi
scipy 1.9.3 pypi_0 pypi
scmrepo 0.1.4 pypi_0 pypi
setuptools 65.5.0 py310h06a4308_0
shortuuid 1.0.11 pypi_0 pypi
shtab 1.5.8 pypi_0 pypi
six 1.16.0 pypi_0 pypi
smmap 5.0.0 pypi_0 pypi
sqlite 3.40.0 h5082296_0
tabulate 0.9.0 pypi_0 pypi
tensorboardx 2.5.1 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tomlkit 0.11.6 pypi_0 pypi
tqdm 4.64.1 pypi_0 pypi
typing-extensions 4.4.0 pypi_0 pypi
tzdata 2022f h04d1e81_0
urllib3 1.26.13 pypi_0 pypi
vine 5.0.0 pypi_0 pypi
voluptuous 0.13.1 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0
xz 5.2.8 h5eee18b_0
yarl 1.8.2 pypi_0 pypi
zc-lockfile 2.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_0
I understand =) I tested this code and it didn't run; I get a new error. As I understand it, there is no mot_seq keyword for the MOT sequence dataset format. I went to read the documentation link, but I did not find a description of the keywords corresponding to the dataset formats. On this page link, the MOT challenge dataset has the name you noted above, 'mot_seq'.
Traceback (most recent call last):
File "/home/admin-gpu/Downloads/cvat_datasets/datumaro_convert.py", line 6, in <module>
dataset.export('output_dataset', 'mot_seq')
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/util/scope.py", line 135, in wrapped_func
ret_val = func(*args, **kwargs)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/components/dataset.py", line 914, in export
converter = self.env.converters[format]
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/components/environment.py", line 37, in __getitem__
return self.get(key)
File "/home/admin-gpu/anaconda3/envs/datumaro/lib/python3.10/site-packages/datumaro/components/environment.py", line 34, in get
return self.items[key]
KeyError: 'mot_seq'
Perhaps this 'mot_seq' keyword for the MOT challenge dataset appeared in version 0.3.1?
Update: I tested with datumaro==0.3.1; everything is the same as in version 0.3, the same error with the keyword 'mot_seq'.
Ok, I've tested the pipeline. It seems there are several problems along the way that need to be fixed. A quick workaround is to modify the script above this way:
import datumaro as dm

dataset = dm.Dataset.import_from('path/to/default.json', 'datumaro')
dataset.transform('shapes_to_boxes')
# load the lazily-imported dataset into memory so it can be patched
dataset.init_cache()
# workaround: set the media type the MOT exporter expects
dataset._data._media_type = dm.Image
dataset.export('output_dataset', 'mot_seq_gt')
Here is the output_dataset.zip that I've got.
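To sanity-check the result, a small hypothetical snippet that prints the first few ground-truth rows (gt/gt.txt is the usual MOT layout; the exact paths may differ between Datumaro versions):

# print the first few exported MOT ground-truth rows
with open('output_dataset/gt/gt.txt') as f:
    for line in f.readlines()[:5]:
        # MOT gt columns: frame, track_id, x, y, w, h, conf, class, visibility
        print(line.rstrip())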
Thank you very much, I also tested it and it works =)
I also wanted to ask where to see which keywords correspond to the dataset names. How do I find the keyword for a dataset I'm interested in (for the MOT challenge it is 'mot_seq_gt')? I searched for the MOT challenge dataset on the documentation page link, but there is no MOT. I did find some others, though: for the KITTI dataset the keyword is 'kitti', and for the COCO dataset the keyword is 'coco'.
For many datasets there is documentation at the link you attached, but not for all of them. For the others, you can check the format tests and the full lists available in the datum import --help and datum export --help CLI output. Dataset examples are available in the test resources here; their names are aligned with the format names, for the most part.
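If you prefer checking programmatically, a minimal sketch that lists the registered export format names (assuming the converters registry exposes its keys via .items, as the traceback above suggests):

from datumaro.components.environment import Environment

env = Environment()
# the converters registry maps format names to exporter classes
print(sorted(env.converters.items))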
Thank you for the information and your help ) I have no more questions on this topic.
Is there a CVAT API for doing this (merging shapes into a track)?
@gwengusc, it's a UI feature. The server API allows you to download annotations and change them.
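For reference, a hedged sketch of that download-modify-upload flow over the REST API (the host, task id, and credentials are placeholders; check the API docs of your CVAT version for the exact endpoints):

import requests

HOST = 'http://localhost:8080'   # assumption: a local CVAT instance
AUTH = ('user', 'password')      # assumption: basic auth is enabled

# download the annotations of task 1
anns = requests.get(f'{HOST}/api/tasks/1/annotations', auth=AUTH).json()

# ... modify anns['shapes'] / anns['tracks'] here, e.g. move shapes into tracks ...

# upload the modified annotations back
requests.put(f'{HOST}/api/tasks/1/annotations', json=anns, auth=AUTH)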
My actions before raising this issue
I uploaded a special docker image with DEXTR and ran it, and then CVAT could interact with it.
My idea and its description: I want to track some object, for example car1. I get a polygon shape from DEXTR for the car1 object on frame 1. Then I get a polygon shape from DEXTR for the car1 object on frame 7. I want to track car1 from frame 1 to frame 7. But as I understand it, I can't, because DEXTR gives me polygon shapes.
Another interesting example: I have a dataset with polygon shapes and I want to convert it to tracked objects. It would be great to be able to simply combine polygons of the same object into one track. For example, mark the first polygon on frame 1 as belonging to track1, then go to the polygon on frame 2 and mark it as belonging to track1 as well, etc.
Expected Behaviour
There is no bug in this case, there is a request to add additional functionality.
Current Behaviour
We can use interactors like DEXTR only for instance segmentation, but not for tracking. As I understand it, a dataset in which the data is labeled for instance segmentation cannot be converted to tracking.
Possible Solution
Steps to Reproduce (for bugs)
There is no bug in this case, there is a request to add additional functionality.
Context
This would allow the use of interactors such as DEXTR for tracking objects, by combining different polygons on consecutive frames into one track. I think converting a dataset from instance segmentation (polygon shapes) will be faster than labeling it from scratch from the source images.
Your Environment
Everything works for me according to the instructions. I'm just proposing new functionality.