konacurrents / SemanticMarkerAPI

Kona Currents, LLC offers the Semantic Marker®️ System API for download. The descriptions and code samples below support creation of, and interaction with, our trademarked Semantic Marker®️, especially with Internet of Things (IoT) devices and infrastructure.
MIT License

New Paradigm: Point at Things (PAT) and they do their IoT thing securely (turn on / play music, feed dog) #2

konacurrents opened this issue 11 months ago

konacurrents commented 11 months ago

The Semantic Marker™, when combined with SMART (the Semantic Marker™ Augmented Reality of Things), supports a new programming paradigm:

Pointing at things to securely invoke their functionality

PAT = Point at Things

This includes scanning and searching (i.e., scanning in situ in the physical world, or searching among virtual items). The 2D optical vision marker, the Semantic Marker™, is simply the tool: precise naming using an image.

None of this involves a special language or the spoken word. For example, you walk into a room with hundreds of lights. What is the voice command for turning on the 50th light, and which one is the 50th? How is the naming convention arranged (column- or row-major, big- or little-endian, etc.)?

Now, if a user could simply point at the desired item, this 50th light, they wouldn't need to know the naming convention: just turn on the one I'm pointing at. Much like the physical light switches of old, a direct connection to the light is made (once the appropriate switch for the 50th light is found).
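
As a rough illustration of that direct connection, here is a hypothetical Python sketch (not the Semantic Marker API itself): the payload decoded from the scanned marker already names the exact device, so no room-wide naming convention is needed. The payload format and the `send_iot_command` helper are invented for the example.

```python
# Hypothetical sketch: the scanned Semantic Marker payload names the exact
# device, so the user never needs to know the "50th light" naming convention.

def send_iot_command(device_id: str, command: str) -> None:
    # Placeholder for the real transport (MQTT, HTTP, BLE, ...).
    print(f"-> {device_id}: {command}")

def handle_scanned_marker(payload: str) -> None:
    """Handle a payload such as "device=light-0050&cmd=on" (format is illustrative)."""
    fields = dict(part.split("=", 1) for part in payload.split("&"))
    send_iot_command(fields["device"], fields["cmd"])   # direct, exact addressing

handle_scanned_marker("device=light-0050&cmd=on")
```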

  1. Aside from putting an image on every item, a printout could be available that shows all the switches and their corresponding Semantic Marker™. Pointing at the 50th light switch is then really pointing at the printout (or online page) denoting that 50th light.
  2. Using the context of multiple optical markers is also valuable: if two markers are seen, the left or right one may carry more weight, or the combination may unlock a key, etc.
  3. Using internal memory of previous scanning (pointing) events: scanning a mode optical marker, and then scanning another, generic marker. The remembered mode is used to instantiate that generic Semantic Marker™ (see the sketch after this list).
  4. Security is vital. Bottom line: the username and password should be hidden (even from a tool like Wireshark that decodes internet messages). SMART buttons usually require end users to instantiate them with their own credentials, the username and password. But in a friendly environment (no intruders, etc.), new SMART buttons can be created that already include the instantiated values (e.g., the username and password).
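
Here is a hypothetical sketch of item 3, a scanner remembering a previously scanned mode marker and using it to instantiate the next generic marker; the payload prefixes are invented and not the API's format.

```python
# Hypothetical sketch of item 3: a scanned "mode" marker is remembered and
# used to instantiate the next generic Semantic Marker that is scanned.
from typing import Optional

class PointingScanner:
    def __init__(self) -> None:
        self.mode: Optional[str] = None         # last scanned mode, if any

    def scan(self, payload: str) -> None:
        if payload.startswith("mode:"):
            self.mode = payload[len("mode:"):]  # e.g. "mode:music"
            print(f"mode set to '{self.mode}'")
        else:
            # A generic marker is interpreted in the remembered mode.
            action = f"{self.mode}.{payload}" if self.mode else payload
            print(f"invoke {action}")

scanner = PointingScanner()
scanner.scan("mode:music")   # first scan sets the mode
scanner.scan("play")         # second scan is instantiated as "music.play"
```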

[!IMPORTANT] Because of the unique Language Addressability of SMART, it enables a powerful Inheritance capability for extensible and adaptable applications, all based on pictorial images. Security is supported since not all the secrets are out in the open (such as being encoded in the optical marker); instead, additional parameters are used to instantiate the SMART button.
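
One way to picture that instantiation step (again a hypothetical sketch; the template syntax and parameter names are invented): the optical marker carries only a generic template, and the secrets are supplied locally when the SMART button is created, so they never appear in the marker itself.

```python
# Hypothetical sketch: the marker encodes only a generic action template;
# the username/password are injected locally when the SMART button is built,
# so the secrets are never encoded in the optical marker.

def make_smart_button(template: str, **credentials: str):
    """Return a zero-argument "button" bound to the instantiated template."""
    def press() -> None:
        message = template.format(**credentials)   # instantiate the template
        publish(message)                           # hand off to the messaging layer
    return press

def publish(message: str) -> None:
    # Placeholder for the real transport; a real system would keep this encrypted.
    print("publishing:", message)

feed_dog = make_smart_button(
    "feed?user={username}&pass={password}&device=dog-feeder",
    username="alice", password="s3cret",           # supplied locally, not in the marker
)
feed_dog()
```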

Messaging is the key

SMART relies on a robust and extensible internet messaging capability. These messages are described throughout the Semantic Marker API document. This includes the following:
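
For illustration only (the actual message formats are defined in the Semantic Marker API document), a generic MQTT-style publish triggered by a scan might look like this sketch; the topic, payload fields, and broker are invented.

```python
# Illustrative only: an invented topic/payload layout sent over an MQTT-style
# broker, using the paho-mqtt package (pip install paho-mqtt).
import json
import paho.mqtt.publish as publish

message = {
    "marker": "SM-example-0001",     # which Semantic Marker was scanned
    "action": "feed",                # what the SMART button should do
    "device": "dog-feeder",          # target IoT device
}

publish.single(
    topic="semanticmarker/demo/actions",   # hypothetical topic
    payload=json.dumps(message),
    hostname="test.mosquitto.org",         # public test broker
)
```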

References to real-world examples

The concepts described above can be found in everyday use:

[!TIP] Other than a known common entity (an ad avatar) across services, such as AccuWeather for the weather station, how else is a user to find the weather station? Aside from reading words throughout the guide, what if there were images that could be used? These could be similar to European signage, or other common images (a stop sign, etc.). Or there could be actual photographic images that denote the concept of the weather station, perhaps an image with lightning or rain, or with a question mark. The Semantic Marker™ provides for these human-recognized images, or Photo Avatars.

konacurrents commented 11 months ago

Perfect mapping

Unlike almost every other paradigm today, especially AI.

There is no partial text mapping, no wrong speech recognition, no wrong face or object recognition.

attack at dawn

vs

attract at dawn, Monday

Perfect recognition

The Semantic Marker™️ optical visual marker is perfectly recognized, or nothing is recognized; no partial results.
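
That all-or-nothing property can be seen with any 2D code that has built-in error correction; here is a small sketch assuming a QR-style marker image and the pyzbar and Pillow packages (not the Semantic Marker toolchain), with a hypothetical image file name.

```python
# Sketch of the all-or-nothing property, assuming a QR-style 2D marker.
from PIL import Image
from pyzbar.pyzbar import decode

results = decode(Image.open("marker.png"))   # hypothetical image file
if results:
    # Error correction guarantees the payload is complete, not partial.
    print("exact payload:", results[0].data.decode("utf-8"))
else:
    # No fuzzy guess is returned -- the scan simply fails.
    print("nothing recognized")
```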

Text links are exact but limited

Current hyperlinks to other endpoints have been useful, and they are exact: there is the tool that references the link, and the tool that is invoked when traveling to the marked location.

But the calling tool has to have infrastructure to support describing this link. Thus web pages have href attributes, word processors have hyperlink metadata, PDFs have hyperlinks, etc.

There is no text recognition involved, as these tools have a special hyperlink design: one that hides the metadata (the hyperlink) but, if touched, can (usually) invoke that link.

Outside of 1988-era home-grown Hypermedia, without an indirect mediator, all links go directly to the specified endpoint.

SMART buttons of 2023 support that customized indirection. Be it a changing web page address, a presentation language, or a full IoT messaging capability, it requires this indirect mediator.
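
A hypothetical sketch of that indirect mediator: the marker's payload is a stable key, and the mediator decides at scan time what it currently maps to, so the target can change without reprinting the marker. The table entries here are invented.

```python
# Hypothetical sketch: a mediator table maps stable marker keys to whatever
# they are currently bound to; rebinding never requires reprinting the marker.

MEDIATOR_TABLE = {
    # stable marker key       -> current binding (editable at any time)
    "weather-station-avatar": {"kind": "url", "target": "https://example.org/weather"},
    "feed-dog-button":        {"kind": "iot", "target": "dog-feeder/feed"},
}

def resolve(marker_key: str) -> None:
    binding = MEDIATOR_TABLE.get(marker_key)
    if binding is None:
        print("unknown marker")                          # nothing to invoke
    elif binding["kind"] == "url":
        print("open page:", binding["target"])           # changing web page address
    else:
        print("send IoT message:", binding["target"])    # full IoT messaging path

resolve("feed-dog-button")
```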

konacurrents commented 11 months ago

PAT uses a Light Saber

https://github.com/konacurrents/SemanticMarkerAPI/assets/5956266/bcc1e79c-b0d5-453a-8da6-406a71fc3948

konacurrents commented 11 months ago

Brainstorm on 3D holder of scanner with ATOM

(image attached)

Maybe with an M5 display.

(image attached)

konacurrents commented 11 months ago

Blind users

The PAT Light Saber could be used by blind (sight-challenged) Semantic Marker™️ users:

konacurrents commented 9 months ago

JiffySoft 017

konacurrents commented 8 months ago

3 shows the 3D-printed enclosure.