I've identified three classes (along with their attributes) that we will require:
SharingImages (this should really be renamed SharingObjects to make it more general)
Attributes
Name: Some identifier
Description: Some description
(Optional) Category: If the detected item is a physical object, use a CoreML image-classification model to infer its category (locket, spoon, ...)
Location: We need to check what ARKit returns for a detected image; ARKit's ARImageAnchor reports a world-space transform (plus the reference image's physical size) rather than raw pixel coordinates of the corners. The location would be useful when position relative to other images is required (for example, in sequential action inference). It also doesn't need to be an attribute of the SharingImage class itself.
Type: Could be an action object (to open a certain app), an audience identifier (grandma, friends, ...), or freeform input (text, painting, ...)
Methods
Some getter and setter methods
Some image-editing actions like crop, filter, ...
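As a rough Swift sketch of the class above (names like SharingObjectType and worldTransform are my placeholders, not decided API, and the crop method is a stub):

```swift
import CoreGraphics
import simd

// Placeholder for the Type attribute; cases mirror the list above.
enum SharingObjectType {
    case action(appIdentifier: String)   // opens a certain app
    case audience(identifier: String)    // grandma, friends, ...
    case freeform                        // text, painting, ...
}

class SharingObject {
    var name: String                     // some identifier
    var description: String
    var category: String?                // optionally inferred via CoreML
    var worldTransform: simd_float4x4?   // e.g. from ARImageAnchor.transform
    var type: SharingObjectType

    init(name: String, description: String, type: SharingObjectType) {
        self.name = name
        self.description = description
        self.type = type
    }

    // Image-editing placeholder: cropping would operate on the captured image.
    func crop(to region: CGRect) {
        // TODO: apply the crop to the underlying image data
    }
}
```

Keeping worldTransform optional matches the note above that location may not belong on this class at all; it could instead live in whatever structure tracks detections.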
SharingAudience
Attributes
Name
(Optional) Image: e.g., a picture of the contact
Contact Info: Could just be an email address, or a user handle (WhatsApp, Facebook, ...)
Type: Some category label like friend, son, relative, ...
State: Some label like active, passive, or refers-to-another-contact (i.e., this person is reached through another SharingAudience entry)
Methods
Some getter and setter methods
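A corresponding Swift sketch; the ContactInfo and AudienceState enums are my assumptions to make the attributes concrete:

```swift
import Foundation

// Placeholder for the Contact Info attribute.
enum ContactInfo {
    case email(String)
    case handle(service: String, user: String)  // WhatsApp, Facebook, ...
}

// Placeholder for the State attribute.
enum AudienceState {
    case active
    case passive
    case referredBy(SharingAudience)  // reached through another contact
}

class SharingAudience {
    var name: String
    var image: Data?             // optional picture of the contact
    var contactInfo: ContactInfo
    var type: String             // friend, son, relative, ...
    var state: AudienceState

    init(name: String, contactInfo: ContactInfo, type: String,
         state: AudienceState = .active) {
        self.name = name
        self.contactInfo = contactInfo
        self.type = type
        self.state = state
    }
}
```

Modeling the refers-to-another-contact state as an associated value (referredBy) keeps the referral chain navigable without a separate lookup table.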
SharingActions
Attributes
Name
Description
Uses another app: A boolean indicating whether this action will trigger a third-party app.
(Optional) RelatedApp
(Optional) RelatedAppsAction
Other data required: Some actions may need additional data to execute: recorded audio/video, an image of some text or a painting, a designated receiver (someone already present as a SharingAudience), or input from another person.
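Sketched in Swift (the AdditionalInput enum and the perform signature are assumptions; the injected openURL closure stands in for UIApplication.shared.open on iOS, which keeps the class testable off-device):

```swift
import Foundation

// Placeholder for the "other data" an action might need.
enum AdditionalInput {
    case audio(URL)
    case video(URL)
    case image(Data)                   // e.g. a photo of some text or a painting
    case receiver(name: String)        // someone already present as a SharingAudience
    case externalPerson(name: String)  // input needed from another person
}

class SharingAction {
    var name: String
    var description: String
    var usesAnotherApp: Bool
    var relatedApp: String?            // e.g. a URL scheme like "whatsapp://"
    var relatedAppsAction: String?
    var requiredInputs: [AdditionalInput] = []

    init(name: String, description: String, usesAnotherApp: Bool) {
        self.name = name
        self.description = description
        self.usesAnotherApp = usesAnotherApp
    }

    // Trigger the third-party app via its URL scheme; the caller injects
    // the opener so the class doesn't depend on UIKit directly.
    func perform(openURL: (URL) -> Void) {
        guard usesAnotherApp,
              let scheme = relatedApp,
              let url = URL(string: scheme) else { return }
        openURL(url)
    }
}
```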
Next step: identify the minimal set of classes we'll need to open a given app and provide input via images.
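One minimal answer, sketched under the assumption that detections arrive as reference-image names (which is what ARKit's ARImageAnchor.referenceImage.name provides): a registry mapping image names to app-opening actions, with the openURL closure again standing in for UIApplication.shared.open. The type names here are hypothetical.

```swift
import Foundation

// Minimal flow: detected image name -> action that opens an app with input.
struct AppAction {
    let urlScheme: String        // e.g. "whatsapp://send?text="
    let inputImageName: String?  // a second detected image used as input
}

final class ImageActionRegistry {
    private var actions: [String: AppAction] = [:]

    func register(imageName: String, action: AppAction) {
        actions[imageName] = action
    }

    // Called with the detected reference image's name when ARKit reports it.
    func handleDetection(of imageName: String, openURL: (URL) -> Void) {
        guard let action = actions[imageName],
              let url = URL(string: action.urlScheme) else { return }
        openURL(url)
    }
}
```

So the strict minimum is roughly a SharingObject (the detected image) plus a SharingAction; SharingAudience only becomes necessary once a receiver is involved.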