Context

My team @ extra is building a marketplace for:

- data-collectors (3D scanners, LIDAR, or even off-the-shelf compute resources): the raw scans they contribute are processed by our algorithms to generate a high-quality format compliant with current geospatial standards. This is done by a container hosted on Docker Hub, which is passed as a compute job that runs the reconstruction algorithm on the given input.
- consumers of the data: they pay fees to access this data, which in turn reimburse the treasury that finances the creators as well as the infrastructure providers on Bacalhau.
Current Workflow:
```mermaid
sequenceDiagram
    user->>ipfs-storage: 1. Stores the raw 3D scan file (in .ply format)
    ipfs-storage-->>bacalhau: 2. Gets the CID of the stored scan and passes it as the parameter for the reconstruction container
    docker-reconstruction-->>bacalhau: 3. Runs the compute job with the input file parameter
    bacalhau-->>ipfs-storage: 4. After the compute job is complete, fetches the reconstructed output file and stores it on IPFS
```
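For concreteness, the workflow above corresponds roughly to the following CLI calls. This is a sketch: I'm assuming a recent Bacalhau CLI where `-i ipfs://<cid>` mounts the input under `/inputs`, and `ourteam/reconstruction` is a placeholder for our Docker Hub image.

```bash
# 1. Store the raw 3D scan on IPFS and capture its CID
CID=$(ipfs add -Q scan.ply)

# 2./3. Submit the reconstruction container as a Bacalhau job,
#       with the scan mounted as input (image name is a placeholder)
bacalhau docker run -i ipfs://$CID ourteam/reconstruction:latest

# 4. Fetch the reconstructed output once the job completes
bacalhau get <job-id>
```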
We also want the user to be able to understand, up front, how much they will need to pay (based on the compute requirements of the job) to support the hosting services of the infrastructure.
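Note that the compute requirements such a cost would be based on are already declared when submitting a job; the resource flags below exist in the Bacalhau CLI today (the image name is again a placeholder):

```bash
# Resource requirements are declared per job; a price estimate
# could be derived from exactly these values
bacalhau docker run --cpu 4 --memory 8gb --gpu 1 \
  ourteam/reconstruction:latest
```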
Issue
For this architecture to be commercially viable, we need a feature in the CLI that determines the compute cost for a given job, e.g. something like `bacalhau docker estimate <image:tag>`.
I think this would change the whole workflow for resource providers (i.e. those hosting the cluster of compute/requester nodes), as they could define at deployment time how much they want to charge users for a given compute instance.
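To make the proposal concrete, a hypothetical invocation could look as follows; the `estimate` subcommand and the output format are purely illustrative and do not exist in Bacalhau today:

```bash
# Hypothetical (proposed) subcommand: estimate the cost of a job
# before submitting it, based on the declared resource requirements
bacalhau docker estimate --cpu 4 --memory 8gb --gpu 1 \
  ourteam/reconstruction:latest

# Illustrative output (made up for this proposal):
#   estimated cost: 12.5 credits (4 vCPU, 8 GB RAM, 1 GPU)
```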
@aronchick happy to share further details / a task list based on feedback from the community.
Thanks in advance!