ori-drs / spot_ros

Cannot have both the controller and the ros estop running at the same time #9

Open · heuristicus opened this issue 3 years ago

heuristicus commented 3 years ago

The estop is initialised at https://github.com/ori-drs/spot_ros/blob/master/spot_driver/src/spot_driver/spot_wrapper.py#L403-L407 and forces disassociation of the controller's estop. This is not ideal, because we want to be able to estop from multiple places.

There are details on estop functionality at https://dev.bostondynamics.com/docs/concepts/estop_service, but it isn't really a tutorial.

Perhaps slightly modifying force_simple_setup at https://github.com/boston-dynamics/spot-sdk/blob/7ce5c5f31f4e1e45e9ff4be29fb097e258b75919/python/bosdyn-client/src/bosdyn/client/estop.py#L291-L300 will fix the issue; we just want to add an endpoint to the existing configuration rather than replacing it. See the sketch below.
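
For reference, a rough paraphrase of what the linked force_simple_setup does (based on the linked SDK lines, not copied verbatim): it builds a brand-new configuration containing only this endpoint, replaces the active configuration with it, and registers against the configuration returned by set_config.

    # Rough paraphrase of bosdyn.client.estop.EstopEndpoint.force_simple_setup
    # (see the linked SDK lines); not verbatim.
    from bosdyn.api import estop_pb2

    def force_simple_setup(self):
        """Replace the robot's estop configuration with one containing only this endpoint."""
        new_config = estop_pb2.EstopConfig()
        new_config_endpoint = new_config.endpoints.add()
        new_config_endpoint.CopyFrom(self.to_proto())
        # Replace whatever configuration is currently active...
        active_config = self.client.set_config(new_config, self.client.get_config().unique_id)
        # ...then register this endpoint against the returned (new) config's unique_id.
        self.register(active_config.unique_id)

An additive variant would presumably need to keep the existing endpoints in the config rather than starting from an empty EstopConfig.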

heuristicus commented 3 years ago

Tried

    def resetEStop(self):
        """Get keepalive for eStop"""
        self._estop_endpoint = EstopEndpoint(self._estop_client, 'ros', 9.0, role="ros_estop")
        #self._estop_endpoint.force_simple_setup()  # Set this endpoint as the robot's sole estop.
        from bosdyn.api import estop_pb2
        # Fetch the active estop configuration and append our endpoint to it,
        # instead of replacing the whole configuration as force_simple_setup does.
        new_config = self._estop_client.get_config()
        driver_endpoint = new_config.endpoints.add()
        driver_endpoint.CopyFrom(self._estop_endpoint.to_proto())

        # Push the modified configuration, targeting the config id we just fetched.
        active_config = self._estop_client.set_config(new_config, new_config.unique_id)

        # Log what the robot reports as the active endpoints after the change.
        for endpoint in active_config.endpoints:
            self._logger.info(type(endpoint))
            self._logger.info(endpoint)
            self._logger.info(endpoint.unique_id)
        self._unique_id = active_config.endpoints[0].unique_id
        self._estop_endpoint.register(active_config.unique_id)
        self._estop_keepalive = EstopKeepAlive(self._estop_endpoint)

But the new estop doesn't show up in the estop states output in /spot/states/estop.

Adding to new_config seems to basically instantiate a new endpoint? Running the above produced the following output for the endpoints:

[INFO] [1623929399.627386]: <class 'bosdyn.api.estop_pb2.EstopEndpoint'>
[INFO] [1623929399.628420]: role: "PDB_rooted"
name: "ros"
timeout {
  seconds: 9
}
cut_power_timeout {
  seconds: 13
}

[INFO] [1623929399.629124]: 
[INFO] [1623929399.629789]: <class 'bosdyn.api.estop_pb2.EstopEndpoint'>
[INFO] [1623929399.630374]: role: "ros_estop"
name: "ros"
timeout {
  seconds: 9
}
cut_power_timeout {
  seconds: 13
}

This also seems to have messed up the software estop somehow, because it's now stuck in the 1 state.
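
As an aside, one way to cross-check what the robot itself thinks the estop endpoints and stop level are (independently of /spot/states/estop) is to query the estop status directly. A minimal, untested sketch assuming the usual SDK setup; the hostname and credentials below are placeholders:

    # Minimal sketch (untested): query estop status directly from the robot.
    from bosdyn.client import create_standard_sdk
    from bosdyn.client.estop import EstopClient

    sdk = create_standard_sdk('estop_debug')
    robot = sdk.create_robot('192.168.80.3')   # placeholder hostname
    robot.authenticate('user', 'password')     # placeholder credentials

    estop_client = robot.ensure_client(EstopClient.default_service_name)
    status = estop_client.get_status()         # EstopSystemStatus proto
    print(status.stop_level)                   # overall stop level reported by the robot
    for endpoint_with_status in status.endpoints:
        print(endpoint_with_status)            # each registered endpoint and its status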

heuristicus commented 3 years ago

With the controller on, control released on it, but still holding cut motor power authority: if I start the driver and call the claim service with the default implementation (which uses force_simple_setup), the controller displays an error in the comms status:

(screenshot attachment: PXL_20210617_121106521)

This is expected, since forcing the setup kicks off any other endpoints that previously existed.

After stopping the driver, relinquishing cut motor power authority on the controller and then re-acquiring it fixes the problem; the controller's stop button no longer shows (ERROR).

The same applies to the robot itself: if you re-acquire cut motor power authority on the controller, it will kick off the driver.

heuristicus commented 3 years ago

Made some slight changes to the previous attempt; it seems like it should work, but it doesn't:

    def resetEStop(self):
        """Get keepalive for eStop"""
        # Fetch the currently active estop configuration from the robot.
        active_conf = self._estop_client.get_config()
        print("before adding to active conf")
        print(active_conf)
        # Create our endpoint and append it to the existing configuration,
        # rather than replacing the configuration wholesale.
        self._estop_endpoint = EstopEndpoint(self._estop_client, 'ros', 9.0, role="clearpath spot driver")
        new_endpoint = active_conf.endpoints.add()
        new_endpoint.CopyFrom(self._estop_endpoint.to_proto())
        print("after adding to active conf")
        print(active_conf)

        # Push the modified configuration, then try to register the endpoint against it.
        self._estop_client.set_config(active_conf, active_conf.unique_id)
        self._estop_endpoint.register(active_conf.unique_id)

When trying to register the endpoint, it fails with

[ERROR] [1623933797.137281]: Failed to initialize robot communication: bosdyn.api.RegisterEstopEndpointResponse (ConfigMismatchError): Registered to the wrong configuration.

The output of the prints is

before adding to active conf
endpoints {
  role: "PDB_rooted"
  name: "bosdyn.android.spotapp 5b647b795a0a5c01"
  timeout {
    seconds: 9
  }
  cut_power_timeout {
    seconds: 13
  }
}
unique_id: "10"

after adding to active conf
endpoints {
  role: "PDB_rooted"
  name: "bosdyn.android.spotapp 5b647b795a0a5c01"
  timeout {
    seconds: 9
  }
  cut_power_timeout {
    seconds: 13
  }
}
endpoints {
  role: "clearpath spot driver"
  name: "ros"
  timeout {
    seconds: 9
  }
}
unique_id: "10"

This looks like it should be correct. The unique IDs are the same, but maybe this isn't the right mechanism for overriding the configuration? We don't want to override it anyway; we just want to add this endpoint.
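
One possible explanation, though not confirmed: set_config returns the newly active configuration, which carries a fresh unique_id, and the SDK's force_simple_setup registers against that returned id rather than the id of the config it passed in. Registering against the pre-set_config active_conf.unique_id would then target a configuration that is no longer active, which would match the ConfigMismatchError. An untested sketch of that change, reusing the names from the snippet above:

    def resetEStop(self):
        """Get keepalive for eStop"""
        active_conf = self._estop_client.get_config()
        self._estop_endpoint = EstopEndpoint(self._estop_client, 'ros', 9.0, role="clearpath spot driver")
        new_endpoint = active_conf.endpoints.add()
        new_endpoint.CopyFrom(self._estop_endpoint.to_proto())

        # set_config returns the newly active configuration with a fresh unique_id;
        # register against that, not against the id the old config had.
        new_active_conf = self._estop_client.set_config(active_conf, active_conf.unique_id)
        self._estop_endpoint.register(new_active_conf.unique_id)

        self._estop_keepalive = EstopKeepAlive(self._estop_endpoint)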

heuristicus commented 3 years ago

Made a post on the spot discussion forum at https://support.bostondynamics.com/s/question/0D54X00006gosEySAI/using-controller-estop-along-with-sdk-software-estops asking about how it should be done.