Currently, each server has to check for clusters itself and access all the model collections through that cluster in order to make use of it. Otherwise, collections only ever return what's running on the exact node you've connected to, which isn't helpful when running the Hyper-V provider against a cluster management node.
A better solution might be to run some kind of query on connection, delegate it to an `is_clustered?` method on the compute connection, and then default to reading cluster information if the machine is part of a cluster.
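Something along these lines, as a minimal sketch only: `Connection`, `run_powershell`, `cluster_servers` and `local_servers` are hypothetical stand-ins for whatever the compute connection actually exposes, not the provider's real API.

```ruby
class Connection
  def is_clustered?
    # Probe once per connection and memoize the answer.
    @is_clustered = probe_cluster if @is_clustered.nil?
    @is_clustered
  end

  def servers
    # Collections read through the cluster when one is present, so callers
    # see every node's guests rather than only the connected node's.
    is_clustered? ? cluster_servers : local_servers
  end

  private

  def probe_cluster
    # Get-Cluster fails on a non-clustered node; treat any failure as "no cluster".
    !run_powershell('Get-Cluster').nil?
  rescue StandardError
    false
  end

  # Placeholders: the real connection would run commands over WinRM/PowerShell
  # and build model collections from the output.
  def run_powershell(_command); nil; end
  def cluster_servers; []; end
  def local_servers; []; end
end
```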
Or possibly provide a `hyperv_default_cluster` (?) parameter that can be passed to the connection, so that collections default to reading clustered information from that cluster.
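Roughly like this, again just a sketch with made-up names (`hyperv_default_cluster` as a connection option, placeholder collection helpers):

```ruby
class Connection
  def initialize(options = {})
    # nil keeps today's behaviour: only the connected node's resources.
    @default_cluster = options[:hyperv_default_cluster]
  end

  def servers
    if @default_cluster
      # Scoped to the cluster the caller explicitly asked for.
      cluster_servers(@default_cluster)
    else
      local_servers
    end
  end

  private

  # Placeholders for whatever the real collection lookups look like.
  def cluster_servers(_name); []; end
  def local_servers; []; end
end

# compute = Connection.new(hyperv_default_cluster: 'HVC01')
# compute.servers  # would return VMs across the whole cluster, not just one node
```

Since the caller names the cluster explicitly, it's at least obvious where cluster-wide results are coming from.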
Of course, this would be best done in a way that doesn't hide the implementation of said feature from the user, so that they know why they get the data they actually receive.