I have designed some operators for model loss monitoring, such as the following contiguous fragment:
https://github.com/linjing-lab/easy-pytorch/blob/9651774dcc4581104f914980baf2ebc05f96fd85/released_box/perming/_utils.py#L269-L281
This approach takes only a small proportion of CPU runtime, and I don't want to burden the CPU with a separate search for the optimal model after training finishes.
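To illustrate the idea (this is a hypothetical sketch, not the actual perming implementation), the monitor keeps only the best snapshot seen so far, so each update is O(1) and no post-training scan is needed:

```python
import copy

class BestModelMonitor:
    """Hypothetical sketch: keep the lowest-loss snapshot during training."""
    def __init__(self):
        self.best_loss = float("inf")
        self.best_state = None

    def update(self, loss, state):
        # Copy the state only when the loss improves, keeping CPU cost small.
        if loss < self.best_loss:
            self.best_loss = loss
            self.best_state = copy.deepcopy(state)

# Usage: call update(val_loss, model_state) once per epoch.
monitor = BestModelMonitor()
for loss, state in [(0.9, {"w": 1}), (0.4, {"w": 2}), (0.6, {"w": 3})]:
    monitor.update(loss, state)
print(monitor.best_loss)  # 0.4
```

Because the best candidate is tracked on the fly, the optimal model is already known the moment training ends.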
https://github.com/tensorflow/adanet/blob/0364cc46810ff3831b3e4a37125de862a28da9bd/adanet/core/iteration.py#L743-L747
I don't think the function linked below responds quickly, because it selects the optimal model from all trained candidate ensembles by loss evaluation alone, which takes at least O(n) runtime. It's an implementation built on the tf API, but it doesn't take into account how users can reach the ideal combination, under the strategy they are using, in as few trials as possible.
https://github.com/tensorflow/adanet/blob/0364cc46810ff3831b3e4a37125de862a28da9bd/adanet/core/iteration.py#L1089-L1109
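The selection pattern I'm describing reduces to something like the following sketch (hypothetical names, not adanet's actual code): every trained candidate is evaluated and the minimum loss wins, an O(n) scan regardless of the user's search strategy.

```python
def select_best(candidates, eval_loss):
    # O(n): evaluates every trained candidate by loss alone.
    return min(candidates, key=eval_loss)

# Toy example with precomputed candidate losses.
losses = {"ensemble_a": 0.7, "ensemble_b": 0.3, "ensemble_c": 0.5}
best = select_best(losses, lambda name: losses[name])
print(best)  # ensemble_b
```

A strategy-aware approach could instead prune or rank candidates as they are trained, so the final selection does not require revisiting the full candidate set.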