Closed littlepiper closed 1 month ago
Hey there! @littlepiper,
Normally, the time spent on batch inference for supported models is negligible compared to manual inspection.
However, if your use case is limited to inference services without inspection, I recommend performing offline inference directly with the original training framework. This avoids unnecessary interface overhead and makes more efficient use of resources.
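As a rough illustration of what headless offline labeling could look like, here is a minimal sketch. The `infer` function is a hypothetical placeholder standing in for your training framework's forward pass (its name, signature, and output format are assumptions, not part of any real API); the loop just walks an image folder and writes one JSON label file per image.

```python
import json
from pathlib import Path


def infer(image_path):
    # Hypothetical placeholder: swap in your framework's prediction call
    # (e.g. a model forward pass). Returns a list of detection dicts.
    return [{"label": "example", "bbox": [0, 0, 10, 10], "score": 0.9}]


def batch_label(image_dir, out_dir, exts=(".jpg", ".jpeg", ".png")):
    """Run offline inference over a folder, writing one JSON per image."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for img in sorted(Path(image_dir).iterdir()):
        if img.suffix.lower() not in exts:
            continue  # skip non-image files
        detections = infer(img)
        dst = out / (img.stem + ".json")
        dst.write_text(
            json.dumps({"image": img.name, "detections": detections}, indent=2)
        )
        written.append(dst)
    return written
```

No GUI is involved anywhere: the whole run is a single function call, so it can be driven from a cron job or a shell script.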
Since the interface seems unnecessary for batch automatic labeling, could you provide a way to operate without a GUI?