Closed shubham-s-agarwal closed 2 months ago
We could write a test, but then it would make sense to perhaps test every combination (of config variations). Do we want to do that now?
You can do that if you like. But if you do, then I wouldn't recommend creating a separate test method for each configuration; instead, generate the various combinations of config values automatically and go through them one by one, probably with a subTest.
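Something along these lines, as a rough sketch rather than the actual test suite: it iterates over all combinations of two hypothetical boolean config flags (`fc2`, `fc3`) via `itertools.product` and wraps each one in `subTest` so failures are reported per combination. The `build_config` helper is a stand-in, not an existing function.

```python
import itertools
import unittest


def build_config(fc2: bool, fc3: bool) -> dict:
    # Hypothetical stand-in for constructing a real meta_cat_config.
    return {"fc2": fc2, "fc3": fc3}


class ConfigCombinationTests(unittest.TestCase):
    def test_all_config_combinations(self):
        # Generate every combination of the config flags automatically.
        for fc2, fc3 in itertools.product([True, False], repeat=2):
            # subTest keeps each combination's result labelled and isolated.
            with self.subTest(fc2=fc2, fc3=fc3):
                config = build_config(fc2, fc3)
                self.assertIn("fc2", config)


if __name__ == "__main__":
    unittest.main()
```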
But then again, if you don't believe it's something we need to test, then feel free to omit it.
I went through the combinations we would test, and I feel we can omit the testing since we previously tested the main module (two-phase learning).
Fix issues with compute_class_weights JSON serialization and enforce fc2 usage when fc3 is enabled
Resolved an issue where compute_class_weights returned a NumPy array, which caused an error when saving the configuration as JSON (JSON does not support NumPy arrays). The fix ensures compatibility by converting the NumPy array to a JSON-serializable format.
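A minimal sketch of the kind of conversion involved (the variable names here are illustrative, not the actual code): a NumPy array fails `json.dumps`, while its `.tolist()` equivalent serializes cleanly.

```python
import json

import numpy as np

# e.g. the kind of output compute_class_weights might produce
class_weights = np.array([0.3, 1.2, 2.5])

# json.dumps(class_weights) would raise a TypeError, since NumPy arrays
# are not JSON serializable; converting to a plain Python list avoids that.
serializable_weights = class_weights.tolist()

config = {"class_weights": serializable_weights}
print(json.dumps(config))
```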
Added a safeguard in the model_architecture_config for meta_cat_config. The current architecture assumes fc3 is only used when fc2 is enabled; if fc2 is set to False while fc3 is True, the model fails due to a mismatch in hidden layer sizes. The fix automatically enables fc2 whenever fc3 is set to True, preventing this error.
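A minimal sketch of the safeguard, assuming a config object with boolean `fc2` and `fc3` attributes (the names follow the description above and may differ from the real config class):

```python
from types import SimpleNamespace


def enforce_fc_consistency(model_architecture_config) -> None:
    """Enable fc2 whenever fc3 is requested.

    fc3 consumes the hidden size produced by fc2, so enabling fc3 without
    fc2 would lead to a hidden-layer size mismatch at model build time.
    """
    if model_architecture_config.fc3 and not model_architecture_config.fc2:
        model_architecture_config.fc2 = True


# Example: an inconsistent config gets corrected in place.
cfg = SimpleNamespace(fc2=False, fc3=True)
enforce_fc_consistency(cfg)
assert cfg.fc2 is True
```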