Closed: Simple2Sample closed this issue 2 years ago.
I think I found the issue. I assumed vis_encode_type was applying convolutions to all of the observations, not just the visual ones. I also don't have any visual observations in my project at this time. Does that mean it just defaults to "simple" because I have no visual observations?
Follow-up question: is there any way to add convolution layers or change the width of individual layers in the NN? I assume the NN is fully connected.
Hi @Simple2Sample
The available types for vis_encode_type are documented here: https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Training-Configuration-File.md#common-trainer-configurations
The default is 'simple'. It seems we did not clean up our documentation, though: fully_connected is listed as a potential option there but is not supported in the code. I will clean up the documentation and add a check so that specifying an unsupported visual encoder type raises a warning.
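For reference, vis_encode_type lives under network_settings in the trainer config. A minimal sketch, where the behavior name and values are placeholders rather than anything from your setup:

```yaml
behaviors:
  MyBehavior:                  # placeholder behavior name
    trainer_type: ppo
    network_settings:
      vis_encode_type: simple  # only consulted when the agent has visual observations
      hidden_units: 128        # one width shared by all fully connected hidden layers
      num_layers: 2            # number of fully connected hidden layers
```

As for changing layer widths: hidden_units sets a single width shared by all fully connected hidden layers, so individual layers cannot be sized independently from the yaml.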
That being said, a visual encoder will only be instantiated if your agent has a camera sensor. Unfortunately, it's not possible to modify the convolution hyperparameters from the yaml; however, if you are comfortable doing so, you can modify the encoders directly here: https://github.com/Unity-Technologies/ml-agents/blob/main/ml-agents/mlagents/trainers/torch/encoders.py#L177
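If you do go down that road, here is a minimal, self-contained sketch of what a 'simple'-style visual encoder looks like in plain PyTorch. The class name, layer counts, kernel sizes, and strides are illustrative assumptions; the actual implementation in encoders.py may differ in detail:

```python
import torch
from torch import nn


class SimpleVisualEncoderSketch(nn.Module):
    """Illustrative two-convolution visual encoder. Layer counts, kernel
    sizes, and strides are assumptions, not the exact ML-Agents code."""

    def __init__(self, height: int, width: int, channels: int, output_size: int):
        super().__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=8, stride=4),
            nn.LeakyReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2),
            nn.LeakyReLU(),
        )
        # Infer the flattened feature size with a dummy forward pass, so the
        # dense head adapts if you change the convolution hyperparameters.
        with torch.no_grad():
            dummy = torch.zeros(1, channels, height, width)
            flat = self.conv_layers(dummy).flatten(1).shape[1]
        self.dense = nn.Sequential(nn.Linear(flat, output_size), nn.LeakyReLU())

    def forward(self, visual_obs: torch.Tensor) -> torch.Tensor:
        # visual_obs: [batch, channels, height, width]
        return self.dense(self.conv_layers(visual_obs).flatten(1))
```

Editing the Conv2d arguments in the linked file in this same spirit is the kind of change that is currently only possible in code, not in the yaml.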
Ah thanks a lot for the clarification!
Describe the bug
I'm having an issue where vis_encode_type defaults to simple regardless of which option or neural network size I use. I'm trying to make it create a fully_connected network, but ML-Agents simply ignores the parameter for some reason. Since fully_connected does not have a minimum observation size, it should not, in my opinion, override my vis_encode_type setting.
The ML.yaml file I used:
What the terminal outputs. Notice the change in vis_encode_type:
Environment: