Closed: tomatenbrei closed this issue 2 years ago.
Hi @tomatenbrei - it sounds like you are trying to use a model that is not an ml-agents model. Unfortunately, the inference engine is not set up to support other models. cc: @mantasp
Hi @unityjeffrey, thanks for your response! I am aware that Barracuda is primarily made for the ml-agents model and only supports a limited set of layers/operations.
Yes, I extended the ml-agents model to allow variable input sizes with custom observations (e.g. for detecting a variable number of objects; see my old issue), and I am able to train it. The only new operation I use is `tf.reduce_max`, which seems to boil down to supported operations.
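For illustration, the reduction boils down to something like the following (a minimal NumPy sketch with made-up values; the actual model is built in TensorFlow with `tf.reduce_max`):

```python
import numpy as np

# Illustrative observations for a variable number of entities,
# shape (batch, n_entities, n_features). n_entities varies per step.
obs = np.array([[[1.0, 4.0],
                 [3.0, 2.0],
                 [0.5, 9.0]]])  # (1, 3, 2)

# Reducing with max over the entity axis collapses the variable
# dimension into a fixed-size (batch, n_features) summary.
pooled = np.max(obs, axis=1)
print(pooled)        # [[3. 9.]]
print(pooled.shape)  # (1, 2)
```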
The only thing that seems to prevent Barracuda from executing my modified ml-agents model is the dimensionality problem mentioned above. This problem already occurs with a model consisting only of the "supported" dense layer, as shown in the example.
My question is basically whether the described problem is simply a user error, whether it requires only a minimal change in the Barracuda codebase and could be added in the (near) future, or whether it is caused by an architectural choice and cannot easily be solved with the current implementation.
It would be really awesome to get this working with Barracuda!
@mantasp: I'd be happy to provide additional verbose output or minimalistic python and C# example scripts, if that helps.
thanks @tomatenbrei. @mantasp would be the best person to answer this. May I ask what your use case is for doing this?
Sure! My original problem is the selection of a target in an RPG-like scenario.
Imagine an environment with a variable number of entities (enemy and player (agent) characters). When an agent casts a skill, it first needs to determine which entity to target. For this, the agent needs observations that describe the environment accurately enough for a good decision to be made.
As the number of entities in the scene is variable and player characters can be targeted in a completely position-independent way, fixed-size observations have a lot of drawbacks. Inspired by the model used in OpenAI Five, I therefore wanted to try some sort of pooling/reduction technique that lets the agent simply acquire observations about every relevant entity (of some specific kind) in the scene and build its own internal representation from that information.
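Concretely, the pattern is roughly the following (a minimal NumPy sketch of the idea, not my actual model; the function name, sizes, and random weights are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_and_pool(entities, W, b):
    """Apply a shared per-entity dense layer, then max-pool over entities.

    entities: (n_entities, n_features) -- n_entities may vary per step.
    Returns a fixed-size vector of length len(b) regardless of n_entities.
    """
    hidden = np.maximum(entities @ W + b, 0.0)  # shared weights, ReLU
    return hidden.max(axis=0)                   # reduce over the entity axis

n_features, n_hidden = 4, 8
W = rng.standard_normal((n_features, n_hidden))
b = np.zeros(n_hidden)

# Two scenes with different entity counts both map to a same-sized vector,
# which can then feed the rest of a fixed-architecture policy network.
summary_a = embed_and_pool(rng.standard_normal((3, n_features)), W, b)
summary_b = embed_and_pool(rng.standard_normal((7, n_features)), W, b)
print(summary_a.shape, summary_b.shape)  # (8,) (8,)
```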
@tomatenbrei could you please send us your model? Thanks!
@mantasp: Unfortunately, I just realized that I had forgotten to merge ml-agents 0.8.2 until now. In this version, the error already occurs during the conversion process.
I guess it's easier to first address the dimensionality problem, so I am attaching very stripped-down model generation code. You will find the model and the logs in the attached .zip file.
Calling `simple_dense_model.py` with ml-agents 0.8.1 does not throw errors and produces `model_081.nn` from `frozen_graph_081.pb` with the output `log_081.txt` (but recall that inference inside Unity fails with this model). In version 0.8.2, I get the following error during the conversion process (see `log_082.txt`):
```
Traceback (most recent call last):
  File "[...]/ml-agents/ml-agents/mlagents/sandbox/simple_dense_model.py", line 37, in <module>
    export_model(sess)
  File "[...]/ml-agents/ml-agents/mlagents/sandbox/simple_dense_model.py", line 25, in export_model
    tf2bc.convert('log/frozen_graph_def.pb', 'log/model.nn', verbose=True)
  File "[...]\ml-agents\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1537, in convert
    i_model, args
  File "[...]\ml-agents\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1376, in process_model
    process_layer(l, o_context, args)
  File "[...]\ml-agents\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1207, in process_layer
    assert all_elements_equal(input_ranks)
AssertionError
```
Attachment: barracuda_3d_dense.zip
@tomatenbrei thanks!
Hey @mantasp, have you had a chance to investigate this issue yet? I would greatly appreciate it if you could estimate how serious this error is and whether it could be fixed in an upcoming release.
If you are aware of any Barracuda-compatible alternatives or workarounds for feeding tensors with three dimensions (of which the first two have variable size) into a dense layer, please let me know.
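One workaround I have been considering, sketched here only as shape algebra in NumPy (whether Barracuda's reshape actually supports dynamic leading dimensions is exactly what I am unsure about): since a dense layer only mixes the last axis, the variable leading axes could be folded into the batch axis before the layer and restored afterwards.

```python
import numpy as np

def dense_on_last_axis(x3d, W, b):
    """Apply a 2D dense layer to a 3D tensor by folding the leading axes.

    x3d: (batch, n_entities, in_features)
    Returns (batch, n_entities, out_features).
    """
    batch, n_entities, in_f = x3d.shape
    flat = x3d.reshape(batch * n_entities, in_f)  # 2D, dense-friendly
    out = flat @ W + b                            # ordinary dense layer
    return out.reshape(batch, n_entities, -1)     # restore leading axes

# Placeholder data: 2 batches, 3 entities, 2 features -> 5 units.
x = np.ones((2, 3, 2))
W = np.ones((2, 5))
b = np.zeros(5)
y = dense_on_last_axis(x, W, b)
print(y.shape)  # (2, 3, 5)
```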
When I create a model using 3D input for a `tf.dense` layer in Python, convert it to the Barracuda model format using `tf2bc.convert`, and try to start inference in Unity, I get the following error:

The input has shape `(-1, -1, 2)`. I use a `(2, 3, 2)` tensor during inference, which seems to be internally converted into shape `(1, 2, 3, 2)`. However, the loaded Barracuda model seems to expect shape `(1, 1, 1, 4)` instead.

Using inputs of shape `(-1, 2)` for the dense layer works as expected and does not throw any errors in Unity.

In other words: this model works and this model does not work.
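To make the two cases concrete, here is a NumPy-only shape check (placeholder all-ones weights; this is not the actual TF or Barracuda code, just the underlying math): applying a dense layer to the last axis is perfectly well-defined for both 2D and 3D input, so the failure appears to be in Barracuda's shape handling rather than in the operation itself.

```python
import numpy as np

W = np.ones((2, 4))  # dense layer: 2 inputs -> 4 units (placeholder weights)
b = np.zeros(4)

# 2D case, input shape (-1, 2): this variant works in Barracuda.
x2d = np.ones((3, 2))
y2d = x2d @ W + b
print(y2d.shape)  # (3, 4)

# 3D case, input shape (-1, -1, 2): the same matmul over the last axis
# is well-defined, e.g. for my (2, 3, 2) inference-time tensor...
x3d = np.ones((2, 3, 2))
y3d = x3d @ W + b
print(y3d.shape)  # (2, 3, 4)
# ...yet Barracuda rejects it, apparently expecting shape (1, 1, 1, 4).
```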
Could you give me some advice on how to further debug this issue? Or is it not yet possible to feed 3D input into a dense layer in Barracuda? Could I somehow get around that limitation?