mottosso / maya-corollary

Client + Server model of a Maya deformer via ZeroMQ
MIT License

Minimum Viable Product #1

Open mottosso opened 8 years ago

mottosso commented 8 years ago

Motivation

The act of developing plug-ins for Maya can be improved.

Today, developing a plug-in involves:

  1. Compiling and distributing your plug-in for each version of Maya
  2. Having Maya running while developing and testing, hindering continuous integration
  3. Sticking to a single programming language, i.e. C++
  4. Using an outdated version of said programming language
  5. Using an outdated compiler for said programming language
  6. Limiting the use of your labour to Maya
  7. Ensuring that your plug-in does not cause Maya to crash
  8. Limiting yourself to the resources available to the main Maya process

This MVP will explore the possibility of a server/client implementation of high-performance computation to combat these disadvantages, with the goal of reaching a performance level of "good enough".

To some extent, such a plug-in may very well outperform a native one, thanks to simplified maintenance, easier testing and better library and language support (e.g. C++11, 14, 17).

If successful, it will allow you to:

  1. Compile and distribute a single version of your plug-in
  2. Develop and test without Maya; including continuous integration through e.g. Travis-CI
  3. Develop with any programming language, including JavaScript, Go, Rust and Perl
  4. Use any version of any programming language
  5. Utilise the most recent advancements in technology for your programming environment
  6. Use the same compiled plug-in outside of Maya; in any software (even simultaneously).
  7. Develop a plug-in that may crash, without affecting Maya
  8. Take advantage of the full hardware of your, and external, computers

Goal

This MVP should provide an externally running process with raw vertex positions, have that process modify the data in some way, and subsequently feed the result back into Maya.

Implementation

A node acting as a TCP endpoint from within Maya takes as input the .outMesh of another node of type mesh and provides a subsequent node with an .inMesh.

[image]

The endpoint transmits the information to an externally running process within which it is somehow being modified and later returned. The endpoint node then outputs the result of this transformation, like any other deformer would.
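The request/reply payload this implies can be sketched as a pair of plain JSON helpers. This is a sketch only; the key names mirror the snippets in the comments below, and the reply is assumed to be the bare list of deformed positions:

```python
import json


def make_request(positions, envelope, amplitude, offset):
    # Package vertex positions plus deformer settings for the external process
    return json.dumps({
        "positions": positions,  # list of [x, y, z]
        "data": {
            "envelope": envelope,
            "amplitude": amplitude,
            "offset": offset,
        },
    })


def parse_reply(payload):
    # The reply is assumed to be simply the deformed positions list
    return json.loads(payload)
```

The endpoint node would call `make_request` before sending and `parse_reply` on whatever comes back, regardless of which transport carries the bytes.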

Todo

  1. [x] Develop a Maya Python plug-in to transmit vertices as an array of vec3, suitable for GLM.
  2. [x] Develop a Python equivalent, receiving this data
  3. [x] Somehow modify this data
  4. [x] Return data to Maya Python plug-in
  5. [x] Replace xmlrpclib with zeromq
  6. [x] Repeat steps 2-5 with C++11
  7. [ ] Replace Maya Python plug-in with a C++ ZeroMQ server
  8. [ ] Replace serialisation method (JSON) with Cap'n Proto
  9. [ ] Profit

Measure

With the three IPC libraries, (1) xmlrpc, (2) pyzmq and (3) zmq, measure the relative performance of each.

The end result is a compiled application of high performance, communicating with a ZeroMQ server running within Maya. The default implementation within Maya will be a Python plug-in, with an optional compiled component for higher performance. I estimate roughly a 5% cost for IPC compared with natively developed plug-ins, excluding the potential performance gains mentioned in Motivation above.
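As a baseline for these measurements, timing the compute step in-process gives a floor to compare the IPC numbers against. A sketch; the repeat count is arbitrary and the sine-based compute mirrors the deformer used in the comments below:

```python
import math
import time


def compute(positions, data):
    # Same sine deformer as the Python examples in this thread
    for pos in positions:
        value = math.sin(pos[0] * data["amplitude"] + data["offset"])
        value *= data["envelope"]
        pos[1] += value
    return positions


def measure(fn, *args, repeats=100):
    # Average wall-clock seconds per call over `repeats` runs
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - t0) / repeats


positions = [[float(i), 0.0, 0.0] for i in range(1000)]
data = {"envelope": 1.0, "amplitude": 0.5, "offset": 0.0}
print("%.6f s/call in-process" % measure(compute, positions, data))
```

Swapping the direct `compute` call for a `proxy.compute` (or a ZeroMQ round trip) in the same harness then yields the IPC overhead directly.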

If successful, the technique may then also apply to other applications, including standalone use.

mottosso commented 8 years ago

Initial results, using Python for both server and client, xmlrpclib for IPC.

[image]

mayaCorollaryPlugin.py

import xmlrpclib  # xmlrpc.client in Python 3

# ...
proxy = xmlrpclib.ServerProxy("http://127.0.0.1:7070")
positions = proxy.compute(positions, {
    "envelope": envelope.asFloat(),
    "amplitude": amplitude.asDouble(),
    "offset": offset.asDouble(),
})

deformer.py

import math
from SimpleXMLRPCServer import SimpleXMLRPCServer  # xmlrpc.server in Python 3
def compute(positions, data):
    for pos in positions:
        value = math.sin(pos[0] * data["amplitude"] + data["offset"])
        value *= data["envelope"]
        pos[1] += value  # modify Y-coordinate
    return positions

server = SimpleXMLRPCServer(("127.0.0.1", 7070))
server.register_function(compute)
print("Listening on 127.0.0.1:7070")
server.serve_forever()

Thoughts

First of all, xmlrpc does implicit serialisation to XML and, by the looks of it, that serialisation is not fast. Unsure of what else could be the bottleneck in this scenario. How can I find out?
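One stdlib way to answer that question is to profile the XML marshalling in isolation, with no server or network involved. A sketch, using the Python 3 module names (`xmlrpclib` in Python 2):

```python
import cProfile
import io
import pstats
import xmlrpc.client  # xmlrpclib in Python 2

# A payload comparable to the deformer's: 10,000 vec3 positions
positions = [[float(i), 0.0, 0.0] for i in range(10000)]

profiler = cProfile.Profile()
profiler.enable()
payload = xmlrpc.client.dumps((positions,), "compute")  # what ServerProxy does on send
params, method = xmlrpc.client.loads(payload)           # what the server does on receive
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

If the marshalling functions dominate the profile here, serialisation, not transport, is the main cost.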

mottosso commented 8 years ago

Initial results with ZeroMQ, same algorithm. Notice the performance difference!

[image]

mayaCorollaryPlugin.py

# ...
self.socket.send(json.dumps({
    "positions": positions,
    "data": {
        "envelope": envelope.asFloat(),
        "amplitude": amplitude.asDouble(),
        "offset": offset.asDouble(),
    }
}))
positions = json.loads(self.socket.recv())  # REQ/REP: await the deformed reply

deformer_zmq.py

import json
import zmq

# compute() as defined in deformer.py above
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://127.0.0.1:5555")  # port arbitrary

while True:
    # Wait for the next request from the client
    data = json.loads(socket.recv())

    # Send the deformed positions back to the client
    socket.send(json.dumps(compute(**data)))

Thoughts

ZeroMQ is obviously built for speed, but we are still doing JSON serialisation/deserialisation through Python, and it is still passing through TCP. ZeroMQ may be compiled on both ends, but so is xmlrpclib(?).

We could likely increase performance further by serialising via something like msgpack, protobuf or cap'n'proto.
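Before reaching for any of those libraries, a stdlib struct sketch gives a feel for what flat binary framing buys over JSON in payload size alone (a proper schema format adds versioning and zero-copy reads on top of this):

```python
import json
import struct

# 100,000 vec3 positions, roughly a mid-resolution mesh
positions = [[float(i), 0.0, 0.0] for i in range(100000)]

# Text framing: what the JSON examples above put on the wire
blob_json = json.dumps(positions).encode("utf-8")

# Flat binary framing: xyz float32s packed contiguously, which is
# also the vec3 array layout GLM expects on the C++ side
flat = [c for p in positions for c in p]
blob_bin = struct.pack("%df" % len(flat), *flat)

print(len(blob_json), "bytes as JSON")
print(len(blob_bin), "bytes as packed float32")

# Round trip back to coordinates
restored = struct.unpack("%df" % len(flat), blob_bin)
```

Fewer bytes means less time spent in both serialisation and transport, which is where msgpack, protobuf and Cap'n Proto earn their keep.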

But the primary bottleneck is (likely) Python itself.

Once we have a C++ implementation, we can have a look at rigid performance comparisons!

mottosso commented 8 years ago

Initial results with C++11 deformer and Python plug-in.

[image]

Using the cppzmq bindings and nlohmann/json library.

#include <cmath>
#include "json.hpp"  // nlohmann/json, single header

using json = nlohmann::json;

void compute(json &j)
{
    // Read the settings once, outside the loop; json lookups are not free
    auto data = j["data"];
    auto frequency = data["frequency"].get<double>();
    auto offset = data["offset"].get<double>();
    auto amplitude = data["amplitude"].get<double>();
    auto envelope = data["envelope"].get<double>();

    for (auto &pos : j["positions"])
    {
        auto x = pos[0].get<float>();
        auto y = pos[1].get<float>();
        float value = std::sin(x * frequency + offset) * amplitude * envelope;
        pos[1] = y + value;
    }
}

Quite remarkable performance penalty; slower than Python-to-Python! The JSON library was admittedly excluding performance from his list of requirements, but I wouldn't have thought it'd be that affected. Could it be the ZeroMQ library? How? It's the same library as the one used through Python, but without marshalling its way through an inferior interpreter.