Writing parallel and distributed programs is often challenging and demands a lot of time to deal with concurrency issues. The actor model provides a high-level, scalable, and robust abstraction for building distributed applications: each actor encapsulates its own state and interacts with other actors only through asynchronous messages, which sidesteps most shared-state concurrency issues by design.
Xoscar implements the actor model in Python and provides user-friendly APIs for building applications on heterogeneous hardware:
Xoscar allows you to create multiple actor pools on each worker node, typically binding an actor pool to a CPU core or a GPU card. Xoscar provides allocation policies so that whenever an actor is created, it will be instantiated in the appropriate pool based on the specified policy.
When actors communicate, Xoscar will choose the optimal communication mechanism based on which pools the actors belong to. This allows Xoscar to optimize communication in heterogeneous environments with multiple processing units and accelerators.
Binary installers for the latest released version are available at the Python Package Index (PyPI).
# PyPI
pip install xoscar
The source code is currently hosted on GitHub at: https://github.com/xorbitsai/xoscar .
Building from source requires that you have cmake and gcc installed on your system.
# If you have never cloned xoscar before
git clone --recursive https://github.com/xorbitsai/xoscar.git
cd xoscar/python
pip install -e .
# If you have already cloned xoscar before
cd xoscar
git submodule init
git submodule update
cd python && pip install -e .
Here are the basic APIs of Xoscar.
import xoscar as xo

# stateful actor; for a stateless actor, inherit from xo.StatelessActor
class MyActor(xo.Actor):
    def __init__(self, *args, **kwargs):
        pass

    async def __post_create__(self):
        # called after the actor is created
        pass

    async def __pre_destroy__(self):
        # called before the actor is destroyed
        pass

    def method_a(self, arg_1, arg_2, **kw_1):  # user-defined function
        pass

    async def method_b(self, arg_1, arg_2, **kw_1):  # user-defined async function
        pass
import xoscar as xo

actor_ref = await xo.create_actor(
    MyActor, 1, 2, a=1, b=2,
    address='<ip>:<port>', uid='UniqueActorName')
import xoscar as xo
actor_ref = await xo.actor_ref(address, actor_id)
# send: call the method and wait for its result
await actor_ref.method_a.send(1, 2, a=1, b=2)
# equivalent to actor_ref.method_a.send
await actor_ref.method_a(1, 2, a=1, b=2)
# tell: send the message and return immediately, without waiting for the result
await actor_ref.method_a.tell(1, 2, a=1, b=2)
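The difference between send and tell can be illustrated with a minimal asyncio sketch. This is not Xoscar's implementation; ToyActor and its method are stand-ins used purely to contrast "await the reply" with "fire and forget":

```python
import asyncio

class ToyActor:
    async def method_a(self, x, y):
        await asyncio.sleep(0)  # stand-in for real work
        return x + y

async def main():
    actor = ToyActor()
    # send-style: the caller awaits the reply before continuing
    sent = await actor.method_a(1, 2)
    # tell-style: schedule the call and keep going without awaiting the reply
    task = asyncio.ensure_future(actor.method_a(3, 4))
    # ... the caller can do other work here ...
    told = await task  # the result can still be collected later (or ignored)
    return sent, told

print(asyncio.run(main()))  # → (3, 7)
```

With a real actor ref, tell is useful when the caller only needs the side effect and does not care about the return value.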
Xoscar provides a set of APIs for writing batch methods. Simply add the @extensible decorator to an actor method and create a batch version. All calls wrapped in a batch are sent together, reducing possible RPC cost.
import xoscar as xo

class ExampleActor(xo.Actor):
    @xo.extensible
    async def batch_method(self, a, b=None):
        pass
Xoscar also supports creating a batch version of the method:
class ExampleActor(xo.Actor):
    @xo.extensible
    async def batch_method(self, a, b=None):
        raise NotImplementedError  # this will redirect all requests to the batch version

    @batch_method.batch
    async def batch_method(self, args_list, kwargs_list):
        results = []
        for args, kwargs in zip(args_list, kwargs_list):
            a, b = self.batch_method.bind(*args, **kwargs)
            result = (a, b)  # process the request (placeholder logic)
            results.append(result)
        return results  # return a list of results
Inside the batch version, users can define how to process a batch of requests more efficiently.
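To make the args_list/kwargs_list pairing concrete, here is a plain-Python sketch (no Xoscar involved) of what a batch version receives: one (args, kwargs) pair per delayed call, walked together with zip. The bind-style unpacking is emulated with ordinary tuple and dict access:

```python
# Sketch of the batch-method calling convention: each delayed call contributes
# one positional-args tuple and one keyword-args dict, paired up by zip.
def batch_method(args_list, kwargs_list):
    results = []
    for args, kwargs in zip(args_list, kwargs_list):
        a = args[0]
        b = kwargs.get("b")      # default b=None, matching the actor signature
        results.append((a, b))   # "process" each request
    return results               # one list of results for the whole batch

# Two delayed calls, delivered to the batch method in a single shot:
print(batch_method([(10,), (20,)], [{"b": 20}, {}]))
# → [(10, 20), (20, None)]
```

In real Xoscar code, self.batch_method.bind(*args, **kwargs) performs this signature matching for you.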
Calling batch methods is easy. Use <method_name>.delay to build an individual call and <method_name>.batch to send the delayed calls together:
import xoscar as xo

ref = await xo.actor_ref(uid='ExampleActor', address='127.0.0.1:13425')
results = await ref.batch_method.batch(
    ref.batch_method.delay(10, b=20),
    ref.batch_method.delay(20),
)