Device Resident and Specific Tensors
The Tensor class in Glow is used extensively as an abstraction for typed memory on the host and is used in the compiler, runtime interfaces and backend-specific execution code. It’s able to serve a wide range of uses in the code by being fundamentally a thin wrapper around an underlying buffer, adding extra functionality through helper functions (such as the TensorHandle).
As we get closer to production with Glow (and especially the runtime components) we have encountered some use cases that our Tensor abstraction does not handle well, or at all. This document addresses a few categories of data for which we have no good abstraction in Glow and need a solution: Tensors whose data exists on the device but not on the host (Device Resident), and Tensors that exist on the host in a device-specific format (e.g. aligned or pinned for a specific accelerator). Currently we handle these cases implicitly (e.g. we just know that this tensor must be aligned correctly), or we transform the Tensor before and after each usage (e.g. in DeviceManager::execute).
The solution we find to handle these issues may affect other open questions about how to transfer data between devices (e.g. P2P) and how to represent non-standard Tensors (e.g. large tensors we can’t fit in Host memory).
This is a large document, but I believe the implementation can be fairly uncomplicated (so feel free to skip to the end).
Use Cases
Training (Device Resident)
When training weights we have a situation where there are mutable tensors which are modified on every execution, but are not inputs or outputs (until the run is complete). Our current DeviceManager implementations cannot handle this optimally, and must transfer the weights tensor from host to device at the start of each execution (minibatch) and back to the host at the end of each execution. This is a substantial overhead which can dominate training speed.
To support training we need a solution where trainable tensors are initialized before the first run and retrieved after the last epoch, but on all runs in between only the inputs (for the forward pass) and outputs (for loss calculation) are transferred to/from the device.
The above example demonstrates a typical training workflow where epochs are marked in green and each minibatch in purple. The weights are initialized at the beginning of the run, and are modified by the backward pass of each minibatch but reading and interpretation of the results is done only at the very end of the run. Every minibatch needs new inputs loaded from the training set, and the user may want to pull down the outputs of each minibatch to do loss calculation on the host.
For this simple (and common) example use case we could encode this information into the graph itself. In fact we may not even need any changes to Glow core to do so, simply mark the weights as constants and manually retrieve them from the card at the end of the training run (assuming the device does not prevent mutation of glow Constants).
We can also handle Tensor location dynamically, by transferring each Tensor to the location we need it at the time that we need it, which would allow us to perform the fewest transfers without any special knowledge or annotation required in the user’s graph.
Device Pinned Tensors (Device Specific)
Some vendors’ backend libraries require host memory to be in a particular format before initiating DMA or transfer to the device - most commonly with a defined alignment scheme. Some backends are “helpful” and provide a Host Tensor object that abstracts these details; the consequence is that we must use their allocator for Tensors or suffer an unnecessary copy for each Tensor that is transferred to the device (the NNPI Backend is a particularly good example of this issue).
The diagram below is a summary of Tensor memory copies done in glow for each request:
From this you can see that on every request we copy each input Tensor at least twice (The input copy in Onnxifi is not always necessary but IIRC we do it most of the time currently), and each output tensor three times (including DMA).
The grey sections in bold refer to copies that are done to transform the input Tensor into the format required by the Vendor’s runtime stack. We can eliminate these copies by creating glow::Tensors with the correct format in the Onnxifi layer. Doing this requires two changes to glow:
HostManager needs to provide the information about what format each input should be in to the Onnxifi layer, allowing it to create the appropriate Tensor. (TBD: how to memcpy into a Tensor with a non-default layout).
We need to be able to tell inside the runtime which format a particular Tensor is in, or rather just if it is or isn’t correct for the current execution device. This requires annotation on a per-Tensor level, since this applies to both Inputs and sub-partition activations.
This may not immediately seem related to the Device Residency example above, but both requirements can be conceptualized around the idea of opaque Device Specific metadata attached to a Tensor.
Colocated Partitions (Device Resident)
After partitioning a graph it’s possible for multiple partitions to be assigned to the same device (e.g. if we parallelize the inputs to a DAGNode across two devices). In this case, if the output of one node is the input of another we would not need to move the Tensor between partitions, but we currently transfer all outputs back to the host and all inputs up to the device. This is unnecessary overhead we can remove to improve latency and throughput.
In the example below a network with four inputs and one output tensor is partitioned into three components: two of which are run in parallel on different devices and the third combines their activations and produces the output:
Clearly all four inputs must be transferred to a device (1 and 2 to Device 1, 3 and 4 to Device 2), and the Output must be transferred back to the Host if we’re going to do anything with it. The two activations are different: Activation 1 must be transferred from Device 1 to Device 2, either via the Host or directly via a P2P transfer. Activation 2 is interesting in that nothing must be done to it at all, in fact anything we do to Activation 2 is work that we can optimize away.
To support the most efficient flow here we need to be able to leave Activation 2 on the device intentionally at the end of Partition B, and then when executing Partition C know that it’s already loaded so we don’t overwrite this with another Host Tensor. This requires the Glow runtime’s Executor to be able to manipulate Tensors which do not have data present on the Host (ie. just the Tensor metadata).
In the DeviceManager we need to know a) if the Tensor is Device Resident, b) which Device it is resident on, and c) where it actually is on the device (ie. the pointer to pass into execute). We’ll need to synchronize this information from one partition to the next.
This is similar to the first training use case but more complex, because it is not known at compile time what to do with the output activations of each partition. Consider the above example in the case that Partition C is the most expensive operator, and the ideal provisioning strategy is to assign Partition A to Device 1, Partition B to Device 2 and Partition C to both devices. In that case the actual device used to execute Partition C (and therefore what to do about the activations) is decided by the Executor at the time of scheduling C. To support this optimally we will need to transfer the activations at that time (i.e. only when the destination device is known).
Enabling Peer-to-Peer Data Transfer (Device Resident)
Peer-to-Peer transfer is another case where Tensors may be manipulated while resident on a device. If we think about DMA transfers to a device being an operation on a Tensor (or DeviceManager) then we could trigger P2P transfers in a similar way.
Assuming that we can model transferring data to the device as converting a Host Tensor to a Device Resident Tensor, and saving data back from the device as the inverse, then we can extend this to modeling P2P transfers by converting one Device Resident Tensor into a different Device Resident Tensor. It’s worth noting that this approach can only express P2P data transfer (“send” side), not P2P control flow (“receive” side).
Even if we do want the performance, or interoperability, of supporting Send and Receive nodes in the Glow graph, modeling P2P via Device Resident Transfers (which we need a solution for anyway) may allow us to support P2P faster and provide support while we work out the best thing to do.
Tensor Streaming (Non Resident!)
Another related and potentially overlapping feature we need to consider is the case of Tensors that are not resident on the Host or on any Device. Support for this will be necessary in the case of very large networks where all Tensors cannot fit on the Host at the same time. In this case Glow will need to be able to manipulate Tensors without access to their data, which is similar to when the Tensor is resident on a Device.
We could model this as a Device Resident Tensor where the device is a network device rather than one of the managed accelerators. They would likely share the same kinds of methods (Create, Load, Release) and metadata (Type, location and graph mapping). For most of Glow which cares only if the Tensor contents are manipulable we could handle the different kinds of Device Resident Tensor in the same manner, which could simplify handling code.
There are also some differences between the use cases which may make a combined implementation introduce complexity rather than reduce it:
Streamed Tensors are largely expected to be used for Constants (learned weights, embedding tables, etc) while Device Resident tensors are useful particularly for mutable Tensors (inputs/outputs/training) which are transferred to and from the device regularly. For this reason there may be limited, or no, code that could handle them interchangeably.
Streamed Tensors may not be able to be accessed out of order: ie. transferring data may not be possible on a per-Tensor basis. This would remove the benefit of lazily evaluating non-Resident Tensors.
Streamed Tensors do not exist on a glow “Device” and so the abstractions/interfaces we would make use of may not exist, or even make sense, for a Streamed Tensor. Particularly I’m thinking of the DeviceManager, but there may be others. We may find that a surface similarity between the cases breaks down once we get into the nuts and bolts.
Existing Runtime Functionality
Before developing an implementation approach it’s worth remembering what Glow can already do, because any approach we take here should not restrict currently supported functionality.
Specifically, Glow supports these features we should take into account:
Both the Glow compiler and the Partitioner modify and transform the input Graph. In particular we must be able to do the same kinds of graph level optimizations - particularly relevant are Constant optimizations which modify Tensor contents. Support for Device Residency or Specificity should not prevent us from any optimizations of the graph or DAG.
The Glow HostManager supports many partitioned graphs loaded at once, each representing multiple CompiledFunctions on multiple devices. We may need to automatically load balance how much memory is available to each graph based on usage.
The Glow runtime supports requests executed concurrently or in parallel. There are multiple kinds of parallelism supported within the runtime including cross device parallelism and parallelism within a single device (such as pipelining). The exact specifics of this behavior are determined by the capabilities of each device.
The Glow runtime supports heterogeneous partitioning, meaning that different inputs, outputs and activations of the same input Function may be used by not only different devices but different kinds of devices with very different semantics.
The Glow Provisioner will attempt to make best use of available hardware by assigning a single partition to multiple devices and allowing the Executor to load balance between them at run time. At compile time we cannot know with certainty which device all partitions will run on, making any static compile-time implementation particularly difficult.
What other Frameworks Do
XLA
XLA has the TransferManager interface that is subclassed by each backend (somewhat analogous to our DeviceManager, but does not handle execution) which provides a common interface for converting Literal objects (analogous to our Tensor) into ShapedBuffer objects (like a simplified Tensor, just buffer ptr, Type, Backend kind and Device ID).
TVM
In TVM device transfers are part of the DeviceAPI (again similar to DeviceManager, but does not handle execution) which is subclassed by each backend and provides a common interface for copying data from one address with an attached TVMContext to another address with an attached context. Each DLTensor (analogous to our Tensor) holds a DLContext (typedefed to TVMContext) which contains the Device type and Device ID.
In this example all Tensors are necessarily bound to a device and it is up to the caller to manage device tensors.
Pytorch
In Pytorch torch.Device is a required property of torch.Tensor: every Tensor is bound to a specific Device (“cpu”, “cuda:0”, etc). Transferring a Tensor to a device is equivalent to copy constructing a new Tensor with a different Device. Deciding how and when to transfer from host to device and back is up to the user in their program.
Summary
Each framework has a mechanism of denoting the device associated with a Tensor (or Tensor-like object) by additional state within the Tensor. In each case the transfer to device is done via the creation of a new object with a different Device.
There are a few reasons why we may want to consider something different:
Generally these systems were designed (like us) with Host Resident Tensors first and Device Resident Tensors (perhaps just GPU) later. In Glow we anticipate a wider range of Device types and want to support flexible heterogeneous partitioning.
Unlike any of the frameworks above the Glow runtime is built around the Partitioner, which introduces additional complexity to our model. Other frameworks can get away with considering Device Residency primarily at the beginning and end of execution, but we must also consider mid graph activations.
Glow has unusually sophisticated support for multi-device systems. In fact, we are unusual in providing any mechanism for automatically scheduling non-homogeneous networks across multiple devices. Much of the decision making we need to do here is left to the user in other frameworks.
Proposed Implementation
My proposal is to separate how and when transfers are done from where Device Residency information is stored: the DeviceManager handles the transfer logic, while Device Residency information is stored directly in the Tensor.
Storing Device Residency Information
After copying a Tensor’s contents to the device we need to store two pieces of information to retrieve it: which device it resides on, and where on that device it is.
Which device is easy: we have an existing object that corresponds to a specific device - the DeviceManager.
Where on the device is more complicated: The representation of a device tensor on the host is specific to that device, it may be a pointer in the device address space, some kind of named handle or a compound object containing information that is opaque to Glow. The safest and most flexible thing we can do is to keep the context completely opaque and not attempt to interpret it.
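A minimal sketch of what DeviceResidencyInfo could hold (the exact fields are an assumption; all we strictly need is the owning DeviceManager and an opaque location):

struct DeviceResidencyInfo {
  /* The DeviceManager for the device this Tensor's data currently lives on,
     or nullptr if the Tensor is not Device Resident. */
  DeviceManager *deviceManager{nullptr};
  /* Opaque, device specific handle for where on the device the data lives
     (e.g. a pointer in the device address space). Interpreted only by the
     owning DeviceManager. */
  void *context{nullptr};
};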
Then we need to associate that Info with a Tensor and store it. There are a few options here:
Store the DeviceResidencyInfo in the DeviceManager itself. This is convenient because it isolates everything about Device Residency below the line of the Device related API. This approach has the benefit that any Device Residency logic is isolated to the runtime path and introduces no additional complexity to the Optimizer or Compiler (this may not be true if we support Device Specificity in this manner, since we may want that information during Constant optimization).
The problem with this approach is that outside of the DM there’s no way to tell whether a Tensor is Device Resident or which device it’s on. The biggest risk that may cause is that Tensors that are resident and have state in the DeviceManager are passed to the user who doesn’t have the right information to free them appropriately, leaking device memory.
Store the DeviceResidencyInfo in a new mapping inside the ExecutionContext (keyed by Tensor). Similar to the previous approach, this also has the benefit that Device Residency information can be limited to the runtime path. This does not suffer from the issue of code outside the DeviceManager being able to tell if a Tensor is Device Resident or not since all layers of the runtime have access to the ExecutionContext.
However, this separates the Tensor itself from the DeviceResidencyInfo: if you have only the Tensor available there is no way to know whether it is Device Resident. This is particularly relevant if the Tensor goes out of scope while resident, since there is no way to safely free its device memory. It also creates another place where Tensor metadata must be synchronized (the Tensor, the PlaceholderBindings, this new mapping and the Device stack), which increases bookkeeping.
Store the DeviceResidencyInfo directly in the Tensor. The key benefit this provides is that the DeviceResidencyInfo is always alongside its associated Tensor and so it’s always possible to know if a given Tensor is Device Resident or not. This greatly simplifies the leak problem as well since Tensors can free their device memory in the destructor when they go out of scope (just as we do with Host memory now).
The downside is adding complexity to a Tensor object we use extensively throughout glow - particularly in places that do not require Device Residency as a consideration.
With that said, I think the best overall approach is including DeviceResidencyInfo in the Tensor class:
class Tensor {
private:
  DeviceResidencyInfo residency_;

public:
  /* \returns true if this Tensor's most up to date data exists on a device. */
  bool isDeviceResident();
  /* Sets DeviceResidencyInfo in the Tensor, registering it as existing on a
     device. */
  void setDeviceResidency(const DeviceResidencyInfo &info);
  /* \returns the DeviceResidencyInfo associated with this Tensor; its fields
     may be null. */
  const DeviceResidencyInfo &getDeviceResidency();
  /* Removes any associated DeviceResidencyInfo. Note that this does not affect
     the device in any way; in particular it does not free associated memory on
     the device. */
  bool clearDeviceResidency();
};
It’s worth repeating that only the metadata is in the Tensor object, any logic regarding transfers should exist elsewhere.
Another advantage of using the existing Tensor object to represent Device Resident Tensors is that we can use common infrastructure for pooling and organisation: for example it would allow us to replace the HabanaIOBufferPool with a standard glow::TensorPool which manages Tensors with DeviceResidencyInfo.
Transferring To and From the Device
The DeviceManager is the right place for methods that transfer Tensor data because transfer logic is inherently device specific and the DeviceManager is the runtime’s backend-specific component. It’s possible that we could use what we know about the various backend implementations to implement common code with backend specific hooks, but this would increase complexity - particularly if we don’t get assumptions about future devices right.
I propose the following API:
class DeviceManager {
  ...
  /* Copies the contents of \p tensor to the device. If \p location is set it
     represents a device specific location for the tensor. */
  bool transferToDevice(Tensor &tensor, void *location = nullptr);
  /* Copies the device buffer associated with \p tensor back to the host. If
     \p release is true, frees the device memory. */
  bool transferFromDevice(Tensor &tensor, bool release = true);
  /* Releases the device buffer associated with \p tensor without copying its
     contents, freeing the device memory. */
  bool releaseDeviceTensor(Tensor &tensor);
};
These three methods would need to be implemented in every DeviceManager, as the Executor (at least) will need a consistent way of managing Tensor location. In the case of devices that don’t support Resident Tensors, or where that concept doesn’t make sense (e.g. the CPU and Interpreter backends), transferToDevice and transferFromDevice can be no-ops.
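A minimal no-op sketch for such a backend (the Interpreter is used here purely as an illustration):

bool InterpreterDeviceManager::transferToDevice(Tensor &tensor, void *location) {
  // The Interpreter operates directly on host memory, so there is nothing to
  // transfer; report success so callers can treat all backends uniformly.
  (void)tensor;
  (void)location;
  return true;
}

bool InterpreterDeviceManager::transferFromDevice(Tensor &tensor, bool release) {
  (void)tensor;
  (void)release;
  return true;
}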
One decision that is important to be consistent about is what to do with the Tensor’s host memory while it is Device Resident. We could potentially free it, reducing Host memory pressure, and reallocate it on the transfer back to the host. I’m not certain the memory win would be worth the CPU cost of the free+malloc (I think it’s not), but I am sure we’d complicate a lot of calling code (specifically unit tests, which expect to be able to verify an output buffer defined before execute). I propose we don’t do anything to the Tensor’s memory in transferToDevice and simply overwrite it in transferFromDevice.
We also need to decide what to do when a user attempts to modify a Tensor while it is Device Resident. It’s convenient for many reasons to store DeviceResidencyInfo in the Tensor, but it’s also awkward that a Device Resident Tensor is not a valid object for almost all of a Tensor’s methods since the data buffer isn’t available. We could try something like automatically transferring the Tensor back to the Host before operating on it, but that could have unintended consequences. I suggest we just assert in these cases.
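For example, the data accessors could guard themselves with an assert (a sketch only; the data_ member name is an assumption about Tensor's internals):

char *Tensor::getUnsafePtr() {
  // The host buffer of a Device Resident Tensor is stale or missing; require
  // an explicit transferFromDevice() before touching the contents.
  assert(!isDeviceResident() &&
         "Cannot access the contents of a Device Resident Tensor");
  return reinterpret_cast<char *>(data_);
}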
For a more detailed example, take a device that exposes 3 methods for managing device memory: deviceMalloc, deviceFree and deviceMemcpy and represents Device Tensors via pointers in the device’s address space. We can implement Device Residency with the following code:
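A minimal sketch of that implementation (ExampleDeviceManager is an illustrative name, and the exact deviceMalloc/deviceMemcpy signatures are assumptions), using the DeviceResidencyInfo fields sketched earlier:

bool ExampleDeviceManager::transferToDevice(Tensor &tensor, void *location) {
  // Allocate a device buffer if the caller did not provide one.
  if (!location) {
    location = deviceMalloc(tensor.getSizeInBytes());
  }
  // Copy the host contents into the device buffer.
  deviceMemcpy(location, tensor.getUnsafePtr(), tensor.getSizeInBytes());
  // Record where the data now lives so it can be found later.
  DeviceResidencyInfo info;
  info.deviceManager = this;
  info.context = location;
  tensor.setDeviceResidency(info);
  return true;
}

bool ExampleDeviceManager::transferFromDevice(Tensor &tensor, bool release) {
  const DeviceResidencyInfo &info = tensor.getDeviceResidency();
  if (!info.deviceManager) {
    return false;
  }
  // Copy the device buffer back over the Tensor's host memory.
  deviceMemcpy(tensor.getUnsafePtr(), info.context, tensor.getSizeInBytes());
  if (release) {
    deviceFree(info.context);
  }
  tensor.clearDeviceResidency();
  return true;
}

bool ExampleDeviceManager::releaseDeviceTensor(Tensor &tensor) {
  const DeviceResidencyInfo &info = tensor.getDeviceResidency();
  if (info.deviceManager) {
    deviceFree(info.context);
  }
  tensor.clearDeviceResidency();
  return true;
}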
In this example we store the device address pointer in DeviceResidencyInfo.context and use the deviceMemcpy function to move data into and out of that buffer. This concept maps pretty well to how memory is managed in GPU backends, as well as the Habana backend.
When to Transfer Tensors
It would also be valuable to have a common scheme or ruleset for when to transfer Tensors to and from the device. This is complicated, since the optimal thing to do in each situation will depend on both the use case and the details of the device. There are a couple of high level options here:
Decide at compile time (statically) and copy eagerly. At the time we compile the network we know which placeholders map to Tensors that should or shouldn’t stay on the device, we know how the input graph will be partitioned and we know which device types the partitions will execute on. If we bake this information into the CompiledFunction then it (or the DeviceManager) is able to make the right decision about what to do with outputs on the device.
The diagram above demonstrates this on a graph with two partitions. In our current runtime the “Copy To” and “Copy From” blocks refer to copying all Tensors, but they could be modified to copy only the Tensors we did not want to remain on the device.
There are two complexities here: how to encode this Tensor transfer information at compile time (which is not graph based, at least currently) and what to do in the case where a partition could run on multiple devices (the best thing is probably to always copy to host in that instance, which is suboptimal).
Decide at runtime (dynamically) and copy eagerly. Decide at runtime which Tensors to copy and where. In this approach the CompiledFunction and DeviceManager do not have the information they need to make good decisions about what and where to copy (e.g. what the inputs to the next partition are, what device it will run on) and so this logic must be pushed into the runtime’s Executor.
This simplifies the DeviceManagers at the cost of a more complex Executor - which is a bad trade for two reasons: firstly, the Executor is only used in the full HostManager based runtime, which increases the amount of extra work a user must do outside of that environment; and secondly, the Executor is not a good fit for device specific information. For example, while we generally want to leave outputs on the device if they will be used again there, a certain device may have extremely constrained memory and want all outputs off as a rule - this distinction is difficult to achieve systematically with this approach and would require special cases in the Executor code.
Decide at runtime and copy lazily. In this approach we leave all tensors on the device by default, and the DeviceManager initiates whichever copies or transfers are required before execution.
Similar to how in the current implementation we loop over all placeholders in the PlaceholderBindings and copy the associated Tensor from the Host to the Device, we can loop over the same map and depending on the current Residency either copy it from Host to Device, from another Device to Host and then to this Device (or initiate a P2P transfer if that is supported) or do nothing. Putting this code in the DeviceManager means we could have common code to do this through DM interfaces which could be overridden in the event a particular backend needs to do something else.
Copying lazily means that we only attempt to copy Tensors that are used, and so always do the least amount of work no matter the situation. By allowing the DeviceManager to handle Residency for input Tensors we also unlock the ability to do the transfer early if it is more efficient to do so. One example: for a device that can do PCI transfers concurrently with execution (this is common), we could transfer the inputs of Partition B to the device while Partition A is executing (or not, depending on the runtime situation).
I think the lazy solution provides us the best combination of simplicity, performance and extensibility. The implementation might look like this (assuming P2P is not supported):
// In DeviceManager::execute()
{
TRACE_EVENT_SCOPE(context.get(), TraceLevel::RUNTIME, "loadPlaceholders");
// Copy over placeholders.
for (auto PH : bindings->pairs()) {
auto it = symbolTable.find(PH.first->getName());
if (it == symbolTable.end()) {
continue;
}
auto &symbolInfo = it->second;
auto *hostTensor = PH.second;
const DeviceResidencyInfo &info = hostTensor->getDeviceResidency();
if (!info.deviceManager) {
// If resident on no device, make it resident on this device.
transferToDevice(*hostTensor);
} else if (info.deviceManager != this) {
// If resident on another device transfer it via the host (no p2p).
info.deviceManager->transferFromDevice(*hostTensor);
transferToDevice(*hostTensor);
}
// else do nothing, it's in the right place.
}
}
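If a pair of devices did support P2P, the branch for Tensors resident on another device could instead initiate a direct transfer. A sketch, where supportsPeerTransfer and transferBetweenDevices are hypothetical DeviceManager methods:

// Hypothetical helper that replaces the "resident on another device" branch
// above when peer-to-peer transfers are available.
void ExampleDeviceManager::moveToThisDevice(Tensor &tensor) {
  const DeviceResidencyInfo &info = tensor.getDeviceResidency();
  if (!info.deviceManager) {
    // Not resident anywhere: copy host -> this device.
    transferToDevice(tensor);
  } else if (info.deviceManager != this) {
    if (supportsPeerTransfer(*info.deviceManager)) {
      // Direct device-to-device transfer, no round trip through the host.
      transferBetweenDevices(*info.deviceManager, tensor);
    } else {
      // Fall back to staging via the host.
      info.deviceManager->transferFromDevice(tensor);
      transferToDevice(tensor);
    }
  }
  // else: already resident on this device, nothing to do.
}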
Obtaining final outputs
We have concentrated so far on input and activation tensors, with the plan to leave outputs on the device as long as possible. We still need a savePlaceholders stage to pull results down from the device, but potentially only once, at the end of the execution. The implementation would be the inverse of the code above, but perhaps in the Executor or HostManager.
This covers existing behaviour but one of the key use cases of Device Residency is when you want to keep outputs of a network on the device between runs (such as weights in the training case). We don’t always want to transfer the outputs back to the host.
In the same way we have the choice to transfer eagerly or lazily:
The eager approach would involve marking certain Placeholders as “Copy-back” or “Leave-on-device” and having a separate Finalize method which would obtain the data left on the device. This works quite well for the simple training case.
The lazy approach would be to leave all final outputs on the device and let the user transfer any results back they want manually. This provides the user a greater ability to control when they have access to a Tensor without any wasted copies, at the cost of increasing the amount of code required by the caller. In this case I think we could provide a helper function on the HostManager or ExecutionEngine which would ensure all outputs are available on the host, or a configuration flag in the ExecutionContext which would do this automatically (for simple runs, unit tests, etc).
I like the lazy approach here as well for the same reasons. It makes decisions simpler inside the runtime (by removing decision making) and allows a wider range of configurations.
Take for example a training set up where some results are used every iteration (e.g. forward pass outputs for loss calculation), some results are used every N iterations (e.g. combining certain weights at the end of each epoch in a multi-device training run), and some results are used only at the end of the full run (e.g. the final trained weights). This is complex to express statically which would be needed for the eager approach.
Of course, it’s not strictly necessary that the “Copy-back” flag on Placeholders be static, it could be modified by the caller before each run. However, since Placeholders are held by pointer and used for multiple pipelined (or, in the case of multiple devices, concurrent) iterations, this would create a great deal of extra complexity (at least) and would require synchronized storage inside the Placeholder.
Laziness does introduce a further burden on the user of Glow: they must now consider whether the Tensors they receive back from a run are resident on the Host or a Device. We can’t automatically transfer back to the Host in all cases to support situations like the previous training example, but we can provide helpers to make this easier:
class HostManager {
  ...
  /* Transfers any Device Resident output Tensors in \p context back to the
     host. */
  void ensureOutputsAvailable(ExecutionContext &context) {
    for (auto &PH : context.getPlaceholderBindings()->pairs()) {
      const DeviceResidencyInfo &info = PH.second->getDeviceResidency();
      if (info.deviceManager != nullptr) {
        info.deviceManager->transferFromDevice(*PH.second);
      }
    }
  }
};
This would make transferring all results back to the host a one liner (we’d need this call on the ExecutionEngine as well). Since this is likely the most common case (especially for simple examples and unit tests) we could leave it the default behavior and add a configuration option to the ExecutionContext to disable the final transfer.
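For example (names as sketched above):

// After the run completes, pull any Device Resident outputs back to the host.
hostManager->ensureOutputsAvailable(*context);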
Pinned Tensors
Moving on from Tensors that exist on the device, we have the case of Pinned Tensors: Tensors that are resident on the Host, but in a potentially opaque Device-specific format. The information we’d need to store for these tensors is very similar to that we need for Resident Tensors:
Which Backend (potentially Device) the Tensor is Specific for.
Location and formatting information, which may need to exist in a Device Specific format.
Clearly, this could map directly to the DeviceResidencyInfo structure defined above. This could increase complexity, since we’ll need additional state to know whether a Device Specific Tensor is actually on the device or not. On the other hand, perhaps the distinction is not useful to us since we don’t (or can’t) manipulate the contents of pinned Tensors (e.g. because we don’t understand their alignment).
If all Backends need their Pinned Tensors configured from some small set of parameters then we may be able to represent this by a combination of core Glow features (e.g. by adding alignment and stride to Tensors, as most frameworks have). This could be simpler and more powerful in the short term, but adding a new parameter type causes much more churn.
In particular, if a Backend requires Pinned Tensors to be stored with some implementation specific metadata (such as a HostResource or BackendTensorInfo, etc) then an attempt by Glow to manage these Device Specific Tensors will result in duplication, at least in code but also potentially in work on the CPU.
Ultimately the most flexible and future proof infrastructure we can build is the ability for each Backend to allocate and free Pinned Tensors via whatever mechanism they use, and determine whether or not a Tensor is correctly pinned at the time they initiate a DMA transfer.
I propose we add the following to the DeviceManager:
class DeviceManager {
...
/* Allocates a pinned Tensor on the host ready for DMA to this device. */
Tensor allocatePinnedTensor(TypeRef type);
/* Releases a pinned host Tensor, freeing its memory to be reused. */
void releasePinnedTensor(Tensor& tensor);
};
Similar to Device Resident Tensors above, these interfaces support an implementation that uses object pooling via the standard glow::TensorPool if that’s what we want to do for that particular DeviceManager.
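For illustration, a backend could implement these by wrapping its vendor allocator; hostAlignedAlloc, hostAlignedFree, kDeviceAlignment and ExampleDeviceManager are hypothetical names:

Tensor ExampleDeviceManager::allocatePinnedTensor(TypeRef type) {
  // Allocate host memory with the alignment this device's DMA engine requires.
  void *pinned = hostAlignedAlloc(kDeviceAlignment, type->getSizeInBytes());
  // Wrap the unowned buffer in a glow::Tensor; the DeviceManager stays
  // responsible for freeing it via releasePinnedTensor.
  return Tensor(pinned, type);
}

void ExampleDeviceManager::releasePinnedTensor(Tensor &tensor) {
  hostAlignedFree(tensor.getUnsafePtr());
}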
The second component required to realize the performance win is the ability for the user of the Runtime to allocate Device Specific Tensors up front, avoiding a copy from a standard Host based Tensor into a Pinned Tensor. This could be achieved by a simple interface on the HostManager:
class HostManager {
...
/* Allocates each Placeholder in the \p context's PlaceholderBindings with a
   correctly pinned Tensor for the Function corresponding to \p networkName.
   If that Placeholder is already allocated (but not correctly pinned) its
   contents will be copied to the new Tensor. */
void allocatePinned(llvm::StringRef networkName, ExecutionContext &context);
};
In this way, the user does not need to know what the specifics of their pinned Tensors are and can trust the Glow runtime to do the right thing under the hood.
The DeviceManager’s transferToDevice will need to handle pinned and unpinned Tensors appropriately of course, but that is already true.
Interaction with Peer-to-Peer
The diagram below lays out three approaches to handling P2P data transfers on an input graph that is partitioned into two pieces (P1 and P2) on two devices:
Note: size of areas is related primarily to how much text had to go in them, not intended to show representative time costs.
The leftmost sequence shows the current approach without P2P (and without Device Residency). The Executor (here as Host) triggers each partition, and inside the DeviceManager we allocate any device tensors, copy Host Tensors to the device, execute the function and finally copy Device Tensors back to the host.
Pro: we’re already doing it...
Con: additional latency, cannot support P2P.
The middle sequence shows a P2P design involving send (and potentially recv) Nodes. In this case both devices allocate and load their Device Tensors at the start of the run, then the P1 Function initiates a data transfer into the previously allocated Device Tensor on Device 2. Execution of P2 may be triggered by the host or directly as a result of the P2P transfer.
Pro: Best mapping to P2P designs in other frameworks, best future latency potentiality (since we can support triggering execution from the Device rather than the Host).
Con: Most complex, Scheduling of P1 and P2 (and associated DM code) is very different. P2 Tensors must be allocated early, consuming Device memory for a longer time.
The rightmost sequence shows a P2P design using Device Transfers. In this case the DeviceManager does the same thing in each case in its setup/loadTensors phase: for each Tensor it does either a Host-to-Device transfer, a Device-to-Device transfer (P2P) or nothing, depending on the current location of that Tensor. Since we no longer need Tensors to exist on the host, the saveTensors phase is omitted and the Host transfers back any Tensors it needs at the end of the run (ie. the network’s outputs).
Pro: allows reusing of concepts & infrastructure for P2P and non-P2P Tensor transfers, removes special P2P handling logic from Executor and DeviceManager::execute. (🚨opinion🚨) Much simpler to reason about.
Con: Doesn’t lend itself to support of blocking Recv Node implementation, and so probably requires host scheduling of dependent partitions.
I like this last approach (at least in the short/medium term) because it removes the need to distinguish between devices that support P2P and those that don’t, we can just “do the right thing” inside the DeviceManager and everything works out.
Interaction with Tensor Streaming
Similarly to how we could model pinning via a Device Specific component, we could model Tensors that require streaming from the Network as being resident on a special kind of device (that can’t execute graphs, of course).
I don’t have a strong sense of whether this will work, or make sense in our system. Device Residency is a concept that is relevant only at run time, whereas streamed Tensors are currently only a compile time concern. It may be that they never overlap in handling code and so would benefit from a completely separate infrastructure.
Conclusion
By designing good abstractions I believe we can support Device Residency and Device Specificity (pinning) of Tensors with limited changes to core Glow. Regardless of the details, I’m reasonably confident that any approach we take here must be both dynamic and provide the ability to include device specific logic or configuration.
I’ve prototyped a (very) rough version of this approach with Lazy transfers and metadata stored in the Tensor. This is functional, and in the DLRM tracing example it greatly reduces the number of transfers to and from the device without affecting the result.
Streaming large Tensors and support for Peer to Peer are currently less well understood, and while these changes may give us good tools for implementing that functionality they may also not be a good fit.
Device Resident and Specific Tensors
The Tensor class in Glow is used extensively as an abstraction for typed memory on the host and is used in the compiler, runtime interfaces and backend specific execution code. It’s able to serve a wide range of uses in the code by being fundamentally a thin wrapper around an underlying buffer and adding extra functionality through helper functions (such the
TensorHandle
).As we get closer to production with Glow (and especially the runtime components) we have encountered some use cases that our Tensor abstraction does not handle well, or at all. This document addresses a few categories of data that we have no good abstraction for in Glow and need a solution for: Tensors with data that exist on the device but not the host (Device Resident) and Tensors which exist on the host in a device-specific format (e.g. aligned or pinned for a specific accelerator). Currently we handle these cases implicitly (e.g. we just know that this tensor must be aligned correctly), or we transform the Tensor before and after each usage (e.g. in DeviceManager::execute).
The solution we find to handle these issues may affect other open questions about how to transfer data between devices (e.g. P2P) and how to represent non-standard Tensors (e.g. large tensors we can’t fit in Host memory).
This is a large document, but I believe the implementation can be fairly uncomplicated (so feel free to skip to the end).
Use Cases
Training (Device Resident)
When training weights we have a situation where there are mutable tensors which are modified on every execution, but are not inputs or outputs (until the run is complete). Our current DeviceManager implementations cannot handle this optimally, and must transfer the weights tensor from host to device at the start of each execution (minibatch) and back to the host at the end of each execution. This is a substantial overhead which can dominate training speed.
To support training we need to find a solution where trainable tensors are initialized before the first run, retrieved after the last epoch but on all runs in between only the inputs (for the forward pass) and outputs (for loss calculation) are transferred to/from the device.
The above example demonstrates a typical training workflow where epochs are marked in green and each minibatch in purple. The weights are initialized at the beginning of the run, and are modified by the backward pass of each minibatch but reading and interpretation of the results is done only at the very end of the run. Every minibatch needs new inputs loaded from the training set, and the user may want to pull down the outputs of each minibatch to do loss calculation on the host.
For this simple (and common) example use case we could encode this information into the graph itself. In fact we may not even need any changes to Glow core to do so, simply mark the weights as constants and manually retrieve them from the card at the end of the training run (assuming the device does not prevent mutation of glow Constants).
We can also handle Tensor location dynamically, by transferring each Tensor to the location we need it at the time that we need it, which would allow us to do the least transfers without any special knowledge or annotation required in the user’s graph.
Device Pinned Tensors (Device Specific)
Some vendor’s backend libraries require host memory to be in a particular format before initiating DMA or transfer to the device - most commonly with a defined alignment scheme. Some backends are “helpful” and provide a Host Tensor object that abstracts these details, the consequence being that we must use their allocator for Tensors or suffer an unnecessary copy for each Tensor that is transferred to the device (the NNPI Backend demonstrates a particularly good example of this issue).
The diagram below is a summary of Tensor memory copies done in glow for each request:
From this you can see that on every request we copy each input Tensor at least twice (The input copy in Onnxifi is not always necessary but IIRC we do it most of the time currently), and each output tensor three times (including DMA).
The grey sections in bold refer to copies that are done to transform the input Tensor into the format required by the Vendor’s runtime stack. We can eliminate these copies by creating glow::Tensors with the correct format in the Onnxifi layer. Doing this requires two changes to glow:
This may not immediately seem related to the Device Residency example above, but both requirements can be conceptualized around the idea of opaque Device Specific metadata attached to a Tensor.
Colocated Partitions (Device Resident)
After partitioning a graph it’s possible for multiple partitions to be assigned to the same device (e.g. on two devices if we parallelize the inputs to a
DAGNode
). In this case if the output of one node was the input of another we would not need move the Tensor between partitions, but we currently transfer all outputs back to the host and all inputs up the the device. This is unnecessary overhead we can remove to improve latency and throughput.In the example below a network with four inputs and one output tensor is partitioned into three components: two of which are run in parallel on different devices and the third combines their activations and produces the output:
Clearly all four inputs must be transferred to a device (1 and 2 to Device 1, 3 and 4 to Device 2), and the Output must be transferred back to the Host if we’re going to do anything with it. The two activations are different: Activation 1 must be transferred from Device 1 to Device 2, either via the Host or directly via a P2P transfer. Activation 2 is interesting in that nothing must be done to it at all, in fact anything we do to Activation 2 is work that we can optimize away.
To support the most efficient flow here we need to be able to leave Activation 2 on the device intentionally at the end of Partition B, and then when executing Partition C know that it’s already loaded so we don’t overwrite this with another Host Tensor. This requires the Glow runtime’s Executor to be able to manipulate Tensors which do not have data present on the Host (ie. just the Tensor metadata).
In the DeviceManager we need to know a) if the Tensor is Device Resident, b) which Device it is resident on, and c) where it actually is on the device (ie. the pointer to pass into
execute
). We’ll need to synchronize this information from one partition to the next.This is similar to the first training use case but more complex, because it is not known at compile time what to do with the output activations of each partition. Consider the above example in the case that Partition C is the most expensive operator, and the idea provisioning strategy is to assign Partition A to Device 1, Partition B to Device 2 and Partition C to both devices. In that case the actual device used to execute Partition C (and therefore what to do about the activations) is decided by the Executor at the time of scheduling C. To support this optimally we will need to transfer the activations at that time (i.e. only when the destination device is known).
Enabling Peer-to-Peer DAta Transfer (Device Resident)
Peer-to-Peer transfer is another case where Tensors may be manipulated while resident on a device. If we think about DMA transfers to a device being an operation on a Tensor (or DeviceManager) then we could trigger P2P transfers in a similar way.
Assuming that we can model transferring data to the device as converting a Host Tensor to a Device Resident Tensor, and saving data back from the device as the the inverse then we can extend this to modeling P2P transfers by converting one Device Resident Tensor into a different Device Resident Tensor. It’s worth noting that this approach can only express P2P data transfer (“send” side), not P2P control flow (“receive” side).
Even if we do want the performance, or interoperability, of supporting Send and Receive nodes in the Glow graph, modeling P2P via Device Resident Transfers (which we need a solution for anyway) may allow us to support P2P faster and provide support while we work out the best thing to do.
Tensor Streaming (Non Resident!)
Another related and potentially overlapping feature we need to consider is the case of Tensors that are not resident on the Host or on any Device. Support for this will be necessary in the case of very large networks where all Tensors cannot fit on the Host at the same time. In this case Glow will need to be able to manipulate Tensors without access to their data, which is similar to when the Tensor is resident on a Device.
We could model this as a Device Resident Tensor where the device is a network device rather than one of the managed accelerators. They would likely share the same kinds of methods (Create, Load, Release) and metadata (Type, location and graph mapping). For most of Glow which cares only if the Tensor contents are manipulable we could handle the different kinds of Device Resident Tensor in the same manner, which could simplify handling code.
There are also some differences between the use cases which may make a compiled implementation introduce complexity rather than reduce it:
Existing Runtime Functionality
Before developing an implementation approach it’s worth remembering what Glow can already do, because any approach we take here should not restrict currently supported functionality.
Specifically, Glow supports these features we should take into account:
What other Frameworks Do
XLA
XLA has the
TransferManager
interface that is subclassed by each backend (somewhat analogous to our DeviceManager, but does not handle execution) which provides a common interface for convertingLiteral
objects (analogous to our Tensor) intoShapedBuffer
objects (like a simplified Tensor, just buffer ptr, Type, Backend kind and Device ID).See:
TVM
In TVM device transfers are part of the
DeviceAPI
(again similar to DeviceManager, but does not handle execution) which is subclassed by each backend and provides a common interface for copying data from one address with an attachedTVMContext
to another address with an attached context. EachDLTensor
(analogous to our Tensor) holds aDLContext
(typedefed toTVMContext
) which contains the Device type and Device ID.In this example all Tensors are necessarily bound to a device and it is up to the caller to manage device tensors.
See:
Pytorch
In Pytorch torch.Device is a required property of torch.Tensor: every Tensor is bound to a specific Device (“cpu”, “cuda:0”, etc). Transferring a Tensor to a device is equivalent to copy constructing a new Tensor with a different Device. Deciding how and when to transfer from host to device and back is up to the user in their program.
See:
Summary
Each framework has a mechanism of denoting the device associated with a Tensor (or Tensor-like object) by additional state within the Tensor. In each case the transfer to device is done via the creation of a new object with a different Device.
There are a few reasons why we may want to consider something different:
Proposed Implementation
My proposal is to separate how and when transfers are done from where Device Residency information is stored. The DeviceManager handles the transfer logic and stores Device Residency information in the Tensor directly.
Storing Device Residency INformation
After copying a Tensor’s contents to the device we need to store two pieces of information to retrieve it: which device it resides on, and where on that device it is.
DeviceResidencyInfo could look like this:
Then we need to associate that Info with a Tensor and store it. There are a few options here:
Store the DeviceResidencyInfo in the DeviceManager itself. This is convenient because it isolates everything about Device Residency below the line of the Device related API. This approach has the benefit that any Device Residency logic is isolated to the runtime path and introduces no additional complexity to the Optimizer or Compiler (this may not be true if we support Device Specificity in this manner, since we may want that information during Constant optimization).
The problem with this approach is that outside of the DM there’s no way to tell whether a Tensor is Device Resident or which device it’s on. The biggest risk that may cause is that Tensors that are resident and have state in the DeviceManager are passed to the user who doesn’t have the right information to free them appropriately, leaking device memory.
Store the DeviceResidencyInfo in a new mapping inside the ExecutionContext (keyed by Tensor). Similar to the previous approach, this also has the benefit that Device Residency information can be limited to the runtime path. This does not suffer from the issue of code outside the DeviceManager being able to tell if a Tensor is Device Resident or not since all layers of the runtime have access to the ExecutionContext.
However, this separates the Tensor itself from the DeviceResidencyInfo, if you have only the Tensor available and it is Device Resident there is no way to know. This is particularly relevant if the Tensor goes out of scope while resident there is no way to safely free its device memory. This creates another place where Tensor metadata must be synchronized (the Tensor, the PlaceholderBindings, this new mapping and the Device stack), which increases bookkeeping.
Store the DeviceResidencyInfo directly in the Tensor. The key benefit this provides is that the DeviceResidencyInfo is always alongside its associated Tensor and so it’s always possible to know if a given Tensor is Device Resident or not. This greatly simplifies the leak problem as well since Tensors can free their device memory in the destructor when they go out of scope (just as we do with Host memory now).
The downside is adding complexity to a Tensor object we use extensively throughout glow - particularly in places that do not require Device Residency as a consideration.
With that said, I think the best overall approach is including DeviceResidencyInfo in the Tensor class:
It’s worth repeating that only the metadata is in the Tensor object, any logic regarding transfers should exist elsewhere.
Another advantage of using the existing Tensor object to represent Device Resident Tensors is that we can use common infrastructure for pooling and organisation: for example it would allow us to replace the HabanaIOBufferPool with a standard glow::TensorPool which manages Tensors with DeviceResidencyInfo.
Transferring To and From the Device
The DeviceManager is the right place for methods that transfer Tensor data because transfer logic is inherently device specific and the DeviceManager is the runtime’s backend-specific component. It’s possible that we could use what we know about the various backend implementations to implement common code with backend specific hooks, but this would increase complexity - particularly if we don’t get assumptions about future devices right.
I propose the following API:
These three methods would need to be implemented in every DeviceManager as the Executor (at least) will need a consistent way of managing Tensor location. In the case of devices that don’t support Resident Tensors or where that concept doesn’t make sense (e.g. CPU and Interpreter backends)
transferToDevice
andtransferFromDevice
can be no ops.One decision that is important to be consistent about is what to do about the Tensor’s host memory while it is Device Resident. We could potentially free it, reducing Host memory pressure, and reallocate it on the transfer back to the host. Not certain if the memory win would be worth the cpu cost of the free+malloc (I think its not), but I am sure we’d complicate a lot of calling code (specifically in unit tests which expect to be able to verify the output buffer defined before execute). I propose we don’t do anything to the Tensor’s memory in
transferToDevice
and simply overwrite it intransferFromDevice
.We also need to decide on what to do about the case where a user attempts to modify a Tensor while it is Device Resident. It’s convenient for many reasons to store DeviceResidencyInfo in the Tensor, but it’s also awkward that a Device Resident Tensor is not a valid object for almost all of a Tensor’s method since the data buffer isn’t available. We could try something like automatically transferring the Tensor back to the Host before operating on it but that could have unintended consequences. I suggest we just assert in these cases.
For a more detailed example, take a device that exposes 3 methods for managing device memory:
deviceMalloc
,deviceFree
anddeviceMemcpy
and represents Device Tensors via pointers in the device’s address space. We can implement Device Residency with the following code:In this example we store the device address pointer in DeviceResidencyInfo.context and use the
deviceMemcpy
function to move data into and out of that buffer. This concept maps pretty well to how memory is managed in GPU backends, as well as the Habana backend.When to Transfer Tensors
It would be valuable also to have a common scheme or ruleset for when to transfer Tensors to and from the device. This is complicated, since the optimal thing to do in each situation will depend on both the use case and details of the device. There are couple of high level options here:
The diagram above demonstrates this on a graph with two partitions. In our current runtime the “Copy To” and “Copy From” blocks refer to copying all Tensors, but they could be modified to copy only the Tensors we did not want to remain on the device.
There are two complexities here: how to encode this Tensor transfer information at compile time (which is not graph based, at least currently) and what to do in the case where a partition could run on multiple devices (the best thing is probably to always copy to host in that instance, which is suboptimal).
This will increase complexity at the cost of simplifying DeviceManagers - which is a bad trade for two reasons: firstly the Executor is only used when using the full HostManager based runtime, which increases the amount of extra work the user must do outside of that environment, and secondly the Executor is not a good fit for any device specific information. An example may be that while we generally want to leave outputs on the device if they are used again there, a certain device may have extremely constrained memory and want to get all outputs off as a rule - this distinction is difficult to achieve systematically with this approach and would require special cases in the Executor code.
Similar to how in the current implementation we loop over all placeholders in the PlaceholderBindings and copy the associated Tensor from the Host to the Device, we can loop over the same map and depending on the current Residency either copy it from Host to Device, from another Device to Host and then to this Device (or initiate a P2P transfer if that is supported) or do nothing. Putting this code in the DeviceManager means we could have common code to do this through DM interfaces which could be overridden in the event a particular backend needs to do something else.
Copying lazily means that we only attempt to copy Tensors that are actually used, and so always do the least amount of work no matter the situation. By allowing the DeviceManager to handle any Residency for input Tensors we also unlock the ability to do the transfer early when that is more efficient. For example, on a device that can do PCI transfers concurrently with execution (which is common), we could transfer the inputs of Partition B to the device while Partition A is executing (or not, depending on what the runtime situation happens to be).
I think the lazy solution provides us the best combination of simplicity, performance and extensibility. The implementation might look like this (assuming P2P is not supported):
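(Again a sketch: pairs() is intended to be the existing PlaceholderBindings accessor, while getDeviceResidency() and the transfer methods come from the hypothetical example above.)

```cpp
// Sketch of a lazy loadPlaceholders step inside an example DeviceManager,
// assuming P2P is not supported. Helper names follow the earlier sketch and
// are illustrative, not existing Glow API.
void ExampleDeviceManager::loadPlaceholders(PlaceholderBindings &bindings) {
  for (auto &pair : bindings.pairs()) {
    Tensor *T = pair.second;
    DeviceResidencyInfo &info = T->getDeviceResidency();

    if (info.isDeviceResident()) {
      if (info.deviceManager == this) {
        // Already resident on this device: nothing to do.
        continue;
      }
      // Resident on another device and P2P is unsupported: bring it back to
      // the host first, then fall through to the normal host-to-device copy.
      info.deviceManager->transferFromDevice(*T);
    }
    // Host resident at this point: copy onto this device.
    transferToDevice(*T);
  }
}
```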
Obtaining final outputs
We have concentrated so far on input and activation tensors, with the plan to leave outputs on the device as long as possible. We still need a savePlaceholders stage to pull results down from the device, but potentially only once: at the end of the execution. The implementation would be the inverse of the code above, but perhaps living in the Executor or HostManager.
This covers existing behaviour, but one of the key use cases of Device Residency is wanting to keep the outputs of a network on the device between runs (such as the weights in the training case). We don't always want to transfer the outputs back to the host.
In the same way we have the choice to transfer eagerly (copy results back to the host as soon as they are produced, perhaps controlled by a static "copy-back" flag on each Placeholder) or lazily (leave results on the device until the caller asks for them).
I like the lazy approach here as well, for the same reasons: it keeps the runtime simple (by removing decision making from it) and allows a wider range of configurations.
Take for example a training setup where some results are used every iteration (e.g. forward pass outputs for loss calculation), some results are used every N iterations (e.g. combining certain weights at the end of each epoch in a multi-device training run), and some results are used only at the end of the full run (e.g. the final trained weights). This is complex to express statically, which the eager approach would require.
Of course, it's not strictly necessary that the "Copy-back" flag on Placeholders be static; it could be modified by the caller before each run. However, since Placeholders are held by pointer and used across multiple pipelined (or, in the multi-device case, concurrent) iterations, this would create a great deal of extra complexity (at least) and would require synchronized storage inside the Placeholder.
Laziness does introduce a further burden on the user of Glow: they must now consider whether the Tensors they receive back from a run are resident on the Host or on a Device. We can't automatically transfer everything back to the Host, because that would break situations like the training example above, but we can provide helpers to make this easier:
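For example, a one-call helper on the ExecutionContext (the name and its placement are illustrative assumptions, reusing the hypothetical residency helpers above):

```cpp
// Hypothetical convenience helper; the name and its placement on
// ExecutionContext are assumptions, not existing Glow API.
void ExecutionContext::transferOutputsToHost() {
  for (auto &pair : getPlaceholderBindings()->pairs()) {
    Tensor *T = pair.second;
    DeviceResidencyInfo &info = T->getDeviceResidency();
    if (info.isDeviceResident()) {
      // Pull the data back into the Tensor's host buffer (overwriting it) and
      // release the device copy.
      info.deviceManager->transferFromDevice(*T);
    }
  }
}
```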
This would make transferring all results back to the host a one-liner (we'd need this call on the ExecutionEngine as well). Since this is likely the most common case (especially for simple examples and unit tests), we could make it the default behavior and add a configuration option to the ExecutionContext to disable the final transfer.
Pinned Tensors
Moving on from Tensors that exist on the device, we have the case of Pinned Tensors: Tensors that are resident on the Host, but in a potentially opaque, device-specific format. The information we'd need to store for these Tensors is very similar to what we need for Resident Tensors: which device the Tensor is prepared for, plus an opaque device-specific context.
Clearly, this could map directly onto the DeviceResidencyInfo structure defined above. That could increase complexity, since we'd need additional state to know whether a Device Specific Tensor is actually on the device or not. On the other hand, perhaps the distinction is not useful to us, since we don't (or can't) manipulate the contents of pinned Tensors (e.g. because we don't understand their alignment).
If all Backends need their Pinned Tensors configured from some small set of parameters, then we may be able to represent this with a combination of core Glow features (e.g. by adding alignment and stride to Tensors, as most frameworks do). This could be simpler and more powerful in the short term, but adding a new parameter type causes much more churn.
In particular, if a Backend requires Pinned Tensors to be stored with some implementation-specific metadata (such as a HostResource or BackendTensorInfo, etc.), then an attempt by Glow to manage these Device Specific Tensors will result in duplication - in code at least, and potentially also in work on the CPU.
Ultimately, the most flexible and future-proof infrastructure we can build is the ability for each Backend to allocate and free Pinned Tensors via whatever mechanism it uses, and to determine whether or not a Tensor is correctly pinned at the time it initiates a DMA transfer.
I propose we add the following to the DeviceManager:
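Roughly this (a sketch: method names, signatures and the ownership model are illustrative, not a settled interface):

```cpp
// Sketch of new virtual methods added to DeviceManager; TypeRef is Glow's
// existing type handle, everything else here is illustrative.
class DeviceManager {
  // ... existing interface ...
public:
  /// Allocate a Tensor of \p type whose host memory is pinned/aligned in
  /// whatever way this device requires (e.g. using the vendor's allocator).
  virtual Tensor allocatePinnedTensor(TypeRef type);

  /// Release (or return to a pool) a Tensor obtained from allocatePinnedTensor.
  virtual void releasePinnedTensor(Tensor &&T);

  /// Returns true if \p T's host buffer is already correctly pinned for this
  /// device, i.e. a DMA transfer can be initiated without an extra copy.
  virtual bool isTensorPinned(const Tensor &T) const;
};
```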
Similar to Device Resident Tensors above, these interfaces support an implementation that uses object pooling via the standard glow::TensorPool if that’s what we want to do for that particular DeviceManager.
The second component required to realize the performance win is the ability for the user of the Runtime to allocate Device Specific Tensors up front, avoiding a copy from a standard Host based Tensor into a Pinned Tensor. This could be achieved by a simple interface on the HostManager:
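Perhaps something like the following (the method name and parameters are illustrative; the idea is that the HostManager routes the request to the DeviceManager of whichever device the network is assigned to):

```cpp
// Hypothetical HostManager addition; name, parameters and routing logic are
// illustrative. Uses llvm::StringRef and glow::TypeRef from existing headers.
class HostManager {
  // ... existing interface ...
public:
  /// Allocate a Tensor suitable for use with \p networkName. If the network is
  /// assigned to a device that needs pinned host memory, delegate to that
  /// device's DeviceManager::allocatePinnedTensor; otherwise return an
  /// ordinary host Tensor of \p type.
  Tensor allocateTensorFor(llvm::StringRef networkName, TypeRef type);
};
```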
In this way, the user does not need to know what the specifics of their pinned Tensors are and can trust the Glow runtime to do the right thing under the hood.
The DeviceManager’s transferToDevice will need to handle pinned and unpinned Tensors appropriately of course, but that is already true.
Interaction with Peer-to-Peer
The diagram below lays out three approaches to handling P2P data transfers on an input graph that is partitioned into two pieces (P1 and P2) on two devices:
Note: the size of the areas relates primarily to how much text had to fit in them; it is not intended to show representative time costs.
The leftmost sequence shows the current approach without P2P (and without Device Residency). The Executor (here as Host) triggers each partition, and inside the DeviceManager we allocate any device tensors, copy Host Tensors to the device, execute the function and finally copy Device Tensors back to the host.
The middle sequence shows a P2P design involving send (and potentially recv) Nodes. In this case both devices allocate and load their Device Tensors at the start of the run, then the P1 Function initiates a data transfer into the previously allocated Device Tensor on Device 2. Execution of P2 may be triggered by the host or directly as a result of the P2P transfer.
The rightmost sequence shows a P2P design using Device Transfers. In this case the DeviceManager does the same thing in every case in its setup/loadTensors phase: for each Tensor it does either a Host-to-Device transfer, a Device-to-Device transfer (P2P), or nothing, depending on the current location of that Tensor. Since we no longer need Tensors to exist on the host, the saveTensors phase is omitted; the Host transfers back any Tensors it needs at the end of the run (i.e. the network's outputs).
I like this last approach (at least in the short/medium term) because it removes the need to distinguish between devices that support P2P and those that don't: we can just "do the right thing" inside the DeviceManager and everything works out.
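Concretely, the per-Tensor dispatch in the load phase might look like this sketch (supportsPeerToPeer and transferDeviceToDevice are hypothetical helpers, building on the earlier examples):

```cpp
// Sketch of the per-Tensor decision in the DeviceManager's load phase when P2P
// may be available; supportsPeerToPeer and transferDeviceToDevice are
// hypothetical helpers, not existing Glow API.
void ExampleDeviceManager::loadTensor(Tensor &T) {
  DeviceResidencyInfo &info = T.getDeviceResidency();
  if (!info.isDeviceResident()) {
    transferToDevice(T); // Host-to-Device copy.
  } else if (info.deviceManager == this) {
    // Already resident here: nothing to do.
  } else if (supportsPeerToPeer(info.deviceManager)) {
    transferDeviceToDevice(T, info.deviceManager); // Device-to-Device (P2P) copy.
  } else {
    // No P2P path: fall back through the host.
    info.deviceManager->transferFromDevice(T);
    transferToDevice(T);
  }
}
```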
Interaction with Tensor Streaming
Similarly to how we could model pinning via a Device Specific component, we could model Tensors that require streaming from the Network as being resident on a special kind of device (that can’t execute graphs, of course).
I don’t have a strong sense of whether this will work, or make sense in our system. Device Residency is a concept that is relevant only at run time, whereas streamed Tensors are currently only a compile time concern. It may be that they never overlap in handling code and so would benefit from a completely separate infrastructure.
Conclusion
By designing good abstractions I believe we can support Device Residency and Device Specificity (pinning) of Tensors with limited changes to core Glow. Regardless of the details, I’m reasonably confident that any approach we take here must be both dynamic and provide the ability to include device specific logic or configuration.
I’ve prototyped a (very) rough version of this approach, with lazy transfers and metadata stored in the Tensor. It is functional, and in the DLRM tracing example it greatly reduces the number of transfers to and from the device without affecting the result.
Streaming large Tensors and support for Peer-to-Peer are currently less well understood, and while these changes may give us good tools for implementing that functionality, they may also not be a good fit.