nicholas-leonard / equanimity

Experimental research for distributed conditional computation

Signal, Node and Chain #21

Closed nicholas-leonard closed 11 years ago

nicholas-leonard commented 11 years ago

Signal

A signal is passed along a Chain of responsibility (design pattern) as a request. It can be decorated (design pattern) by any Node in the chain, and it can be composed of sub-signals. It has a log interface that can be updated by any Node. Decoration is performed by adding an object to the signal's data table. Data is structured as a hierarchy of tables and values; in this respect, the signal is like an SQL database, where tables can be created, queried, updated, etc. Hence the structure of the Signal data (a tree) can differ from the structure of the Chain and Nodes (a cyclic graph).
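The hierarchical data table could look roughly like this. This is a minimal sketch in Python for brevity (the project itself is Lua/Torch), and the `Signal`, `put` and `get` names are hypothetical, not part of the codebase:

```python
# Hypothetical sketch of a Signal whose data is a hierarchy of tables
# (nested dicts here). Nodes decorate the signal by inserting entries
# under a namespace, much like creating and updating tables in a database.
class Signal:
    def __init__(self):
        self.data = {}  # hierarchical data: namespace -> sub-table or value
        self.log = []   # simple log interface any Node may append to

    def put(self, namespace, key, value):
        """Decorate the signal: create the namespace tables as needed."""
        table = self.data
        for name in namespace.split('/'):
            table = table.setdefault(name, {})
        table[key] = value

    def get(self, namespace, key, default=None):
        """Query the signal, like a SELECT on a namespaced table."""
        table = self.data
        for name in namespace.split('/'):
            table = table.get(name, {})
        return table.get(key, default)


signal = Signal()
signal.put('epoch/validator', 'current_error', 0.23)
print(signal.get('epoch/validator', 'current_error'))  # prints 0.23
```

Note that the data tree is independent of whichever Node wrote to it, which is what lets the Signal's structure differ from the Chain's.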

Node

A node handles Signals by updating its state and passing the Signal along to the calling Chain. A node is a component of a Chain, which is a composite (design pattern) of Nodes. Nodes cannot see each other; they communicate through the Signal.

Chain

A chain is also a Node, but it is composed of a list of Nodes to be iterated over (composite design pattern). A node can be referenced many times in a chain.
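The composite described above can be sketched as follows. Again a Python illustration only (the project is Lua/Torch); the `Counter` node is a made-up example showing that one Node object can be referenced twice:

```python
# Hypothetical sketch of the composite: a Chain is itself a Node that
# iterates over its child Nodes, handing each one the Signal in turn.
# Nodes never see each other; they communicate only through the Signal.
class Node:
    def handle(self, signal):
        # a concrete Node would update its own state and decorate the signal
        return signal

class Chain(Node):
    def __init__(self, nodes):
        self.nodes = nodes  # the same Node object may appear many times

    def handle(self, signal):
        for node in self.nodes:
            signal = node.handle(signal)
        return signal

class Counter(Node):
    def __init__(self):
        self.count = 0

    def handle(self, signal):
        self.count += 1
        return signal

counter = Counter()
Chain([counter, Chain([counter])]).handle(signal={})
print(counter.count)  # prints 2: the same node was referenced twice
```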

nicholas-leonard commented 11 years ago

--The problem is that an object must know its location in the tree
--SetNamespace ExtendNamespace (special nodes can change these, like validators, or just dedicated nodes)
--Propagators just change the state (which is a position in the tree)
-- but still have access to the remainder of the tree (current, parent, root, child)
-- what about objects like models that can be in many places?
-- they are to be stored at the highest level where they can be unique
-- all nodes are like Commands that have access to their state and the
-- state of the tree (root). They are called with execute() in order,
-- which allows them to modify the state before it is executed by the next one.
-- Nevertheless, each Node is oblivious to its predecessors in the chain
-- it knows only of the current task or mode, which will determine its behavior,
-- and of the state of the system, which it will use to perform its task.
--An issue is how to limit the granularity of this task. For example,
-- a propagator's doBatch and doEpoch could be divided into very small
-- Nodes. This would offer great flexibility in all, but it would
-- eventually become analogous to a language, requiring an interpreter.
--Building models would be very fine-grained and ultimately feel like procedural or
-- functional programming.

--So do we want OOP or a pipeline?
--If we refactor our propagators and such sufficiently, and allow for
--a means of decorating propagators, we will have the same result.
--The problem is that we cannot serialize what we decorate. As soon as
-- we change the metatable, the torch.factory cannot rebuild it during
-- unserialization. For that we would have to build our own decorator
-- factory. The typename is serialized and used to set up the metatable
-- during unserialization. We could modify the serializer to serialize
-- all decorators, which would have to be made available globally.
-- Decorators would not be torch classes. They would have a special type.
-- But for now the easiest thing to do is to hard-code what we need.
-- Propagators can have a chain of responsibility to pass on various
-- requests. For example, they can pass on a cost request holding
-- targets and predictions.

--Modules can still be encapsulated by Layers that provide
--additional facilities like logging, momentum, max col norm, etc.
--These could also be decorated (not yet).
--The model would be its own object of layers, and itself a layer.
--It would be serializable.

--We still have the problem of learning rate modifications.
--We build a StateOfTheEpoch and pass it along the tree and handlers
-- to see if anyone needs it or needs to modify it.
--So we still need a way of implementing a global state that we can
-- pass along the tree. Every component, because the tree is just a
-- composite pattern, gets called (state = onEpoch(state)), which they forward to
-- their components in a logical order. State data is identified as log or not.
--All extensions have a name and decorate this function? That would be
-- ideal.

-- The log is simply a matter of initializing objects with a
-- namespace. This is used to insert their entry into the log.
-- There is only one log each epoch, none for batches.
-- Each log is a bunch of leafs identified by a namespace and name.
-- A namespace is a chain of names.
-- The namespace must be standardized to facilitate analysis.
-- A log_entry is not limited to any one namespace? So for example,
-- I want to log layers as they are structured in the model (in which
-- position), but also based on their Module class. Nah. The class of
-- the layer should be logged such that we can later browse trees for
-- class-level statistics. So we should store the typename, as well
-- as the parent typenames.

--Once the state is passed along the subtrees, it is used by the
-- components to forward batches and so on. This state
-- is easily namespaced, just like the log. Actually they are the same,
-- except that some variables are specified as not-for-logging.
-- Objects may be initialized with a log-filter. A default is provided.
-- It is basically a namespace tree whose leafs' children in the state
-- tree are not logged. Or the filter can say which ones to log.
--So the state and log are the same: a simple table that can be traversed
-- and serialized. It should contain all data. Some of it should be
-- marked as unserializable.

--Objects should allow for being set up with such a state. But the
-- datasource shouldn't be serialized. And the state shouldn't contain
-- parameters; these are in the model. It should be divided into
-- constructor, epoch, batch, destructor namespaces. The batch should
-- not be accessible between propagators; it is only used as a convenience.
-- The constructor contains the design (see Builder). And the epoch
-- contains the data required for logging each epoch, and for each
-- component. It keeps track of changes in the hyper-parameters, and
-- also keeps a log of observations (error, col norm, etc.). The
-- destructor is for anything that occurs after all epochs are complete.
-- We would also require the model be clonable for early stopping.
-- We would like to keep it in memory for final testing. Another possibility
-- is to perform testing only when the validator finds a new minimum.
-- state.epoch.validator.minimum_error=4353
-- state.epoch.validator.current_error=2323
-- new minimum! do something...
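The four namespaces and the new-minimum check from the last lines of these notes could look roughly like this (a Python sketch for brevity; the keys are taken from the notes above, the values are the example numbers from the notes):

```python
# Hypothetical sketch of the namespaced state table described above,
# with the new-minimum check from the last lines of the notes.
state = {
    'constructor': {},  # the design (see Builder)
    'epoch': {'validator': {'minimum_error': 4353, 'current_error': 2323}},
    'batch': {},        # convenience only; not shared between propagators
    'destructor': {},   # anything occurring after all epochs are complete
}

validator = state['epoch']['validator']
if validator['current_error'] < validator['minimum_error']:
    validator['minimum_error'] = validator['current_error']
    # new minimum! do something: clone the model, run final testing, etc.
```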

nicholas-leonard commented 11 years ago

So obviously, the Signal-Node-Chain design doesn't seem to be a good fit for our purposes. It's flexible, but it would be too much of a pain to construct. It would be like coding in C instead of Python: micro-management. Anyway, I slept on it and came up with an idea which I hope will be the final one.

We use the Decorator design pattern for Propagators. So we have Propagator and PropagatorDecorator interfaces; the EarlyStopper would be a concrete PropagatorDecorator, etc. (We could also attempt to build a metatable with a special __index able to forward requests to the next decorator in the chain. I'm not sure this would be serializable, but it would do away with the need for Decorator interfaces.)
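The decorator chain sketched above could look like this. A Python illustration only (the project is Lua/Torch); the batch and error values are made up, and a real EarlyStopper would do more than track a minimum:

```python
# Hypothetical sketch of the Propagator/PropagatorDecorator pair, with
# EarlyStopper as one concrete decorator. The decorator forwards
# propagate() to the wrapped object and adds behavior around the call.
class Propagator:
    def propagate(self, batch):
        # stand-in for a real propagation pass; returns an error measure
        return {'error': sum(batch) / len(batch)}

class PropagatorDecorator:
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def propagate(self, batch):
        return self._wrapped.propagate(batch)

class EarlyStopper(PropagatorDecorator):
    def __init__(self, wrapped):
        super().__init__(wrapped)
        self.minimum_error = float('inf')

    def propagate(self, batch):
        result = super().propagate(batch)
        if result['error'] < self.minimum_error:
            self.minimum_error = result['error']  # new minimum: save model here
        return result

propagator = EarlyStopper(Propagator())
propagator.propagate([4, 2])   # error 3.0, a new minimum
propagator.propagate([8, 10])  # error 9.0, not a new minimum
print(propagator.minimum_error)  # prints 3.0
```

Because every decorator exposes the same propagate() interface, decorators can be stacked in any order without the wrapped Propagator knowing about them.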

We make use of the publish-subscribe design pattern, or mediator, for logging and inter-object communication/dependencies. This package seems to provide all the functionality we need: https://github.com/Olivine-Labs/mediator_lua .

All objects are setup() with the mediator, which they use to publish and subscribe. They subscribe to channels by creating closures containing themselves, which are passed as callbacks to the mediator. They publish to the mediator when they want to inform subscribers of a change. For example, learning rate is subscribed to by layers, model, optimizer and logger, and published by LearningRateDecorator. Ideally, only one object is responsible for publishing to a channel, else conflicts will arise. But this complicates things, since normally it is the Optimizer that would publish the learning rate during construction/setup. In this respect, it would seem much easier to store and communicate such values in a Signal that is passed around to all methods.
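The closure-based subscription could work roughly like this. This is a minimal Python sketch of the pattern, not the mediator_lua API; the Mediator and Layer classes here are stand-ins:

```python
# Minimal publish-subscribe sketch: objects subscribe by passing closures
# over themselves as callbacks, and a single publisher pushes
# learning-rate changes to every subscriber.
class Mediator:
    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, callback):
        self.channels.setdefault(channel, []).append(callback)

    def publish(self, channel, *args):
        for callback in self.channels.get(channel, []):
            callback(*args)

class Layer:
    def __init__(self, mediator):
        self.learning_rate = None
        # a closure containing self, passed as the callback
        mediator.subscribe('learning_rate',
                           lambda rate: setattr(self, 'learning_rate', rate))

mediator = Mediator()
layers = [Layer(mediator), Layer(mediator)]
mediator.publish('learning_rate', 0.01)
print([layer.learning_rate for layer in layers])  # prints [0.01, 0.01]
```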

A more complex example would be a ClassificationDecorator, which publishes non-standard channels like mean_classification_error, which are subscribed to by another decorator, say EarlyStopperDecorator. This fixes the current problem of inter-extension/observer communication in the current model.

As for serialization issues, we can keep unserializable objects in non-serializable (global) space and pass them as function parameters down the method chain, with no object holding them in self. This could be the case for mediators, datasources and datasets.

The logging problem is solved by having loggers subscribe to root channels. For serialization, the data is structured as a table mirroring the channel hierarchy, but it could also just be stored in files with channel names as column headers and channel values as column values.
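A logger along these lines might be sketched as follows (Python for brevity; the Logger class and the channel name are hypothetical, and the tiny Mediator is only a stand-in for a real publish-subscribe implementation):

```python
# Hypothetical sketch of a logger that subscribes to channels and stores
# published values in a table keyed by channel name; the same table could
# be dumped to a file with channel names as column headers.
class Mediator:
    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, callback):
        self.channels.setdefault(channel, []).append(callback)

    def publish(self, channel, value):
        for callback in self.channels.get(channel, []):
            callback(value)

class Logger:
    def __init__(self, mediator, channels):
        self.table = {}
        for channel in channels:
            # one closure per channel, appending published values as rows
            mediator.subscribe(channel,
                               lambda value, c=channel:
                                   self.table.setdefault(c, []).append(value))

mediator = Mediator()
logger = Logger(mediator, ['validator/error'])
mediator.publish('validator/error', 0.25)
mediator.publish('validator/error', 0.21)
print(logger.table)  # prints {'validator/error': [0.25, 0.21]}
```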