WordPoints / hooks-api

A basic API for hooking into user actions https://github.com/WordPoints/wordpoints/issues/321
GNU General Public License v2.0

Entity Storage API #124

Closed JDGrimes closed 8 years ago

JDGrimes commented 8 years ago

An addition to the entity API that would disclose how entities are stored is something that we have been contemplating since early on. It was a necessary dependency of the entity query API (#1). However, the decision was made not to pursue the query API in the initial version of this project, and so it didn't seem necessary to force entities to offer storage information either.

However, more recently (#119, #120, #122), we've again seen the need for this. And in #123 we realized that providing information about the entities is really a main focus of our API. So failing to provide this storage information, when it should be relatively easy to do, is a mistake.

The biggest issue is the fact that not all of the information about how entities are stored is really so simple. This is more of an issue when we begin talking about the entity attributes and especially the relationships. Entity relationships can be rather complex things in terms of how they are stored, and a uniform syntax for describing them must likewise be rather complex. This is the biggest hurdle that we have to overcome, but it would seem to go right along with #122 anyway.

JDGrimes commented 8 years ago

Storage Type

The first thing that entities would have to define is the storage type. How is the entity stored? In the database? On the filesystem? At a remote URL? In memory? Somewhere else?

Each of these will naturally have its own relevant information then.
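To make that concrete, here is a rough sketch (the keys and slugs are hypothetical, not a proposed final syntax) of the sort of information each storage type might need to supply:

```php
// Hypothetical examples of per-storage-type info; none of these keys are
// settled, they just illustrate how different the relevant details would be.
$storage_info_examples = array(
	'db'     => array( 'table' => 'wp_posts', 'id_field' => 'ID' ),
	'array'  => array( 'values' => array( 'administrator', 'editor' ) ),
	'remote' => array( 'url' => 'https://example.com/api/things/' ),
);
```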

JDGrimes commented 8 years ago

Storing Storage Info

Previously we were planning to just offer this information on the entity object itself; however, in #123 we've explored the possibility that some information like this might be better stored separately. The reason for this is that we could make better use of composition that way, because otherwise we'll run into conflicting inheritance.

If it were just properties with simple values that were at stake we'd have little problem, but unfortunately we also have to define methods to accompany some of these properties, which will vary from one storage type to another. Other than duplicating these methods, we don't have a good way to resolve inheritance tree conflicts with other entity features. The solution would be some form of composition. (Traits might actually work well for this, but I digress.)

But perhaps there is another option: moving the methods outside of the entity classes themselves. This would only be possible if we make the properties public though. Otherwise we'd have to introduce simple getters for each property, which would be beside the point.

I think that any other time I'd jump for composition and we'd be done here. The problem is that what we're talking about are classes that are primarily information stores. Which means that every one of them would need its own class to compose with itself to supply this information. There would be huge class multiplication.

The only way around that I can see (short of making the properties public) is to just define these methods via an interface. So we'd end up implementing them separately on each class. This seems like duplication, but in the end it is only a single line in many of these classes, and it would not end up as an exact duplicate of the method body. Sometime in the future we might decide to come along and introduce some traits to fulfill some common patterns for how these methods work, but really they're not needed. (And I'm not sure that we'd actually want to use traits even then, because we'd have to reference $this from within the trait, which is generally a bad idea.)
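As a minimal sketch of the interface option (all names here are hypothetical, and this assumes WordPress is loaded so that $wpdb is available), each entity class would implement the declared methods itself, and each implementation usually amounts to a one-liner, so the "duplication" stays trivial:

```php
// Hypothetical interface for database-stored entities.
interface WordPoints_Entity_Stored_DBI {

	/**
	 * Get the name of the database table this entity is stored in.
	 *
	 * @return string The table name.
	 */
	public function get_storage_table();
}

// Each entity implements the method itself; the bodies are similar but not
// identical, so there is little to gain from composition or traits here.
class WordPoints_Entity_Post_Example implements WordPoints_Entity_Stored_DBI {
	public function get_storage_table() {
		global $wpdb;
		return $wpdb->posts;
	}
}

class WordPoints_Entity_User_Example implements WordPoints_Entity_Stored_DBI {
	public function get_storage_table() {
		global $wpdb;
		return $wpdb->users;
	}
}
```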

JDGrimes commented 8 years ago

Attribute Storage Info

Currently attributes don't really need any added storage information, since we already supply the storage field. We had thought to supply the storage type for attributes as well before, but of course this is really unnecessary, since attributes of this simple sort must be stored just as the parent entity is. (And if one wasn't, we'd need much more information than just the storage type from the attribute in order to be able to construct a query or even just retrieve the value for an entity.) I do wonder about how we would handle things like post meta though. It's possible that metadata will be made available through relationships instead of attributes. Or perhaps it would even be a new thing altogether, sort of like "foreign attributes". Anyway, the point is that introducing metadata or other foreign data as attributes will have other, much bigger problems than just storage types to think about. And as relationships naturally handle this already, we'll probably just end up using relationships instead of attributes for that anyway.

JDGrimes commented 8 years ago

RE: Storing Storage Info

Then again, maybe it would be better to store this information entirely separate from the entity classes, like in their own registry, as we said above. There is the possibility, for example, that we'll want to have the entities indexed by table name for #120. But then that's just for those entities that are stored in the database. And also, wouldn't we still need that information indexed by entity as well, for the query API? Yes, we would. So that's a separate thing, I think. But still, that doesn't invalidate the thought that it might be best to store this information somewhere else instead of on the entity objects. Although, while we might gain some simplicity for the entity objects in that way, we still end up with a conundrum: how do we store this data? Do we load it all at once, or on a per-entity basis? (Will it ever actually be needed per-entity? Yeah, I guess it would be for the query API.) If we put it into classes, we're just going to end up right back where we started in terms of class multiplication.

We've talked in #123 about introducing a generic interface for handling this sort of thing. But I guess we sort of already have that in our class registries. The main thing that they lack is a query API; however, such an API would really be of little use anyway when we don't have public properties to easily query by. So except when we're dealing with pure information, what we have is probably sufficient. And really, I'm not sure that we have a need to query the entities in that way anyway.

JDGrimes commented 8 years ago

Array storage type vs enumerable interface

The array storage type seems to overlap with the enumerable interface. I'm not sure they are exactly the same thing though. The point of the enumerable interface is to mark an entity as being confined to a limited set (well, like user roles and post types, etc.). But then, anything that is stored as an array will necessarily be somewhat limited, because it has to be stored in memory. We're not likely to get into the hundreds, and certainly not the thousands. And since the enumerable interface and surrounding code is really just intended to be a shim until the query API is introduced, I see no reason why we can't now remove it in favor of the array storage type. However, I suppose that the enumerable interface could have application beyond the array storage type, although perhaps this is unlikely. I guess maybe it would be best to keep it around then, just to separate storage implementation from storage features/attributes/whatever we'd call that.

JDGrimes commented 8 years ago

"Foreign" Attributes

Attributes technically don't have to be on the object at present, however. The get_field() method is protected, and retrieving the attribute value in this way is simply the default bootstrap. It is possible for attributes to override this. It is therefore technically possible for the attributes to be stored anywhere, and in any manner. This still doesn't mean that the attributes are likely to ever be stored in a completely different way than the entities, but it may not always be as simple as just providing the name of the field. Unless, of course, we force attributes to be on the entity objects. But in truth, I don't think that that is a good idea. Although we might decide to represent post meta as relationships, we might come across other situations with "foreign attributes" where that really doesn't make sense. We could, of course, just make those a completely different sort of thing, as we said above. They wouldn't be included in queries that didn't support them then (if we didn't introduce them until later when they were needed, that is), but then again, maybe that would be OK. In fact, I suppose that we could just have any such things be regular attributes now, but have them not support querying (maybe by returning false from get_field()).

Or, since we will be inventing some sort of complex syntax for the relationships anyway, perhaps we could do the same thing for the attributes. Then if the method returned an array instead of a raw string, we would know that we were dealing with a foreign field. I'm not sure if we'd still want to call the method get_field() then though. Maybe something like get_storage_info() instead.
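A quick sketch of how a consumer might interpret that (the get_storage_info() return shape and array keys here are hypothetical): a raw string would still mean a field on the entity itself, while an array would signal a "foreign" field described by a richer syntax.

```php
/**
 * Interpret an attribute's storage info (sketch only, hypothetical syntax).
 *
 * @param string|array $info A field name, or an array describing foreign storage.
 *
 * @return array Normalized storage info.
 */
function example_parse_attr_storage_info( $info ) {

	if ( is_string( $info ) ) {
		// A field on the entity itself, e.g. 'post_title'.
		return array( 'type' => 'field', 'field' => $info );
	}

	// Foreign storage, e.g.:
	// array( 'type' => 'meta', 'meta_type' => 'post', 'meta_key' => 'some_key' ).
	return $info;
}
```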

JDGrimes commented 8 years ago

Relationships

For the relationships, sometimes the relationship is determined based on the value of an attribute. In that case, all we need to give is the name of that field. However, in other cases it is much more complex, and this is where the issues are. It isn't just coming up with a sane manner of making the information available. It is that we also have to consider how this meshes with what we're doing with the entities. Because to introduce these interfaces for the entities but not have anything like that for the attributes and relationships seems kind of strange. However, the point I think is that it isn't really needed for the attributes and relationships. They just have to describe themselves in terms of the entity involved, and if they are stored differently than the entity that is OK, but it doesn't require us to supply such drastically different information, necessarily.

Inherited Storage Type

So we really don't need to introduce a custom method to supply the type of storage the relationship is in; it should just automatically inherit from the entity. If the get_storage_info() method just returns a raw string, then we know it is an entity field, which naturally has to be stored the same way the entity is, since it is part of the entity. The only possible objection to that would be that this could make handling the data more complex, since some info has to be inferred from the parents. If I just want to know "what type of storage medium does this relationship use?", I can't just use the relationship object itself to get that information in most circumstances. Then again, can we even get the relationship object without getting the object of the parent anyway? Yes, I suppose that technically we could, straight from the entity children registry.

Data dispenser vs data container

So in some ways, it seems like we'd be better off getting rid of the interfaces and going back to the single get_storage_info() method all around, to supply both the storage type and the other information. I wanted to avoid this if possible because then the entities become just a housing for the information, they don't actually work as a convenient means of passing that information around. Instead, we have to call a method that produces an array, and then pass that array around, usually along with the entity object as well. Which is silly in terms of both memory and execution time.

But maybe what we should do is make the storage type separate from the get_storage_info() method. That way we could get the storage type without having to pull out all of this information. Then we could wait to get that info until we really needed it. But still, we'd be in the same place at that point: we'd have to pull out an array that would have to hang around alongside the entity as long as we needed it.

So, all micro-optimization aside, placing this information behind methods in this way really seems to cramp the object's utility. On the other hand, we can't always supply the information just in a property because of database table names needing to come from $wpdb, etc. But maybe it would still make sense to split the storage type from this other information (if we supply it at all), since it should always be able to be in a property (maybe public). On the other hand, if we are going to just have the storage type be implied, then the only time we'd need to supply the storage type would be when we'd be supplying an array through get_storage_info() anyway.

Parent-relative Interfaces

Maybe it would make the most sense to have the attributes and relationships implement storage related interfaces based on how they are stored as well, but maybe on how they are stored relative to the parent rather than based on storage type. For example, an attribute could implement an interface that indicated that it was stored on an entity field (which would include a get_field() method), or else it could implement one of perhaps several interfaces that indicated that the attribute was stored apart from the entity.
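In code this might look something like the following sketch (the interface and class names are hypothetical):

```php
// An attribute stored in a field on the parent entity itself.
interface WordPoints_Entity_Attr_Stored_On_EntityI {
	public function get_field();
}

// An attribute stored apart from the parent entity (metadata, another table, etc.).
interface WordPoints_Entity_Attr_Stored_ApartI {
	public function get_storage_info();
}

class WordPoints_Entity_Post_Title_Example
	implements WordPoints_Entity_Attr_Stored_On_EntityI {

	public function get_field() {
		return 'post_title';
	}
}
```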

instanceof

The issue I see with that though is that it means that we'd have to run a bunch of instanceof checks, one for each type of attribute/relationship storage. It would be better to be able to just have a string slug or something to indicate this, because then it would be possible to switch or check against a list of supported types. On the other hand, it seems silly that we actually have to call the get_storage_info() method in order to determine whether we can support a particular attribute for a feature or not. So maybe it would make more sense to provide that info via its own method, and the other info via the generic method (or even some other way, with an interface or something).

JDGrimes commented 8 years ago

Context API

I think the most important thing here is that how exactly an entity is stored can change with context. Perhaps that is an edge-case, but we should at least understand what the implications are, especially when an entity from a particular context is loaded as the_value. The table name, for example, often depends upon what site context we are in. Should we then define it in a manner that is context agnostic, or should we determine context based on the loaded entity, if present, falling back to the current context? Or should changing contexts just be undefined behavior?

We already define what context an entity exists in, and the particular context ID when an entity is loaded. The latter is currently just assumed to be the ID of the current context at the time that set_the_value() is called. So I suppose that to match that we should just return the storage info relative to the current context at the time that it is called. Really, there isn't much else that we can do, whether we have the_context set or not, because context switching isn't currently supported. So there would be no way for us to switch to the_context to return the storage info anyway. I suppose that we should return info for the current context by default then, but behavior when the_context is set should be considered undefined for now (though it will likely still follow this behavior). But then, we'd never be able to define this behavior one way or the other in the future anyway, since it would be spread out across all of the entities. I suppose then that the storage info should always be returned in terms of the current context.
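For the database storage type this falls out fairly naturally, since $wpdb already resolves table names relative to the current site. A sketch (the method and array keys are hypothetical):

```php
class WordPoints_Entity_Post_Example_Storage {

	public function get_storage_info() {

		global $wpdb;

		// $wpdb->posts is the posts table for whichever site we are currently
		// switched to, so the info returned here is inherently relative to the
		// current context at the time of the call.
		return array(
			'type' => 'db',
			'info' => array( 'table' => $wpdb->posts, 'id_field' => 'ID' ),
		);
	}
}
```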

JDGrimes commented 8 years ago

Multiple Storage Types?

Would it ever be possible for a type of entity to be stored in multiple different places? That doesn't really make sense, because then the entity would have to be defined by more than just an ID, but also by where it was stored. So I guess we'd treat it as a different entity in that case. But then again, couldn't a map of IDs to storage types be maintained by the plugin offering the entity, so that just an entity ID was needed to retrieve an entity? Also, registering separate entities might not really work well, because the user may perceive them as being one entity, and may want settings to apply across them both. But I guess at present this is a real edge-case, and one whose bridge we can burn when we come to it. The worst-case scenario, I suppose, is that such an entity will supply storage info which may not be compatible with existing features that utilize the storage API.

JDGrimes commented 8 years ago

Foreign Attributes and Entity Change API

Foreign attributes will make the entity change API more complex, because we'll have to check not only for changes in the entity tables and relationship tables but also the tables for any attributes. Which is inane. Maybe it would be better to store this data separately from the entities, and perhaps on a per-data-type basis. So all of the data for the entities stored in the database would be in one place, and then it would be easier to get the list of tables and map them to the related entity/relationship/attribute. But then again, that doesn't seem possible, because table names can change with context, for example. So all things considered, I think that the only thing that we can do is provide this information on the entities. If something much better comes along in the future, we can refactor if it is worth it.

Still, we'll end up utilizing some sort of map—based on partial-matching of table names, I suppose—to map these tables to the attributes/relationships (and some entities, although many of them may be able to be auto-guessed). But I guess that will just have to be a separate thing. This does call into question exactly how we can know for sure which entities we can fully support within the entity change API, but that is really another issue altogether.

JDGrimes commented 8 years ago

Post terms relationship

One question is what to do with the terms. Even after the terms were split, we have to join two tables, term_relationships and term_taxonomy, because term_relationships only contains the term taxonomy ID. There is a ticket to combine the terms and term_taxonomy tables, but until that happens a join will be necessary.
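Roughly, any consumer building such a query against core's current schema would have to do something like this (sketch only):

```php
// Get the IDs of the terms in a given taxonomy for a post. The only
// entity-related column in term_relationships is term_taxonomy_id, so we have
// to join through term_taxonomy to get back to the term IDs.
global $wpdb;

$post_id  = 1; // Example post ID.
$taxonomy = 'category';

$term_ids = $wpdb->get_col(
	$wpdb->prepare(
		"SELECT tt.term_id
			FROM {$wpdb->term_relationships} AS tr
			INNER JOIN {$wpdb->term_taxonomy} AS tt
				ON tt.term_taxonomy_id = tr.term_taxonomy_id
			WHERE tr.object_id = %d AND tt.taxonomy = %s",
		$post_id,
		$taxonomy
	)
);
```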

JDGrimes commented 8 years ago

User roles relationship

We also have strange things going on with the user roles relationship. This is a case where the WP_User object offers the roles attribute at run-time, but the information it contains is not saved in a field in the users table, but instead has to be extracted from user meta, where it is stored in an array along with (potentially) capability slugs in addition to the role slugs.

But despite the fact that this is a mixed-bag serialized array in metadata, we had previously planned to still offer database storage info for it. This would require the consumer to be able to handle querying serialized data, however. So now I'm wondering if maybe it would be better to just supply the values from memory somehow instead. However, that makes them just about impossible to query, since the values aren't generally stored in memory. Instead, we'd have to offer a method that would grab the roles by user ID—after the query returning user IDs had run.

But because doing that would likely be so detrimental in terms of query volume, perhaps we would be better off allowing this to be queried anyway. After all, WordPress core does.
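For reference, this is roughly what such a query looks like (a sketch of the same approach core takes): the roles live in a serialized array in the '{prefix}capabilities' user meta, so the match is a LIKE against the serialized role slug.

```php
global $wpdb;

$role = 'administrator';

$user_ids = $wpdb->get_col(
	$wpdb->prepare(
		"SELECT user_id
			FROM {$wpdb->usermeta}
			WHERE meta_key = %s AND meta_value LIKE %s",
		$wpdb->get_blog_prefix() . 'capabilities',
		// Match the role slug inside the serialized array, e.g. '"administrator"'.
		'%' . $wpdb->esc_like( '"' . $role . '"' ) . '%'
	)
);
```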

JDGrimes commented 8 years ago

Compatibility Detection

A bigger question here is how do "features" (I'll call them consumers) know which entities they support and which ones they don't? It seems that we'd have to loop through the entities, and possibly all of their children (depending on whether the children were used by the consumer). But this seems ludicrous. Especially if it also means that we have to not just check a simple attribute on each entity/child but rather call a slew of methods and carefully analyze the results. (Though I suppose that one could argue that if we're looping through all of the entities/children anyway, what difference does a little more complexity make? On the other hand, of course, if there are times that we only need to deal with a single entity/child, the added complexity would be a huge annoyance.)

Versioning

Which leads us to ponder whether perhaps it would be better to version the storage info syntax. Maybe we could just have simple version interface markers. But then that would eventually require entities to implement a whole slew of interfaces (1.0, 1.1, 1.2, 2.0...). I guess it would be better to just have the entity have a method that returned a string version number. And I think it would simplify things if we had a convention that the number returned by the entity applied to all its children as well. But then, entity children can be added by any code, even if it doesn't actually control the entity code itself. (Though hopefully there will usually be very little call for this if we define all known entity children from the start, so perhaps we could relegate this concern to the status of edge-case, since the need to improvise/version-cross in this manner should likewise be rare.) An additional problem with such a convention is that it means that a single child entity which needed to use a newer version of the syntax could cause its parent and all its siblings to be unusable by a consumer, when that might not be necessary at all.

Another concern with the versioning approach is that when a module requires new syntax and just invents it, it would be unversioned until it was formally included into the core definition of the syntax. So it might make it easier for consumers to know what they could support, but it would make it more difficult to introduce new syntax for the entity storage API. Or at the very least, I don't think that it helps things in that regard.

And this only helps, of course, when the consumer at hand supports all of the possible syntax in a particular version. If it doesn't or can't for some reason, then it is really back at square one.

Named features

Which is why it would maybe be better to declare these as named syntax features, rather than versioning. That would also solve the issue with the modules needing to introduce new syntax features, I think, so long as there was a good naming convention in place and care was taken to officially register/declare the feature name before just putting it to use. (With versioning we could also just limit each new version to a single, discrete feature, but that wouldn't solve this latter issue. And also, it would still force us to support 1.3 before we could support 1.4, so it wouldn't actually solve that dilemma at all.)
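A minimal sketch of the named-features idea (the method name and feature slugs are hypothetical): the entity declares which syntax features its storage info uses, and a consumer just diffs that list against the features it knows how to handle.

```php
class WordPoints_Entity_Comment_Example_Storage {

	/**
	 * Declare the storage info syntax features this entity uses.
	 *
	 * @return string[] Feature slugs.
	 */
	public function get_storage_info_features() {
		return array( 'db_table', 'db_join', 'serialized_field' );
	}
}

/**
 * Check whether a consumer supports all of the features an entity uses.
 *
 * Returns true when no declared feature is missing from the supported list.
 */
function example_consumer_supports( $entity_storage, array $supported_features ) {
	return ! array_diff(
		$entity_storage->get_storage_info_features(),
		$supported_features
	);
}
```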

Processing syntax features

Ideally, of course, it would be nice if a module which had to use a new syntactical convention could also provide the utilities necessary for processing that syntax. But then, wouldn't such utilities need to be defined on a per-consumer basis? Even so, being able to register a handler to assist the consumer in parsing the syntax would be better than the feature being unusable at all, or the consumer having to be replaced with a modified version (which is accompanied by many disadvantages).

In some ways though, wouldn't this put us just about back where we started? Because we'd still have to go through and check that handlers were registered for that consumer for each syntactical feature that an entity used. So might it not be better to just analyze the data offered by each entity after all? Or, should each entity be assumed to have complete support, since the code which offers the entity can also offer the related handlers, if necessary? But then, we can only offer handlers for consumers that we know about in advance, so this really isn't an assumption that we can make. And also, when a new consumer is being introduced it could require that entities offer particular info, which again invalidates the assumption that entities will always be accompanied by the needed handlers for every consumer.

To work around this, we could require entities to be registered with each consumer, and then this would be a safe assumption to make. But then, doesn't that partly defeat the purpose of forward-composability?

So it seems that there is no getting around the need to check that we have registered handlers for each syntactical feature employed by each entity. I guess that this is still somewhat better than actually analyzing the returns of a bunch of entity methods in depth though, because the information about what features were used/supported would be in a more consumable format. Something to consider, however, is that this does open up another place for possible error on the part of the developer, resulting in a mismatch between the declared features and the features actually employed, although that could probably be remedied through proper functional tests. But the fact that this adds extra meta-information that takes up memory, etc., makes me reconsider the plausibility of some sort of "dry-run" API instead. I think in general though this is a small price to pay for the better performance and ease of use.

JDGrimes commented 8 years ago

Cont.

Information vs syntax features

Note that there are really two parts to all this: 1) the classes of information provided, and 2) the syntax features employed to provide that information. Some consumers may require different information than others, and each can only use the entities which provide that information. And even among entities that provide the information needed by the consumer, only those that also provide it in a format the consumer can interpret can actually be used. So I guess that the syntax features employed are something that really has to be defined for each information group, or possibly even sub-group.

What information is provided might be best determined via interfaces, while the syntax requirements would seem to definitely be better suited to a list. For the information types though, we might find that it would be easier to offer that in a list as well, as long as there might be a variety of different interfaces through which the information could be accessed. Though on the other hand, perhaps we'd need to also know whether that particular manner of providing the information (that particular interface) was supported as well.

JDGrimes commented 8 years ago

Syntax features and information structure

Let us explore how the information should be supplied by class methods of the entity storage API, so that we will best be able to handle various mixtures of syntax features.

Handler slug keyed array

We could return data from the storage method in an array keyed by the slugs of the handlers that could aid in processing that data. However, that would mean that we couldn't add features to a data handler without registering a custom handler over top of it. If we wanted to add a new option to a syntax feature, we'd instead have to either use a different key and register a custom handler (possibly a child class of the handler for the existing syntax options), or else we'd have to put the new option under its own key, separate from the main option. Although the latter option is not always feasible, because of the way that the information will be processed: because it is essentially just a modifier of existing syntax features, we'll usually need to take it into account simultaneously with the other information it relates to.

Custom object

Alternatively, wrapping the return value of a method in an object when it is more complex than the regular syntax would provide a simple means of differentiating new/modified syntax. It would then be easy for the consumer to check if the return value from a method was a string/array or an object, and if the latter to either handle it, based on the type of object, or else pass it off to custom handlers.
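A sketch of what that check might look like from the consumer's side (the interface name and the handle() method are hypothetical):

```php
// Anything beyond the regular string/array syntax is wrapped in an object
// that knows the slug of the handler it should be passed to.
interface WordPoints_Entity_Storage_Info_ObjectI {
	public function get_handler_slug();
}

function example_consume_value( $value, array $handlers ) {

	if ( is_string( $value ) || is_array( $value ) ) {
		// Regular syntax; the consumer processes this itself.
		return $value;
	}

	if ( $value instanceof WordPoints_Entity_Storage_Info_ObjectI ) {

		$slug = $value->get_handler_slug();

		if ( isset( $handlers[ $slug ] ) ) {
			// Pass the object off to the registered custom handler.
			return $handlers[ $slug ]->handle( $value );
		}
	}

	return false; // Unsupported syntax.
}
```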

Single method vs many methods

Especially if we do this latter option, we're talking about something that would be per-sub-group. So when we're talking about "handlers", they wouldn't be handlers for that particular group, but for a subgroup, and in fact for just a single unit of that subgroup. However, this is a natural trade-off that is necessary to solve the granularity problem of the uni-method + slug-keyed array option mentioned above, where we have trouble modifying existing syntax if the new options can't be granularly separated from the existing syntax because of processing.

Though of course the keyed array doesn't have to be tied to the single-method idea. And indeed, we identified a way that we could work around that issue with it: we'd just have to register a custom handler and use a custom key. Which is essentially the same thing that we'd be doing with the "return objects" option. The main difference is in the amount of structure imposed by interfaces and classes, as opposed to the more free-hand array idea. In many ways, the array idea is simpler, in that it doesn't require us to introduce a bunch of classes. However, the bigger issue is really whether the structure provided by using custom class objects instead of arrays is actually beneficial to us.

Aside from the (perhaps only perceived) benefit of more structure, the objects option's major attraction for me just now was the fact that it would be more granular, that we'd be registering custom handlers at a more specific level, for subgroups or subgroup units rather than for the entire information group. However, as we've noted above, the array option can work at that level too. So basically the main difference between these two options is that one uses objects, the other arrays.

I will say though, that in this regard, the array option actually seems a bit less versatile. The reason being that we're pretty much left with the options of either living with ambiguity or using slug-keyed arrays exclusively. The fact is that, insofar as much of the syntax is sort of built-in, it can be offered very simply as just a string or array value. But when we come to the point of needing to add something new, and want to use the slug-keyed array, it is then ambiguous whether our array is the slug-keyed array or just a "regular" array representing a built-in feature of the syntax. To avoid that, we'd have to always return handler-slug-keyed arrays (possibly with an exception for raw strings), which seems silly when our array will only have one element, and indeed, it seems counter-productive when the feature that we want is actually built-in, not in a custom handler. It is for this reason that the idea of using the arrays seemed to be best suited only to the uni-method design.

JDGrimes commented 8 years ago

"Built-in" syntax features?

However, we might well question whether there is really anything of substance wrong with returning such an array when the feature is "built-in", and indeed perhaps we should consider whether syntax features should really be "built-in" as such at all. Perhaps each consumer should only provide a basic framework for consuming the API, but the processing should be done entirely by handlers, so that there really wouldn't be any hard-coded "built-in" syntax.

This is obviously what we'd tend to do at the storage type level: we wouldn't build in handling for a particular storage type, but we'd have custom handlers for each storage type that all had a common interface. But in this case we're just talking about doing that at a higher level, for a single unit of a single sub-group of that storage type.

JDGrimes commented 8 years ago

Dictating consumer behavior?

Really though, aren't we presuming quite a lot in regard to consumers? They really are entirely out of our control (more or less). At least, I'm not sure how much we should be assuming or dictating in regard to their operation. Especially when some might be much simpler and more focused than others, meaning, I suppose, that they'd have no need for all of this handlers-within-handlers stuff.

I suppose, however, that we aren't really forcing the handlers to behave this way, we are simply imagining they would want to because it would provide the greatest possible flexibility.

JDGrimes commented 8 years ago

Dictating storage type structure?

At the same time, we are also showing quite a bit of presumption in regard to different storage types. We're assuming that they are all going to follow a single pattern in how they present their data. And though they do seem more under our control than consumers, we're still assuming that there is a one-size-fits-all solution. Perhaps it is necessary for us to dictate some things about how storage types should be sub-structured, but in truth this is only a suggestion on our part, and it isn't something that we can force upon them, per se. Not in the same way that we can force a class that wants to partake in an API to define certain methods because we code against an interface.

I guess partly this is all just an observation; it likely just isn't possible for us to build this in a sane manner that would force this structure on them. But I do wonder, at the very least, if perhaps we could provide more than just a common design pattern for consumers and storage types to follow, but actually some sort of basic framework or utilities or something. But then it might be best to burn that bridge when we come to it, that is, when it comes time to build some consumers.

So in short, each storage type can return data in any way that it chooses, so long as custom handlers can be registered to handle it at some level. But here we are just discussing the basic design pattern that we will likely use for our core storage types, in order to make it easier for us to consume the data and process evolving syntax.

JDGrimes commented 8 years ago

Objects vs arrays

So then, we return to the question of how we want our storage types (and I must admit that in this discussion my focus has subconsciously been on the database storage type particularly) to present the data so that the syntax will be able to freely evolve as needed.

Object pros

One benefit of the structure provided by the object approach is that we'd know that we were getting the full slate of data that we were expecting for that handler. Whereas with the arrays, we have no guarantee that every data key that we expect will be present, which could make processing more complex. On the other hand, the objects wouldn't actually force us to supply all of the data, or meaningful data in every way; it could still be empty or invalid, it just couldn't be unset, as the property/method would still exist.

Array pros

The potential benefit of the array approach though is that we might be able to avoid a separate syntax feature list if we wanted to. But then, that would really only be beneficial if we were using the uni-method approach. Otherwise we'd have to loop through every method to get a list for each one. And even with the uni-method approach, this would still not be ideal, because some of the information may have to be pulled in from elsewhere, which is a waste if we only need part of it. Though I suppose that most of the time this would really be negligible.

Nested syntax features list

Actually though, this reminds me that I've been thinking that we might actually want a nested list of the syntax features, indexed by information group, subgroup, etc. That might really just make things more complex though, and we'd be better off just namespacing our slugs. Though on the other hand, nesting would allow us to check just the list of things for the information group that a particular consumer is interested in, which I think would really be the most common use-case. And we wouldn't have to nest the data for subgroups as well as groups; within each group we could just use namespacing. Although really I suppose that it might be useful to index by subgroup as well, since perhaps there will be times that we are only interested in a particular subgroup. For example, we might want to know about all tables that are used. But then how do we know when a table is implied? I suppose we'd have to have a list or something.

Arbitrary nesting depth

But if we need to nest the settings for groups and subgroups, how do we differentiate the settings array from another nesting level? We can nest arbitrarily if we do the objects, but with the arrays this would be more difficult. Although I suppose that we really can nest arbitrarily with the arrays too, because the handlers will just know that they don't support another nesting level. But then, I suppose it makes more sense for the handlers to follow a pattern here, where it is possible for us to add another nesting level at a later date if needed. Because I don't think we really ever know when we might need another nesting level.

- handler-slug
-- sub-handler-slug
--- ? setting key or sub-handler slug
--- ? 
- handler-slug
...
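As an array, that outline might come out something like this (the slugs and settings are purely hypothetical, just to show the nesting levels):

```php
$storage_info = array(
	'db' => array(               // Handler slug.
		'primary' => array(      // Sub-handler slug.
			'table'    => 'wp_posts', // Setting keys (or further sub-handler slugs).
			'id_field' => 'ID',
		),
		'meta' => array(
			'table'      => 'wp_postmeta',
			'meta_key'   => 'some_key',
			'serialized' => false,
		),
	),
);
```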

Logic processing order

I think that if we use arrays it will be very difficult for the handlers to know in what order the logic should be processed, that is, at least if we use a uni-method. There has to be some level of structure, or else our array that contains a bunch of different data will not be processed in a particular order.

That is, the handlers have to contain some logic, they can't just pass everything off to sub-handlers. I guess partly that depends on how the information is being processed though. For example, a query builder could be designed in a manner that would allow any part of the query to be modified during the build process. But even "modification" here implies that a particular part of the query would already have to exist, so we are still expecting a particular order. However, it would be possible for us to make it so that the different parts of the query could be added in any order.

An example of where we'd have trouble with this is when a column contains serialized data. Serialization is sort of a modifier, and it doesn't seem that we could easily mark a field as being serialized before that field was actually added to the query. (It would also seem difficult to modify the query after adding the field to mark the field as serialized, for that matter.) It would be possible to design it this way of course, with a list of the serialized fields separate from the actual query logic. Then when we built the query we'd check if each field was in the list of serialized fields, and if it was we'd handle that as needed. But then, it really isn't as easy as that, because the meta value field isn't always serialized, it could be a raw value in one case and a serialized value in another. So a simple field list wouldn't actually solve the problem, the type of the field would have to be determined for each clause of the query separately.

Another way of working around this would be to just arrange the array in the correct logical order, though that seems a bit fragile. And it would make filtering it very difficult (if we provided a filter).

Conclusion

In short, we cannot really know ahead of time how many levels of nesting we're likely to need in a particular group. So in order to avoid the potential for ambiguity, it will be much easier to just use objects.

The only real downside to the objects that I see is that it takes more overhead to load the classes, and that it also takes more overhead to make method calls. But I don't see why we should be scared to make the properties public in this case—after all, we've just been discussing providing arrays instead, which can be modified willy-nilly, and we've had no qualms about it.

JDGrimes commented 8 years ago

Divisibility of information

Now that we've essentially rejected the uni-method approach, I'd like to note that it did make some sense, because to a certain degree the storage information all has to be used as a unit. That is, in a query it would. I suppose however that there are other applications in which it really would be useful to be able to access just part of the information. This again presents a dilemma though, because in any case where such an application is broad enough to extend across different subgroups, or even different storage types, there will be no one single interface for accessing said data. But then again, that is because there is really no one piece of data that extends across data types, which is why they don't share common interfaces.

So, as far as generic applications go, like the Query API, the data is indivisible. However, for more specific applications it would really be simple to target a particular unit of a common interface, if needed.

JDGrimes commented 8 years ago

Syntax objects: slugs vs interfaces

Is the relationship between handlers and syntax objects one of slugs or interfaces? We'd normally reference the classes in a registry using slugs, so that's likely what we'd do for the handlers. We'd have the syntax objects offer a slug. Then we'd just check for a registered handler with a matching slug, and pass the object to that handler. But then, the handler is going to expect the objects passed to it to implement a particular interface. So we'd need to check that the correct interface was provided. I suppose really that we could just use type-hinting to do this, since all of this is really hand-composed by the developer. The only reason that we wouldn't want to do that is because it would result in a dead program if the developer made a mistake and didn't test properly. But then, that's true of a hundred other things as well, isn't it?

JDGrimes commented 8 years ago

Interface constants

But if we are exposing properties instead of methods, we'd not really need interfaces. Or, that is, our interfaces really wouldn't be able to force particular properties to be exposed. I actually just discovered that interfaces also support constants, which would be public but not modifiable. I'm not sure that using them is really a good idea though, especially as it really isn't supposed to be possible to override the values of interface constants in subclasses, although it is possible if they are grandchildren of the interface. I found a related PHP bug that was fixed, which only affected the child class if it specifically implemented the interface in addition to the parent class implementing it.

However, using this would require us to hard-code the class name, as the class constants cannot be referenced with a dynamic class name until PHP 5.3.0. (Attempting to use the interface name would of course return the value of the constant as defined in the interface.)
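To illustrate (the names are hypothetical; the constant-access limitation is the relevant part):

```php
interface WordPoints_Entity_Stored_ExampleI {
	const STORAGE_TYPE = 'db';
}

class WordPoints_Entity_Example implements WordPoints_Entity_Stored_ExampleI {}

$class = 'WordPoints_Entity_Example';

echo WordPoints_Entity_Example::STORAGE_TYPE; // Hard-coded class name: always works.
echo $class::STORAGE_TYPE;                    // Dynamic class name: PHP 5.3.0+ only.
echo WordPoints_Entity_Stored_ExampleI::STORAGE_TYPE; // The interface's own value.
```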

JDGrimes commented 8 years ago

Property-only objects = arrays?

We're talking about reducing these objects to just properties. But then I think it is prudent to ask ourselves why they are really objects at all. Why can't they just be arrays? I was thinking that the use of interfaces/classes and the typing that they provide would be more useful than plain arrays, but I'm not sure that it really makes all that much difference.

As far as the issue of ambiguity discussed above, we could just say that any array that had a slug key would be expected to be passed to the handler with that slug. Then all we have to do is check for the slug key. There is no ambiguity there.

The discussion on the processing order in the logic is also moot, as it applies to both arrays and objects. Even with the uni-method approach, the main consumer just has to have some general logic in it. (Although ordering does become a problem with the uni-method approach when we are trying to add new information at the most basic level. But in that case we'd just have to assume a new data type, I think.)

So I'm now actually leaning back toward arrays instead of objects.

JDGrimes commented 8 years ago

Parity checking and DRYness

We haven't really considered this from the consumer side, but checking for feature parity between each entity and a consumer both during usage of the API and in the UI (or wherever) would tend to lend itself to duplication. Which is where a dry-run API might be superior. But perhaps this would be less severe than we might imagine.

Of course, it would always be possible for us to introduce a dry-run API for a particular consumer later if we want to, and there would even be nothing stopping us from coming up with something generic that could be used across consumers.

JDGrimes commented 8 years ago

RE: Objects vs arrays

Above we were mostly approaching this with a bias toward using objects. But looking back, I don't really see much reason not to use arrays and the uni-method approach. The main potential downside that I see to that is that the lack of structure could end up making the nature of the API less clear. It might make it less obvious that consumers need to be able to utilize handlers to process a wide range of different storage types and sub-types. (Though really, that is just an assumption of how we would likely handle it, it wouldn't be a strict requirement, especially if the consumer was only dealing with a very specific subset of the API.) The use of interfaces and such seems more elegant in some ways, but it is added complexity with little practical benefit beyond possibly the perception developers will have of how consumers should work.

In short, it seems like we've had to spend an awful lot of time arguing against the arrays and for the objects in order to try to say that the objects were better. And we really can't point to any one concrete thing and say "this is why objects are superior." As arrays seem to be capable of thoroughly doing the job, it seems that all of the added complexity of the objects is just cruft that does very little for us.

The biggest downside to the arrays is that it would be easier for developers to accidentally offer data in the wrong format, since they could make a mistake in the structure of the array. Really though, I think the perception that the arrays have greater fragility is not entirely accurate, since the potential for incorrect structure is still there when using the objects; it is just that the structure is then spread out over several levels of objects returned by other object methods, instead of it all being in one place. So in some ways it might be easier to get things wrong with the objects, since the structure is more complex. I suppose that would be especially true when devs are trying to introduce new types/sub-types. It would be more likely that they would have trouble structuring the data according to the common patterns. Of course, the same is also true of the arrays there, I suppose, but the basic structure is contained all in one place, and so is more obvious.

While I do see the benefit of the objects, I'm thinking that the objects just provide a huge amount of overhead. Loading the classes, calling the methods. Normally I would use the object approach, but I think that in this case there is not much benefit from it, and with the arrays we do have the benefit of potentially making it very simple to check for compatibility without having to consult a separate list.

In short, I think this really does come down to the matter of structure. So we have to ask ourselves whether the structure of the objects itself, as a feature, is worth the potential overhead. In many cases objects are definitely worth it. But in this case I'm wondering whether they actually make composability more difficult. I say that because when we code against a variety of interfaces, we can't handle different storage types generically. We can then only retrieve the data within code that knows about that interface. Whereas with arrays, we can take any generic code and extract data from the array. But I guess really this is just an illusion of greater interoperability, because the arrays still have their own "interfaces", or different sets of keys that they offer. So we still wouldn't know for any given array what keys it might or might not have.

However, I think above we identified a case where the generic nature of the arrays is better: when we're trying to determine compatibility without using a separate list. If we use a uni-method, looping through the arrays is then pretty simple. Using objects though, and without the uni-method, we can't do that. Though as we've said, there are ways around this, but it means that we have to describe the data separately. Of course, there is the potential that we'd end up finding it better to do that anyway even with the arrays. It is just really hard to know what will work best without actually building a consumer API that will force us to flesh some of this stuff out. But then, maybe we could build a basic API/framework for checking compatibility.

JDGrimes commented 8 years ago

Is compatibility checking really necessary?

All of this makes me wonder whether we really do need to check compatibility up front. Could we just assume that most of the time the consumer will be compatible, and just wait until we attempt to do something and find out that it is not, and then throw an error? That might be OK for the retroactive API, but other applications of the Query API might have trouble with that. Of course, we could always just burn that bridge when we come to it.

JDGrimes commented 8 years ago

Mock compatibility checker and resulting thoughts

It is just really hard to know what will work best without actually building a consumer API that will force us to flesh some of this stuff out. But then, maybe we could build a basic API/framework for checking compatibility.

Performing this exercise has been helpful. When approaching it from the perspective of the object approach, I realized how necessary a separate syntax feature list really would be. Without it, we'd have no way of knowing whether we need to check for the existence of any sub-handlers or not. That is, unless we did something crazy like had the handler itself do that. Which is what we'd have to do, because only it would know which methods would be offered by that storage type interface, and the only way to find out if sub-handlers are needed is by checking the return values of those methods.

However, other than the fact that our list must be separate, there is really very little material difference between the object approach and the array/uni-method approach.

We don't check whether the sub-objects actually exist then, though, or whether they implement the expected interfaces. Of course, we wouldn't necessarily be checking whether each array had all of the expected keys, either. And if we did decide that we wanted to do that, would we also then want to check the validity of each of the values? And then do we check, for example, whether a database table actually exists, that is, that the data isn't just the correct type and "valid" in some arbitrary sense, but that it would actually work in the query? Where do we stop?

Well, doing that would likely be just as expensive as running the query in the first place, in which case we might as well just do a dry-run. Though even when I've been talking about a dry-run, I haven't been thinking of actually performing the query (thinking of the query API), just building it.

Anyway, really all we are trying to do here is just provide a way to check if we can expect a particular entity to be compatible. Of course, if a particular consumer wanted to go further than that, it could. That is, it could if we used the uni-method approach. Otherwise it could not. So I guess really that is a pretty good argument in favor of the array/uni-method approach. Although of course, technically speaking, it would be possible for us to work around this in some way. But it would be very convoluted I think. As I said above, we'd have to involve every handler. In which case we'd be better off just doing a dry-run.

But I think the point here is maybe that this isn't so much about whether the consumer will actually be able to use the information, just whether it is going to be provided in a format that it should understand. It isn't about the data itself, but about the general classes of data. So I don't think we really ought to worry about providing a way to validate the data. If a consumer really wants to do that, then it should either implement a dry-run API, or, if it is fairly specific, it can check each object for correctness itself. A consumer could, I suppose, even require its handlers to implement an interface that supplied validation methods. And isn't that really what a consumer would have to do in order to validate the arrays anyway? It is only that it would have to only make one method call to get the data, but then it would still end up passing bits and pieces of it around all over the place.

So in short, there is very little material difference between using the uni-method/arrays or using the objects.

JDGrimes commented 8 years ago

Compatibility checks for the entity children

As far as checking for compatibility with the entity children, this leads us to several more questions. First and foremost is whether the entity children are handled independently from the entities themselves. That is, are their handlers in a separate registry? At first I just assumed that they would be, but of course the entity children are of particular storage types just like the entity parents are, and so in our current mock-up of the query API we had a single query handler for each storage type, which was passed the query data. So all of the storage type handlers will be for handling things for a particular consumer at a fairly low level, and thus they will encompass much more than just being passed a single entity/entity child. So for each data type there will be a handler, and this wouldn't be based on the entity/child.

But really, isn't that just one possible design? Couldn't it be built differently? And which way would be best?

Yes, each consumer can decide to do this differently if it really needs to for some reason. It just may have to provide its own framework for compatibility checking then.

JDGrimes commented 8 years ago

Cont.

OK, that's the storage types, but what about at the next lower level? Then we'd need to get some kind of slug from any unrecognized [entity] object that was passed. Actually, I guess we'd probably be passed nothing but objects, which we'd then have to pass off to handlers. Because each storage type would be divided into different subgroups based on how exactly the entity was stored within that storage type. Or, actually, I guess that is only true for the entity children. For the entities themselves there really aren't direct subgroups. It is only if a particular entity method returns an unrecognized object that we'd then need to get a slug and pass it off to a handler we'd source from some registry. And then for the entity children, we'd likely handle them as well, but possibly it would be best if we made that similarly extensible, in case we add other types of entity children in the future besides attributes and relationships.

JDGrimes commented 8 years ago

Generic syntax feature list?

But really, none of this is important to our discussion of arrays vs objects (which is one reason that we're performing this exercise), because this is processing that would have to take place no matter how we arrange things. But it is important for how we present the syntax feature list.

I was thinking of making the syntax feature list something more generic, beyond just the storage info API. The trouble with that, however, is that it really wouldn't be extensible. If we do need to add another feature to entities, they'd likely have to implement another interface, which could then have its own syntax list method. But it would also be possible that we'd want to store the information separately instead, so that we could add it from outside the entity code itself. In which case we'd also have to store the syntax feature list separately as well. I guess in the case of the interfaces though, we'd have to have control of the code anyway, so there is no reason that we couldn't also modify the syntax list.

So I guess in the end this really isn't a reason not to make the list generic. However, doing so would mean that it might end up returning much more information than we really need, which would just be cluttering up memory. And since we really don't have any need for a generic list at the moment, I suppose it makes the most sense just to wait on that. I think that we could always introduce a generic list later, if some consumers would find it useful.

One reason that I was considering doing this is because then I'd likely not have a separate storage type method, but instead just put that into the syntax feature list. Even with a storage-info-specific method though, we could still do this. In fact it might actually be beneficial to do that, since it would make it possible to have multiple storage types for a single entity (see above).

JDGrimes commented 8 years ago

Methods and the syntax list hierarchy

An oddity of the nested hierarchy is that it splits the features by method. Or at least, that is how I intend to do it. On the one hand this is odd, because the same features may be useful for several methods. But I guess the same handler could be registered for several methods, though that is a minor waste of memory. Alternatively, we wouldn't have to split this by method at all, and handlers could then just be namespaced if there needed to be similar but different handlers for different methods.

However, there is one positive result of including the methods in the hierarchy: it allows a consumer that is only interested in data from a particular method to more easily check whether it is compatible with that method's return value. Of course, in that case it might be simpler just to call the method. But then again, the consumer might be interested in several methods, in which case having this list would be better.
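
As a sketch, a per-method syntax feature list might be shaped something like this (the keys and nesting are illustrative only):

// Hypothetical syntax feature list keyed by method name. A consumer that only
// cares about one method can check just that branch for compatibility.
$syntax = array(
    'get_storage_info' => array(
        'db' => array( 'table' => array() ),
    ),
    'get_other_info'   => array( // purely illustrative method name
        'some_feature' => array(),
    ),
);

$method = 'get_storage_info';

if ( isset( $syntax[ $method ] ) ) {
    // Check only the features listed under this one method.
}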

That said, this doesn't mean the handlers have to be stored in per-method registries; that is really a separate question. Of course, it will affect how the compatibility check needs to be carried out.

JDGrimes commented 8 years ago

Entity children and multiple storage types

I'm not sure what to do with the entity children that are stored on the entity object itself, in regard to the possibility of multiple storage types for a single entity. Perhaps the best thing to do is introduce the concept of a special storage type called "parent", or even just "field". Then when an entity child uses that storage type, it means that it is stored the same way(s) as its parent entity.

Although, on second thought, I'm not sure why multiple storage types are really a problem for the attributes. They can support them; they just need to return multiple storage types in the array, exactly as the parent entity does. I guess the only concern is that the entity and the child then become more closely coupled, because the child needs to know which storage types the parent uses. But isn't it already true that they have to be coupled, since the attribute has to define which entity field the value is stored in, etc.? So any attribute stored on the entity already has to be more-or-less coupled to its parent anyway.

Of course, it might actually make things simpler to just have a "field" storage type for the children, because then it is implied that the child is always stored the same way as its particular parent, not just that it can be stored either of two ways, which might be ambiguous if we just return both storage types.
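
So an attribute stored directly on the parent might declare something like this (again, only a sketch of the shape):

// Hypothetical: the special 'field' storage type means "stored however the
// parent entity is stored", in the named field of the parent object/row.
$child_storage_info = array(
    'type' => 'field',
    'info' => array( 'field' => 'post_title' ),
);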

Multiple storage types present a bigger issue than that, though, because all entity children define their storage info relative to the primary entity. This means that even relationships stored in a separate database table, for example, have to have a column that references the main entity's ID. But if the entities of a particular type are sometimes stored outside of the database instead, would their relationship information also be stored separately? I guess in that case the relationship could provide that data as well, unless it was stored in a similar manner. If those relationships were also stored in the database but in a different table, I'm not sure what we could do, because obviously we can only implement the database table storage interface once.

I think because of these issues it would be better for us to just ignore the potential for multi-storage type entities. If such an entity ever does exist, I guess it will just have to be split.

The uni-method/array approach would actually allow us to accomplish this, though, since we wouldn't have the problem of having to implement multiple interfaces. We'd just return the storage information relative to the parent entity, per storage type, by returning the data in the array keyed by storage type, just as we already do for the entities. That data would then be interpreted relative to the storage type of the particular parent entity. But before we switch everything over to that, we have to decide whether the potential benefits are really worth it. After all, I can't think of any way an entity could usefully be stored in multiple places while also having relationships with other entities; it probably couldn't, or else those relationships would need to be stored as special attributes of the entity itself. But we never know what kind of crazy code is out there, and I'd rather not limit ourselves here more than necessary.

JDGrimes commented 8 years ago

RE: Methods and the syntax list hierarchy

I think the uni-method/array approach would also make it simpler to run the compatibility checks if we want to have the handlers for all of the "methods" in a single registry, because we'd be able to just loop through the array and check each element for a "slug" key. Each value that was an array with a slug we'd then loop through in like manner. But with the objects approach, the alternation between method names and object slugs isn't as obvious.
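
A rough sketch of that array-based check, to compare with the object-based mock at the end of this ticket (the function name is illustrative, and sub_apps mirrors the mock):

// Illustrative recursive check over a uni-method/array syntax list: any
// element that is an array with a 'slug' key must have a handler registered
// for that slug, and its nested elements are checked against the handler's
// sub-registry in the same way.
function my_check_array_syntax( array $syntax, WordPoints_Class_RegistryI $handlers ) {

    foreach ( $syntax as $element ) {

        if ( ! is_array( $element ) || ! isset( $element['slug'] ) ) {
            continue;
        }

        $handler = $handlers->get( $element['slug'] );

        if ( ! $handler ) {
            return false;
        }

        // Loop through the nested elements in like manner.
        if ( $handler instanceof WordPoints_App_Registry
            && ! my_check_array_syntax( $element, $handler->sub_apps )
        ) {
            return false;
        }
    }

    return true;
}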

JDGrimes commented 8 years ago

But the entities still could not be stored in two similar manners, even using the uni-method/array approach. They couldn't be stored in two separate database tables, for example. That is, we could support this if we wanted to, but we'd have to allow multiple definitions per storage type, not just multiple storage types.

Let's just lay the idea of multiple storage types for an entity to rest once and for all. If such an entity does come along, we can always add a "multi" storage type if we really need to use the storage info API for it. Otherwise it just won't be supported, pure and simple.

JDGrimes commented 8 years ago

RE: Objects vs Arrays

I've been very tempted to just go with the objects because I've already coded things up that way. However, it is seeming more and more like it would be better to go with the array/uni-method approach. It is really much simpler and more straightforward in many ways. The objects add complexity without that structure really providing any benefit.

JDGrimes commented 8 years ago

Uniform Syntax and Query API post-processing

Back when we were building the query API, we were thinking we'd have a basic query syntax used across the data types. In some ways it doesn't make a whole lot of sense, but when it comes to fields and the like, it does make some sense to reference them uniformly across different storage types, rather like we were thinking of doing for the entity children above. One of the hypothetical benefits of this was that we could decide to handle as much of the filtering of the data as we wanted in memory, instead of making every condition part of a massive database query.

You see, the query API was originally designed to allow post-processing to take the place of providing storage information in cases where the latter really wouldn't work. This extended beyond the entities themselves and into the extensions, reactors, etc. It was even thought that we might one day detect when a query was possible but likely to be overly resource-intensive, and dynamically opt for more post-processing rather than querying in such cases.

But now I'm thinking that any post-processing features should probably be provided through the handlers, not directly through the entities and extensions (although of course the handler could call the extension objects into play when post-processing if it needed to). This would mean we wouldn't really need a uniform syntax.

JDGrimes commented 8 years ago

Filter?

Using the array/uni-method approach, it is possible for us to offer a filter on the data returned by the method. (This really wouldn't be possible with the interface/object approach because the data would be spread out over multiple methods, and indeed, depending on what changes we needed to make to the data, we'd actually need to utilize a different interface altogether.)

While it is possible to override an entity by registering a different one over top of it, and thus change the storage info for that entity, this is not ideal: any other modifications or additions made to the entity by its source would then be ignored. A filter on the return value of that one method would allow us to modify the storage info independently of any other part of the entity.

While this seems very logical, the one caveat is that in cases where we need to change the storage info for an entity, we would likely need to modify other entity behavior as well. This depends, of course, on whether the plugin that uses the entity in question actually provides a public API for interacting with it, like CRUD methods. If it does, then we would probably have used those methods within our entity object, and it is likely that they would continue to work, since the plugin would have maintained them for back-compat.

There are cons of providing a filter, however.

First is the fact that it adds extra overhead, which is always something to keep in mind. Calling a filter each time we get the entity data, even though this filter will largely be of very little use (it is sort of an "edge-case" filter), is likely not worth the cost.

There is also the more practical consideration that we'd end up with a lot of duplication if we did this. As the code stands now, the get_storage_info() method is implemented by each entity/child independently, which means that each of them would have to implement the filter as well. In the end this means we couldn't really enforce the filter; it would be "optional".

Unless, of course, we refactor. But doing that would mean that in most cases we'd have two extra function calls of overhead when providing the filter, not just one: we'd have to call $this->_get_storage_info() from the parent method (or something of that sort), and then call apply_filters(). Even that would likely not be ideal, because we'd end up using the same filter name for every single entity/child. It would probably be better to have the granularity of filtering each entity/child's data independently. I suppose we could do that using the entity slug (although for the children this is more complex, because we'd want to use both the entity slug and the child slug, and I'm not entirely sure we could even get the parent slug). On the other hand, I can see that there could be benefits to adding the global filter as well, which would mean yet another function call of overhead.
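
For concreteness, that refactoring might look roughly like this, assuming the entity exposes its slug via get_slug(); the _get_storage_info() helper and the filter names are hypothetical, not existing API:

abstract class My_Filtered_Entity extends WordPoints_Entity {

    public function get_storage_info() {

        // Each entity would still supply its own info via this hypothetical
        // helper, instead of implementing get_storage_info() directly.
        $info = $this->_get_storage_info();

        // Per-entity granularity, using the entity slug in the filter name.
        $info = apply_filters(
            'wordpoints_entity_storage_info-' . $this->get_slug(),
            $info,
            $this
        );

        // Plus an optional global filter, at the cost of yet another call.
        return apply_filters( 'wordpoints_entity_storage_info', $info, $this );
    }

    abstract protected function _get_storage_info();
}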

Basically, no matter what we do, this second problem will make the first problem even worse. Right now I don't think it is worth it. I suppose that if we need to provide filters for particular entities' info for some reason, we could.

Also, on second thought, the idea of the global filter is probably not good anyway. If there is information which needs to be added to the entities globally, that should probably be done via a separate interface, or through some filter at a higher level (in a handler or consumer, maybe).

JDGrimes commented 8 years ago

Summary

This is what the API currently looks like. If we find issues, however, anything is subject to change in future tickets.

JDGrimes commented 8 years ago

One final note: the get_storage_info() method is placed in an interface, because not all descendants of the Entityish class are stored things. Specifically, this method didn't seem to make sense for the entity array class. So we decided to make this an interface, which each entity/child has the option of implementing. An entity which is not stored, or for which we cannot or do not wish to offer storage info for some reason, is thus not bound to provide any.
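
In rough outline, the interface amounts to this (the name here is a stand-in; the actual name in the codebase may differ):

interface WordPoints_Entityish_StoredI {

    /**
     * Get information about how this entity (or entity child) is stored.
     *
     * @return array The storage info, including the storage type.
     */
    public function get_storage_info();
}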

JDGrimes commented 8 years ago

For posterity: the code for the mock compatibility checker referenced above:


function wordpoints_check_entity_compatibility(
    WordPoints_App_Registry $registry,
    WordPoints_App_Registry $children_registry
) {

    $entities = wordpoints_entities();
    $children = $entities->children;
    $storage_types = array_flip( $registry->get_all_slugs() );
    $children_slugs = array_flip( $children_registry->get_all_slugs() );

    // What are we expecting from the entities?
    // That there is a handler registered that can process them.
    foreach ( $entities->get_all_slugs() as $slug ) {

        $entity = $entities->get( $slug );

        if ( ! $entity instanceof WordPoints_Entity ) {
            return false; // or probably an error as to what the problem is.
        }

        // First, we check the entity storage type.
        if ( ! isset( $storage_types[ $entity->get_storage_type() ] ) ) {
            return false;
        }

        // but we don't know whether sub-handlers are needed.
        // How would we find out?
        // The only way would seem to be to analyze the results of the methods.
        // (But only the handler could do that.)
        // That is, unless we declare this information in one place somehow.

        // the only difference between what we do here and the uni-method/array
        // approach is that the uni-method approach might save one method call. It
        // might make the below slightly more complex though.

        if ( ! iterator_bob( $entity->get_storage_info_syntax(), $registry->sub_apps ) ) {
            return false;
        }

        unset( $entity );

        // Now loop through the entity children.
        foreach ( $children->get_child_slugs( $slug ) as $child_slug ) {

            $child = $children->get( $slug, $child_slug );

            // what kind of child is this? We can check the interfaces, but that
            // wouldn't be extensible if we use a hard-coded list.
            // Does it even matter though? I suppose that it does, if the different
            // children use separate registries.
            // But maybe the children should all just use a single registry, that is,
            // their registries should all descend from a single parent registry.
            // But then, do children really have their own handlers? In the query API
            // don't the handlers handle the children as well?

            if ( ! isset( $storage_types[ $child->get_storage_type() ] ) ) {
                return false;
            }

            if ( ! iterator_bob( $child->get_storage_info_syntax(), $children_registry->sub_apps ) ) {
                return false;
            }
        }
    }

    return true;
}

function iterator_bob( $syntax, WordPoints_Class_RegistryI $registry ) {

    foreach ( $syntax as $slug => $children ) {

        $sub_registry = $registry->get( $slug );

        if ( ! $sub_registry ) {
            return false;
        }

        if ( ! empty( $children ) ) {
            if ( ! ( $sub_registry instanceof WordPoints_App_Registry ) ) {
                return false;
            } elseif ( ! iterator_bob( $children, $sub_registry->sub_apps ) ) {
                return false;
            }
        }
    }

    return true;
}