Open daviddekoning opened 9 months ago
Very interesting. Thanks for writing this down!
Should IFC5 define geometry explicitly, or be a semantic standard from which geometry is computed
Probably both. In a previous meeting we had settled on a layered approach: 1 being explicit triangulations, 2 being the 2x3 coordination view equivalent with extrusions and some voids, 3 being full parametric constraint-based geometry. We also came to the conclusion that when exchanging layer 2 geometry, layer 1* should also be included for interoperability and for assessing correctness of import. I think this kind of thinking is still valid in the ECS approach, but it has not been explicitly validated recently.
The other question is whether we should work towards equation- or schema-based parametrics. In previous investigations I did a sketch of the latter [0], i.e. by reducing the dimensionality of the representation from solid to infinite line, but annotating the connection points to other walls using semantic relationships, so that the viewer knows where to trim the line (as you said, thickness comes from the material), and the vertical extent of the wall can, as you said, be defined using the same kind of trimming relationship with storeys or other planes. The downside of this approach is that it is rather hard to make explicit and computer-interpretable; it remains mostly logical connections. I also feel that these kinds of "fragmented" definitions are a little bit harder to realize in the ECS approach.
[0] https://speakerdeck.com/aothms/ifc5-adequate-complexity-maximum-reliability?slide=11
Other than that I don't fully comprehend the differences and implications behind the options you have sketched. I hope we can find the time to talk this through in a dedicated meeting in the near future. One question seems to be how many special types we want to accommodate in the data model. The models with special behaviour we tend to gravitate around are hierarchy/assembly and references/archetyping: should these be ordinary components in the end, or are they special enough to warrant a dedicated type in the exchange?
Hi @tomvandig and @aothms,
(cc @berlotti, @gschleusner1972)
Following discussions at the buildingSMART meeting in Chicago last week, and some research I have been doing into other similar formats, I would like to name 5 types or combinations of graphs that can be used to organize a model and then discuss the tradeoffs between explicit geometry vs semantic definitions.
Each of the components I describe below should be understood as something that can only be applied to an entity once. For example, it would be an error if two placement components are attached to a single entity.
In the below, I am not addressing how a model is composed from multiple authors, nor how library entities (Revit-style types) are inserted into a model. This is just a discussion of how a composed model is structured.
My apologies for dropping a wall of text here, it seems to be the right place for this discussion!
Classes of model graphs
Placement Tree / Scenegraph
A placement tree is one where, for every entity in the model:

- The geometry must be explicitly defined (no information needed from anywhere else in the graph).
- The geometry is placed as per the product of all of its ancestors' (parent, grandparent, great-grandparent, ...) transforms.
This means:
In an Entity-component world, this can be achieved by defining two kinds of components:
Placement Tree:
Composed Placement Tree:
The placement graph can be determined by only looking at the placement components in the model.
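As a concrete illustration of the two kinds of components, here is a minimal sketch in Python. All names (`Placement`, `Geometry`, the entity ids) are hypothetical, and the "transform" is reduced to a translation vector for brevity; the point is only that an entity's world placement is the product of all of its ancestors' transforms, and that geometry is self-contained.

```python
# Hypothetical sketch of a placement tree in an entity-component model:
# a Placement component (parent reference + local transform) and a
# Geometry component (explicit geometry in local coordinates).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Placement:
    parent: Optional[str]   # entity id of the parent, None for the root
    translation: tuple      # local transform, reduced to a translation here

@dataclass
class Geometry:
    vertices: list          # explicit geometry, needing nothing else in the graph

placements = {
    "site":  Placement(None, (0.0, 0.0, 0.0)),
    "house": Placement("site", (10.0, 0.0, 0.0)),
    "wall":  Placement("house", (0.0, 5.0, 0.0)),
}
geometries = {"wall": Geometry([(0, 0, 0), (1, 0, 0)])}

def world_translation(entity: str) -> tuple:
    """Accumulate the product (here: sum) of all ancestors' transforms."""
    p = placements[entity]
    if p.parent is None:
        return p.translation
    px, py, pz = world_translation(p.parent)
    x, y, z = p.translation
    return (px + x, py + y, pz + z)

# The wall is placed at site * house * wall:
print(world_translation("wall"))  # -> (10.0, 5.0, 0.0)
```

Note that, as stated above, the placement graph here is fully determined by the `Placement` components alone; the `Geometry` components never need to be consulted.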
You will have no doubt noticed that there is no semantic information in this type of graph...
Semantic Overlay on a Placement Graph
One approach to including semantic information in a model is to maintain all the requirements of a Placement Tree and simply tag certain entities as assemblies or classes.
For instance, a new component called AssemblyInfo can be defined, with two parameters:
This maintains all the benefits of the Placement Tree, but allows us to give the important entities names and hide non-assembly entities in a simplified tree view.
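To make the overlay idea concrete, here is a small hypothetical sketch. The exact parameters of `AssemblyInfo` are not spelled out above, so the two fields used here (a display name and an assembly/class kind) are an assumption for illustration only; the mechanism shown is just that tagged entities form a simplified tree view while untagged entities are hidden.

```python
# Hypothetical sketch: a semantic overlay on a placement tree.
# AssemblyInfo's fields (name, kind) are assumed, not part of the proposal.

from dataclasses import dataclass

@dataclass
class AssemblyInfo:
    name: str   # human-readable name shown in the tree view (assumed field)
    kind: str   # "assembly" or "class" (assumed field)

# placement tree: entity id -> list of child entity ids
children = {
    "root": ["frame", "helper"],
    "frame": ["beam1", "beam2"],
    "helper": [], "beam1": [], "beam2": [],
}

# only some entities carry the overlay component
tags = {
    "root": AssemblyInfo("Building", "assembly"),
    "frame": AssemblyInfo("Frame", "assembly"),
    "beam1": AssemblyInfo("Beam", "class"),
}

def simplified_tree(entity):
    """Collapse untagged entities: their tagged descendants are
    promoted to the nearest tagged ancestor."""
    out = []
    for child in children.get(entity, []):
        if child in tags:
            out.append((tags[child].name, simplified_tree(child)))
        else:
            out.extend(simplified_tree(child))
    return out

print(simplified_tree("root"))  # -> [('Frame', [('Beam', [])])]
```

The placement tree is untouched; the overlay only changes what a viewer shows.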
This is the approach USD takes: they call their entities Prims, and each Prim is transformed as per the product of its ancestors' transforms. A subset of the Prims are tagged and organized into a Model Hierarchy. They also require that all Prims tagged as assemblies must be children of assemblies - the semantic overlay covers a contiguous set of entities that includes the root node.
Placement Tree with Semantic Overlay:
Composed Placement Tree with Semantic Overlay:
Semantic Tree
Another option is to have two trees over the same entities: a placement tree and a semantic tree. The placement tree is as above, and the semantic tree is a completely separate organization. In this case, we define a component called "AssemblyInfo" a little bit differently. It will have three properties:
Placement tree with all entities in global coordinates, which has no effect on the assembly hierarchy
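A minimal sketch of this "two trees" option follows. The three properties of `AssemblyInfo` are not listed above, so the fields used here (name, kind, and an explicit parent-assembly reference) are an assumption for illustration; the point is that every entity is placed directly in global coordinates, and the assembly hierarchy is recovered purely from the semantic components, independently of placement.

```python
# Hypothetical sketch: a flat placement tree (global coordinates) plus
# a completely separate semantic tree. AssemblyInfo's three fields are
# assumed here, not taken from the proposal.

from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssemblyInfo:
    name: str              # assumed field
    kind: str              # assumed field: "assembly" or "class"
    parent: Optional[str]  # assumed field: reference to the parent assembly

# every entity is placed directly in global coordinates; moving an
# entity here has no effect on the assembly hierarchy below
global_placements = {
    "frame": (0.0, 0.0, 0.0),
    "beam1": (10.0, 5.0, 0.0),
    "beam2": (12.0, 5.0, 0.0),
}

semantics = {
    "frame": AssemblyInfo("Frame", "assembly", None),
    "beam1": AssemblyInfo("Beam", "class", "frame"),
    "beam2": AssemblyInfo("Beam", "class", "frame"),
}

# the semantic tree is recovered from the AssemblyInfo components alone
children = defaultdict(list)
for entity, info in semantics.items():
    if info.parent is not None:
        children[info.parent].append(entity)

print(sorted(children["frame"]))  # -> ['beam1', 'beam2']
```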
A few observations:
Parametric / Procedural Graph
In all three options above, the geometry is defined explicitly, and there is a hierarchy of transforms to place that geometry. But this is not how BIM software typically works. Instead of geometry being drawn, the user is presented with a number of parameters to select, many of which are references to other entities in the model. The BIM software then computes the geometry.
For example, a wall in Revit has 4 basic parameters: thickness (from the type), plan layout, lower level, and upper level (from the instance). The lower level and upper level are references to other entities in the model.
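The Revit-style wall can be sketched as follows (all class and field names here are illustrative, not Revit's API): the wall stores references to level entities rather than explicit geometry, and its solid extent is derived from those references, so editing a referenced level changes the wall.

```python
# Hypothetical sketch of a parametric wall: geometry is computed from
# parameters, two of which are references to other entities (levels).

from dataclasses import dataclass

@dataclass
class Level:
    elevation: float

@dataclass
class WallType:
    thickness: float          # thickness comes from the type

@dataclass
class Wall:
    wall_type: WallType
    plan_line: tuple          # ((x0, y0), (x1, y1)) plan layout
    lower_level: Level        # reference to another entity in the model
    upper_level: Level        # reference to another entity in the model

def wall_height(wall: Wall) -> float:
    # the vertical extent is *derived* from the referenced levels
    return wall.upper_level.elevation - wall.lower_level.elevation

ground, first = Level(0.0), Level(3.0)
w = Wall(WallType(0.2), ((0, 0), (5, 0)), ground, first)
print(wall_height(w))  # -> 3.0

first.elevation = 3.5  # editing a referenced level re-computes the wall
print(wall_height(w))  # -> 3.5
```

This is the essential difference from the explicit-geometry options: there is no stored solid to load without evaluating the references.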
There are two major differences in this approach versus the first three:
In these cases, the semantic graph is used to compute the geometry, and the geometry dependencies are often hidden from the user (in authoring applications).
There are two big points that come from the approach of computing geometry from semantics:
Placement Tree with Semantic Graph
Another option is to require that placement and geometric information be placed in a tree structure, but allow the semantic overlay to form a graph. This allows the model snapshot to be efficiently loaded and rendered, while the semantic properties of specific entities can be lazy loaded as required.
We can call this option "explicit geometry tree, with semantic overlay graph".
Implications for interoperability
Should IFC5 define geometry explicitly, or be a semantic standard from which geometry is computed? Another way to ask the question is, should we require IFC5 exporters to render out geometry, or should we require IFC5 importers to be able to compute geometry from all the semantics that IFC5 defines?
There is a lot to be said for an explicit geometry tree, with semantic overlay graph approach. This would require that all IFC readers be able to read a specific set of geometric types. Semantic information could be overlaid, and IFC readers that are able to understand those semantics could make use of them. The semantic overlay graph becomes an enrichment of the base data, not a requirement to be able to ingest the file in the first place.
For example, a model with a road alignment would be exported as a series of curves in Cartesian space (from OpenRoads or Civil3D). The alignment information would also be written. It could then be imported into Revit or Tekla Structures without those tools needing to perform any math on the alignment, but if it were imported into a tool that did include support for alignments, it could grab the alignment info (and check it against the rendered geometry).
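The alignment example can be sketched like this (all names are illustrative, and the parabolic "alignment" is a stand-in for a real alignment definition): the exporter writes both the explicit sampled curve and the semantic definition, and a semantics-aware importer can re-evaluate the definition to check it against the rendered geometry.

```python
# Hypothetical sketch of "explicit geometry + semantic enrichment":
# a dumb importer reads only explicit_points; a semantics-aware one can
# also evaluate the alignment and verify the two agree.

import math

stations = [0.0, 0.5, 1.0]

# semantic overlay: the alignment as a parametric definition
# (a parabola here, standing in for a real horizontal/vertical alignment)
def alignment(t: float) -> tuple:
    return (t, 0.5 * t * t)

# explicit geometry: the polyline the exporting tool rendered out
explicit_points = [alignment(t) for t in stations]

# a semantics-aware importer checks rendered geometry against the
# computed alignment before trusting the enrichment
ok = all(
    math.isclose(px, ax) and math.isclose(py, ay)
    for (px, py), (ax, ay) in zip(explicit_points,
                                  (alignment(t) for t in stations))
)
print(ok)  # -> True
```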