Krande opened this issue 2 years ago
The most awesome thing I can think of (courtesy of @Moult) would be realtime collaborative development. Multiple BlenderBIM instances talking to the same edgedb.
What kind of "triggers" can you setup so that you get notified when other people push content?
Can you do locking?
Could you define a spatial or domain-based query so that you have only a subset open in your copy of BlenderBIM?
I realize this is really a monster use case, but I guess it is in line with the more directly feasible use cases you listed, just an extrapolation thereof?
Regarding triggers
I believe there are multiple ways of solving this, but my immediate thought would be to set up a service bus like Azure Storage Queue (or an open-source alternative, if one exists). That way you can set up a messaging service that everyone using the software checks every minute (or at whatever frequency we set) for changes. To send messages on the service bus you could use a REST API as an intermediary between EdgeDB and the clients, and let that REST API post a message on the service bus alerting all listeners of any changes. Then I guess you could do a "pull" of the latest changes either through that REST API or with hardcoded EdgeQL queries against the EdgeDB database itself.
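Just to make the polling idea concrete, here is a minimal sketch in Python, assuming the azure-storage-queue package and a hypothetical queue named ifcdb-changes that the intermediary writes change messages to (the queue name and message format are made up for illustration):

# Sketch only: assumes the azure-storage-queue package and a hypothetical
# "ifcdb-changes" queue that an intermediary posts change messages to.
import json
import time

from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    conn_str="<connection-string>", queue_name="ifcdb-changes"
)

def poll_for_changes(interval_seconds: int = 60) -> None:
    """Check the queue at a fixed interval and react to change messages."""
    while True:
        for message in queue.receive_messages():
            change = json.loads(message.content)
            print("Change pushed by another client:", change)
            # Here the client would pull the affected entities, e.g. through
            # the REST API or an EdgeQL query against EdgeDB.
            queue.delete_message(message)
        time.sleep(interval_seconds)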
Btw: Last I checked, the EdgeDB folks recommended using a REST API to handle authorization of users.
Regarding locking
I haven't found any mention of locking in the EdgeDB docs. But I guess there's nothing stopping you from adding a lock bool property to objects in the EdgeDB schema and using that as a mechanism for checking out specific geometry or spatial domains in your hierarchy. The lock property might just as well be a reference to a LockUser object with a description of who has locked it, for how long, etc.
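As a rough sketch of how such a check-out could work (assuming hypothetical is_locked and locked_by properties were added to the rooted types in the EdgeDB schema, which they currently are not, and using IfcBuildingStorey purely as an example type):

# Sketch only: is_locked / locked_by are hypothetical schema additions
# (is_locked assumed to default to false); IfcBuildingStorey is just an example.
import edgedb

client = edgedb.create_client()

def try_lock(element_id, user: str) -> bool:
    """Attempt to check out one element; returns False if it is already locked."""
    result = client.query(
        """
        UPDATE IfcBuildingStorey
        FILTER .id = <uuid>$id AND NOT .is_locked
        SET { is_locked := true, locked_by := <str>$user }
        """,
        id=element_id,
        user=user,
    )
    return len(result) > 0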
Regarding Spatial or Domain-based query
This is actually what I am trying to do now! I have gotten to the point where I am able to slice into the spatial hierarchy using the name of a specific spatial IFC element and get all the references to the sub-elements. But I am not 100% done with that prototype just yet :)
Yes, those use cases are definitely ambitious! But I think if we can make changes trickle instead of having to do diffs on huge IFC files, it will make IFC so much easier and more efficient as a working model :)
This is actually what I am trying to do now! I have gotten to the point where I am able to slice into the spatial hierarchy using the name of a specific spatial IFC element and get all the references to the sub-elements.
I must say, in ArangoDB this looked promising with the traversal query, where we could just put arbitrary constraints on the path (like include DecomposedBy but not ConnectedTo). But on the other hand we'd have to mix INBOUND and OUTBOUND to get properties. I forgot to ask if that's possible.
Regarding locking
Locking is also not a huge priority for me. I think it's a bit of a poor man's collaboration. E.g. in Google Docs you also don't lock, you just type away. If there's proper auditing of changes and, as you said, "trickling" of changes instead of huge diffs, maybe locking isn't necessary at all.
Anyway, let us know how we can help.
I must say, in ArangoDB this looked promising with the traversal query, where we could just put arbitrary constraints on the path (like include DecomposedBy but not ConnectedTo). But on the other hand we'd have to mix INBOUND and OUTBOUND to get properties. I forgot to ask if that's possible.
I also liked the way they did traversing. It seemed much more effortless than what I am trying to do with the nested properties in EdgeDB. With EdgeDB it seems I have to know upfront exactly the object types in the chain of nested properties in order to get all the object information I need.
Anyway, let us know how we can help.
I will spend some time today and tomorrow in the docs writing up a better and more specific explanation of the current state of my spatial hierarchy query and the "nested object property chain queries", and use that to ask the EdgeDB people for help on more effective ways of doing these types of queries.
Maybe in the process of writing it, I (and perhaps others) will get a better overview of how things are currently cobbled together :)
FYI: Here's an excerpt from the current state of the query for the spatial hierarchy (which I will write into the markdown file).
I made an arbitrary IFC file with some IfcBeam elements and a spatial hierarchy, and inserted it into an EdgeDB instance.
The goal:
I want to return all elements below the specific spatial element named Sublevel_1_a in the spatial hierarchy and create a new IFC file from them.
As of now I do the spatial query as 2 separate queries to the EdgeDB database (which maybe isn't that bad, all things considered).
With the following query, I get the entire spatial hierarchy, returning all the elements with their respective name (Name), EdgeDB uuid (id) and class name (__type__ : { name }).
SELECT {
    spatial_stru := (
        SELECT IfcRelContainedInSpatialStructure {
            id,
            RelatingStructure : { Name, id, __type__ : { name } },
            RelatedElements : { Name, id, __type__ : { name } }
        }
    ),
    rel_aggs := (
        SELECT IfcRelAggregates {
            id,
            RelatingObject : { Name, id, __type__ : { name } },
            RelatedObjects : { Name, id, __type__ : { name } }
        }
    )
}
which returns the following (FYI: I shortened the results JSON for the sake of compactness):
{
'spatial_stru': [
{
'id': 'fb6d6d2a-f2be-11ec-ac74-23608326f6e7',
'RelatingStructure': {
'Name': 'Sublevel_1a_2a',
'id': 'f6980fd0-f2be-11ec-ac74-abe5fe6aa302',
'__type__': {
'name': 'default::IfcBuildingStorey'
}
},
'RelatedElements': [
{
'Name': 'bm1_1',
'id': 'f96474a6-f2be-11ec-ac74-e796fcfa53d3',
'__type__': {
'name': 'default::IfcBeam'
}
},
{
'Name': 'bm2_1',
'id': 'f9835114-f2be-11ec-ac74-d7a2757c5f6b',
'__type__': {
'name': 'default::IfcBeam'
}
}
]
},
...
'rel_aggs': [
{
'id': 'f64d4aea-f2be-11ec-ac74-4bd25461394f',
'RelatingObject': {
'Name': 'AdaProject',
'id': 'f2ef5848-f2be-11ec-ac74-d764e1155ac1',
'__type__': {
'name': 'default::IfcProject'
}
},
'RelatedObjects': [
{
'Name': 'SpatialHierarchy1',
'id': 'eebd84e8-f2be-11ec-ac74-e7851f987412',
'__type__': {
'name': 'default::IfcSite'
}
}
]
},
...
Then I just use regular Python to find the parent/children relationships of the spatial hierarchy and slice out all sub-elements of Sublevel_1_a. I also include the parent elements to "form a straight line of elements to the top level". This is so that the sublevel is "anchored" in the total spatial tree.
For the next query I have all the EdgeDB object references (uuids) and their respective class types, which I can use together with the baked-in ifcopenshell schema to find all the related classes hiding in the nested chains of properties on the different IFC object types.
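For reference, the client-side slicing step could look roughly like the sketch below, assuming the spatial_stru and rel_aggs result sets from the query above are available as plain Python dicts (the helper names are my own):

# Sketch only: slices out everything below a given spatial element using the
# spatial_stru / rel_aggs results returned by the query above.
def build_children_map(spatial_stru: list, rel_aggs: list) -> dict:
    """Map each parent uuid to the uuids of its direct children."""
    children = {}
    for rel in rel_aggs:
        parent = rel["RelatingObject"]["id"]
        children.setdefault(parent, []).extend(
            obj["id"] for obj in rel["RelatedObjects"]
        )
    for rel in spatial_stru:
        parent = rel["RelatingStructure"]["id"]
        children.setdefault(parent, []).extend(
            el["id"] for el in rel["RelatedElements"]
        )
    return children

def collect_subtree(root_id: str, children: dict) -> set:
    """Walk downwards from a spatial element and collect all descendant uuids."""
    collected, stack = set(), [root_id]
    while stack:
        current = stack.pop()
        collected.add(current)
        stack.extend(children.get(current, []))
    return collected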
I'll just copy this into the markdown file for further development before sending my question over to the EdgeDB folks.
Any thoughts or suggestions?
With EdgeDB it seems I have to know upfront exactly the object types in the chain of nested properties in order to get all the object information I need.
Yes, on the one hand maybe it's schemaless vs strongly typed in action, and blind traversal is also not the full solution because sometimes you need inverse attributes (StyledByItem, for example). Also, with the nearly 1000 classes I don't really see how you can conveniently represent them in vertex collections (maybe not a problem), especially considering things like ordered lists of instances (maybe also not a problem).
Then I just use regular python to find parent/children relationships of the spatial hierarchy and slice out all sub-elements of ...
While this does give us very granular reads on the object types (we're not reading the entire model, because we leave out all geometry, for example), it still doesn't sound very scalable or ideal for large graphs. What would it look like if you do start off with a specific constraint on the Name of the spatial hierarchy?
What would it look like if you do start off with a specific constraint on the Name of the spatial hierarchy?
Not 100% sure if I understood the question, but if you're asking what the result of my SELECT query would be if I filter the spatial hierarchy-related elements by a specific Name, then the following query and result apply:
SELECT {
    spatial_stru := (
        SELECT IfcRelContainedInSpatialStructure {
            id,
            RelatingStructure : { Name, id, __type__ : { name }, OwnerHistory },
            RelatedElements : { Name, id, __type__ : { name }, OwnerHistory }
        } filter .RelatingStructure.Name = 'Sublevel_1_a'
    ),
    rel_aggs := (
        SELECT IfcRelAggregates {
            id,
            RelatingObject : { Name, id, __type__ : { name }, OwnerHistory },
            RelatedObjects : { Name, id, __type__ : { name }, OwnerHistory }
        } filter .RelatingObject.Name = 'Sublevel_1_a'
    )
}
which would result in
[
{
"spatial_stru": [],
"rel_aggs": [
{
"id": "02f9df7c-fb68-11ec-ac95-d3d7429b4ff8",
"RelatingObject": {
"Name": "Sublevel_1_a",
"id": "fde66820-fb67-11ec-ac95-3f521c8ae9ff",
"__type__": {
"name": "default::IfcBuildingStorey"
},
"OwnerHistory": {
"id": "f5765b28-fb67-11ec-ac95-17f3c5afea6e"
}
},
"RelatedObjects": [
{
"Name": "Sublevel_1a_2a",
"id": "01806670-fb68-11ec-ac95-1b47e635f7d0",
"__type__": {
"name": "default::IfcBuildingStorey"
},
"OwnerHistory": {
"id": "f5765b28-fb67-11ec-ac95-17f3c5afea6e"
}
},
{
"Name": "Sublevel_1a_2b",
"id": "01cf6b44-fb68-11ec-ac95-ff09a17ad994",
"__type__": {
"name": "default::IfcBuildingStorey"
},
"OwnerHistory": {
"id": "f5765b28-fb67-11ec-ac95-17f3c5afea6e"
}
}
]
}
]
}
]
I could add a second SELECT query using the filtered result within the same query session to find the spatial and physical elements at the level below this one. However, since I do not know the total depth below the queried spatial level, it will fail to catch the sublevels/sub-elements further down the hierarchy. Maybe it's possible to solve this with a for loop in my query (there is no while loop, but for loops exist), but as of now I think the current implementation of doing the slicing on the client side is a fair compromise.
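One alternative to looping inside the query would be to drive the iteration from Python, fetching one level of IfcRelAggregates per round trip until nothing new comes back. A minimal sketch, assuming the edgedb Python client and the class names used in the examples above (it only follows IfcRelAggregates, not IfcRelContainedInSpatialStructure):

# Sketch only: walks the spatial hierarchy one level per round trip, starting
# from a named spatial element, until no further IfcRelAggregates are found.
import edgedb

client = edgedb.create_client()

def descend_from(name: str) -> set:
    """Collect the uuids of everything aggregated below the named element."""
    collected = set()
    frontier = [
        row.id
        for row in client.query(
            "SELECT IfcBuildingStorey FILTER .Name = <str>$name", name=name
        )
    ]
    while frontier:
        rows = client.query(
            """
            SELECT IfcRelAggregates { RelatedObjects: { id } }
            FILTER .RelatingObject.id IN array_unpack(<array<uuid>>$parents)
            """,
            parents=frontier,
        )
        frontier = [obj.id for rel in rows for obj in rel.RelatedObjects]
        collected.update(frontier)
    return collected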
Was that close to what you were asking for? :)
edit: A side note is that I added OwnerHistory to my SELECT query, as it seemed logical to export it and evaluate it on the client side in case users want to do updates. That way we could diff on the last-modified date stamp of all the IfcOwnerHistory elements associated with the relevant elements, to limit the number of elements that need updating.
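A rough sketch of what that date-stamp check could look like, assuming OwnerHistory.LastModifiedDate is mapped as an integer timestamp (IfcTimeStamp) in the EdgeDB schema, and using IfcBeam purely as an example class:

# Sketch only: returns elements whose OwnerHistory was modified after the
# client's last sync; the int64 mapping of LastModifiedDate is an assumption.
import edgedb

client = edgedb.create_client()

def modified_since(last_sync: int) -> list:
    """Fetch beams touched after the given IfcTimeStamp value."""
    return client.query(
        """
        SELECT IfcBeam { Name, id }
        FILTER .OwnerHistory.LastModifiedDate > <int64>$since
        """,
        since=last_sync,
    )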
Ok, so to summarize, there are three options:
For 1. and 2. there is probably a tradeoff where 2. becomes more efficient than 1., but I guess the database needs to be really, really big before that's going to happen.
For 3., a follow-up question would be: is there something like an 'optional' clause or a 'union' clause (these exist in SPARQL), or the equivalent of an outer join in SQL? So that you could basically make several attempts at guessing the structure in one query.
Or maybe a 4th option:
Yes, a union operator exists in EdgeDB.
Regarding recursive queries
I did stumble upon this discussion about recursive querying of links. At the time they did not support it (but that was two years ago).
I also found some discussion on aggregate functions, which also seems interesting.
On a related note, my primary focus these last couple of days has been digging into building queries for classes that contain long chains of nested object properties, select types, and abstract classes with many subtypes. When I started working on slicing the spatial hierarchies, I frequently found that my query builder would miss properties pointing to abstract/non-abstract classes (IfcProfileDef, for example) with subtypes that in turn have more nested information inside their own property chains.
You can take a look at some reflections I've made here. I think one improvement might be skipping OwnerHistory and doing a separate query for it. That could potentially save a lot of redundant output when traversing many rooted elements. In most cases I guess elements are generated/modified by the same user/application.
BTW! EdgeDB is releasing v2.0 on Thursday 14th of July. Hopefully there are some new features in v2.0 that might be helpful for us!
I see, I also think it's simpler to compartmentalize your query. First determine the Guids of elements you're interested in. Then have queries to get their OwnerHistory / Geometry / Properties / Associations also depending on the kind of use case.
Not sure which union this is. I was hoping it would, for example, allow you to write a query to get all property set properties irrespective of their type. Basically you have a query for PropSingleValue, PropEnumeratedValue, ... and then you union them. That would be one way out of needing generic traversal in cases where there are limited options (there aren't that many property types).
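If EdgeQL's set union operator behaves the way I expect, a query along these lines might cover that case without generic traversal. This is an unverified sketch: it assumes the property classes (names taken from the IFC schema) are mapped as types in the EdgeDB schema and that .Name is defined on all of them via their common supertype:

# Sketch only: uses EdgeQL's set `union` to read Name across several property
# types in one query; the mapped type names are assumptions.
import edgedb

client = edgedb.create_client()

names = client.query(
    """
    SELECT (
        IfcPropertySingleValue
        union IfcPropertyEnumeratedValue
        union IfcPropertyListValue
    ).Name
    """
)
print(names[:10])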
@aothms @Moult
Hey, now that I have gotten a bit more comfortable with EdgeDB and the IFC schema I believe it's time to demonstrate some of the possibilities of an IFC Database :)
Therefore I have started to build a prototype of what I think can become "realtime collaborative development", using EdgeDB (IfcDB) as a backend for BlenderBIM.
Here is a rough overview of what I had in mind and have begun working on. Any feedback is most welcome!
Login: a button that authenticates using Azure AD and returns a JWT token for all subsequent API calls.
Push: a button that uploads some IFC entity instances from the current BlenderBIM IFC scene.
Pull: a button that downloads IFC entity instances from the IFC database.
Go Live: a button that starts a process that listens for updates from other BlenderBIM instances using the Azure Storage Queue messaging service.
I have started to split the work into separate tasks, and you can follow the progress here. A rough operator skeleton for one of these buttons is sketched below.
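Just to indicate the shape of these buttons on the Blender side, here is a minimal operator skeleton for the Pull button; the operator names are made up and the actual download/merge logic is left as a placeholder:

# Sketch only: a minimal Blender operator for a "Pull" button; the body is a
# placeholder for the actual call to the IFC database.
import bpy

class IFCDB_OT_pull(bpy.types.Operator):
    """Download IFC entity instances from the IFC database."""
    bl_idname = "ifcdb.pull"
    bl_label = "Pull"

    def execute(self, context):
        # Placeholder: fetch entities (via the REST API or the edgedb client)
        # and merge them into the active BlenderBIM IFC model here.
        self.report({'INFO'}, "Pulled entities from IfcDB")
        return {'FINISHED'}

def register():
    bpy.utils.register_class(IFCDB_OT_pull)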
Btw: A fun thing with EdgeDB is that you can visualize your uploaded schema in the browser.
I am now testing with a portion of the IFC schema (I have limited it to only 229 of the 1400+ classes of the IFC schema) running in Azure.
@Moult Regarding my Go Live functionality, I need some way of creating a separate thread inside Blender that can listen for messages using Azure Storage Queue. I did find some code for starting processes on other threads inside Blender in the SendToUnreal work by EpicGames that I can base some of it on. But if you have any better suggestions I would love to hear your thoughts on the matter!
Anyways, my ambition is to have a PoC ready in a few weeks. So any feedback from you is as always most welcome!
Cheers
This is awesome and I can't believe I missed this message. Maybe @gorgious56 knows some tricks on neat ways to integrate this with the Blender process.
He gave me a tip in the OSArch live chat to look into app handlers to run code periodically (https://docs.blender.org/api/current/bpy.app.handlers.html), which definitely could solve this.
I am very close to having something workable in my PoC, so my guess is that I'll have something running by the end of this week!
I'll provide some updates here as soon as I have something that I can show :)
Hello! I'm honored to be called in here, but unfortunately we're treading very far from my area of expertise ^^
You might also want to look through the application timers page https://docs.blender.org/api/current/bpy.app.timers.html
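For what it's worth, a minimal sketch of the timer approach, reusing the hypothetical ifcdb-changes queue from earlier (polling on the main thread keeps things simple, though a slow network call would briefly block the UI):

# Sketch only: polls the hypothetical "ifcdb-changes" queue every 60 seconds
# from Blender's main thread via bpy.app.timers, so no extra thread is needed.
import bpy
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    conn_str="<connection-string>", queue_name="ifcdb-changes"
)

def check_for_updates() -> float:
    for message in queue.receive_messages():
        print("Update from another BlenderBIM instance:", message.content)
        queue.delete_message(message)
    return 60.0  # returning a float re-schedules the timer in N seconds

bpy.app.timers.register(check_for_updates)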
Here's a small update.
I finally managed to upload the entire IFC schema into a single EdgeDB instance, and this is how it looks in the EdgeDB UI.
@Moult @aothms
I am planning my next steps of experiments with EdgeDB, and I think it would be a good idea to define some specific scenarios in which we want to test/benchmark an IFC database.
Originally, my use cases for an IFC database centered around a diffing mechanism on local IFC content. That way, only what you have modified in your local IFC content is what you send back to the database and check for differences there.
PS! I have started to build docs to structure these different scenarios in a more readable manner.
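As a very small sketch of the client-side bookkeeping such a diffing mechanism could start from (the helper names are made up, and how entities are serialized and pushed is left out):

# Sketch only: track which entities were touched locally and push only those;
# `send` stands in for whatever upload mechanism ends up being used.
modified_guids = set()

def mark_modified(element) -> None:
    """Call whenever a rooted IFC entity instance is edited locally."""
    modified_guids.add(element.GlobalId)

def push_changes(send) -> None:
    """Push only the locally modified entities back to the database."""
    for guid in sorted(modified_guids):
        send(guid)
    modified_guids.clear()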
Any thoughts on this?