ThatOpen / engine_components

MIT License
328 stars, 129 forks

Multi-thread (multi-worker for ifcparser) #259

Closed RyugaRyuzaki closed 4 months ago

RyugaRyuzaki commented 9 months ago

Description 📝

Usually when we parse a model, the process takes place in two parts: the geometry parser and the property parser (see https://github.com/IFCjs/components/blob/main/src/fragments/FragmentIfcLoader/index.ts#L135-L141, lines 135–141). That means the total parsing time is the sum of those two processes. Now I am trying a solution where these two processes run in parallel; when one of them finishes, we use three.js to generate the geometry. Say the geometry parser takes 80 s and the property parser takes 60 s, so the sequential total is 140 s. If we split them into two parallel processes, it only takes us 80 s.
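The timing argument above can be sketched in a few lines. This is a minimal illustration, not the library's actual code: the two "parsers" just simulate work with timers (the 80 s / 60 s from the example scaled down to milliseconds), and `Promise.all` shows how the wall-clock time becomes the maximum of the two instead of the sum.

```typescript
// Hypothetical stand-ins for the geometry and property parsers.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function geometryParser(): Promise<string> {
  await delay(80); // stands in for the 80 s geometry parse
  return "geometry";
}

async function propertyParser(): Promise<string> {
  await delay(60); // stands in for the 60 s property parse
  return "properties";
}

async function parseModelInParallel(): Promise<[string, string]> {
  // Both parsers start immediately; total time ≈ max(80, 60), not 80 + 60.
  const [geometry, properties] = await Promise.all([geometryParser(), propertyParser()]);
  return [geometry, properties];
}
```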

Suggested solution 💡

To speed up the process and prevent the browser from crashing, I use workers. I created two workers: IfcGeometryWorker and IfcPropertyWorker. You can test to see the results; below is the repo I use: https://github.com/RyugaRyuzaki/multi-worker_openbim-components. I know this method has been used in the past, but as @agviegas explained, it can take longer to load on the website, because a worker file can be up to 5 MB.
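A common pattern for the two-worker setup described above is to wrap each worker's single reply in a Promise and await both together. The sketch below is an assumption about how the linked demo repo is wired, not this library's API; the worker names in the comment come from the text, and `WorkerLike` is a minimal structural interface so the wrapper works with any object exposing `onmessage`/`onerror`/`postMessage` (such as a browser `Worker`).

```typescript
// Minimal structural subset of the browser Worker API.
interface WorkerLike {
  onmessage: ((ev: { data: unknown }) => void) | null;
  onerror: ((err: unknown) => void) | null;
  postMessage(msg: unknown): void;
}

// Wrap a worker's next reply in a Promise so it composes with Promise.all.
function runWorker<T>(worker: WorkerLike, payload: unknown): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    worker.onmessage = (ev) => resolve(ev.data as T);
    worker.onerror = (err) => reject(err);
    worker.postMessage(payload); // the worker parses and posts its result back
  });
}

// In the browser this would look something like (hypothetical file names):
//   const geometry = runWorker(new Worker("IfcGeometryWorker.js"), buffer);
//   const properties = runWorker(new Worker("IfcPropertyWorker.js"), buffer);
//   const [geo, props] = await Promise.all([geometry, properties]);
```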

Alternative ⛕

No response

Additional context ☝️

No response

agviegas commented 9 months ago

Hey @RyugaRyuzaki, this is quite interesting. We are now working with tiles, so that we can open any model in under 5 seconds. Of course, that still requires converting the IFC to fragments and storing the result in some backend, but I think it is more scalable than the current frontend-only approach we have right now (doing everything in the browser). What do you think?

RyugaRyuzaki commented 9 months ago

Yes, I saw it, and of course we can use multiple threads on the server side.

agviegas commented 4 months ago

We are not planning to implement multithreading for now, as the current streaming features already solve scalability issues for big models, and we don't have the technical capacity to investigate multithreading right now. Closing this!