penpot / penpot-exporter-figma-plugin

Penpot exporter Figma plugin
https://www.figma.com/community/plugin/1219369440655168734/penpot-exporter
Mozilla Public License 2.0

Performance thread #149

Open sneko opened 2 weeks ago

sneko commented 2 weeks ago

Hi,

Following up on my message about performance issues with both exporting and importing (https://community.penpot.app/t/figma-to-penpot-export-plugin/5554/7?u=sneko), I'm opening this thread so as not to flood the official forum announcement.

I had a look at how Figma nodes are browsed, and it seems that what justifies having async on all the functions like transformChildren / translateChildren / transformSceneNode / transformGroupNode / ... is what happens inside transformInstanceNode for a remote component (both await node.getMainComponentAsync() and await registerExternalComponents(mainComponent)).

I'm not familiar with those, but if they do make remote calls, then since all the browsing is done with chained awaits inside for loops, everything effectively runs as sequential operations. I'm wondering if using something like Promise.all() instead would help when "real async functions" are called.
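For illustration, a minimal sketch of the difference (transformSceneNode stands in for the plugin's real function; the stub declaration is just to keep the snippet self-contained):

```ts
// Stub standing in for the plugin's real transform function.
declare function transformSceneNode(node: SceneNode): Promise<unknown>;

// Awaiting inside a for loop: each remote call only starts after the
// previous one has finished, so the whole traversal is serialized.
async function transformChildrenSequential(children: readonly SceneNode[]) {
  const results: unknown[] = [];
  for (const child of children) {
    results.push(await transformSceneNode(child));
  }
  return results;
}

// Promise.all: all transforms are started first, so the remote calls
// overlap instead of waiting on each other.
async function transformChildrenParallel(children: readonly SceneNode[]) {
  return Promise.all(children.map(child => transformSceneNode(child)));
}
```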

[EDIT: while writing this I saw your message on the forum] It seems you hit issues when parallelizing those. Just some ideas instead of a plain Promise.all() that makes Figma crash:

  1. You could cap the parallelism with libraries like async (using for example eachOfLimit(), cf. https://www.npmjs.com/package/async). Since there are many levels of abstraction, it needs to be decided where to apply this cap. Only at the "page layer" level, with a Promise.all by default for sub-layers?
  2. Or have more complex logic with a dynamic parallelism limit based on the information available in window.performance.memory (it's still hard to know the machine's capacity, but it may help?)
  3. Or let the user enter a "speed value", as tools that convert video formats do... like "ultra-fast, fast, medium, slow, ultra-slow" (which would define the parallelism limit; if it crashes, the user would understand that choosing a lower speed should do the trick)
  4. Since there are multiple levels, if level N-4 already has 3 transforms in flight (but blocked on network requests), the eachOfLimit logic may not help since its limit is scoped to the call site. In that case, adjust with a shared instance? Or a shared semaphore/mutex? (See the sketch right after this list.)
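
To make point 4 concrete, here is a minimal sketch of a shared counting semaphore (the names and the limit value are assumptions, not the plugin's code); a single instance caps in-flight remote calls across every tree level, which a per-call-site eachOfLimit cannot do:

```ts
// A minimal shared counting semaphore. One global instance bounds the
// number of concurrent remote calls, regardless of which tree level
// triggers them.
class Semaphore {
  private readonly waiters: Array<() => void> = [];
  private active = 0;

  constructor(private readonly limit: number) {}

  async acquire(): Promise<void> {
    if (this.active < this.limit) {
      this.active++;
      return;
    }
    // Queue up; release() hands the slot over without touching `active`.
    await new Promise<void>(resolve => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) {
      next(); // hand the slot directly to the next waiter
    } else {
      this.active--;
    }
  }
}

// Hypothetical global cap (4 is arbitrary; a "speed value" could tune it).
const remoteCalls = new Semaphore(4);

async function withRemoteSlot<T>(task: () => Promise<T>): Promise<T> {
  await remoteCalls.acquire();
  try {
    return await task();
  } finally {
    remoteCalls.release();
  }
}
```

Every place that performs a remote call would then wrap it, e.g. withRemoteSlot(() => node.getMainComponentAsync()), so the cap holds globally instead of per loop.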

(I will keep you updated if I get some free time at work to contribute to the plugin; that will be easier than just saying "try this or this". Sorry for that.)

cc @Cenadros

EDIT2: additional notes:

  1. When using RemoteComponentsLibrary, also parallelize?
  2. Maybe use a Map inside RemoteComponentsLibrary / ComponentLibrary instead of a plain object? And maybe delete entries from the Map once a given iteration has been fully processed (not sure this one can get huge)? The latter could apply every time a new object is created (as opposed to a reference to an internal Figma one). (See the sketch after this list.)
  3. Use streams? Here we don't read a huge file as input (or an entire big database table); discovery happens step by step through the Figma Plugin API... but I wonder if running the "transform" process as a stream could help factor out the parallelization logic, and also help pre-format some bundle files (though it may imply doing the generation on the plugin side, not the UI side)
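
A quick sketch of the Map idea from point 2 (register/processAll and the key scheme are assumptions, not the plugin's real API):

```ts
// Stub standing in for the plugin's per-component transform step.
declare function transformComponent(node: ComponentNode): Promise<void>;

// A Map gives O(1) delete and iteration in insertion order, so processed
// components can be released instead of accumulating on one big object.
const pendingComponents = new Map<string, ComponentNode>();

function register(component: ComponentNode): void {
  pendingComponents.set(component.key, component);
}

async function processAll(): Promise<void> {
  for (const [key, component] of pendingComponents) {
    await transformComponent(component);
    pendingComponents.delete(key); // free the entry once it is done
  }
}
```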

EDIT3: Allow exporting specific pages? It would let people export in chunks (one export for a huge page, another for the rest), so the export process works and the import does not make Penpot crash (it breaks the ideal, but makes things usable).

EDIT4: Did you think about using the Figma REST API instead of the Figma Plugin API? It would avoid needing a running Figma instance, but I'm not sure everything is possible (despite seeing endpoints to get nodes...). The downside is that it's less accessible for the average user (an access token to enter, and a binary to run or host somewhere).

EDIT5: Just saw there is a downloadable report when Penpot crashes while opening an import (not really visible in dark mode). It's always the same error:

{:type :validation, :code :request-body-too-large, :hint "http error"}

I should dig into it.

sneko commented 2 weeks ago

Hi @jordisala1991 and @Cenadros,

I will soon have a few days to contribute to your plugin, and I'm definitely interested in your thoughts on the following.

The final goal

The French government's design system (named DSFR) is currently only available and maintained on Figma (https://www.figma.com/@gouvfr and https://github.com/GouvernementFR/dsfr), and we are interested in the open source approach of Penpot.

The most effective approach for us would be to keep our design system designers working only in Figma (since that's what they do right now, and to avoid duplicating the hard work of maintaining multiple platforms), while getting a similar enough result in Penpot (which implies document synchronization), so our agents can use it.

Context and roadmap

The initial plan I had was:

  1. Prove manually that this plugin can export our design system to Penpot
  2. Take a few days to automate this: watch for Figma modifications once a day and, if there are any, create a new version on Penpot (I imagined the automation as a GitHub Action using Playwright or Puppeteer to log into Figma, run the plugin, save the .zip, open Penpot and import the archive; see the rough sketch below).
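
For the record, a very rough sketch of what that automation step could look like (the URL and the whole flow are assumptions; Figma's login and plugin UI would need real investigation):

```ts
import { chromium } from 'playwright';

// Rough outline: drive a headless browser through the export, then
// hand the resulting .zip to a Penpot import step.
async function exportDesignSystem(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto('https://www.figma.com/login');
  // ...authenticate, open the document, launch the Penpot exporter plugin...

  // Start listening for the download before triggering the export.
  const downloadPromise = page.waitForEvent('download');
  // ...click the plugin's export button here...
  const download = await downloadPromise;
  await download.saveAs('./penpot-export.zip');

  await browser.close();
}
```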

As stated before, I ran into some performance issues.

I see 2 possible paths to follow:

A. Initial plan

I keep the initial plan and help you improve export performance. But I cannot guarantee, in the short term, to also help improve Penpot's own performance so it does not crash when a document has too many components.

So after those days, there is a risk our design system would be on Penpot but not usable until the core team finds ways to improve Penpot performance (improvements were announced during the last live), or until the plugin deduplicates enough of the current duplicates (though I'm not 100% sure that would solve the issue, since the original document is huge). Or maybe it would be usable, but only on powerful computers? (Mine is a 2020 MBP.)

B. API Alternative plan

Otherwise, build a third binary that performs the export and the import through the Figma and Penpot REST APIs.

The global idea would be:

  1. Enter the document IDs to synchronize; the tool fetches fonts, files and nodes locally (saving them to files for further processing), and in case of a shared library (a design system or similar) it adds the shared document to the list of documents to synchronize
  2. Locally transform the Figma node data into the Penpot node format. The idea would be to reuse your conversion logic, but I guess it's not 100% reusable since you go through SVG files, and the Penpot API may expect a slightly different format. At least you provide a strong base I can try to map to the other format.
  3. Get the current nodes from Penpot (in case the corresponding documents already exist)
  4. Compute the difference, classifying into add/modify/delete/idle (both for documents and document nodes); see the sketch after this list
  5. Save the ID mapping locally
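
To illustrate step 4, a minimal sketch of the classification (the SyncNode shape and the fingerprint field are assumptions I would have to define, not anything from the plugin or the Penpot API):

```ts
// A node as tracked by the sync tool, keyed by the stable mapping ID
// saved in step 5, with a content hash to detect modifications.
interface SyncNode {
  id: string;
  fingerprint: string;
}

interface Diff {
  add: SyncNode[];
  modify: SyncNode[];
  remove: string[];
  idle: string[];
}

// Compare the transformed Figma tree against what already exists in Penpot.
function diffNodes(figma: SyncNode[], penpot: Map<string, SyncNode>): Diff {
  const diff: Diff = { add: [], modify: [], remove: [], idle: [] };
  const seen = new Set<string>();
  for (const node of figma) {
    seen.add(node.id);
    const existing = penpot.get(node.id);
    if (!existing) diff.add.push(node);
    else if (existing.fingerprint !== node.fingerprint) diff.modify.push(node);
    else diff.idle.push(node.id);
  }
  for (const id of penpot.keys()) {
    if (!seen.has(id)) diff.remove.push(id); // gone from Figma, delete in Penpot
  }
  return diff;
}
```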

Pros I see:

Cons:


Did you already think about path (B)? Do you think it can make a difference in terms of performance/usability, and would the benefit be big enough compared to (A)? Or is it a waste of time, and better to focus fully on (A)?

Thank you,

By the way: do you have a specific Discord (or similar) for discussions about this plugin?

cc @niwinz @Alotor