Closed: aheber closed this issue 5 years ago
Recommendations for reducing the Static Resource footprint or using Unlocked Packages are being considered but still don't address the underlying limitation of the tooling.
I would expect to be able to hold the entire 200MB static resource org limit in my project and have the tooling still able to function.
@aheber The limits are in place on the API, not the tooling, as you pointed out in a previous comment. The reality is we could allow you 200 GB in your org and some piece of the API or underlying protocols might still hit some limit. Having said that, some limits (not sure if the 40 MB limit fits this) were somewhat arbitrary and mostly conservative. To that end, we have been on a journey over the last 5 or 6 releases to increase or, in some cases, remove limits. I don't have the background at hand on this particular limit, and it may be there for an actual reason, so I will find out.
We have two paths forward, maybe combined.
1) You can put your static resources into a different folder and then, when you set up a scratch org, `source:deploy` the static resources, then `source:push` the rest of the project metadata.
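The two-step workaround above can be sketched as follows. This assumes the large static resources have been moved into a folder (here called `unpackaged-resources`, an illustrative name) that is not listed in `sfdx-project.json`, so `source:push` no longer sees them:

```shell
# Sketch of workaround 1 (folder name is an assumption, not from the thread).

# 1. Deploy the large static resources directly, outside of push tracking.
sfdx force:source:deploy --sourcepath unpackaged-resources/staticresources

# 2. Push the rest of the tracked project metadata as usual.
sfdx force:source:push
```

Both commands require an authenticated default org, so this is a usage sketch rather than something runnable in isolation.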
2) Increase the limits on the SOAP message. This is never going to be a final fix: there are practical limits on the size of a SOAP message (closer to 2 GB), and ultimately someone will want to do more than that. Additionally, if you understand SOAP, the message needs to be serialized on the client side and deserialized, in its entirety, on the server side. This will lead to performance issues and possibly memory issues on both the server and the client.
From a practical standpoint, I think number 1 above is best. If your static resources are zip files, then you really can't change their contents from within the org. The best you can do is replace the zip file after editing the contents, which is what `source:deploy` does for you.
@dcarroll thanks for responding.
Option 1 sounds pretty good, but it moves the static resources out of the push-tracked files. That would mean any changes to the static resources (CSS changes, most often) would have to be "manually" tracked and pushed. Not the worst problem in the world.
My linked issue #110 brings to light another problematic behavior of `source:deploy`: it destroys the push state tracking for all other metadata. If, instead of destroying the push state, the deploy actually counted as a tracked push, we could side-step this issue without significant effort on our part.
I think #110 is a bug. You should be able to use `source:push`, `source:pull`, `source:deploy`, and `source:retrieve` without mucking up the local or remote source tracking, I think.
Closing, but will be digging into #110.
@clairebianchi I want to make a case for this. DX should do something other than fall over because I have too much code/static resources/etc. #110 still only helps as a workaround to the limit described here.
Is this a "the team doesn't consider this a problem" or a "they are unable to take on the work to fix this as an unsupported edge case"? If the latter, could this be a backlog item somewhere instead of being closed?
For any future readers: I've built an SFDX plugin to help handle this. It can deploy all static resources via the Tooling API, which lets us avoid pushing ALL content at once and bypasses the error.
From there we `.forceignore` all static resources temporarily during the initial push, and that gets us off the ground. As an added bonus, we went from a 7+ minute static resource deployment to ~30 seconds.
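The temporary ignore step described above can be sketched as a `.forceignore` entry (the glob pattern is an assumption; adjust it to your project layout):

```
# .forceignore (temporary, during the initial push only):
# hide every static resource so force:source:push stays under the limit
**/staticresources/**
```

Once the initial push succeeds, the entry is removed so later pushes track static resource changes again.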
5 years later, is aheber's SFDX plugin (thank you!) still the right way to go?
@gwarburton I'm not sure that's the best option anymore. I think the REST Metadata API now does some of this better; I'd give that a try first.
Much of the file-size limit is due to the SOAP API that is normally used. If you force the CLI over to the REST API, some things work better.
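As a sketch of "forcing the CLI over to the REST API": older `sfdx` versions exposed a `restDeploy` config value for this; treat the exact setting name as an assumption for your CLI version (newer `sf` CLI releases renamed it):

```shell
# Opt the sfdx CLI into REST-based metadata deploys (legacy sfdx config
# name; newer sf CLI versions use org-metadata-rest-deploy instead).
sfdx config:set restDeploy=true
```

This requires the CLI to be installed and configured, so it is shown here for illustration only.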
I realized I didn't have to do everything at once, so I'm setting the pipeline to do 3+ deployments to get everything into the scratch org. Thanks for the advice 😃
Summary
Trying to use `force:source:push` to deliver metadata and receiving an error that the request is too large.
Steps To Reproduce:
Add a significant amount of metadata; static resource files are the easiest way. Static resources have an individual max of 5 MB, so you'll need several of them. The compressed total size should be > 40 MB to be safe.
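The reproduction step above can be sketched by generating incompressible dummy resources (paths and names here are illustrative, not from the report). Eight 5 MB files of random bytes won't shrink when DX zips them, so the payload safely exceeds 40 MB:

```shell
# Generate eight ~5 MB static resources plus their -meta.xml companions.
mkdir -p force-app/main/default/staticresources
for i in 1 2 3 4 5 6 7 8; do
  # /dev/urandom output is effectively incompressible, so zipping won't help
  dd if=/dev/urandom of="force-app/main/default/staticresources/big$i.resource" \
     bs=1M count=5 2>/dev/null
  cat > "force-app/main/default/staticresources/big$i.resource-meta.xml" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<StaticResource xmlns="http://soap.sforce.com/2006/04/metadata">
    <contentType>application/octet-stream</contentType>
</StaticResource>
EOF
done
```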
Try to deliver that metadata to an org using `sfdx force:source:push`.
Expected result
Push would be successful and deal with the various size limitations without crashing.
DX already pre-compresses the static resources into zip files and stores them in the temp directory. It might be appropriate to have a flag or other configuration that authorizes delivering the static resources in appropriately sized chunks BEFORE the main deployment. Static resources don't have external dependencies that I know of, so they are an ideal candidate to pre-load before the remainder of the metadata.
This does open the scratch org up to partial success and a somewhat unpredictable state. I think that is a reasonable trade-off, and it is also why I recommend this be a behavioral flag instead of the default behavior.
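A minimal sketch of the chunked pre-load proposed above, deploying each static resource individually before the main push so no single payload approaches the limit (paths and the one-resource-per-call granularity are assumptions):

```shell
# Hypothetical pre-load: deploy static resources one at a time, then
# push the rest of the project. Each deploy stays far under the limit.
for meta in force-app/main/default/staticresources/*.resource-meta.xml; do
  sfdx force:source:deploy --sourcepath "${meta%-meta.xml}"
done
sfdx force:source:push
```

A real implementation would batch resources greedily up to the size limit rather than one per call, but the per-file loop shows the shape of the idea. Both commands need an authenticated org.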
Actual result
Push fails and offers no reasonable recourse to use that mechanism to deliver the configuration.
Additional information
This is a specific documented limit: https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_deploy.htm. The base64-encoded size of the deploy package cannot be over 50 MB, which works out to roughly 39 MB on disk.
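The encoded-versus-on-disk gap comes from base64 itself, which maps every 3 raw bytes to 4 ASCII characters (~33% inflation). A quick check of the ratio:

```shell
# base64 turns 3 input bytes into 4 output characters, so a 50 MB
# encoded limit corresponds to roughly 50 * 3/4 MB of raw zip payload.
printf 'abc' | base64   # 3 bytes in, 4 characters out: YWJj
echo $((50 * 3 / 4))    # integer MB estimate of the raw limit: 37
```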
In my case I have roughly 4 MB of metadata configuration files (Apex, Aura, LWC, objects, etc.) and 36 MB of static resources.
SFDX CLI version (to find the version of the CLI engine, run `sfdx --version`):
sfdx-cli/7.8.1-8f830784cc win32-x64 node-v10.15.3
SFDX plugin version (to find the version of the CLI plugin, run `sfdx plugins --core`):
OS and version: Windows 10