forcedotcom / cli

Salesforce CLI
https://developer.salesforce.com/docs/atlas.en-us.sfdx_cli_reference.meta/sfdx_cli_reference/
BSD 3-Clause "New" or "Revised" License

force:source:push -- ERROR: Maximum size of request reached. Maximum size of request is 52428800 bytes #109

Closed — aheber closed this issue 5 years ago

aheber commented 5 years ago

Summary

Trying to use force:source:push to deliver metadata and receiving an error that the request is too large.

Steps To Reproduce:

Add a significant amount of metadata, most easily static resource files. Static resources have an individual max of 5MB so you'll need a few of them. The compressed total size should be > 40MB to be safe.

Try and deliver that metadata to an org using sfdx force:source:push
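The reproduction can be sketched as a shell script (a minimal sketch: directory layout, file names, and the org alias are illustrative, and a real SFDX project would also need the accompanying `-meta.xml` files for each resource):

```shell
# Generate ~45 MB of incompressible "static resources" so the combined
# deploy package exceeds the ~39 MB on-disk ceiling.
mkdir -p force-app/main/default/staticresources
for i in $(seq 1 9); do
  # /dev/urandom data does not compress, so each 5 MB file stays ~5 MB zipped
  dd if=/dev/urandom of=force-app/main/default/staticresources/big$i.resource \
    bs=1M count=5 2>/dev/null
done
du -sh force-app/main/default/staticresources
# then attempt delivery (fails with the "Maximum size of request" error):
# sfdx force:source:push -u my-scratch-org
```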

Expected result

Push would be successful and deal with the various size limitations without crashing.

DX already pre-compresses the static resources into zip files and stores them in the temp directory. It might be appropriate to have a flag or other configuration that authorizes delivering the static resources in appropriately sized chunks BEFORE the main deployment. Static resources don't have external dependencies that I know of, so they are an ideal candidate to pre-load before the remainder of the metadata.

This does open the scratch org up to partial success and a somewhat unpredictable state. I think that is a reasonable trade-off, and it is also why I recommend this be a behavioral flag instead of the default behavior.

Actual result

Push fails and offers no reasonable recourse to use that mechanism to deliver configuration.

Additional information

This is a specific documented limit: https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_deploy.htm. The base64-encoded size of the deploy package cannot exceed 50 MB, which works out to roughly 39 MB on disk.
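The on-disk figure follows from the encoding overhead: base64 represents every 3 raw bytes as 4 characters, so a sketch of the arithmetic looks like this:

```shell
# Base64 expands data by 4/3, so the 52428800-byte (50 MB) encoded
# ceiling corresponds to 3/4 of that in raw zip bytes on disk.
encoded_limit=52428800
raw_limit=$((encoded_limit * 3 / 4))
echo "$raw_limit"  # 39321600 bytes, i.e. roughly 39 MB on disk
```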

In my case I have roughly 4MB of metadata configuration files (apex, aura, lwc, object, etc...) and 36MB of static resources.

SFDX CLI Version (to find the version of the CLI engine run sfdx --version): sfdx-cli/7.8.1-8f830784cc win32-x64 node-v10.15.3

SFDX plugin Version (to find the version of the CLI plugin run sfdx plugins --core):

@oclif/plugin-commands 1.2.2 (core)
@oclif/plugin-help 2.1.6 (core)
@oclif/plugin-not-found 1.2.2 (core)
@oclif/plugin-plugins 1.7.8 (core)
@oclif/plugin-update 1.3.9 (core)
@oclif/plugin-warn-if-update-available 1.7.0 (core)
@oclif/plugin-which 1.0.3 (core)
@salesforce/sfdx-trust 3.0.2 (core)
analytics 1.1.2 (core)
generator 1.1.0 (core)
salesforcedx 45.16.0 (core)
├─ force-language-services 45.12.0 (core)
└─ salesforce-alm 45.18.0 (core)

sfdx-cli 7.8.1 (core)
sfdx-typegen 0.6.0 (link) C:\Users\aheber\dev\sfdx-typegen

OS and version: Windows 10

aheber commented 5 years ago

Recommendations for reducing the Static Resource footprint or using Unlocked Packages are being considered but still don't address the underlying limitation of the tooling.

I would expect to be able to hold the entire 200MB static resource org limit in my project and have the tooling still able to function.

dcarroll commented 5 years ago

@aheber The limits are in place on the API, not the tooling, as you pointed out in a previous comment. The reality is we could allow you 200 GB in your org and some piece of the API or underlying protocols might still hit some limit. Having said that, some limits (not sure if the 40 MB limit fits this) were somewhat arbitrary and mostly conservative. To that end, we have been on a journey over the last 5 or 6 releases to increase or, in some cases, remove limits. I don't have the background at hand on this limit, but it may be there for an actual reason; I will find out.

We have two paths forward, possibly combined. 1) You can put your static resources into a different folder and then, when you set up a scratch org, source:deploy the static resources, then source:push the rest of the project metadata. 2) Increase the limits on the SOAP message. This is never going to be a final fix: there are practical limits on the size of a SOAP message (closer to 2 GB), and ultimately someone will want to do more than that. Additionally, a SOAP message needs to be serialized on the client side and deserialized, in its entirety, on the server side, which will lead to performance issues and possibly memory issues on both the server and the client.

From a practical standpoint, I think number 1 above is best. If your static resources are zip files, then you really can't change their contents from within the org. The best you can do is replace the zip file after editing the contents, which is what source:deploy does for you.
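Option 1 works because push tracking only covers the directories listed in sfdx-project.json; a folder left out of packageDirectories is invisible to source:push but can still be targeted with source:deploy -p. A sketch of such a project file (paths are illustrative assumptions, not from this thread):

```json
{
  "packageDirectories": [
    { "path": "force-app", "default": true }
  ],
  "namespace": "",
  "sourceApiVersion": "45.0"
}
```

The static resources, kept in a folder such as unmanaged-resources/ outside force-app, would then be delivered first with `sfdx force:source:deploy -p unmanaged-resources`, followed by `sfdx force:source:push` for the rest of the metadata.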

aheber commented 5 years ago

@dcarroll thanks for responding.

Option 1 sounds pretty good, but it moves the static resources out of the push-tracked files. This would mean that any changes to the static resources (CSS changes, most often) would have to be "manually" tracked and pushed. Not the worst problem in the world.

My linked issue #110 brings to light another problematic behavior of source:deploy: it destroys the push state tracking for all other metadata. If, instead of destroying the push state, the deploy actually counted as a tracked push, then we could side-step this issue effectively without significant effort on our part.

dcarroll commented 5 years ago

I think #110 is a bug. You should be able to use source:push, source:pull, source:deploy, and source:retrieve without mucking up the local or remote source tracking, I think.

clairebianchi commented 5 years ago

Closing, but will be digging into #110

aheber commented 5 years ago

@clairebianchi I want to make a case for this. DX should do something other than fall over because I have too much code/static resources/etc. #110 still only helps as a workaround to the limit described here.

Is this a case of "the team doesn't consider this a problem" or "they are unable to take on the work to fix this as an unsupported edge case"? If the latter, could this be a backlog item somewhere instead of being closed?

aheber commented 5 years ago

For any future readers: I've built an SFDX plugin to help handle this. It can deploy all static resources via the Tooling API, which lets us avoid pushing ALL content at once and so bypass the error.

https://github.com/aheber/sfdx-heber#sfdx-heberstaticresourcesdeploy--c--r--v-string--u-string---apiversion-string---json---loglevel-tracedebuginfowarnerrorfataltracedebuginfowarnerrorfatal

From there we .forceignore all static resources temporarily during the initial push, and that is getting us off the ground. As an added bonus, we went from a 7+ minute static resource deployment to ~30 seconds.
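The temporary-ignore step amounts to one pattern in .forceignore (gitignore-style syntax; a sketch of what the workaround might look like, removed again once the plugin has deployed the resources):

```
# .forceignore -- temporarily exclude static resources from push/pull
**/staticresources/**
```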

gwarburton commented 3 weeks ago

5 years later, is aheber's SFDX plugin (thank you!) still the right way to go?

aheber commented 3 weeks ago

@gwarburton I'm not sure that is the best option anymore. I think if you look at using the REST Metadata API it will do some things better. I'd give that a try first.

https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_rest_deploy_enable_cli.htm

Much of the file size limit is due to the SOAP API that is normally used. If you force the CLI over to the REST API then some things work better.
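As I understand the linked doc, the switch is a CLI config value (verify the exact name against your CLI version):

```shell
# Route Metadata API deploys over REST instead of SOAP:
sfdx config:set restDeploy=true --global
# or for a single run, via the environment variable:
SFDX_REST_DEPLOY=true sfdx force:source:push
```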

gwarburton commented 3 weeks ago

I realized I didn't have to do everything at once, so I'm setting the pipeline to do 3+ deployments to get everything into the scratch org. Thanks for the advice 😃