LRotherfield opened this issue 6 years ago
Do you use a shared database for all instances?
Yes
@anyt - We're load balancing for performance + redundancy of a single instance of OroPlatform, hence the shared database.
Why are you running oro:platform:update
on multiple instances?
If the cache and database are shared, I don't see why it's needed.
Only to install assets. If they are not shared, you can use these commands:
app/console fos:js-routing:dump --env=prod
app/console oro:localization:dump --env=prod
app/console oro:assets:install --symlink --env=prod
app/console assetic:dump --env=prod
app/console oro:translation:dump --env=prod
app/console oro:requirejs:build --env=prod
About migrations: from my perspective it's enough to run platform:update only on a single instance, as they affect only the database and caches.
But I'd recommend sharing assets as well. It's possible by using a CDN, or some sort of local CDN, just by moving them to a separate domain. Here is an example of configuring a CDN on top of Assetic: https://symfony.com/blog/new-in-symfony-2-7-the-new-asset-component#configuration-changes
@anyt, we understand that running platform:update would be better on only one instance, is there a way to configure the platform to do that though? Or is that something that you write a separate script for in your deployment tool?
There are several rules:
1) All resources that can be modified at runtime should be shared.
2) During deployment, the application must be in maintenance mode so that no one triggers a cache rebuild by loading a webpage while the application is not ready, as this can lead to unexpected errors.
3) It's simply more reliable to prepare a single instance and then clone all the code to the other instances, as there is no need to warm up the cache or build assets several times from different apps. For example, platform:update will run migrations, build assets, warm up the cache, etc. on each instance where you run it, but it's enough to do it once.
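The rules above can be sketched as a deployment script. This is a dry-run sketch, not Oro tooling: the node names, the application path, and the lock-file maintenance mechanism are all illustrative assumptions, and every step is echoed instead of executed so the ordering is easy to inspect.

```shell
#!/bin/sh
# Dry-run plan: maintenance mode everywhere, platform:update on one node,
# clone the prepared code to the rest, then leave maintenance mode.
# All hostnames/paths are hypothetical; each step is printed, not run.
set -eu

NODES="web1 web2 web3"   # all web nodes behind the load balancer
PRIMARY=web1             # the single node that runs platform:update
APP_DIR=/var/www/oro

deploy_plan() {
    # 1) Maintenance mode on every node before anything changes.
    for node in $NODES; do
        echo "ssh $node 'touch $APP_DIR/maintenance.lock'"
    done
    # 2) Migrations, cache warmup and asset build happen exactly once.
    echo "ssh $PRIMARY 'cd $APP_DIR && app/console oro:platform:update --force --env=prod'"
    # 3) Clone the prepared code (built assets included) to the other nodes.
    for node in $NODES; do
        [ "$node" = "$PRIMARY" ] || echo "rsync -a --delete $PRIMARY:$APP_DIR/ $node:$APP_DIR/"
    done
    # 4) Leave maintenance mode everywhere.
    for node in $NODES; do
        echo "ssh $node 'rm -f $APP_DIR/maintenance.lock'"
    done
}

deploy_plan
```

In a real pipeline you would replace the `echo`s with actual execution; the important properties are that maintenance mode comes first and that platform:update appears exactly once.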
I don't have any ready to use scripts to share, just recommendations.
@anyt In a typical CI/automated deployment we do: 1. push code, 2. build a zip in the build environment, 3. deploy.
With Oro Platform we cannot do a full build during build time (step 2) because it requires access to the database for entity config, and we don't want to give our build server access to the production database. We then might also need to run migrations on the server during step 3.
Our infrastructure is really common: one load balancer, multiple web servers, then one database. We have NFS mounts for all "shared" points (which is basically just media/uploads) and we keep the cache in Redis.
When you make a deploy using AWS to multiple servers, it's not possible to say "deploy on this server first and run oro:platform:update, then, deploy on every other server without running platform update". Also, doing it like that would increase the amount of time where different versions of the software are running connected to the same database.
This feels like the most common use case for running in a load balanced environment, and we can't find any guidance in the documentation for how to run such a setup. Surely your hosted enterprise solution encounters many of the same issues? Would it be possible to get some guidance on how this is done and what the best practice recommendations are for such a scenario, please?
would increase the amount of time where different versions of the software are running connected to the same database.
Currently, OroPlatform-based applications don't support running old code with a new database, search index, or caches, and vice versa. So during an update, maintenance mode should be enabled for all instances.
@anyt OK, fair enough, so when we start running a push we enable maintenance mode. Far from ideal, but we can accept that.
What is the recommended set of steps then.
"First server" oro:platform:update
"all other servers" app/console fos:js-routing:dump --env=prod app/console oro:localization:dump --env=prod app/console oro:assets:install --symlink --env=prod app/console assetic:dump --env=prod app/console oro:translation:dump --env=prod app/console oro:requirejs:build --env=prod
?
Also, this means every push is going to result in many minutes of downtime; from a CI perspective it's a bit of a nightmare, no?
This is the regular update flow for a single-instance application: https://oroinc.com/orocrm/doc/2.0/dev-guide/cookbook/how-to-upgrade-to-new-version . Depending on what is shared in your setup, you can adapt it to your needs. The oro:platform:update command just runs a bunch of other commands; you can check the list here: https://github.com/oroinc/platform/blob/master/src/Oro/Bundle/InstallerBundle/Command/PlatformUpdateCommand.php
I just listed the commands required for rebuilding assets, but the above list may be incomplete for your setup.
About downtime: an update can be scheduled at night, or you can send notifications before the update so customers can plan their work and the update will not affect them much.
@anyt Thanks for the docs, we will review and come back to you with our issues once done.
Updates scheduled at night are not a solution. Oro Platform is a platform for business applications; we can't tell our customers "sorry, we can't update your website until midnight today because of a deploy".
Consulted with the cloud team. We don't have such a problem because we share all the application code between nodes. Before we had that, we considered the option where the install/upgrade runs on one node and we then clone the finished code with all the assets (the caches still need to be shared, as well as the media).
This gives us a solution without downtime, or rather the downtime can be very short: when we restart PHP-FPM so that OPcache picks up the new code.
To support dynamic scaling (for example, you have 3 nodes and in 5 minutes, 4), we considered the option of building an RPM from the code and installing it when a new node starts.
Application upgrade depends on the deployment architecture and model you are using in your environment, as well as on the upgrade type. For upgrades with a backward incompatible schema update, we recommend putting the application in maintenance mode. An upgrade without a schema update, or with backward compatible updates, can be done with no downtime. As an idea, you can implement no-downtime updates by building the application source code and running the platform update commands in a separate directory, then switching the active source code location using a symlink.
There are no limitations on the platform side; it's all about how development operations are organized and which tools are used.
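The symlink idea can be sketched as follows, assuming a Capistrano-style layout (a releases/<timestamp> directory plus a current symlink). The paths and the commented-out build steps are assumptions, not Oro's documented procedure; only the switching mechanics are shown.

```shell
#!/bin/sh
# Sketch of a symlink-based release switch. The new release is prepared in
# its own directory, then the 'current' symlink is repointed in one step.
set -eu

DEPLOY_ROOT="${DEPLOY_ROOT:-$(mktemp -d)}"
RELEASE="$DEPLOY_ROOT/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"

# In a real pipeline you would prepare the new code here, e.g.:
#   git clone ... "$RELEASE" && cd "$RELEASE" && composer install
#   app/console oro:platform:update --force --env=prod
#   (shared media and caches would be symlinked into the release)

# Repoint 'current' at the finished release (-n replaces the old symlink),
# then reload PHP-FPM so OPcache picks up the new code:
ln -sfn "$RELEASE" "$DEPLOY_ROOT/current"
# sudo systemctl reload php-fpm

echo "current -> $(readlink "$DEPLOY_ROOT/current")"
```

The web server's document root would always point at `current`, so the visible interruption is only the PHP-FPM reload.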
@anyt @DimaSoroka thank you for your responses. I think it would be helpful for us to create the exact steps and share them with you, so we can find the ideal solution for deployment. Perhaps then we can also document these, as I'm sure there are other development teams struggling with deployment too.
Will come back to you next week!
We're aiming for a process like @MikeParkin outlined above. Most of the "prep" work of building assets is done in a CI process, then the artifact is deployed, and migrations are run in place against the shared production database. We've run into most of the issues covered in this thread. We raised a ticket with partner support detailing our experiences, please see OPS-1157 for more details.
It would be great if Oro could consider a more streamlined build/deploy workflow for a future release. I understand that it can't happen for 3.0, but a future release would be great.
Deploying during the night might be acceptable for an internal CRM system which mostly sits idle overnight, but it isn't an option for an eCommerce platform, where the expectation is generally that people can browse the catalog and place orders at any time. As the underpinning technology for both OroCRM and OroCommerce, Platform needs to support the deployment needs of both use cases.
Thanks, @aligentjim, we are planning a few major improvements in the source code building and deployment flow. In the meantime, I would recommend reviewing your pipeline configuration, as you can build your application source code and roll out the upgrade without downtime. Based on the information you provided, it looks like your pipeline is container-based; in this case you can build a container with the new version of the application using production environment parameters and run a rolling upgrade.
Here is an example of the flow we built into our deployment process for the case when there are no backward incompatible updates to the data schema:
For upgrades with a schema update, we highly recommend using maintenance mode and taking backups before applying any changes.
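For illustration only, a rolling container upgrade of the kind described above might be planned like this. The node names, image tag, and the `lb` load-balancer CLI are hypothetical placeholders (not Oro or AWS tooling), and the script only prints the commands rather than executing them.

```shell
#!/bin/sh
# Hypothetical rolling-upgrade plan for backward compatible updates only:
# build once in CI, then swap nodes one at a time behind the load balancer.
# 'lb' stands in for whatever drain/enable mechanism your balancer offers.
set -eu

NODES="web1 web2 web3"
IMAGE="registry.example.com/app:${NEW_TAG:-2.0.1}"

plan() {
    # Build and publish a single image with production parameters baked in.
    echo "docker build -t $IMAGE ."
    echo "docker push $IMAGE"
    # Take one node out of rotation at a time, swap it, put it back.
    for node in $NODES; do
        echo "lb drain $node"
        echo "ssh $node 'docker pull $IMAGE && docker rm -f app && docker run -d --name app $IMAGE'"
        echo "lb enable $node"
    done
}

plan
```

Because old and new code briefly coexist, this only works when the update is backward compatible with the shared database, caches, and search index, which is exactly the caveat stated above.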
@MikeParkin Were you able to get your CD working as you hoped?
If so, can you share your build script? How did you manage to warm up the cache without a DB present?
Thanks, Samuel
@aligentjim @MikeParkin
Got it working! oro:assets:install is now working without a database. I am not sure if I should submit PRs, since it involves changes in crm-application, oro/crm and oro/platform.
Anyway, here are the required changes:
config/config.yml | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/config/config.yml b/config/config.yml
index 7a49260350..8009887851 100644
--- a/config/config.yml
+++ b/config/config.yml
@@ -34,6 +34,13 @@ framework:
     serializer:
         enabled: true
+
+
+doctrine:
+    dbal:
+        server_version: '5.7' # set this in your parameters.yml instead :)
+
+
 # Twig Configuration
 twig:
     debug: "%kernel.debug%"
.../ContactImportExportConfigurationProvider.php | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/src/Oro/Bundle/ContactBundle/ImportExport/Configuration/ContactImportExportConfigurationProvider.php b/src/Oro/Bundle/ContactBundle/ImportExport/Configuration/ContactImportExportConfigurationProvider.php
index 8df5a25a4..03a58f0a3 100644
--- a/src/Oro/Bundle/ContactBundle/ImportExport/Configuration/ContactImportExportConfigurationProvider.php
+++ b/src/Oro/Bundle/ContactBundle/ImportExport/Configuration/ContactImportExportConfigurationProvider.php
@@ -2,6 +2,7 @@
 namespace Oro\Bundle\ContactBundle\ImportExport\Configuration;
+use Doctrine\DBAL\DBALException;
 use Oro\Bundle\ContactBundle\Entity\Contact;
 use Oro\Bundle\ImportExportBundle\Configuration\ImportExportConfiguration;
 use Oro\Bundle\ImportExportBundle\Configuration\ImportExportConfigurationInterface;
@@ -28,13 +29,21 @@ class ContactImportExportConfigurationProvider implements ImportExportConfigurat
      */
     public function get(): ImportExportConfigurationInterface
     {
+        $trans = 'oro.contact.import.strategy.tooltip';
+        try {
+            $trans = $this->translator->trans($trans);
+        } catch (DBALException $exception) {
+            // Not installed or no database present
+        }
+
         return new ImportExportConfiguration([
             ImportExportConfiguration::FIELD_ENTITY_CLASS => Contact::class,
             ImportExportConfiguration::FIELD_EXPORT_PROCESSOR_ALIAS => 'oro_contact',
             ImportExportConfiguration::FIELD_EXPORT_TEMPLATE_PROCESSOR_ALIAS => 'oro_contact',
             ImportExportConfiguration::FIELD_IMPORT_PROCESSOR_ALIAS => 'oro_contact.add_or_replace',
-            ImportExportConfiguration::FIELD_IMPORT_STRATEGY_TOOLTIP =>
-                $this->translator->trans('oro.contact.import.strategy.tooltip'),
+            ImportExportConfiguration::FIELD_IMPORT_STRATEGY_TOOLTIP => $trans,
         ]);
     }
 }
.../Bundle/SecurityBundle/Metadata/ActionMetadataProvider.php | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/Oro/Bundle/SecurityBundle/Metadata/ActionMetadataProvider.php b/src/Oro/Bundle/SecurityBundle/Metadata/ActionMetadataProvider.php
index d79179a9e6..c260acb650 100644
--- a/src/Oro/Bundle/SecurityBundle/Metadata/ActionMetadataProvider.php
+++ b/src/Oro/Bundle/SecurityBundle/Metadata/ActionMetadataProvider.php
@@ -3,6 +3,7 @@
 namespace Oro\Bundle\SecurityBundle\Metadata;
 use Doctrine\Common\Cache\CacheProvider;
+use Doctrine\DBAL\DBALException;
 use Symfony\Component\Translation\TranslatorInterface;
 class ActionMetadataProvider
@@ -68,7 +69,13 @@ class ActionMetadataProvider
      */
     public function warmUpCache()
     {
-        $this->loadMetadata();
+        try {
+            $this->loadMetadata();
+        } catch (DBALException $exception) {
+            // Not installed or no database connection
+            return;
+        }
     }
     /**
We solved this in the latest version by patching server_version into all DB configs, as in https://github.com/oroinc/platform/pull/1001
We also used https://github.com/cweagans/composer-patches to include a temporary patch (a patch created from the commit).
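For reference, composer-patches reads patch definitions from the `extra.patches` section of composer.json; a minimal fragment (the patch file path and description here are placeholders, not the actual patch used above) might look like:

```json
{
    "require": {
        "cweagans/composer-patches": "^1.7"
    },
    "extra": {
        "patches": {
            "oro/platform": {
                "Set server_version so cache warmup works without a DB": "patches/server-version.patch"
            }
        }
    }
}
```

The patches are then applied automatically on `composer install`, which keeps the workaround out of the vendored source tree.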
Hi Guys
We are running the Oro Platform on multiple servers and are running into an issue when we deploy.
To ensure migrations are run and new assets are generated at the same time as the new code is deployed, we run oro:platform:update on each server on each deploy. Occasionally one of our servers will fail to deploy because of a race condition in oro:platform:update, which results in:
I raised another ticket about load balanced environments and was pointed to https://oroinc.com/orocrm/doc/current/dev-guide/scale-nodes, but it does not seem to deal with deployments or how you would run oro:platform:update.
What is the suggested method for making sure assets are generated and migrations are run in a multi-server deploy without running into the above (and potentially other) race conditions?
Cheers Luke