omega8cc / boa

Barracuda Octopus Aegir 5.5.0-PRO
https://omega8.cc/compare

Integrating S3FS redirects for aggregation to work #1364

Open iaminawe opened 5 years ago

iaminawe commented 5 years ago

I have been struggling for a few days to successfully implement the S3FS module within my BOA server and was hoping you could provide some guidance about where I am going wrong.

According to the module documentation (https://git.drupalcode.org/project/s3fs/blob/7.x-2.x/README.txt#L197), if I want aggregation of JS and CSS (not using AdvAgg) to work with a remote file system, I need to set up the following redirect in Nginx:

```nginx
location ~ ^/(s3fs-css|s3fs-js)/(.*) {
    set $s3_base_path 'YOUR-BUCKET.s3.amazonaws.com/s3fs-public';
    set $file_path $2;

    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 10s;

    proxy_pass http://$s3_base_path/$file_path;
}
```

Following the suggestions in https://github.com/omega8cc/boa/blob/master/docs/HINTS.txt, I first tried creating an nginx_vhost_include.conf, and then replacing it with an nginx_force_include.conf, containing that code.

I then restarted Nginx, re-verified the sites, and went to check the URL being used for the aggregated CSS. The following URL was not being redirected as specified in the Nginx redirect, and instead returned a server 500 error on that page:

https://s3fs.***.net/s3fs-css/css/css_lQaZfjVpwP_oGNqdtWCSpJT1EMqXdMiU84ekLLxQnc4.css

Can you offer any insight as to what I may be doing wrong, or point me to any additional documentation on this?

I am using BOA 4.0.0 and s3fs 7.x-2.13. Thank you

iaminawe commented 5 years ago

@omega8cc hoping someone can provide me with some guidance here, as it is holding up other work. The end goal of getting this working is to reduce the size of our filesystems so I can realistically calculate the costs of moving to your managed hosted service. The sooner I can unblock this, the sooner I can give my clients an estimate.

omega8cc commented 5 years ago

Many BOA users use this module without any problems and without any extra configuration needed at the Nginx level. Maybe I'm missing something, but why go down this rabbit hole for CSS/JS? These files take up relatively tiny space on the server, are compressed by default, and you are further limiting your options by excluding AdvAgg instead of using the default configuration with CSS/JS stored locally.


iaminawe commented 5 years ago

Thanks for the response. I am trying to stay out of rabbit holes, and thought that following the s3fs README, setting up the Nginx redirects, and having everything on S3 was the easiest option.

We don't use the advanced aggregation modules, as we have found they consistently break layouts across our network of sites when enabled, so we just use the built-in JS/CSS aggregation.

I did track down an SSL issue with our CNAME that was blocking some assets, and resolved it by disabling the CNAME feature and using the S3 bucket address instead.

I have tried excluding the CSS/JS using the setting on the s3fs settings page, and have copied the folders over to S3 using the built-in tool, but I still get lots of issues with broken relative paths.

So I think the rabbit hole ends with just resolving the Nginx issue and avoiding all the other issues I am now encountering.

So I guess the questions I am hoping to get clear on, based on your response, are:

Do I not need to add any nginx redirects for the S3Fs module to work in public takeover mode?

If so, why, when I visit a styles address like https://s3fs.***.net/s3fs-css/css/css_lQaZfjVpw.css, is the s3fs-css path not rewritten to the Amazon S3 one?

Shouldn't the proxy redirect code work using BOA's Nginx config override options detailed in HINTS.txt?

Thanks for any further light you can shine on this.

omega8cc commented 5 years ago

From our experience, and from years of feedback from people using this module, no extra configuration with a hairy proxy is needed at all. The module by default leaves the aggregated CSS/JS locally, and its README says explicitly:

If you want your site's aggregated CSS and JS files to be stored on S3, rather than the default of storing them on the webserver's local filesystem, you'll need to do two things:

As for the proxy trick -- again, we have no experience with it, because no one has ever needed it when using the s3fs module. The Nginx configuration proposed in that optional section will not work, because other locations take precedence in handling CSS/JS. It is perhaps doable, but what is the point, if it is irrelevant in the disk-space context, complicates the configuration, and may actually introduce more issues: broken layouts, inability to reliably rebuild or clean up the aggregates, etc. It's not the default for good reasons.
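To illustrate the precedence problem, here is a simplified sketch (not BOA's actual vhost; the first location pattern stands in for whatever static-file handlers the real configuration declares). Nginx tests regex locations in the order they appear and uses the first match, so a proxy location appended via a custom include never fires if an earlier regex location already matches these URIs:

```nginx
server {
    # A regex location declared earlier in the vhost matches
    # /s3fs-css/css/foo.css by its .css extension...
    location ~* ^.+\.(css|js)$ {
        expires max;
    }

    # ...so this later regex location is never consulted,
    # even though it also matches the same request URI.
    location ~ ^/(s3fs-css|s3fs-js)/(.*) {
        proxy_pass http://example.com/$2;
    }
}
```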

Again, please try to ignore this non-standard feature and allow the module to leave the aggregated CSS/JS locally, which is the module's default behaviour.

iaminawe commented 5 years ago

Thanks for the response. I guess I have been trying to use the "S3 for public:// files" checkbox to avoid having to go through and convert a large number of file and image fields to use S3 instead of the default file storage. If that's what I have to do, then it's certainly an option, but one I was hoping to avoid.

So I guess the takeaway from this, then, is that BOA does not support the public files takeover feature of s3fs, and will only work with the file source option.

Would it be possible to consider adding this S3 redirect to Nginx in BOA core in the future? I really don't think it's a fringe feature, in that it's the quickest way to convert an existing site filesystem to an Amazon S3 based one. Thanks for your help.

omega8cc commented 5 years ago

Here is the problem with implementing this proxy feature:

  1. It can't be put in the default configuration because YOUR-BUCKET must be configurable (how?)
  2. To allow it to be put in the custom vhost include we would have to figure out what to do to allow this kind of location to override default locations without introducing conflicts

It may turn out to be simple or complex to implement; I'm just speculating. But if you think it could really help, for reasons we may not be aware of (we have never received any similar feedback before), then we are of course interested in making it happen.
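One possible shape for the configurable-bucket problem, sketched under the assumption that the bucket name could be rendered into a small per-server include at install time (the file path and variable name below are hypothetical, not existing BOA conventions):

```nginx
# Hypothetical sketch: a generic proxy location that reads the bucket
# name from a tiny generated include, so the location itself could ship
# unchanged in the default configuration.
#
# The generated file (e.g. /etc/nginx/s3fs_bucket.conf) would contain
# a single line such as:
#   set $s3fs_bucket 'my-actual-bucket';
location ~ ^/(s3fs-css|s3fs-js)/(.*) {
    include /etc/nginx/s3fs_bucket.conf;   # defines $s3fs_bucket
    set $file_path $2;

    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 10s;

    proxy_pass http://$s3fs_bucket.s3.amazonaws.com/s3fs-public/$file_path;
}
```

This only addresses point 1; point 2 (letting a custom location win over the default CSS/JS handlers) would still need to be solved in the vhost template itself.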

iaminawe commented 5 years ago

Yes, good point about the custom bucket name. Perhaps the YOUR-BUCKET variable could be specified alongside the Amazon S3 auth details in the barracuda.cnf file.
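That idea could look something like this (entirely hypothetical; no such variable exists in BOA today):

```nginx
# Hypothetical barracuda.cnf entry (shell-style, shown here as a comment):
#   _S3FS_BUCKET="my-bucket"
#
# ...which the installer could then render into the vhost template as:
set $s3_base_path 'my-bucket.s3.amazonaws.com/s3fs-public';
```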

I think the biggest advantage of being able to switch the full public filesystem, instead of selectively converting some file fields, is that no migration is needed (beyond the drush file system sync), making it easier to retroactively implement S3 on an already running site.

I would be happy to test or help out in any way I can. Thanks for the info.