activecell / activecell.com

activecell marketing site
www.activecell.com

liquid filter only working locally #14

Closed adamrneary closed 12 years ago

adamrneary commented 12 years ago

.gitignore should include assets/packaged_*

And when we deploy to master, it should replace asset links with links to the CDN. It's working locally, but if you check the production site source, it's referencing local CSS and JS.
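For context, the kind of Liquid filter being discussed might look roughly like this (a hedged sketch; the real plugin's name, filter name, and environment detection may differ — only the CloudFront host is taken from the thread):

```ruby
# _plugins/cdn_filter.rb -- hypothetical sketch of a Liquid filter that
# rewrites local asset paths to CDN URLs for production builds.
module CdnFilter
  CDN_HOST = "http://d2h0dyinjrpfzl.cloudfront.net"  # host from this thread

  # Rewrite a local asset path to its CDN URL, but only for production
  # builds, so local previews keep serving assets from the dev server.
  def cdn_url(path, production = true)
    return path unless production
    "#{CDN_HOST}/#{path.sub(%r{\A/}, '')}"
  end
end

# Register with Liquid only when running inside Jekyll.
Liquid::Template.register_filter(CdnFilter) if defined?(Liquid)
```

In a template this would be used as something like `{{ "assets/packaged_css_20120515.css" | cdn_url }}`, producing the CloudFront URL in production builds and the local path otherwise.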

umarsheikh commented 12 years ago

I don't completely understand this issue. You mean that the production site http://profitably.com/ is referencing local assets, as in http://profitably.com/assets/packaged_css_20120515.css and http://profitably.com/assets/packaged_js_20120515.js, instead of http://d2h0dyinjrpfzl.cloudfront.net/assets/packaged_css_20120515.css and http://d2h0dyinjrpfzl.cloudfront.net/assets/packaged_js_20120515.js?

I think that if you set up the CNAME records and everything correctly, the site runs at profitably.com but serves all assets, including HTML files, from the S3 bucket. It should also be serving assets from the CDN. Is that not the case? How can we verify whether it is running off Amazon or your own machine? Have you tried shutting down your server to check?

Regarding production: when you deploy, the contents of the _site folder are uploaded to the server, wherever that is, and that content doesn't change. If a link points to the CDN it will remain so, and if it points to profitably.com it will remain so. There is nothing dynamic about it, so I don't fully understand the issue yet.

umarsheikh commented 12 years ago

Jekyll uses the Liquid filters to generate the static site and writes it to _site. Once that is done, there are no Liquid filters and nothing dynamic about the site; it is deployed statically as plain HTML/JavaScript/CSS.

adamrneary commented 12 years ago

The problem is that we aren't, in fact, pointing the CNAME to Amazon. The CNAME points to profitably.github.com. The reason is that Amazon doesn't provide a way to handle people accessing profitably.com as opposed to www.profitably.com. We looked at this last time around.

So, if you check the _site folder, you see that it's correctly referencing the CDN, but if you check the actual site, the references are local. That's the bug.

umarsheikh commented 12 years ago

So the bug is that the S3 site http://www.profitably.com.s3.amazonaws.com/index.html references CSS and JS files on the CDN, e.g. http://d2h0dyinjrpfzl.cloudfront.net/assets/packaged_css_20120517.css and http://d2h0dyinjrpfzl.cloudfront.net/assets/packaged_js_20120517.js, but the actual site at www.profitably.com references the same files as http://profitably.com/assets/packaged_css_20120515.css and http://profitably.com/assets/packaged_js_20120515.js? To fix that, we probably need to create CNAME records pointing profitably.com to the Amazon endpoints, as this suggests: http://www.maxmasnick.com/2012/01/21/jekyll_s3_cloudfront/#setting_up_your_dns

So you will probably have to create a CNAME record with your DNS host for profitably.com, either to redirect your domain to the Amazon S3 endpoint or to do something else.

The solution we have now, a CNAME file that points to profitably.com, probably works with GitHub Pages but not for Amazon S3. Also, I think the idea of that file is that GitHub Pages gets served from your domain, not the other way round! What we want is that when someone hits profitably.com, the site is served from the S3 endpoint and not our server. For that, the CNAME records have to be created at your domain registrar, not at the S3 endpoint.

Does that make sense?

adamrneary commented 12 years ago

No. What I am saying above is the bug. We don't want to point the CNAME to Amazon; that creates other bugs that are more challenging.

The bug is that the Liquid filter works locally but doesn't seem to work in production.

We need the live site to reference d2h0dyinjrpfzl.cloudfront.net/assets/packaged_css_20120515.css, and that happens only when the Liquid filter you created runs on GitHub Pages (production). Does that make sense?

umarsheikh commented 12 years ago

Ah, that is an ouch! Sorry for not understanding this sooner. I thought you meant it was not working on production in the sense that the production server was somehow failing to perform some task. But I now understand that production is currently powered by GitHub Pages, and if the custom plugin we are using does not run on GitHub Pages, production will serve the assets locally and not from the CDN. I will dig up the appropriate references shortly, but if I remember correctly (I think I do!), GitHub Pages DOES NOT run plugin code. That is a safety feature, or perhaps a way to avoid potentially expensive computation. So if we use any plugins, we cannot rely on GitHub Pages to run them. So yes, you are correct: it does not run on production, and seemingly we cannot fix this problem as long as we rely on our plugin to rewrite asset URLs to the CDN. If we move entirely to Amazon, we avoid this problem. It is a limitation of GitHub Pages; I will dig up the appropriate references and let you know.

umarsheikh commented 12 years ago

Yes, that is correct: absolutely no plugins on GitHub Pages. There are workarounds, which involve generating _site locally, committing it, and then pushing the static site up; an example is at http://arademaker.github.com/blog/2011/12/01/github-pages-jekyll-plugins.html

I will explore the various options and decide which one to use. I find the above neatest; if you also like it, I can employ that strategy. The idea is to generate _site locally and push it along with a file called .nojekyll so that the Jekyll transformations are not applied again.
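A rough Ruby sketch of that workaround (the helper name is hypothetical, and the build and push steps are left as comments since they depend on the repo setup described in the linked article):

```ruby
# Rakefile sketch (hypothetical) for the workaround described above:
# build _site locally so plugins actually run, then publish the static
# output with a .nojekyll marker so GitHub Pages serves the files as-is
# instead of running Jekyll again (where plugins would be disabled).
require 'fileutils'

def prepare_static_site(site_dir = "_site")
  FileUtils.mkdir_p(site_dir)
  # An empty .nojekyll file tells GitHub Pages to skip its own Jekyll pass.
  FileUtils.touch(File.join(site_dir, ".nojekyll"))
  site_dir
end

# task :deploy do
#   sh "bundle exec jekyll build"   # runs our plugins locally
#   prepare_static_site
#   # then commit and push the _site contents to the pages branch
# end
```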

adamrneary commented 12 years ago

@umarsheikh sorry if the tone sounded harsh--definitely not my intention! :-)

The 100% amazon option comes with its own set of problems. It really struggled with the apex domain of profitably.com, and I think there were a couple other problems we ran into when we tried that, so I fear that if we go back to that we are going to run into even more problems. :-/

This seems like a silly problem to have, given that Jekyll always generates a static site in the _site folder. You'd think GitHub Pages would just let you point it at _site.

I don't like the idea of two different projects, and some of the git voodoo in the article you recommended (which is the best one I've been able to find!) scares me.

Maybe we should in fact just use Amazon and have that be that. To solve the apex domain problem, I will sign up for Route 53 and see if that helps. More to follow...

adamrneary commented 12 years ago

@umarsheikh I just signed up with DNSimple (dnsimple.com), which seems to have a suitable alias solution. GoDaddy doesn't, and Amazon doesn't. You CAN in fact point the apex domain to an Elastic Load Balancer, which can then point to a series of EC2 instances, but as of now you can't point the apex domain (profitably.com) directly at S3's website endpoint.
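For reference, the records involved might look something like this (a hypothetical zone sketch; ALIAS is a DNSimple-specific record type, and the endpoint host is taken from the bucket name mentioned in this thread):

```
; apex domain served from the S3 bucket via DNSimple's ALIAS record
; (a plain CNAME at the apex is not allowed by the DNS spec)
profitably.com.      ALIAS  www.profitably.com.s3.amazonaws.com.
; www subdomain can use a standard CNAME
www.profitably.com.  CNAME  www.profitably.com.s3.amazonaws.com.
```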

DNSimple allows you to, so we can try that. If it works, we should be able to simply have rake deploy sync _site with S3, and we're in business (I think!).
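A minimal sketch of what that sync step could look like, assuming s3cmd is the upload tool (the bucket name is taken from the thread; the real Rakefile may differ):

```ruby
# Rakefile sketch (hypothetical): sync the generated _site folder to S3.
# Assumes s3cmd is installed and configured with AWS credentials.
def s3_sync_command(bucket, site_dir = "_site")
  # --acl-public makes the uploaded objects world-readable; a private
  # object would come back as 403 Forbidden when requested via the CDN.
  "s3cmd sync --acl-public #{site_dir}/ s3://#{bucket}/"
end

# task :deploy do
#   sh "bundle exec jekyll build"
#   sh s3_sync_command("www.profitably.com")
# end
```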

DNS propagation takes a little while, but by tomorrow we should be good.

adamrneary commented 12 years ago

Boom. That did it. Way to go, DNSimple!

umarsheikh commented 12 years ago

Sure, this is resolved now! But you still have to check the configuration for the CDN, so that the assets are served from there.

adamrneary commented 12 years ago

@umarsheikh I am not sure I understand your comment above. We have the CDN configured properly, yes?

umarsheikh commented 12 years ago

@adamrneary I don't think we have the CDN configured properly, or it may be that there is a dependency such that the CDN becomes "unconfigured" after we do a deploy. You can manually configure the CDN once more and note its configuration. If after another deploy it again becomes unconfigured, we can compare the changed configuration to the original, figure out what changed, and then insert some code into our deployment process ("bundle exec rake deploy") so that the original configuration is preserved.

If you open the site profitably.com, you see that the two requests to the packaged_css and packaged_js files are failing, hence the site is unstyled and without any JavaScript effects. That site is actually powered by http://www.profitably.com.s3.amazonaws.com/index.html, and if you view that site, you again see the requests to the two CSS and JS files failing; for example, the request to http://d2h0dyinjrpfzl.cloudfront.net/assets/packaged_css_20120522_2.css fails. But if you substitute the S3 host into the CDN URL, i.e. try http://www.profitably.com.s3.amazonaws.com/assets/packaged_css_20120522_2.css, you see that the CSS file is actually there.

So this means the file is deployed to S3 correctly, and may be on the CDN as well, but there may be a wrong configuration on the CDN such that the CSS file is not served from there.

So I think you can check the CDN configuration now and mail it to me. Then change it to the correct configuration and mail that to me as well. I will see whether there are any commands I need to run to restore the correct configuration when we do a deployment from now on.

Note: it doesn't matter whether we deploy with rake deploy or jekyll-s3; the same issue is present with both.

umarsheikh commented 12 years ago

@adamrneary there is a command called "s3cmd cflist" which should "List CloudFront distribution points", but when I execute it, I get: ERROR: S3 error: 403 (OptInRequired): The AWS Access Key Id needs a subscription for the service

umarsheikh commented 12 years ago

@adamrneary also, if you try to get http://d2h0dyinjrpfzl.cloudfront.net/assets/packaged_css_20120507.css you get the CSS file with a 200 OK response, but if you access http://d2h0dyinjrpfzl.cloudfront.net/assets/packaged_css_20120522_2.css you get 403 Forbidden and no CSS file.

adamrneary commented 12 years ago

Are you seeing the problem now? The site is fine for me...


umarsheikh commented 12 years ago

Maybe you need to hard refresh, or clear your cache and try loading profitably.com again.
