Closed smiller171 closed 6 years ago
Thanks for your interest @smiller171 ! :) The repo welcomes all contributors and contributions!
By automate, do you mean we can automate even the process of creating the IAM user, giving it SQS, SNS, and SES access, and getting out of the SES sandbox by putting up an automated limit increase request?
We also need to configure the Google APIs for Google OAuth. So we have to create a project, enable the Google Plus API, create an OAuth token, and also configure the origins and redirect URLs. Can that be automated too?
But I think that for any automation to be possible, we need programmatic access (API-level access), so we have to do at least some configuration by hand, right? Even in AWS.
cc @AndrewGHC @4iar
@karuppiah7890 I'm not familiar enough with Google's OAuth to know if we can get that down to one-click automation (I'd have to do research), but AWS has APIs for every single thing you can do through the console, and SDKs for all common programming languages. My preferred tool for automating this kind of stuff is Ansible, an open source configuration management tool owned by Red Hat. It includes modules for managing most AWS resources, and you can drop to CLI commands for anything else.
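To make that concrete, here is a rough sketch of what such an Ansible playbook could look like. The `iam` and `ec2` module names are real Ansible AWS modules, but every parameter value below (user name, key name, AMI ID) is a hypothetical placeholder, not anything from the actual Mail for Good setup:

```yaml
# playbook.yml -- hypothetical sketch, not the project's real automation
# Run with: ansible-playbook playbook.yml
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create an IAM user for Mail for Good (illustrative name)
      iam:
        iam_type: user
        name: mail-for-good
        state: present

    - name: Launch an EC2 instance to host the app
      ec2:
        key_name: mfg-key                 # placeholder key pair name
        instance_type: t2.micro
        image: ami-0123456789abcdef0      # placeholder AMI ID
        wait: yes
```

Because Ansible modules are idempotent, re-running a playbook like this is safe: resources that already exist are left alone.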
If this is something you want to approach, it's going to require some in-depth conversation about how you want to do it, unless you just want to hand the reins to me to use my best judgement and submit a PR.
With the OAuth stuff, even if it can't be easily automated, it's still a lot simpler if they only have to get that one thing manually and enter it as a variable to the automated tool.
@smiller171 I understand you want to make deployment simpler, but I think it won't be very simple even with all the automated stuff. Like I said, you need API keys to access the AWS SDK, which is again a set of an access key and a secret key (http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html), and that is only a bit easier than what the Mail for Good (MfG) video shows: creating a group for the MfG application with permissions for SES, SQS, and SNS, and then creating a user under the group.
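For reference, the access key and secret key that page describes are usually supplied through the standard AWS shared credentials file, which both the SDKs and the CLI read. The key values below are elided, not real:

```ini
# ~/.aws/credentials -- standard AWS SDK/CLI shared credentials file
[default]
aws_access_key_id     = AKIA...   ; access key ID (elided)
aws_secret_access_key = ...       ; secret access key (elided)
```

Once this file exists, tools like Ansible or the AWS CLI pick up the credentials automatically, so the user only has to do this one manual step.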
And you are going to make the complete deployment automated? I think that's what Ansible does. Like
And some similar stuff for updating the application
At a high level, the user needs to get API keys for an IAM user or run the playbook from an EC2 instance with sufficient IAM role permissions, and they probably have to get the Google OAuth API key manually (I need to research to what degree this can be automated, if at all). Everything else in AWS can be automated fairly easily; I have experience doing this for continuous delivery pipelines. Everything is idempotent, so running the playbook multiple times is a safe operation and won't change anything.
Running the playbook itself consists of installing Ansible, configuring your AWS credentials, and running a single command. Prompts can be included to ask the user for any configuration options if desired. Updating the application is just a matter of running the latest version of the script again.
Hi @smiller171,
Thanks for your input on this. I'm not a DevOps engineer and I don't have experience with automation tools such as Ansible. I've seen what these tools are capable of in the real world, though, and I'd imagine our current build process could be dramatically simplified.
I'd be very surprised if we could automate the creation of OAuth credentials, but the installation of Docker and general setup certainly could be. What are your current thoughts in terms of the scope of the process that could be automated?
@AndrewGHC Everything other than generating the API keys for AWS and for Google OAuth can be automated easily. I have a couple of changes I would recommend while going through the process, but those are small details.
Can I be involved? I want to explore those technologies but I'm not comfortable enough to do it alone. If you don't mind mentoring me, I would be glad to assist you @smiller171
Absolutely @luisfmelo! The best way to improve is to teach.
@karuppiah7890 @AndrewGHC The level of effort involved in ~~the current architecture~~ automating the current architecture is pretty low. Do you want to take the time to lay out acceptance criteria, or should I just go off with @luisfmelo to build an initial proof of concept?
Level of effort? Sorry, I don't understand.
Sorry @karuppiah7890, updated my comment for clarity. I meant to automate it.
@luisfmelo I pinged you on Gitter to sync up
What kind of acceptance criteria? Like EC2 instance specs (OS, RAM, disk)?
And what changes do you want to make? You mentioned:
> I have a couple of changes I would recommend while going through the process
@karuppiah7890
None of these changes are required to automate the process, and can easily be changed at a later date.
Just want to offer a +1 for automating as much of the setup process as possible. I work with lots of progressive digital activists, and 95% of them have never touched AWS and wouldn't have the faintest idea how to set up Google Auth API keys. If you want this tool to be ubiquitous, the setup process needs to be as good, or better, than the WordPress five-minute install. Needs to be simple to the point of being dummy-proof.
@pandemicsoul That's true. Only devs or DevOps engineers know how to do such stuff, and it takes time even for them. This is just one step towards automation. As we do more, we can iterate, build on top of this to automate more, and make things better :) Getting AWS API keys is still a manual thing, but I guess we could automate the process of getting the Google API keys, if we can use OAuth and get access to the user's Google Console to create a project and configure the APIs. That would mean hosting a central app for MfG, which people would use to deploy the MfG application easily.
I've thought about that as well @karuppiah7890. Most of the time orgs are willing to pay a bit more for a fully managed service so that they don't have to operate it themselves. I wonder if there would be a benefit to doing an OpenSource+ model, where the whole thing is Open Source if you want to run it yourself, but you can also purchase the service from FCC or an affiliate organization. This would provide an avenue for paying the core developers and tipping contributors.
@smiller171 Of the points you put up, point 1 is the only one I didn't get. About point 2: yes, it's definitely a custom domain. The others are all good, but if we want a cost-effective and simple system, they wouldn't be necessary. The solution you are suggesting is good and robust when there are many (like hundreds of) users using the app, but MfG is actually gonna be used by a small group of people at most, or just one user. cc @AndrewGHC Correct me if I am wrong.
So it won't require scaling and such. If you look at the project's Readme, it mentions that people need just a $10 DO server to deploy the whole app. That's quite cost-effective, unlike using RDS (Postgres) and ElastiCache, where we'd also have to think about the specs for them. And if something goes wrong, people can use Docker volumes to store the Postgres data, which helps in upgrading the app too. But yes, we haven't done all that in our Docker Compose config yet. In fact, we haven't even mentioned how to upgrade the app in our Wiki. Also, we haven't configured Docker Compose to start on system reboot or to restart on failures. Those are things that need to be done, but once they are, a basic robust app will be ready.
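The restart-on-failure and data-volume pieces mentioned above would be small Compose changes. A hypothetical excerpt, assuming a Postgres service like the one the project runs (service name, image tag, and volume name are illustrative, not taken from the project's actual compose file):

```yaml
# docker-compose.yml excerpt -- illustrative sketch only
version: "3"
services:
  postgres:
    image: postgres:9.6
    restart: unless-stopped               # come back up after failures and host reboots
    volumes:
      - pgdata:/var/lib/postgresql/data   # keep data across container upgrades

volumes:
  pgdata:                                 # named volume survives `docker-compose down`
```

With `restart: unless-stopped` the Docker daemon restarts the container after crashes and reboots, and the named volume means pulling a newer app image doesn't wipe the database.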
This is just an opinion based on my usage of MfG. I think @AndrewGHC can make things more clear
@smiller171 I am not sure about that. Gotta ask @QuincyLarson @AndrewGHC and other contributors
But the main idea of MfG is - self hosted solution and privacy. You can see about the privacy part here too - https://github.com/freeCodeCamp/mail-for-good#why-are-we-doing-this
Also, FCC is more of a non-profit. Yes, hosting a service would help fund FCC's development of this project and other projects, but hosting a service is not that simple: we'd need to maintain SLAs, put up T&Cs, and address the privacy of data. And yes, if FCC is going to host a paid service, then @smiller171 we can use the architecture you mentioned, because in that case there will be many users using the paid service, unlike the self-hosted model.
Also, to make it a paid service, we'd have to change the code to add payment features, and maybe even change some things to make it more scalable. So it becomes a totally different problem from the goal that's mentioned in the Readme.
And enterprises are looking for beautiful templates, template building, and other features, which aren't MfG's primary aim. There are some thoughts on them, but they're not the main thing, since non-profit organisations can easily use simple plain-text emails, and those are effective too.
@karuppiah7890 You should aim for those architecture changes anyway. If everything is running on a single EC2 instance, the chance of losing all your data is nearly 100% if you're running it for long at all.
But for the topic at hand, my intent is to automate based on the current instructions first and iterate on that. Unless you have specific requirements you'd like to interject, it's probably easiest for me to present an initial version and iterate from there.
@AndrewGHC @QuincyLarson have been using the app in production for a long time. I believe they are using just a single machine and Docker containers (https://github.com/freeCodeCamp/mail-for-good#performance). But it's better to wait for them to give their opinions on the architecture and on what they are using in production.
And yes, you can automate based on the current instructions of course!
The chance of losing an individual EC2 instance is extremely high, so running Postgres or Redis on a single instance is a really bad idea for production. AWS has managed services for both that have a free tier, so you can have data resiliency while still paying nothing.
@smiller171 Yes, I know about the one-year free tiers. And I am still stunned that the chance of losing an individual EC2 instance is so high. I don't have much of a say in this, as I don't know much about instance availability in AWS or any other cloud provider, but it sounds pretty bad from what you say.
@karuppiah7890 Amazon gives no guarantee for individual EC2 instances. You are instructed to treat them as a disposable commodity. That's why ELBs and auto-scaling groups exist. Losing an instance is just a matter of when, not if, but if you are following best practices for auto-healing and scalability, it will never affect you. It's also fairly common for an entire Availability Zone to go down temporarily. Again, if you are following best practices for auto-healing and scalability, it will never affect you.
Heroku button or Digital Ocean 1-click installation would be top notch.
Also, we still have to configure some things in AWS for SES and the Google API keys. That's why it's better to have an almost completely automated installation in AWS, excluding the Google API keys and basic AWS config.
Since Mail for Good depends on SES, a one-click deploy is only possible in AWS using CloudFormation, not DigitalOcean, Heroku, or Zeit. Since SES is a requirement, AWS is the only cloud it makes any real sense to run this in.
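A one-click CloudFormation deploy boils down to publishing a template and a launch-stack link. A minimal hypothetical sketch of such a template — the parameter, resource names, and property values are illustrative, not the project's real template:

```yaml
# template.yml -- minimal CloudFormation sketch of a one-click deploy
AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative Mail for Good stack (not the real template)

Parameters:
  GoogleClientId:
    Type: String
    Description: Google OAuth client ID (the one piece still created manually)

Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
```

The user would fill in the OAuth parameter in the CloudFormation console and launch; everything else (instance, IAM policies, SES wiring) could be declared as further resources in the same template.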
@smiller171 People can run it even on DigitalOcean or other hosting if they want; even the Readme mentions performance with respect to DO servers. But yes, if they want the least manual config, AWS should be the choice, with the one-click install method that is going to be developed. Looking forward to it! :smile:
@karuppiah7890 I know you can run it anywhere, but maintaining infrastructure in two different clouds introduces complexity and latency that doesn't make sense.
@smiller171 Your comments have been taken on board and have given us greater perspective on how the deployment of this app can be improved. Would it be OK if we ping you on this in the near future, following some immediate plans, to discuss implementing it?
@AndrewGHC Absolutely. Probably the quickest way to reach me is to PM me on Gitter.
Deploy should be way simpler now. More hosting solutions will be explicitly supported if the community asks for them.
@zhakkarn I think you should keep this open, as it is not way simpler on AWS. I just tried it and it was a pain in the ass. It literally takes a lot of time (approx. 30 minutes) for an AWS beginner.
@smiller171 I agree with you. CloudFormation is the way to go; it won't require new users to set up a lot of things. CloudFormation is like Heroku's one-click deployment, where I didn't have to set up anything and everything worked. Setting it up on AWS by hand, though, took a lot of time. It's done for me now, but for other users a one-click installation through CloudFormation would be way better.
The current application deployment process is much more complicated than necessary, involving a great deal of manual effort in the AWS console.
IMO an effort should be made to automate this process, both for initial deployment and for updates, with tools such as Ansible or CloudFormation. As a DevOps engineer, I'd be happy to assist with the planning and building of this automation, and could take the lead on it if none of the existing project leads feel comfortable doing so.