chuangtim / my-fund


AWS development #6

Open chuangtim opened 8 years ago

chuangtim commented 8 years ago

A summary of AWS OpsWorks deployment.

chuangtim commented 8 years ago
Setting up a local OpsWorks development environment

Reference blog posts:
http://pixelcog.com/blog/2015/simplify-opsworks-dev-with-packer/
http://pixelcog.com/blog/2014/virtualizing-aws-opsworks-with-vagrant/

chuangtim commented 8 years ago

In Chef 11.10 and earlier versions for Linux, stack settings are available as Chef attributes and are accessed through the Chef node object or through Chef search. These attributes are stored on AWS OpsWorks instances in a set of JSON files in the /var/lib/aws/opsworks/chef directory. For more information, see Stack Configuration and Deployment Attributes: Linux (p. 519).

In Chef 12 Linux, stack settings are available as Chef data bags and are accessed only through Chef search. Data bags are stored on AWS OpsWorks instances in a set of JSON files in the /var/chef/runs/run-ID/data_bags directory, where run-ID is a unique ID that AWS OpsWorks assigns to each Chef run on an instance. Stack settings are no longer available as Chef attributes, so stack settings can no longer be accessed through the Chef node object. For more information, see the AWS OpsWorks Data Bag Reference (p. 612).
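A minimal sketch of the difference, reading the stack name both ways (aws_opsworks_stack is the built-in data bag name from the Data Bag Reference):

# Chef 11.10 and earlier: stack settings live on the node object
stack_name = node[:opsworks][:stack][:name]

# Chef 12 Linux: the same settings come from a data bag, via Chef search only
stack = search("aws_opsworks_stack").first
stack_name = stack["name"]
Chef::Log.info("Stack: #{stack_name}")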

# Example attribute overrides, placed in customize.rb as described below:
normal[:apache][:keepalivetimeout] = 5
normal[:apache][:logrotate][:schedule] = 'weekly'

You should not override AWS OpsWorks attributes by modifying a copy of the associated built-in attributes file. If, for example, you copy apache.rb to your apache2/attributes folder and modify some of its settings, you essentially override every attribute in the built-in file. Recipes will use the attribute definitions from your copy and ignore the built-in file. If AWS OpsWorks later modifies the built-in attributes file, recipes will not have access to the changes unless you manually update your copy. To avoid this situation, all built-in cookbooks contain an empty customize.rb attributes file, which is required in all modules through an include_attribute directive. By overriding attributes in your copy of customize.rb, you affect only those specific attributes. Recipes will obtain any other attribute values from the built-in attributes files, and automatically get the current values of any attributes that you have not overridden.

create "deploy/attributes/customize.rb" in your cookbook repository and put the overrides in YOUR customize.rb file.

normal[:opsworks][:deploy_user][:shell] = '/bin/zsh'
normal[:opsworks][:deploy_user][:user] = 'deploy'

You can use the Chef search method in your recipes to query for stack data. You use the same syntax as you would for Chef server, but AWS OpsWorks obtains the data from the local node object instead of querying a Chef server. This data includes:

• The instance's stack configuration and deployment attributes (p. 112).
• The attributes from the instance's built-in and custom cookbooks' attributes files.
• System data collected by Ohai.
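For example, a recipe can look up an instance with the same search call it would use against a Chef server (the hostname web1 is a placeholder):

web1 = search(:node, "hostname:web1").first
Chef::Log.info("Found #{web1[:hostname]}") unless web1.nil?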

chuangtim commented 8 years ago
Installing a local OpsWorks environment

cd ~
mkdir vagrant
cd vagrant
vagrant box add base ~/box/xx.box
vagrant init
vagrant up
vagrant ssh

vagrant reload --provision # run the provisioners when restarting; normally provisioner configuration only runs during vagrant up

vagrant destroy # remove the current VirtualBox machine and its configuration

vagrant plugin install vagrant-example-plugin
vagrant plugin update
vagrant plugin update NAME

vagrant plugin uninstall
vagrant plugin uninstall vagrant-example-plugin

vagrant plugin list

vagrant up --debug
config.vm.network :forwarded_port, guest: 80, host: 4567
config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"

Suspending: running vagrant suspend saves the virtual machine's current running state and stops it, just like suspending the host. When you need it again, vagrant up restores the previous running state. The advantage is that startup is very fast and you can get back into the development environment quickly; the disadvantage is that it uses more disk space, because the VM writes the contents of its memory to disk.

Halting: this stops the virtual machine normally, using the VM's own shutdown command. To use it again, run vagrant up to boot it, just like a cold boot of a computer. The disadvantage is that startup takes longer, and it still uses some disk space.

Destroying: this destroys the virtual machine so it no longer uses disk space. The next time you start it, the development environment is rebuilt from scratch, including downloading software and so on.
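For reference, the corresponding commands side by side (vagrant resume also wakes a suspended machine):

vagrant suspend # save the machine state to disk and stop it
vagrant resume # restore a suspended machine (vagrant up also works)
vagrant halt # graceful shutdown
vagrant destroy # delete the machine and free its disk space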

chuangtim commented 8 years ago

Use Proxifier as a global proxy; the cmd command line can then use the proxy too. https://kyonli.com/p/64

Use Privoxy as an HTTP proxy, forwarding traffic to the local Shadowsocks client. http://www.privoxy.org/


Proxy the Vagrant guest machine with the vagrant-proxyconf plugin; the workflow is vagrant-proxyconf + Privoxy + Shadowsocks, and it works wonderfully. https://github.com/tmatilai/vagrant-proxyconf

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "base"

  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://192.168.1.201:8118"
    config.proxy.https    = "http://192.168.1.201:8118"
    config.proxy.no_proxy = "localhost,127.0.0.1"
  end
  if Vagrant.has_plugin?("vagrant-cachier")
     config.cache.auto_detect = true
     config.cache.scope = :machine
     config.omnibus.cache_packages = true
     config.omnibus.install = false
     config.omnibus.chef_version = "11.10.0"
  end
  config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
  config.omnibus.chef_version = '11.10.0'
  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  #config.vm.network "private_network", ip: "192.168.1.199"

  # Create a public network, which generally matches a bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  #config.vm.network "public_network"
  config.vm.network "public_network", ip: "192.168.1.199"
  config.vm.provision "chef_solo" do |chef|
    chef.add_recipe "installpkg::default"
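    # Assumption: chef_solo reads cookbooks from a "cookbooks" folder next to
    # this Vagrantfile by default; set chef.cookbooks_path to point elsewhere.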
  end
  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #  vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  # config.push.define "atlas" do |push|
  #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   sudo apt-get update
  #   sudo apt-get install -y apache2
  # SHELL
end
chuangtim commented 8 years ago
# Query stack data from a recipe with Chef search
stackname = node[:opsworks][:stack][:name]
layername = node[:opsworks][:instance][:layers].first
# Double quotes are required here so #{layername} is interpolated
nodelist = search(:node, "layers:#{layername}")
hostname = node[:opsworks][:instance][:hostname]
Chef::Log.info("Hostname: #{hostname}")
Chef::Log.info("Nodes detected: #{nodelist}")
chuangtim commented 8 years ago

HAProxy + WebSocket configuration

global
    maxconn     4096 # Total Max Connections. This is dependent on ulimit
    nbproc      2

defaults
    mode        http

frontend all 0.0.0.0:80
    timeout client 5000
    default_backend www_backend
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws

    use_backend socket_backend if is_websocket

backend www_backend
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout server 5000
    timeout connect 4000
    server server1 localhost:8081 weight 1 maxconn 1024 check
    server server2 localhost:8082 weight 1 maxconn 1024 check
    server server3 localhost:8083 weight 1 maxconn 1024 check

backend socket_backend
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout queue 5000
    timeout server 5000
    timeout connect 5000
    server server1 localhost:8081 weight 1 maxconn 1024 check
    server server2 localhost:8082 weight 1 maxconn 1024 check
    server server3 localhost:8083 weight 1 maxconn 1024 check

ELB (PROXY protocol) + WebSocket

global
    log 127.0.0.1:514 local0
defaults
    mode http
    log global
    option httplog
    option http-server-close
    option dontlognull
    option redispatch
    option contstats
    retries 3
    backlog 10000
    timeout client 25s
    timeout connect 5s
    timeout server 25s
    timeout tunnel 3600s
    timeout http-keep-alive 1s
    timeout http-request 15s
    timeout queue 30s
    timeout tarpit 60s
    default-server inter 3s rise 2 fall 3
    option forwardfor
frontend storify_editor_frontend
    bind *:3030 accept-proxy name storify_editor_frontend
    maxconn 1000
    default_backend storify_editor_backend
backend storify_editor_backend
    timeout check 5000
    option httpchk GET /status?all=1
    balance source
    server storifyeditor1.prod.livefyre.com storifyeditor1.prod.livefyre.com:3030 maxconn 1000 weight 10 cookie websrv1 check inter 10000 rise 1 fall 3
    server storifyeditor2.prod.livefyre.com storifyeditor2.prod.livefyre.com:3030 maxconn 1000 weight 10 cookie websrv1 check inter 10000 rise 1 fall 3
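For the ELB side, the PROXY protocol has to be enabled explicitly on the back-end port. A sketch with the AWS CLI (the load balancer name my-elb is an assumption; port 3030 matches the bind above):

aws elb create-load-balancer-policy --load-balancer-name my-elb \
    --policy-name EnableProxyProtocol \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-elb \
    --instance-port 3030 --policy-names EnableProxyProtocol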
chuangtim commented 8 years ago

Nginx proxy-protocol

worker_processes auto;
error_log /usr/local/nginx/logs/error.log info;
events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 0.0.0.0:5000  fail_timeout=30s;
    }

    server {
        listen 3000;
        proxy_protocol on;
        proxy_pass backend;
    }
}
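Note the direction: proxy_protocol on; inside the stream server makes nginx send the PROXY protocol header to the upstream. To accept the header from a load balancer in front of nginx, it goes on the listen directive instead, e.g.:

    server {
        listen 3000 proxy_protocol;  # accept PROXY protocol from the ELB
        proxy_pass backend;
    }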
chuangtim commented 8 years ago

HAProxy simple HTTP server

global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen http-in
    bind *:9000
    server server1 127.0.0.1:8000 maxconn 32
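To try it locally (assuming the config is saved as haproxy.cfg and something is already listening on 127.0.0.1:8000):

haproxy -c -f haproxy.cfg # validate the configuration
haproxy -f haproxy.cfg # start; it backgrounds itself because of the daemon option
curl http://localhost:9000/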