EBS ■EBS volume types
Throughput Optimized HDD (st1): for workloads that need high, sustained throughput (MapReduce, Kafka, ETL, log processing, data warehouses); $0.054 per GB-month. Throughput: 250 MB/s per TB, up to a maximum of 500 MB/s. Use case: big data, data warehouses, log processing.
Cold HDD (sc1)
For similar workloads where the data is accessed less frequently; $0.03 per GB-month.
Cold HDD (sc1) – 80 MB/s per TB, up to a maximum of 250 MB/s.
Use case: older data requiring fewer scans per day.
EBS Provisioned IOPS SSD (io1) Highest performance SSD volume designed for latency-sensitive transactional workloads. UseCase: I/O-intensive NoSQL and relational databases.
EBS General Purpose SSD (gp2) General Purpose SSD volume that balances price and performance for a wide variety of transactional workloads.
■Maximum ratio of IOPS to volume size is 50:1 For Provisioned IOPS (io1) volumes, the maximum ratio of IOPS to volume size is 50:1, so if the volume size is 8 GiB, the maximum IOPS you can provision is 400. Requesting more than this fails with a validation error.
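A minimal boto3 sketch of staying within this ratio (the region, Availability Zone, and sizes are illustrative):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

size_gib = 8
max_iops = size_gib * 50  # 50:1 ratio -> at most 400 IOPS for an 8 GiB volume

# Creating the volume within the allowed ratio succeeds; requesting more than
# size_gib * 50 IOPS would be rejected with a validation error.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=size_gib,
    VolumeType="io1",
    Iops=max_iops,
)
print(volume["VolumeId"])
```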
■statuses for Amazon EBS volumes Volume status checks are automated tests that run every 5 minutes and return a pass or fail status. If all checks pass, the status of the volume is ok. If a check fails, the status of the volume is impaired. If the status is insufficient-data, the checks may still be in progress on the volume. You can view the results of volume status checks to identify any impaired volumes and take any necessary actions.
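A hedged boto3 sketch of reading those volume status checks (the volume ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_volume_status(VolumeIds=["vol-0123456789abcdef0"])
for vol in resp["VolumeStatuses"]:
    status = vol["VolumeStatus"]["Status"]  # "ok", "impaired", or "insufficient-data"
    print(vol["VolumeId"], status)
```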
■ Backup: point-in-time snapshot of an EBS volume A point-in-time snapshot of an EBS volume can be used as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental: only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the entire volume. You can create a snapshot via the CLI command create-snapshot. Note: lifecycle policies do not apply to EBS; that is an S3 feature.
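A hedged boto3 equivalent of the create-snapshot call (the volume ID and description are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Each snapshot is incremental: only blocks changed since the previous snapshot are stored.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup of data volume",
)
print(snapshot["SnapshotId"], snapshot["State"])
```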
■Snapshot for EBS Volumes in a RAID configuration When you take a snapshot of an attached Amazon EBS volume that is in use, the snapshot excludes data cached by applications or the operating system. For a single EBS volume, this is often not a problem. However, when cached data is excluded from snapshots of multiple EBS volumes in a RAID array, restoring the volumes from the snapshots can degrade the integrity of the array.
When creating snapshots of EBS volumes that are configured in a RAID array, it is critical that there is no data I/O to or from the volumes when the snapshots are created. RAID arrays introduce data interdependencies and a level of complexity not present in a single EBS volume configuration.
The correct process is: stop all I/O to the array (stop the application, flush any caches to disk, and freeze the file system or unmount the volumes), take snapshots of every volume in the array, and then resume I/O.
■EBS volumes cannot tolerate an AZ failure An EBS volume is automatically replicated only within its own Availability Zone, so if that AZ fails, the EBS volume is lost with it. That is why AWS recommends always keeping EBS volume snapshots (which are stored in Amazon S3) for high durability.
■Monitoring volumes with CloudWatch Basic: data is available automatically in 5-minute periods at no charge. This includes data for the root device volumes of EBS-backed instances. Detailed: Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch.
Q: When creation of an EBS snapshot is initiated, but not completed, the EBS volume: A. Cannot be used until the snapshot completes B. Can be used in read-only mode while the snapshot is in progress C. Can be used while the snapshot is in progress D. Cannot be detached or attached to an EC2 instance until the snapshot completes Answer: C.
EC2
■ Spot block runs continuously for a finite duration (1 to 6 hours).
■ Spot Fleet A Spot Fleet attempts to launch the number of Spot Instances and On-Demand Instances needed to meet the target capacity that you specified in the Spot Fleet request. A Spot Instance pool is a set of unused EC2 instances with the same instance type, operating system, Availability Zone, and network platform (EC2-Classic or EC2-VPC).
■ I2 Instance Type The I2 instance type was designed to host I/O intensive workloads typically generated by relational databases, NoSQL databases, and transactional systems.
■Bastion Server A bastion is a special purpose server instance that is designed to be the primary access point from the Internet and acts as a proxy to your other EC2 instances.
Bastion servers are instances in the public subnet that are used as jump servers to reach resources in other subnets.
■Instance termination (Terminate) When an instance is terminated, the data on all instance store volumes associated with that instance is deleted. By default, the Amazon EBS root device volume is automatically deleted when the instance is terminated. However, any additional EBS volumes that were attached at launch, or attached later to an existing instance, are preserved by default after the instance is terminated.
■Termination Protection for an Instance If you want to prevent your instance from being accidentally terminated using Amazon EC2, you can enable termination protection for the instance. The DisableApiTermination attribute controls whether the instance can be terminated using the console, CLI, or API. By default, termination protection is disabled for your instance.
An "S3-backed" instance is an instance store-backed AMI; its data is lost when the instance is terminated.
Limits :You can't enable termination protection for Spot instances — a Spot instance is terminated when the Spot price exceeds your bid price. However, you can prepare your application to handle Spot instance interruptions.
■ instance store-backed instances can be terminated but they can't be stopped.
■ VPC Flow Logs VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.
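A hedged boto3 sketch of turning on flow logs for a VPC (the VPC ID, log group name, and IAM role ARN are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

resp = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",                       # ACCEPT, REJECT, or ALL
    LogGroupName="my-vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
print(resp["FlowLogIds"])
```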
■ Server access logging for all required Amazon S3 buckets Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. Access log information can be useful in security and access audits.
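A hedged boto3 sketch of enabling server access logging on a bucket (the bucket names are placeholders; the target bucket must grant the S3 log delivery group write access):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="my-app-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-access-log-bucket",
            "TargetPrefix": "app-bucket-logs/",
        }
    },
)
```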
■ CloudWatch
● CloudWatch Events Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources (that is, it watches for state changes of AWS resources such as EC2 instances).
● Amazon CloudWatch Logs You can use Amazon CloudWatch Logs to monitor, store, and access log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs.
■Amazon EC2 Usage Reports AWS provides a free reporting tool called Cost Explorer that enables you to analyze the cost and usage of your EC2 instances and the usage of your Reserved Instances.
■CloudTrail AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.
An event in CloudTrail is the record of an activity in an AWS account. This activity can be an action taken by a user, role, or service that is monitorable by CloudTrail. CloudTrail events provide a history of both API and non-API account activity made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. There are two types of events that can be logged in CloudTrail: management events and data events. By default, trails log management events, but not data events. Both management events and data events use the same CloudTrail JSON log format.
A trail is a configuration that enables delivery of CloudTrail events to an Amazon S3 bucket, CloudWatch Logs, and CloudWatch Events.
A trail that applies to all regions has the following advantages: -- You receive CloudTrail events from all regions in a single S3 bucket -- You immediately receive events from a new region. When a new region is launched, CloudTrail automatically creates a trail for you in the new region with the same settings as your original trail.
CloudTrail logs are encrypted by default, so no extra encryption setup is needed for them. By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMS–managed keys (SSE-KMS) for your CloudTrail log files.
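A hedged boto3 sketch of creating a trail that applies to all regions (the trail name, bucket, and optional KMS key ARN are placeholders; the bucket needs a policy that allows CloudTrail delivery):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-wide-trail",
    S3BucketName="my-cloudtrail-bucket",
    IsMultiRegionTrail=True,   # receive events from all regions in a single bucket
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",  # optional SSE-KMS
)
cloudtrail.start_logging(Name="org-wide-trail")
```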
■AWS Trusted Advisor An online resource that helps you reduce cost, improve performance, and improve security by optimizing your AWS environment. It is not an auditing tool. It can check whether you are approaching or have exceeded service limits.
Your AWS Support plan determines which Trusted Advisor checks you can access.
■AWS Config AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
AWS Config is a service that lets you assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines.
AWS Config rules Config lets you define rules for how AWS resources are provisioned and configured. When a resource configuration or configuration change deviates from a rule, an Amazon Simple Notification Service (SNS) notification is triggered automatically.
■AWS Organizations AWS Organizations offers policy-based management for multiple AWS accounts. With Organizations, you can create groups of accounts and then apply policies to those groups. Organizations lets you centrally manage policies across multiple accounts without requiring custom scripts or manual processes.
■ELB log Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.
Access logging is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify.
Which of the following requires a custom CloudWatch metric to monitor?
A. Memory Utilization of an EC2 instance
B. CPU Utilization of an EC2 instance
C. Disk usage activity of an EC2 instance
D. Data transfer of an EC2 instance
Answer: A. Memory utilization requires a custom CloudWatch metric; metrics for CPU utilization, data transfer, and disk usage activity from Amazon EC2 instances are provided for free.
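A hedged boto3 sketch of publishing such a custom memory metric (the namespace, instance ID, and value are made up; a real agent would read the value from the OS):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": 62.5,
        "Unit": "Percent",
    }],
)
```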
What services will help identify Amazon EC2 instances with underutilized CPU capacity? Amazon EC2 usage reports, Amazon CloudWatch
Q: A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the customer's requirement? A. Enable AWS CloudTrail to audit all Amazon S3 bucket access. B. Enable server access logging for all required Amazon S3 buckets. C. Enable the Requester Pays option to track access via AWS Billing. D. Enable Amazon S3 event notifications for Put and Post.
Answer: B.
Keywords to review:
NAT gateway; Storage Gateway cached mode and stored mode; iSCSI; cross-region snapshots
ACLs and security groups; attaching a security group to a resource
CloudTrail and CloudWatch
Instance metadata; indexes
S3 key suffixes and prefixes; expiring outdated data
Single points of failure
Choosing the appropriate EBS type
Multiple accounts with Direct Connect
Subnets: must the ELB and EC2 instances be in a public subnet?
Encryption on EC2 instances, EBS, and S3
Ordered message delivery: SNS vs. SQS
Redshift data across Regions; Snowball
Kinesis Firehose
Auto Scaling: step, dynamic, scheduled
AWS Storage Gateway The AWS Storage Gateway service provides hybrid storage between your on-premises environment and the AWS Cloud. It is typically used to back up on-premises data.
AWS Storage Gateway supports three storage interfaces: file, volume, and tape. The file gateway enables you to store and retrieve objects in Amazon S3 using file protocols, such as NFS. Objects written through file gateway can be directly accessed in S3.
The volume gateway provides block storage to your applications using the iSCSI protocol. Data on the volumes is stored in Amazon S3. To access your iSCSI volumes in AWS, you can take EBS snapshots which can be used to create EBS volumes.
The tape gateway provides your backup application with an iSCSI virtual tape library (VTL) interface, consisting of a virtual media changer, virtual tape drives, and virtual tapes. Virtual tape data is stored in Amazon S3 or can be archived to Amazon Glacier.
The gateway is stateless, allowing you to easily create and manage new instances of your gateway as your storage needs evolve. Finally, it integrates natively into AWS management services such as Amazon CloudWatch, AWS CloudTrail, AWS KMS, and IAM.
Q: What sort of encryption does AWS Storage Gateway use to protect my data? All data transferred between any type of gateway appliance and AWS storage is encrypted using SSL. By default, all data stored by AWS Storage Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3). Also, when using the file gateway, you can optionally configure each file share to have your objects encrypted with AWS KMS-Managed Keys using SSE-KMS.
■cached mode , stored mode Volume gateway provides an iSCSI target, which enables you to create volumes and mount them as iSCSI devices from your on-premises or EC2 application servers. The volume gateway runs in either a cached or stored mode. ・In the cached mode, your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency access. ・In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously backed up to AWS.
Security Groups:
■What will be the outcome when a workstation with IP 54.12.34.34 tries to access your subnet? Answer: The request will be allowed. The parts of a network ACL rule include the rule number. Rules are evaluated starting with the lowest-numbered rule; as soon as a rule matches the traffic, it is applied regardless of any higher-numbered rule that may contradict it. Since the first rule number here is 100 and it allows all traffic, all traffic will be allowed no matter what rules you put after it.
■If a ping command gets no response, the security group configuration is probably at fault; it is not an IP-address problem (the public IP or Elastic IP of the instances is what is used to communicate with the internet). The route table must also be configured correctly, of course.
Ping does not use HTTP/HTTPS; it uses ICMP. The security groups need to be configured so that ping traffic can get through: the ICMP protocol must be allowed, which means editing the inbound rules of the web security group.
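A hedged boto3 sketch of opening ICMP in a security group (the group ID and source CIDR are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "icmp",
        "FromPort": -1,   # -1/-1 means all ICMP types and codes
        "ToPort": -1,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # restrict to the network that needs to ping
    }],
)
```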
■Static IP for instance An instance must either have a public or Elastic IP in order to be accessible from the internet.
A public IP address is reachable from the Internet. You can use public IP addresses for communication between your instances and the Internet. An Elastic IP address is a static IP address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
An Elastic IP address is a public IP address, which is reachable from the Internet. If your instance does not have a public IP address, you can associate an Elastic IP address with your instance to enable communication with the Internet; for example, to connect to your instance from your local computer.
■A VPN connection requires a public IP address on the customer gateway. In order to establish a successful site-to-site VPN connection from your on-premises network to the VPC (Virtual Private Cloud), you need to configure a public IP address on the customer gateway for the on-premises network.
■NAT
Network Address Translation (NAT) Instances: You can use a network address translation (NAT) instance in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet or other AWS services, but prevent the instances from receiving inbound traffic initiated by someone on the Internet. ●NAT instances must be in a public subnet ●The amount of traffic that a NAT instance supports depends on its instance size; if you hit a bottleneck, increase the instance size ●NAT instances always sit behind a security group ●You must disable source/destination checks on the NAT instance
Network Address Translation (NAT) Gateway: You can also use a NAT gateway, which is a managed NAT service that provides better availability, higher bandwidth, and requires less administrative effort. For common use cases, we recommend that you use a NAT gateway rather than a NAT instance. ●NAT Gateways scale automatically up to 10Gbps ●No need to assign a security group, NAT gateways are not associated with security groups ●No need to disable Source/Destination checks ●More secure than a NAT instance
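A hedged boto3 sketch of creating a NAT gateway and routing a private subnet through it (the subnet and route table IDs are placeholders; the gateway takes a short time to become available before the route carries traffic):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",      # public subnet
    AllocationId=eip["AllocationId"],
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Point the private subnet's default route at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",     # private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```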
■Network Access Control Lists (NACLs / ACLs): NACLs are stateless, meaning both inbound and outbound rules must be configured for the traditional request/response model. Start with rule numbers at 100 so you can insert rules before them if needed. The default NACL allows ALL traffic in and out by default. Custom NACLs deny all inbound and outbound traffic by default until allow rules are added. Every subnet must be associated with a NACL; if you do not explicitly associate one, the subnet is associated with the default NACL. A subnet can be associated with only one NACL at a time, and associating a NACL with a subnet removes any previous association. You can block IP addresses using NACLs, not security groups.
■VPC Peering: You can create VPC peering connections between your own VPCs or with a VPC in another account within a SINGLE REGION. There is NO single point of failure for communication nor any bandwidth bottleneck.
■VPC Endpoints: Allows internal resources such as EC2 instances to reach various AWS services without having to traverse the public internet to get to the service.
■Direct Connect (DX): AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1q VLANs.
AWS Direct Connect is a service that links your internal network to an AWS Direct Connect location over a standard 1-gigabit or 10-gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection, you can create virtual interfaces directly to public AWS services (for example, Amazon S3) or to Amazon VPC, bypassing internet service providers in your network path.
■Direct Connect gateway With an AWS Direct Connect gateway, you can connect your AWS Direct Connect connection over a private virtual interface to one or more VPCs in your account that are located in the same or different Regions.
In the following diagram, a Direct Connect gateway lets you use an AWS Direct Connect connection in the US East (N. Virginia) Region to access VPCs in your account in both the US East (N. Virginia) and US West (N. California) Regions.
■Customer Gateway The traffic from the VPC gateway must be able to leave the VPC and traverse the internet to reach the customer gateway. Hence the customer gateway needs to be assigned a static IP address that is routable over the internet.
NO.9 You have been asked to design a NAT solution for your company's VPC-based web application. Traffic from the private subnets varies throughout the day from 500 Mbps to spikes of 7 Gbps. What is the most cost-effective and scalable solution? A. Create an Amazon EC2 NAT instance with a second elastic network interface (ENI) in a public subnet; route all private subnet Internet traffic through the NAT instance. B. Create an Auto Scaling group of Amazon EC2 NAT instances in a public subnet; route all private subnet Internet traffic through the NAT instances. C. Move the Internet gateway for the VPC to a public subnet; route all Internet traffic through the Internet gateway. D. Create a NAT gateway in a public subnet; route all private subnet Internet traffic through the NAT gateway. Answer: D
Eliminating Single Points of Failures on AWS Cloud https://www.botmetric.com/blog/eliminating-single-points-of-failures-on-aws-cloud/
Single NAT instance in the network: A NAT acts like a cable modem and connects your private subnets to the public network. If the NAT instance has a problem, your workloads are ultimately affected. To prevent this, set up an HA NAT on another instance and make it cross-region.
Running all workloads in a single AZ (compute/storage): Running or storing all of your critical workloads in one single Availability Zone is highly risky. To avoid this, take backups of all your IT infrastructure modules, essential application settings, and so on; it is highly recommended to periodically copy your data backups across AWS Regions.
You can do so by scheduling a job for cross-region copies: (1) copy EBS volume snapshots (based on volume tags) across Regions, (2) copy RDS snapshots (based on RDS tags) across Regions. This is perhaps the best strategy for surviving extreme cloud outages, even if a failure takes out an entire AWS Region.
Single DNS and other DNS issues in the network: To prevent this, use multi-region DNS and make sure Time to Live (TTL) values are set to short intervals to enable fast failover.
Not setting up auto scaling for core services: When servers go down, use the AWS Auto Scaling option, which works with selective services such as ELB.
AWS load balancer across the network: It often happens that after setting up your ELB you see significant drops in performance. The best way to handle this is to first identify whether your ELB spans a single AZ or multiple AZs, since a single-AZ ELB is also considered a single point of failure on the AWS Cloud. Once you have identified this, make sure the ELB spreads load across multiple zones.
AWS RDS within a single AZ: The database should run in RDS Multi-AZ. Also, make sure that you copy snapshots across Regions as a backup plan.
Manual scaling: Have the AWS Auto Scaling option ready; it works with selective services such as ELB.
Auto Scaling Amazon EC2 Auto Scaling provides several ways for you to scale your Auto Scaling group.
Dynamic Scaling for Amazon EC2 Auto Scaling When you configure dynamic scaling, you must define how to scale in response to changing demand. For example, suppose you have a web application that currently runs on two instances and you want the CPU utilization of the Auto Scaling group to stay at around 70 percent. You can configure your Auto Scaling group to scale automatically to meet this need. The policy type determines how the scaling action is performed.
★If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in the Auto Scaling group, we recommend that you use a target tracking scaling policy. With target tracking scaling policies you select a predefined metric or configure a customized metric, and set a target value. EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value.
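A hedged boto3 sketch of such a target tracking policy, keeping average group CPU near 70% (the group and policy names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-70",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 70.0,
    },
)
```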
Scheduled scaling works better when you can predict the load changes and also know how long you need to run. In this scenario we only know that there will be heavy traffic during the campaign period (the period is not specified), but we are not sure about the actual traffic and have no history to predict it either.
Amazon Redshift ■ Cross-region COPY Previously, S3 data loaded into Amazon Redshift had to be uploaded to a bucket created in the same Region as the Redshift cluster. With cross-region COPY, the COPY command can now load data across Regions. For disaster recovery, use cross-region snapshot copy.
https://docs.aws.amazon.com/redshift/latest/APIReference/API_EnableSnapshotCopy.html
■Block size
■Amazon Redshift Enhanced VPC Routing If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the internet, including traffic to other services within the AWS network. Enhanced VPC Routing forces traffic between Redshift and other resources to go through your VPC: Redshift cannot reach S3 VPC endpoints, or a NAT instance, without Enhanced VPC Routing enabled.
■snapshots Amazon Redshift stores these snapshots internally in Amazon S3 by using an encrypted Secure Sockets Layer (SSL) connection. If you need to restore from a snapshot, Amazon Redshift creates a new cluster and imports data from the snapshot that you specify.
Accumulating too many snapshots costs money, so delete the ones you no longer need. Amazon Redshift provides free storage for snapshots that is equal to the storage capacity of your cluster until you delete the cluster. After you reach the free snapshot storage limit, you are charged for any additional storage at the normal rate. Because of this, you should evaluate how many days you need to keep automated snapshots and configure their retention period accordingly, and delete any manual snapshots that you no longer need.
■Although Redshift is SQL-based and provides RDBMS functionality, it is a data warehouse and is not suited to highly concurrent transactional workloads. If you need to support a highly concurrent workload, choose RDS or DynamoDB instead.
-- Using SAML-Based Federation for API Access to AWS Imagine that in your organization, you want to provide a way for users to copy data from their computers to a backup folder. You build an application that users can run on their computers. On the back end, the application reads and writes objects in an S3 bucket. Users don't have direct access to AWS. Instead, the following process is used:
Single sign-on (SSO) The ability to access multiple operating systems, applications, and services after a single authentication step, or the system that makes this possible.
Cross-Account Access enable a user to switch roles directly in the AWS Management Console to access resources across multiple AWS accounts—while using only one set of credentials. AWS Identity and Access Management (IAM) roles in combination with IAM users to enable cross-account API access or delegate API access within an account. This functionality gives you better control and simplifies access management when you are managing services and resources across multiple AWS accounts. You can enable cross-account API access or delegate API access within an account or across multiple accounts without having to share long-term security credentials.
AWS Identity and Access Management (IAM) roles
Web Identity Federation With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary AWS security credentials that map to an IAM role with permissions to use resources in your AWS account.
Encrypting a database: 1. Enable encryption at database creation time. 2. Make sure the underlying instance type supports DB encryption.
Encryption of Data at Rest http://jayendrapatil.com/aws-securing-data-at-rest/
■ CloudFront and SQS do NOT have Encryption at Rest. AWS Storage Gateway, DynamoDB, Glacier have encryption at rest.
■EFS Amazon Elastic File System (EFS) now allows you to encrypt your data at rest using keys managed through AWS Key Management Service (KMS). https://aws.amazon.com/about-aws/whats-new/2017/08/amazon-efs-now-supports-encryption-of-data-at-rest/?nc1=h_ls
■Amazon Glacier provide native encryption。 Glacier provide encryption of the data, by default Before it’s written to disk, data is always automatically encrypted using 256-bit AES keys unique to the Amazon Glacier service。
■AWS Storage Gateway provide native encryption。 AWS Storage Gateway transfers your data to AWS over SSL AWS Storage Gateway stores data encrypted at rest in Amazon S3 or Amazon Glacier using their respective server side encryption schemes.
■Amazon S3 Data protection covers data in transit (as it travels to and from Amazon S3) and data at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. For protecting data at rest in Amazon S3, you have the following options: (1) Use Server-Side Encryption: you request Amazon S3 to encrypt your object before saving it on disks in its data centers and to decrypt it when you download the object. (2) Use Client-Side Encryption: you encrypt data client-side and upload the encrypted data to Amazon S3; in this case, you manage the encryption process, the encryption keys, and related tools.
Using Server-Side Encryption You have three mutually exclusive options depending on how you choose to manage the encryption keys:
※注: SSE-S3 requires that Amazon S3 manage the data and master encryption keys. SSE-C requires that you manage the encryption key. SSE-KMS requires that AWS manage the data key but you manage the master key in AWS KMS.
■Amazon EBS When an Amazon EBS volume is created, you can choose the KMS master key to be used for encrypting the volume. The EBS encryption flow is: 1. EC2 asks KMS to generate an encryption key: the Amazon EC2 server sends an authenticated request to AWS KMS to create a volume key. 2. KMS returns the key to EC2: AWS KMS generates this volume key, encrypts it using the master key, and returns the plaintext volume key and the encrypted volume key to the Amazon EC2 server. 3. The plaintext volume key is kept in memory to encrypt and decrypt all data going to and from your attached EBS volume.
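A hedged boto3 sketch of creating an encrypted volume with a specific CMK (the Availability Zone and key ARN are placeholders; omitting KmsKeyId uses the default aws/ebs key):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```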
■AWS services CloudHSM can be used for RDS , Amazon Redshift Q: Can Amazon recover my keys if I lose my credentials to my HSM? No. Amazon does not have access to your keys or credentials and therefore has no way to recover your keys if you lose your credentials.
S3 ■Performance optimization If your workload is expected to consistently exceed 100 requests per second, avoid sequential key names. If you use sequential numbers or date-time patterns in key names, add a random prefix to the key names. A random prefix distributes the key names evenly across multiple index partitions. Example 1: add a hexadecimal hash prefix to the key name
One way to randomize key names is to add a hash string as a prefix to the key name. If you need to group objects, add further prefixes in front of the hash string. The following example adds the prefixes animations/ and videos/ to the key names; in that case the sorted list returned by the GET Bucket (List Objects) operation is grouped by the animations and videos prefixes.
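A minimal Python sketch of this hashing idea (the grouping prefixes and file names are illustrative):

```python
import hashlib

def randomized_key(group_prefix: str, object_name: str) -> str:
    """Prepend a short hash so key names spread across S3 index partitions."""
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()[:4]
    return f"{group_prefix}/{digest}-{object_name}"

print(randomized_key("videos", "2015-08-15-1524-clip.mp4"))
# -> videos/<4 hex chars>-2015-08-15-1524-clip.mp4
```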
Example 2: reverse the key name string
If your workload mainly sends GET requests, consider using Amazon CloudFront in addition to the guidelines above to optimize performance.
■CORS Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
■Troubleshooting CORS Issues If you encounter unexpected behavior while accessing buckets set with the CORS configuration, try the following steps to troubleshoot:
■Bucket naming restrictions The restrictions when naming buckets in S3 are: names may contain lowercase letters, numbers, and hyphens (-); periods (.) are allowed, but not at the beginning or end of the name.
■ S3 URL patterns for web hosting
■ Cross-Region Replication (CRR) Cross-region replication is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS Regions.
Requirements for cross-region replication:
Cross-region replication does not protect against accidental deletion.
■Read-After-Write Consistency With read-after-write consistency, a newly created object, file, or table row is immediately visible, without any delay. Amazon S3 provides read-after-write consistency for PUTs of new objects in S3 buckets in all Regions.
https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions. Updates to a single key are atomic. For example, if you PUT to an existing key, a subsequent read might return the old data or the updated data, but it will never write corrupted or partial data.
Amazon S3 achieves high availability by replicating data across multiple servers within Amazon's data centers. If a PUT request is successful, your data is safely stored. However, information about the changes must replicate across Amazon S3, which can take some time, and so you might observe the following behaviors:
■Amazon S3 Infrequent Access Amazon S3 Infrequent Access is perfect if you want to store data that does not need to be accessed frequently. It is much more cost-effective than Amazon S3 Standard. If you choose Amazon Glacier with expedited retrievals, you defeat the whole purpose of the requirement, because that option increases cost. Note: data retrieval with Glacier expedited retrievals is more expensive than retrieval from Amazon S3 Infrequent Access.
Q: Your company's IT policies mandate that all critical data must be duplicated in two physical locations at least 100 miles apart. What storage option meets this requirement? Answer: One Amazon S3 bucket. For the S3 Standard, S3 Standard-IA, and Amazon Glacier storage classes, objects are automatically stored across multiple devices spanning at least three Availability Zones that are separated by large distances within a single AWS Region.
ELB (Elastic Load Balancing) ■Note: an ELB cannot be deployed across Regions; it is a regional service.
■ Elastic Load Balancing supports three types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers.
■Application Load Balancer A layer 7 load balancer. Note: it does not support the TCP protocol. Path-based routing: routing based on the URL path is possible. Example: route to target1 normally, and route to target2 when the path matches /target/*.
Application Load Balancer Components: 1, A load balancer serves the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This increases the availability of your application. You add one or more listeners to your load balancer. 2, A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to one or more target groups, based on the rules that you define.
You can use a microservices architecture to structure your application as services that you can develop and deploy independently. You can use a single Application Load Balancer to route requests to all the services for your application.
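A hedged boto3 sketch of a path-based routing rule on an Application Load Balancer listener (the listener and target group ARNs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Requests whose path matches /target/* are forwarded to target group 2.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/target/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/target2/0123456789abcdef",
    }],
)
```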
LCU stands for Load Balancer Capacity Unit, the unit in which ALB usage is metered.
■Classic Load Balancer Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances.
■Network Load Balancer A layer 4 load balancer. Static IP addresses: you can assign a fixed IP address to the load balancer.
Components: 1、A load balancer serves as the single point of contact for clients. The load balancer distributes incoming traffic across multiple targets, such as Amazon EC2 instances. 2.A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to a target group. 3.Each target group routes requests to one or more registered targets, such as EC2 instances, using the TCP protocol and the port number that you specify.
■There is no need to assign your own IP address to an ELB; it is not addressed directly by IP from the internet (clients reach it through its DNS name).
■Monitor Your Classic Load Balancer
■Register or Deregister (Classic ELB) Registering an EC2 instance adds it to your load balancer. The load balancer continuously monitors the health of registered instances in its enabled Availability Zones, and routes requests to the instances that are healthy. If demand on your instances increases, you can register additional instances with the load balancer to handle the demand.
Deregistering an EC2 instance removes it from your load balancer. The load balancer stops routing requests to an instance as soon as it is deregistered. If demand decreases, or you need to service your instances, you can deregister instances from the load balancer.
■Configure Cross-Zone Load Balancing for Your Classic Load Balancer With cross-zone load balancing, each load balancer node for your Classic Load Balancer distributes requests evenly across the registered instances in all enabled Availability Zones. If cross-zone load balancing is disabled, each load balancer node distributes requests evenly across the registered instances in its Availability Zone only.
■Access Logs for Your Classic Load Balancer Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify.
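A hedged boto3 sketch of enabling Classic Load Balancer access logs (the load balancer name and bucket are placeholders; the bucket policy must allow ELB log delivery):

```python
import boto3

elb = boto3.client("elb")   # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-lb",
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",
            "S3BucketPrefix": "prod",
            "EmitInterval": 60,   # publish logs every 60 minutes (5 is the other allowed value)
        }
    },
)
```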
There is no additional charge for access logs. You will be charged storage costs for Amazon S3.
■ELB Request Tracing You can use request tracing to track HTTP requests from clients to targets or other services. When the load balancer receives a request from a client, it adds or updates the X-Amzn-Trace-Id header before sending the request to the target. Any services or applications between the load balancer and the target can also add or update this header.
■ Q: You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance? A. Create a load balancer, and register the Amazon EC2 instance with it B. Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin C. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action D. Create a launch configuration from the instance using the CreateLaunchConfiguration action
The answer is C!
A is wrong: an ELB alone does not solve the problem because we need more instances. B is wrong: it is for content delivery; the key phrase is "Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin". Delivery is only one job of a content distribution system; a content management system has many other jobs (searching content, fetching content, cataloging content, saving content), many of which are not related to network issues. D is wrong: a launch configuration is just a template, and creating a template does not help in this situation.
API Gateway ■Deploying an API To deploy an API, you create an API deployment and associate it with a stage. Each stage is a snapshot of the API and is made available for the client to call.
To call a deployed API, the client submits a request against an API method URL. The method URL is determined by an API's host name, a stage name, and a resource path. The host name and the stage name determine the API's base URL.
https://{restapi-id}.execute-api.{region}.amazonaws.com/{stageName}
For example, you can deploy an API to a test stage and a prod stage, and use the test stage as a test build and use the prod stage as a stable build. After the updates pass the test, you can promote the test stage to the prod stage. The promotion can be done by redeploying the API to the prod stage or updating a stage variable value from the stage name of test to that of prod.
■Stage Variable Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of an API. They act like environment variables and can be used in your API setup and mapping templates. You can also access stage variables in the mapping templates, or pass configuration parameters to your AWS Lambda or HTTP backend.
■AWS Integration you can create an API Gateway API to expose other AWS services, such as Amazon SNS, Amazon S3, Amazon Kinesis, and even AWS Lambda. This is made possible by the AWS integration. Unlike the Lambda proxy integration, there is no corresponding proxy integration for other AWS services. Hence, an API method is integrated with a single AWS action. For more flexibility, similar to the proxy integration, you can set up a Lambda proxy integration. The Lambda function then parses and processes requests for other AWS actions.
■CORS When your API's resources receive requests from a domain other than the API's own domain, you must enable cross-origin resource sharing (CORS) for selected methods on the resource.
■Use Client-Side SSL Certificates for Authentication by the Backend You can use API Gateway to generate an SSL certificate and use its public key in the backend to verify that HTTP requests to your backend system are from API Gateway. This allows your HTTP backend to control and accept only requests originating from Amazon API Gateway, even if the backend is publicly accessible.
■API Caching You can enable API caching in Amazon API Gateway to cache your endpoint’s response. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of the requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint.
■Throttle API Requests for Better Throughput To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API using the token bucket algorithm, where a token counts for a request. Specifically, API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your account.
When request submissions exceed the steady-state request rate and burst limits, API Gateway fails the limit-exceeding requests and returns 429 Too Many Requests error responses to the client. Upon catching such exceptions, the client can resubmit the failed requests in a rate-limiting fashion while complying with the API Gateway throttling limits.
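A small, hedged Python sketch of that client-side retry pattern (the stage URL is a made-up example following the format shown earlier):

```python
import time
import requests

def call_with_backoff(url, max_attempts=5):
    """Retry a throttled API Gateway call with exponential backoff on HTTP 429."""
    for attempt in range(max_attempts):
        resp = requests.get(url)
        if resp.status_code != 429:
            return resp
        time.sleep(2 ** attempt)   # wait 1s, 2s, 4s, ... between retries
    raise RuntimeError("still throttled after retries")

resp = call_with_backoff("https://abc123.execute-api.us-east-1.amazonaws.com/prod/items")
```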
■control API Gateway:IAM、Lambda authorizer、Amazon Cognito user pool In addition to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway.
■API Gateway Pricing API caching in Amazon API Gateway is not eligible for the AWS Free Tier.
■ All endpoints created with API Gateway are HTTPS.
■Swagger Swagger is an open-source framework for building RESTful APIs. The Open API Initiative promotes a standard format for describing RESTful API interfaces, and that standard format is Swagger.
API Gateway extensions to Swagger: the API Gateway extensions support AWS-specific authorization and API Gateway-specific API integrations.
■Known Issues Paths of /ping and /sping are reserved for the service health check. Use of these for API root-level resources with custom domains will fail to produce the expected result. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html
AutoScaling ■ Scale in: reducing the number of instances.
■Scaling Processes Amazon EC2 Auto Scaling supports the following scaling processes:
■ Default Termination Policy during Scale In The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly. With the default termination policy, the behavior of the Auto Scaling group is as follows:
■Customizing the Termination Policy 1.OldestInstance. 2.NewestInstance. 3.OldestLaunchConfiguration. 4.ClosestToNextInstanceHour. 5.Default.
■Instance Protection To control whether an Auto Scaling group can terminate a particular instance when scaling in, use instance protection.
■Auto Scaling Lifecycle On scale out, an instance first enters the Pending state and then the InService state. On scale in, it enters the Terminating state.
■Lifecycle Hooks Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group launches or terminates them. For example, while your newly launched instance is paused, you could install or configure software on it.
After you add lifecycle hooks to your Auto Scaling group, they work as follows: 1.Responds to scale out events by launching instances and scale in events by terminating instances. 2.Puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance is paused until either you continue or the timeout period ends. 3.You can perform a custom action 4.By default, the instance remains in a wait state for one hour, and then the Auto Scaling group continues the launch or terminate process (Pending:Proceed or Terminating:Proceed).
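A hedged boto3 sketch of adding a launch lifecycle hook and signalling completion of the custom action (the hook name, group name, and instance ID are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Instances launched into this group pause in Pending:Wait for up to 5 minutes.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="install-agent",
    AutoScalingGroupName="web-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)

# After the custom bootstrap finishes on the instance, signal the group to proceed.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="install-agent",
    AutoScalingGroupName="web-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",
)
```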
■Cooldowns and Custom Actions When an Auto Scaling group launches or terminates an instance due to a simple scaling policy, a cooldown takes effect.
Consider an Auto Scaling group with a lifecycle hook that supports a custom action at instance launch. When the application experiences an increase in demand, the group launches instances to add capacity. Because there is a lifecycle hook, the instance is put into the Pending:Wait state, which means that it is not available to handle traffic yet. When the instance enters the wait state, scaling actions due to simple scaling policies are suspended. When the instance enters the InService state, the cooldown period starts. When the cooldown period expires, any suspended scaling actions resume.
Security ■AssumeRoleWithSAML Returns a set of temporary security credentials for users who have been authenticated via a SAML authentication response. This operation provides a mechanism for tying an enterprise identity store or directory to role-based AWS access without user-specific credentials or configuration. The temporary security credentials returned by this operation consist of an access key ID, a secret access key, and a security token. Applications can use these temporary security credentials to sign calls to AWS services. By default, the temporary security credentials created by AssumeRoleWithSAML last for one hour.
What is SAML? SAML (Security Assertion Markup Language) is an XML-based standard defined by OASIS for performing user authentication across different internet domains. Using SAML, a company's identity store (for example, Active Directory) can be used to achieve single sign-on to multiple cloud services: a user logs in to the authentication server once and can then use any SAML-enabled cloud service or web application.
■AssumeRoleWithWebIdentity Returns a set of temporary security credentials for users who have been authenticated in a mobile or web application with a web identity provider, such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible identity provider.
■AssumeRole General assume role.
■Across AWS Accounts
MFA With IAM policies, you can specify which APIs a user is allowed to call. In some cases, you might want the additional security of requiring a user to be authenticated with AWS multi-factor authentication (MFA) before the user is allowed to perform particularly sensitive actions.
■Permissions for AssumeRole, AssumeRoleWithSAML, and AssumeRoleWithWebIdentity https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_assumerole.html The permission policy of the role that is being assumed determines the permissions for the temporary security credentials that are returned by AssumeRole, AssumeRoleWithSAML, and AssumeRoleWithWebIdentity. You define these permissions when you create or update the role. These permissions are added to any resource-based policies (such as an Amazon S3 bucket policy) that are attached to the resource that the temporary security credentials can access. You cannot use the passed policy to grant permissions that are in excess of those allowed by the permissions policy of the role that is being assumed. The policies that are attached to the credentials that made the original call to AssumeRole are not evaluated by AWS when making the "allow" or "deny" authorization decision.
■GetSessionToken Returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token. The GetSessionToken action must be called by using the long-term AWS security credentials of the AWS account or an IAM user. Credentials that are created by IAM users are valid for the duration that you specify, from 900 seconds (15 minutes) up to a maximum of 129600 seconds (36 hours), with a default of 43200 seconds (12 hours);
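A hedged boto3 sketch of calling GetSessionToken with an MFA device (the MFA serial number and token code are placeholders):

```python
import boto3

sts = boto3.client("sts")

creds = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber="arn:aws:iam::123456789012:mfa/alice",
    TokenCode="123456",
)["Credentials"]

# Temporary credentials: access key ID, secret access key, and session token.
print(creds["AccessKeyId"], creds["Expiration"])
```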
■Settings are reflected immediately.
ECS ■Task Definitions A task definition is required to run Docker containers in Amazon ECS. You can define multiple containers in a task definition.
■Private Registry Authentication The Amazon ECS container agent can authenticate with private registries, including Docker Hub, using basic authentication. When you enable private registry authentication, you can use private Docker images in your task definitions.
■Amazon ECR Repositories Amazon ECR is a managed AWS Docker registry service. Customers can use the familiar Docker CLI to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry. Amazon ECR supports private Docker repositories with resource-based permissions using AWS IAM so that specific users or Amazon EC2 instances can access repositories and images.
■Monitoring Amazon ECS You can monitor your Amazon ECS resources using Amazon CloudWatch, which collects and processes raw data from Amazon ECS into readable, near real-time metrics. These statistics are recorded for a period of two weeks.
■Docker Diagnostics Docker provides several diagnostic tools that help you troubleshoot problems with your containers and tasks. Note: services such as CloudWatch do not provide Docker diagnostics.
■Dynamic Ports and Path-based Routing you can specify a dynamic host port which gives your container an unused port when it is scheduled on an EC2 instance. The ECS scheduler will automatically add the task to the Application Load Balancer using this port. Dynamic port mapping with an Application Load Balancer makes it easier to run multiple tasks from the same ECS service on an ECS cluster. The Classic Load Balancer requires that you statically map port numbers on a container instance. You cannot run multiple copies of a task on the same instance, because the ports would conflict. An Application Load Balancer allows dynamic port mapping. You can have multiple tasks from a single service on the same container instance.
An ELB can also be shared amongst multiple services using path-based routing. Each service can define its own URI, and that URI routes traffic to that service.
■Port Mapping Purpose:Port mappings allow containers to access ports on the host container instance to send or receive traffic.
■ Supported container technologies: Kubernetes and Docker.
■VPC Peering can using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).
You cannot create a VPC peering connection between VPCs with matching or overlapping IPv4 CIDR blocks.
■ Encryption
■ EBS Boot Volumes Encryption You can now create Amazon Machine Images (AMIs) that make use of encrypted EBS boot volumes and use the AMIs to launch EC2 instances. The stored data is encrypted, as is the data transfer path between the EBS volume and the EC2 instance. The data is decrypted on the instance on an as-needed basis, then stored only in memory. Each EBS backed AMI contains references to one or more snapshots of EBS volumes. The first reference is to an image of the boot volume. The others (if present) are to snapshots of data volumes. When you launch the AMI, an EBS volume is created from each snapshot. Because EBS already supports encryption of data volumes (and by implication the snapshots associated with the volumes), you can now create a single AMI with a fully-encrypted set of volumes. You can, if you like, use individual Customer Master Keys in KMS for each volume.
■encrypted EBS volume When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
■CMK Amazon EBS encryption uses AWS Key Management Service (AWS KMS) customer master keys (CMKs) when creating encrypted volumes and any snapshots created from them.
You cannot change the CMK that is associated with an existing snapshot or encrypted volume. However, you can associate a different CMK during a snapshot copy operation so that the resulting copied snapshot uses the new CMK.
■注意事項 The snapshots that you take of an encrypted EBS volume are also encrypted and can be moved between AWS Regions as needed. You cannot share encrypted snapshots with other AWS accounts and you cannot make them public.
you cannot enable encryption for an existing EBS volume. Instead, you must create a new, encrypted volume and copy the data from the old one to the new one using the file manipulation tool of your choice.
■To read the public-ipv4 value from the instance metadata, the correct URL format is as follows (note where "latest" appears in the path): http://169.254.169.254/latest/meta-data/public-ipv4
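A minimal sketch of reading it from inside the instance (this link-local address only works on the instance itself):

```python
import requests

resp = requests.get("http://169.254.169.254/latest/meta-data/public-ipv4", timeout=2)
print(resp.text)   # the instance's public IPv4 address
```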
■ Disaster Recovery You can copy an Amazon Machine Image (AMI) within or across an AWS Region. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy AMIs with encrypted snapshots and encrypted AMIs. Copying a source AMI results in an identical but distinct target AMI with its own unique identifier. In the case of an Amazon EBS-backed AMI, each of its backing snapshots is, by default, copied to an identical but distinct target snapshot.
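A hedged boto3 sketch of copying an AMI into a disaster-recovery Region (the Regions, AMI ID, and name are placeholders):

```python
import boto3

# copy_image is called on a client in the destination region.
ec2_dr = boto3.client("ec2", region_name="us-west-2")
copy = ec2_dr.copy_image(
    Name="webserver-ami-dr-copy",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
    Encrypted=True,   # the backing snapshots of the copy are encrypted
)
print(copy["ImageId"])
```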
■AWS CloudFormation template structure
■Consolidated billing Since the resources need to be separated and a separate governance model is required for each section of resources, it is better to have a separate AWS account for each division. Each division's AWS account can then be linked for consolidated billing to the main corporate account by creating an AWS Organization. The IT administrators can then be granted access via cross-account roles.
Enable IAM cross-account access for all corporate IT administrators in each child account. Use AWS consolidated billing by creating an AWS Organization to link the divisions' accounts to a parent corporate account.
AWS RDS ■What happens during a Multi-AZ failover, and how long does it take? During failover, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is then promoted to become the new primary. As a best practice, we recommend implementing database connection retries at the application layer.
Failover is defined as the interval between the detection of the failure on the primary and the resumption of transactions on the standby, and it typically completes within one to two minutes. Failover time can also vary depending on whether large uncommitted transactions must be recovered. For best results, use adequately sized instance types with Multi-AZ, and use Provisioned IOPS with Multi-AZ instances for fast, predictable, and consistent throughput performance.
■Data encryption Encryption for the database can be done during the creation of the database. Also, you need to ensure that the underlying instance type supports DB encryption.
■About replication Multi-AZ: synchronous replication in a primary/standby (master/slave) configuration. Read replicas: asynchronous replication.
■Tag Tagging Support for Amazon EC2 Resources
Resources that do not support tagging:
■AWS Budgets AWS Budgets enable you to plan your service usage, service costs, and instance reservations. Budgets provide you with a way to see the following information: ・How close your plan is to your budgeted amount or to the free tier limits ・Your usage-to-date, including how much you've used of your Reserved Instances (RIs) ・Your current estimated charges from AWS, and how much your predicted usage will accrue in charges by the end of the month ・How much of your budget has been used
AWS Budgets information is updated up to three times a day.
You can create the following types of budgets: ・Cost budgets – Plan how much you want to spend on a service. ・Usage budgets – Plan how much you want to use one or more services. ・RI utilization budgets – Define a utilization threshold, and receive alerts when your RI usage falls below that threshold. This lets you see if your RIs are unused or under-utilized. ・RI coverage budgets – Define a coverage threshold, and receive alerts when the number of your instance hours that are covered by RIs fall below that threshold. This lets you see how much of your instance usage is covered by a reservation.
■Disaster recovery In a disaster recovery scenario, the best choice out of all the given options is to divert the traffic to a static web site.
■Static web pages You can host a static website in S3. You need to ensure that the name server records for the Route 53 hosted zone are entered at your domain registrar.
Route 53 ■ multivalue answers If you want to route traffic approximately randomly to multiple resources, such as web servers, you can create one multivalue answer record for each resource and, optionally, associate an Amazon Route 53 health check with each record.
■Weighted Use to route traffic to multiple resources in proportions that you specify. Ex. Blue-Green deployment.
DynamoDB ■DynamoDB Streams The ability to capture changes to items stored in a DynamoDB table, at the point in time when such changes occur. AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application will need to access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application will need to access a DynamoDB Streams endpoint in the same region.
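A hedged boto3 sketch of reading stream records through the separate DynamoDB Streams endpoint (the stream ARN is a placeholder taken from the table's LatestStreamArn):

```python
import boto3

streams = boto3.client("dynamodbstreams")   # separate endpoint from the DynamoDB data plane

stream_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/Orders/stream/2019-01-01T00:00:00.000"
shards = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"]

iterator = streams.get_shard_iterator(
    StreamArn=stream_arn,
    ShardId=shards[0]["ShardId"],
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

for record in streams.get_records(ShardIterator=iterator)["Records"]:
    print(record["eventName"], record["dynamodb"].get("Keys"))
```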
■Replicating Amazon Aurora MySQL DB Clusters You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into an AWS Region that is closer to your users, and make it easier to migrate from one AWS Region to another.
Glacier ■Data retrieval times [Standard] completes within 3–5 hours. [Bulk retrievals]: within 5–12 hours. [Expedited retrievals] are typically made available within 1–5 minutes.
CloudFront ■Using an Origin Access Identity (OAI) to Restrict Access to Your Amazon S3 Content To ensure that your users access your objects using only CloudFront URLs, regardless of whether the URLs are signed, perform the following tasks: create an origin access identity and add it to your distribution, give the OAI permission to read the objects in your bucket, and remove permission for anyone else to use Amazon S3 URLs to read the objects.
AWS Database Migration Service Migrate existing databases to AWS with minimal downtime.
http://jayendrapatil.com/aws-solution-architect-associate-exam-learning-path/ http://clusterfrak.com/notes/certs/aws_saa_notes/
https://www.whizlabs.com/aws-solutions-architect-associate/