A company is running an application on a group of Amazon EC2 instances behind an Application Load Balancer. The EC2 instances run across three Availability Zones. The company needs to provide its customers with a maximum of two static IP addresses for the application.
How should a SysOps administrator meet this requirement?
A. Add AWS Global Accelerator in front of the Application Load Balancer.
B. Add an internal Network Load Balancer behind the Application Load Balancer.
C. Configure the Application Load Balancer in only two Availability Zones.
D. Create two Elastic IP addresses and assign them to the Application Load Balancer.
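For context on the mechanism that option A relies on: AWS Global Accelerator assigns exactly two static anycast IP addresses to an accelerator, and the ALB is registered behind it as an endpoint. A minimal boto3 sketch, where the names, Region, and ALB ARN are placeholders:

```python
import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

alb_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123"  # placeholder

# Creating an accelerator returns two static anycast IP addresses.
accelerator = ga.create_accelerator(Name="app-accelerator", IpAddressType="IPV4", Enabled=True)["Accelerator"]
print(accelerator["IpSets"][0]["IpAddresses"])  # the two static IPs handed to customers

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# Register the existing ALB as the endpoint behind the static IPs.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",  # Region where the ALB lives (assumed)
    EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
)
```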
A company hosts a continuous integration and continuous delivery (CI/CD) environment on AWS. The CI/CD environment includes a Jenkins server that is hosted on an Amazon EC2 instance. A 500 GB General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume is attached to the EC2 instance.
Because of disk throughput limitations, the Jenkins server reports performance issues that are resulting in slower builds on the server. The EBS volume needs to sustain 3,000 IOPS while performing nightly build tasks.
A SysOps administrator examines the server's history in Amazon CloudWatch. The BurstBalance metric has had a value of 0 during nightly builds. The SysOps administrator needs to improve the performance and meet the sustained throughput requirements.
Which solution will meet these requirements MOST cost-effectively?
A. Double the gp2 EBS volume size from 500 GB to 1,000 GB.
B. Change the volume type from gp2 to General Purpose SSD (gp3).
C. Change the volume type from gp2 to Throughput Optimized HDD (st1).
D. Change the volume type from gp2 to Provisioned IOPS SSD (io2).
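The IOPS arithmetic behind this scenario, shown as a short calculation (the per-GiB baseline and burst ceiling are the published gp2 volume specifications; pricing is not modeled here):

```python
# gp2 baseline IOPS scale with volume size; bursting above the baseline spends
# burst credits, and BurstBalance = 0 means the volume is pinned to its baseline.
GP2_BASELINE_IOPS_PER_GIB = 3
GP2_MAX_BURST_IOPS = 3000
GP3_BASELINE_IOPS = 3000            # gp3 includes 3,000 IOPS at any size, no burst credits

size_gib = 500
gp2_baseline = min(max(size_gib * GP2_BASELINE_IOPS_PER_GIB, 100), 16000)
print(f"500 GiB gp2 baseline: {gp2_baseline} IOPS")          # 1,500 -> must burst to reach 3,000
print(f"gp2 size needed for a 3,000 IOPS baseline: "
      f"{GP2_MAX_BURST_IOPS // GP2_BASELINE_IOPS_PER_GIB} GiB")  # 1,000 GiB
print(f"gp3 baseline at 500 GiB: {GP3_BASELINE_IOPS} IOPS")   # sustained without credits
```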
A SysOps administrator wants to share a copy of a production database with a migration account. The production database is hosted on an Amazon RDS DB instance and is encrypted at rest with an AWS Key Management Service (AWS KMS) key that has an alias of production-rds-key.
What must the SysOps administrator do to meet these requirements with the LEAST administrative overhead?
A. Take a snapshot of the RDS DB instance in the production account. Amend the KMS key policy of the production-rds-key KMS key to give access to the migration account's root user. Share the snapshot with the migration account.
B. Create an RDS read replica in the migration account. Configure the KMS key policy to replicate the production-rds-key KMS key to the migration account.
C. Take a snapshot of the RDS DB instance in the production account. Share the snapshot with the migration account. In the migration account, create a new KMS key that has an identical alias.
D. Use native database toolsets to export the RDS DB instance to Amazon S3. Create an S3 bucket and an S3 bucket policy for cross-account access between the production account and the migration account. Use native database toolsets to import the database from Amazon S3 to a new RDS DB instance.
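For reference, the cross-account pieces that option A describes look roughly like the following. The account ID, snapshot identifier, and statement Sid are placeholders, and the exact set of KMS actions granted can be narrower depending on how the snapshot is copied in the migration account:

```python
import boto3

MIGRATION_ACCOUNT_ID = "111122223333"  # placeholder

# Statement added to the production-rds-key key policy so the migration account
# can use the key when copying or restoring the shared encrypted snapshot.
cross_account_statement = {
    "Sid": "AllowMigrationAccountUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{MIGRATION_ACCOUNT_ID}:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:CreateGrant",
        "kms:GenerateDataKey*",
        "kms:ReEncrypt*",
    ],
    "Resource": "*",
}

# Share the manual snapshot itself with the migration account.
rds = boto3.client("rds")
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="production-db-snapshot",  # placeholder
    AttributeName="restore",
    ValuesToAdd=[MIGRATION_ACCOUNT_ID],
)
```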
A company is running Amazon RDS for PostgreSQL Multi-AZ DB clusters. The company uses an AWS CloudFormation template to create the databases individually with a default size of 100 GB. The company creates the databases every Monday and deletes the databases every Friday.
Occasionally, the databases run low on disk space and initiate an Amazon CloudWatch alarm. A SysOps administrator must prevent the databases from running low on disk space in the future.
Which solution will meet these requirements with the FEWEST changes to the application?
A. Modify the CloudFormation template to use Amazon Aurora PostgreSQL as the DB engine.
B. Modify the CloudFormation template to use Amazon DynamoDB as the database. Activate storage auto scaling during creation of the tables.
C. Modify the CloudFormation template to activate storage auto scaling on the existing DB instances.
D. Create a CloudWatch alarm to monitor DB instance storage space. Configure the alarm to invoke the VACUUM command.
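For context, RDS storage auto scaling (the mechanism named in option C) is a single property on the DB instance resource. The fragment below mirrors an AWS::RDS::DBInstance CloudFormation resource, expressed as a Python dict for illustration; the 500 GB ceiling is an assumption, not a value from the question:

```python
# Illustrative AWS::RDS::DBInstance fragment. MaxAllocatedStorage turns on
# storage auto scaling; RDS then grows the volume automatically up to that limit.
db_instance_resource = {
    "Type": "AWS::RDS::DBInstance",
    "Properties": {
        "Engine": "postgres",
        "AllocatedStorage": "100",      # initial size from the question
        "MaxAllocatedStorage": 500,     # assumed ceiling for auto scaling
        # ... remaining required properties (DBInstanceClass, credentials, etc.)
    },
}
```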
A SysOps administrator is responsible for a company's disaster recovery procedures. The company has a source Amazon S3 bucket in a production account, and it wants to replicate objects from the source to a destination S3 bucket in a nonproduction account. The SysOps administrator configures S3 cross-Region, cross-account replication to copy the source S3 bucket to the destination S3 bucket. When the SysOps administrator attempts to access objects in the destination S3 bucket, they receive an Access Denied error.
Which solution will resolve this problem?
A. Modify the replication configuration to change object ownership to the destination S3 bucket owner.
B. Ensure that the replication rule applies to all objects in the source S3 bucket and is not scoped to a single prefix.
C. Retry the request when the S3 Replication Time Control (S3 RTC) has elapsed.
D. Verify that the storage class for the replicated objects did not change between the source S3 bucket and the destination S3 bucket.
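For context on the ownership mechanism in option A, a replication rule's destination block can hand ownership of replicas to the destination bucket owner. A minimal boto3 sketch; the bucket names, role ARN, and destination account ID are placeholders:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="production-source-bucket",  # placeholder
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder
        "Rules": [
            {
                "ID": "cross-account-dr",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::nonprod-destination-bucket",  # placeholder
                    "Account": "999999999999",                            # destination account (placeholder)
                    # Change replica ownership to the destination account so its
                    # principals are not denied access to the replicated objects.
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)
```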
A company uses Amazon CloudFront to serve static content to end users. The company's marketing team recently deployed updates to 150 images on the company's website. However, the website is not displaying some of the new images.
A SysOps administrator reviews the CloudFront distribution's cache settings. The default TTL for the distribution is set to 1 week (604,800 seconds).
What should the SysOps administrator do to refresh the cache with the new images in the MOST operationally efficient way?
A. Create a new CloudFront distribution that has the same origin. Set the default TTL to 1 minute (60 seconds). Switch Amazon Route 53 DNS records to use the new distribution.
B. Instruct the marketing team to upload the new images to a different location. When the new images are uploaded, update the website to locate the new images.
C. Issue a CloudFront invalidation request to immediately expire the new images from the marketing team's update.
D. Update the existing CloudFront distribution to reconfigure the default TTL to 1 minute (60 seconds). During submission of the new configuration, include the flag to invalidate objects in the specified path.
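For context on the invalidation mechanism named in option C, an invalidation removes cached copies so CloudFront fetches the updated objects from the origin on the next request. A minimal boto3 sketch; the distribution ID and path pattern are placeholders:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1ABCDEFGHIJKL",  # placeholder
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/*"]},  # assumed path for the updated images
        "CallerReference": str(time.time()),               # must be unique per request
    },
)
```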
A company is transitioning away from applications that are hosted on Amazon EC2 instances. The company wants to implement a serverless architecture that uses Amazon S3, Amazon API Gateway, AWS Lambda, and Amazon CloudFront. As part of this transition, the company has Elastic IP addresses that are unassociated with any EC2 instances after the EC2 instances are terminated. A SysOps administrator needs to automate the process of releasing all unassociated Elastic IP addresses that remain after the EC2 instances are terminated.
Which solution will meet this requirement in the MOST operationally efficient way?
A. Activate the eip-attached AWS Config managed rule to run automatically when resource changes occur in the AWS account. Configure automatic remediation for the rule. Specify the AWS-ReleaseElasticIP AWS Systems Manager Automation runbook for remediation. Specify an appropriate role that has permission for the remediation.
B. Create a custom Lambda function that calls the EC2 ReleaseAddress API operation and specifies the Elastic IP address AllocationId. Invoke the Lambda function by using an Amazon EventBridge rule. Specify AWS services as the event source, All Events as the event type, and AWS Trusted Advisor as the target.
C. Create an Amazon EventBridge rule. Specify AWS services as the event source, Instance State-change Notification as the event type, and Amazon EC2 as the service. Invoke a Lambda function that extracts the Elastic IP address from the notification. Use AWS CloudFormation to release the address by specifying the AllocationId as an input parameter.
D. Create a custom Lambda function that calls the EC2 ReleaseAddress API operation and specifies the Elastic IP address AllocationId. Invoke the Lambda function by using an Amazon EventBridge rule. Specify AWS services as the event source, Instance State-change Notification as the event type, and Amazon EC2 as the service.
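Whichever automation is chosen, the core step is the same: find Elastic IP addresses that have no association and release them. A minimal Lambda-style sketch of that step (the trigger wiring and IAM permissions depend on the option selected):

```python
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    """Release every Elastic IP address that is not associated with anything.

    Illustrative sketch only; in a real deployment the invocation source
    (Config remediation, EventBridge rule, etc.) comes from the chosen option.
    """
    for address in ec2.describe_addresses()["Addresses"]:
        if "AssociationId" not in address:  # unassociated Elastic IP
            ec2.release_address(AllocationId=address["AllocationId"])
```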
A company has several member accounts that are in an organization in AWS Organizations. The company recently discovered that administrators have been using account root user credentials. The company must prevent the administrators from using root user credentials to perform any actions on Amazon EC2 instances.
What should a SysOps administrator do to meet this requirement?
A. Create an identity-based IAM policy in each member account to deny actions on EC2 instances by the root user.
B. In the organization's management account, create a service control policy (SCP) to deny actions on EC2 instances by the root user in all member accounts.
C. Use AWS Config to prevent any actions on EC2 instances by the root user.
D. Use Amazon Inspector in each member account to scan for root user logins and to prevent any actions on EC2 instances by the root user.
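For context on the mechanism in option B, a service control policy can deny actions whenever the calling principal is an account's root user, keyed on the aws:PrincipalArn condition. A sketch of such a policy, expressed as a Python dict; the Sid is arbitrary:

```python
# SCP attached to the member accounts (or their OU) from the management account.
scp_deny_root_ec2 = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyEC2ActionsForRootUser",
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                # Matches the root user of any member account the SCP applies to.
                "StringLike": {"aws:PrincipalArn": "arn:aws:iam::*:root"}
            },
        }
    ],
}
```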
A company has an Amazon EC2 instance that supports a production system. The EC2 instance is backed by an Amazon Elastic Block Store (Amazon EBS) volume. The EBS volume's drive has filled to 100% capacity, which is causing the application on the EC2 instance to experience errors.
Which solution will remediate these errors in the LEAST amount of time?
A. Modify the EBS volume by adding additional drive space. Log on to the EC2 instance. Use the file system-specific commands to extend the file system.
B. Create a snapshot of the existing EBS volume. When the snapshot is complete, create an EBS volume of a larger size from the snapshot in the same Availability Zone as the EC2 instance. Attach the new EBS volume to the EC2 instance. Mount the file system.
C. Create a new EBS volume of a larger size in the same Availability Zone as the EC2 instance. Attach the EBS volume to the EC2 instance. Copy the data from the existing EBS volume to the new EBS volume.
D. Stop the EC2 instance. Change the EC2 instance to a larger instance size that includes additional drive space. Start the EC2 instance.
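For context, an EBS volume can be grown in place while it remains attached, after which the partition and file system are extended from inside the instance. A minimal sketch; the volume ID, target size, device name, and file system type are assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Grow the volume in place (no detach or downtime required).
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)  # placeholder ID and size

# Once the modification reaches the "optimizing" or "completed" state, extend
# the partition and file system from inside the instance, for example on Linux:
#   sudo growpart /dev/xvdf 1
#   sudo resize2fs /dev/xvdf1     # ext4 (use xfs_growfs for XFS)
```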
A company stores data in Amazon S3 buckets that are provisioned in three separate AWS Regions. The data is copied from the S3 buckets to the company's data center over the public internet by using a VPN. A SysOps administrator notices that the transfers occasionally take longer than usual and determines that the issue is congestion within the company's ISP network.
What is the MOST cost-effective approach the administrator can take to ensure consistent transfer times from S3 to the data center?
A. Establish an AWS Direct Connect link to each Region. Create a private virtual interface over each link.
B. Establish an AWS Direct Connect link to each Region. Create a public virtual interface over each link.
C. Establish an AWS Direct Connect link to one of the Regions. Create a private virtual interface over that link.
D. Establish an AWS Direct Connect link to one of the Regions. Create a public virtual interface over that link.