An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
Explanation
Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and data within your AWS environment. Using machine learning (ML) models and anomaly and threat detection capabilities, GuardDuty continuously monitors log sources and runtime activity to identify and prioritize potential security risks and malicious activity. The GuardDuty RDS Protection feature analyzes RDS login activity for Aurora databases and generates findings for anomalous failed or incomplete login attempts, and it can be enabled centrally for all member accounts of the organization, which makes it the most operationally efficient choice. SCPs, general log exports, and CloudTrail events would all require building and maintaining custom analysis to detect abnormal logins.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/guard-duty-rds-protection.html
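A minimal boto3 sketch of enabling RDS Protection across an organization, assuming it is run from the GuardDuty delegated administrator account with a detector already created (the region is a placeholder, and a reasonably recent boto3 version is assumed for these feature names):

import boto3

# Run from the GuardDuty delegated administrator account (illustrative sketch).
guardduty = boto3.client("guardduty", region_name="us-east-1")

detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Turn on RDS Protection (RDS login activity monitoring) for this account's detector.
guardduty.update_detector(
    DetectorId=detector_id,
    Features=[{"Name": "RDS_LOGIN_EVENTS", "Status": "ENABLED"}],
)

# Auto-enable the same feature for all current and future member accounts.
guardduty.update_organization_configuration(
    DetectorId=detector_id,
    AutoEnableOrganizationMembers="ALL",
    Features=[{"Name": "RDS_LOGIN_EVENTS", "AutoEnable": "ALL"}],
)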
Question 2:
A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure the database access credentials. The company's security team wants to protect the application and the database from SQL injection and other web-based attacks.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use security groups and network ACLs to secure the database and application servers.
B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.
C. Use AWS Network Firewall to protect the application and the database.
D. Use different database accounts in the application code for different functions. Avoid granting excessive privileges to the database users.
B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.
Explanation
AWS WAF is a web application firewall that protects web applications from common exploits such as SQL injection and cross-site scripting. Attaching a web ACL that uses the AWS managed SQL injection rule group to the application's entry point (for example, an Application Load Balancer) blocks these attacks before they reach the EC2 instances, and RDS parameter groups let the team manage database-level security settings. Both are managed capabilities, so this option carries the least operational overhead compared with building network-level controls or restructuring database accounts.
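A minimal boto3 sketch of this setup, assuming a regional Application Load Balancer; the web ACL name, metric names, and the load balancer ARN are placeholders:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Create a web ACL that applies the AWS managed SQL injection rule group.
web_acl = wafv2.create_web_acl(
    Name="app-web-acl",                      # hypothetical name
    Scope="REGIONAL",                        # REGIONAL for ALB/API Gateway; CLOUDFRONT for CloudFront
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "AWS-SQLiRuleSet",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "SQLiRuleSet",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AppWebACL",
    },
)

# Associate the web ACL with the application's load balancer (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
)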
Question 3:
A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table are not returning the latest data. The company's users have not reported any other issues with database performance. Latency is in an acceptable range.
Which design change should the solutions architect recommend?
A. Add read replicas to the table.
B. Use a global secondary index (GSI).
C. Request strongly consistent reads for the table.
D. Request eventually consistent reads for the table.
C. Request strongly consistent reads for the table.
Explanation
The most suitable design change for the company's application is to request strongly consistent reads for the table. This change will ensure that requests to the table return the latest data, reflecting the updates from all prior write operations. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB supports two types of read consistency: eventually consistent reads and strongly consistent reads. By default, DynamoDB uses eventually consistent reads unless users specify otherwise.

Eventually consistent reads are reads that may not reflect the results of a recently completed write operation. The response might not include the changes because of the latency of propagating the data to all replicas. If users repeat their read request after a short time, the response should return the updated data. Eventually consistent reads are suitable for applications that do not require up-to-date data or can tolerate eventual consistency.

Strongly consistent reads return a result that reflects all writes that received a successful response prior to the read. Users can request a strongly consistent read by setting the ConsistentRead parameter to true in their read operations, such as GetItem, Query, or Scan. Strongly consistent reads are suitable for applications that require up-to-date data or cannot tolerate eventual consistency.

The other options are not correct because they do not address the issue of read consistency or are not relevant for the use case. Adding read replicas to the table is not correct because this option is not supported by DynamoDB. Read replicas are copies of a primary database instance that can serve read-only traffic and improve availability and performance; they are available for some relational database services, such as Amazon RDS or Amazon Aurora, but not for DynamoDB. Using a global secondary index (GSI) is not correct because this option is not related to read consistency. A GSI is an index that has a partition key and an optional sort key that are different from those on the base table; it allows users to query the data in different ways, but only with eventual consistency. Requesting eventually consistent reads for the table is not correct because this option is already the default behavior of DynamoDB and does not solve the problem of requests not returning the latest data.
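As a quick illustration (table and key names are hypothetical), a strongly consistent read only requires setting ConsistentRead on the call:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table name

# Strongly consistent read: reflects all writes acknowledged before the read.
response = table.get_item(
    Key={"OrderId": "12345"},
    ConsistentRead=True,
)
item = response.get("Item")

# The same flag applies to Query (and Scan); note that GSIs support only
# eventually consistent reads.
recent = table.query(
    KeyConditionExpression=Key("OrderId").eq("12345"),
    ConsistentRead=True,
)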
References:
Read consistency - Amazon DynamoDB
Working with read replicas - Amazon Relational Database Service
Working with global secondary indexes - Amazon DynamoDB
Question 4:
A package delivery company has an application that uses Amazon EC2 instances and an Amazon Aurora MySQL DB cluster. As the application becomes more popular, EC2 instance usage increases only slightly. DB cluster usage increases at a much faster rate.
The company adds a read replica, which reduces the DB cluster usage for a short period of time. However, the load continues to increase. The operations that cause the increase in DB cluster usage are all repeated read statements that are related to delivery details. The company needs to alleviate the effect of repeated reads on the DB cluster.
Which solution will meet these requirements MOST cost-effectively?
A. Implement an Amazon ElastiCache for Redis cluster between the application and the DB cluster.
B. Add an additional read replica to the DB cluster.
C. Configure Aurora Auto Scaling for the Aurora read replicas.
D. Modify the DB cluster to have multiple writer instances.
A. Implement an Amazon ElastiCache for Redis cluster between the application and the DB cluster.
Explanation
The operations that drive the DB cluster load are repeated read statements for delivery details, which makes them ideal candidates for caching. Placing an Amazon ElastiCache for Redis cluster between the application and the Aurora DB cluster lets the application serve those repeated reads from memory, which offloads the DB cluster and is more cost-effective than continually adding read replicas, configuring Auto Scaling for replicas, or adding writer instances, none of which reduce the redundant read traffic itself.
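A minimal cache-aside sketch of this pattern, assuming the redis-py and psycopg2 client libraries; the endpoints, credentials, table, and column names are placeholders:

import json
import redis        # redis-py client for ElastiCache for Redis
import psycopg2     # PostgreSQL driver for the Aurora cluster

# Placeholder endpoints -- substitute the real ElastiCache and Aurora endpoints.
cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)
db = psycopg2.connect(
    host="my-aurora-cluster.cluster-xxxxxx.us-east-1.rds.amazonaws.com",
    dbname="deliveries", user="app", password="***",
)

def get_delivery_details(delivery_id: str) -> dict:
    """Cache-aside read: serve repeated reads from Redis, fall back to Aurora on a miss."""
    cache_key = f"delivery:{delivery_id}"
    cached = cache.get(cache_key)
    if cached is not None:
        return json.loads(cached)                      # cache hit -- no DB round trip

    with db.cursor() as cur:                           # cache miss -- read from Aurora
        cur.execute("SELECT status, eta FROM deliveries WHERE id = %s", (delivery_id,))
        status, eta = cur.fetchone()
    details = {"status": status, "eta": str(eta)}

    cache.set(cache_key, json.dumps(details), ex=300)  # cache for 5 minutes
    return details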
Question 5:
A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances run in an Auto Scaling group for the application tier. The company needs to make an automated scaling plan that will analyze each resource's daily and weekly historical workload trends. The configuration must scale resources appropriately according to both the forecast and live changes in utilization.
Which scaling strategy should a solutions architect recommend to meet these requirements?
A. Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.
C. Create an automated scheduled scaling action based on the traffic patterns of the web application.
D. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.
Explanation
This solution meets the requirements because it allows the company to use both predictive scaling and dynamic scaling to optimize the capacity of its Auto Scaling group. Predictive scaling uses machine learning to analyze historical data and forecast future traffic patterns, then adjusts the desired capacity of the group in advance of the predicted changes. Dynamic scaling uses target tracking to maintain a specified metric (such as CPU utilization) at a target value, scaling the group in or out as needed to keep the metric close to the target. By using both scaling methods, the company can benefit from faster, simpler, and more accurate scaling that responds to both forecasted and live changes in utilization.
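A minimal boto3 sketch of attaching both policy types to an existing Auto Scaling group; the group name and the 50% CPU target are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")
ASG_NAME = "app-tier-asg"  # hypothetical Auto Scaling group name

# Predictive scaling: learn daily/weekly patterns and scale ahead of forecasted demand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking: react to live utilization changes between forecasts.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="target-tracking-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)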
References:
Predictive scaling for Amazon EC2 Auto Scaling
Target tracking scaling policies for Amazon EC2 Auto Scaling
Question 6:
A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost.
How can these requirements be met?
A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
Explanation
The solution that will meet the requirements is to deploy AWS Storage Gateway using cached volumes and use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally. This solution will allow the company to migrate its storage infrastructure to AWS while minimizing bandwidth costs, as it will only transfer data that is not cached locally. The solution will also allow for immediate retrieval of data at no additional cost, as the cached volumes provide low-latency access to the most recently used data. The data stored in Amazon S3 will be durable, scalable, and secure.

The other solutions are not as effective because they either do not meet the requirements or introduce additional costs or complexity. Deploying Amazon S3 Glacier Vault and enabling expedited retrieval will not meet the requirements, as it will incur additional costs for both storage and retrieval. Amazon S3 Glacier is a low-cost storage service for data archiving and backup, but it has longer retrieval times than Amazon S3. Expedited retrieval is a feature that allows faster access to data, but it charges a higher fee per GB retrieved. Provisioned retrieval capacity reserves dedicated capacity for expedited retrievals, but it also charges a monthly fee per provisioned capacity unit.

Deploying AWS Storage Gateway using stored volumes to store data locally and asynchronously back up point-in-time snapshots to Amazon S3 will not meet the requirements, as it will not migrate the storage infrastructure to AWS but only create backups. Stored volumes keep the primary data locally and back up snapshots to Amazon S3, so this solution will not reduce the storage capacity needed on premises, nor will it leverage the benefits of cloud storage.

Deploying AWS Direct Connect and configuring AWS Storage Gateway to store data locally with asynchronous point-in-time snapshots to Amazon S3 will also not meet the requirements, for the same reason. AWS Direct Connect establishes a dedicated network connection between the on-premises data center and AWS, which can reduce network costs and increase bandwidth, but this solution still does not reduce the storage capacity needed on premises, nor does it leverage the benefits of cloud storage.
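As a rough sketch of what creating a cached volume looks like with boto3 (the gateway must already be activated in cached-volume mode; the gateway ARN, target name, network interface IP, and volume size are placeholders, and the exact required parameters should be checked against the current API):

import boto3
import uuid

storagegateway = boto3.client("storagegateway", region_name="us-east-1")

# Placeholder ARN for a volume gateway that is already activated.
gateway_arn = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678"

# Create a cached volume: primary data lives in Amazon S3, while a local cache
# keeps frequently accessed subsets on premises for low-latency access.
volume = storagegateway.create_cached_iscsi_volume(
    GatewayARN=gateway_arn,
    VolumeSizeInBytes=10 * 1024**4,          # 10 TiB volume backed by S3
    TargetName="onprem-data",                # becomes part of the iSCSI target name
    NetworkInterfaceId="10.0.1.25",          # IP of the gateway VM's network interface
    ClientToken=str(uuid.uuid4()),           # idempotency token
)
print(volume["VolumeARN"], volume["TargetARN"])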
Question 7:
A company stores critical data in Amazon DynamoDB tables in the company's AWS account. An IT administrator accidentally deleted a DynamoDB table. The deletion caused a significant loss of data and disrupted the company's operations. The company wants to prevent this type of disruption in the future.
Which solution will meet this requirement with the LEAST operational overhead?
A. Configure a trail in AWS CloudTrail. Create an Amazon EventBridge rule for delete actions. Create an AWS Lambda function to automatically restore deleted DynamoDB tables.
B. Create a backup and restore plan for the DynamoDB tables. Recover the DynamoDB tables manually.
C. Configure deletion protection on the DynamoDB tables.
D. Enable point-in-time recovery on the DynamoDB tables.
C. Configure deletion protection on the DynamoDB tables.
Explanation
Deletion protection is a feature of DynamoDB that prevents accidental deletion of tables. When deletion protection is enabled, you cannot delete a table unless you explicitly disable it first. This adds an extra layer of security and reduces the risk of data loss and operational disruption. Deletion protection is easy to enable and disable using the AWS Management Console, the AWS CLI, or the DynamoDB API. This solution has the least operational overhead, as you do not need to create, manage, or invoke any additional resources or services.
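A minimal boto3 sketch of turning the feature on for an existing table (the table name is hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

# Enable deletion protection on an existing table.
dynamodb.update_table(
    TableName="CriticalData",
    DeletionProtectionEnabled=True,
)

# With protection enabled, DeleteTable calls fail until the flag is
# explicitly set back to False on the table.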
References:
Using deletion protection to protect your table
Preventing Accidental Table Deletion in DynamoDB
Amazon DynamoDB now supports table deletion protection
Question 8:
A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations.
The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly.
What should a solutions architect do to improve the performance of the data tier?
A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards.
C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
Explanation
The solution that will improve the performance of the data tier is to deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance and modify the game to use Redis. This enables the game to store and retrieve player location data in a fast and scalable way, because Redis is an in-memory data store that natively supports geospatial data types and commands. By using ElastiCache for Redis, the game reduces the load on the RDS for PostgreSQL DB instance, which is not optimized for high-frequency updates and queries of location data. ElastiCache for Redis also supports replication, sharding, and auto scaling to handle the game's rapidly growing user base.

The other options are less effective because they either do not improve performance, are a poor fit for the workload, or are incompatible with the existing database. Taking a snapshot of the existing DB instance and restoring it with Multi-AZ enabled provides high availability and durability, but not additional read/write throughput or lower latency. Migrating from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards is a poor fit: although OpenSearch Service does offer geospatial field types and queries, it is designed for search and analytics workloads, and a full data-store migration adds significant cost and effort without providing the in-memory, sub-millisecond reads and writes that high-frequency location tracking needs. Deploying Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance is not possible, because DAX is a caching layer that works only with DynamoDB, not with RDS for PostgreSQL.
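A minimal sketch of the Redis geospatial commands the game could use, assuming redis-py 4.x against a Redis 6.2+ ElastiCache cluster (the endpoint, key, and player IDs are placeholders):

import redis

# Placeholder ElastiCache for Redis endpoint.
r = redis.Redis(host="game-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

# Rapid location updates: GEOADD stores each player as (longitude, latitude, member).
r.geoadd("players:locations", (-122.4194, 37.7749, "player:1001"))
r.geoadd("players:locations", (-122.4089, 37.7837, "player:1002"))

# Rapid retrieval: find every player within 2 km of a point (Redis 6.2+ GEOSEARCH).
nearby = r.geosearch(
    "players:locations",
    longitude=-122.4150,
    latitude=37.7800,
    radius=2,
    unit="km",
)
print(nearby)  # e.g. [b'player:1001', b'player:1002']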
References:
Amazon ElastiCache for Redis
Geospatial Data Support - Amazon ElastiCache for Redis
Amazon RDS for PostgreSQL
Amazon OpenSearch Service
Amazon DynamoDB Accelerator (DAX)
Question 9:
A company maintains about 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each typically around 50 GB in size and are frequently replaced with multipart uploads by their global application. The number and size of S3 objects remain constant, but the company's S3 storage costs are increasing each month.
How should a solutions architect reduce costs in this situation?
A. Switch from multipart uploads to Amazon S3 Transfer Acceleration.
B. Enable an S3 Lifecycle policy that deletes incomplete multipart uploads.
C. Configure S3 inventory to prevent objects from being archived too quickly.
D. Configure Amazon CloudFront to reduce the number of objects stored in Amazon S3.
B. Enable an S3 Lifecycle policy that deletes incomplete multipart uploads.
Explanation
This option is the most cost-effective way to reduce the S3 storage costs in this situation. Incomplete multipart uploads are parts of objects that were never completed or aborted by the application. They consume storage space and incur charges until they are deleted. By enabling an S3 Lifecycle policy that deletes incomplete multipart uploads, you can automatically remove them after a specified period of time (such as one day) and free up the storage space. This will reduce the S3 storage costs and also improve the behavior of the application by avoiding charges for abandoned upload attempts.

Option A is not correct because switching from multipart uploads to Amazon S3 Transfer Acceleration will not reduce the S3 storage costs. Amazon S3 Transfer Acceleration enables faster data transfers to and from S3 by using the AWS edge network. It is useful for improving the upload speed of large objects over long distances, but it does not affect the storage space or charges. In fact, it may increase costs by adding a data transfer fee for using the feature.

Option C is not correct because configuring S3 Inventory to prevent objects from being archived too quickly will not reduce the S3 storage costs. Amazon S3 Inventory provides a report of the objects and their metadata in an S3 bucket. It is useful for managing and auditing S3 objects, but it does not affect the storage space or charges. In fact, it may increase costs by generating additional S3 objects for the inventory reports.

Option D is not correct because configuring Amazon CloudFront will not reduce the number of objects stored in Amazon S3 or the storage costs. Amazon CloudFront is a content delivery network (CDN) that distributes S3 objects to edge locations for faster, lower-latency access. It improves download speed and availability, but it does not affect the storage space or charges. In fact, it may increase costs by adding data transfer fees for using the service.
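A minimal boto3 sketch of such a lifecycle rule (the bucket name is a placeholder, and the seven-day window is just an example):

import boto3

s3 = boto3.client("s3")

# Lifecycle rule that removes the parts of incomplete multipart uploads after
# 7 days, so abandoned 50 GB uploads stop accumulating storage charges.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-bucket",                      # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-mpu",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},        # apply to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)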
References:
Managing your storage lifecycle
Using multipart upload
Amazon S3 Transfer Acceleration
Amazon S3 Inventory
What Is Amazon CloudFront?
Question 10:
A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect notices that the number of nodes does not automatically scale out when the existing nodes have reached maximum capacity in the cluster, which causes performance issues.
Which solution will resolve this issue with the LEAST administrative overhead?
A. Scale out the nodes by tracking the memory usage.
B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
C. Use an AWS Lambda function to resize the EKS cluster automatically.
D. Use an Amazon EC2 Auto Scaling group to distribute the workload.
B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
Explanation
The Horizontal Pod Autoscaler scales pods, not nodes, so the cluster runs out of node capacity when pod demand grows. The Kubernetes Cluster Autoscaler watches for pods that cannot be scheduled because the existing nodes are full and automatically adjusts the size of the cluster's node groups (backed by EC2 Auto Scaling groups), then scales them back in when the capacity is no longer needed. This resolves the issue with the least administrative overhead compared with writing custom Lambda resizing logic or managing the Auto Scaling groups by hand.
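The Cluster Autoscaler itself is usually deployed with its Helm chart or manifest; one supporting step is tagging each node group's Auto Scaling group so the autoscaler's auto-discovery can find it. A minimal boto3 sketch of that tagging step, with placeholder group and cluster names (the tag keys are the standard Cluster Autoscaler auto-discovery tags):

import boto3

autoscaling = boto3.client("autoscaling")

ASG_NAME = "eks-nodegroup-asg"      # placeholder: the node group's Auto Scaling group
CLUSTER_NAME = "my-eks-cluster"     # placeholder: the EKS cluster name

# Tags that the Cluster Autoscaler's --node-group-auto-discovery mode looks for.
autoscaling.create_or_update_tags(
    Tags=[
        {
            "ResourceId": ASG_NAME,
            "ResourceType": "auto-scaling-group",
            "Key": "k8s.io/cluster-autoscaler/enabled",
            "Value": "true",
            "PropagateAtLaunch": True,
        },
        {
            "ResourceId": ASG_NAME,
            "ResourceType": "auto-scaling-group",
            "Key": f"k8s.io/cluster-autoscaler/{CLUSTER_NAME}",
            "Value": "owned",
            "PropagateAtLaunch": True,
        },
    ]
)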