Exam Details

  • Exam Code: SAA-C03
  • Exam Name: AWS Certified Solutions Architect - Associate (SAA-C03)
  • Certification: Amazon Certifications
  • Vendor: Amazon
  • Total Questions: 1304 Q&As
  • Last Updated: Jun 07, 2025

Amazon Certifications SAA-C03 Questions & Answers

  • Question 781:

    A company has deployed a database in Amazon RDS for MySQL. Due to increased transactions, the database support team is reporting slow reads against the DB instance and recommends adding a read replica.

    Which combination of actions should a solutions architect take before implementing this change? (Choose two.)

    A. Enable binlog replication on the RDS primary node.

    B. Choose a failover priority for the source DB instance.

    C. Allow long-running transactions to complete on the source DB instance.

    D. Create a global table and specify the AWS Regions where the table will be available.

    E. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
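Option E describes a prerequisite that can be set through the RDS API before the replica is created. A minimal sketch of the boto3 parameter shapes involved; the instance identifiers are hypothetical placeholders:

```python
# Step 1 (option E): enable automatic backups on the source instance by
# setting a non-zero backup retention period -- a prerequisite for
# creating RDS for MySQL read replicas.
enable_backups = {
    "DBInstanceIdentifier": "mysql-primary",   # hypothetical name
    "BackupRetentionPeriod": 7,                # must be a value other than 0
    "ApplyImmediately": True,
}

# Step 2: create the read replica once backups are enabled and
# long-running transactions have completed on the source.
create_replica = {
    "DBInstanceIdentifier": "mysql-replica-1",  # hypothetical name
    "SourceDBInstanceIdentifier": "mysql-primary",
}

# With AWS credentials configured, these would be passed as:
#   rds = boto3.client("rds")
#   rds.modify_db_instance(**enable_backups)
#   rds.create_db_instance_read_replica(**create_replica)
```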

  • Question 782:

    A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company's application. A solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable.

    What should the solutions architect recommend?

    A. Remove the two NAT instances and replace them with two NAT gateways in the same Availability Zone.

    B. Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones.

    C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones.

    D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network Load Balancer.
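Placing one NAT gateway in each Availability Zone, as option C describes, can be sketched as two boto3 `create_nat_gateway` calls; the subnet and Elastic IP allocation IDs are hypothetical placeholders:

```python
# One NAT gateway per Availability Zone: each public subnet below is
# assumed to sit in a different AZ (hypothetical IDs).
nat_gateways = [
    {"SubnetId": "subnet-az-a", "AllocationId": "eipalloc-az-a"},  # AZ a
    {"SubnetId": "subnet-az-b", "AllocationId": "eipalloc-az-b"},  # AZ b
]

# With AWS credentials configured:
#   ec2 = boto3.client("ec2")
#   for params in nat_gateways:
#       ec2.create_nat_gateway(**params)
```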

  • Question 783:

    A company has deployed a web application on AWS. The company hosts the backend database on Amazon RDS for MySQL with a primary DB instance and five read replicas to support scaling needs. The read replicas must lag no more than 1 second behind the primary DB instance. The database routinely runs scheduled stored procedures.

    As traffic on the website increases, the replicas experience additional lag during periods of peak load. A solutions architect must reduce the replication lag as much as possible. The solutions architect must minimize changes to the application code and must minimize ongoing operational overhead.

    Which solution will meet these requirements?

    A. Migrate the database to Amazon Aurora MySQL. Replace the read replicas with Aurora Replicas, and configure Aurora Auto Scaling. Replace the stored procedures with Aurora MySQL native functions.

    B. Deploy an Amazon ElastiCache for Redis cluster in front of the database. Modify the application to check the cache before the application queries the database. Replace the stored procedures with AWS Lambda functions.

    C. Migrate the database to a MySQL database that runs on Amazon EC2 instances. Choose large, compute optimized EC2 instances for all replica nodes. Maintain the stored procedures on the EC2 instances.

    D. Migrate the database to Amazon DynamoDB. Provision a large number of read capacity units (RCUs) to support the required throughput, and configure on-demand capacity scaling. Replace the stored procedures with DynamoDB Streams.

  • Question 784:

    A company previously migrated its data warehouse solution to AWS. The company also has an AWS Direct Connect connection. Corporate office users query the data warehouse using a visualization tool. The average size of a query returned by the data warehouse is 50 MB and each webpage sent by the visualization tool is approximately 500 KB. Result sets returned by the data warehouse are not cached.

    Which solution provides the LOWEST data transfer egress cost for the company?

    A. Host the visualization tool on premises and query the data warehouse directly over the internet.

    B. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet.

    C. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same AWS Region.

    D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in the same Region.
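The cost difference turns on what actually leaves AWS. Using only the figures given in the question, a back-of-the-envelope comparison of egress per query:

```python
# If the tool runs on premises, the full 50 MB result set egresses per
# query; if the tool runs in the same Region as the warehouse, only the
# ~500 KB rendered webpage leaves AWS.
result_set_mb = 50.0        # egress per query when querying from on premises
webpage_mb = 500 / 1024     # ~0.49 MB egress per query when tool is in-Region

reduction = result_set_mb / webpage_mb
print(f"in-Region hosting cuts egress per query by a factor of ~{reduction:.0f}x")
```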

  • Question 785:

    A company hosts a three-tier application on Amazon EC2 instances in a single Availability Zone. The web application uses a self-managed MySQL database that is hosted on an EC2 instance to store data in an Amazon Elastic Block Store (Amazon EBS) volume. The MySQL database currently uses a 1 TB Provisioned IOPS SSD (io2) EBS volume. The company expects traffic of 1,000 IOPS for both reads and writes at peak traffic.

    The company wants to minimize any disruptions, stabilize performance, and reduce costs while retaining the capacity for double the IOPS. The company wants to move the database tier to a fully managed solution that is highly available and fault tolerant.

    Which solution will meet these requirements MOST cost-effectively?

    A. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with an io2 Block Express EBS volume.

    B. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a General Purpose SSD (gp2) EBS volume.

    C. Use Amazon S3 Intelligent-Tiering access tiers.

    D. Use two large EC2 instances to host the database in active-passive mode.
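Whether a gp2 volume can retain capacity for double the IOPS follows from the published gp2 baseline of 3 IOPS per GiB (minimum 100, capped at 16,000); a quick check against the question's numbers, treating the 1 TB volume as 1,024 GiB:

```python
# gp2 baseline performance: 3 IOPS per GiB, floor of 100, ceiling of 16,000.
volume_gib = 1024                 # the 1 TB volume, as GiB
baseline_iops = min(max(3 * volume_gib, 100), 16_000)

required_iops = 2 * 1000          # double the stated 1,000 IOPS peak
print(f"gp2 baseline {baseline_iops} IOPS vs required {required_iops} IOPS")
```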

  • Question 786:

    An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table. What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?

    A. Use a VPC endpoint for DynamoDB.

    B. Use a NAT gateway in a public subnet.

    C. Use a NAT instance in a private subnet.

    D. Use the internet gateway attached to the VPC.
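A gateway VPC endpoint for DynamoDB keeps table traffic on the AWS network with no internet path. A sketch of the boto3 parameter shape; the VPC ID, Region, and route table ID are hypothetical placeholders:

```python
# Gateway endpoint for DynamoDB, associated with the private subnets'
# route table so instance traffic to the table never leaves AWS.
endpoint_params = {
    "VpcId": "vpc-0123abcd",                          # hypothetical
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",
    "VpcEndpointType": "Gateway",
    "RouteTableIds": ["rtb-private-1"],               # hypothetical
}

# With AWS credentials configured:
#   ec2 = boto3.client("ec2")
#   ec2.create_vpc_endpoint(**endpoint_params)
```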

  • Question 787:

    A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances. The EC2 instances are in an Auto Scaling group. The number of transactions can vary, but the baseline CPU utilization that is noted on each run is at least 60%. The company needs to provision the capacity 30 minutes before the jobs run.

    Currently, engineers complete this task by manually modifying the Auto Scaling group parameters. The company does not have the resources to analyze the required capacity trends for the Auto Scaling group counts. The company needs an automated way to modify the Auto Scaling group's capacity.

    Which solution will meet these requirements with the LEAST operational overhead?

    A. Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on a target CPU utilization of 60%.

    B. Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.

    C. Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.

    D. Create an Amazon EventBridge event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group reaches 60%. Configure the Lambda function to increase the Auto Scaling group's desired capacity and maximum capacity by 20%.
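A weekly scheduled scaling action of the kind option B describes can be sketched as boto3 parameters for `put_scheduled_update_group_action`. The group name, capacities, and job time (assumed here to be Sundays at 02:00 UTC, so the action fires at 01:30 UTC) are hypothetical:

```python
# Scheduled action that provisions capacity 30 minutes before a weekly
# batch run assumed to start Sundays at 02:00 UTC.
scheduled_action = {
    "AutoScalingGroupName": "batch-asg",          # hypothetical name
    "ScheduledActionName": "pre-provision-weekly",
    "Recurrence": "30 1 * * 0",                   # cron: Sundays 01:30 UTC
    "MinSize": 4,                                 # hypothetical capacities
    "MaxSize": 12,
    "DesiredCapacity": 8,
}

# With AWS credentials configured:
#   autoscaling = boto3.client("autoscaling")
#   autoscaling.put_scheduled_update_group_action(**scheduled_action)
```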

  • Question 788:

    A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software for Amazon EC2 instances. A solutions architect must ensure that all instances provisioned through EC2 Auto Scaling groups successfully send reports to the auditing system as soon as they are launched and terminated.

    Which solution achieves these goals MOST efficiently?

    A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send data to the audit system.

    B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated.

    C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send data to the audit system when instances are launched and terminated.

    D. Run a custom script on the instance operating system to send data to the audit system. Configure the script to be invoked by the EC2 Auto Scaling group when the instance starts and is terminated.
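The lifecycle-hook approach in option B needs one hook per transition. A sketch of the boto3 parameter shapes for `put_lifecycle_hook`; the group name is a hypothetical placeholder:

```python
# One lifecycle hook per transition: the hook pauses the instance in a
# wait state so the reporting script can run at launch and at termination.
hooks = [
    {
        "LifecycleHookName": "report-on-launch",
        "AutoScalingGroupName": "app-asg",   # hypothetical name
        "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
    },
    {
        "LifecycleHookName": "report-on-terminate",
        "AutoScalingGroupName": "app-asg",
        "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING",
    },
]

# With AWS credentials configured, each dict would be passed to:
#   boto3.client("autoscaling").put_lifecycle_hook(**hook)
```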

  • Question 789:

    A company wants to use high performance computing (HPC) infrastructure on AWS for financial risk modeling. The company's HPC workloads run on Linux. Each HPC workflow runs on hundreds of Amazon EC2 Spot Instances, is short-lived, and generates thousands of output files that are ultimately stored in persistent storage for analytics and long-term future use.

    The company seeks a cloud storage solution that permits the copying of on-premises data to long-term persistent storage to make data available for processing by all EC2 instances. The solution should also be a high performance file system that is integrated with persistent storage to read and write datasets and output files.

    Which combination of AWS services meets these requirements?

    A. Amazon FSx for Lustre integrated with Amazon S3

    B. Amazon FSx for Windows File Server integrated with Amazon S3

    C. Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS)

    D. Amazon S3 bucket with a VPC endpoint integrated with an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume
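An FSx for Lustre file system linked to S3, as in option A, can be sketched as boto3 parameters for `create_file_system`. The bucket, subnet ID, and capacity are hypothetical placeholders:

```python
# Lustre file system hydrated from an S3 bucket (ImportPath) with results
# exported back to S3 (ExportPath) for long-term persistent storage.
fsx_params = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 1200,                        # GiB (hypothetical size)
    "SubnetIds": ["subnet-0123abcd"],               # hypothetical
    "LustreConfiguration": {
        "DeploymentType": "SCRATCH_2",              # scratch fits short-lived HPC
        "ImportPath": "s3://hpc-datasets",          # hypothetical bucket
        "ExportPath": "s3://hpc-datasets/results",
    },
}

# With AWS credentials configured:
#   boto3.client("fsx").create_file_system(**fsx_params)
```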

  • Question 790:

    An ecommerce company is building a distributed application that involves several serverless functions and AWS services to complete order-processing tasks. These tasks require manual approvals as part of the workflow. A solutions architect needs to design an architecture for the order-processing application. The solution must be able to combine multiple AWS Lambda functions into responsive serverless applications. The solution also must orchestrate data and services that run on Amazon EC2 instances, containers, or on-premises servers.

    Which solution will meet these requirements with the LEAST operational overhead?

    A. Use AWS Step Functions to build the application.

    B. Integrate all the application components in an AWS Glue job.

    C. Use Amazon Simple Queue Service (Amazon SQS) to build the application.

    D. Use AWS Lambda functions and Amazon EventBridge (Amazon CloudWatch Events) events to build the application.
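A Step Functions workflow of the kind option A describes can pause for manual approval using the callback (`waitForTaskToken`) pattern. A minimal Amazon States Language sketch; all ARNs and function names are hypothetical placeholders:

```python
import json

# Minimal state machine: process an order, pause for a manual approval via
# the task-token callback pattern, then fulfill the order.
definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process",
            "Next": "WaitForApproval",
        },
        "WaitForApproval": {
            "Type": "Task",
            # Callback integration: the workflow pauses until SendTaskSuccess
            # is called with the token delivered to the approver.
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "notify-approver",        # hypothetical
                "Payload": {"token.$": "$$.Task.Token"},
            },
            "Next": "FulfillOrder",
        },
        "FulfillOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:fulfill",
            "End": True,
        },
    },
}

# Serialized definition as it would be passed to create_state_machine().
asl = json.dumps(definition)
```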

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming increasingly important, and more and more enterprises require them when you apply for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and how do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Amazon exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are confused about your SAA-C03 exam preparation or Amazon certification application, do not hesitate to visit Vcedump.com to find your solutions.