Exam Details

  • Exam Code: SAA-C03
  • Exam Name: AWS Certified Solutions Architect - Associate (SAA-C03)
  • Certification: Amazon Certifications
  • Vendor: Amazon
  • Total Questions: 1304 Q&As
  • Last Updated: Jun 07, 2025

Amazon Certifications SAA-C03 Questions & Answers

  • Question 421:

    A company stores a large volume of image files in an Amazon S3 bucket. The images need to be readily available for the first 180 days. The images are infrequently accessed for the next 180 days. After 360 days, the images need to be archived but must be available instantly upon request. After 5 years, only auditors can access the images. The auditors must be able to retrieve the images within 12 hours. The images cannot be lost during this process.

    A developer will use S3 Standard storage for the first 180 days. The developer needs to configure an S3 Lifecycle rule.

    Which solution will meet these requirements MOST cost-effectively?

    A. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

    B. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days, S3 Glacier Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

    C. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

    D. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
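
    For reference, a transition schedule like the one in option C maps directly onto a single S3 Lifecycle rule. Below is a minimal boto3 sketch; the bucket name and rule ID are hypothetical, and the storage-class values are the S3 API constants for Standard-IA, Glacier Instant Retrieval, and Glacier Deep Archive.

      import boto3

      s3 = boto3.client("s3")

      # Hypothetical bucket; transition days follow the schedule in the question.
      s3.put_bucket_lifecycle_configuration(
          Bucket="example-image-archive",
          LifecycleConfiguration={
              "Rules": [
                  {
                      "ID": "image-archival",
                      "Status": "Enabled",
                      "Filter": {"Prefix": ""},  # apply the rule to every object
                      "Transitions": [
                          {"Days": 180, "StorageClass": "STANDARD_IA"},    # infrequent access
                          {"Days": 360, "StorageClass": "GLACIER_IR"},     # archived, instant retrieval
                          {"Days": 1825, "StorageClass": "DEEP_ARCHIVE"},  # ~5 years, 12-hour retrieval
                      ],
                  }
              ]
          },
      )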

  • Question 422:

    A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The company's online application uses the database to process transactions. The data analysis team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible. Which solution will meet these requirements with the LEAST operational overhead?

    A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes

    B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes

    C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes

    D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes
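
    For reference, adding a managed read replica to an existing Amazon RDS instance is a single API call. Below is a minimal boto3 sketch, assuming hypothetical instance identifiers for an RDS for SQL Server source.

      import boto3

      rds = boto3.client("rds")

      # Hypothetical identifiers. The replica serves the analytics team's reporting
      # queries so they no longer run against the transactional primary.
      rds.create_db_instance_read_replica(
          DBInstanceIdentifier="sqlserver-prod-reporting",
          SourceDBInstanceIdentifier="sqlserver-prod",
          DBInstanceClass="db.m5.xlarge",
      )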

  • Question 423:

    An ecommerce company runs a PostgreSQL database on premises. The database stores data by using high IOPS Amazon Elastic Block Store (Amazon EBS) block storage. The daily peak I/O transactions per second do not exceed 15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and provision disk IOPS performance independent of disk storage capacity. Which solution will meet these requirements MOST cost-effectively?

    A. Configure the General Purpose SSD (gp2) EBS volume storage type and provision 15,000 IOPS.

    B. Configure the Provisioned IOPS SSD (io1) EBS volume storage type and provision 15,000 IOPS.

    C. Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.

    D. Configure the EBS magnetic volume type to achieve maximum IOPS.
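
    For reference, gp3 storage lets IOPS be provisioned separately from the allocated storage size when the instance is created or modified. Below is a minimal boto3 sketch with hypothetical identifiers; note that RDS requires a minimum storage size before custom gp3 IOPS can be set.

      import boto3

      rds = boto3.client("rds")

      # Hypothetical identifiers. AllocatedStorage is in GiB; Iops is provisioned
      # independently of that size on the gp3 storage type.
      rds.create_db_instance(
          DBInstanceIdentifier="ecommerce-postgres",
          Engine="postgres",
          DBInstanceClass="db.m6g.2xlarge",
          MasterUsername="postgres",
          ManageMasterUserPassword=True,  # RDS stores the password in Secrets Manager
          AllocatedStorage=500,
          StorageType="gp3",
          Iops=15000,
      )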

  • Question 424:

    A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The company has a high performance computing (HPC) environment in its data center and wants to expand its forecasting capabilities. A solutions architect must identify a highly available cloud storage solution that can handle large amounts of sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances that will simultaneously access and process the entire dataset.

    What should the solutions architect do to meet these requirements?

    A. Use Amazon FSx for Lustre scratch file systems.

    B. Use Amazon FSx for Lustre persistent file systems.

    C. Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.

    D. Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.
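
    For reference, the scratch-versus-persistent choice for Amazon FSx for Lustre is made at file system creation through the deployment type. Below is a minimal boto3 sketch with a hypothetical subnet and sizing.

      import boto3

      fsx = boto3.client("fsx")

      # Hypothetical subnet and capacity. A persistent deployment type replicates
      # data within the file system, unlike scratch, while still delivering the
      # sustained throughput Lustre is built for.
      fsx.create_file_system(
          FileSystemType="LUSTRE",
          StorageCapacity=12000,  # GiB
          SubnetIds=["subnet-0123456789abcdef0"],
          LustreConfiguration={
              "DeploymentType": "PERSISTENT_2",
              "PerUnitStorageThroughput": 250,  # MB/s per TiB of storage
          },
      )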

  • Question 425:

    A gaming company is building an application with Voice over IP capabilities. The application will serve traffic to users across the world. The application needs to be highly available with automated failover across AWS Regions. The company wants to minimize the latency of users without relying on IP address caching on user devices.

    What should a solutions architect do to meet these requirements?

    A. Use AWS Global Accelerator with health checks.

    B. Use Amazon Route 53 with a geolocation routing policy.

    C. Create an Amazon CloudFront distribution that includes multiple origins.

    D. Create an Application Load Balancer that uses path-based routing.
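
    For reference, AWS Global Accelerator attaches health checks at the endpoint-group level, and unhealthy endpoints are failed over automatically. Below is a minimal boto3 sketch with hypothetical names, ports, and load balancer ARN; the Global Accelerator API itself is called in us-west-2.

      import boto3

      # The Global Accelerator control plane lives in us-west-2 regardless of where
      # the application endpoints run.
      ga = boto3.client("globalaccelerator", region_name="us-west-2")

      accelerator = ga.create_accelerator(Name="voip-accelerator", Enabled=True)

      listener = ga.create_listener(
          AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
          Protocol="UDP",
          PortRanges=[{"FromPort": 5060, "ToPort": 5060}],
      )

      # Hypothetical Network Load Balancer ARN in one of the served Regions.
      ga.create_endpoint_group(
          ListenerArn=listener["Listener"]["ListenerArn"],
          EndpointGroupRegion="us-east-1",
          EndpointConfigurations=[
              {
                  "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                                "loadbalancer/net/voip-nlb/0123456789abcdef",
                  "Weight": 128,
              }
          ],
          HealthCheckProtocol="TCP",
          HealthCheckPort=5060,
          HealthCheckIntervalSeconds=10,
          ThresholdCount=3,
      )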

  • Question 426:

    A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously. The workload requires access latency within 1 ms. After processing has completed, engineers will need access to the dataset for manual postprocessing.

    Which solution will meet these requirements?

    A. Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS.

    B. Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from the S3 bucket.

    C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for postprocessing.

    D. Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all instances for processing and postprocessing.
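
    For reference, an FSx for Lustre file system can be linked to an S3 bucket through a data repository association, so results written to the file system are exported to S3 for later postprocessing. Below is a minimal boto3 sketch with a hypothetical file system ID and bucket.

      import boto3

      fsx = boto3.client("fsx")

      # Hypothetical file system ID and bucket. New and changed files are imported
      # from and exported to the linked S3 location automatically.
      fsx.create_data_repository_association(
          FileSystemId="fs-0123456789abcdef0",
          FileSystemPath="/dataset",
          DataRepositoryPath="s3://example-hpc-dataset",
          S3={
              "AutoImportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
              "AutoExportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
          },
      )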

  • Question 427:

    A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.

    Which solution will meet these requirements with the LEAST operational overhead?

    A. Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.

    B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.

    C. Delete all expired and unused snapshots to reduce snapshot costs.

    D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company's snapshot policy requirements.
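
    For reference, Amazon Data Lifecycle Manager takes over snapshot creation and retention once a policy is defined, so old snapshots stop accumulating cost without manual cleanup. Below is a minimal boto3 sketch with a hypothetical role ARN and tag.

      import boto3

      dlm = boto3.client("dlm")

      # Hypothetical execution role and target tag. Volumes tagged Backup=Daily get a
      # snapshot every 24 hours, and only the most recent 14 snapshots are retained.
      dlm.create_lifecycle_policy(
          ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
          Description="Daily EBS snapshots, keep 14",
          State="ENABLED",
          PolicyDetails={
              "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
              "ResourceTypes": ["VOLUME"],
              "TargetTags": [{"Key": "Backup", "Value": "Daily"}],
              "Schedules": [
                  {
                      "Name": "daily",
                      "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                      "RetainRule": {"Count": 14},
                  }
              ],
          },
      )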

  • Question 428:

    A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database.

    Which solution will meet these requirements with the LEAST operational overhead?

    A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.

    B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.

    C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.

    D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
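
    For reference, Amazon RDS Proxy sits between the applications and the database: the proxy is created with a target group that points at the RDS instance, and the applications switch their connection string to the proxy endpoint. Below is a minimal boto3 sketch with hypothetical names, ARNs, and subnets; the engine family is assumed to be PostgreSQL here.

      import boto3

      rds = boto3.client("rds")

      # Hypothetical secret, role, and subnet IDs. The proxy authenticates to the
      # database with credentials stored in Secrets Manager.
      rds.create_db_proxy(
          DBProxyName="app-db-proxy",
          EngineFamily="POSTGRESQL",
          Auth=[
              {
                  "AuthScheme": "SECRETS",
                  "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app-db-creds",
              }
          ],
          RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-access",
          VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
      )

      # Register the existing RDS instance with the proxy's default target group.
      rds.register_db_proxy_targets(
          DBProxyName="app-db-proxy",
          TargetGroupName="default",
          DBInstanceIdentifiers=["app-db-instance"],
      )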

  • Question 429:

    A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application peak hours occur at the same time each day. Application users report slow application performance at the start of peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure that the application works properly at the start of peak hours.

    Which solution will meet these requirements?

    A. Configure an Application Load Balancer to distribute traffic properly to the instances.

    B. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on memory utilization.

    C. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on CPU utilization.

    D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
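
    For reference, a scheduled scaling action raises the group's capacity at a fixed time instead of waiting for a utilization metric to climb after peak traffic has already arrived. Below is a minimal boto3 sketch with a hypothetical group name, schedule, and sizes.

      import boto3

      autoscaling = boto3.client("autoscaling")

      # Hypothetical group and sizes. The recurrence is a cron expression (UTC by
      # default); scheduling it shortly before peak hours gives new instances time
      # to launch and warm up.
      autoscaling.put_scheduled_update_group_action(
          AutoScalingGroupName="app-asg",
          ScheduledActionName="scale-out-before-peak",
          Recurrence="30 8 * * *",  # every day at 08:30
          MinSize=4,
          MaxSize=12,
          DesiredCapacity=8,
      )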

  • Question 430:

    A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The existing data center has a Site-to-Site VPN connection to AWS that is 90% utilized.

    Which AWS service should a solutions architect use to meet these requirements?

    A. AWS DataSync with a VPC endpoint

    B. AWS Direct Connect

    C. AWS Snowball Edge Storage Optimized

    D. AWS Storage Gateway
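
    For reference, a Snowball Edge transfer begins with an import job that ships the device to the data center; data copied onto the device is loaded into the target S3 bucket after it is returned. Below is a minimal boto3 sketch with hypothetical address, role, and bucket values; the device type and capacity codes shown are assumed to correspond to the Storage Optimized device.

      import boto3

      snowball = boto3.client("snowball")

      # Hypothetical address ID (from create_address), IAM role, and bucket.
      snowball.create_job(
          JobType="IMPORT",
          SnowballType="EDGE_S",              # Snowball Edge Storage Optimized
          SnowballCapacityPreference="T80",
          Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-migration-bucket"}]},
          AddressId="ADID00000000-0000-0000-0000-000000000000",
          RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
          ShippingOption="EXPRESS",
      )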

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important, and more and more employers require them when you apply for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you achieve an ideal result, and where do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Amazon exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are unsure about your SAA-C03 exam preparation or your Amazon certification application, do not hesitate to visit Vcedump.com to find your solution.