Exam Details

  • Exam Code: MLS-C01
  • Exam Name: AWS Certified Machine Learning - Specialty (MLS-C01)
  • Certification: Amazon Certifications
  • Vendor: Amazon
  • Total Questions: 394 Q&As
  • Last Updated: May 04, 2025

Amazon Certifications MLS-C01 Questions & Answers

  • Question 61:

    An insurance company developed a new experimental machine learning (ML) model to replace an existing model that is in production. The company must validate the quality of predictions from the new experimental model in a production environment before the company uses the new experimental model to serve general user requests.

    Only one model can serve user requests at a time. The company must measure the performance of the new experimental model without affecting the current live traffic.

    Which solution will meet these requirements?

    A. A/B testing

    B. Canary release

    C. Shadow deployment

    D. Blue/green deployment
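
    For context on the deployment patterns in Question 61: SageMaker supports shadow testing natively, where a shadow variant receives a copy of live traffic but its responses are logged for comparison rather than returned to callers. Below is a minimal, hypothetical boto3 sketch; the endpoint-config, model, and variant names are all placeholders, and both models are assumed to already exist in SageMaker.

        import boto3  # AWS SDK for Python

        sm = boto3.client("sagemaker")

        sm.create_endpoint_config(
            EndpointConfigName="orders-shadow-config",  # placeholder name
            ProductionVariants=[{
                "VariantName": "production-model",   # serves all user requests
                "ModelName": "current-model",        # hypothetical model name
                "InstanceType": "ml.m5.xlarge",
                "InitialInstanceCount": 1,
                "InitialVariantWeight": 1.0,
            }],
            # The shadow variant receives a copy of the traffic; its responses
            # are captured for offline comparison but never returned to users.
            ShadowProductionVariants=[{
                "VariantName": "shadow-model",
                "ModelName": "experimental-model",   # hypothetical model name
                "InstanceType": "ml.m5.xlarge",
                "InitialInstanceCount": 1,
                "InitialVariantWeight": 1.0,
            }],
        )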

  • Question 62:

    A media company is building a computer vision model to analyze images that are on social media. The model consists of CNNs that the company trained by using images that the company stores in Amazon S3. The company used an Amazon SageMaker training job in File mode with a single Amazon EC2 On-Demand Instance.

    Every day, the company updates the model by using about 10,000 images that the company has collected in the last 24 hours. The company configures training with only one epoch. The company wants to speed up training and lower costs without the need to make any code changes.

    Which solution will meet these requirements?

    A. Instead of File mode, configure the SageMaker training job to use Pipe mode. Ingest the data from a pipe.

    B. Instead of File mode, configure the SageMaker training job to use FastFile mode with no other changes.

    C. Instead of On-Demand Instances, configure the SageMaker training job to use Spot Instances. Make no other changes.

    D. Instead of On-Demand Instances, configure the SageMaker training job to use Spot Instances. Implement model checkpoints.
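
    To illustrate the input-mode options in Question 62: with the SageMaker Python SDK, the input mode is a single constructor argument on the estimator, which is why switching from File mode to FastFile mode requires no training-code changes. A minimal sketch, with a hypothetical training image, role, and S3 prefix:

        from sagemaker.estimator import Estimator

        estimator = Estimator(
            image_uri="<training-image-uri>",  # placeholder for the CNN training image
            role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
            instance_count=1,
            instance_type="ml.p3.2xlarge",
            input_mode="FastFile",  # streams S3 objects on demand; "File" and "Pipe" are the other modes
        )
        estimator.fit({"train": "s3://my-bucket/images/"})  # hypothetical S3 prefix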

  • Question 63:

    A data engineer is preparing a dataset that a retail company will use to predict the number of visitors to stores. The data engineer created an Amazon S3 bucket. The engineer subscribed the S3 bucket to an AWS Data Exchange data product for general economic indicators. The data engineer wants to join the economic indicator data to an existing table in Amazon Athena to merge with the business data. All these transformations must finish running in 30-60 minutes.

    Which solution will meet these requirements MOST cost-effectively?

    A. Configure the AWS Data Exchange product as a producer for an Amazon Kinesis data stream. Use an Amazon Kinesis Data Firehose delivery stream to transfer the data to Amazon S3. Run an AWS Glue job that will merge the existing business data with the Athena table. Write the result set back to Amazon S3.

    B. Use an S3 event on the AWS Data Exchange S3 bucket to invoke an AWS Lambda function. Program the Lambda function to use Amazon SageMaker Data Wrangler to merge the existing business data with the Athena table. Write the result set back to Amazon S3.

    C. Use an S3 event on the AWS Data Exchange S3 bucket to invoke an AWS Lambda function. Program the Lambda function to run an AWS Glue job that will merge the existing business data with the Athena table. Write the results back to Amazon S3.

    D. Provision an Amazon Redshift cluster. Subscribe to the AWS Data Exchange product and use the product to create an Amazon Redshift table. Merge the data in Amazon Redshift. Write the results back to Amazon S3.
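
    As a concrete illustration of the S3-event-to-Glue pattern named in option C, here is a minimal, hypothetical Lambda handler. The Glue job name is a placeholder, and the job itself (the merge of the economic indicators with the business data in Athena) is assumed to be defined separately.

        import boto3

        glue = boto3.client("glue")

        def handler(event, context):
            """Triggered by an S3 event on the Data Exchange bucket."""
            for record in event.get("Records", []):
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                # Start the (hypothetical) Glue job that merges the new
                # economic-indicator file with the business data.
                glue.start_job_run(
                    JobName="merge-economic-indicators",  # placeholder job name
                    Arguments={"--source_bucket": bucket, "--source_key": key},
                )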

  • Question 64:

    A machine learning (ML) specialist is using the Amazon SageMaker DeepAR forecasting algorithm to train a model on CPU-based Amazon EC2 On-Demand instances. The model currently takes multiple hours to train. The ML specialist wants to decrease the training time of the model.

    Which approaches will meet this requirement? (Select TWO.)

    A. Replace On-Demand Instances with Spot Instances.

    B. Configure model auto scaling dynamically to adjust the number of instances automatically.

    C. Replace CPU-based EC2 instances with GPU-based EC2 instances.

    D. Use multiple training instances.

    E. Use a pre-trained version of the model. Run incremental training.
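
    For Question 64, note that both the instance count and the instance type are estimator-level settings for the built-in DeepAR algorithm, so scaling out or moving from CPU to GPU requires no algorithm-code changes. A minimal sketch with a hypothetical role and S3 paths:

        import sagemaker
        from sagemaker import image_uris
        from sagemaker.estimator import Estimator

        session = sagemaker.Session()

        # Built-in DeepAR container image for the current region.
        image = image_uris.retrieve("forecasting-deepar", session.boto_region_name)

        estimator = Estimator(
            image_uri=image,
            role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
            instance_count=2,               # distribute training across instances
            instance_type="ml.p3.2xlarge",  # GPU instead of CPU
            output_path="s3://my-bucket/deepar-output/",  # hypothetical path
            sagemaker_session=session,
        )
        estimator.set_hyperparameters(time_freq="D", prediction_length=14,
                                      context_length=14, epochs=100)
        estimator.fit({"train": "s3://my-bucket/deepar-train/"})  # hypothetical path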

  • Question 65:

    A company processes millions of orders every day. The company uses Amazon DynamoDB tables to store order information. When customers submit new orders, the new orders are immediately added to the DynamoDB tables. New orders arrive in the DynamoDB tables continuously.

    A data scientist must build a peak-time prediction solution. The data scientist must also create an Amazon QuickSight dashboard to display near real-time order insights. The data scientist needs to build a solution that will give QuickSight access to the data as soon as new order information arrives.

    Which solution will meet these requirements with the LEAST delay between when a new order is processed and when QuickSight can access the new order information?

    A. Use AWS Glue to export the data from Amazon DynamoDB to Amazon S3. Configure QuickSight to access the data in Amazon S3.

    B. Use Amazon Kinesis Data Streams to export the data from Amazon DynamoDB to Amazon S3. Configure QuickSight to access the data in Amazon S3.

    C. Use an API call from QuickSight to access the data that is in Amazon DynamoDB directly.

    D. Use Amazon Kinesis Data Firehose to export the data from Amazon DynamoDB to Amazon S3. Configure QuickSight to access the data in Amazon S3.
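
    Relevant to Question 65: DynamoDB can stream item-level changes to a Kinesis data stream natively, which is the lowest-latency export path among the options. A minimal, hypothetical boto3 sketch of wiring that up; the table and stream names are placeholders, and delivery to S3 plus the QuickSight dataset are configured separately.

        import boto3

        kinesis = boto3.client("kinesis")
        dynamodb = boto3.client("dynamodb")

        # Hypothetical stream for order change records.
        kinesis.create_stream(StreamName="orders-stream", ShardCount=4)

        # Route every item-level change in the orders table to the stream.
        dynamodb.enable_kinesis_streaming_destination(
            TableName="orders",  # placeholder table name
            StreamArn="arn:aws:kinesis:us-east-1:123456789012:stream/orders-stream",
        )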

  • Question 66:

    A company deployed a machine learning (ML) model on the company website to predict real estate prices. Several months after deployment, an ML engineer notices that the accuracy of the model has gradually decreased.

    The ML engineer needs to improve the accuracy of the model. The engineer also needs to receive notifications for any future performance issues.

    Which solution will meet these requirements?

    A. Perform incremental training to update the model. Activate Amazon SageMaker Model Monitor to detect model performance issues and to send notifications.

    B. Use Amazon SageMaker Model Governance. Configure Model Governance to automatically adjust model hyperparameters. Create a performance threshold alarm in Amazon CloudWatch to send notifications.

    C. Use Amazon SageMaker Debugger with appropriate thresholds. Configure Debugger to send Amazon CloudWatch alarms to alert the team. Retrain the model by using only data from the previous several months.

    D. Use only data from the previous several months to perform incremental training to update the model. Use Amazon SageMaker Model Monitor to detect model performance issues and to send notifications.
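
    To make the Model Monitor option in Question 66 concrete, here is a minimal sketch using the SageMaker Python SDK: a baseline is computed from training data, and a schedule then compares live endpoint traffic against it. The names, S3 paths, and endpoint are hypothetical, and the notification wiring (for example, a CloudWatch alarm on the monitor's emitted metrics) is assumed to be added separately.

        from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
        from sagemaker.model_monitor.dataset_format import DatasetFormat

        monitor = DefaultModelMonitor(
            role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
            instance_count=1,
            instance_type="ml.m5.xlarge",
        )

        # Baseline statistics and constraints from the training data (hypothetical path).
        monitor.suggest_baseline(
            baseline_dataset="s3://my-bucket/train/baseline.csv",
            dataset_format=DatasetFormat.csv(header=True),
            output_s3_uri="s3://my-bucket/monitor/baseline/",
        )

        # Hourly checks of live endpoint traffic against the baseline.
        monitor.create_monitoring_schedule(
            monitor_schedule_name="real-estate-price-monitor",  # placeholder name
            endpoint_input="real-estate-endpoint",              # placeholder endpoint
            output_s3_uri="s3://my-bucket/monitor/reports/",
            statistics=monitor.baseline_statistics(),
            constraints=monitor.suggested_constraints(),
            schedule_cron_expression=CronExpressionGenerator.hourly(),
        )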

  • Question 67:

    A company wants to predict stock market price trends. The company stores stock market data each business day in Amazon S3 in Apache Parquet format. The company stores 20 GB of data each day for each stock code.

    A data engineer must use Apache Spark to perform batch preprocessing data transformations quickly so the company can complete prediction jobs before the stock market opens the next day. The company plans to track more stock market codes and needs a way to scale the preprocessing data transformations.

    Which AWS service or feature will meet these requirements with the LEAST development effort over time?

    A. AWS Glue jobs

    B. Amazon EMR cluster

    C. Amazon Athena

    D. AWS Lambda
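
    As a reference point for Question 67's options, a serverless Glue job runs ordinary Spark code without any cluster management. A minimal PySpark sketch of the kind of batch Parquet preprocessing described; the S3 paths and the transformation step are hypothetical:

        import sys
        from awsglue.context import GlueContext
        from awsglue.job import Job
        from awsglue.utils import getResolvedOptions
        from pyspark.context import SparkContext
        from pyspark.sql import functions as F

        args = getResolvedOptions(sys.argv, ["JOB_NAME"])
        glue_context = GlueContext(SparkContext())
        spark = glue_context.spark_session
        job = Job(glue_context)
        job.init(args["JOB_NAME"], args)

        # Read the day's Parquet files (hypothetical prefix, partitioned by stock code).
        df = spark.read.parquet("s3://my-bucket/stock-data/")

        # Placeholder transformation: daily average closing price per stock code.
        daily = df.groupBy("stock_code").agg(F.avg("close").alias("avg_close"))

        daily.write.mode("overwrite").parquet("s3://my-bucket/preprocessed/")
        job.commit()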

  • Question 68:

    A data scientist uses Amazon SageMaker Data Wrangler to define and perform transformations and feature engineering on historical data. The data scientist saves the transformations to SageMaker Feature Store.

    The historical data is periodically uploaded to an Amazon S3 bucket. The data scientist needs to transform the new historic data and add it to the online feature store. The data scientist needs to prepare the new historic data for training and inference by using native integrations.

    Which solution will meet these requirements with the LEAST development effort?

    A. Use AWS Lambda to run a predefined SageMaker pipeline to perform the transformations on each new dataset that arrives in the S3 bucket.

    B. Run an AWS Step Functions step and a predefined SageMaker pipeline to perform the transformations on each new dataset that arrives in the S3 bucket.

    C. Use Apache Airflow to orchestrate a set of predefined transformations on each new dataset that arrives in the S3 bucket.

    D. Configure Amazon EventBridge to run a predefined SageMaker pipeline to perform the transformations when new data is detected in the S3 bucket.
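
    To illustrate the EventBridge option in Question 68: EventBridge can start a SageMaker pipeline directly as a rule target, with no glue code in between. A minimal, hypothetical boto3 sketch; the bucket, pipeline, and role are placeholders, and the bucket is assumed to have EventBridge notifications enabled.

        import json
        import boto3

        events = boto3.client("events")

        # Fire when a new object lands in the (hypothetical) historic-data bucket.
        events.put_rule(
            Name="new-historic-data",
            EventPattern=json.dumps({
                "source": ["aws.s3"],
                "detail-type": ["Object Created"],
                "detail": {"bucket": {"name": ["my-historic-data-bucket"]}},
            }),
        )

        # Start the predefined SageMaker pipeline as the rule's target.
        events.put_targets(
            Rule="new-historic-data",
            Targets=[{
                "Id": "run-transform-pipeline",
                "Arn": "arn:aws:sagemaker:us-east-1:123456789012:pipeline/transform-pipeline",
                "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeSageMakerRole",
                "SageMakerPipelineParameters": {"PipelineParameterList": []},
            }],
        )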

  • Question 69:

    A machine learning (ML) engineer has created a feature repository in Amazon SageMaker Feature Store for the company. The company has AWS accounts for development, integration, and production. The company hosts a feature store in the development account. The company uses Amazon S3 buckets to store feature values offline. The company wants to share features and to allow the integration account and the production account to reuse the features that are in the feature repository.

    Which combination of steps will meet these requirements? (Select TWO.)

    A. Create an IAM role in the development account that the integration account and production account can assume. Attach IAM policies to the role that allow access to the feature repository and the S3 buckets.

    B. Share the feature repository that is associated with the S3 buckets from the development account to the integration account and the production account by using AWS Resource Access Manager (AWS RAM).

    C. Use AWS Security Token Service (AWS STS) from the integration account and the production account to retrieve credentials for the development account.

    D. Set up S3 replication between the development S3 buckets and the integration and production S3 buckets.

    E. Create an AWS PrivateLink endpoint in the development account for SageMaker.
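
    For Question 69, options A and C describe the standard cross-account pattern: a role in the account that owns the resource, assumed via STS from the consumer accounts. A minimal sketch from the integration account's side; the role ARN is hypothetical:

        import boto3

        sts = boto3.client("sts")

        # Assume the (hypothetical) sharing role defined in the development account.
        creds = sts.assume_role(
            RoleArn="arn:aws:iam::111111111111:role/FeatureStoreAccessRole",
            RoleSessionName="feature-store-read",
        )["Credentials"]

        # Use the temporary credentials to reach the development account's feature store.
        sagemaker = boto3.client(
            "sagemaker",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        print(sagemaker.list_feature_groups())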

  • Question 70:

    A retail company stores 100 GB of daily transactional data in Amazon S3 at periodic intervals. The company wants to identify the schema of the transactional data. The company also wants to perform transformations on the transactional data that is in Amazon S3.

    The company wants to use a machine learning (ML) approach to detect fraud in the transformed data.

    Which combination of solutions will meet these requirements with the LEAST operational overhead? (Select THREE.)

    A. Use Amazon Athena to scan the data and identify the schema.

    B. Use AWS Glue crawlers to scan the data and identify the schema.

    C. Use Amazon Redshift stored procedures to perform data transformations.

    D. Use AWS Glue workflows and AWS Glue jobs to perform data transformations.

    E. Use Amazon Redshift ML to train a model to detect fraud.

    F. Use Amazon Fraud Detector to train a model to detect fraud.
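
    To ground the schema-discovery step in Question 70 (option B), a Glue crawler infers the schema of data in S3 and registers it in the Glue Data Catalog. A minimal, hypothetical boto3 sketch; the crawler name, role, database, and path are placeholders:

        import boto3

        glue = boto3.client("glue")

        # Crawl the (hypothetical) transactional-data prefix and catalog its schema.
        glue.create_crawler(
            Name="transactions-crawler",
            Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical role
            DatabaseName="retail",
            Targets={"S3Targets": [{"Path": "s3://my-bucket/transactions/"}]},
        )
        glue.start_crawler(Name="transactions-crawler")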

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important, and more and more enterprises require them when you apply for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and how do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Amazon exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are confused about your MLS-C01 exam preparation or your Amazon certification application, do not hesitate to visit Vcedump.com to find your solutions.