Exam Details

  • Exam Code: MLS-C01
  • Exam Name: AWS Certified Machine Learning - Specialty (MLS-C01)
  • Certification: Amazon Certifications
  • Vendor: Amazon
  • Total Questions: 394 Q&As
  • Last Updated: May 04, 2025

Amazon Certifications MLS-C01 Questions & Answers

  • Question 31:

    An ecommerce company wants to update a production real-time machine learning (ML) recommendation engine API that uses Amazon SageMaker. The company wants to release a new model but does not want to make changes to applications that rely on the API. The company also wants to evaluate the performance of the new model in production traffic before the company fully rolls out the new model to all users.

    Which solution will meet these requirements with the LEAST operational overhead?

    A. Create a new SageMaker endpoint for the new model. Configure an Application Load Balancer (ALB) to distribute traffic between the old model and the new model.

    B. Modify the existing endpoint to use SageMaker production variants to distribute traffic between the old model and the new model.

    C. Modify the existing endpoint to use SageMaker batch transform to distribute traffic between the old model and the new model.

    D. Create a new SageMaker endpoint for the new model. Configure a Network Load Balancer (NLB) to distribute traffic between the old model and the new model.
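    Option B relies on SageMaker production variants, which let one endpoint serve two models and split traffic between them by weight. A minimal sketch of the endpoint-config payload (model, endpoint, and variant names are hypothetical placeholders):

```python
# Sketch: an endpoint config with two SageMaker production variants that
# split live traffic between the current model and a candidate model.
# All names are hypothetical placeholders.

def make_variant(name, model_name, weight, instance_type="ml.m5.large"):
    """Build one ProductionVariants entry for CreateEndpointConfig."""
    return {
        "VariantName": name,
        "ModelName": model_name,
        "InitialInstanceCount": 1,
        "InstanceType": instance_type,
        "InitialVariantWeight": weight,
    }

endpoint_config = {
    "EndpointConfigName": "recommender-ab-test",
    "ProductionVariants": [
        make_variant("current-model", "recommender-v1", 0.9),
        make_variant("candidate-model", "recommender-v2", 0.1),
    ],
}

# A variant's traffic share is its weight divided by the sum of all weights.
total = sum(v["InitialVariantWeight"] for v in endpoint_config["ProductionVariants"])
shares = {v["VariantName"]: v["InitialVariantWeight"] / total
          for v in endpoint_config["ProductionVariants"]}
print(shares)
```

    Because both variants live behind the same endpoint, callers keep using the existing API URL while the weights are shifted gradually toward the new model.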

  • Question 32:

    A machine learning (ML) specialist at a manufacturing company uses Amazon SageMaker DeepAR to forecast input materials and energy requirements for the company. Most of the data in the training dataset is missing values for the target variable. The company stores the training dataset as JSON files.

    The ML specialist must develop a solution by using Amazon SageMaker DeepAR to account for the missing values in the training dataset.

    Which approach will meet these requirements with the LEAST development effort?

    A. Impute the missing values by using the linear regression method. Use the entire dataset and the imputed values to train the DeepAR model.

    B. Replace the missing values with not a number (NaN). Use the entire dataset and the encoded missing values to train the DeepAR model.

    C. Impute the missing values by using a forward fill. Use the entire dataset and the imputed values to train the DeepAR model.

    D. Impute the missing values by using the mean value. Use the entire dataset and the imputed values to train the DeepAR model.
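    DeepAR can handle missing target values natively when they are encoded as "NaN" in the JSON Lines training records, so no imputation step is needed. A toy record in that format (the timestamps and demand values are made up):

```python
import json

# Sketch: one DeepAR training record in JSON Lines format, with missing
# target values encoded as the string "NaN". The values are invented.
record = {
    "start": "2024-01-01 00:00:00",
    "target": [42.0, "NaN", 38.5, "NaN", 40.1],
}
line = json.dumps(record)
print(line)
```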

  • Question 33:

    A machine learning (ML) engineer uses Bayesian optimization for a hyperparameter tuning job in Amazon SageMaker. The ML engineer uses precision as the objective metric.

    The ML engineer wants to use recall as the objective metric. The ML engineer also wants to expand the hyperparameter range for a new hyperparameter tuning job. The new hyperparameter range will include the range of the previously performed tuning job.

    Which approach will run the new hyperparameter tuning job in the LEAST amount of time?

    A. Use a warm start hyperparameter tuning job.

    B. Use a checkpointing hyperparameter tuning job.

    C. Use the same random seed for the hyperparameter tuning job.

    D. Use multiple jobs in parallel for the hyperparameter tuning job.
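    A warm start (option A) reuses the results of the earlier tuning job so the new job does not start from scratch. A sketch of the WarmStartConfig block passed to the CreateHyperParameterTuningJob API; the parent job name is a hypothetical placeholder:

```python
# Sketch: warm-start configuration referencing a previous tuning job.
# "TransferLearning" is the more permissive warm start type, allowing
# changes such as widened hyperparameter ranges. The job name below is
# a hypothetical placeholder.
warm_start_config = {
    "ParentHyperParameterTuningJobs": [
        {"HyperParameterTuningJobName": "precision-tuning-job"}
    ],
    "WarmStartType": "TransferLearning",
}
print(warm_start_config)
```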

  • Question 34:

    A news company is developing an article search tool for its editors. The search tool should look for the articles that are most relevant and representative for particular words that are queried among a corpus of historical news documents.

    The editors test the first version of the tool and report that the tool seems to look for word matches in general. The editors have to spend additional time to filter the results to look for the articles where the queried words are most important. A group of data scientists must redesign the tool so that it isolates the most frequently used words in a document. The tool also must capture the relevance and importance of words for each document in the corpus.

    Which solution meets these requirements?

    A. Extract the topics from each article by using Latent Dirichlet Allocation (LDA) topic modeling. Create a topic table by assigning the sum of the topic counts as a score for each word in the articles. Configure the tool to retrieve the articles where this topic count score is higher for the queried words.

    B. Build a term frequency for each word in the articles that is weighted with the article's length. Build an inverse document frequency for each word that is weighted with all articles in the corpus. Define a final highlight score as the product of both of these frequencies. Configure the tool to retrieve the articles where this highlight score is higher for the queried words.

    C. Download a pretrained word-embedding lookup table. Create a titles-embedding table by averaging the title's word embedding for each article in the corpus. Define a highlight score for each word as inversely proportional to the distance between its embedding and the title embedding. Configure the tool to retrieve the articles where this highlight score is higher for the queried words.

    D. Build a term frequency score table for each word in each article of the corpus. Assign a score of zero to all stop words. For any other words, assign a score as the word's frequency in the article. Configure the tool to retrieve the articles where this frequency score is higher for the queried words.
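    Option B describes TF-IDF: term frequency weighted by article length, multiplied by inverse document frequency over the corpus. A small sketch over a toy corpus (the article texts are invented):

```python
import math

# Sketch: the TF-IDF "highlight score" from option B, computed over a
# tiny invented corpus of three one-line articles.
corpus = [
    "markets rally as rates fall",
    "rates rise and markets fall",
    "local team wins championship",
]
docs = [doc.split() for doc in corpus]

def tf(word, doc):
    # Term frequency weighted by the article's length.
    return doc.count(word) / len(doc)

def idf(word, docs):
    # Inverse document frequency over all articles in the corpus.
    containing = sum(1 for d in docs if word in d)
    return math.log(len(docs) / containing)

def highlight(word, doc, docs):
    # Final highlight score: product of the two frequencies.
    return tf(word, doc) * idf(word, docs)

# "championship" appears in only one article, so it gets a higher
# highlight score there than "markets", which appears in two articles.
print(highlight("championship", docs[2], docs))
print(highlight("markets", docs[0], docs))
```

    The IDF factor is what penalizes words that merely match often across the corpus, which addresses the editors' complaint about generic word matches.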

  • Question 35:

    A growing company has a business-critical key performance indicator (KPI) for the uptime of a machine learning (ML) recommendation system. The company is using Amazon SageMaker hosting services to develop a recommendation model in a single Availability Zone within an AWS Region.

    A machine learning (ML) specialist must develop a solution to achieve high availability. The solution must have a recovery time objective (RTO) of 5 minutes.

    Which solution will meet these requirements with the LEAST effort?

    A. Deploy multiple instances for each endpoint in a VPC that spans at least two Regions.

    B. Use the SageMaker auto scaling feature for the hosted recommendation models.

    C. Deploy multiple instances for each production endpoint in a VPC that spans at least two subnets that are in a second Availability Zone.

    D. Frequently generate backups of the production recommendation model. Deploy the backups in a second Region.
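    When a SageMaker endpoint runs on two or more instances, SageMaker spreads them across Availability Zones, which is what provides AZ-level resilience with minimal effort. A sketch of the relevant endpoint-config fragment (names are hypothetical placeholders):

```python
# Sketch: a production variant with at least two instances so SageMaker
# places them in separate Availability Zones. Names are hypothetical.
endpoint_config = {
    "EndpointConfigName": "recommender-ha",
    "ProductionVariants": [{
        "VariantName": "AllTraffic",
        "ModelName": "recommender-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 2,  # >= 2 enables multi-AZ placement
        "InitialVariantWeight": 1.0,
    }],
}
print(endpoint_config["ProductionVariants"][0]["InitialInstanceCount"])
```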

  • Question 36:

    A developer at a retail company is creating a daily demand forecasting model. The company stores the historical hourly demand data in an Amazon S3 bucket. However, the historical data does not include demand data for some hours.

    The developer wants to verify that an autoregressive integrated moving average (ARIMA) approach will be a suitable model for the use case.

    How should the developer verify the suitability of an ARIMA approach?

    A. Use Amazon SageMaker Data Wrangler. Import the data from Amazon S3. Impute hourly missing data. Perform a Seasonal Trend decomposition.

    B. Use Amazon SageMaker Autopilot. Create a new experiment that specifies the S3 data location. Choose ARIMA as the machine learning (ML) problem. Check the model performance.

    C. Use Amazon SageMaker Data Wrangler. Import the data from Amazon S3. Resample data by using the aggregate daily total. Perform a Seasonal Trend decomposition.

    D. Use Amazon SageMaker Autopilot. Create a new experiment that specifies the S3 data location. Impute missing hourly values. Choose ARIMA as the machine learning (ML) problem. Check the model performance.
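    Because the goal is a daily forecast, the hourly data can be aggregated to daily totals before the seasonal-trend decomposition, which also sidesteps the missing hours. A pure-Python stand-in for that resample step, with invented demand values:

```python
# Sketch: resampling hourly demand to daily totals before a
# seasonal-trend decomposition. The (day, hour, demand) tuples are
# invented toy data; real data would come from the S3 bucket.
hourly = [(day, hour, 10 + hour % 3) for day in range(3) for hour in range(24)]

daily = {}
for day, hour, demand in hourly:
    daily[day] = daily.get(day, 0) + demand  # aggregate daily total

print(daily)
```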

  • Question 37:

    A company decides to use Amazon SageMaker to develop machine learning (ML) models. The company will host SageMaker notebook instances in a VPC. The company stores training data in an Amazon S3 bucket. Company security policy states that SageMaker notebook instances must not have internet connectivity.

    Which solution will meet the company's security requirements?

    A. Connect the SageMaker notebook instances that are in the VPC by using AWS Site-to-Site VPN to encrypt all internet-bound traffic. Configure VPC flow logs. Monitor all network traffic to detect and prevent any malicious activity.

    B. Configure the VPC that contains the SageMaker notebook instances to use VPC interface endpoints to establish connections for training and hosting. Modify any existing security groups that are associated with the VPC interface endpoint to allow only outbound connections for training and hosting.

    C. Create an IAM policy that prevents access to the internet. Apply the IAM policy to an IAM role. Assign the IAM role to the SageMaker notebook instances in addition to any IAM roles that are already assigned to the instances.

    D. Create VPC security groups to prevent all incoming and outgoing traffic. Assign the security groups to the SageMaker notebook instances.
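    VPC interface endpoints (option B) let the notebook instances reach the SageMaker API and runtime privately, with no internet route. A sketch of the parameters such endpoints would be created with (the VPC, subnet, and Region values are hypothetical placeholders):

```python
# Sketch: interface-endpoint parameters (as passed to the EC2
# create_vpc_endpoint API) for private access to the SageMaker API and
# runtime. All IDs and the Region are hypothetical placeholders.
sagemaker_endpoints = [
    {
        "VpcEndpointType": "Interface",
        "VpcId": "vpc-0123456789abcdef0",
        "ServiceName": f"com.amazonaws.us-east-1.{svc}",
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "PrivateDnsEnabled": True,
    }
    for svc in ("sagemaker.api", "sagemaker.runtime")
]
print([e["ServiceName"] for e in sagemaker_endpoints])
```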

  • Question 38:

    An insurance company is creating an application to automate car insurance claims. A machine learning (ML) specialist used an Amazon SageMaker Object Detection - TensorFlow built-in algorithm to train a model to detect scratches and dents in images of cars. After the model was trained, the ML specialist noticed that the model performed better on the training dataset than on the testing dataset.

    Which approach should the ML specialist use to improve the performance of the model on the testing data?

    A. Increase the value of the momentum hyperparameter.

    B. Reduce the value of the dropout_rate hyperparameter.

    C. Reduce the value of the learning_rate hyperparameter.

    D. Increase the value of the L2 hyperparameter.
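    The gap between training and testing performance points to overfitting, which L2 regularization counters by adding a weight-magnitude penalty to the training loss. A toy illustration with invented numbers:

```python
# Sketch: how an L2 penalty adds a weight-magnitude term to the training
# loss, discouraging the large weights associated with overfitting.
# The data loss and weights below are invented toy values.
def l2_loss(data_loss, weights, l2):
    return data_loss + l2 * sum(w * w for w in weights)

weights = [3.0, -2.0, 0.5]
print(l2_loss(1.0, weights, l2=0.0))   # no regularization penalty
print(l2_loss(1.0, weights, l2=0.1))   # penalty grows with weight size
```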

  • Question 39:

    A machine learning (ML) developer for an online retailer recently uploaded a sales dataset into Amazon SageMaker Studio. The ML developer wants to obtain importance scores for each feature of the dataset. The ML developer will use the importance scores to feature engineer the dataset.

    Which solution will meet this requirement with the LEAST development effort?

    A. Use SageMaker Data Wrangler to perform a Gini importance score analysis.

    B. Use a SageMaker notebook instance to perform principal component analysis (PCA).

    C. Use a SageMaker notebook instance to perform a singular value decomposition analysis.

    D. Use the multicollinearity feature to perform a lasso feature selection to perform an importance scores analysis.
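    Gini importance (option A) scores a feature by the total Gini impurity reduction its splits achieve in a tree model. A hand-computed impurity decrease for one toy split, to show the quantity Data Wrangler's analysis is built on:

```python
# Sketch: Gini impurity and the impurity decrease from one split on a
# hypothetical feature. The labels are invented toy data.
def gini(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

parent = ["buy"] * 5 + ["skip"] * 5
left = ["buy"] * 4 + ["skip"]   # records where the feature is "low"
right = ["buy"] + ["skip"] * 4  # records where the feature is "high"

# Impurity decrease = parent impurity minus the weighted child impurities.
decrease = gini(parent) - (len(left) / len(parent)) * gini(left) \
                        - (len(right) / len(parent)) * gini(right)
print(round(decrease, 3))
```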

  • Question 40:

    A company is setting up a mechanism for data scientists and engineers from different departments to access an Amazon SageMaker Studio domain. Each department has a unique SageMaker Studio domain.

    The company wants to build a central proxy application that data scientists and engineers can log in to by using their corporate credentials. The proxy application will authenticate users by using the company's existing identity provider (IdP).

    The application will then route users to the appropriate SageMaker Studio domain.

    The company plans to maintain a table in Amazon DynamoDB that contains SageMaker domains for each department.

    How should the company meet these requirements?

    A. Use the SageMaker CreatePresignedDomainUrl API to generate a presigned URL for each domain according to the DynamoDB table. Pass the presigned URL to the proxy application.

    B. Use the SageMaker CreateHumanTaskUi API to generate a UI URL. Pass the URL to the proxy application.

    C. Use the Amazon SageMaker ListHumanTaskUis API to list all UI URLs. Pass the appropriate URL to the DynamoDB table so that the proxy application can use the URL.

    D. Use the SageMaker CreatePresignedNotebookInstanceUrl API to generate a presigned URL. Pass the presigned URL to the proxy application.
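    Option A's flow is: look up the user's department in the DynamoDB table to get the domain ID, then call CreatePresignedDomainUrl with that domain. A sketch of that lookup and the request parameters it produces (all IDs and names are hypothetical placeholders):

```python
# Sketch: resolving a department to its SageMaker Studio domain (standing
# in for the DynamoDB table) and building the parameters for a
# CreatePresignedDomainUrl call. All IDs and names are hypothetical.
domain_table = {
    "research": "d-aaaaaaaaaaaa",
    "engineering": "d-bbbbbbbbbbbb",
}

def presign_request(department, user_profile, table=domain_table):
    return {
        "DomainId": table[department],
        "UserProfileName": user_profile,
        "SessionExpirationDurationInSeconds": 1800,
    }

print(presign_request("research", "alice"))
```

    The proxy application would hand the resulting presigned URL to the user, who lands directly in the correct Studio domain.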

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important and are required by more and more enterprises when you apply for a job. But how can you prepare for the exam effectively? How can you prepare for the exam in a short time with less effort? How can you get an ideal result, and how can you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Amazon exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are confused about your MLS-C01 exam preparation and Amazon certification application, do not hesitate to visit Vcedump.com to find your solutions.