Exam Details

  • Exam Code: PROFESSIONAL-MACHINE-LEARNING-ENGINEER
  • Exam Name: Professional Machine Learning Engineer
  • Certification: Google Certifications
  • Vendor: Google
  • Total Questions: 282 Q&As
  • Last Updated: May 24, 2025

Google Certifications PROFESSIONAL-MACHINE-LEARNING-ENGINEER Questions & Answers

  • Question 211:

    You have trained a model by using data that was preprocessed in a batch Dataflow pipeline. Your use case requires real-time inference. You want to ensure that the data preprocessing logic is applied consistently between training and serving. What should you do?

    A. Perform data validation to ensure that the input data to the pipeline is the same format as the input data to the endpoint.

    B. Refactor the transformation code in the batch data pipeline so that it can be used outside of the pipeline. Use the same code in the endpoint.

    C. Refactor the transformation code in the batch data pipeline so that it can be used outside of the pipeline. Share this code with the end users of the endpoint.

    D. Batch the real-time requests by using a time window and then use the Dataflow pipeline to preprocess the batched requests. Send the preprocessed requests to the endpoint.
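The idea behind option B — factoring the transformation logic out of the batch pipeline into one function that both the Dataflow job and the serving endpoint import — can be sketched in plain Python. This is an illustrative stand-in, not actual Dataflow or endpoint code, and the field names are hypothetical:

```python
import math

def preprocess(instance: dict) -> dict:
    """Shared transformation, applied identically at training and serving time."""
    return {
        "amount_log": round(math.log1p(instance["amount"]), 6),
        "category": instance["category"].strip().lower(),
    }

# Batch/training path: the Dataflow pipeline would map this function over rows.
training_features = [preprocess(r) for r in [{"amount": 100.0, "category": " Books "}]]

# Serving path: the endpoint applies the very same function to each request,
# so there is no training/serving skew in the feature logic.
serving_features = preprocess({"amount": 100.0, "category": " Books "})
```

Because both paths call the same function, any change to the transformation is picked up by training and serving together.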

  • Question 212:

    You need to develop a custom TensorFlow model that will be used for online predictions. The training data is stored in BigQuery. You need to apply instance-level data transformations to the data for model training and serving. You want to use the same preprocessing routine during model training and serving. How should you configure the preprocessing routine?

    A. Create a BigQuery script to preprocess the data, and write the result to another BigQuery table.

    B. Create a pipeline in Vertex AI Pipelines to read the data from BigQuery and preprocess it using a custom preprocessing component.

    C. Create a preprocessing function that reads and transforms the data from BigQuery. Create a Vertex AI custom prediction routine that calls the preprocessing function at serving time.

    D. Create an Apache Beam pipeline to read the data from BigQuery and preprocess it by using TensorFlow Transform and Dataflow.
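The shape that option C describes — a predictor whose preprocessing step is the same routine used to build the training data — can be mimicked locally. The class below is a simplified stand-in for a Vertex AI custom prediction routine, not the actual SDK interface, and all names are hypothetical:

```python
def preprocess(instance: dict) -> list:
    """Instance-level transform reused for both training and serving."""
    return [instance["clicks"] / max(instance["impressions"], 1)]

class SketchPredictor:
    """Mirrors the preprocess -> predict -> postprocess flow of a custom
    prediction routine, with a single weight standing in for a trained model."""

    def __init__(self, weight: float):
        self.weight = weight

    def predict(self, instance: dict) -> dict:
        features = preprocess(instance)      # shared preprocessing routine
        score = self.weight * features[0]    # "model" inference
        return {"score": round(score, 4)}    # postprocessing

predictor = SketchPredictor(weight=2.0)
result = predictor.predict({"clicks": 5, "impressions": 10})
```

The point is structural: because `preprocess` is a standalone function, the training job and the serving routine can import the identical code.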

  • Question 213:

    You are pre-training a large language model on Google Cloud. This model includes custom TensorFlow operations in the training loop. Model training will use a large batch size, and you expect training to take several weeks. You need to configure a training architecture that minimizes both training time and compute costs. What should you do?

    A. Implement 8 workers of a2-megagpu-16g machines by using tf.distribute.MultiWorkerMirroredStrategy.

    B. Implement a TPU Pod slice with --accelerator-type=v4-128 by using tf.distribute.TPUStrategy.

    C. Implement 16 workers of c2d-highcpu-32 machines by using tf.distribute.MirroredStrategy.

    D. Implement 16 workers of a2-highgpu-8g machines by using tf.distribute.MultiWorkerMirroredStrategy.

  • Question 214:

    You are building a TensorFlow text-to-image generative model by using a dataset that contains billions of images with their respective captions. You want to create a low-maintenance, automated workflow that reads the data from a Cloud Storage bucket, collects statistics, splits the dataset into training/validation/test datasets, performs data transformations, trains the model using the training/validation datasets, and validates the model by using the test dataset. What should you do?

    A. Use the Apache Airflow SDK to create multiple operators that use Dataflow and Vertex AI services. Deploy the workflow on Cloud Composer.

    B. Use the MLFlow SDK and deploy it on a Google Kubernetes Engine cluster. Create multiple components that use Dataflow and Vertex AI services.

    C. Use the Kubeflow Pipelines (KFP) SDK to create multiple components that use Dataflow and Vertex AI services. Deploy the workflow on Vertex AI Pipelines.

    D. Use the TensorFlow Extended (TFX) SDK to create multiple components that use Dataflow and Vertex AI services. Deploy the workflow on Vertex AI Pipelines.
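One step of the workflow described above — splitting the dataset into training/validation/test sets — can be sketched in plain Python. In the exam scenario this would run as a pipeline component (e.g., on Dataflow) rather than in-process, and the 80/10/10 ratios here are illustrative:

```python
import random

def split_dataset(examples, train=0.8, val=0.1, seed=42):
    """Shuffle deterministically, then cut into train/validation/test slices."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(100)))
```

Seeding the shuffle keeps the split reproducible across pipeline runs, which matters when the validation step must compare models trained on the same partition.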

  • Question 215:

    You are developing an ML pipeline using Vertex AI Pipelines. You want your pipeline to upload a new version of the XGBoost model to Vertex AI Model Registry and deploy it to Vertex AI Endpoints for online inference. You want to use the simplest approach. What should you do?

    A. Use the Vertex AI REST API within a custom component based on a vertex-ai/prediction/xgboost-cpu image

    B. Use the Vertex AI ModelEvaluationOp component to evaluate the model

    C. Use the Vertex AI SDK for Python within a custom component based on a python:3.10 image

    D. Chain the Vertex AI ModelUploadOp and ModelDeployOp components together
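For reference, chaining the prebuilt components named in option D looks roughly like the sketch below. This is illustrative pseudocode, not tested: it assumes the `google_cloud_pipeline_components` package, and the artifact path, image URI, and argument names should be checked against the current component reference before use.

```python
# Illustrative sketch only; verify component signatures against the
# google_cloud_pipeline_components documentation.
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from google_cloud_pipeline_components.v1.endpoint import EndpointCreateOp, ModelDeployOp
from kfp import dsl

@dsl.pipeline(name="upload-and-deploy")
def pipeline(project: str):
    upload = ModelUploadOp(
        project=project,
        display_name="xgb-model",
        artifact_uri="gs://my-bucket/model/",  # hypothetical path
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"),
    )
    endpoint = EndpointCreateOp(project=project, display_name="xgb-endpoint")
    ModelDeployOp(
        model=upload.outputs["model"],          # output of the upload step
        endpoint=endpoint.outputs["endpoint"],  # endpoint created above
        dedicated_resources_machine_type="n1-standard-4",
        dedicated_resources_min_replica_count=1,
        dedicated_resources_max_replica_count=1,
    )
```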

  • Question 216:

    You work for an online retailer. Your company has a few thousand short lifecycle products. Your company has five years of sales data stored in BigQuery. You have been asked to build a model that will make monthly sales predictions for each product. You want to use a solution that can be implemented quickly with minimal effort. What should you do?

    A. Use Prophet on Vertex AI Training to build a custom model.

    B. Use Vertex AI Forecast to build a NN-based model.

    C. Use BigQuery ML to build a statistical ARIMA_PLUS model.

    D. Use TensorFlow on Vertex AI Training to build a custom model.

  • Question 217:

    You are creating a model training pipeline to predict sentiment scores from text-based product reviews. You want to have control over how the model parameters are tuned, and you will deploy the model to an endpoint after it has been trained. You will use Vertex AI Pipelines to run the pipeline. You need to decide which Google Cloud pipeline components to use. What components should you choose?

    A. TabularDatasetCreateOp, CustomTrainingJobOp, and EndpointCreateOp

    B. TextDatasetCreateOp, AutoMLTextTrainingOp, and EndpointCreateOp

    C. TabularDatasetCreateOp, AutoMLTextTrainingOp, and ModelDeployOp

    D. TextDatasetCreateOp, CustomTrainingJobOp, and ModelDeployOp

  • Question 218:

    Your team frequently creates new ML models and runs experiments. Your team pushes code to a single repository hosted on Cloud Source Repositories. You want to create a continuous integration pipeline that automatically retrains the models whenever there is any modification of the code. What should be your first step to set up the CI pipeline?

    A. Configure a Cloud Build trigger with the event set as "Pull Request"

    B. Configure a Cloud Build trigger with the event set as "Push to a branch"

    C. Configure a Cloud Function that builds the repository each time there is a code change

    D. Configure a Cloud Function that builds the repository each time a new branch is created

  • Question 219:

    You have built a custom model that performs several memory-intensive preprocessing tasks before it makes a prediction. You deployed the model to a Vertex AI endpoint, and validated that results were received in a reasonable amount of time. After routing user traffic to the endpoint, you discover that the endpoint does not autoscale as expected when receiving multiple requests. What should you do?

    A. Use a machine type with more memory

    B. Decrease the number of workers per machine

    C. Increase the CPU utilization target in the autoscaling configuration.

    D. Decrease the CPU utilization target in the autoscaling configuration.
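A back-of-the-envelope calculation shows why the CPU utilization target matters for a memory-bound, CPU-light workload. This sketch assumes a proportional autoscaler of the kind used by Kubernetes' HPA; Vertex AI's exact algorithm is not specified at this level of detail, so treat the formula as an analogy:

```python
import math

def desired_replicas(current_replicas: int, current_cpu_pct: float,
                     target_cpu_pct: float) -> int:
    """Proportional scaling rule: grow replicas until observed CPU
    utilization would land on the target."""
    return math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)

# Memory-intensive preprocessing keeps CPU utilization low, say 30%.
# Against a 60% target, the autoscaler sees no reason to add replicas...
replicas_default = desired_replicas(current_replicas=2, current_cpu_pct=30,
                                    target_cpu_pct=60)
# ...but against a 20% target, the same load triggers a scale-out.
replicas_lowered = desired_replicas(current_replicas=2, current_cpu_pct=30,
                                    target_cpu_pct=20)
```

With a lower target, scale-out happens before memory pressure saturates each replica, even though CPU never approaches 100%.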

  • Question 220:

    Your company manages an ecommerce website. You developed an ML model that recommends additional products to users in near real time based on items currently in the user's cart. The workflow will include the following processes:

    1. The website will send a Pub/Sub message with the relevant data and then receive a message with the prediction from Pub/Sub.

    2. Predictions will be stored in BigQuery.

    3. The model will be stored in a Cloud Storage bucket and will be updated frequently.

    You want to minimize prediction latency and the effort required to update the model. How should you reconfigure the architecture?

    A. Write a Cloud Function that loads the model into memory for prediction. Configure the function to be triggered when messages are sent to Pub/Sub.

    B. Create a pipeline in Vertex AI Pipelines that performs preprocessing, prediction, and postprocessing. Configure the pipeline to be triggered by a Cloud Function when messages are sent to Pub/Sub.

    C. Expose the model as a Vertex AI endpoint. Write a custom DoFn in a Dataflow job that calls the endpoint for prediction.

    D. Use the RunInference API with WatchFilePattern in a Dataflow job that wraps around the model and serves predictions.
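The mechanism that option D's `WatchFilePattern` provides — hot-swapping the in-memory model when the file in Cloud Storage changes, so frequent model updates need no redeployment — can be illustrated with a plain-Python analogy. Beam's `RunInference` does this inside a streaming Dataflow job; the local polling loop below is only a sketch of the idea, with a single float standing in for a model:

```python
import os
import tempfile

class WatchingPredictor:
    """Reloads the 'model' whenever the watched file's mtime changes."""

    def __init__(self, model_path: str):
        self.model_path = model_path
        self.mtime = None
        self.model = None

    def _maybe_reload(self):
        mtime = os.path.getmtime(self.model_path)
        if mtime != self.mtime:               # new model file detected
            with open(self.model_path) as f:
                self.model = float(f.read())  # toy "model": one weight
            self.mtime = mtime

    def predict(self, x: float) -> float:
        self._maybe_reload()
        return self.model * x

# Write an initial "model" file, then serve a prediction from it.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("2.0")
    path = f.name
predictor = WatchingPredictor(path)
first = predictor.predict(3.0)       # uses weight 2.0

# Simulate a frequent model update: overwrite the file in place.
with open(path, "w") as f:
    f.write("5.0")
os.utime(path, (0, 0))               # force a distinct mtime for the demo
second = predictor.predict(3.0)      # hot-swapped weight, no redeploy
```

The serving process keeps running throughout the update, which is the property the question's "updated frequently" requirement is after.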

Tips on How to Prepare for the Exams

Nowadays, certification exams have become increasingly important and are required by more and more enterprises when hiring. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and how do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Google exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are unsure about your PROFESSIONAL-MACHINE-LEARNING-ENGINEER exam preparation or Google certification application, do not hesitate to visit Vcedump.com to find your solutions.