Exam Details

  • Exam Code: PROFESSIONAL-MACHINE-LEARNING-ENGINEER
  • Exam Name: Professional Machine Learning Engineer
  • Certification: Google Certifications
  • Vendor: Google
  • Total Questions: 282 Q&As
  • Last Updated: May 16, 2025

Google Certifications PROFESSIONAL-MACHINE-LEARNING-ENGINEER Questions & Answers

  • Question 181:

    You recently used XGBoost to train a model in Python that will be used for online serving. Your model prediction service will be called by a backend service implemented in Golang running on a Google Kubernetes Engine (GKE) cluster. Your model requires pre- and postprocessing steps. You need to implement the processing steps so that they run at serving time. You want to minimize code changes and infrastructure maintenance, and deploy your model into production as quickly as possible. What should you do?

    A. Use FastAPI to implement an HTTP server. Create a Docker image that runs your HTTP server, and deploy it on your organization's GKE cluster.

    B. Use FastAPI to implement an HTTP server. Create a Docker image that runs your HTTP server. Upload the image to Vertex AI Model Registry, and deploy it to a Vertex AI endpoint.

    C. Use the Predictor interface to implement a custom prediction routine. Build the custom container, upload the container to Vertex AI Model Registry and deploy it to a Vertex AI endpoint.

    D. Use the XGBoost prebuilt serving container when importing the trained model into Vertex AI. Deploy the model to a Vertex AI endpoint. Work with the backend engineers to implement the pre- and postprocessing steps in the Golang backend service.
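
    For reference, a minimal sketch of the custom prediction routine approach in option C, assuming the Vertex AI SDK's Predictor interface; the artifact filename and the example pre- and postprocessing logic are placeholders for your own code:

      import xgboost as xgb
      from google.cloud.aiplatform.prediction.predictor import Predictor
      from google.cloud.aiplatform.utils import prediction_utils

      class XgbPredictor(Predictor):
          def load(self, artifacts_uri: str) -> None:
              # Copy the model artifacts from Cloud Storage and load the booster.
              prediction_utils.download_model_artifacts(artifacts_uri)
              self._model = xgb.Booster()
              self._model.load_model("model.bst")  # assumed artifact name

          def preprocess(self, prediction_input: dict) -> xgb.DMatrix:
              # Example preprocessing: cast the raw feature values to floats.
              rows = [[float(v) for v in row] for row in prediction_input["instances"]]
              return xgb.DMatrix(rows)

          def predict(self, instances: xgb.DMatrix):
              return self._model.predict(instances)

          def postprocess(self, prediction_results) -> dict:
              # Example postprocessing: threshold scores into 0/1 labels.
              return {"predictions": [int(score > 0.5) for score in prediction_results]}

    The class would then be packaged into a serving container with the SDK's LocalModel.build_cpr_model helper, pushed to Artifact Registry, and uploaded and deployed as described in the option.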

  • Question 182:

    You recently deployed a pipeline in Vertex AI Pipelines that trains and pushes a model to a Vertex AI endpoint to serve real-time traffic. You need to continue experimenting and iterating on your pipeline to improve model performance. You plan to use Cloud Build for CI/CD. You want to quickly and easily deploy new pipelines into production, and you want to minimize the chance that the new pipeline implementations will break in production. What should you do?

    A. Set up a CI/CD pipeline that builds and tests your source code. If the tests are successful, use the Google Cloud console to upload the built container to Artifact Registry and upload the compiled pipeline to Vertex AI Pipelines.

    B. Set up a CI/CD pipeline that builds your source code and then deploys built artifacts into a pre-production environment. Run unit tests in the pre-production environment. If the tests are successful, deploy the pipeline to production.

    C. Set up a CI/CD pipeline that builds and tests your source code and then deploys built artifacts into a pre-production environment. After a successful pipeline run in the pre-production environment, deploy the pipeline to production.

    D. Set up a CI/CD pipeline that builds and tests your source code and then deploys built artifacts into a pre-production environment. After a successful pipeline run in the pre-production environment, rebuild the source code and deploy the artifacts to production.
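
    For reference, a minimal sketch of the deployment step a Cloud Build trigger might run after the build and tests pass: compile the pipeline definition and submit it to Vertex AI Pipelines. The project, region, bucket, and file names are assumptions, and training_pipeline stands in for the @dsl.pipeline-decorated function under test:

      from kfp import compiler
      from google.cloud import aiplatform

      # compiler.Compiler().compile(pipeline_func=training_pipeline,
      #                             package_path="pipeline.json")

      aiplatform.init(project="my-project", location="us-central1",
                      staging_bucket="gs://my-staging-bucket")

      job = aiplatform.PipelineJob(
          display_name="training-pipeline",
          template_path="pipeline.json",        # the compiled pipeline spec
          pipeline_root="gs://my-pipeline-root",
      )
      job.submit()  # run the same step against a pre-production project first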

  • Question 183:

    You work for a bank with strict data governance requirements. You recently implemented a custom model to detect fraudulent transactions. You want your training code to download internal data by using an API endpoint hosted in your project's network. You need the data to be accessed in the most secure way, while mitigating the risk of data exfiltration. What should you do?

    A. Enable VPC Service Controls for peerings, and add Vertex AI to a service perimeter.

    B. Create a Cloud Run endpoint as a proxy to the data. Use Identity and Access Management (IAM) authentication to secure access to the endpoint from the training job.

    C. Configure VPC Peering with Vertex AI, and specify the network of the training job.

    D. Download the data to a Cloud Storage bucket before calling the training job.

  • Question 184:

    You are deploying a new version of a model to a production Vertex AI endpoint that is serving traffic. You plan to direct all user traffic to the new model. You need to deploy the model with minimal disruption to your application. What should you do?

    A. 1. Create a new endpoint.

    2. Create a new model. Set it as the default version. Upload the model to Vertex AI Model Registry.

    3. Deploy the new model to the new endpoint.

    4. Update Cloud DNS to point to the new endpoint.

    B. 1. Create a new endpoint.

    2. Create a new model. Set the parentModel parameter to the model ID of the currently deployed model and set it as the default version. Upload the model to Vertex AI Model Registry.

    3. Deploy the new model to the new endpoint, and set the new model to 100% of the traffic.

    C. 1. Create a new model. Set the parentModel parameter to the model ID of the currently deployed model. Upload the model to Vertex AI Model Registry.

    2. Deploy the new model to the existing endpoint, and set the new model to 100% of the traffic

    D. 1. Create a new model. Set it as the default version. Upload the model to Vertex AI Model Registry

    2. Deploy the new model to the existing endpoint
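
    For reference, a minimal sketch of option C with the Vertex AI Python SDK; the resource names, container image, and machine type are assumptions:

      from google.cloud import aiplatform

      aiplatform.init(project="my-project", location="us-central1")

      # Upload the new version under the existing model resource.
      new_model = aiplatform.Model.upload(
          display_name="my-model",
          parent_model="projects/my-project/locations/us-central1/models/1234567890",
          is_default_version=True,
          artifact_uri="gs://my-bucket/model/",
          serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest",
      )

      # Deploy to the endpoint that is already serving traffic and shift all of it
      # to the new version; the old version keeps serving until the switch completes.
      endpoint = aiplatform.Endpoint(
          "projects/my-project/locations/us-central1/endpoints/987654321"
      )
      new_model.deploy(
          endpoint=endpoint,
          machine_type="n1-standard-4",
          traffic_percentage=100,
      )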

  • Question 185:

    You are training an ML model on a large dataset. You are using a TPU to accelerate the training process. You notice that the training process is taking longer than expected. You discover that the TPU is not reaching its full capacity. What should you do?

    A. Increase the learning rate

    B. Increase the number of epochs

    C. Decrease the learning rate

    D. Increase the batch size
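
    For reference, a minimal TensorFlow sketch of option D: scale the global batch size with the number of TPU replicas so that every core gets enough work per step. The per-replica batch size and the dataset function are assumptions:

      import tensorflow as tf

      resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")  # Cloud TPU VM
      tf.config.experimental_connect_to_cluster(resolver)
      tf.tpu.experimental.initialize_tpu_system(resolver)
      strategy = tf.distribute.TPUStrategy(resolver)

      PER_REPLICA_BATCH = 128  # assumed starting point
      global_batch_size = PER_REPLICA_BATCH * strategy.num_replicas_in_sync

      # Batch the input pipeline with the larger global batch size;
      # drop_remainder=True keeps tensor shapes static, which TPUs require.
      # dataset = make_dataset().batch(global_batch_size, drop_remainder=True)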

  • Question 186:

    You work for a retail company. You have a managed tabular dataset in Vertex AI that contains sales data from three different stores. The dataset includes several features, such as store name and sale timestamp. You want to use the data to train a model that makes sales predictions for a new store that will open soon. You need to split the data between the training, validation, and test sets. What approach should you use to split the data?

    A. Use Vertex AI manual split, using the store name feature to assign one store for each set

    B. Use Vertex AI default data split

    C. Use Vertex AI chronological split, and specify the sales timestamp feature as the time variable

    D. Use Vertex AI random split, assigning 70% of the rows to the training set, 10% to the validation set, and 20% to the test set
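
    For reference, a minimal sketch of how a split is specified when training on a Vertex AI tabular dataset; the dataset ID, target column, and split column name are assumptions:

      from google.cloud import aiplatform

      aiplatform.init(project="my-project", location="us-central1")
      dataset = aiplatform.TabularDataset(
          "projects/my-project/locations/us-central1/datasets/1111111111"
      )

      job = aiplatform.AutoMLTabularTrainingJob(
          display_name="sales-forecast",
          optimization_prediction_type="regression",
      )

      # A manual split uses a column whose values are "training", "validation", or
      # "test" (here populated per store); a chronological split would instead pass
      # timestamp_split_column_name with the sale timestamp.
      model = job.run(
          dataset=dataset,
          target_column="sales",
          predefined_split_column_name="split",
      )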

  • Question 187:

    You have developed a BigQuery ML model that predicts customer churn, and deployed the model to Vertex AI Endpoints. You want to automate the retraining of your model by using minimal additional code when model feature values change. You also want to minimize the number of times that your model is retrained to reduce training costs. What should you do?

    A. 1. Enable request-response logging on Vertex AI Endpoints.

    2. Schedule a TensorFlow Data Validation job to monitor prediction drift.

    3. Execute model retraining if there is significant distance between the distributions.

    B. 1. Enable request-response logging on Vertex AI Endpoints.

    2. Schedule a TensorFlow Data Validation job to monitor training/serving skew.

    3. Execute model retraining if there is significant distance between the distributions.

    C. 1. Create a Vertex AI Model Monitoring job configured to monitor prediction drift.

    2. Configure alert monitoring to publish a message to a Pub/Sub queue when a monitoring alert is detected.

    3. Use a Cloud Function to monitor the Pub/Sub queue, and trigger retraining in BigQuery.

    D. 1. Create a Vertex AI Model Monitoring job configured to monitor training/serving skew.

    2. Configure alert monitoring to publish a message to a Pub/Sub queue when a monitoring alert is detected.

    3. Use a Cloud Function to monitor the Pub/Sub queue, and trigger retraining in BigQuery.
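
    For reference, a minimal sketch of the retraining trigger in options C and D: a Pub/Sub-triggered Cloud Function that reruns the BigQuery ML training query when a monitoring alert arrives. The project, dataset, table, and model names are assumptions:

      from google.cloud import bigquery

      RETRAIN_QUERY = """
      CREATE OR REPLACE MODEL `my-project.customer.churn_model`
      OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
      SELECT * FROM `my-project.customer.training_data`
      """

      def retrain_on_alert(event, context):
          """Entry point for a Pub/Sub-triggered (1st gen) Cloud Function."""
          client = bigquery.Client()
          client.query(RETRAIN_QUERY).result()  # blocks until retraining finishes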

  • Question 188:

    You have been tasked with deploying prototype code to production. The feature engineering code is in PySpark and runs on Dataproc Serverless. The model training is executed by using a Vertex AI custom training job. The two steps are not connected, and the model training must currently be run manually after the feature engineering step finishes. You need to create a scalable and maintainable production process that runs end-to-end and tracks the connections between steps. What should you do?

    A. Create a Vertex AI Workbench notebook. Use the notebook to submit the Dataproc Serverless feature engineering job. Use the same notebook to submit the custom model training job. Run the notebook cells sequentially to tie the steps together end-to-end.

    B. Create a Vertex AI Workbench notebook. Initiate an Apache Spark context in the notebook and run the PySpark feature engineering code. Use the same notebook to run the custom model training job in TensorFlow. Run the notebook cells sequentially to tie the steps together end-to-end.

    C. Use the Kubeflow Pipelines SDK to write code that specifies two components:

    1. The first is a Dataproc Serverless component that launches the feature engineering job.

    2. The second is a custom component wrapped in the create_custom_training_job_from_component utility that launches the custom model training job.

    Create a Vertex AI Pipelines job to link and run both components.

    D. Use the Kubeflow Pipelines SDK to write code that specifies two components:

    1. The first component initiates an Apache Spark context that runs the PySpark feature engineering code.

    2. The second component runs the TensorFlow custom model training code.

    Create a Vertex AI Pipelines job to link and run both components.
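
    For reference, a minimal sketch of option C with the Kubeflow Pipelines SDK and google_cloud_pipeline_components; the file URIs, project, region, and machine type are assumptions, exact component parameters vary by library version, and train_op stands in for the existing training code:

      from kfp import compiler, dsl
      from google_cloud_pipeline_components.v1.custom_job import (
          create_custom_training_job_from_component,
      )
      from google_cloud_pipeline_components.v1.dataproc import DataprocPySparkBatchOp

      @dsl.component
      def train_op(features_uri: str):
          # Placeholder for the existing custom training code.
          print(f"training on {features_uri}")

      # Wrap the training component so it runs as a Vertex AI custom training job.
      train_job_op = create_custom_training_job_from_component(
          train_op, machine_type="n1-standard-8"
      )

      @dsl.pipeline(name="feature-engineering-and-training")
      def pipeline(project: str = "my-project", region: str = "us-central1"):
          features = DataprocPySparkBatchOp(
              project=project,
              location=region,
              main_python_file_uri="gs://my-bucket/feature_engineering.py",
          )
          train_job_op(features_uri="gs://my-bucket/features/").after(features)

      compiler.Compiler().compile(pipeline, "pipeline.json")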

  • Question 189:

    You recently deployed a scikit-learn model to a Vertex AI endpoint. You are now testing the model on live production traffic. While monitoring the endpoint, you discover twice as many requests per hour as expected throughout the day. You want the endpoint to efficiently scale when the demand increases in the future to prevent users from experiencing high latency. What should you do?

    A. Deploy two models to the same endpoint, and distribute requests among them evenly

    B. Configure an appropriate minReplicaCount value based on expected baseline traffic

    C. Set the target utilization percentage in the autoscalingMetricSpecs configuration to a higher value

    D. Change the model's machine type to one that utilizes GPUs
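
    For reference, a minimal sketch of option B with the Vertex AI Python SDK; the resource IDs, machine type, and replica counts are assumptions:

      from google.cloud import aiplatform

      aiplatform.init(project="my-project", location="us-central1")

      endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/123")
      model = aiplatform.Model("projects/my-project/locations/us-central1/models/456")

      model.deploy(
          endpoint=endpoint,
          machine_type="n1-standard-4",
          min_replica_count=4,   # baseline sized for the observed traffic
          max_replica_count=20,  # headroom for future growth
          traffic_percentage=100,
      )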

  • Question 190:

    You work at a bank. You have a custom tabular ML model that was provided by the bank's vendor. The training data is not available due to its sensitivity. The model is packaged as a Vertex AI Model serving container, which accepts a string as input for each prediction instance. In each string, the feature values are separated by commas. You want to deploy this model to production for online predictions and monitor the feature distribution over time with minimal effort. What should you do?

    A. 1. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint

    2. Create a Vertex AI Model Monitoring job with feature drift detection as the monitoring objective, and provide an instance schema

    B. 1. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint

    2. Create a Vertex AI Model Monitoring job with feature skew detection as the monitoring objective, and provide an instance schema

    C. 1. Refactor the serving container to accept key-value pairs as input format.

    2. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint.

    3. Create a Vertex AI Model Monitoring job with feature drift detection as the monitoring objective.

    D. 1. Refactor the serving container to accept key-value pairs as input format.

    2. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint.

    3. Create a Vertex AI Model Monitoring job with feature skew detection as the monitoring objective.
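
    For reference, a minimal sketch of the monitoring setup in options A and B, assuming the Vertex AI Python SDK; the resource names, thresholds, schedule, and schema URI are assumptions:

      from google.cloud import aiplatform
      from google.cloud.aiplatform import model_monitoring

      aiplatform.init(project="my-project", location="us-central1")

      objective = model_monitoring.ObjectiveConfig(
          drift_detection_config=model_monitoring.DriftDetectionConfig(
              drift_thresholds={"feature_1": 0.05, "feature_2": 0.05},
          )
      )

      aiplatform.ModelDeploymentMonitoringJob.create(
          display_name="vendor-model-monitoring",
          endpoint="projects/my-project/locations/us-central1/endpoints/123",
          objective_configs=objective,
          schedule_config=model_monitoring.ScheduleConfig(monitor_interval=24),  # hours
          alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
          # The instance schema tells monitoring how to parse the comma-separated
          # string that the serving container accepts as input.
          analysis_instance_schema_uri="gs://my-bucket/instance_schema.yaml",
      )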

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming increasingly important and are required by more and more enterprises when you apply for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and where do you find the most reliable resources? Here on Vcedump.com you will find all the answers. Vcedump.com provides not only Google exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are unsure about your PROFESSIONAL-MACHINE-LEARNING-ENGINEER exam preparation or your Google certification application, do not hesitate to visit Vcedump.com to find your solutions.