Exam Details

  • Exam Code: PROFESSIONAL-MACHINE-LEARNING-ENGINEER
  • Exam Name: Professional Machine Learning Engineer
  • Certification: Google Certifications
  • Vendor: Google
  • Total Questions: 282 Q&As
  • Last Updated: May 16, 2025

Google Certifications: PROFESSIONAL-MACHINE-LEARNING-ENGINEER Questions & Answers

  • Question 191:

    You are implementing a batch inference ML pipeline in Google Cloud. The model was developed using TensorFlow and is stored in SavedModel format in Cloud Storage. You need to apply the model to a historical dataset containing 10 TB of data that is stored in a BigQuery table. How should you perform the inference?

    A. Export the historical data to Cloud Storage in Avro format. Configure a Vertex AI batch prediction job to generate predictions for the exported data

    B. Import the TensorFlow model by using the CREATE MODEL statement in BigQuery ML. Apply the historical data to the TensorFlow model

    C. Export the historical data to Cloud Storage in CSV format. Configure a Vertex AI batch prediction job to generate predictions for the exported data

    D. Configure a Vertex AI batch prediction job to apply the model to the historical data in BigQuery
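
    Whichever data path is chosen, the mechanics are a Vertex AI batch prediction job. Below is a minimal, hypothetical sketch using the Vertex AI Python SDK; the project, region, model ID, and table names are placeholders, and it assumes the SavedModel has already been imported into Vertex AI Model Registry. It shows a job that reads instances directly from a BigQuery table and writes predictions back to BigQuery.

    ```python
    # Hypothetical sketch: Vertex AI batch prediction over BigQuery data.
    # Project, region, model ID, and table names are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Assumes the SavedModel has already been imported into Vertex AI Model Registry.
    model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

    batch_job = model.batch_predict(
        job_display_name="historical-batch-inference",
        bigquery_source="bq://my-project.sales.historical_data",
        bigquery_destination_prefix="bq://my-project.sales_predictions",
        machine_type="n1-standard-4",
        sync=False,  # return immediately; the job runs asynchronously
    )
    ```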

  • Question 192:

    You recently deployed a model to a Vertex AI endpoint. Your data drifts frequently, so you have enabled request-response logging and created a Vertex AI Model Monitoring job. You have observed that your model is receiving higher traffic than expected. You need to reduce the model monitoring cost while continuing to quickly detect drift. What should you do?

    A. Replace the monitoring job with a Dataflow pipeline that uses TensorFlow Data Validation (TFDV)

    B. Replace the monitoring job with a custom SQL script to calculate statistics on the features and predictions in BigQuery

    C. Decrease the sample_rate parameter in the RandomSampleConfig of the monitoring job

    D. Increase the monitor_interval parameter in the ScheduleConfig of the monitoring job
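
    The last two options adjust parameters on the monitoring job itself. The hedged sketch below shows where sample_rate and monitor_interval sit when a monitoring job is created with the Vertex AI Python SDK; the endpoint ID, feature name, drift threshold, and email address are placeholders.

    ```python
    # Hedged sketch of the two knobs the options mention: sample_rate
    # (fraction of requests analyzed) and monitor_interval (how often a
    # monitoring run executes). Endpoint ID, feature name, threshold, and
    # email address are placeholders.
    from google.cloud import aiplatform
    from google.cloud.aiplatform import model_monitoring

    aiplatform.init(project="my-project", location="us-central1")

    monitoring_job = aiplatform.ModelDeploymentMonitoringJob.create(
        display_name="drift-monitoring",
        endpoint="projects/my-project/locations/us-central1/endpoints/123",
        logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.2),
        schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
        objective_configs=model_monitoring.ObjectiveConfig(
            drift_detection_config=model_monitoring.DriftDetectionConfig(
                drift_thresholds={"feature_1": 0.03}
            )
        ),
        alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
    )
    ```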

  • Question 193:

    You work for a retail company. You have created a Vertex AI forecast model that produces monthly item sales predictions. You want to quickly create a report that will help to explain how the model calculates the predictions. You have one month of recent actual sales data that was not included in the training dataset. How should you generate data for your report?

    A. Create a batch prediction job by using the actual sales data. Compare the predictions to the actuals in the report.

    B. Create a batch prediction job by using the actual sales data, and configure the job settings to generate feature attributions. Compare the results in the report.

    C. Generate counterfactual examples by using the actual sales data. Create a batch prediction job using the actual sales data and the counterfactual examples. Compare the results in the report.

    D. Train another model by using the same training dataset as the original, but exclude some columns. Using the actual sales data, create one batch prediction job by using the new model and another one with the original model. Compare the two sets of predictions in the report.
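
    For reference, feature attributions on a batch prediction job are switched on with a single flag in the Vertex AI Python SDK. A minimal, hypothetical sketch follows; the model ID and table names are placeholders, and it assumes the forecast model supports explanations.

    ```python
    # Hypothetical sketch: batch prediction with feature attributions enabled.
    # Model ID and table names are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    forecast_model = aiplatform.Model("projects/my-project/locations/us-central1/models/987")

    job = forecast_model.batch_predict(
        job_display_name="sales-forecast-explained",
        bigquery_source="bq://my-project.sales.recent_actuals",
        bigquery_destination_prefix="bq://my-project.sales_reports",
        generate_explanation=True,  # emit per-feature attributions with each prediction
    )
    ```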

  • Question 194:

    Your team has a model deployed to a Vertex AI endpoint. You have created a Vertex AI pipeline that automates the model training process and is triggered by a Cloud Function. You need to prioritize keeping the model up-to-date, but also minimize retraining costs. How should you configure retraining?

    A. Configure Pub/Sub to call the Cloud Function when a sufficient amount of new data becomes available

    B. Configure a Cloud Scheduler job that calls the Cloud Function at a predetermined frequency that fits your team's budget

    C. Enable model monitoring on the Vertex AI endpoint. Configure Pub/Sub to call the Cloud Function when anomalies are detected

    D. Enable model monitoring on the Vertex AI endpoint. Configure Pub/Sub to call the Cloud Function when feature drift is detected
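
    All four options end with the same Cloud Function launching the pipeline. A hedged sketch of such a function (1st-gen Pub/Sub handler signature; the template path, pipeline root, and project values are placeholders):

    ```python
    # Hedged sketch of a Pub/Sub-triggered Cloud Function (1st-gen handler
    # signature) that submits the retraining pipeline. The pipeline template
    # path, pipeline root, and project values are placeholders.
    from google.cloud import aiplatform

    def trigger_retraining(event, context):
        """Entry point invoked when a Pub/Sub message arrives."""
        aiplatform.init(project="my-project", location="us-central1")

        pipeline_job = aiplatform.PipelineJob(
            display_name="model-retraining",
            template_path="gs://my-pipelines/retraining_pipeline.json",
            pipeline_root="gs://my-pipelines/runs",
            enable_caching=True,
        )
        pipeline_job.submit()  # start the run without blocking the function
    ```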

  • Question 195:

    Your company stores a large number of audio files of phone calls made to your customer call center in an on-premises database. Each audio file is in WAV format and is approximately 5 minutes long. You need to analyze these audio files for customer sentiment. You plan to use the Speech-to-Text API. You want to use the most efficient approach. What should you do?

    A. 1. Upload the audio files to Cloud Storage. 2. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 3. Call the predict method of an AutoML sentiment analysis model to analyze the transcriptions.

    B. 1. Upload the audio files to Cloud Storage. 2. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 3. Create a Cloud Function that calls the Natural Language API by using the analyzeSentiment method.

    C. 1. Iterate over your local files in Python. 2. Use the Speech-to-Text Python library to create a speech.RecognitionAudio object, and set the content to the audio file data. 3. Call the speech:recognize API endpoint to generate transcriptions. 4. Call the predict method of an AutoML sentiment analysis model to analyze the transcriptions.

    D. 1. Iterate over your local files in Python. 2. Use the Speech-to-Text Python library to create a speech.RecognitionAudio object, and set the content to the audio file data. 3. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 4. Call the Natural Language API by using the analyzeSentiment method.
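
    Every option combines transcription with a sentiment step. For reference, a hedged sketch of the asynchronous speech:longrunningrecognize call on a Cloud Storage URI followed by the Natural Language analyzeSentiment call; the bucket URI and recognition config values are placeholders.

    ```python
    # Hedged sketch of the two API calls the options describe: asynchronous
    # transcription of a WAV file already in Cloud Storage, then sentiment
    # analysis of the transcript. The bucket URI and config are placeholders.
    from google.cloud import language_v1, speech

    speech_client = speech.SpeechClient()
    operation = speech_client.long_running_recognize(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            language_code="en-US",
        ),
        audio=speech.RecognitionAudio(uri="gs://my-bucket/call-0001.wav"),
    )
    response = operation.result(timeout=600)
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)

    language_client = language_v1.LanguageServiceClient()
    sentiment = language_client.analyze_sentiment(
        document=language_v1.Document(
            content=transcript,
            type_=language_v1.Document.Type.PLAIN_TEXT,
        )
    ).document_sentiment
    print(sentiment.score, sentiment.magnitude)
    ```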

  • Question 196:

    You work for a social media company. You want to create a no-code image classification model for an iOS mobile application to identify fashion accessories. You have a labeled dataset in Cloud Storage. You need to configure a training workflow that minimizes cost and serves predictions with the lowest possible latency. What should you do?

    A. Train the model by using AutoML, and register the model in Vertex AI Model Registry. Configure your mobile application to send batch requests during prediction.

    B. Train the model by using AutoML Edge, and export it as a Core ML model. Configure your mobile application to use the .mlmodel file directly.

    C. Train the model by using AutoML Edge, and export the model as a TFLite model. Configure your mobile application to use the .tflite file directly.

    D. Train the model by using AutoML, and expose the model as a Vertex AI endpoint. Configure your mobile application to invoke the endpoint during prediction.
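
    Options B and C both end with exporting an AutoML Edge model for on-device serving. A hedged sketch of that export with the Vertex AI Python SDK; the model resource name and bucket are placeholders, and the accepted export_format_id values depend on how the model was trained.

    ```python
    # Hedged sketch: exporting a trained AutoML Edge image model for on-device
    # serving. The model resource name and bucket are placeholders, and the
    # accepted export_format_id values depend on how the model was trained.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    edge_model = aiplatform.Model("projects/my-project/locations/us-central1/models/456")

    edge_model.export_model(
        export_format_id="tflite",  # "core-ml" is also offered for some edge image models
        artifact_destination="gs://my-bucket/exported-model/",
    )
    ```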

  • Question 197:

    You work for a retail company. You have been asked to develop a model to predict whether a customer will purchase a product on a given day. Your team has processed the company's sales data and created a table with the following columns:

    1. Customer_id
    2. Product_id
    3. Date
    4. Days_since_last_purchase (measured in days)
    5. Average_purchase_frequency (measured in 1/days)
    6. Purchase (binary class; whether the customer purchased the product on the Date)

    You need to interpret your model's results for each individual prediction. What should you do?

    A. Create a BigQuery table. Use BigQuery ML to build a boosted tree classifier. Inspect the partition rules of the trees to understand how each prediction flows through the trees.

    B. Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint and enable feature attributions. Use the “explain” method to get feature attribution values for each individual prediction.

    C. Create a BigQuery table. Use BigQuery ML to build a logistic regression classification model. Use the values of the coefficients of the model to interpret the feature importance, with higher values corresponding to more importance

    D. Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint. At each prediction, enable L1 regularization to detect non-informative features.
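
    The "explain" method mentioned in option B is an ordinary online prediction request with attributions attached. A hedged sketch follows; the endpoint ID and feature values are placeholders, value formats depend on the model's input schema, and explanations must have been enabled when the model was deployed.

    ```python
    # Hedged sketch of an online explanation request. The endpoint ID and
    # feature values are placeholders; value formats depend on the model's
    # input schema, and explanations must be enabled on the deployed model.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/789")

    response = endpoint.explain(
        instances=[{
            "Customer_id": "C1024",
            "Product_id": "P88",
            "Date": "2024-03-15",
            "Days_since_last_purchase": "12",
            "Average_purchase_frequency": "0.07",
        }]
    )
    # Each explanation carries per-feature attribution values for that prediction.
    for explanation in response.explanations:
        for attribution in explanation.attributions:
            print(attribution.feature_attributions)
    ```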

  • Question 198:

    You work for a company that captures live video footage of checkout areas in their retail stores. You need to use the live video footage to build a model to detect the number of customers waiting for service in near real time. You want to implement a solution quickly and with minimal effort. How should you build the model?

    A. Use the Vertex AI Vision Occupancy Analytics model.

    B. Use the Vertex AI Vision Person/vehicle detector model.

    C. Train an AutoML object detection model on an annotated dataset by using Vertex AutoML.

    D. Train a Seq2Seq+ object detection model on an annotated dataset by using Vertex AutoML.
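
    For comparison with the pre-built Vertex AI Vision models in options A and B, the annotate-and-train route in options C and D would look roughly like the hypothetical sketch below (dataset paths, display names, and the training budget are placeholders), which illustrates why it takes more setup than a pre-built model.

    ```python
    # Hypothetical sketch of the annotate-and-train route in options C and D.
    # Dataset paths, display names, and the training budget are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    dataset = aiplatform.ImageDataset.create(
        display_name="checkout-frames",
        gcs_source="gs://my-bucket/annotations.jsonl",
        import_schema_uri=aiplatform.schema.dataset.ioformat.image.bounding_box,
    )

    job = aiplatform.AutoMLImageTrainingJob(
        display_name="customer-detector",
        prediction_type="object_detection",
    )
    model = job.run(
        dataset=dataset,
        model_display_name="customer-detector",
        budget_milli_node_hours=20000,
    )
    ```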

  • Question 199:

    You work as an analyst at a large banking firm. You are developing a robust, scalable ML pipeline to train several regression and classification models. Your primary focus for the pipeline is model interpretability. You want to productionize the pipeline as quickly as possible. What should you do?

    A. Use Tabular Workflow for Wide and Deep through Vertex AI Pipelines to jointly train wide linear models and deep neural networks

    B. Use Google Kubernetes Engine to build a custom training pipeline for XGBoost-based models

    C. Use Tabular Workflow for TabNet through Vertex AI Pipelines to train attention-based models

    D. Use Cloud Composer to build the training pipelines for custom deep learning-based models

  • Question 200:

    You developed a Transformer model in TensorFlow to translate text. Your training data includes millions of documents in a Cloud Storage bucket. You plan to use distributed training to reduce training time. You need to configure the training job while minimizing the effort required to modify code and to manage the cluster's configuration. What should you do?

    A. Create a Vertex AI custom training job with GPU accelerators for the second worker pool. Use tf.distribute.MultiWorkerMirroredStrategy for distribution.

    B. Create a Vertex AI custom distributed training job with Reduction Server. Use N1 high-memory machine type instances for the first and second pools, and use N1 high-CPU machine type instances for the third worker pool.

    C. Create a training job that uses Cloud TPU VMs. Use tf.distribute.TPUStrategy for distribution.

    D. Create a Vertex AI custom training job with a single worker pool of A2 GPU machine type instances. Use tf.distribute.MirroredStrategy for distribution.
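
    The options hinge on which tf.distribute strategy matches the cluster shape. A hedged sketch contrasting the two strategies named above; the layers and dataset are placeholders, not the Transformer from the question.

    ```python
    # Hedged sketch contrasting the two tf.distribute strategies named in the
    # options. The layers below are placeholders, not the Transformer itself.
    import tensorflow as tf

    # Single machine, multiple GPUs:
    # strategy = tf.distribute.MirroredStrategy()

    # Multiple workers, each with its own accelerators; cluster membership is
    # read from the TF_CONFIG environment variable that Vertex AI custom
    # training sets on each replica.
    strategy = tf.distribute.MultiWorkerMirroredStrategy()

    with strategy.scope():
        # Variables created inside the scope are mirrored across replicas.
        model = tf.keras.Sequential([
            tf.keras.layers.Embedding(input_dim=32000, output_dim=512),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(32000, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # model.fit(train_dataset) would then shard the input across workers.
    ```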

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important and are required by more and more enterprises when you apply for a job. But how do you prepare for the exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and how do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Google exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are unsure about your PROFESSIONAL-MACHINE-LEARNING-ENGINEER exam preparation or your Google certification application, do not hesitate to visit Vcedump.com to find your solutions.