Exam Details

  • Exam Code: PROFESSIONAL-MACHINE-LEARNING-ENGINEER
  • Exam Name: Professional Machine Learning Engineer
  • Certification: Google Certifications
  • Vendor: Google
  • Total Questions: 282 Q&As
  • Last Updated: May 24, 2025

Google Certifications PROFESSIONAL-MACHINE-LEARNING-ENGINEER Questions & Answers

  • Question 251:

    You work at an ecommerce startup. You need to create a customer churn prediction model. Your company's recent sales records are stored in a BigQuery table. You want to understand how your initial model is making predictions. You also want to iterate on the model as quickly as possible while minimizing cost. How should you build your first model?

    A. Export the data to a Cloud Storage bucket. Load the data into a pandas DataFrame on Vertex AI Workbench and train a logistic regression model with scikit-learn.

    B. Create a tf.data.Dataset by using the TensorFlow BigQueryClient. Implement a deep neural network in TensorFlow.

    C. Prepare the data in BigQuery and associate the data with a Vertex AI dataset. Create an AutoMLTabularTrainingJob to train a classification model.

    D. Export the data to a Cloud Storage bucket. Create a tf.data.Dataset to read the data from Cloud Storage. Implement a deep neural network in TensorFlow.
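
    For context on the trade-offs in Question 251, here is a minimal sketch of the scikit-learn route described in option A: pull the BigQuery table into a pandas DataFrame and fit an interpretable logistic regression. The project, table, and column names are hypothetical, and the sketch illustrates the mechanics of one option rather than endorsing a particular answer.

```python
# Sketch of option A's workflow (hypothetical project/table/column names).
from google.cloud import bigquery
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

client = bigquery.Client(project="my-project")            # assumes default credentials
df = client.query(
    "SELECT recency_days, order_count, total_spend, churned "
    "FROM `my-project.sales.customer_features`"
).to_dataframe()                                           # needs pandas + db-dtypes installed

X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))
print("Coefficients:", dict(zip(X.columns, model.coef_[0])))   # easy to inspect per feature
```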

  • Question 252:

    You are developing a training pipeline for a new XGBoost classification model based on tabular data. The data is stored in a BigQuery table. You need to complete the following steps:

    1. Randomly split the data into training and evaluation datasets in a 65/35 ratio
    2. Conduct feature engineering
    3. Obtain metrics for the evaluation dataset
    4. Compare models trained in different pipeline executions

    How should you execute these steps?

    A. 1. Using Vertex AI Pipelines, add a component to divide the data into training and evaluation sets, and add another component for feature engineering. 2. Enable autologging of metrics in the training component. 3. Compare pipeline runs in Vertex AI Experiments.

    B. 1. Using Vertex AI Pipelines, add a component to divide the data into training and evaluation sets, and add another component for feature engineering. 2. Enable autologging of metrics in the training component. 3. Compare models using the artifacts’ lineage in Vertex ML Metadata.

    C. 1. In BigQuery ML, use the CREATE MODEL statement with BOOSTED_TREE_CLASSIFIER as the model type and use BigQuery to handle the data splits. 2. Use a SQL view to apply feature engineering and train the model using the data in that view. 3. Compare the evaluation metrics of the models by using a SQL query with the ML.TRAINING_INFO statement.

    D. 1. In BigQuery ML, use the CREATE MODEL statement with BOOSTED_TREE_CLASSIFIER as the model type and use BigQuery to handle the data splits. 2. Use ML TRANSFORM to specify the feature engineering transformations and train the model using the data in the table. 3. Compare the evaluation metrics of the models by using a SQL query with the ML.TRAINING_INFO statement.
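
    To make the BigQuery ML route in options C and D of Question 252 concrete, here is a sketch that creates a BOOSTED_TREE_CLASSIFIER with a random 65/35 split, applies a TRANSFORM clause for feature engineering, and reads back evaluation and training metrics. Dataset, table, and column names are hypothetical; the SQL could equally be run from the BigQuery console.

```python
# Hypothetical dataset/table/column names; SQL follows BigQuery ML syntax.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

client.query("""
CREATE OR REPLACE MODEL `my-project.ml.churn_boosted_tree`
TRANSFORM(
  ML.STANDARD_SCALER(total_spend) OVER() AS total_spend_scaled,
  order_count,
  label
)
OPTIONS(
  model_type = 'BOOSTED_TREE_CLASSIFIER',
  input_label_cols = ['label'],
  data_split_method = 'RANDOM',
  data_split_eval_fraction = 0.35       -- 65/35 train/eval split
) AS
SELECT total_spend, order_count, label
FROM `my-project.ml.training_data`
""").result()

# Evaluation metrics and per-iteration training info, usable for comparing models.
print(client.query(
    "SELECT * FROM ML.EVALUATE(MODEL `my-project.ml.churn_boosted_tree`)"
).to_dataframe())
print(client.query(
    "SELECT * FROM ML.TRAINING_INFO(MODEL `my-project.ml.churn_boosted_tree`)"
).to_dataframe())
```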

  • Question 253:

    You work for a company that sells corporate electronic products to thousands of businesses worldwide. Your company stores historical customer data in BigQuery. You need to build a model that predicts customer lifetime value over the next three years. You want to use the simplest approach to build the model and you want to have access to visualization tools. What should you do?

    A. Create a Vertex AI Workbench notebook to perform exploratory data analysis. Use IPython magics to create a new BigQuery table with input features. Use the BigQuery console to run the CREATE MODEL statement. Validate the results by using the ML.EVALUATE and ML.PREDICT statements.

    B. Run the CREATE MODEL statement from the BigQuery console to create an AutoML model. Validate the results by using the ML.EVALUATE and ML.PREDICT statements.

    C. Create a Vertex AI Workbench notebook to perform exploratory data analysis and create input features. Save the features as a CSV file in Cloud Storage. Import the CSV file as a new BigQuery table. Use the BigQuery console to run the CREATE MODEL statement. Validate the results by using the ML.EVALUATE and ML.PREDICT statements.

    D. Create a Vertex AI Workbench notebook to perform exploratory data analysis. Use IPython magics to create a new BigQuery table with input features, create the model, and validate the results by using the CREATE MODEL, ML.EVALUATE, and ML.PREDICT statements.
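
    As a rough illustration of the Workbench-plus-BigQuery-ML workflow referenced throughout Question 253's options, the sketch below shows the BigQuery cell magic for feature creation and the client library for validating a model with ML.EVALUATE and ML.PREDICT. All project, dataset, and model names are hypothetical.

```python
# In a Vertex AI Workbench notebook, the BigQuery cell magic keeps the work in
# one place (cell magics only run in Jupyter, so they are shown as comments here):
#
#   %load_ext google.cloud.bigquery
#
#   %%bigquery
#   CREATE OR REPLACE TABLE `my-project.crm.ltv_features` AS
#   SELECT customer_id, SUM(order_value) AS total_spend, COUNT(*) AS order_count
#   FROM `my-project.crm.orders`
#   GROUP BY customer_id
#
# The same validation can be scripted through the client library:
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
print(client.query(
    "SELECT * FROM ML.EVALUATE(MODEL `my-project.crm.ltv_model`)"
).to_dataframe())
print(client.query(
    "SELECT * FROM ML.PREDICT(MODEL `my-project.crm.ltv_model`, "
    "TABLE `my-project.crm.ltv_features`)"
).to_dataframe())
```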

  • Question 254:

    You work for a delivery company. You need to design a system that stores and manages features such as parcels delivered and truck locations over time. The system must retrieve the features with low latency and feed those features into a model for online prediction. The data science team will retrieve historical data at a specific point in time for model training. You want to store the features with minimal effort. What should you do?

    A. Store features in Bigtable as key/value data.

    B. Store features in Vertex AI Feature Store.

    C. Store features as a Vertex AI dataset, and use those features to train the models hosted in Vertex AI endpoints.

    D. Store features in BigQuery timestamp partitioned tables, and use the BigQuery Storage Read API to serve the features.
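
    To ground the feature-management options in Question 254, here is a minimal Vertex AI Feature Store sketch using the google-cloud-aiplatform SDK: create a feature store with online serving, register features, and read them back with low latency. Resource IDs and feature names are hypothetical, and the exact API surface can vary between SDK versions.

```python
# Minimal Feature Store sketch (legacy Featurestore resources in
# google-cloud-aiplatform; IDs and feature names are hypothetical).
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

fs = aiplatform.Featurestore.create(
    featurestore_id="delivery_features",
    online_store_fixed_node_count=1,      # enables low-latency online serving
)
trucks = fs.create_entity_type(entity_type_id="truck")
trucks.create_feature(feature_id="parcels_delivered", value_type="INT64")
trucks.create_feature(feature_id="last_location_lat", value_type="DOUBLE")

# Online read at serving time.
online_df = trucks.read(entity_ids=["truck_001"],
                        feature_ids=["parcels_delivered", "last_location_lat"])
print(online_df)
```

    Historical, point-in-time training data can then be exported with the feature store's batch-serve methods (for example batch_serve_to_bq), which is what keeps the effort low compared with hand-rolled Bigtable or partitioned-table solutions.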

  • Question 255:

    You are working on a prototype of a text classification model in a managed Vertex AI Workbench notebook. You want to quickly experiment with tokenizing text by using a Natural Language Toolkit (NLTK) library. How should you add the library to your Jupyter kernel?

    A. Install the NLTK library from a terminal by using the pip install nltk command.

    B. Write a custom Dataflow job that uses NLTK to tokenize your text and saves the output to Cloud Storage.

    C. Create a new Vertex AI Workbench notebook with a custom image that includes the NLTK library.

    D. Install the NLTK library from a Jupyter cell by using the !pip install nltk --user command.
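
    For Question 255, the sketch below shows what installing and exercising NLTK from inside a Jupyter kernel might look like; the install line runs only in a notebook cell, and the downloaded tokenizer data name can differ between NLTK versions.

```python
# In a Jupyter cell (not plain Python), install into the user site first:
#   !pip install nltk --user
# then restart the kernel so the newly installed package is importable.
import nltk

nltk.download("punkt")                      # tokenizer data (name may vary by NLTK version)
from nltk.tokenize import word_tokenize

print(word_tokenize("Vertex AI Workbench makes quick NLTK experiments easy."))
```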

  • Question 256:

    You have recently used TensorFlow to train a classification model on tabular data. You have created a Dataflow pipeline that can transform several terabytes of data into training or prediction datasets consisting of TFRecords. You now need to productionize the model, and you want the predictions to be automatically uploaded to a BigQuery table on a weekly schedule. What should you do?

    A. Import the model into Vertex AI and deploy it to a Vertex AI endpoint. On Vertex AI Pipelines, create a pipeline that uses the DataflowPythonJobOp and the ModelBatchPredictOp components.

    B. Import the model into Vertex AI and deploy it to a Vertex AI endpoint. Create a Dataflow pipeline that reuses the data processing logic, sends requests to the endpoint, and then uploads the predictions to a BigQuery table.

    C. Import the model into Vertex AI. On Vertex AI Pipelines, create a pipeline that uses the DataflowPythonJobOp and the ModelBatchPredictOp components.

    D. Import the model into BigQuery. Implement the data processing logic in a SQL query. On Vertex AI Pipelines, create a pipeline that uses the BigqueryQueryJobOp and the BigqueryPredictModelJobOp components.
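
    To show how the pipeline-component options in Question 256 fit together, here is a rough sketch that chains DataflowPythonJobOp and ModelBatchPredictOp and writes predictions to BigQuery. Import paths and parameter names follow google-cloud-pipeline-components v1 and should be checked against your installed version; every project, bucket, and model value below is a placeholder.

```python
# Sketch only: parameter names follow google-cloud-pipeline-components v1, and all
# project/bucket/model values are hypothetical placeholders.
from kfp import dsl
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.dataflow import DataflowPythonJobOp
from google_cloud_pipeline_components.v1.batch_predict_job import ModelBatchPredictOp

PROJECT, REGION = "my-project", "us-central1"

@dsl.pipeline(name="weekly-batch-predictions")
def weekly_predictions(model_resource_name: str):
    # Reuse the existing Dataflow preprocessing logic to produce TFRecords.
    # (In practice a WaitGcpResourcesOp would gate on Dataflow completion.)
    prep = DataflowPythonJobOp(
        project=PROJECT,
        location=REGION,
        python_module_path="gs://my-bucket/dataflow/make_tfrecords.py",
        temp_location="gs://my-bucket/tmp",
        args=["--output", "gs://my-bucket/predict-input/"],
    )

    # Import the already-registered Vertex AI model as a pipeline artifact.
    model = dsl.importer(
        artifact_uri=model_resource_name,
        artifact_class=artifact_types.VertexModel,
        metadata={"resourceName": model_resource_name},
    )

    # Batch prediction that writes directly to a BigQuery table.
    ModelBatchPredictOp(
        project=PROJECT,
        location=REGION,
        model=model.output,
        job_display_name="weekly-predictions",
        gcs_source_uris=["gs://my-bucket/predict-input/*"],
        instances_format="tf-record",
        predictions_format="bigquery",
        bigquery_destination_output_uri=f"bq://{PROJECT}.predictions.weekly",
    ).after(prep)
```

    Compiling this pipeline and running it on a weekly schedule (for example a Vertex AI pipeline schedule, or Cloud Scheduler triggering a submission) would complete the automation described in the question.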

  • Question 257:

    You work for an online grocery store. You recently developed a custom ML model that recommends a recipe when a user arrives at the website. You chose the machine type on the Vertex AI endpoint to optimize costs based on the queries per second (QPS) that the model can serve, and you deployed it on a single machine with 8 vCPUs and no accelerators.

    A holiday season is approaching and you anticipate four times more traffic during this time than the typical daily traffic. You need to ensure that the model can scale efficiently to the increased demand. What should you do?

    A. 1. Maintain the same machine type on the endpoint. 2. Set up a monitoring job and an alert for CPU usage. 3. If you receive an alert, add a compute node to the endpoint.

    B. 1. Change the machine type on the endpoint to have 32 vCPUs. 2. Set up a monitoring job and an alert for CPU usage. 3. If you receive an alert, scale the vCPUs further as needed.

    C. 1. Maintain the same machine type on the endpoint. Configure the endpoint to enable autoscaling based on vCPU usage. 2. Set up a monitoring job and an alert for CPU usage. 3. If you receive an alert, investigate the cause.

    D. 1. Change the machine type on the endpoint to have a GPU. Configure the endpoint to enable autoscaling based on the GPU usage. 2. Set up a monitoring job and an alert for GPU usage. 3. If you receive an alert, investigate the cause.
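
    To make the autoscaling options in Question 257 concrete, here is a sketch of a Vertex AI endpoint deployment that keeps the same machine type and scales on CPU utilization, using the Python SDK. The model ID, replica counts, and utilization target are hypothetical values.

```python
# Deployment sketch: autoscaling on CPU utilization while keeping the same machine
# type (model ID, replica counts, and thresholds are hypothetical).
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")
endpoint = model.deploy(
    machine_type="n1-standard-8",                # same 8-vCPU machine as before
    min_replica_count=1,
    max_replica_count=4,                         # headroom for roughly 4x holiday traffic
    autoscaling_target_cpu_utilization=60,       # scale out when CPU passes 60%
)
print(endpoint.resource_name)
```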

  • Question 258:

    You recently trained an XGBoost model on tabular data. You plan to expose the model for internal use as an HTTP microservice. After deployment, you expect a small number of incoming requests. You want to productionize the model with the least amount of effort and latency. What should you do?

    A. Deploy the model to BigQuery ML by using CREATE MODEL with the BOOSTED_TREE_REGRESSOR statement, and invoke the BigQuery API from the microservice.

    B. Build a Flask-based app. Package the app in a custom container on Vertex AI, and deploy it to Vertex AI Endpoints.

    C. Build a Flask-based app. Package the app in a Docker image, and deploy it to Google Kubernetes Engine in Autopilot mode.

    D. Use a prebuilt XGBoost Vertex container to create a model, and deploy it to Vertex AI Endpoints.
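
    For Question 258, the sketch below shows the prebuilt-container path from option D with the Vertex AI SDK: upload the saved XGBoost artifact with a prebuilt serving image and deploy it to an endpoint. The container tag and Cloud Storage path are examples; check the current list of prebuilt prediction containers for your XGBoost version.

```python
# Sketch: upload an XGBoost artifact with a prebuilt serving container and deploy it
# (bucket path and container tag are examples, not definitive values).
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="xgb-tabular-model",
    artifact_uri="gs://my-bucket/models/xgb/",    # directory containing the saved model
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"
    ),
)
endpoint = model.deploy(machine_type="n1-standard-2", min_replica_count=1)

# The internal microservice then calls the endpoint over HTTP/gRPC or via the SDK.
print(endpoint.predict(instances=[[0.3, 1.2, 5.0, 0.0]]))
```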

  • Question 259:

    You work for an international manufacturing organization that ships scientific products all over the world. Instruction manuals for these products need to be translated to 15 different languages. Your organization's leadership team wants to start using machine learning to reduce the cost of manual human translations and increase translation speed. You need to implement a scalable solution that maximizes accuracy and minimizes operational overhead. You also want to include a process to evaluate and fix incorrect translations. What should you do?

    A. Create a workflow using Cloud Function triggers. Configure a Cloud Function that is triggered when documents are uploaded to an input Cloud Storage bucket. Configure another Cloud Function that translates the documents using the Cloud Translation API, and saves the translations to an output Cloud Storage bucket. Use human reviewers to evaluate the incorrect translations.

    B. Create a Vertex AI pipeline that processes the documents launches, an AutoML Translation training job, evaluates the translations and deploys the model to a Vertex AI endpoint with autoscaling and model monitoring. When there is a predetermined skew between training and live data, re-trigger the pipeline with the latest data.

    C. Use AutoML Translation to train a model. Configure a Translation Hub project, and use the trained model to translate the documents. Use human reviewers to evaluate the incorrect translations.

    D. Use Vertex AI custom training jobs to fine-tune a state-of-the-art open source pretrained model with your data. Deploy the model to a Vertex AI endpoint with autoscaling and model monitoring. When there is a predetermined skew between the training and live data, configure a trigger to run another training job with the latest data.
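
    Several of the options in Question 259 build on the Cloud Translation API, so here is a minimal call with the v3 client for a single passage; the project ID, text, and language codes are placeholders. AutoML Translation and Translation Hub layer custom models and a managed human-review workflow on top of this basic capability.

```python
# Minimal Cloud Translation API call (v3 client); project, text, and language
# codes below are hypothetical.
from google.cloud import translate_v3 as translate

client = translate.TranslationServiceClient()
parent = "projects/my-project/locations/global"

response = client.translate_text(
    parent=parent,
    contents=["Insert the reagent cartridge before powering on the analyzer."],
    mime_type="text/plain",
    source_language_code="en",
    target_language_code="de",
)
for translation in response.translations:
    print(translation.translated_text)
```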

  • Question 260:

    You are developing a model to predict whether a failure will occur in a critical machine part. You have a dataset consisting of a multivariate time series and labels indicating whether the machine part failed. You recently started experimenting with a few different preprocessing and modeling approaches in a Vertex AI Workbench notebook. You want to log data and track artifacts from each run. How should you set up your experiments?

    A. 1. Use the Vertex AI SDK to create an experiment and set up Vertex ML Metadata.

    2. Use the log_time_series_metrics function to track the preprocessed data, and use the log_metrics function to log loss values.

    B. 1. Use the Vertex AI SDK to create an experiment and set up Vertex ML Metadata.

    2. Use the log_time_series_metrics function to track the preprocessed data, and use the log_metrics function to log loss values.

    C. 1. Create a Vertex AI TensorBoard instance and use the Vertex AI SDK to create an experiment and associate the TensorBoard instance.

    2. Use the assign_input_artifact method to track the preprocessed data and use the log_time_series_metrics function to log loss values.

    D. 1. Create a Vertex AI TensorBoard instance, and use the Vertex AI SDK to create an experiment and associate the TensorBoard instance.

    2. Use the log_time_series_metrics function to track the preprocessed data, and use the log_metrics function to log loss values.
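
    To show how the logging calls named in Question 260's options are used in practice, here is a sketch of experiment tracking with the Vertex AI SDK. The experiment, run, TensorBoard resource, and metric names are hypothetical, and log_time_series_metrics requires a backing TensorBoard instance.

```python
# Experiment-tracking sketch with the Vertex AI SDK (experiment, run, TensorBoard
# resource, and metric names are hypothetical placeholders).
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="part-failure-experiments",
    experiment_tensorboard="projects/my-project/locations/us-central1/tensorboards/123",
)

aiplatform.start_run("run-lstm-v1")
aiplatform.log_params({"window_size": 48, "model": "lstm"})

for step, loss in enumerate([0.92, 0.55, 0.31], start=1):
    aiplatform.log_time_series_metrics({"train_loss": loss}, step=step)

aiplatform.log_metrics({"final_eval_loss": 0.31})    # summary metrics for the run
aiplatform.end_run()
```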

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important and are required by more and more enterprises when you apply for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and how do you find the most reliable resources? Here on Vcedump.com you will find all the answers. Vcedump.com provides not only Google exam questions, answers, and explanations, but also complete assistance with your exam preparation and certification application. If you are unsure about your PROFESSIONAL-MACHINE-LEARNING-ENGINEER exam preparation or Google certification application, do not hesitate to visit Vcedump.com to find your solutions.