Exam Details

  • Exam Code: PROFESSIONAL-MACHINE-LEARNING-ENGINEER
  • Exam Name: Professional Machine Learning Engineer
  • Certification: Google Certifications
  • Vendor: Google
  • Total Questions: 282 Q&As
  • Last Updated: May 18, 2024

Google Certifications PROFESSIONAL-MACHINE-LEARNING-ENGINEER Questions & Answers

  • Question 11:

    Your team is building an application for a global bank that will be used by millions of customers. You built a forecasting model that predicts customers' account balances 3 days in the future. Your team will use the results in a new feature that will notify users when their account balance is likely to drop below $25. How should you serve your predictions?

    A. 1. Create a Pub/Sub topic for each user.

    2. Deploy a Cloud Function that sends a notification when your model predicts that a user's account balance will drop below the $25 threshold.

    B. 1. Create a Pub/Sub topic for each user.

    2. Deploy an application on the App Engine standard environment that sends a notification when your model predicts that a user's account balance will drop below the $25 threshold.

    C. 1. Build a notification system on Firebase.

    2. Register each user with a user ID on the Firebase Cloud Messaging server, which sends a notification when the average of all account balance predictions drops below the $25 threshold.

    D. 1. Build a notification system on Firebase.

    2. Register each user with a user ID on the Firebase Cloud Messaging server, which sends a notification when your model predicts that a user's account balance will drop below the $25 threshold.
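
    For illustration, a minimal sketch of the notification path described in option D, using the firebase_admin Python SDK; the device-token lookup and the get_balance_prediction() helper are hypothetical placeholders:

        import firebase_admin
        from firebase_admin import messaging

        firebase_admin.initialize_app()  # uses Application Default Credentials

        THRESHOLD = 25.0

        def get_balance_prediction(user_id):
            # Hypothetical helper: in production this would call the deployed
            # forecasting model for the given user.
            return 20.0

        def notify_if_low_balance(user_id, device_token):
            """Send an FCM notification when the 3-day forecast is below $25."""
            if get_balance_prediction(user_id) < THRESHOLD:
                message = messaging.Message(
                    notification=messaging.Notification(
                        title="Low balance alert",
                        body="Your balance may drop below $25 within 3 days.",
                    ),
                    token=device_token,  # FCM registration token for this user
                )
                messaging.send(message)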

  • Question 12:

    You have trained a text classification model in TensorFlow using AI Platform. You want to use the trained model for batch predictions on text data stored in BigQuery while minimizing computational overhead. What should you do?

    A. Export the model to BigQuery ML.

    B. Deploy and version the model on AI Platform.

    C. Use Dataflow with the SavedModel to read the data from BigQuery.

    D. Submit a batch prediction job on AI Platform that points to the model location in Cloud Storage.
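
    For illustration, a minimal sketch of option D using the Python API client. AI Platform batch prediction reads from Cloud Storage, so this assumes the BigQuery text data has first been exported (for example, as newline-delimited JSON); the project, bucket, and model paths are placeholders:

        from googleapiclient import discovery

        project_id = "my-project"  # placeholder
        body = {
            "jobId": "text_clf_batch_001",
            "predictionInput": {
                "dataFormat": "JSON",                       # newline-delimited JSON
                "inputPaths": ["gs://my-bucket/inputs/*"],  # exported from BigQuery
                "outputPath": "gs://my-bucket/predictions/",
                "region": "us-central1",
                "uri": "gs://my-bucket/models/text_clf/",   # SavedModel directory
                "runtimeVersion": "2.3",
            },
        }

        ml = discovery.build("ml", "v1")
        ml.projects().jobs().create(parent=f"projects/{project_id}",
                                    body=body).execute()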

  • Question 13:

    You work with a data engineering team that has developed a pipeline to clean your dataset and save it in a Cloud Storage bucket. You have created an ML model and want to use the data to refresh your model as soon as new data is available. As part of your CI/CD workflow, you want to automatically run a Kubeflow Pipelines training job on Google Kubernetes Engine (GKE). How should you architect this workflow?

    A. Configure your pipeline with Dataflow, which saves the files in Cloud Storage. After the file is saved, start the training job on a GKE cluster.

    B. Use App Engine to create a lightweight Python client that continuously polls Cloud Storage for new files. As soon as a file arrives, initiate the training job.

    C. Configure a Cloud Storage trigger to send a message to a Pub/Sub topic when a new file is available in a storage bucket. Use a Pub/Sub-triggered Cloud Function to start the training job on a GKE cluster.

    D. Use Cloud Scheduler to schedule jobs at a regular interval. For the first step of the job, check the timestamp of objects in your Cloud Storage bucket. If there are no new files since the last run, abort the job.
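
    For illustration, a minimal sketch of the Cloud Function in option C; the Kubeflow Pipelines endpoint and pipeline ID are placeholders, and the pipeline is assumed to accept an input_path parameter:

        import base64
        import json

        import kfp

        KFP_HOST = "https://<kfp-endpoint>"  # placeholder: KFP endpoint on GKE
        PIPELINE_ID = "<pipeline-id>"        # placeholder: uploaded training pipeline

        def trigger_training(event, context):
            """Pub/Sub-triggered entry point; the message carries the object metadata."""
            payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
            gcs_uri = f"gs://{payload['bucket']}/{payload['name']}"

            client = kfp.Client(host=KFP_HOST)
            experiment = client.create_experiment("retraining")
            client.run_pipeline(
                experiment_id=experiment.id,
                job_name=f"train-{context.event_id}",
                pipeline_id=PIPELINE_ID,
                params={"input_path": gcs_uri},
            )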

  • Question 14:

    You are developing models to classify customer support emails. You created models with TensorFlow Estimators using small datasets on your on-premises system, but you now need to train the models using large datasets to ensure high performance. You will port your models to Google Cloud and want to minimize code refactoring and infrastructure overhead for easier migration from on-prem to cloud. What should you do?

    A. Use AI Platform for distributed training.

    B. Create a cluster on Dataproc for training.

    C. Create a Managed Instance Group with autoscaling.

    D. Use Kubeflow Pipelines to train on a Google Kubernetes Engine cluster.
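
    For illustration, a minimal sketch of why option A requires little refactoring: Estimator code driven by train_and_evaluate() reads the TF_CONFIG environment variable that AI Platform sets on each replica, so the same script runs locally and as a distributed cloud job. The tiny synthetic dataset stands in for the real email features:

        import tensorflow as tf

        def input_fn():
            features = {"x": tf.constant([[0.1], [0.9], [0.4], [0.7]])}
            labels = tf.constant([0, 1, 0, 1])
            return tf.data.Dataset.from_tensor_slices(
                (features, labels)).repeat().batch(2)

        estimator = tf.estimator.DNNClassifier(
            hidden_units=[16, 8],
            feature_columns=[tf.feature_column.numeric_column("x")],
            model_dir="gs://my-bucket/model",  # placeholder Cloud Storage path
        )

        # The same call works single-node and distributed: no code changes needed.
        tf.estimator.train_and_evaluate(
            estimator,
            tf.estimator.TrainSpec(input_fn=input_fn, max_steps=200),
            tf.estimator.EvalSpec(input_fn=input_fn, steps=10),
        )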

  • Question 15:

    You are building a model to predict daily temperatures. You split the data randomly and then transformed the training and test datasets. Temperature data for model training is uploaded hourly. During testing, your model performed with 97% accuracy; however, after deploying to production, the model's accuracy dropped to 66%. How can you make your production model more accurate?

    A. Normalize the data for the training and test datasets as two separate steps.

    B. Split the training and test data based on time rather than a random split to avoid leakage.

    C. Add more data to your test set to ensure that you have a fair distribution and sample for testing.

    D. Apply data transformations before splitting, and cross-validate to make sure that the transformations are applied to both the training and test sets.
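
    For illustration, a minimal sketch of the chronological split in option B, with synthetic hourly data standing in for the real feed; note that the normalization statistics are computed on the training window only and then reused for the test window:

        import numpy as np
        import pandas as pd

        rng = pd.date_range("2023-01-01", periods=1000, freq="H")
        df = pd.DataFrame({"timestamp": rng,
                           "temp": np.random.randn(1000) * 10 + 15})

        df = df.sort_values("timestamp").reset_index(drop=True)
        split = int(len(df) * 0.8)            # first 80% of the timeline trains
        train, test = df.iloc[:split], df.iloc[split:]

        # Fit statistics on the training window only, then apply to both sets.
        mean, std = train["temp"].mean(), train["temp"].std()
        train = train.assign(temp_z=(train["temp"] - mean) / std)
        test = test.assign(temp_z=(test["temp"] - mean) / std)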

  • Question 16:

    You need to design a customized deep neural network in Keras that will predict customer purchases based on their purchase history. You want to explore model performance using multiple model architectures, store training data, and be able to compare the evaluation metrics in the same dashboard. What should you do?

    A. Create multiple models using AutoML Tables.

    B. Automate multiple training runs using Cloud Composer.

    C. Run multiple training jobs on AI Platform with similar job names.

    D. Create an experiment in Kubeflow Pipelines to organize multiple runs.
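
    For illustration, a minimal sketch of option D: one Kubeflow Pipelines experiment groups one run per candidate architecture, so the evaluation metrics land in the same dashboard. The endpoint, compiled pipeline package, and hidden_units parameter are placeholders:

        import kfp

        client = kfp.Client(host="https://<kfp-endpoint>")  # placeholder
        experiment = client.create_experiment("purchase-model-architectures")

        for hidden_units in ("512,256", "256,128,64", "1024"):
            client.run_pipeline(
                experiment_id=experiment.id,
                job_name=f"keras-{hidden_units.replace(',', '-')}",
                pipeline_package_path="train_pipeline.yaml",  # compiled pipeline
                params={"hidden_units": hidden_units},
            )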

  • Question 17:

    You are developing a Kubeflow pipeline on Google Kubernetes Engine. The first step in the pipeline is to issue a query against BigQuery. You plan to use the results of that query as the input to the next step in your pipeline. You want to achieve this in the easiest way possible. What should you do?

    A. Use the BigQuery console to execute your query, and then save the query results into a new BigQuery table.

    B. Write a Python script that uses the BigQuery API to execute queries against BigQuery. Execute this script as the first step in your Kubeflow pipeline.

    C. Use the Kubeflow Pipelines domain-specific language to create a custom component that uses the Python BigQuery client library to execute queries.

    D. Locate the Kubeflow Pipelines repository on GitHub. Find the BigQuery Query Component, copy that component's URL, and use it to load the component into your pipeline. Use the component to execute queries against BigQuery.
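
    For illustration, a minimal sketch of option D with the kfp SDK; the raw component URL, query, and bucket path are illustrative:

        import kfp
        from kfp import components, dsl

        bigquery_query_op = components.load_component_from_url(
            "https://raw.githubusercontent.com/kubeflow/pipelines/master/"
            "components/gcp/bigquery/query/component.yaml"
        )

        @dsl.pipeline(name="bq-first-step")
        def pipeline(project_id: str = "my-project"):  # placeholder project
            query_task = bigquery_query_op(
                query="SELECT * FROM `my-project.dataset.table`",  # placeholder
                project_id=project_id,
                output_gcs_path="gs://my-bucket/query_results/",   # placeholder
            )
            # query_task's outputs feed the next step in the pipeline.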

  • Question 18:

    You have a demand forecasting pipeline in production that uses Dataflow to preprocess raw data prior to model training and prediction. During preprocessing, you employ Z-score normalization on data stored in BigQuery and write it back to BigQuery. New training data is added every week. You want to make the process more efficient by minimizing computation time and manual intervention. What should you do?

    A. Normalize the data using Google Kubernetes Engine.

    B. Translate the normalization algorithm into SQL for use with BigQuery.

    C. Use the normalizer_fn argument in TensorFlow's Feature Column API.

    D. Normalize the data with Apache Spark using the Dataproc connector for BigQuery.
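
    For illustration, a minimal sketch of option B: the Z-score computation expressed directly in BigQuery SQL, so the weekly refresh needs no separate Dataflow pass. Table and column names are placeholders:

        from google.cloud import bigquery

        client = bigquery.Client()

        query = """
        CREATE OR REPLACE TABLE `my-project.demand.normalized` AS
        SELECT
          *,
          SAFE_DIVIDE(demand - AVG(demand) OVER (),
                      STDDEV(demand) OVER ()) AS demand_z
        FROM `my-project.demand.raw`
        """
        client.query(query).result()  # runs entirely inside BigQuery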

  • Question 19:

    You need to train a computer vision model that predicts the type of government ID present in a given image using a GPU-powered virtual machine on Compute Engine. You use the following parameters: optimizer = SGD, batch size = 64, epochs = 10, verbose = 2.

    During training, you encounter the following error: ResourceExhaustedError: Out of Memory (OOM) when allocating tensor. What should you do?

    A. Change the optimizer.

    B. Reduce the batch size.

    C. Change the learning rate.

    D. Reduce the image shape.
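
    For illustration, a minimal sketch of option B in Keras: the OOM arises because the per-step tensors no longer fit in GPU memory, and halving the batch size is the most direct fix. The model and data are placeholders:

        import numpy as np
        import tensorflow as tf

        x = np.random.rand(256, 224, 224, 3).astype("float32")  # placeholder images
        y = np.random.randint(0, 5, size=(256,))                # placeholder classes

        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu",
                                   input_shape=(224, 224, 3)),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(5, activation="softmax"),
        ])
        model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

        # batch_size reduced from 64 to 32; keep halving until the OOM disappears.
        model.fit(x, y, batch_size=32, epochs=10, verbose=2)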

  • Question 20:

    You developed an ML model with AI Platform, and you want to move it to production. You serve a few thousand queries per second and are experiencing latency issues. Incoming requests are served by a load balancer that distributes them across multiple Kubeflow CPU-only pods running on Google Kubernetes Engine (GKE). Your goal is to improve the serving latency without changing the underlying infrastructure. What should you do?

    A. Significantly increase the max_batch_size TensorFlow Serving parameter.

    B. Switch to the tensorflow-model-server-universal version of TensorFlow Serving.

    C. Significantly increase the max_enqueued_batches TensorFlow Serving parameter.

    D. Recompile TensorFlow Serving using the source to support CPU-specific optimizations. Instruct GKE to choose an appropriate baseline minimum CPU platform for serving nodes.
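
    For illustration, a minimal sketch of the GKE half of option D: pinning the serving node pool to a baseline minimum CPU platform so a TensorFlow Serving binary rebuilt with CPU-specific flags (for example, AVX2/FMA) can rely on those instructions. The project, zone, and cluster names are placeholders, and the rebuild itself happens in the container image build, outside this snippet:

        from google.cloud import container_v1

        client = container_v1.ClusterManagerClient()

        node_pool = container_v1.NodePool(
            name="serving-pool",
            initial_node_count=3,
            config=container_v1.NodeConfig(
                machine_type="n1-highcpu-16",
                min_cpu_platform="Intel Skylake",  # baseline CPU for the pool
            ),
        )

        request = container_v1.CreateNodePoolRequest(
            parent="projects/my-project/locations/us-central1-a/clusters/serving",
            node_pool=node_pool,
        )
        client.create_node_pool(request=request)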

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important, and more and more enterprises require them when you apply for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you achieve an ideal result, and how do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Google exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are unsure about your PROFESSIONAL-MACHINE-LEARNING-ENGINEER exam preparation or your Google certification application, do not hesitate to visit Vcedump.com to find your solutions.