You are training models in Vertex AI by using data that spans multiple Google Cloud projects. You need to find, track, and compare the performance of the different versions of your models. Which Google Cloud services should you include in your ML workflow?
A. Dataplex, Vertex AI Feature Store, and Vertex AI TensorBoard
B. Vertex AI Pipelines, Vertex AI Feature Store, and Vertex AI Experiments
C. Dataplex, Vertex AI Experiments, and Vertex ML Metadata
D. Vertex AI Pipelines, Vertex AI Experiments, and Vertex ML Metadata
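For context, tracking and comparing model versions with Vertex AI Experiments looks roughly like this minimal sketch using the Python SDK (project, experiment, parameter, and metric names are all illustrative):

    from google.cloud import aiplatform

    # Illustrative project/experiment names -- replace with your own.
    aiplatform.init(
        project="my-project",
        location="us-central1",
        experiment="model-comparison",
    )

    # Each training run becomes an experiment run, so model versions can be
    # found, tracked, and compared side by side in the Vertex AI console.
    aiplatform.start_run("model-v2")
    aiplatform.log_params({"learning_rate": 0.01, "architecture": "resnet50"})
    aiplatform.log_metrics({"accuracy": 0.91, "auc": 0.94})
    aiplatform.end_run()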
You are using Keras and TensorFlow to develop a fraud detection model. Records of customer transactions are stored in a large table in BigQuery. You need to preprocess these records in a cost-effective and efficient way before you use them to train the model. The trained model will be used to perform batch inference in BigQuery. How should you implement the preprocessing workflow?
A. Implement a preprocessing pipeline by using Apache Spark, and run the pipeline on Dataproc. Save the preprocessed data as CSV files in a Cloud Storage bucket.
B. Load the data into a pandas DataFrame. Implement the preprocessing steps using pandas transformations, and train the model directly on the DataFrame.
C. Perform preprocessing in BigQuery by using SQL. Use the BigQueryClient in TensorFlow to read the data directly from BigQuery.
D. Implement a preprocessing pipeline by using Apache Beam, and run the pipeline on Dataflow. Save the preprocessed data as CSV files in a Cloud Storage bucket.
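As a reference point for option C, reading BigQuery rows straight into a tf.data pipeline with the TensorFlow I/O BigQuery reader might look like this sketch (the project, dataset, table, and column names are assumptions, and SQL preprocessing is assumed to have already materialized the table):

    import tensorflow as tf
    from tensorflow_io.bigquery import BigQueryClient

    PROJECT = "my-project"  # hypothetical project ID

    client = BigQueryClient()
    session = client.read_session(
        parent=f"projects/{PROJECT}",
        project_id=PROJECT,
        dataset_id="fraud",
        table_id="preprocessed_txns",
        selected_fields=["amount", "merchant_risk", "is_fraud"],
        output_types=[tf.float64, tf.float64, tf.int64],
        requested_streams=2,
    )

    # Rows stream directly from BigQuery storage into tf.data,
    # with no intermediate CSV export to Cloud Storage.
    dataset = session.parallel_read_rows().batch(1024)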
You need to use TensorFlow to train an image classification model. Your dataset is located in a Cloud Storage directory and contains millions of labeled images. Before training the model, you need to prepare the data. You want the data preprocessing and model training workflow to be as efficient, scalable, and low maintenance as possible. What should you do?
A. 1. Create a Dataflow job that creates sharded TFRecord files in a Cloud Storage directory.
2. Reference tf.data.TFRecordDataset in the training script.
3. Train the model by using Vertex AI Training with a V100 GPU.
B. 1. Create a Dataflow job that moves the images into multiple Cloud Storage directories, where each directory is named according to the corresponding label.
2. Reference tfds.folder_dataset.ImageFolder in the training script.
3. Train the model by using Vertex AI Training with a V100 GPU.
C. 1. Create a Jupyter notebook that uses an n1-standard-64, V100 GPU Vertex AI Workbench instance.
2. Write a Python script that creates sharded TFRecord files in a directory inside the instance.
3. Reference tf.data.TFRecordDataset in the training script.
4. Train the model by using the Workbench instance.
D. 1. Create a Jupyter notebook that uses an n1-standard-64, V100 GPU Vertex AI Workbench instance.
2. Write a Python script that copies the images into multiple Cloud Storage directories, where each directory is named according to the corresponding label.
3. Reference tfds.folder_dataset.ImageFolder in the training script.
4. Train the model by using the Workbench instance.
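For option A's input pipeline, consuming sharded TFRecord files with tf.data.TFRecordDataset typically looks like this sketch (the Cloud Storage path and feature spec are assumptions):

    import tensorflow as tf

    # Hypothetical location of the shards produced by the Dataflow job.
    files = tf.data.Dataset.list_files("gs://my-bucket/tfrecords/train-*.tfrecord")

    def parse(example_proto):
        # Assumed schema: JPEG bytes plus an integer class label.
        spec = {
            "image": tf.io.FixedLenFeature([], tf.string),
            "label": tf.io.FixedLenFeature([], tf.int64),
        }
        parsed = tf.io.parse_single_example(example_proto, spec)
        image = tf.io.decode_jpeg(parsed["image"], channels=3)
        image = tf.image.resize(image, [224, 224])
        return image, parsed["label"]

    # Interleaved reads across shards keep the GPU fed during training.
    dataset = (
        files.interleave(tf.data.TFRecordDataset, num_parallel_calls=tf.data.AUTOTUNE)
        .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
        .batch(64)
        .prefetch(tf.data.AUTOTUNE)
    )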
You are building a custom image classification model and plan to use Vertex AI Pipelines to implement the end-to-end training. Your dataset consists of images that need to be preprocessed before they can be used to train the model. The preprocessing steps include resizing the images, converting them to grayscale, and extracting features. You have already implemented some Python functions for the preprocessing tasks. Which components should you use in your pipeline?
A. DataprocSparkBatchOp and CustomTrainingJobOp
B. DataflowPythonJobOp, WaitGcpResourcesOp, and CustomTrainingJobOp
C. dsl.ParallelFor, dsl.component, and CustomTrainingJobOp
D. ImageDatasetImportDataOp, dsl.component, and AutoMLImageTrainingJobRunOp
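To make the component names concrete, a pipeline that runs existing Python preprocessing code on Dataflow, waits for the job, and then launches custom training could be sketched as follows (bucket paths, images, and machine specs are illustrative, and parameter names may vary across google-cloud-pipeline-components versions):

    from kfp import dsl
    from google_cloud_pipeline_components.v1.dataflow import DataflowPythonJobOp
    from google_cloud_pipeline_components.v1.wait_gcp_resources import WaitGcpResourcesOp
    from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp

    @dsl.pipeline(name="image-preprocess-and-train")
    def pipeline(project: str, location: str):
        # Runs the existing preprocessing functions (resize, grayscale,
        # feature extraction) as an Apache Beam job on Dataflow.
        preprocess = DataflowPythonJobOp(
            project=project,
            location=location,
            python_module_path="gs://my-bucket/src/preprocess.py",
            temp_location="gs://my-bucket/tmp",
            args=["--input", "gs://my-bucket/raw", "--output", "gs://my-bucket/prep"],
        )
        # Dataflow jobs launch asynchronously; this op blocks until the job finishes.
        wait = WaitGcpResourcesOp(gcp_resources=preprocess.outputs["gcp_resources"])
        CustomTrainingJobOp(
            project=project,
            location=location,
            display_name="train-image-classifier",
            worker_pool_specs=[{
                "machine_spec": {"machine_type": "n1-standard-8"},
                "replica_count": 1,
                "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
            }],
        ).after(wait)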
You work for a retail company that is using a regression model built with BigQuery ML to predict product sales. This model is being used to serve online predictions. Recently you developed a new version of the model that uses a different architecture (custom model). Initial analysis revealed that both models are performing as expected. You want to deploy the new version of the model to production and monitor the performance over the next two months. You need to minimize the impact to the existing and future model users. How should you deploy the model?
A. Import the new model to the same Vertex AI Model Registry as a different version of the existing model. Deploy the new model to the same Vertex AI endpoint as the existing model, and use traffic splitting to route 95% of production traffic to the BigQuery ML model and 5% of production traffic to the new model.
B. Import the new model to the same Vertex AI Model Registry as the existing model. Deploy the models to one Vertex AI endpoint. Route 95% of production traffic to the BigQuery ML model and 5% of production traffic to the new model.
C. Import the new model to the same Vertex AI Model Registry as the existing model. Deploy each model to a separate Vertex AI endpoint.
D. Deploy the new model to a separate Vertex AI endpoint. Create a Cloud Run service that routes the prediction requests to the corresponding endpoints based on the input feature values.
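The version-plus-traffic-split mechanics described in options A and B can be sketched with the Vertex AI SDK like this (all resource names are hypothetical):

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Upload the custom model as a new version under the existing Model
    # Registry entry (parent_model is the existing model's resource name).
    new_model = aiplatform.Model.upload(
        parent_model="projects/123/locations/us-central1/models/456",
        display_name="sales-forecast",
        serving_container_image_uri="gcr.io/my-project/serving:latest",
    )

    # traffic_percentage=5 routes 5% of requests to the new version and
    # leaves 95% on the existing model, minimizing impact on current users.
    endpoint = aiplatform.Endpoint("projects/123/locations/us-central1/endpoints/789")
    new_model.deploy(
        endpoint=endpoint,
        machine_type="n1-standard-4",
        traffic_percentage=5,
    )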
You are using Vertex AI and TensorFlow to develop a custom image classification model. You need the model's decisions and the rationale to be understandable to your company's stakeholders. You also want to explore the results to identify any issues or potential biases. What should you do?
A. 1. Use TensorFlow to generate and visualize features and statistics.
2. Analyze the results together with the standard model evaluation metrics.
B. 1. Use TensorFlow Profiler to visualize the model execution.
2. Analyze the relationship between incorrect predictions and execution bottlenecks.
C. 1. Use Vertex Explainable AI to generate example-based explanations.
2. Visualize the results of sample inputs from the entire dataset together with the standard model evaluation metrics.
D. 1. Use Vertex Explainable AI to generate feature attributions. Aggregate feature attributions over the entire dataset.
2. Analyze the aggregation result together with the standard model evaluation metrics.
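For options C and D, explanations come from endpoint.explain(); a minimal sketch, assuming the model was uploaded with an explanation spec and accepts base64-encoded images (resource names and the instance format are assumptions):

    import base64
    from google.cloud import aiplatform

    endpoint = aiplatform.Endpoint(
        "projects/123/locations/us-central1/endpoints/789"
    )

    with open("sample.jpg", "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")

    # Each explanation carries feature attributions (per pixel or region for
    # images) that can be aggregated across the dataset to surface biases.
    response = endpoint.explain(instances=[{"image_bytes": {"b64": encoded}}])
    for explanation in response.explanations:
        for attribution in explanation.attributions:
            print(attribution.feature_attributions)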
You work for a large retailer, and you need to build a model to predict customer churn. The company has a dataset of historical customer data, including customer demographics, purchase history, and website activity. You need to create the model in BigQuery ML and thoroughly evaluate its performance. What should you do?
A. Create a linear regression model in BigQuery ML, and register the model in Vertex AI Model Registry. Evaluate the model performance in Vertex AI.
B. Create a logistic regression model in BigQuery ML, and register the model in Vertex AI Model Registry. Evaluate the model performance in Vertex AI.
C. Create a linear regression model in BigQuery ML. Use the ML.EVALUATE function to evaluate the model performance.
D. Create a logistic regression model in BigQuery ML. Use the ML.CONFUSION_MATRIX function to evaluate the model performance.
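For the BigQuery ML functions named in options C and D, training and evaluating a churn classifier might look like this sketch using the BigQuery Python client (the dataset, table, and label names are assumptions):

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project

    # Churn is a binary outcome, so the model type is logistic regression.
    client.query("""
        CREATE OR REPLACE MODEL `my_dataset.churn_model`
        OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
        SELECT * FROM `my_dataset.customer_history`
    """).result()

    # ML.EVALUATE returns precision, recall, accuracy, f1_score, log_loss,
    # and roc_auc in one statement; ML.CONFUSION_MATRIX returns only the
    # confusion matrix.
    for row in client.query(
        "SELECT * FROM ML.EVALUATE(MODEL `my_dataset.churn_model`)"
    ).result():
        print(dict(row))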
You are developing a model to identify traffic signs in images extracted from videos taken from the dashboard of a vehicle. You have a dataset of 100,000 images that were cropped to show one out of ten different traffic signs. The images have been labeled accordingly for model training, and are stored in a Cloud Storage bucket. You need to be able to tune the model during each training run. How should you train the model?
A. Train a model for object detection by using Vertex AI AutoML.
B. Train a model for image classification by using Vertex AI AutoML.
C. Develop the model training code for object detection, and train a model by using Vertex AI custom training.
D. Develop the model training code for image classification, and train a model by using Vertex AI custom training.
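For the custom-training options, per-run tunability comes from passing hyperparameters as command-line flags to the training script; a minimal sketch with the Vertex AI SDK (the script path, container image, and flag names are illustrative):

    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project",
        location="us-central1",
        staging_bucket="gs://my-bucket",
    )

    job = aiplatform.CustomTrainingJob(
        display_name="traffic-sign-classifier",
        script_path="trainer/task.py",  # hypothetical script that parses the flags below
        container_uri="us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-12.py310:latest",  # example prebuilt image; check current URIs
    )

    # Different flag values can be passed on every run, which is the tuning
    # flexibility that AutoML does not expose.
    job.run(
        args=["--learning_rate=0.001", "--batch_size=64", "--dropout=0.3"],
        replica_count=1,
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",
        accelerator_count=1,
    )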
You have deployed a scikit-learn model to a Vertex AI endpoint using a custom model server. You enabled autoscaling; however, the deployed model fails to scale beyond one replica, which led to dropped requests. You notice that CPU utilization remains low even during periods of high load. What should you do?
A. Attach a GPU to the prediction nodes
B. Increase the number of workers in your model server
C. Schedule scaling of the nodes to match expected demand
D. Increase the minReplicaCount in your DeployedModel configuration
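Low CPU utilization under load is consistent with a single-worker model server that serializes requests, so the CPU-based autoscaler never triggers. A minimal sketch of a custom scikit-learn prediction server whose concurrency is set by the number of server workers (file names and routes are illustrative):

    # app.py -- illustrative custom prediction server for a pickled model.
    import pickle
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        instances = request.get_json()["instances"]
        return jsonify({"predictions": model.predict(instances).tolist()})

    # In the container entrypoint, run several workers so concurrent requests
    # are served in parallel and CPU utilization reflects real load, e.g.:
    #   gunicorn --workers 4 --threads 2 --bind :8080 app:app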
You work for a pet food company that manages an online forum. Customers upload photos of their pets on the forum to share with others. About 20 photos are uploaded daily. You want to automatically and in near real time detect whether each uploaded photo has an animal. You want to prioritize time and minimize cost of your application development and deployment. What should you do?
A. Send user-submitted images to the Cloud Vision API. Use object localization to identify all objects in the image and compare the results against a list of animals.
B. Download an object detection model from TensorFlow Hub. Deploy the model to a Vertex AI endpoint. Send new user-submitted images to the model endpoint to classify whether each photo has an animal.
C. Manually label previously submitted images with bounding boxes around any animals. Build an AutoML object detection model by using Vertex AI. Deploy the model to a Vertex AI endpoint. Send new user-submitted images to your model endpoint to detect whether each photo has an animal.
D. Manually label previously submitted images as having animals or not. Create an image dataset on Vertex AI. Train a classification model by using Vertex AutoML to distinguish the two classes. Deploy the model to a Vertex AI endpoint. Send new user-submitted images to your model endpoint to classify whether each photo has an animal.
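Option A's no-training approach can be sketched with the Cloud Vision client library (the animal list and bucket path are illustrative):

    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    ANIMALS = {"cat", "dog", "bird", "rabbit", "hamster"}  # illustrative list

    def photo_has_animal(image_uri: str) -> bool:
        image = vision.Image()
        image.source.image_uri = image_uri
        # Object localization names every detected object; compare the
        # names against the animal list.
        response = client.object_localization(image=image)
        names = {o.name.lower() for o in response.localized_object_annotations}
        return bool(names & ANIMALS)

    print(photo_has_animal("gs://my-bucket/uploads/pet123.jpg"))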