You are developing a model to detect fraudulent credit card transactions. You need to prioritize detection, because missing even one fraudulent transaction could severely impact the credit card holder. You used AutoML to train a model on users' profile information and credit card transaction data. After training the initial model, you notice that the model is failing to detect many fraudulent transactions. How should you adjust the training parameters in AutoML to improve model performance? (Choose two.)
A. Increase the score threshold.
B. Decrease the score threshold.
C. Add more positive examples to the training set.
D. Add more negative examples to the training set.
E. Reduce the maximum number of node hours for training.
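Why decreasing the threshold (option B) helps: recall, the fraction of fraudulent transactions caught, rises as more borderline scores are classified as fraud. A minimal sketch with made-up data (not AutoML itself):

```python
# Toy illustration: lowering the score threshold raises recall (fewer missed
# fraud cases) at the cost of precision. All values are illustrative.
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # 1 = fraud
scores = np.array([0.2, 0.4, 0.45, 0.6, 0.3, 0.55, 0.1, 0.8])  # model scores

for threshold in (0.5, 0.4):  # default vs. lowered threshold
    y_pred = (scores >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"precision={precision_score(y_true, y_pred):.2f}, "
          f"recall={recall_score(y_true, y_pred):.2f}")
```

At the lower threshold the 0.45-scored fraud case is caught, so recall improves; adding more positive examples (option C) attacks the same problem from the data side.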
You need to deploy a scikit-learn classification model to production. The model must be able to serve requests 24/7, and you expect millions of requests per second to the production application from 8 am to 7 pm. You need to minimize the cost of deployment. What should you do?
A. Deploy an online Vertex AI prediction endpoint. Set the max replica count to 1
B. Deploy an online Vertex AI prediction endpoint. Set the max replica count to 100
C. Deploy an online Vertex AI prediction endpoint with one GPU per replica. Set the max replica count to 1
D. Deploy an online Vertex AI prediction endpoint with one GPU per replica. Set the max replica count to 100
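A hedged sketch of what option B looks like with the Vertex AI SDK; project, model ID, and machine type are placeholders:

```python
# Deploy a CPU-only endpoint that autoscales between 1 and 100 replicas:
# scikit-learn inference gains nothing from GPUs, and scaling in overnight
# keeps cost down while the high ceiling absorbs the daytime peak.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/123")
endpoint = model.deploy(
    machine_type="n1-standard-4",  # CPU is sufficient for scikit-learn
    min_replica_count=1,           # floor for 24/7 availability
    max_replica_count=100,         # ceiling for the 8 am - 7 pm peak
)
```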
You work with a team of researchers to develop state-of-the-art algorithms for financial analysis. Your team develops and debugs complex models in TensorFlow. You want to maintain the ease of debugging while also reducing the model training time. How should you set up your training environment?
A. Configure a v3-8 TPU VM. SSH into the VM to train and debug the model.
B. Configure a v3-8 TPU node. Use Cloud Shell to SSH into the Host VM to train and debug the model.
C. Configure an n1-standard-4 VM with 4 NVIDIA P100 GPUs. SSH into the VM and use ParameterServerStrategy to train the model.
D. Configure an n1-standard-4 VM with 4 NVIDIA P100 GPUs. SSH into the VM and use MultiWorkerMirroredStrategy to train the model.
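For context on option A, here is a minimal sketch of interactive training on a TPU VM, where the code runs on the TPU host itself and can be debugged directly over SSH (unlike a TPU node, which is a separate networked device):

```python
# On a Cloud TPU VM, "local" resolves to the attached TPU chips.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():  # variables are created once and replicated per core
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
```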
You created an ML pipeline with multiple input parameters. You want to investigate the tradeoffs between different parameter combinations. The parameter options are:
1. Input dataset
2. Max tree depth of the boosted tree regressor
3. Optimizer learning rate
You need to compare the pipeline performance of the different parameter combinations measured in F1 score, time to train, and model complexity. You want your approach to be reproducible, and track all pipeline runs on the same platform. What should you do?
A. 1. Use BigQuery ML to create a boosted tree regressor, and use the hyperparameter tuning capability.
2. Configure the hyperparameter syntax to select different input datasets, max tree depths, and optimizer learning rates. Choose the grid search option.
B. 1. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline's parameters to include those you are investigating.
2. In the custom training step, use the Bayesian optimization method with F1 score as the target to maximize.
C. 1. Create a Vertex AI Workbench notebook for each of the different input datasets.
2. In each notebook, run different local training jobs with different combinations of the max tree depth and optimizer learning rate parameters.
3. After each notebook finishes, append the results to a BigQuery table.
D. 1. Create an experiment in Vertex AI Experiments.
2. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline's parameters to include those you are investigating.
3. Submit multiple runs to the same experiment, using different values for the parameters.
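A hedged sketch of option D with the Vertex AI SDK; the pipeline spec, parameter names, and values are placeholders for the ones under investigation:

```python
# Submit several runs of one compiled pipeline to a single Vertex AI
# experiment, varying the parameters; the metrics from each run (F1 score,
# training time) then sit side by side in the same experiment.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1",
                experiment="tree-depth-vs-lr")

for max_depth in (4, 8):
    for learning_rate in (0.01, 0.1):
        job = aiplatform.PipelineJob(
            display_name=f"run-depth{max_depth}-lr{learning_rate}",
            template_path="pipeline.json",  # compiled pipeline spec
            parameter_values={
                "input_dataset": "gs://my-bucket/train.csv",
                "max_tree_depth": max_depth,
                "learning_rate": learning_rate,
            },
        )
        job.submit(experiment="tree-depth-vs-lr")  # all runs tracked together
```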
You received a training-serving skew alert from a Vertex AI Model Monitoring job running in production. You retrained the model with more recent training data, and deployed it back to the Vertex AI endpoint, but you are still receiving the same alert. What should you do?
A. Update the model monitoring job to use a lower sampling rate.
B. Update the model monitoring job to use the more recent training data that was used to retrain the model.
C. Temporarily disable the alert. Enable the alert again after a sufficient amount of new production traffic has passed through the Vertex AI endpoint.
D. Temporarily disable the alert until the model can be retrained again on newer training data. Retrain the model again after a sufficient amount of new production traffic has passed through the Vertex AI endpoint.
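Option B in sketch form: the skew baseline must point at the data the current model was actually trained on, otherwise the monitor keeps comparing serving traffic against the stale distribution. This is a hedged sketch of the SDK's model-monitoring surface; resource names, fields, and thresholds are placeholders:

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

# Baseline = the NEW training data used for retraining.
skew_config = model_monitoring.SkewDetectionConfig(
    data_source="bq://my-project.dataset.training_data_recent",
    target_field="label",
    skew_thresholds={"feature_1": 0.3, "feature_2": 0.3},
)

job = aiplatform.ModelDeploymentMonitoringJob(
    "projects/my-project/locations/us-central1/modelDeploymentMonitoringJobs/456"
)
job.update(objective_configs=model_monitoring.ObjectiveConfig(
    skew_detection_config=skew_config))
```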
You developed a custom model by using Vertex AI to forecast the sales of your company's products based on historical transactional data. You anticipate changes in the feature distributions and the correlations between the features in the near future. You also expect to receive a large volume of prediction requests. You plan to use Vertex AI Model Monitoring for drift detection and you want to minimize the cost. What should you do?
A. Use the features for monitoring. Set a monitoring-frequency value that is higher than the default.
B. Use the features for monitoring. Set a prediction-sampling-rate value that is closer to 1 than 0.
C. Use the features and the feature attributions for monitoring. Set a monitoring-frequency value that is lower than the default.
D. Use the features and the feature attributions for monitoring. Set a prediction-sampling-rate value that is closer to 0 than 1.
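A hedged sketch of option D; the endpoint, thresholds, and sampling rate are illustrative values:

```python
# Monitor feature drift AND feature-attribution drift (to catch the expected
# correlation changes), but sample only a fraction of the high prediction
# volume so monitoring stays cheap.
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

objective = model_monitoring.ObjectiveConfig(
    drift_detection_config=model_monitoring.DriftDetectionConfig(
        drift_thresholds={"price": 0.3},
        attribute_drift_thresholds={"price": 0.3},  # attribution drift
    ),
    explanation_config=model_monitoring.ExplanationConfig(),  # needed for attributions
)

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="sales-drift-monitor",
    endpoint="projects/my-project/locations/us-central1/endpoints/789",
    objective_configs=objective,
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.1),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=24),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["mlops@example.com"]),
)
```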
You have recently trained a scikit-learn model that you plan to deploy on Vertex AI. This model will support both online and batch prediction. You need to preprocess input data for model inference. You want to package the model for deployment while minimizing additional code. What should you do?
A. 1. Upload your model to the Vertex AI Model Registry by using a prebuilt scikit-learn prediction container.
2. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job that uses the instanceConfig.instanceType setting to transform your input data.
B. 1. Wrap your model in a custom prediction routine (CPR), and build a container image from the CPR local model.
2. Upload your scikit-learn model container to Vertex AI Model Registry.
3. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job.
C. 1. Create a custom container for your scikit-learn model.
2. Define a custom serving function for your model.
3. Upload your model and custom container to Vertex AI Model Registry.
4. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job.
D. 1. Create a custom container for your scikit-learn model.
2. Upload your model and custom container to Vertex AI Model Registry.
3. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job that uses the instanceConfig.instanceType setting to transform your input data.
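Option B hinges on the custom prediction routine (CPR) feature, which lets you add preprocessing by overriding one method while Vertex AI supplies the model server. A minimal sketch; the class name, transformation, and paths are placeholders:

```python
from google.cloud.aiplatform.prediction import LocalModel
from google.cloud.aiplatform.prediction.sklearn.predictor import SklearnPredictor

class PreprocessingPredictor(SklearnPredictor):
    """Adds input preprocessing on top of the stock scikit-learn predictor."""

    def preprocess(self, prediction_input: dict):
        instances = super().preprocess(prediction_input)  # -> numpy array
        return instances / 255.0  # placeholder transformation

# Build a serving image from the predictor; upload it to Model Registry next.
local_model = LocalModel.build_cpr_model(
    "src/",  # directory containing the predictor module
    "us-central1-docker.pkg.dev/my-project/repo/cpr-sklearn:latest",
    predictor=PreprocessingPredictor,
    requirements_path="src/requirements.txt",
)
```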
You work for a food product company. Your company's historical sales data is stored in BigQuery. You need to use Vertex AI's custom training service to train multiple TensorFlow models that read the data from BigQuery and predict future sales. You plan to implement a data preprocessing algorithm that performs min-max scaling and bucketing on a large number of features before you start experimenting with the models. You want to minimize preprocessing time, cost, and development effort. How should you configure this workflow?
A. Write the transformations in Spark using the spark-bigquery-connector, and use Dataproc to preprocess the data.
B. Write SQL queries to transform the data in-place in BigQuery.
C. Add the transformations as a preprocessing layer in the TensorFlow models.
D. Create a Dataflow pipeline that uses the BigQueryIO connector to ingest the data, process it, and write it back to BigQuery.
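Option B in sketch form: the transformations run as SQL inside BigQuery, so the data never leaves the warehouse and no extra cluster is provisioned. Table names and split points are placeholders; ML.MIN_MAX_SCALER and ML.BUCKETIZE are BigQuery ML preprocessing functions:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
query = """
CREATE OR REPLACE TABLE sales.features_preprocessed AS
SELECT
  ML.MIN_MAX_SCALER(units_sold) OVER () AS units_sold_scaled,
  ML.BUCKETIZE(unit_price, [5.0, 10.0, 20.0]) AS price_bucket,
  label
FROM sales.raw_transactions
"""
client.query(query).result()  # executes in-place in BigQuery
```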
You have created a Vertex AI pipeline that includes two steps. The first step preprocesses 10 TB of data, completes in about 1 hour, and saves the result in a Cloud Storage bucket. The second step uses the processed data to train a model. You need to update the model's code to allow you to test different algorithms. You want to reduce pipeline execution time and cost while also minimizing pipeline changes. What should you do?
A. Add a pipeline parameter and an additional pipeline step. Depending on the parameter value, the pipeline step conducts or skips data preprocessing, and starts model training.
B. Create another pipeline without the preprocessing step, and hardcode the preprocessed Cloud Storage file location for model training.
C. Configure a machine with more CPU and RAM from the compute-optimized machine family for the data preprocessing step.
D. Enable caching for the pipeline job, and disable caching for the model training step.
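A hedged sketch of option D with the KFP v2 SDK; both components are stand-ins for the real steps:

```python
from kfp import dsl

@dsl.component
def preprocess(data_path: str) -> str:
    # stands in for the 1-hour, 10 TB preprocessing step
    return data_path + "/processed"

@dsl.component
def train(processed_data: str):
    # stands in for the model code you keep changing
    print(f"training on {processed_data}")

@dsl.pipeline(name="train-pipeline")
def pipeline(data_path: str):
    prep = preprocess(data_path=data_path)  # unchanged inputs -> cache hit
    step = train(processed_data=prep.output)
    step.set_caching_options(False)         # training always re-executes
```

When the job is submitted with aiplatform.PipelineJob, leaving enable_caching unset respects these per-step settings, so only the edited training step re-runs.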
You work for a bank. You have created a custom model to predict whether a loan application should be flagged for human review. The input features are stored in a BigQuery table. The model is performing well, and you plan to deploy it to production. Due to compliance requirements, the model must provide explanations for each prediction. You want to add this functionality to your model code with minimal effort and provide explanations that are as accurate as possible. What should you do?
A. Create an AutoML tabular model by using the BigQuery data with integrated Vertex Explainable AI.
B. Create a BigQuery ML deep neural network model and use the ML.EXPLAIN_PREDICT method with the num_integral_steps parameter.
C. Upload the custom model to Vertex AI Model Registry and configure feature-based attribution by using sampled Shapley with input baselines.
D. Update the custom serving container to include sampled Shapley-based explanations in the prediction outputs.
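A hedged sketch of option C; the model name, artifact location, baselines, and path count are placeholders:

```python
# Register the custom model with a feature-based explanation spec that uses
# sampled Shapley, so Vertex Explainable AI attaches attributions to
# predictions without changes to the model code itself.
from google.cloud import aiplatform

parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}
)
metadata = aiplatform.explain.ExplanationMetadata(
    inputs={"income": {"input_baselines": [0.0]}},  # baseline per input feature
    outputs={"flag_for_review": {}},
)

model = aiplatform.Model.upload(
    display_name="loan-review-model",
    artifact_uri="gs://my-bucket/model/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"),
    explanation_parameters=parameters,
    explanation_metadata=metadata,
)
```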