You are developing an event-driven application. You have created a topic to receive messages sent to Pub/Sub. You want those messages to be processed in real time. You need the application to be independent from any other system and only incur compute costs when new messages arrive. You want to configure the simplest and most efficient architecture. What should you do?
A. Deploy your code on Cloud Functions. Use a Pub/Sub trigger to invoke the Cloud Function. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read messages from it.
B. Deploy your code on Cloud Functions. Use a Pub/Sub trigger to handle new messages in the topic.
C. Deploy the application on Google Kubernetes Engine. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read messages from it.
D. Deploy the application on Compute Engine. Use a Pub/Sub push subscription to process new messages in the topic.
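For reference, a minimal sketch of what option B looks like in practice, assuming a Python runtime; the function and topic names below are illustrative and not part of the question.

```python
# main.py - sketch of a Pub/Sub-triggered Cloud Function (option B).
# Deployed with a Pub/Sub trigger, e.g.:
#   gcloud functions deploy process_message --runtime python311 --trigger-topic my-topic
import base64


def process_message(event, context):
    """Runs each time a message is published to the configured topic.

    event["data"] carries the base64-encoded message body; context holds
    event metadata such as the event ID and timestamp.
    """
    if "data" in event:
        payload = base64.b64decode(event["data"]).decode("utf-8")
        print(f"Processing message: {payload}")
```

Because the trigger delivers each message to the function directly, no pull subscription or always-on compute is needed, which is what makes option B simpler and cheaper than options A, C, or D.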
You are planning to migrate a MySQL database to Cloud SQL, the managed database service on Google Cloud. You have Compute Engine virtual machine instances that will connect to this Cloud SQL instance. You do not want to whitelist IPs for the Compute Engine instances to be able to access Cloud SQL.
What should you do?
A. Enable private IP for the Cloud SQL instance.
B. Whitelist a project to access Cloud SQL, and add Compute Engine instances in the whitelisted project.
C. Create a role in Cloud SQL that allows access to the database from external instances, and assign the Compute Engine instances to that role.
D. Create a Cloud SQL instance in one project. Create Compute Engine instances in a different project. Create a VPN between these two projects to allow internal access to Cloud SQL.
Your code is running on Cloud Functions in project A. It is supposed to write an object to a Cloud Storage bucket owned by project B. However, the write call is failing with the error "403 Forbidden".
What should you do to correct the problem?
A. Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.
B. Grant your user account the roles/iam.serviceAccountUser role for the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account.
C. Grant the PROJECTA@appspot.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage bucket.
D. Enable the Cloud Storage API in project B.
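To make the failure mode concrete, here is a hedged sketch of the kind of write the function in project A might perform; the bucket and object names are made up. The call runs under the function's runtime service account, so it keeps returning 403 until that service account is granted roles/storage.objectCreator on the project B bucket (option C).

```python
# Sketch of the cross-project write performed by the Cloud Function in project A.
# Bucket and object names are illustrative.
from google.cloud import storage


def write_result(event, context):
    client = storage.Client()  # authenticates as the function's runtime service account
    bucket = client.bucket("project-b-bucket")  # bucket owned by project B
    blob = bucket.blob("results/output.json")
    # Fails with "403 Forbidden" until the runtime service account has
    # roles/storage.objectCreator (or broader) on this bucket.
    blob.upload_from_string('{"status": "ok"}', content_type="application/json")
```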
You have an application that uses an HTTP Cloud Function to process user activity from both desktop browser and mobile application clients. This function will serve as the endpoint for all metric submissions using HTTP POST. Due to legacy restrictions, the function must be mapped to a domain that is separate from the domain requested by users on web or mobile sessions. The domain for the Cloud Function is https://fn.example.com. Desktop and mobile clients use the domain https://www.example.com. You need to add a header to the function's HTTP response so that only those browser and mobile sessions can submit metrics to the Cloud Function.
Which response header should you add?
A. Access-Control-Allow-Origin: *
B. Access-Control-Allow-Origin: https://*.example.com
C. Access-Control-Allow-Origin: https://fn.example.com
D. Access-Control-Allow-Origin: https://www.example.com
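For context, a hedged sketch of how the function behind https://fn.example.com might return the header from option D; the function name and handler details are assumptions. Restricting the allowed origin to https://www.example.com lets pages served from that site POST metrics cross-origin, while requests from other origins are blocked by the browser's CORS checks.

```python
# Sketch of an HTTP Cloud Function that only allows cross-origin requests
# from https://www.example.com (option D). Function name is illustrative.
def submit_metrics(request):
    allowed_origin = "https://www.example.com"

    # Answer the browser's CORS preflight request.
    if request.method == "OPTIONS":
        headers = {
            "Access-Control-Allow-Origin": allowed_origin,
            "Access-Control-Allow-Methods": "POST",
            "Access-Control-Allow-Headers": "Content-Type",
            "Access-Control-Max-Age": "3600",
        }
        return ("", 204, headers)

    # Handle the actual metric submission and echo the allowed origin back.
    headers = {"Access-Control-Allow-Origin": allowed_origin}
    metrics = request.get_json(silent=True)  # submitted via HTTP POST
    print(f"Received metrics: {metrics}")
    return ("ok", 200, headers)
```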
You are developing a new web application using Cloud Run and committing code to Cloud Source Repositories. You want to deploy new code in the most efficient way possible. You have already created a Cloud Build YAML file that builds a container and runs the following command: gcloud run deploy. What should you do next?
A. Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a Pub/Sub trigger that runs the build file when an event is published to the topic.
B. Create a build trigger that runs the build file in response to code being pushed to the repository's development branch.
C. Create a webhook build trigger that runs the build file in response to HTTP POST calls to the webhook URL.
D. Create a Cron job that runs the following command every 24 hours: gcloud builds submit.
Your team is writing a backend application to implement the business logic for an interactive voice response (IVR) system that will support a payroll application. The IVR system has the following technical characteristics:
- Each customer phone call is associated with a unique IVR session.
- The IVR system creates a separate persistent gRPC connection to the backend for each session.
- If the connection is interrupted, the IVR system establishes a new connection, causing a slight latency for that call.
You need to determine which compute environment should be used to deploy the backend application. Using current call data, you determine that:
- Call duration ranges from 1 to 30 minutes.
- Calls are typically made during business hours.
- There are significant spikes of calls around certain known dates (e.g., pay days), or when large payroll changes occur.
You want to minimize cost, effort, and operational overhead. Where should you deploy the backend application?
A. Compute Engine
B. Google Kubernetes Engine cluster in Standard mode
C. Cloud Functions
D. Cloud Run
You migrated your applications to Google Cloud Platform and kept your existing monitoring platform. You now find that your notification system is too slow for time-critical problems.
What should you do?
A. Replace your entire monitoring platform with Stackdriver.
B. Install the Stackdriver agents on your Compute Engine instances.
C. Use Stackdriver to capture and alert on logs, then ship them to your existing platform.
D. Migrate some traffic back to your old platform and perform AB testing on the two platforms concurrently.
Your team recently deployed an application on Google Kubernetes Engine (GKE). You are monitoring your application and want to be alerted when the average memory consumption of your containers is under 20% or above 80%. How should you configure the alerts?
A. Create a Cloud Function that consumes the Monitoring API. Create a schedule to trigger the Cloud Function hourly and alert you if the average memory consumption is outside the defined range.
B. In Cloud Monitoring, create an alerting policy to notify you if the average memory consumption is outside the defined range.
C. Create a Cloud Function that runs on a schedule, executes kubectl top on all the workloads on the cluster, and sends an email alert if the average memory consumption is outside the defined range.
D. Write a script that pulls the memory consumption of the instance at the OS level and sends an email alert if the average memory consumption is outside the defined range.
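Option B is normally configured in the Cloud Monitoring console, but the same policy can also be created with the client library. The sketch below is an assumption-laden illustration: the GKE memory metric, thresholds, and aggregation window are chosen for this scenario, and a notification channel would still need to be attached to the policy.

```python
# Hedged sketch: creating the option B alerting policy with the Cloud Monitoring
# client library. Project ID, metric filter, and windows are illustrative.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2


def create_memory_alert(project_id: str) -> None:
    client = monitoring_v3.AlertPolicyServiceClient()
    memory_filter = (
        'resource.type = "k8s_container" AND '
        'metric.type = "kubernetes.io/container/memory/limit_utilization"'
    )

    def threshold(name, comparison, value):
        # One condition per bound: fire when mean utilization crosses `value`.
        return monitoring_v3.AlertPolicy.Condition(
            display_name=name,
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter=memory_filter,
                comparison=comparison,
                threshold_value=value,
                duration=duration_pb2.Duration(seconds=300),
                aggregations=[
                    monitoring_v3.Aggregation(
                        alignment_period=duration_pb2.Duration(seconds=300),
                        per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_MEAN,
                    )
                ],
            ),
        )

    policy = monitoring_v3.AlertPolicy(
        display_name="Container memory outside 20-80% range",
        combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
        conditions=[
            threshold("Memory above 80%", monitoring_v3.ComparisonType.COMPARISON_GT, 0.8),
            threshold("Memory below 20%", monitoring_v3.ComparisonType.COMPARISON_LT, 0.2),
        ],
    )
    client.create_alert_policy(name=f"projects/{project_id}", alert_policy=policy)
```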
You have an analytics application that runs hundreds of queries on BigQuery every few minutes using the BigQuery API. You want to find out how long these queries take to execute. What should you do?
A. Use Stackdriver Monitoring to plot slot usage.
B. Use Stackdriver Trace to plot API execution time.
C. Use Stackdriver Trace to plot query execution time.
D. Use Stackdriver Monitoring to plot query execution times.
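Option D relies on the BigQuery query execution time metrics exposed to Cloud Monitoring (Stackdriver Monitoring). As a complementary, hedged illustration of where those timings come from, the per-job start and end timestamps are also available from the BigQuery client library; the SQL below is just an example.

```python
# Sketch: reading an individual query's execution time from its job metadata.
# The aggregate view across hundreds of queries is what Monitoring charts for you.
from google.cloud import bigquery


def report_query_duration() -> None:
    client = bigquery.Client()
    job = client.query("SELECT COUNT(*) FROM `bigquery-public-data.samples.shakespeare`")
    job.result()  # wait for the query to finish
    duration = job.ended - job.started  # both are datetimes on the finished job
    print(f"Query {job.job_id} took {duration.total_seconds():.2f}s")
```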
Your application is deployed on hundreds of Compute Engine instances in a managed instance group (MIG) in multiple zones. You need to deploy a new instance template to fix a critical vulnerability immediately, but you must avoid impact to your service. Which setting should you change on the MIG after updating the instance template?
A. Set the Max Surge to 100%.
B. Set the Update mode to Opportunistic.
C. Set the Maximum Unavailable to 100%.
D. Set the Minimum Wait time to 0 seconds.