Exam Details

  • Exam Code: PROFESSIONAL-CLOUD-DEVELOPER
  • Exam Name: Professional Cloud Developer
  • Certification: Google Certifications
  • Vendor: Google
  • Total Questions: 254 Q&As
  • Last Updated: May 17, 2025

Google Certifications PROFESSIONAL-CLOUD-DEVELOPER Questions & Answers

  • Question 171:

    You are designing a chat room application that will host multiple rooms and retain the message history for each room. You have selected Firestore as your database. How should you represent the data in Firestore?

    A. Create a collection for the rooms. For each room, create a document that lists the contents of the messages.

    B. Create a collection for the rooms. For each room, create a collection that contains a document for each message.

    C. Create a collection for the rooms. For each room, create a document that contains a collection of documents, each of which contains a message.

    D. Create a collection for the rooms, and create a document for each room. Create a separate collection for messages, with one document per message. Each room's document contains a list of references to the messages.
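For context on the options above, Firestore addresses every document by a slash-separated path, so the room/message layouts being compared correspond to different path shapes. A minimal sketch of the nested (subcollection) layout, with hypothetical helper names:

```python
# Sketch only: builds Firestore-style document paths for a layout where each
# room document owns a "messages" subcollection. The helper names are
# hypothetical; Firestore itself addresses documents by paths like these.

def room_doc_path(room_id: str) -> str:
    """Path of a room document inside the top-level 'rooms' collection."""
    return f"rooms/{room_id}"

def message_doc_path(room_id: str, message_id: str) -> str:
    """Path of a message document inside one room's 'messages' subcollection."""
    return f"{room_doc_path(room_id)}/messages/{message_id}"

print(message_doc_path("general", "msg001"))  # rooms/general/messages/msg001
```

Note that in this layout each message is its own document, so a room's message history can grow without hitting the per-document size limit.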

  • Question 172:

    You are running a web application on Google Kubernetes Engine that you inherited. You want to determine whether the application is using libraries with known vulnerabilities or is vulnerable to XSS attacks. Which service should you use?

    A. Google Cloud Armor

    B. Debugger

    C. Web Security Scanner

    D. Error Reporting

  • Question 173:

    Your organization has recently begun an initiative to replatform their legacy applications onto Google Kubernetes Engine. You need to decompose a monolithic application into microservices. Multiple instances have read and write access to a configuration file, which is stored on a shared file system. You want to minimize the effort required to manage this transition, and you want to avoid rewriting the application code. What should you do?

    A. Create a new Cloud Storage bucket, and mount it via FUSE in the container.

    B. Create a new persistent disk, and mount the volume as a shared PersistentVolume.

    C. Create a new Filestore instance, and mount the volume as an NFS PersistentVolume.

    D. Create a new ConfigMap and volumeMount to store the contents of the configuration file.
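As a reference for the Kubernetes mechanics mentioned in option C, an NFS-backed PersistentVolume pointing at a Filestore share takes roughly the following shape (all names, the server IP, and the share path are placeholders, not a statement of the correct answer):

```yaml
# Illustrative only: an NFS PersistentVolume/PersistentVolumeClaim pair.
# The IP and share path would come from the Filestore instance.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany        # shared read/write, as the scenario requires
  nfs:
    server: 10.0.0.2       # placeholder: Filestore instance IP
    path: /vol1            # placeholder: Filestore share name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Ti
```

Pods then mount the claim at the path the application already expects, so the code reading the configuration file does not change.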

  • Question 174:

    Your application is controlled by a managed instance group. You want to share a large read-only data set between all the instances in the managed instance group. You want to ensure that each instance can start quickly and can access the data set via its filesystem with very low latency. You also want to minimize the total cost of the solution.

    What should you do?

    A. Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem using Cloud Storage FUSE.

    B. Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the instance via a startup script.

    C. Move the data to a Compute Engine persistent disk, and attach the disk in read-only mode to multiple Compute Engine virtual machine instances.

    D. Move the data to a Compute Engine persistent disk, take a snapshot, create multiple disks from the snapshot, and attach each disk to its own instance.

  • Question 175:

    You want to create "fully baked" or "golden" Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do?

    A. Embed the appropriate database connection string in the image. Create a different image for each environment.

    B. When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.

    C. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, read the "DATABASE" environment variable, and use the value to connect to the appropriate database.

    D. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, query the metadata server for the "DATABASE" value, and use the value to connect to the appropriate database.
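The metadata-server mechanism referenced in the options can be sketched as follows. This only resolves from inside a Compute Engine VM; "DATABASE" is the custom metadata key named in the question, and the function name is hypothetical:

```python
# Sketch: reading a custom metadata value from the GCE metadata server.
# Works only on a Compute Engine VM, where metadata.google.internal resolves.
import urllib.request

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/attributes/DATABASE"
)

def read_database_connection_string() -> str:
    # The Metadata-Flavor header is mandatory; the server rejects
    # requests that omit it.
    req = urllib.request.Request(
        METADATA_URL, headers={"Metadata-Flavor": "Google"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Note the distinction the options turn on: instance metadata is served over HTTP by the metadata server; it is not automatically exposed as environment variables inside the VM.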

  • Question 176:

    Your operations team has asked you to create a script that lists the Cloud Bigtable, Memorystore, and Cloud SQL databases running within a project. The script should allow users to submit a filter expression to limit the results presented. How should you retrieve the data?

    A. Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Combine the results, and then apply the filter to display the results

    B. Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Filter the results individually, and then combine them to display the results

    C. Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use a filter within the application, and then display the results

    D. Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use --filter flag with each command, and then display the results
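To make the difference between options C and D concrete: gcloud accepts a --filter flag, so a user-supplied filter expression can be pushed into each command rather than applied in application code. A small sketch that only builds the command strings (running them would require an authenticated gcloud CLI, and the commands are quoted from the question as-is):

```python
# Sketch: append a server-side --filter expression to the three list
# commands from the question. Command construction only; nothing is executed.
import shlex

def build_list_commands(filter_expr: str) -> list[str]:
    base_commands = [
        "gcloud bigtable instances list",
        "gcloud redis instances list",
        "gcloud sql databases list",
    ]
    # shlex.quote keeps characters like '*' and ':' safe for a shell.
    return [f"{cmd} --filter={shlex.quote(filter_expr)}" for cmd in base_commands]

for cmd in build_list_commands("name:prod-*"):
    print(cmd)
```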

  • Question 177:

    You are deploying a microservices application to Google Kubernetes Engine (GKE) that will broadcast livestreams. You expect unpredictable traffic patterns and large variations in the number of concurrent users. Your application must meet the following requirements:

    • Scales automatically during popular events and maintains high availability

    • Is resilient in the event of hardware failures

    How should you configure the deployment parameters? (Choose two.)

    A. Distribute your workload evenly using a multi-zonal node pool.

    B. Distribute your workload evenly using multiple zonal node pools.

    C. Use cluster autoscaler to resize the number of nodes in the node pool, and use a Horizontal Pod Autoscaler to scale the workload.

    D. Create a managed instance group for Compute Engine with the cluster nodes. Configure autoscaling rules for the managed instance group.

    E. Create alerting policies in Cloud Monitoring based on GKE CPU and memory utilization. Ask an on-duty engineer to scale the workload by executing a script when CPU and memory usage exceed predefined thresholds.
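For reference, the workload-level autoscaling mentioned in option C is expressed as a HorizontalPodAutoscaler manifest like the one below (all names and thresholds are placeholders; node-level cluster autoscaling is enabled on the GKE node pool itself, not in this manifest):

```yaml
# Illustrative HPA: scales the Deployment's replica count on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: livestream-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: livestream          # placeholder Deployment name
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The HPA adds or removes Pods; if the Pods no longer fit, the cluster autoscaler adds nodes to the pool, so the two mechanisms operate at different layers.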

  • Question 178:

    You are developing an application that consists of several microservices running in a Google Kubernetes Engine cluster. One microservice needs to connect to a third-party database running on-premises. You need to store credentials to the database and ensure that these credentials can be rotated while following security best practices. What should you do?

    A. Store the credentials in a sidecar container proxy, and use it to connect to the third-party database.

    B. Configure a service mesh to allow or restrict traffic from the Pods in your microservice to the database.

    C. Store the credentials in an encrypted volume mount, and associate a Persistent Volume Claim with the client Pod.

    D. Store the credentials as a Kubernetes Secret, and use the Cloud Key Management Service plugin to handle encryption and decryption.
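For context on the Secret mechanism named in option D, a minimal manifest looks like the following (names and values are placeholders; on GKE, application-layer secrets encryption with Cloud KMS is configured on the cluster, not inside this manifest):

```yaml
# Illustrative Secret holding database credentials. Rotation means updating
# this object and letting the consuming Pods pick up the new values.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: app-user        # placeholder
  password: change-me       # placeholder
```

Pods consume the Secret as environment variables or a mounted volume, which keeps the credentials out of container images and application code.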

  • Question 179:

    Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once to avoid duplication of data and any data conflicts. You need to use the cheapest and most simple solution. What should you do?

    A. Process the messages with a Dataproc job, and write the output to storage.

    B. Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.

    C. Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data.

    D. Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.
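Since Pub/Sub offers at-least-once delivery by default, the scenario hinges on removing redelivered duplicates somewhere. A toy, in-memory sketch of deduplicating by message ID (a real pipeline would need to do this durably and at scale):

```python
# Sketch: drop redelivered Pub/Sub-style messages by message ID.
# In-memory only, for illustration of the exactly-once requirement.

def deduplicate(messages: list[dict]) -> list[dict]:
    """Keep the first occurrence of each message_id; drop redeliveries."""
    seen: set[str] = set()
    unique = []
    for msg in messages:
        if msg["message_id"] not in seen:
            seen.add(msg["message_id"])
            unique.append(msg)
    return unique

batch = [
    {"message_id": "m1", "data": "a"},
    {"message_id": "m2", "data": "b"},
    {"message_id": "m1", "data": "a"},  # redelivered duplicate
]
print(len(deduplicate(batch)))  # 2
```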

  • Question 180:

    You are designing a deployment technique for your new applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for both new and existing applications. You need to test against the full production load prior to launch. What should you do?

    A. Use canary deployment

    B. Use blue/green deployment

    C. Use rolling updates deployment

    D. Use A/B testing with traffic mirroring during deployment

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important, and more and more enterprises require them when you apply for a job. But how do you prepare for the exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and where do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Google exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are confused about your PROFESSIONAL-CLOUD-DEVELOPER exam preparation or your Google certification application, do not hesitate to visit Vcedump.com to find your solutions.