Google PROFESSIONAL-CLOUD-DATABASE-ENGINEER Online Practice Questions and Exam Preparation
PROFESSIONAL-CLOUD-DATABASE-ENGINEER Exam Details
Exam Code: PROFESSIONAL-CLOUD-DATABASE-ENGINEER
Exam Name: Google Cloud Certified - Professional Cloud Database Engineer
Certification: Google Certifications
Vendor: Google
Total Questions: 132 Q&As
Last Updated: Jan 19, 2026
Google PROFESSIONAL-CLOUD-DATABASE-ENGINEER Online Questions & Answers
Question 1:
You want to migrate your PostgreSQL database from another cloud provider to Cloud SQL. You plan on using Database Migration Service and need to assess the impact of any known limitations. What should you do? (Choose two.)
A. Identify whether the database has over 512 tables.
B. Identify all tables that do not have a primary key.
C. Identify all tables that do not have at least one foreign key.
D. Identify whether the source database is encrypted using the pgcrypto extension.
E. Identify whether the source database uses customer-managed encryption keys (CMEK).
C. Identify all tables that do not have at least one foreign key.
E. Identify whether the source database uses customer-managed encryption keys (CMEK).
Question 2:
Your organization is currently updating an existing corporate application that is running in another public cloud to access managed database services in Google Cloud. The application will remain in the other public cloud while the database is migrated to Google Cloud. You want to follow Google-recommended practices for authentication. You need to minimize user disruption during the migration. What should you do?
A. Use workload identity federation to impersonate a service account.
B. Ask existing users to set their Google password to match their corporate password.
C. Migrate the application to Google Cloud, and use Identity and Access Management (IAM).
D. Use Google Workspace Password Sync to replicate passwords into Google Cloud.
A. Use workload identity federation to impersonate a service account.
Explanation/Reference:
Updating passwords represents user disruption, so eliminate B. Eliminate C because the application must remain in the other public cloud. D doesn't make sense, which leaves A. From Google's documentation: "Traditionally, applications running outside Google Cloud can use service account keys to access Google Cloud resources. However, service account keys are powerful credentials, and can present a security risk if they are not managed correctly. With identity federation, you can use Identity and Access Management (IAM) to grant external identities IAM roles, including the ability to impersonate service accounts. This approach eliminates the maintenance and security burden associated with service account keys." https://cloud.google.com/iam/docs/workload-identity-federation
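For illustration only (the file name, project ID, and API calls below are assumptions, not part of the question), an application running in the other cloud could load an external-account credential configuration instead of a service account key:

```python
# A minimal sketch, assuming a workload identity federation credential
# configuration file (wif-credentials.json, a placeholder name) was
# already generated with `gcloud iam workload-identity-pools
# create-cred-config` for the external cloud provider. The project ID
# is also a placeholder.
import google.auth
from googleapiclient import discovery

# Load external-account credentials; no service account key is involved.
credentials, _ = google.auth.load_credentials_from_file(
    "wif-credentials.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

# The client exchanges the external identity token for short-lived
# credentials that impersonate the configured service account.
sqladmin = discovery.build("sqladmin", "v1beta4", credentials=credentials)
instances = sqladmin.instances().list(project="my-project").execute()
for item in instances.get("items", []):
    print(item["name"])
```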
Question 3:
Your organization has a busy transactional Cloud SQL for MySQL instance. Your analytics team needs access to the data so they can build monthly sales reports. You need to provide data access to the analytics team without adversely affecting performance. What should you do?
A. Create a read replica of the database, provide the database IP address, username, and password to the analytics team, and grant read access to required tables to the team.
B. Create a read replica of the database, enable the cloudsql.iam_authentication flag on the replica, and grant read access to required tables to the analytics team.
C. Enable the cloudsql.iam_authentication flag on the primary database instance, and grant read access to required tables to the analytics team.
D. Provide the database IP address, username, and password of the primary database instance to the analytics team, and grant read access to required tables to the team.
B. Create a read replica of the database, enable the cloudsql.iam_authentication flag on the replica, and grant read access to required tables to the analytics team.
Explanation/Reference:
"Read replicas do not have the cloudsql.iam_authentication flag enabled automatically when it is enabled on the primary instance." https://cloud.google.com/sql/docs/postgres/replication/create-replica#configure_iam_replicas
Question 4:
You are configuring a brand new PostgreSQL database instance in Cloud SQL. Your application team wants to have an optimal and highly available environment with automatic failover to avoid any unplanned outage. What should you do?
A. Create one regional Cloud SQL instance with a read replica in another region.
B. Create one regional Cloud SQL instance in one zone with a standby instance in another zone in the same region.
C. Create two read-write Cloud SQL instances in two different zones with a standby instance in another region.
D. Create two read-write Cloud SQL instances in two different regions with a standby instance in another zone.
B. Create one regional Cloud SQL instance in one zone with a standby instance in another zone in the same region.
Explanation/Reference:
This answer is correct because it meets the requirements of an optimal and highly available environment with automatic failover. According to the Google Cloud documentation, a regional Cloud SQL instance has a primary server in one zone and a standby server in another zone within the same region. The primary and standby servers are kept in sync using synchronous replication, which ensures zero data loss and minimal downtime in case of a zonal outage or an instance failure. If the primary server becomes unavailable, Cloud SQL automatically fails over to the standby server, which becomes the new primary server.
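As a rough sketch (not part of the question), the regional configuration can be requested through the Cloud SQL Admin API; the project ID, instance name, region, and machine tier below are placeholders:

```python
# Assumes the Cloud SQL Admin API is enabled and application-default
# credentials are available in the environment.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

body = {
    "name": "orders-db",
    "databaseVersion": "POSTGRES_14",
    "region": "us-central1",
    "settings": {
        "tier": "db-custom-2-7680",
        # REGIONAL creates a primary in one zone and a synchronous standby
        # in another zone of the same region, with automatic failover.
        "availabilityType": "REGIONAL",
    },
}

operation = sqladmin.instances().insert(project="my-project", body=body).execute()
print(operation["name"])  # long-running operation name
```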
Question 5:
You are migrating a telehealth care company's on-premises data center to Google Cloud. The migration plan specifies:
PostgreSQL databases must be migrated to a multi-region backup configuration with cross-region replicas to allow restore and failover in multiple scenarios.
MySQL databases handle personally identifiable information (PII) and require data residency compliance at the regional level.
You want to set up the environment with minimal administrative effort. What should you do?
A. Set up Cloud Logging and Cloud Monitoring with Cloud Functions to send an alert every time a new database instance is created, and manually validate the region.
B. Set up different organizations for each database type, and apply policy constraints at the organization level.
C. Set up Pub/Sub to ingest data from Cloud Logging, send an alert every time a new database instance is created, and manually validate the region.
D. Set up different projects for PostgreSQL and MySQL databases, and apply organizational policy constraints at a project level.
D. Set up different projects for PostgreSQL and MySQL databases, and apply organizational policy constraints at a project level.
Explanation/Reference:
Organization policy constraints such as gcp.resourceLocations can be applied at the project level to restrict the regions where resources, including database instances, can be created. Placing the PostgreSQL and MySQL databases in separate projects lets you enforce regional data residency for the MySQL (PII) workloads while still allowing a multi-region configuration for PostgreSQL, with no manual validation required.
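As a rough sketch of that last step, assuming the google-cloud-org-policy client library; the project ID and the location value group are placeholders:

```python
from google.cloud import orgpolicy_v2

client = orgpolicy_v2.OrgPolicyClient()

# Restrict where resources may be created in the MySQL (PII) project.
policy = orgpolicy_v2.Policy(
    name="projects/mysql-pii-project/policies/gcp.resourceLocations",
    spec=orgpolicy_v2.PolicySpec(
        rules=[
            orgpolicy_v2.PolicySpec.PolicyRule(
                values=orgpolicy_v2.PolicySpec.PolicyRule.StringValues(
                    allowed_values=["in:us-locations"],  # placeholder value group
                )
            )
        ]
    ),
)

client.create_policy(parent="projects/mysql-pii-project", policy=policy)
```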
Question 6:
You are running a large, highly transactional application on Oracle Real Application Cluster (RAC) that is multi-tenant and uses shared storage. You need a solution that ensures high-performance throughput and a low-latency connection between applications and databases. The solution must also support existing Oracle features and provide ease of migration to Google Cloud. What should you do?
A. Migrate to Compute Engine.
B. Migrate to Bare Metal Solution for Oracle.
C. Migrate to Google Kubernetes Engine (GKE).
D. Migrate to Google Cloud VMware Engine.
B. Migrate to Bare Metal Solution for Oracle.
Explanation/Reference:
Oracle is neither licensed nor supported on Compute Engine. The only Google Cloud platform that supports RAC and all existing Oracle features is Bare Metal Solution.
Question 7:
Your customer has a global chat application that uses a multi-regional Cloud Spanner instance. The application has recently experienced degraded performance after a new version of the application was launched. Your customer asked you for assistance. During initial troubleshooting, you observed high read latency. What should you do?
A. Use query parameters to speed up frequently executed queries.
B. Change the Cloud Spanner configuration from multi-region to single region.
C. Use SQL statements to analyze SPANNER_SYS.READ_STATS* tables.
D. Use SQL statements to analyze SPANNER_SYS.QUERY_STATS* tables.
C. Use SQL statements to analyze SPANNER_SYS.READ_STATS* tables.
Explanation/Reference:
To troubleshoot high read latency, you can use SQL statements to analyze the SPANNER_SYS.READ_STATS* tables. These tables contain statistics about read operations in Cloud Spanner, including the number of reads, read latency, and the number of read errors. By analyzing these tables, you can identify the cause of the high read latency and take appropriate action to resolve the issue. Other options, such as using query parameters to speed up frequently executed queries or changing the Cloud Spanner configuration from multi-region to single region, may not be directly related to the issue of high read latency. Similarly, analyzing the SPANNER_SYS.QUERY_STATS* tables, which contain statistics about query operations, may not be relevant to the issue of high read latency.
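For illustration, a minimal sketch (instance and database IDs are placeholders) of pulling recent rows from one of the read-statistics tables with the Spanner client library:

```python
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("chat-instance")
database = instance.database("chat-db")

# Inspect recent read statistics; column details are described in the
# Cloud Spanner read statistics documentation.
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(
        """
        SELECT *
        FROM SPANNER_SYS.READ_STATS_TOP_10MINUTE
        ORDER BY INTERVAL_END DESC
        LIMIT 20
        """
    )
    for row in results:
        print(row)
```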
Question 8:
You work for a financial services company that wants to use fully managed database services. Traffic volume for your consumer services products has increased annually at a constant rate with occasional spikes around holidays. You frequently need to upgrade the capacity of your database. You want to use Cloud Spanner and include an automated method to increase your hardware capacity to support a higher level of concurrency. What should you do?
A. Use linear scaling to implement the Autoscaler-based architecture.
B. Use direct scaling to implement the Autoscaler-based architecture.
C. Upgrade the Cloud Spanner instance on a periodic basis during the scheduled maintenance window.
D. Set up alerts that are triggered when Cloud Spanner utilization metrics breach the threshold, and then schedule an upgrade during the scheduled maintenance window.
A. Use linear scaling to implement the Autoscaler-based architecture.
Explanation/Reference:
Linear scaling is best used with load patterns that change more gradually or have a few large peaks. The method calculates the minimum number of nodes or processing units required to keep utilization below the scaling threshold. The number of nodes or processing units added or removed in each scaling event is not limited to a fixed step amount. https://cloud.google.com/spanner/docs/autoscaling-overview#linear
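As a rough illustration of the linear method (not the Autoscaler's actual implementation), the suggested capacity scales the current capacity by the ratio of current utilization to the scaling threshold:

```python
import math

def suggested_capacity(current_units: int, current_utilization: float,
                       utilization_threshold: float) -> int:
    """Illustrative linear-scaling calculation: grow (or shrink) processing
    units proportionally so projected utilization falls back under the
    threshold."""
    return math.ceil(current_units * current_utilization / utilization_threshold)

# Example: 1000 processing units at 80% utilization with a 65% threshold
# suggests roughly 1231 processing units (before rounding to the
# granularity Spanner accepts).
print(suggested_capacity(1000, 0.80, 0.65))  # -> 1231
```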
Question 9:
Your company is shutting down their data center and migrating several MySQL and PostgreSQL databases to Google Cloud. Your database operations team is severely constrained by ongoing production releases and the lack of capacity for additional on-premises backups. You want to ensure that the scheduled migrations happen with minimal downtime and that the Google Cloud databases stay in sync with the on-premises data changes until the applications can cut over.
What should you do? (Choose two.)
A. Use an external read replica to migrate the databases to Cloud SQL.
B. Use a read replica to migrate the databases to Cloud SQL.
C. Use Database Migration Service to migrate the databases to Cloud SQL.
D. Use a cross-region read replica to migrate the databases to Cloud SQL.
E. Use replication from an external server to migrate the databases to Cloud SQL.
C. Use Database Migration Service to migrate the databases to Cloud SQL.
E. Use replication from an external server to migrate the databases to Cloud SQL.
Explanation/Reference:
Database Migration Service performs a continuous, change-data-capture-based migration to Cloud SQL, so the target databases stay in sync with the on-premises sources until the applications cut over, minimizing downtime and the effort required from the constrained operations team. Replication from an external server likewise keeps a Cloud SQL replica continuously in sync with a source that you configure yourself.
Question 10:
You have deployed a Cloud SQL for SQL Server instance. In addition, you created a cross-region read replica for disaster recovery (DR) purposes. Your company requires you to maintain and monitor a recovery point objective (RPO) of less than 5 minutes. You need to verify that your cross-region read replica meets the allowed RPO. What should you do?
A. Use Cloud SQL instance monitoring.
B. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
C. Use Cloud SQL logs.
D. Use the SQL Server Always On Availability Group dashboard.
D. Use the SQL Server Always On Availability Group dashboard.
Explanation/Reference:
Note that you cannot create a read replica in Cloud SQL for SQL Server unless you use Enterprise Edition, which is also a requirement for configuring a SQL Server Always On Availability Group (AG). That is not a coincidence: it is how Cloud SQL for SQL Server creates SQL Server read replicas. To find out about the replication, use the AG dashboard in SQL Server Management Studio (SSMS). https://cloud.google.com/sql/docs/sqlserver/replication/manage-replicas#promote-replica
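For illustration, the same lag information that the AG dashboard visualizes can also be read from the Always On DMVs; a hedged sketch with pyodbc and placeholder connection details (secondary_lag_seconds requires SQL Server 2016 or later):

```python
import pyodbc

# Connection details are placeholders; connect to the primary instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=10.0.0.5;DATABASE=master;UID=sqlserver;PWD=***"
)

rows = conn.execute(
    """
    SELECT drs.database_id,
           drs.synchronization_state_desc,
           drs.secondary_lag_seconds
    FROM sys.dm_hadr_database_replica_states AS drs
    """
).fetchall()

for row in rows:
    # An RPO of 5 minutes means secondary_lag_seconds should stay under 300.
    print(row.database_id, row.synchronization_state_desc, row.secondary_lag_seconds)
```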
Nowadays, certification exams are becoming more important and are required by more and more enterprises when you apply for a job. But how do you prepare for the exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and where do you find the most reliable resources? Here on Vcedump.com, you will find all the answers.
Vcedump.com provides not only Google exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are unsure about your PROFESSIONAL-CLOUD-DATABASE-ENGINEER exam preparation or your Google certification application, do not hesitate to visit Vcedump.com to find your solutions.