Salesforce PDII Online Practice Questions and Exam Preparation
PDII Exam Details
Exam Code: PDII
Exam Name: Salesforce Certification for Platform Developer II
Certification: Salesforce Certifications
Vendor: Salesforce
Total Questions: 445 Q&As
Last Updated: Jan 19, 2026
Salesforce PDII Online Questions & Answers
Question 1:
Consider the following code snippet:
public with sharing class AccountsController {
@AuraEnabled
public static List<Account> getAllAccounts() {
return [SELECT Id, Name, Industry FROM Account];
}
}
As part of the deployment cycle, a developer creates the following test class:
@IsTest
private class AccountsController_Test {
@TestSetup
private static void makeData() {
User user1 = [
SELECT Id
FROM User
WHERE Profile.Name = 'System Administrator'
LIMIT 1
];
User user2 = [
SELECT Id
FROM User
WHERE Profile.Name = 'Standard User'
LIMIT 1
];
TestUtils.insertAccounts(10, user1.Id);
TestUtils.insertAccounts(20, user2.Id);
}
@IsTest
private static void testGetAllAccounts() {
// Query the Standard User into memory
List<Account> result = AccountsController.getAllAccounts();
System.assertEquals(20, result.size());
}
}
When the test class runs, the assertion fails. Which change should the developer implement in the Apex test method to ensure the test method executes successfully?
A. Add @IsTest(seeAllData=true) to line 12 and enclose lines 15 and 16 within Test.startTest() and Test.stopTest().
B. Add System.runAs(user) to line 14 and enclose line 15 within Test.startTest() and Test.stopTest().
C. Query the Standard User into memory and enclose lines 15 and 16 within the System.runAs(user) method.
D. Query the Administrator user into memory and enclose lines 15 and 16 within the System.runAs(user) method.
C. Query the Standard User into memory and enclose lines 15 and 16 within the System.runAs(user) method. explanation:
Explanation
The assertion fails because of the with sharing keyword on the AccountsController class and the user context in which the test runs. The with sharing keyword enforces the sharing rules of the current user, and unless System.runAs() specifies otherwise, the test method executes as the running user, which is typically an administrator with access to all records.
In the @TestSetup method, the developer created 10 accounts owned by a System Administrator (user1) and 20 accounts owned by a Standard User (user2). When AccountsController.getAllAccounts() is called without a specific user context, it runs as the administrator and returns all 30 records. The assertion expects exactly 20 records, which corresponds to the number of records owned by the Standard User.
To make the test pass, the developer must execute the method within the context of the Standard User. By querying the Standard User and using System.runAs(user) (Option C), the sharing rules are enforced according to that user's perspective. Under with sharing, the Standard User will only see the 20 Account records they own, assuming private organization-wide defaults and no additional sharing rules. Option A is incorrect because seeAllData=true bypasses test isolation. Option D would still result in an incorrect record count and fail the assertion.
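A minimal sketch of the corrected test method (the Standard User must be re-queried, since local variables in the @TestSetup method are not visible to test methods; private Account organization-wide defaults are assumed, as the explanation notes):

```apex
@IsTest
private static void testGetAllAccounts() {
    // Re-query the Standard User that owns 20 of the test accounts
    User standardUser = [
        SELECT Id
        FROM User
        WHERE Profile.Name = 'Standard User'
        LIMIT 1
    ];
    List<Account> result;
    // Run the controller in the Standard User's sharing context so that
    // "with sharing" restricts the query to the 20 accounts they own
    System.runAs(standardUser) {
        Test.startTest();
        result = AccountsController.getAllAccounts();
        Test.stopTest();
    }
    System.assertEquals(20, result.size());
}
```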
Question 2:
A query using OR between a Date and a RecordType is performing poorly in a Large Data Volume environment. How can the developer optimize this?
A. Break down the query into two individual queries and join the two result sets.
B. Annotate the method with the @Future annotation.
C. Use the Database.querySelector method to retrieve the accounts.
D. Create a formula field to combine the CreatedDate and RecordType value, then filter based on the formula.
A. Break down the query into two individual queries and join the two result sets. explanation:
Explanation
In SOQL, using the OR operator often prevents the Query Optimizer from using indexes effectively, especially if the fields involve different types of data. This is known as a "non-selective" query structure in LDV environments.
By breaking the query into two separate SOQL statements (Option A), the developer allows each query to be evaluated independently.
SELECT ... FROM Account WHERE CreatedDate = :thisDate (Uses the standard index on CreatedDate).
SELECT ... FROM Account WHERE RecordTypeId = :goldenRT (Uses the standard index on RecordTypeId).
The developer can then combine the results into a Map to ensure uniqueness (preventing duplicates if a record meets both criteria). This approach ensures both queries are "selective" and run much faster than a single query with an OR filter.
Option D is a common anti-pattern because formulas are generally not indexed unless they are "deterministic" and a custom index is requested from Salesforce support; even then, filtering on formulas is often slower than direct field filters. Option B and C do not address the underlying database performance issue.
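The merge the explanation describes can be sketched as follows (thisDate and goldenRT are the bind variables from the snippets above; keying a Map by record Id removes duplicates):

```apex
// Query 1: selective filter on the indexed CreatedDate field
Map<Id, Account> results = new Map<Id, Account>(
    [SELECT Id, Name FROM Account WHERE CreatedDate = :thisDate]
);
// Query 2: selective filter on the indexed RecordTypeId field.
// putAll overwrites by Id, so a record matching both filters appears once.
results.putAll([SELECT Id, Name FROM Account WHERE RecordTypeId = :goldenRT]);
List<Account> combined = results.values();
```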
Question 3:
Which code snippet represents the optimal Apex trigger logic for assigning a Lead's Region based on its PostalCode, using a custom Region__c object?
A.
Set<String> zips = new Set<String>();
for (Lead l : Trigger.new) {
    if (l.PostalCode != null) {
        zips.add(l.PostalCode);
    }
}
List<Region__c> regions = [
    SELECT Zip_Code__c, Region_Name__c
    FROM Region__c
    WHERE Zip_Code__c IN :zips
];
Map<String, String> zipMap = new Map<String, String>();
for (Region__c r : regions) {
    zipMap.put(r.Zip_Code__c, r.Region_Name__c);
}
for (Lead l : Trigger.new) {
    if (l.PostalCode != null) {
        l.Region__c = zipMap.get(l.PostalCode);
    }
}
B.
Set<String> zips = new Set<String>();
for (Lead l : Trigger.new) {
    if (l.PostalCode != null) {
        zips.add(l.PostalCode);
    }
}
for (Lead l : Trigger.new) {
    List<Region__c> regions = [
        SELECT Zip_Code__c, Region_Name__c
        FROM Region__c
        WHERE Zip_Code__c IN :zips
    ];
    for (Region__c r : regions) {
        if (l.PostalCode == r.Zip_Code__c) {
            l.Region__c = r.Region_Name__c;
        }
    }
}
C.
for (Lead l : Trigger.new) {
    Region__c reg = [
        SELECT Region_Name__c
        FROM Region__c
        WHERE Zip_Code__c = :l.PostalCode
    ];
    l.Region__c = reg.Region_Name__c;
}
D.
Set<String> zips = new Set<String>();
for (Lead l : Trigger.new) {
    if (l.PostalCode != null) {
        zips.add(l.PostalCode);
    }
}
for (Lead l : Trigger.new) {
    List<Region__c> regions = [
        SELECT Zip_Code__c, Region_Name__c
        FROM Region__c
        WHERE Zip_Code__c IN :zips
    ];
    for (Region__c r : regions) {
        if (l.PostalCode == r.Zip_Code__c) {
            l.Region__c = r.Region_Name__c;
        }
    }
}
A. The bulkified "query and map" version: collect the postal codes into a Set, run one SOQL query outside the loop, build a Map, and assign l.Region__c from the Map in a second loop. explanation:
Explanation
The optimal Apex trigger logic is defined by its bulkification and strict adherence to Salesforce governor limits. In Apex, a trigger can process up to 200 records in a single batch. If a developer places a SOQL query inside a loop, the transaction will fail with a System.LimitException if more than 100 records are processed, as the limit is 100 synchronous SOQL queries.
Option A follows the industry-standard "bulkify, query, and map" design pattern. It iterates through the input records once to gather all relevant search criteria (ZIP codes) into a Set. It then performs a single SOQL query outside of any loop to fetch all matching Region__c records for the entire batch. The results are organized into a Map that provides constant-time lookup performance. Finally, it iterates through the Leads again and assigns the Region using the Map.
Options B, C, and D are anti-patterns. Option C is the most dangerous because it executes a SOQL query for every single record. Options B and D are slightly better but still inefficient, as they execute a query inside a loop that retrieves data repeatedly. Only Option A ensures the code is efficient, bulk-safe, and remains within governor limits even under heavy data loads.
Question 4:
Universal Containers uses Salesforce to track orders in an Order__c object. The Order__c object has private organization-wide defaults. The Order__c object has a custom field, Quality_Controller__c, that is a lookup to User and is used to indicate that the specified user is performing quality control on the Order__c. What should be used to automatically give read-only access to the user set in the Quality_Controller__c field?
A. User-managed sharing
B. Record ownership
C. Apex-managed sharing
D. Criteria-based sharing
C. Apex-managed sharing explanation:
In Salesforce, when organization-wide defaults (OWD) are set to Private, users only have access to records they own or those shared with them through the role hierarchy or sharing rules. In this scenario, the requirement is to grant access dynamically based on a value in a lookup field (Quality_Controller__c).
Criteria-based sharing rules (Option D) are generally used for field values that are static or belong to a predefined set, but they cannot dynamically target a specific user identified in a lookup field on the record itself.
Apex-managed sharing is the optimal solution for this requirement. It allows developers to programmatically create sharing records to grant access to specific users or groups. When a record is created or the Quality_Controller__c field is updated, an Apex trigger can insert a record into the Order__Share object. This share record specifies the UserOrGroupId as the ID from the lookup field and the AccessLevel as Read. This approach is highly flexible and ensures that as the quality controller changes, the sharing access is updated accordingly.
User-managed sharing (Option A), often referred to as manual sharing, requires manual intervention by the record owner and cannot be fully automatic in the context of complex business logic without Apex. Therefore, Apex-managed sharing provides the necessary automation and precision for record-level security based on dynamic lookup values.
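A sketch of the trigger logic the explanation describes (intended for an after insert/after update context; the Manual row cause is a simplification, since production code would typically define a custom Apex sharing reason):

```apex
List<Order__Share> shares = new List<Order__Share>();
for (Order__c ord : Trigger.new) {
    if (ord.Quality_Controller__c != null) {
        shares.add(new Order__Share(
            ParentId = ord.Id,                         // the Order__c record being shared
            UserOrGroupId = ord.Quality_Controller__c, // the user from the lookup field
            AccessLevel = 'Read',                      // read-only access
            RowCause = Schema.Order__Share.RowCause.Manual
        ));
    }
}
// allOrNone = false so one failed share row does not roll back the batch
Database.insert(shares, false);
```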
Question 5:
Universal Containers is leading a development team that follows the source-driven development approach in Salesforce. As part of their continuous integration and delivery (CI/CD) process, they need to automatically deploy changes to multiple environments, including sandbox and production. Which mechanism or tool would best support their CI/CD pipeline in source-driven development?
A. Salesforce CLI with Salesforce DX
B. Change Sets
C. Salesforce Extensions for Visual Studio Code
D. Ant Migration Tool
A. Salesforce CLI with Salesforce DX explanation:
Source-driven development shifts the "source of truth" from the Salesforce org to a version control system (like Git). Salesforce DX (Developer Experience) was specifically designed to support this paradigm by introducing the Salesforce CLI (Command Line Interface). The Salesforce CLI is the primary engine for modern CI/CD pipelines because it allows for the scripted creation of scratch orgs, automated testing, and the deployment of source code to any environment (sandboxes, scratch orgs, or production) using simple command-line instructions.
Unlike traditional tools, the Salesforce CLI supports the "source" format, which is more granular and easier to manage in version control than the traditional metadata format used by the Ant Migration Tool. Change Sets (Option B) are manual, UI-driven tools that cannot be automated in a CI/CD pipeline and are strictly org-to-org. Salesforce Extensions for Visual Studio Code (Option C) provide a powerful IDE interface for developers, but the extensions themselves rely on the underlying Salesforce CLI to perform deployments; they are not the automation tool used by the pipeline itself. The Ant Migration Tool (Option D) is an older technology that uses the Metadata API and is generally considered legacy compared to the more flexible and powerful Salesforce DX commands. Therefore, Salesforce CLI with Salesforce DX is the optimal choice for a robust, automated source-driven deployment strategy.
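In a CI/CD job, the deployment steps the explanation describes might look like the following sketch (the org alias, username, and key file are placeholders):

```sh
# Authenticate non-interactively using the JWT flow (suitable for CI servers)
sf org login jwt --client-id $CONSUMER_KEY --jwt-key-file server.key \
    --username deploy@example.com --alias ci-org

# Deploy the source-format project and run local Apex tests
sf project deploy start --target-org ci-org --test-level RunLocalTests
```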
Question 6:
Universal Containers develops a Visualforce page that requires the inclusion of external JavaScript and CSS files. They want to ensure efficient loading and caching of the page. Which feature should be utilized to achieve this goal?
A. Static resources
B. RemoteAction
C. PageBlockTable
D. ActionFunction
A. Static resources explanation:
To optimize the performance of a Visualforce page that uses external assets, Static Resources (Option A) is the platform-native best practice. Static resources allow you to upload content such as archives (.zip or .jar files), images, stylesheets, and JavaScript files that you can reference in a Visualforce page.
The primary benefits of using Static Resources over other methods (like hosting files externally or hardcoding styles) include:
Caching: Salesforce serves static resources from a Content Delivery Network (CDN). This means once a user's browser loads the JavaScript or CSS file, it is cached locally, significantly reducing page load times for subsequent requests.
Relative Referencing: Using the $Resource global variable (e.g., {!URLFOR($Resource.MyZipFile, 'styles/main.css')}) ensures that your page references the correct version of the file across different environments (Sandbox, Production) without changing hardcoded URLs.
Efficiency: By bundling related files into a single ZIP archive as a static resource, you reduce the number of HTTP requests the browser must make to render the page.
Options B and D are related to AJAX and JavaScript-to-Apex communication, and Option C is a component used for rendering data tables; none of these address the storage or caching of external front-end assets.
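For example, a page might reference assets bundled in a hypothetical static resource named SiteAssets (a ZIP containing css/ and js/ folders):

```html
<apex:page>
    <!-- Both files are served from Salesforce's CDN and cached by the browser -->
    <apex:stylesheet value="{!URLFOR($Resource.SiteAssets, 'css/main.css')}" />
    <apex:includeScript value="{!URLFOR($Resource.SiteAssets, 'js/app.js')}" />
</apex:page>
```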
Question 7:
Universal Containers analyzes a Lightning web component and its Apex controller. Based on the code snippets, what change should be made to display the contacts' mailing addresses in the Lightning web component?
Apex controller class:
Apex
public with sharing class AccountContactsController {
@AuraEnabled
public static List<Contact> getAccountContacts(String accountId) {
return [SELECT Id, Name, Email, Phone FROM Contact WHERE AccountId = :accountId];
}
}
A. Add a new method in the Apex controller class to retrieve the mailing addresses separately.
B. Modify the SOQL query in the getAccountContacts method to include the MailingAddress field.
C. Extend the lightning-datatable component to include a column for the MailingAddress field.
D. Modify the SOQL query in the getAccountContacts method to include the MailingAddress field and update the columns attribute in the JavaScript file to add mailing address fields.
D. Modify the SOQL query in the getAccountContacts method to include the MailingAddress field and update the columns attribute in the JavaScript file to add mailing address fields. explanation:
To display new data in a lightning-datatable, changes are required at both the Data Retrieval (Apex) layer and the UI Definition (JavaScript) layer. First, the SOQL query in the Apex controller must be updated to include the address fields. While MailingAddress is a compound field in Salesforce, in a SOQL query intended for a datatable, it is often best to retrieve the individual components (e.g., MailingStreet, MailingCity, MailingState).
Second, the JavaScript controller for the LWC must be updated. The columns array in the JS file defines which data properties the lightning-datatable looks for. Even if the Apex method returns the address data, the datatable will not render it unless there is a corresponding column definition with the correct fieldName property.
Option D correctly identifies this two-step process. Option B is incomplete because the UI wouldn't know to show the new data. Option C is incorrect because you don't "extend" the component to add columns; you simply pass a configuration array. Option A is inefficient as it creates extra server round-trips for data that should be fetched in a single query. By aligning the SOQL fields with the datatable column definitions, the mailing address can be displayed seamlessly.
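On the JavaScript side, the columns array might be extended like this sketch (the labels and the choice of individual Mailing* component fields are assumptions; the Apex query must select the same fields):

```javascript
// Column configuration for lightning-datatable.
// Each fieldName must match a property on the rows returned by Apex.
const columns = [
    { label: 'Name', fieldName: 'Name' },
    { label: 'Email', fieldName: 'Email', type: 'email' },
    { label: 'Phone', fieldName: 'Phone', type: 'phone' },
    // New columns for the mailing address components
    { label: 'Mailing Street', fieldName: 'MailingStreet' },
    { label: 'Mailing City', fieldName: 'MailingCity' },
    { label: 'Mailing State', fieldName: 'MailingState' }
];
```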
Question 8:
Which Salesforce feature allows a developer to see when a user last logged in to Salesforce if real-time notification is not required?
A. Asynchronous Data Capture Events
B. Calendar Events
C. Event Monitoring Log
D. Developer Log
C. Event Monitoring Log explanation:
To perform historical auditing and analysis of user behavior, such as tracking the last login time, Salesforce provides Event Monitoring (Option C). This feature captures a wide range of "Event Types," including logins, logouts, URI (page) clicks, and API calls.
These events are stored in EventLogFile objects. Each day, Salesforce generates log files for the previous day's activity. A developer can query these files or use tools like the Salesforce Event Log File Browser to see a comprehensive history of user logins. This is the correct choice when real-time alerts are not necessary and the goal is to analyze patterns over time.
Option A refers to Change Data Capture, which is for tracking record changes, not user sessions. Option B is for the standard Salesforce Calendar. Option D (Developer Log) is used for debugging Apex and system execution in the short term and does not provide a reliable or long-term history of user login events. Event Monitoring provides the high-fidelity audit trail required for compliance and security analysis.
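Querying the stored logs might look like this sketch (the event type and date filter are illustrative):

```apex
// Each EventLogFile row represents one event type for one day;
// the LogFile field contains the base64-encoded CSV log content.
List<EventLogFile> loginLogs = [
    SELECT Id, EventType, LogDate, LogFile
    FROM EventLogFile
    WHERE EventType = 'Login' AND LogDate = LAST_N_DAYS:7
];
```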
Question 9:
A developer is responsible for formulating the deployment process for a Salesforce project. The project follows a source-driven development approach, and the developer wants to ensure efficient deployment and version control of the metadata changes. Which tool or mechanism should be utilized for managing the source- driven deployment process?
A. Data Loader
B. Change Sets
C. Salesforce CLI with Salesforce DX
D. Unmanaged Packages
C. Salesforce CLI with Salesforce DX explanation:
Source-driven development shifts the "source of truth" from the Salesforce Org to a Version Control System (like Git). To bridge the gap between local source code and the Salesforce platform, Salesforce CLI with Salesforce DX (Option C) is the required mechanism.
Salesforce DX (Developer Experience) introduced a source-centric metadata format that is more granular and easier to track in version control than the traditional Metadata API format. The Salesforce CLI provides the command-line tools necessary to automate the deployment process, create scratch orgs for isolated testing, and perform "source tracking" to identify exactly which files have changed. This is the foundation of modern CI/CD (Continuous Integration/Continuous Delivery) pipelines in the Salesforce ecosystem.
In contrast, Change Sets (Option B) are org-centric and manual, making them incompatible with automated version control. Data Loader (Option A) is for record data, not metadata. Unmanaged Packages (Option D) are used for distribution but do not support the iterative, source-controlled deployment workflow required for professional project management.
Question 10:
Universal Charities (UC) uses Salesforce to collect electronic donations in the form of credit card deductions from individuals and corporations. When a customer service agent enters the credit card information, it must be sent to a 3rd-party payment processor for the donation to be processed. UC uses one payment processor for individuals and a different one for corporations. What should a developer use to store the payment processor settings for the different payment processors, so that their system administrator can modify the settings once they are deployed, if needed?
A. Hierarchy custom setting
B. Custom label
C. Custom metadata
D. List custom setting
C. Custom metadata explanation:
For storing application configurations and integration settings that need to be easily modified by administrators and deployed across environments, Custom Metadata Types are the preferred solution. Unlike Custom Settings (Options A and D), records within a Custom Metadata Type are considered metadata rather than data. This is a critical distinction for the development lifecycle because these records can be included in Change Sets or deployment packages. This eliminates the manual overhead and risk associated with recreating configuration records in production after a sandbox deployment.
In this scenario, UC needs to manage settings for two different payment processors. A Custom Metadata Type can be created with fields for API endpoints, merchant IDs, and security keys. An administrator can then create and edit the specific records for the "Individual" and "Corporate" processors directly in the Setup menu. Furthermore, Custom Metadata queries are efficient and do not count against standard SOQL governor limits in many contexts. While Custom Labels (Option B) are useful for translating text, they are not intended for complex, structured configuration data. Hierarchy Custom Settings are designed for user-specific overrides, which is not applicable here. Therefore, Custom Metadata provides the most robust, deployable, and administrator-friendly way to manage external service configurations.
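Reading such settings at runtime might look like this sketch (the Payment_Processor__mdt type and its fields are hypothetical):

```apex
// getInstance fetches one record by DeveloperName without consuming a SOQL query
Payment_Processor__mdt corporate = Payment_Processor__mdt.getInstance('Corporate');
String endpoint = corporate.API_Endpoint__c;

// SOQL against custom metadata also does not count toward the
// per-transaction SOQL query limit
List<Payment_Processor__mdt> processors = [
    SELECT DeveloperName, API_Endpoint__c, Merchant_Id__c
    FROM Payment_Processor__mdt
];
```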
Certification exams have become increasingly important and are required by more and more employers. But how do you prepare for an exam effectively, in a short time and with less effort? How do you achieve a good result and find the most reliable resources? Here on Vcedump.com, you will find the answers. Vcedump.com provides not only Salesforce exam questions, answers, and explanations, but also complete assistance with your exam preparation and certification application. If you are unsure about your PDII exam preparation or Salesforce certification application, do not hesitate to visit Vcedump.com to find your solutions.