by HelloMavens
Salesforce Security Review
A security assessment of your Salesforce environment against the Security Benchmark for Salesforce (SBS).
Section 1
Executive summary
A one-page read on where you stand.
See the Appendix for what the passes mean and why some controls weren't applicable →
Risk tier breakdown
Top critical findings
- SBS-ACS-003 — Respondent attests the `Approve Uninstalled Connected Apps` permission is NOT properly restricted with documented justification per holder. End-users with this permission can self-approve OAuth grants.
- SBS-ACS-006 — Respondent attests the `Use Any API Client` permission is NOT restricted to a few highly-trusted users with documented justification. This permission bypasses Connected App allow-listing.
- SBS-AUTH-001 — Respondent attests the org-wide SSO enforcement setting is NOT enabled. Without it, users can still authenticate with Salesforce passwords, bypassing the IdP.
Risk by business impact
- Data breach exposure: 17 access / authentication / data-protection controls failed.
- Compliance gap: 9 categories scored below 65 / 100 — likely weak spots in audit conversations.
Section 2
Category scorecards
One card per SBS category. Each shows the 0–100 score plus the OWASP and regulation citations relevant to that category.
Section 3
Remediation detail
Every failed control with what to fix, why it matters, how to fix it, and how to verify the fix worked.
Respondent attests the `Approve Uninstalled Connected Apps` permission is NOT properly restricted with documented justification per holder. End-users with this permission can self-approve OAuth grants.
The Approve Uninstalled Connected Apps permission allows users to bypass Connected App usage restrictions and self-authorize any OAuth application without administrator approval. This establishes an uncontrolled security boundary: users with this permission can grant external applications access to Salesforce data without oversight, enabling data exfiltration, unauthorized integrations, and potential account compromise. Unlike other permissions that require additional failures to exploit, this permission directly enables unauthorized third-party access the moment it is misassigned—making it a primary security boundary that must be tightly controlled.
- Remove the `Approve Uninstalled Connected Apps` permission from any profile, permission set, or permission set group that lacks a documented justification or is assigned to end-users.
- For any authorization that legitimately requires this permission (e.g., administrators or developers testing connected apps), add or update the rationale in the system of record to clearly justify the need and identify the specific role or use case.
- Ensure that connected apps required for business operations are properly installed and allowlisted rather than relying on this permission for end-user access.
- Reconcile and update the system of record to ensure complete and accurate inventory of all assignments of this permission.
- Enumerate all profiles, permission sets, and permission set groups that include the `Approve Uninstalled Connected Apps` permission using Salesforce Setup, Metadata API, Tooling API, or an automated scanner.
- Compare the enumerated list against the organization's designated system of record for this permission.
- Verify that every profile, permission set, and permission set group granting `Approve Uninstalled Connected Apps` has a corresponding entry in the system of record.
- Confirm that each entry includes:
  - A clear business or technical justification for requiring this permission,
  - Identification of the user role or persona (e.g., administrator, developer, integration manager),
  - Any applicable exception or approval documentation, and
  - Confirmation that the use case is limited to testing or managing connected app integrations.
- Verify that the permission is not assigned to end-user profiles or permission sets intended for general business users.
- Flag as noncompliant any authorizations lacking documentation, justification, or assigned to unauthorized user populations.
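The enumerate-and-reconcile steps above reduce to a straightforward comparison between observed grants and documented entries. The following is an illustrative Python sketch; the record shapes and the `system_of_record` structure are assumptions, not an actual Salesforce API response:

```python
# Illustrative reconciliation of permission grants against a system of record.
# In practice, `grants` would come from the Metadata/Tooling API and
# `system_of_record` from your GRC tool; these shapes are assumptions.

def reconcile_grants(grants, system_of_record):
    """Return the names of grants lacking a documented justification.

    grants: list of dicts like {"name": ..., "type": "PermissionSet"}
    system_of_record: dict mapping grant name -> {"justification": ..., "persona": ...}
    """
    noncompliant = []
    for grant in grants:
        entry = system_of_record.get(grant["name"])
        # An entry must exist and carry both a justification and a persona.
        if entry is None or not entry.get("justification") or not entry.get("persona"):
            noncompliant.append(grant["name"])
    return noncompliant

grants = [
    {"name": "Integration_Admin", "type": "PermissionSet"},
    {"name": "Sales_User", "type": "Profile"},
]
sor = {
    "Integration_Admin": {
        "justification": "Developers testing connected apps",
        "persona": "developer",
    }
}
print(reconcile_grants(grants, sor))  # ['Sales_User'] — no entry in the system of record
```

The same reconciliation pattern applies to any permission covered by this benchmark; only the enumeration query changes.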
Respondent attests the `Use Any API Client` permission is NOT restricted to a few highly-trusted users with documented justification. This permission bypasses Connected App allow-listing.
The Use Any API Client permission allows users to bypass API Access Control entirely, authorizing any OAuth-connected application without requiring it to be pre-vetted or allowlisted. This establishes an uncontrolled security boundary: users with this permission can grant data access to arbitrary external applications, enabling data exfiltration, unauthorized integrations, and potential account compromise without administrator oversight. Granting this permission to unauthorized personnel completely defeats the purpose of API Access Control, creating a direct path to unauthorized third-party access that requires no other control to fail.
- Remove the `Use Any API Client` permission from any profile, permission set, or permission set group that lacks a documented justification or is assigned to end-users.
- For any authorization that legitimately requires this permission (e.g., administrators or developers testing connected apps), add or update the rationale in the system of record to clearly justify the need and identify the specific role or use case.
- Ensure that connected apps required for business operations are properly vetted and allowlisted rather than relying on this permission for end-user access.
- Reconcile and update the system of record to ensure complete and accurate inventory of all assignments of this permission.
- Enumerate all profiles, permission sets, and permission set groups that include the `Use Any API Client` permission using Salesforce Setup, Metadata API, Tooling API, or an automated scanner.
- Compare the enumerated list against the organization's designated system of record for this permission.
- Verify that every profile, permission set, and permission set group granting `Use Any API Client` has a corresponding entry in the system of record.
- Confirm that each entry includes:
  - A clear business or technical justification for requiring this permission,
  - Identification of the user role or persona (e.g., administrator, developer, integration manager),
  - Any applicable exception or approval documentation, and
  - Confirmation that the use case is limited to testing or managing connected app integrations.
- Verify that the permission is not assigned to end-user profiles or permission sets intended for general business users.
- Flag as noncompliant any authorizations lacking documentation, justification, or assigned to unauthorized user populations.
Respondent attests the org-wide SSO enforcement setting is NOT enabled. Without it, users can still authenticate with Salesforce passwords, bypassing the IdP.
Without the org-level SSO enforcement setting enabled, users can authenticate directly to Salesforce using local credentials—creating a parallel authentication path outside centralized identity management. This establishes an uncontrolled security boundary: password-based attacks (credential stuffing, phishing, brute force) can target Salesforce directly, enabling unauthorized access without requiring any other control to fail. Attackers bypass organizational identity controls, MFA policies, and session management enforced at the IdP layer. This setting is the primary technical control that establishes the SSO security boundary.
- Navigate to Setup → Single Sign-On Settings.
- Enable the "Disable login with Salesforce credentials" setting.
- Validate that SSO is properly configured and functional before enabling this setting to prevent lockout.
- Ensure approved break-glass or administrative accounts have the "Is Single Sign-On Enabled" permission removed via their profiles or permission sets so they can still authenticate if needed.
- Retrieve `SingleSignOnSettings` (part of `SecuritySettings`) via Metadata API or navigate to Setup → Single Sign-On Settings in the UI.
- Verify that `isLoginWithSalesforceCredentialsDisabled` is set to `true`.
- Flag the org if the setting is not enabled.
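The metadata check above can be scripted against a retrieved settings file. In this illustrative Python sketch, the element names follow the control text in this section; verify them against your Metadata API version before relying on it:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for a retrieved SecuritySettings metadata file; element
# names follow the control text above and should be verified per API version.
SETTINGS_XML = """<?xml version="1.0" encoding="UTF-8"?>
<SecuritySettings xmlns="http://soap.sforce.com/2006/04/metadata">
    <singleSignOnSettings>
        <isLoginWithSalesforceCredentialsDisabled>false</isLoginWithSalesforceCredentialsDisabled>
    </singleSignOnSettings>
</SecuritySettings>"""

NS = "{http://soap.sforce.com/2006/04/metadata}"

def sso_enforced(xml_text):
    """True only when the org disables login with Salesforce credentials."""
    root = ET.fromstring(xml_text)
    node = root.find(f"{NS}singleSignOnSettings/{NS}isLoginWithSalesforceCredentialsDisabled")
    return node is not None and node.text == "true"

print(sso_enforced(SETTINGS_XML))  # False -> flag the org
```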
Respondent attests external users with access to sensitive data are NOT required to use strong multi-factor authentication. External-user MFA gaps are a frequent source of customer data breaches.
Without enforced multi-factor authentication, external users with substantial access to sensitive data can authenticate using only a password—establishing a single point of failure for the authentication boundary. External users present elevated credential risk due to weaker identity proofing, less organizational oversight, and exposure to consumer-grade phishing attacks. Attackers who compromise a single password through phishing, credential stuffing, or account takeover gain direct access to sensitive data without requiring any other control to fail. This creates an unprotected authentication path to high-value data that bypasses the defense-in-depth protections applied to internal users.
- Apply the “Multi-Factor Authentication for User Interface Logins” permission through profiles or permission sets for all active external users with substantial access to sensitive data.
- Configure suitable strong second-factor options in Setup → Identity → Identity Verification (e.g., authenticator app, FIDO2 security key).
- Enumerate all active external human users with substantial access to sensitive data.
- Validate that in-scope users have the “Multi-Factor Authentication for User Interface Logins” permission through profiles or permission sets.
- Flag any in-scope users who lack the “Multi-Factor Authentication for User Interface Logins” permission.
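The in-scope filter above can be sketched as a simple predicate. These user records are illustrative; in practice they would be built from a SOQL query joining users to their profile and permission set assignments:

```python
# Flag in-scope external users missing the MFA permission. The record shape
# is an assumption for illustration, not a real Salesforce query result.

def flag_missing_mfa(users):
    return [
        u["username"]
        for u in users
        # In scope: active external users with substantial sensitive-data access.
        if u["is_external"] and u["has_sensitive_access"] and not u["has_mfa_permission"]
    ]

users = [
    {"username": "partner@acme.example", "is_external": True,
     "has_sensitive_access": True, "has_mfa_permission": False},
    {"username": "employee@corp.example", "is_external": False,
     "has_sensitive_access": True, "has_mfa_permission": False},
]
print(flag_missing_mfa(users))  # only the external user is flagged
```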
Respondent attests they cannot confirm application logs are free of passwords, tokens, or sensitive personal data. Log exfiltration is a canonical breach vector.
When application logs capture sensitive data, attackers who compromise low-privilege accounts with Read access to log storage can exfiltrate credentials, PII, or regulated data without triggering access controls on the original source objects—transforming a logging framework into a data leakage vector. In regulated industries, a compromised administrator querying log objects can extract thousands of customer records in minutes, with the audit trail showing only "legitimate" queries. During breach investigations, logs become evidence of regulatory violations rather than forensic tools, triggering consent orders and significant financial penalties.
- Implement mechanisms to prevent sensitive data from being written to logs:
```apex
public class SecureLogger {
    public static void logInfo(String message, Map<String, Object> context) {
        // Sanitize context before logging
        Map<String, Object> sanitized = sanitizeContext(context);
        Logger.info(message, sanitized);
    }

    private static Map<String, Object> sanitizeContext(Map<String, Object> ctx) {
        Map<String, Object> result = new Map<String, Object>();
        for (String key : ctx.keySet()) {
            // Mask sensitive fields, log IDs instead of full records
            if (key.containsIgnoreCase('password') ||
                key.containsIgnoreCase('token') ||
                key.containsIgnoreCase('ssn')) {
                result.put(key, '***REDACTED***');
            } else if (ctx.get(key) instanceof SObject) {
                result.put(key, ((SObject) ctx.get(key)).Id);
            } else {
                result.put(key, ctx.get(key));
            }
        }
        return result;
    }
}
```

- Audit existing log records in custom objects and purge Salesforce debug logs containing sensitive data.
- Update logging calls to avoid capturing sensitive data:
```apex
// BAD - logs full account with SSN field
System.debug('Processing: ' + acc);
Logger.info('Processing account', new Map<String, Object>{'account' => acc});

// GOOD - logs only record ID
System.debug('Processing account: ' + acc.Id);
SecureLogger.logInfo('Processing account', new Map<String, Object>{
    'accountId' => acc.Id,
    'recordCount' => 1
});
```

- Consider implementing compensating controls such as automated testing that validates log outputs for sensitive data patterns, code review checks for logging security, or static analysis rules that detect common sensitive data exposure patterns.
- Sample representative Apex classes from high-risk areas (customer-facing functionality, payment processing, authentication flows) to identify logging statements in both custom frameworks and `System.debug()` calls.
- Examine log message construction to detect patterns that may capture the types of sensitive data listed above.
- Query recent log records stored in custom objects and review Salesforce debug logs to inspect actual log content for sensitive data:
  - Search for patterns matching SSNs, credit card numbers, email addresses, phone numbers
  - Identify authentication tokens, session IDs, or API keys in log messages
  - Flag any log records containing regulated data or PII
- Verify that mechanisms exist to prevent sensitive data from being logged (such as sanitization functions, code review checks, or automated validation).
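The pattern search described above might look like the following Python sketch. The regexes are deliberately minimal illustrations; a production scanner needs broader coverage (tokens, API keys, locale-specific formats):

```python
import re

# Scan log lines for data that should never appear in logs.
# Patterns are simplified for illustration.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_log(lines):
    """Return (line_number, pattern_label) pairs for suspect log lines."""
    hits = []
    for i, line in enumerate(lines, start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((i, label))
    return hits

log = [
    "Processing account: 001XXXXXXXXXXXX",   # ID only - clean
    "Customer SSN 123-45-6789 verified",     # regulated data - flag
]
print(scan_log(log))  # [(2, 'ssn')]
```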
Respondent attests they cannot confirm portal Apex methods are free of parameter-based record access. This is the canonical IDOR (insecure direct object reference) vector for portals.
When portal-exposed methods accept user-controlled parameters to determine record access, external users can manipulate these inputs to bypass intended access controls and exfiltrate unauthorized data. By iterating record IDs, modifying query filters, or injecting field names, attackers gain direct access to records beyond their sharing-based permissions—even when proper sharing keywords are declared. This vulnerability is especially severe in customer portal contexts where thousands of external users with adversarial intent can systematically enumerate organizational data. A single method accepting a record ID parameter from the frontend creates a SQL injection-equivalent vulnerability in the Salesforce trust model—allowing attackers to read, modify, or delete data without authentication or authorization checks, independent of any other security control failures. This constitutes a Critical boundary violation: unauthorized users access data they should never see, with no compensating controls required to fail.
- Refactor portal-exposed methods to eliminate all parameters that control record access, query scope, or field selection.
- Implement server-side logic that determines accessible records based on the running user's context:
```apex
@AuraEnabled
public static Account getMyAccount() {
    Id userId = UserInfo.getUserId();
    User currentUser = [SELECT ContactId FROM User WHERE Id = :userId];
    Contact portalContact = [SELECT AccountId FROM Contact WHERE Id = :currentUser.ContactId];
    return [SELECT Id, Name FROM Account WHERE Id = :portalContact.AccountId];
}
```

- For methods that must operate on specific records, validate ownership using `UserRecordAccess` before returning data:

```apex
List<UserRecordAccess> access = [
    SELECT RecordId, HasReadAccess
    FROM UserRecordAccess
    WHERE UserId = :UserInfo.getUserId() AND RecordId = :recordId
];
if (access.isEmpty() || !access[0].HasReadAccess) {
    throw new AuraHandledException('Access denied');
}
```

- Establish code review requirements that specifically check for parameter-based access control in portal-exposed methods.
- Identify all Apex classes containing `@AuraEnabled` methods or other methods exposed to customer portal sites.
- For each method, examine the parameter list to identify parameters of type `Id`, `String`, `List<Id>`, `List<String>`, `Set<Id>`, or `Map<String, Object>` that could control record access.
- Review the method implementation to determine if parameters influence SOQL query construction (WHERE clauses, record IDs, field selection, relationship traversal).
- Verify that methods derive record access from `UserInfo.getUserId()` or related user context rather than accepting frontend-supplied identifiers.
- Conduct penetration testing by attempting to pass unauthorized record IDs or manipulated parameters from portal user sessions.
- Flag any method that accepts user-supplied parameters controlling record access as noncompliant.
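A rough first pass over the identification steps above can be automated. This Python sketch only surfaces candidate methods from Apex source text; a human reviewer still has to read each method body, and the regex is an illustrative approximation:

```python
import re

# Find @AuraEnabled methods whose signatures accept Id/String-style
# parameters that could steer record access. Approximate by design:
# it surfaces candidates for review, not confirmed findings.
AURA_METHOD = re.compile(
    r"@AuraEnabled[^(]*\bstatic\s+\S+\s+(\w+)\s*\(([^)]*)\)"
)
RISKY_TYPES = ("Id", "String", "List<Id>", "List<String>", "Set<Id>")

def risky_methods(apex_source):
    flagged = []
    for name, params in AURA_METHOD.findall(apex_source):
        if any(p.strip().startswith(RISKY_TYPES)
               for p in params.split(",") if p.strip()):
            flagged.append(name)
    return flagged

source = """
@AuraEnabled
public static Account getAccount(Id accountId) { return null; }
@AuraEnabled
public static Account getMyAccount() { return null; }
"""
print(risky_methods(source))  # ['getAccount']
```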
Respondent attests guest users are NOT properly restricted to login/signup-only access. Guest user breaches are the most public class of Salesforce data leaks.
Guest users represent the highest-risk trust boundary in Salesforce portals—they are unauthenticated, have zero accountability, generate minimal audit trail, and operate with potential adversarial intent. When guest users are granted object permissions or can invoke custom Apex methods, attackers can systematically enumerate organizational data without even creating an account. Historical Salesforce security updates have repeatedly addressed guest user permission defaults because vendors consistently misconfigure this boundary. A single guest-accessible method that queries user records, cases, accounts, or custom objects creates a public API for data exfiltration accessible to anyone on the internet. This constitutes a Critical boundary violation: unauthenticated attackers access organizational data with no authentication required.
- Remove all object-level permissions from guest user profiles except those explicitly required for authentication flows.
- Audit and remove guest user access to any custom Apex methods that query or return organizational data.
- For public data requirements (knowledge articles, case submission), implement service layer pattern:
```apex
@AuraEnabled
public static List<Knowledge__kav> getPublicArticles() {
    if (UserInfo.getUserType() == 'Guest') {
        // Allowlist-based, no parameters accepted
        return [
            SELECT Id, Title, Summary
            FROM Knowledge__kav
            WHERE PublicationStatus = 'Online' AND IsVisibleInPkb = true
            LIMIT 10
        ];
    }
    throw new AuraHandledException('Access denied');
}
```

- Implement network-level rate limiting and CAPTCHA for guest-accessible endpoints.
- Review Salesforce security updates and apply guest user permission restrictions from recent releases.
- Identify all guest user profiles used by customer portal sites (typically named "Site Guest User" or similar).
- Review object-level permissions for guest user profiles and verify that all business-related standard and custom objects have Read, Create, Edit, Delete permissions set to disabled.
- Enumerate all custom Apex classes containing `@AuraEnabled` methods and verify that none are accessible to guest users (either by checking profile permissions or testing invocation from guest context).
- For any guest-accessible functionality beyond authentication flows, verify implementation of service layer architecture with explicit access controls.
- Test by accessing the portal without authentication and attempting to invoke Apex methods or query objects via built-in Lightning controllers.
- Flag any guest user object permissions or method access as noncompliant.
Respondent attests they do NOT scan source repositories for committed secrets. Leaked Salesforce credentials are a leading source of supply-chain breaches.
Exposed Salesforce credentials in source repositories represent a direct path to unauthorized production access—a supply chain attack vector that bypasses all other access controls. Contractors, consultants, or any party with repository access can extract hardcoded tokens and authenticate directly to production orgs with the full permissions of the deployment identity. Attackers who compromise source control systems or CI/CD infrastructure gain immediate access to production Salesforce environments. Unlike other credential exposures, Salesforce access tokens often have broad administrative permissions and long validity periods, making them high-value targets. Organizations cannot detect this exposure through Salesforce audit logs alone—the attacker authenticates with valid credentials, and their activity appears legitimate.
- Enable secret scanning on all repositories containing Salesforce code, metadata, or deployment configurations using platform-native tools or third-party secret scanning solutions.
- Configure scanning rules to detect Salesforce-specific credential patterns in addition to general secrets.
- Implement pre-commit hooks or CI checks that block commits containing detected secrets.
- Immediately rotate any Salesforce access tokens, refresh tokens, or credentials that have been committed to version control—even if subsequently removed, as they persist in git history.
- Migrate credential storage to secure secrets management solutions (e.g., CI/CD platform secrets, vault systems) and remove all hardcoded credentials from repositories.
- Establish a periodic rotation schedule for Salesforce deployment credentials to limit the window of exposure if a secret is leaked.
- Identify all repositories containing Salesforce metadata, SFDX projects, deployment scripts, or CI/CD pipeline configurations.
- Verify that automated secret scanning is enabled on each repository—either through the source control platform's native capabilities (e.g., GitHub Secret Scanning, GitLab Secret Detection) or through third-party tooling.
- Confirm that the scanning configuration includes patterns for Salesforce-specific secrets (access tokens, refresh tokens, consumer keys/secrets, session IDs).
- Review scanning logs or dashboards to verify the tool is actively running and producing results.
- Verify that detected secrets trigger alerts and block merges or deployments until remediated.
- Flag noncompliance if any Salesforce-related repository lacks active secret scanning coverage.
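Salesforce-specific detection rules for the scanning configuration above can be sketched as regexes. Both patterns here are approximations (the session-ID pattern assumes the org-ID prefix and `!` separator; the `force://` form is the SFDX auth URL shape) and should be tuned against real token formats before use:

```python
import re

# Approximate Salesforce-specific secret patterns; tune before relying on
# them, and pair with generic entropy-based detection.
SF_PATTERNS = {
    # Org ID prefix (00D...), '!' separator, then the token body.
    "session_id": re.compile(r"\b00D[A-Za-z0-9]{12,15}![A-Za-z0-9._+/=-]{20,}"),
    # SFDX auth URL shape: force://clientId:secret:refreshToken@instance
    "sfdx_auth_url": re.compile(r"force://[^\s\"']+"),
}

def find_salesforce_secrets(text):
    return sorted(label for label, pat in SF_PATTERNS.items() if pat.search(text))

sample = 'SF_SESSION="00D000000000001ABC!AQ4AQFakeTokenValue1234567890abcd"'
print(find_salesforce_secrets(sample))  # ['session_id']
```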
Respondent attests at least some Connected Apps are authorized ad-hoc by individual users rather than formally installed. Ad-hoc OAuth grants bypass admin oversight.
Without formal installation, Connected Apps operate outside organizational control—inheriting security configuration from the external app developer rather than the Salesforce administrator. This establishes an unmanaged security boundary: refresh token lifetimes, session policies, and IP restrictions cannot be enforced, allowing tokens to persist indefinitely and enabling unauthorized access from any location. Attackers who compromise a user-authorized OAuth token gain persistent access that administrators cannot revoke or constrain through standard Connected App policies.
- Formally install any connected app that appears only as a user-authorized OAuth connection.
- Configure the installed connected app's policies, including refresh token and session security settings.
- Remove the user-authorized OAuth connections that are now superseded by the installed connected app.
- Enumerate all user-authorized OAuth connected apps via Setup or the Tooling/Metadata API.
- Identify all connected apps that are not formally installed as managed or unmanaged connected apps.
- Flag any connected app that is used but not formally installed as noncompliant.
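The comparison in the last two steps is a set difference: apps observed in OAuth token usage with no matching installed Connected App record. An illustrative Python sketch, with app names as placeholders:

```python
# Apps seen in user-authorized OAuth usage that have no corresponding
# installed Connected App. Names are illustrative placeholders.

def uninstalled_apps(oauth_app_names, installed_app_names):
    return sorted(set(oauth_app_names) - set(installed_app_names))

used = ["Slack Integration", "Shadow ETL Tool", "Slack Integration"]
installed = ["Slack Integration"]
print(uninstalled_apps(used, installed))  # ['Shadow ETL Tool']
```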
Respondent attests at least one Connected App is set to "available to all users" rather than gated by profile or permission set.
Without explicit profile or permission set access control, Connected Apps may allow any user in the org to authenticate—bypassing the principle of least privilege and creating an uncontrolled access boundary. This enables unauthorized users to establish OAuth sessions with external systems, potentially exfiltrating data or performing actions beyond their intended scope. The lack of access scoping also eliminates audit visibility into who is authorized to use each integration, preventing detection of unauthorized access patterns.
- For each connected app lacking profile or permission set access control, create or update profiles or permission sets to define which users are authorized to access the app.
- Assign the appropriate profiles or permission sets to the connected app configuration.
- Verify that no users can access the connected app without explicit authorization.
- Enumerate all formally installed connected apps via Setup or the Metadata API.
- For each installed connected app, verify that access is granted only through assigned profiles or permission sets.
- Flag any connected app that lacks access scoping via profiles or permission sets as noncompliant.
Respondent attests they do NOT maintain a documented permission set model. Auditors expect a written, enforced model in a system of record.
Without a documented and enforced permission set model, organizations lose visibility into their authorization structure—accumulating ad hoc permission constructs created for one-time needs that are never reviewed or removed. This results in privilege sprawl, inconsistent access patterns, and inability to audit who has what access and why. Security teams cannot assess authorization posture, detect drift, or investigate access-related incidents when no authoritative model exists to compare against. The lack of continuous enforcement means unauthorized or excessive permissions can persist indefinitely without detection.
- Update or deprecate noncompliant profiles, permission sets, and permission set groups to align with the documented permission set model.
- Migrate users off legacy or misaligned authorization constructs.
- Implement or enhance automated enforcement to ensure continuous alignment with the defined model.
- Update the system-of-record documentation as the model changes.
- Obtain the organization's documented permission set model from the designated system of record.
- Enumerate all Profiles, Permission Sets, and Permission Set Groups using Salesforce Setup, Metadata API, or Tooling API.
- Compare each enumerated item against the documented model to determine whether:
- Its purpose or persona aligns with the model.
- Its included permissions conform to the model's structure and boundaries.
- Its naming and classification match the documented conventions.
- Identify any profiles, permission sets, or permission set groups that do not conform to the model.
- Verify that the organization has a process or automation that enforces model compliance in near real time (e.g., continuous scanning, pipelines, or governance workflows).
Respondent attests they do NOT have documented justification for every `API Enabled` user. Programmatic access without documented need expands the attack surface.
Without documented justification for API-enabled authorizations, organizations lose visibility into which users and systems can programmatically access Salesforce data at scale. The API Enabled permission enables large-scale data extraction, bulk modification, and automated operations—capabilities that create significant exposure when granted without oversight. Undocumented API access paths accumulate over time, preventing security teams from assessing data exfiltration risk, investigating suspicious API activity, or enforcing least privilege across automated access patterns.
- Remove the `API Enabled` permission from any profile, permission set, or permission set group that lacks a documented justification and is not required for business operations.
- For any authorization that legitimately requires API access, add or update the rationale in the system of record to clearly justify the need.
- Reconcile and update the system of record to ensure complete and accurate inventory of all API-enabled authorizations.
- Enumerate all profiles, permission sets, and permission set groups that include the `API Enabled` permission using Salesforce Setup, Metadata API, Tooling API, or an automated scanner.
- Compare the enumerated list against the organization's designated system of record for API-enabled authorizations.
- Verify that every profile, permission set, and permission set group granting `API Enabled` has a corresponding entry in the system of record.
- Confirm that each entry includes:
  - A clear business or technical justification for API access, and
  - Any applicable exception or approval documentation.
- Flag as noncompliant any authorizations lacking documentation or justification.
Respondent attests they do NOT have documented justification for super-admin-equivalent users.
Without documented justification for Super Admin–equivalent users, organizations lose visibility into who possesses unrestricted access to the entire Salesforce environment. These users can read and modify all data, manage user accounts, and alter the security posture of the org without oversight. Undocumented Super Admin access prevents security teams from assessing breach impact, investigating administrative actions, or maintaining accountability for the most sensitive operations. The inability to identify and justify these users also prevents effective access reviews and creates persistent exposure from forgotten or orphaned administrative accounts.
- Remove one or more of the Super Admin–equivalent permissions from any user who does not have a documented business or technical justification.
- For users who legitimately require this level of access, add or update rationale within the system of record.
- Reassess user access to ensure alignment with least privilege, reducing broad permissions where narrower privileges are sufficient.
- Enumerate all users who simultaneously possess the following permissions through any profile, permission set, or permission set group:
  - `View All Data`
  - `Modify All Data`
  - `Manage Users`
- Compile a list of all users meeting the criteria for Super Admin–equivalent access.
- Compare the list against the organization’s system of record.
- Verify that each Super Admin–equivalent user has corresponding documentation that includes:
  - A clear business or technical justification for requiring this level of access, and
  - Any relevant exception or approval records.
- Flag as noncompliant any users with Super Admin–equivalent access lacking documentation or justification.
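The enumeration step above is a three-way set intersection over effective permission holders. An illustrative Python sketch, where the grants map (permission name to usernames holding it through any construct) is an assumed input shape:

```python
# Users holding all three Super Admin-equivalent permissions at once.
# `grants` maps permission name -> set of usernames holding it through
# any profile, permission set, or permission set group (assumed shape).
REQUIRED = ("ViewAllData", "ModifyAllData", "ManageUsers")

def super_admin_equivalents(grants):
    holders = [grants.get(p, set()) for p in REQUIRED]
    return sorted(set.intersection(*holders))

grants = {
    "ViewAllData": {"admin@corp.example", "analyst@corp.example"},
    "ModifyAllData": {"admin@corp.example"},
    "ManageUsers": {"admin@corp.example", "it@corp.example"},
}
print(super_admin_equivalents(grants))  # ['admin@corp.example']
```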
Respondent attests they have active users on the standard `Standard User` profile, which cannot be tightened safely without breaking other users on the same profile.
Standard profiles are managed by Salesforce, not the organization—meaning Salesforce can enable permissions and object access on these profiles when features are released or platform updates occur without administrator approval. This creates an uncontrolled change vector: users assigned to standard profiles may gain new capabilities unexpectedly, bypassing the organization's authorization governance. Standard profiles are also overly permissive by default (e.g., "Standard User" grants "View Setup," "System Administrator" grants developer-level permissions), making it impossible to enforce least privilege. Without custom profiles, organizations cannot investigate authorization changes or maintain accountability for who approved which permissions.
- Set up a custom profile for each standard profile that is in use.
- Manage permissions and object access on these profiles to comply with the other controls of the SBS.
- Assign the new custom profiles to your active users, following the principle of least-privilege access.
- Enumerate all active human users (`IsActive = true` on the User record).
- Flag as noncompliant any user assigned a standard profile (`IsCustom = false` on the profile metadata).
Respondent attests they do NOT maintain a current inventory of non-human accounts. Without an inventory, no other NHI control can be enforced.
Without a comprehensive inventory of non-human identities, organizations cannot detect, investigate, or respond to security incidents involving automated access. Non-human identities are frequently created for integrations or automation projects and then forgotten—accumulating as orphaned accounts with persistent credentials and elevated access. Security teams cannot assess which automated systems access Salesforce data, identify compromised integration credentials, or scope the impact of a vendor breach. This loss of visibility prevents effective governance of automated access and creates persistent security exposure from untracked machine accounts.
- Query Salesforce to identify all potential non-human identities using the criteria in the audit procedure
- For each identified identity, document: name, type (integration/bot/API), purpose, business owner, creation date
- Establish a process to update the inventory when non-human identities are created, modified, or deactivated
- Implement quarterly reviews of the inventory to identify and deactivate unused accounts
- Store the inventory in an authoritative system of record accessible to security and compliance teams
- Request the organization's inventory of non-human identities
- Query Salesforce for all users where `IsActive = true` and any of the following conditions apply:
- Username contains "integration", "api", "bot", "automation", or "service"
- Profile name contains "Integration", "API", or similar indicators
- User has "API Only User" permission enabled
- User is associated with Einstein Bot or Flow automation
- Compare the inventory to the query results to identify discrepancies
- Verify the inventory includes: identity name, type, purpose, business owner, creation date, and last login date
- Confirm the inventory is reviewed and updated at least quarterly
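The query criteria above can be sketched as a simple heuristic filter. The keyword lists here are illustrative and should be tuned to the org's actual naming conventions; field names mirror the Salesforce User object:

```python
# Sketch: heuristic detection of potential non-human identities (NHIs).
# The NAME_HINTS list mirrors the audit criteria above and is illustrative.
NAME_HINTS = ("integration", "api", "bot", "automation", "service")

def looks_non_human(user):
    """Apply the inventory-detection heuristics to one user record."""
    if not user.get("IsActive"):
        return False
    username = user.get("Username", "").lower()
    profile = user.get("ProfileName", "").lower()
    return (
        any(h in username for h in NAME_HINTS)
        or any(h in profile for h in ("integration", "api"))
        or user.get("ApiOnly", False)  # "API Only User" permission enabled
    )

users = [
    {"Username": "billing-integration@example.com", "IsActive": True,
     "ProfileName": "Standard User", "ApiOnly": False},
    {"Username": "jane@example.com", "IsActive": True,
     "ProfileName": "Sales", "ApiOnly": False},
    {"Username": "etl@example.com", "IsActive": True,
     "ProfileName": "API Integration", "ApiOnly": True},
]
candidates = [u["Username"] for u in users if looks_non_human(u)]
```

The result is a candidate list to reconcile against the inventory, not a verdict: human accounts with unlucky names will surface here and need manual triage.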
Respondent attests their non-human accounts have broader permissions than they need. Over-privileged service accounts are a leading source of breach blast-radius.
Without documented justification for broad non-human identity privileges, organizations lose visibility into which automated systems can bypass sharing rules or perform administrative operations. Non-human identities operate without human judgment, making over-privileged automation a high-impact target—compromised credentials can result in complete data extraction, system-wide configuration changes, or persistent backdoor access. Many non-human identities are granted excessive permissions during initial setup and never reviewed, creating long-lived security exposure that security teams cannot detect, investigate, or remediate without knowing which identities have which privileges and why.
- For each non-human identity with broad privileges, evaluate whether the permission is genuinely required for the identity's function
- Remove broad privileges that are not necessary; replace with more granular permissions where possible
- For non-human identities that legitimately require broad privileges, document:
- Specific business function requiring the permission
- Why more granular permissions cannot satisfy the requirement
- Business owner and technical owner
- Approval from security or compliance team
- Implement a formal approval process for granting broad privileges to non-human identities
- Establish periodic review (at least annually) of all non-human identities with broad privileges
- Using the non-human identity inventory from SBS-ACS-006, identify all non-human identities
- For each non-human identity, query assigned permissions through profiles, permission sets, and permission set groups
- Flag any non-human identity with one or more of the following permissions:
- View All Data
- Modify All Data
- Manage Users
- Author Apex
- Customize Application
- Any permission that bypasses sharing rules or grants administrative access
- For each flagged identity, verify that documented business justification exists explaining why the permission is required
- Confirm the justification was approved by appropriate stakeholders (security, compliance, or management)
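Assuming permission assignments and justifications have been exported, the flagging logic above might look like this sketch (the permission API names are the standard Salesforce ones; the data shapes are illustrative):

```python
# Sketch: flag non-human identities holding broad permissions without
# documented justification. BROAD_PERMISSIONS mirrors the list above.
BROAD_PERMISSIONS = {
    "ViewAllData", "ModifyAllData", "ManageUsers",
    "AuthorApex", "CustomizeApplication",
}

def audit_nhi_permissions(nhi_permissions, justifications):
    """Return {identity: broad perms} for identities lacking justification."""
    findings = {}
    for identity, perms in nhi_permissions.items():
        broad = BROAD_PERMISSIONS & set(perms)
        if broad and identity not in justifications:
            findings[identity] = sorted(broad)
    return findings

findings = audit_nhi_permissions(
    {"etl-bot": ["ModifyAllData", "ApiEnabled"],
     "report-bot": ["ApiEnabled"]},
    justifications={},  # no approvals on file yet
)
```

In practice the permission map must aggregate profile, permission set, and permission set group assignments before this comparison runs.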
Respondent attests access changes do NOT go through a documented approval process with an audit trail. Without governance, the permission model drifts silently.
Without enforced governance over access changes, organizations lose visibility and control over how privileges are granted and modified. Ad hoc access changes increase the risk of excessive privileges, unauthorized access, and violations of least-privilege principles. The absence of approval, justification, or auditability impairs incident investigation, undermines access reviews, and weakens compliance evidence for audits involving identity, access management, and change control.
- Establish and document a formal governance process for access and authorization changes.
- Require approval and business justification for all access modifications.
- Ensure access changes are recorded in an auditable system of record.
- Retrieve evidence of the organization's documented process governing access and authorization changes.
- Identify access-related changes made during a representative review period.
- For a sample of changes, verify:
- An approval record exists prior to implementation
- Business justification is documented
- The change is traceable to an identifiable request
- The implemented change is recorded in available audit or change history records
- Identify any access changes lacking approval, justification, or auditability as noncompliant.
Respondent attests they do NOT have a persistent Apex logging framework. Without it, security-relevant application events are unrecoverable.
Salesforce debug logs are transient, size-limited, and automatically purged—making them unsuitable for forensic analysis or security investigations. Without persistent application logging, organizations cannot reliably reconstruct access patterns, detect anomalous behavior, or investigate security incidents after the fact. This impairs the ability to identify compromise, attribute malicious activity, or understand the scope of a breach—significantly extending attacker dwell time and reducing accountability for actions taken within the system.
- Implement or install an Apex logging framework designed for persistent log storage.
- Create or configure a custom object (or equivalent durable storage) to store log records.
- Update Apex code to route log events through the framework.
- Train engineering and security teams to use persistent logs instead of debug logs for investigations.
- Review the Salesforce org for the presence of an Apex logging framework implemented as one or more Apex classes dedicated to log generation and persistence.
- Verify that the framework writes logs to durable storage, such as a custom object purpose-built for log retention.
- Confirm that operational and security investigations rely on this persistent logging mechanism rather than Salesforce debug logs.
- Inspect recent log records to ensure the framework is actively capturing runtime events.
Respondent attests they do NOT have a mechanism to detect regulated data in Long Text Area fields. Free-text fields are a common, hard-to-find regulated-data hiding spot.
Long Text Area fields often contain unstructured, user-entered information that may include sensitive personal data. Without a detection mechanism, regulated data accumulates in unknown locations—obstructing compliance with GDPR Right to Erasure, CCPA deletion requests, and similar privacy obligations. During a security incident, the inability to identify which fields contain personal information makes it impossible to accurately assess exposure, determine the scope of compromised records, or fulfill breach notification requirements. This governance gap significantly impairs incident response and creates ongoing regulatory liability.
- Deploy or configure a tool, script, or process capable of analyzing the contents of LTA fields for regulated data.
- Ensure scans run continuously or on a recurring schedule.
- Confirm all applicable fields across all objects are included.
- Document the scanning process and store execution evidence for audit support.
- Identify all Long Text Area fields using Salesforce metadata.
- Determine whether the organization has a mechanism that scans the contents of each LTA field for regulated data.
- Confirm that scanning occurs continuously or on a defined recurring schedule.
- Review scan logs, detection outputs, or configuration details to verify that the mechanism is operational.
- Validate that all LTA fields across all objects are included in scope.
- Determine compliance based on whether such a mechanism exists and is functioning.
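A minimal sketch of such a content scan, run over exported LTA field values. The two patterns (email address, US SSN) are examples only; a real scan needs patterns matched to the regulations actually in scope:

```python
# Sketch: scan Long Text Area values for patterns that may indicate
# regulated data. Patterns here are illustrative, not exhaustive.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_field_values(values):
    """Return which pattern names matched anywhere in the given values."""
    hits = set()
    for value in values:
        for name, pattern in PATTERNS.items():
            if pattern.search(value or ""):  # tolerate null field values
                hits.add(name)
    return sorted(hits)

hits = scan_field_values([
    "Customer called re: invoice",
    "Contact me at jane@example.com, SSN 123-45-6789",
])
```

Regex matching is a starting point; purpose-built classifiers or vendor DLP tooling will reduce false positives on free text.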
Respondent attests their Salesforce data/metadata backup is NOT tested with a scheduled restore. Untested backups commonly fail when actually needed.
Without reliable backups and tested restoration procedures, organizations cannot recover from accidental deletion, malicious data destruction, configuration corruption, or ransomware-like events. This impairs incident response, business continuity, and the ability to validate data integrity after security events or outages.
- Implement or configure a backup solution for Salesforce data and metadata.
- Define backup frequency, retention, and storage protections.
- Execute and document restoration tests on the defined schedule.
- Update recovery procedures based on test results.
- Obtain the documented backup and recovery policy covering Salesforce data and metadata.
- Verify that backups are performed on a defined schedule and retained per policy.
- Review evidence of a completed restoration test within the defined testing interval.
- Confirm that backup storage is protected with appropriate access controls.
Respondent attests at least one sensitive field is NOT covered by Field History Tracking. Without it, unauthorized changes go undetected.
Without field history tracking on sensitive fields, unauthorized or accidental changes cannot be reliably detected or investigated. This reduces auditability, impairs incident response, and weakens accountability for changes to regulated or high-impact data.
- Enable Field History Tracking for all listed sensitive fields.
- Update the sensitive-field list as schemas evolve.
- Re-verify tracking coverage after changes.
- Obtain the organization’s documented list of sensitive fields and in-scope objects.
- Enumerate Field History Tracking settings for those objects.
- Verify that each listed sensitive field has Field History Tracking enabled.
- Flag any sensitive field without tracking.
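The coverage check above reduces to a set comparison between the documented sensitive-field list and the fields the org actually tracks (both inputs here are illustrative stand-ins for exported metadata):

```python
# Sketch: per-object gap analysis between the sensitive-field list and
# fields with Field History Tracking enabled.
def untracked_sensitive_fields(sensitive, tracked):
    """Return {object: [fields]} listed as sensitive but not tracked."""
    gaps = {}
    for obj, fields in sensitive.items():
        missing = sorted(set(fields) - set(tracked.get(obj, [])))
        if missing:
            gaps[obj] = missing
    return gaps

gaps = untracked_sensitive_fields(
    sensitive={"Account": ["TaxId__c", "Rating"], "Contact": ["SSN__c"]},
    tracked={"Account": ["Rating"]},
)
```

Re-running this diff after each schema change is a cheap way to satisfy the re-verification step above.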
Respondent attests deployments are spread across individual admin accounts. A designated deployment identity is needed for clean change attribution.
Without a designated deployment identity, organizations cannot reliably attribute production changes—any administrator can deploy metadata, making it impossible to distinguish authorized CI/CD deployments from unauthorized manual changes. This loss of provenance prevents security teams from detecting unauthorized modifications, investigating configuration drift, or determining whether a change was part of an approved release. Attackers or malicious insiders can make direct production changes that blend into legitimate administrative activity, and incident responders cannot reconstruct the timeline of configuration changes during a breach investigation.
- Create or identify a dedicated deployment identity.
- Reconfigure CI/CD pipelines, release management tooling, and automated deployment scripts to authenticate exclusively with the deployment identity.
- Revoke deployment permissions from all human users.
- Re-deploy any metadata last deployed by a human user to restore provenance.
- Identify the user account designated as the deployment identity.
- Enumerate all recent metadata deployments using tooling such as Deployment Status, Metadata API logs, or audit logs.
- Verify that all deployments were executed by the designated deployment identity.
- Flag any metadata deployment performed by a human user or non-deployment identity.
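Once deployment records are exported from Deployment Status or audit logs, the verification reduces to a filter on the executing user. The username below is a hypothetical deployment identity:

```python
# Sketch: flag deployments not executed by the designated deployment
# identity. Rows mimic exported deployment records; the username is
# an assumption, not an SBS-mandated value.
DEPLOY_USER = "ci-deploy@example.com"

def flag_human_deployments(deployments, deploy_user=DEPLOY_USER):
    """Return ids of deployments executed by anyone other than deploy_user."""
    return [d["Id"] for d in deployments if d["CreatedBy"] != deploy_user]

flagged = flag_human_deployments([
    {"Id": "0Af001", "CreatedBy": "ci-deploy@example.com"},  # pipeline
    {"Id": "0Af002", "CreatedBy": "admin@example.com"},      # manual deploy
])
```

Each flagged id is a provenance gap: either an unauthorized change or a candidate for the re-deployment step in the remediation list.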
Respondent attests they do NOT maintain a written list of high-risk metadata types prohibited from direct production editing.
Without an explicit list of high-risk metadata types, organizations cannot define or enforce deployment governance boundaries—leaving critical configuration categories (Apex code, authentication settings, outbound connectivity, permissions) open to uncontrolled direct production editing. Security teams cannot distinguish between metadata that requires strict deployment controls and metadata that can be safely edited manually, resulting in inconsistent governance and gaps in change attribution. The absence of a defined list also prevents effective monitoring (SBS-DEP-003), as there is no baseline to compare against when detecting unauthorized changes.
- Adopt the SBS baseline list of prohibited direct-in-production metadata changes.
- Add any organization-specific items or exceptions as needed.
- Remove modify permissions for these metadata types from all human users.
- Ensure all future changes to listed metadata types are performed exclusively by the deployment identity.
- Obtain the organization's documented list of high-risk metadata types prohibited from direct production editing.
- Confirm that the list, at minimum, includes all SBS baseline categories.
- Review the list for any documented exceptions and verify they are formally approved.
- Verify that only the deployment identity has modify permissions for metadata types on the list.
Respondent attests they do NOT receive alerts on unauthorized high-risk metadata changes in production. Without alerts, unauthorized changes go undetected.
Without monitoring for unauthorized metadata changes, organizations cannot detect when high-risk configuration is modified outside the approved deployment process—allowing malicious changes, accidental drift, or insider threats to persist undetected. Security teams lose the ability to identify unauthorized modifications to authentication settings, permission structures, Apex code, or outbound connectivity until a breach or incident reveals the gap. This impairs detection, investigation, and response capabilities for configuration-related security events, extending attacker dwell time and preventing timely remediation of unauthorized changes.
- Implement a monitoring mechanism capable of identifying modifications to high-risk metadata and attributing them to the responsible user. Acceptable approaches include:
- Manual periodic review of the Salesforce Setup Audit Trail,
- Exporting audit logs for review,
- Scheduled API or CLI queries comparing metadata changes,
- Custom scripts,
- Vendor-based monitoring tools.
- Ensure the monitoring method covers all high-risk metadata types listed in the organization’s defined prohibited-direct-edit list.
- Define a repeatable review interval and assign responsibility for conducting the review.
- Document the monitoring approach and maintain records of reviews and findings.
- Interview system owners to identify the monitoring method(s) used for detecting changes to high-risk metadata.
- Review documentation describing how the monitoring process works—whether manual log review, automated scripts, API queries, CLI workflows, scheduled exports, or vendor tools.
- Verify that the monitoring process includes:
- Coverage of all high-risk metadata types defined by the organization and required by SBS-DEP-002.
- A review interval appropriate to the organization's change-management expectations (e.g., daily, weekly, or aligned with release cycles).
- A method for identifying the user who performed each change.
- Examine historical monitoring records or logs to confirm the process has been performed consistently.
- Flag noncompliance if no monitoring system exists or if the system cannot detect unauthorized human modifications to high-risk metadata.
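One lightweight approach from the acceptable methods above, reviewing an exported Setup Audit Trail, can be sketched as follows. The section names and usernames are illustrative, not an authoritative high-risk list; the real list must come from the organization's SBS-DEP-002 document:

```python
# Sketch: scan exported Setup Audit Trail rows for high-risk changes
# made by anyone other than the deployment identity.
HIGH_RISK_SECTIONS = {"Apex Class", "Connected Apps", "Manage Users",
                      "Session Settings", "Named Credentials"}  # illustrative
DEPLOY_USER = "ci-deploy@example.com"  # hypothetical deployment identity

def unauthorized_changes(audit_rows):
    """Return rows touching high-risk sections, attributed to a human user."""
    return [
        row for row in audit_rows
        if row["Section"] in HIGH_RISK_SECTIONS
        and row["CreatedBy"] != DEPLOY_USER
    ]

alerts = unauthorized_changes([
    {"Section": "Apex Class", "CreatedBy": "ci-deploy@example.com",
     "Action": "changedApexClass"},          # approved pipeline change
    {"Section": "Session Settings", "CreatedBy": "admin@example.com",
     "Action": "changedSessionSettings"},    # should alert
])
```

Scheduling this over a daily audit-trail export, with alert delivery to the review owner, satisfies both the coverage and attribution requirements listed above.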
Respondent attests they do NOT have branch protection or CI/CD controls on their Salesforce source repository. Without them, anyone with repo access can push directly to production.
Without a source-driven deployment process, organizations lose the verifiable audit trail that connects production configuration to approved changes—making it impossible to determine what changed, when, by whom, and whether it was authorized. Security teams cannot investigate configuration-related incidents, restore known-good state during outages, or attribute changes during forensic analysis. Manual production changes bypass code review, testing, and approval workflows, enabling unauthorized, accidental, or malicious modifications to security-sensitive settings without accountability or detection.
- Establish and maintain a centralized version control repository for Salesforce metadata.
- Implement or enforce an automated deployment pipeline that deploys changes exclusively from version control.
- Restrict direct production changes for metadata types that support programmatic deployment.
- Document and periodically review any required manual production changes for metadata types lacking deployment support.
- Identify the organization’s standard deployment process and designated deployment identity as defined in SBS-CHG-001.
- Review recent production metadata changes and their associated deployment records.
- Verify that changes deployable through Salesforce’s programmatic deployment mechanisms originated from centralized version control.
- Confirm that any manual production changes are limited to metadata types that Salesforce does not support for programmatic deployment.
- Flag any manually applied changes that could have been deployed through the source-driven process.
Respondent attests the Salesforce CLI Connected App is NOT configured with strict token expiration policies. Long-lived CLI credentials become silent attack vectors when developer machines are compromised.
Salesforce CLI token files stored on local workstations represent a persistent credential exposure risk. If a laptop is stolen, reassigned without proper cleanup, or compromised by malware, attackers can extract token files that provide direct access to Salesforce orgs—including production environments. With the default Connected App configuration, these tokens never expire, giving attackers indefinite access that persists even after the original user's password is changed or their account is deactivated. The attack surface expands with each org a developer authenticates to, as token files accumulate credentials to sandboxes, Dev Hubs, and production orgs. Organizations cannot detect this credential theft through Salesforce audit logs because the attacker authenticates with valid tokens.
- Determine whether to use the default "Salesforce CLI" Connected App or create a dedicated Connected App for CLI authentication.
- If using the default app:
- From Setup, navigate to Connected Apps OAuth Usage.
- Locate "Salesforce CLI" and click Install (if not already installed), then Edit Policies.
- Set Refresh Token Policy to "Expire refresh token after: 90 Days" (or less).
- Set Session Policies Timeout Value to "15 minutes" (or less).
- If creating a dedicated Connected App:
- Create a new Connected App with OAuth enabled and appropriate callback URL.
- Configure refresh token expiry to 90 days or less and access token timeout to 15 minutes or less.
- Distribute the Consumer Key to developers and require use of the `--client-id` flag.
- Communicate to developers that they will need to re-authenticate periodically when refresh tokens expire.
- Consider implementing compensating controls to protect locally stored token files, such as:
- Requiring full disk encryption (FileVault, BitLocker) on developer workstations.
- Enabling remote wipe capability for managed devices.
- Including Salesforce CLI token file cleanup in device offboarding procedures.
- Training developers to run `sf org logout --all` before returning or transferring devices.
- From Setup, navigate to Connected Apps OAuth Usage (or Apps → Connected Apps → Connected Apps OAuth Usage).
- Identify the Connected App(s) used for Salesforce CLI authentication—either the default "Salesforce CLI" app or a custom Connected App.
- Review the OAuth Policies for each CLI-related Connected App:
- Verify that Refresh Token Policy is set to "Expire refresh token after" with a value of 90 days or less.
- Verify that Session Policies Timeout Value is set to 15 minutes or less.
- If a custom Connected App is used, verify that developers are instructed to use the `--client-id` flag when authenticating.
- Flag noncompliance if any CLI-related Connected App has tokens set to never expire or exceeds the maximum allowed durations.
Respondent attests they retain less than 30 days of `ApiTotalUsage` event logs. Without sufficient retention, anomalous API behavior is invisible after the fact.
Without retained API Total Usage logs, organizations lose visibility into REST, SOAP, and Bulk API activity—including user identity, connected app, client IP, resource accessed, and status codes. This materially degrades the ability to detect anomalous API behavior, investigate security incidents, attribute unauthorized access, and determine the scope of potential breaches. The absence of this visibility creates a significant gap in incident detection and response capabilities.
- If the organization has only 1-day ApiTotalUsage EventLogFile availability in Salesforce, implement an automated daily export that downloads newly available ApiTotalUsage log files and stores them externally for at least 30 days.
- If the organization uses Salesforce-native retention, ensure the configured retention period for Event Log Files is not less than 30 days.
- Restrict access to the retained logs (Salesforce-native or external) to authorized personnel and designated service identities.
- Determine whether the organization relies on Salesforce-native retention (Event Monitoring/Shield/Event Monitoring add-on) or an external log store as the system of record for ApiTotalUsage EventLogFile data.
- If the organization relies on Salesforce-native retention, verify that EventLogFile data is retained for at least 30 days (for example, confirm the org is entitled to and configured for Event Log File retention that is at least 30 days and can retrieve ApiTotalUsage EventLogFile data within the preceding 30-day window).
- If the organization relies on an external log store (including all orgs with only 1-day ApiTotalUsage availability in Salesforce):
- Verify an automated process exists that retrieves EventLogFile entries where EventType='ApiTotalUsage' and downloads the associated log files at least once every 24 hours.
- Inspect job schedules/run history and confirm successful executions covering at least the last 30 days (no missed days).
- From the external log store, retrieve ApiTotalUsage logs for (a) the oldest day in the preceding 30-day window and (b) the most recent day, and confirm both are accessible and attributable to the organization.
- Verify access to the external log store is restricted to authorized roles and service identities responsible for monitoring and investigations.
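The "no missed days" check above can be sketched as a date-set comparison against the trailing 30-day window, run over the dates for which the external store holds an `ApiTotalUsage` file:

```python
# Sketch: verify the external log store has an ApiTotalUsage file for
# every day in the preceding 30-day window.
from datetime import date, timedelta

def missing_days(stored_dates, today, window=30):
    """Return dates in the trailing window with no stored log file."""
    expected = {today - timedelta(days=i) for i in range(1, window + 1)}
    return sorted(expected - set(stored_dates))

# Illustrative data: one daily export (day -10) failed silently.
today = date(2024, 7, 1)
stored = [today - timedelta(days=i) for i in range(1, 31) if i != 10]
gaps = missing_days(stored, today)
```

Any returned date is a failed or skipped export run; the audit step above treats even one missed day as noncompliant.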
Respondent attests they do NOT maintain a criticality classification for OAuth-enabled Connected Apps. Without it, vendor-risk reviews cannot be prioritized.
Without a complete inventory and criticality classification, organizations lose visibility into their third-party integration landscape—preventing effective risk assessment, prioritization of security controls, and governance of external system connectivity. Security teams cannot identify which integrations access sensitive data, scope the impact of a vendor compromise, or respond effectively to incidents involving Connected Apps. This impairs detection, investigation, and response capabilities for integration-related security events.
- Add any missing OAuth-enabled Connected Apps to the system of record.
- Document and assign a vendor criticality rating to each Connected App based on operational importance and data sensitivity.
- Implement a recurring process to synchronize Connected App changes with the system of record.
- Retrieve a list of all Connected Apps with active OAuth configurations from Salesforce Setup.
- Retrieve the organization's authoritative system of record for integration and vendor management.
- Compare the Salesforce Connected App list to the system of record and confirm every OAuth-enabled Connected App appears in the inventory.
- Verify each listed Connected App has an assigned vendor criticality rating documented in the system of record.
- Flag any apps missing from the inventory or lacking a documented criticality rating as noncompliant.
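The reconciliation above reduces to two comparisons: apps absent from the system of record, and inventoried apps with no criticality rating. App names and ratings here are illustrative:

```python
# Sketch: reconcile OAuth-enabled Connected Apps in the org against the
# vendor system of record.
def reconcile_apps(org_apps, system_of_record):
    """Return (apps missing from inventory, inventoried apps with no rating)."""
    missing = sorted(set(org_apps) - set(system_of_record))
    unrated = sorted(app for app, rating in system_of_record.items()
                     if app in org_apps and not rating)
    return missing, unrated

missing, unrated = reconcile_apps(
    org_apps=["Slack", "DataLoader", "ZoomInfo"],
    system_of_record={"Slack": "high", "DataLoader": None},
)
```

Both lists are findings: a missing app breaks inventory completeness, and an unrated app cannot be prioritized in vendor-risk reviews.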
Respondent attests they do NOT have a written Health Check baseline. Without one, configuration drift is invisible.
Without a defined Health Check baseline, organizations have no authoritative reference for what their security configuration should be—making it impossible to detect drift, evaluate deviations, or determine whether current settings reflect intentional decisions or accumulated neglect. Security teams cannot assess configuration-related risk, investigate whether settings were deliberately changed, or demonstrate compliance with security requirements. The absence of a baseline also prevents effective use of Health Check deviation monitoring (SBS-SECCONF-002), as there is no standard to measure against.
- Create or select a Health Check baseline (Salesforce default, SBS-recommended baseline, or a custom-defined XML).
- Upload the baseline XML into Setup → Health Check.
- Document ownership of the baseline and establish a process for periodic review and updates.
- Communicate the baseline's purpose and implications to system owners and security stakeholders.
- Navigate to Setup → Health Check and confirm that a baseline template is uploaded and active.
- Review the XML baseline directly (via UI or API) to verify that the baseline exists and contains intentional values rather than defaults left unexamined.
- Interview administrators to confirm the baseline was deliberately chosen or customized and is understood as the organization's configuration standard.
- If the organization lacks a baseline, flag the control as noncompliant.
Respondent attests Health Check results are NOT regularly reviewed and acted on.
Without periodic review and remediation of Health Check deviations, configuration drift accumulates undetected—weakening security posture over time as settings diverge from the intended baseline. Security teams cannot identify when critical platform settings (authentication, session management, content security) have been changed or misconfigured, preventing timely response to emerging vulnerabilities. Unaddressed deviations may persist indefinitely, creating exploitable gaps that remain invisible until a breach or audit reveals the exposure.
- Establish a recurring review process using any reliable method, including:
- Salesforce Health Check UI,
- API exports,
- CLI automation,
- Scheduled scripts,
- Vendor tooling.
- Assign ownership for conducting the review and maintaining documentation.
- Review current deviations and either remediate them or document exceptions.
- Implement tracking to ensure deviations are remediated or re-reviewed in future cycles.
- Interview system owners to identify the established Health Check review process and review interval (e.g., monthly, quarterly).
- Examine evidence of recent Health Check reviews, such as documented review artifacts, exported reports, tickets, changes, or exception records.
- Verify that deviations were:
- Remediated within the review window, or
- Documented as exceptions with clear justification and approval.
- Confirm that the review process is repeatable, assigned to an owner, and actually followed.
- Flag noncompliance if:
- No review process exists,
- No review evidence can be produced, or
- Deviations occur without remediation or exception documentation.
Respondent attests user access is NOT formally reviewed and re-approved at least annually with documented results. Recertification is a SOC 2 CC6.2 expectation.
Access review is the foundational control for preventing privilege creep, detecting unauthorized access, and remediating excessive permissions. Without periodic review, users accumulate access over time: permissions granted for past roles remain after job changes, access added for temporary projects becomes permanent, and no formal mechanism ensures access remains least-privilege. Periodic formal recertification by business stakeholders keeps access governance aligned with organizational reality. Documenting each review creates an audit trail and ensures accountability. Regular remediation prevents drift and maintains the integrity of the permission set model defined in SBS-ACS-001.
- If no access review process exists, establish documented policy including frequency, reviewers, scope, and remediation SLAs.
- Conduct an initial comprehensive access review of all active users, with business unit or department ownership of sign-off.
- Identify and remediate all access determined excessive, unauthorized, or inappropriate during the initial review.
- Implement a system of record (spreadsheet, governance tool, or integrated platform) to track reviews, findings, and remediation.
- Schedule recurring access reviews at minimum annual frequency, with quarterly reviews for sensitive roles or high-risk data.
- Document the review process, including templates, stakeholder roles, and escalation procedures.
- Establish accountability for reviewers and tie review completion to performance management or audit requirements.
- Understand the organization's documented access review policy, including:
- Defined frequency and review cycle
- Designated reviewers and escalation path
- Intended coverage scope and access types included
- Expected remediation timeframe for findings
- System of record for tracking review activity and findings.
- Assess the recency and regularity of access review execution. Locate the most recent completed access review cycle and evaluate whether it aligns with the organization's stated review frequency.
- Examine a representative sample of access review documentation to assess consistency of execution:
- Evidence of review and approval by the designated stakeholder
- Documentation of review date and scope
- Any findings, exceptions, or questions raised during the review
- Appropriateness of sample size relative to the organization's user population and complexity
- For any access identified as excessive, unauthorized, or not recertified:
- Assess whether the finding was documented
- Evaluate what remediation action was taken or whether exceptions were formally approved
- Compare remediation timing against the defined SLA
- Assess whether the organization maintains a traceable system of record that documents:
- Who reviewed what access
- When the review occurred
- What was approved or questioned
- What remediation was required and its completion status
- Evaluate whether the access review process adequately addresses the organization's primary access constructs, which may include the following types of assignment:
- User profiles
- Permission sets
- Permission set groups
- Roles and role hierarchies
- Public groups
- Queues
- Sales Territories
- Delegated administration or elevated permissions
Respondent attests there are users permitted to bypass single sign-on without documented business reasons. SSO-bypass accounts are the canonical break-glass attack target.
Users permitted to bypass SSO represent exceptions to centralized identity governance. Without formal documentation and approval, these accounts can proliferate unnoticed—reducing visibility into access patterns and undermining audit readiness. However, this control provides assurance and governance rather than establishing a security boundary. Undocumented exceptions increase operational risk and reduce audit readiness but require credential compromise for direct security impact.
- Create or update a formal inventory documenting all SSO-bypass users with their business justification, owner, and approval date.
- For any undocumented or unjustified users: assign the "Is Single Sign-On Enabled" permission via their profile or permission sets to remove SSO-bypass capability.
- Ensure all documented exceptions adhere to least-privilege access and strong authentication controls.
- Establish periodic (e.g., quarterly) review of all SSO-bypass accounts.
- Query all user records to identify users who do not have the "Is Single Sign-On Enabled" (`PermissionsIsSsoEnabled`) permission assigned through their profile or permission sets.
- Verify each identified user appears in the approved system-of-record inventory with documented business justification, owner, and approval.
- Confirm each exception is authorized for administrative or break-glass purposes only.
- Validate that these accounts follow strong local authentication controls (e.g., strong password policies, MFA if applicable).
- Flag any user without documented approval.
- (Optional) Download API Total Usage logs (EventLogFile - ApiTotalUsage, available in free tier of Event Monitoring) to monitor SSO bypass account activity:
- Filter API activity by users identified as SSO bypass accounts.
- Review frequency and timing of API calls to verify usage aligns with documented break-glass purposes.
- Flag any SSO bypass accounts with regular or unexpected API activity for review against documented justifications.
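The reconciliation steps above can be sketched in a few lines of Python. This is a minimal illustration, not the audit tooling itself: the input shapes (a user export with an effective `sso_enabled` flag, and an inventory keyed by username) are assumptions about how you might stage the data.

```python
# Reconcile SSO-bypass users against an approved break-glass inventory.
# `users` mimics an export of active users with their effective
# "Is Single Sign-On Enabled" state; `inventory` is the system of record.

def find_unapproved_bypass_users(users, inventory):
    """Return active users lacking SSO enforcement who have no
    documented, approved inventory entry."""
    flagged = []
    for user in users:
        if user["sso_enabled"]:
            continue  # SSO is enforced for this user; nothing to review
        entry = inventory.get(user["username"])
        if entry is None or not entry.get("approved"):
            flagged.append(user["username"])
    return flagged

users = [
    {"username": "admin.breakglass@example.com", "sso_enabled": False},
    {"username": "jane.doe@example.com", "sso_enabled": False},
    {"username": "sso.user@example.com", "sso_enabled": True},
]
inventory = {
    "admin.breakglass@example.com": {
        "approved": True,
        "owner": "IT Security",
        "justification": "Break-glass admin access",
    },
}

print(find_unapproved_bypass_users(users, inventory))
# Only jane.doe lacks both SSO enforcement and a documented approval.
```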
Respondent attests at least one profile has an unrestricted login IP range (e.g., `0.0.0.0/0`) that defeats the purpose of IP allow-listing.
Overly broad login IP ranges effectively disable network-based access controls, allowing authentication from any location on the internet. However, exploitation requires credentials to be compromised first—this control provides defense-in-depth rather than establishing a primary security boundary. When authentication controls (SBS-AUTH-001) are enforced, IP restrictions serve as an additional layer that limits the blast radius of credential compromise.
- Remove any profile login IP ranges that effectively grant unrestricted global access.
- Replace them with IP ranges that correspond to approved corporate networks, office locations, VPN ingress points, or other authorized infrastructure.
- Validate that updated network restrictions do not block legitimate access paths and that users can authenticate through sanctioned networks.
- Establish an internal governance process to review and approve all future additions of profile login IP ranges.
- Retrieve all profile login IP ranges via Setup → Profiles → Login IP Ranges or by querying the Profile metadata (`loginIpRanges` field) using the Metadata API.
- For each profile, enumerate all configured login IP ranges.
- Identify any ranges that:
- Cover the entire IPv4 space, or
- Represent effectively unrestricted access (e.g., `0.0.0.0–255.255.255.255`, `1.1.1.1–255.255.255.255`, or similar patterns).
- Confirm that all IP ranges align with organizational security policy and defined network boundaries.
- Flag any profile with an impermissible or overly broad range.
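The "effectively unrestricted" test above can be automated once the ranges are exported. The sketch below uses Python's `ipaddress` module; the `/8`-sized threshold is an assumption to tune to your policy, not an SBS-defined value.

```python
import ipaddress

# Flag profile login IP ranges that are effectively unrestricted.
# `ranges` mirrors the loginIpRanges metadata: (start, end) address pairs.
# Assumption: any range spanning a /8 or more (~16.7M addresses) is
# treated as effectively global; adjust to organizational policy.

EFFECTIVELY_UNRESTRICTED = 2 ** 24  # one /8 worth of IPv4 addresses

def flag_overly_broad(ranges, threshold=EFFECTIVELY_UNRESTRICTED):
    flagged = []
    for start, end in ranges:
        span = int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1
        if span >= threshold:
            flagged.append((start, end, span))
    return flagged

profile_ranges = [
    ("203.0.113.0", "203.0.113.255"),    # corporate VPN egress: fine
    ("0.0.0.0", "255.255.255.255"),      # entire IPv4 space: flag
    ("1.1.1.1", "255.255.255.255"),      # effectively unrestricted: flag
]

for start, end, span in flag_overly_broad(profile_ranges):
    print(f"Overly broad range {start}-{end} ({span} addresses)")
```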
- Download API Total Usage logs (EventLogFile - ApiTotalUsage, available in free tier of Event Monitoring) to validate IP restrictions are effective:
- Extract unique `CLIENT_IP` values from recent API activity.
- Compare against documented approved organizational network ranges.
- Identify any new or unexpected IP addresses making API calls.
- Cross-reference unusual IPs with profile assignments to identify potential policy gaps.
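The log comparison can be scripted against the downloaded CSV. A minimal sketch follows: the columns shown are a small subset of the real `ApiTotalUsage` schema, and the approved networks are placeholder documentation-range examples.

```python
import csv
import io
import ipaddress

# Sketch: extract unique CLIENT_IP values from an ApiTotalUsage
# EventLogFile CSV export and compare them to approved CIDR blocks.

APPROVED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24"),
                     ipaddress.ip_network("198.51.100.0/24")]

log_csv = """EVENT_TYPE,USER_ID_DERIVED,CLIENT_IP
ApiTotalUsage,005xx0000001,203.0.113.40
ApiTotalUsage,005xx0000002,192.0.2.77
ApiTotalUsage,005xx0000001,203.0.113.40
"""

def unexpected_ips(csv_text, approved):
    """Return sorted client IPs that fall outside every approved network."""
    seen = {row["CLIENT_IP"] for row in csv.DictReader(io.StringIO(csv_text))}
    return sorted(
        ip for ip in seen
        if not any(ipaddress.ip_address(ip) in net for net in approved)
    )

print(unexpected_ips(log_csv, APPROVED_NETWORKS))
# 192.0.2.77 falls outside both approved networks.
```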
Respondent attests code changes do NOT all go through mandatory peer review before reaching production.
Without mandatory peer review, a single developer—whether compromised, malicious, or simply mistaken—can introduce insecure or flawed code directly into the deployment pipeline. This eliminates shared oversight of changes to sensitive business logic, allowing vulnerabilities, backdoors, or destructive changes to reach production without independent human verification before deployment.
- Update branch protection rules to require peer review before merge.
- Train developers on the peer review workflow, including security checks such as identifying sensitive data in logging statements.
- Block direct commits to production-bound branches.
- Inspect source control settings to confirm merge rules require peer review on production-bound branches.
- Review merge history or representative pull requests to verify peer approvals were recorded.
- Confirm that peer review processes include security checks such as verifying logging statements do not expose sensitive data.
- Flag any repositories or branches that allow merging without peer approval.
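Where repositories are hosted on GitHub, the merge rules above can be codified rather than clicked through. Below is a sketch of a repository-ruleset payload; the field names follow GitHub's REST ruleset API, but the branch name, review count, and thread-resolution settings are assumptions to adapt to your workflow.

```json
{
  "name": "require-peer-review",
  "target": "branch",
  "enforcement": "active",
  "conditions": {
    "ref_name": { "include": ["refs/heads/main"], "exclude": [] }
  },
  "rules": [
    {
      "type": "pull_request",
      "parameters": {
        "required_approving_review_count": 1,
        "dismiss_stale_reviews_on_push": true,
        "require_code_owner_review": false,
        "require_last_push_approval": true,
        "required_review_thread_resolution": true
      }
    }
  ]
}
```

An active ruleset like this also satisfies the "block direct commits" step, since changes to the protected branch must arrive via an approved pull request.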
Respondent attests pre-merge static security analysis is NOT in place. SOQL injection and other code-level issues commonly slip through without it.
Without enforced static code analysis, known vulnerability patterns in Apex and LWC—such as SOQL injection, insecure data exposure, and improper access control—may enter production undetected. This increases the likelihood of exploitable flaws persisting in deployed code, creating potential vectors for data breaches or unauthorized access that human reviewers may not catch.
- Integrate static code analysis into the CI/CD pipeline for all production-bound branches.
- Enable Apex and LWC security rulesets within the scanning tool.
- Configure pipelines to block merges when static analysis fails.
- Inspect CI/CD pipeline configuration to confirm a static code analysis step runs before merges.
- Verify the SAST tool includes security rulesets for Apex and Lightning Web Components.
- Review pipeline logs from representative merges to ensure scans executed and passed.
- Flag pipelines or branches missing enforced pre-merge scanning.
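As one concrete (hedged) example, the pipeline configuration could resemble the GitHub Actions workflow below using Salesforce Code Analyzer. The plugin name and flags reflect the sfdx-scanner tooling; the target path and severity threshold are assumptions to tune, and other CI systems follow the same pattern of failing the merge check on findings.

```yaml
# Sketch: run Salesforce Code Analyzer on every PR targeting main.
name: pre-merge-sast
on:
  pull_request:
    branches: [main]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Salesforce CLI and Code Analyzer
        run: |
          npm install --global @salesforce/cli
          sf plugins install @salesforce/sfdx-scanner
      - name: Scan Apex and LWC with security rules
        # --severity-threshold causes a non-zero exit (failing the merge
        # check) when findings at or above the threshold are detected.
        run: |
          sf scanner run --target "force-app" \
            --category Security --format table --severity-threshold 2
```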
Respondent attests browser extensions interacting with Salesforce are NOT centrally governed. User-installed extensions can read every page and exfiltrate data silently.
Without centralized governance over browser extensions, malicious or cloned extensions—increasingly common with AI-generated code—can harvest session tokens, exfiltrate data, and execute unauthorized operations within authenticated Salesforce sessions. However, this control provides governance and defense-in-depth rather than establishing a Salesforce-native security boundary: exploitation requires a malicious extension to be installed and an authenticated session to exist. Other controls (SSO, session management) still provide protection, and this governance mechanism operates outside Salesforce via MDM or browser management.
- Implement a centrally managed browser or device management solution capable of enforcing extension restrictions (e.g., Chrome Browser Cloud Management, Intune, Jamf, or GPO-based controls).
- Define and apply an allow-list or blocklist policy governing which extensions are permitted to interact with Salesforce.
- Remove or disable any unapproved browser extensions from managed devices.
- Apply enforcement policies to all corporate-managed devices accessing Salesforce.
- Request evidence of a browser-extension governance mechanism applied to user devices (e.g., Chrome Browser Cloud Management, Intune configuration profile, Jamf configuration profile, Active Directory GPO, or equivalent).
- Require a screenshot, exported policy file, or screen capture demonstrating that extension controls are active and enforceable (e.g., an allow-list or blocklist configuration for Chrome extensions).
- Verify that the mechanism explicitly restricts installation or execution of unapproved extensions that can access Salesforce domains.
- Flag the organization as noncompliant if no enforceable governance mechanism exists or if extension governance is based solely on policy, awareness, or voluntary user behavior.
- Download API Total Usage logs (EventLogFile - ApiTotalUsage, available in free tier of Event Monitoring) and analyze for indicators of unauthorized browser extension activity:
- Review the `USER_AGENT` field for patterns indicating browser extensions (e.g., extension identifiers, non-standard user agents).
- Identify API call patterns characteristic of auto-refresh extensions (e.g., Inspector Reloader), such as regular-interval repeated requests.
- Flag any anomalous patterns for investigation against the approved extension inventory.
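The regular-interval heuristic can be sketched as a small function over per-user call timestamps. The timestamps below mimic a minimal slice of log data, and the jitter tolerance and minimum call count are assumptions, not log-schema values.

```python
from datetime import datetime

# Sketch: flag regular-interval API activity that suggests an
# auto-refresh browser extension polling on a timer.

def looks_like_auto_refresh(timestamps, tolerance_s=5, min_calls=4):
    """True if successive calls are evenly spaced within the tolerance."""
    if len(timestamps) < min_calls:
        return False
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return max(gaps) - min(gaps) <= tolerance_s

steady = ["2024-06-01T09:00:00", "2024-06-01T09:01:00",
          "2024-06-01T09:02:01", "2024-06-01T09:03:00"]
bursty = ["2024-06-01T09:00:00", "2024-06-01T09:00:02",
          "2024-06-01T09:17:40", "2024-06-01T11:02:13"]

print(looks_like_auto_refresh(steady))  # evenly spaced -> True
print(looks_like_auto_refresh(bursty))  # irregular -> False
```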
Respondent attests they do NOT maintain an inventory + justification list for Named Credentials. Forgotten Named Credentials with valid stored secrets are a hidden third-party access surface.
Without a documented inventory and justification for Named Credentials, undocumented or unjustified configurations may expose the organization to data leakage, unauthorized integrations, or reliance on insecure or untrusted endpoints. However, this control provides governance documentation rather than detection or prevention capability: it supports audit readiness and informed decision-making about authenticated external connections, but other controls are required to detect or prevent actual misuse of approved credentials.
- Add any undocumented Named Credentials to the system of record.
- Document a valid business justification for each Named Credential.
- Remove, disable, or reconfigure any Named Credentials that cannot be justified or that reference untrusted endpoints.
- Establish a recurring reconciliation process to ensure Named Credentials remain fully inventoried and justified.
- Enumerate all Named Credentials using Salesforce Setup, Metadata API, Tooling API, or Connect REST API.
- Retrieve the organization’s system of record for approved external endpoints and integration credentials.
- Compare the Salesforce list to the system of record to confirm all Named Credentials are documented.
- Verify that each documented Named Credential includes:
- The external endpoint URL
- The authentication type (named principal or per-user)
- The business justification for the integration
- Flag any Named Credentials missing from the inventory or lacking justification as noncompliant.
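The comparison step can be automated once both lists exist. This is a sketch, assuming you have exported Named Credential names from the org (e.g., via the Tooling API) and the inventory as a lookup table; the credential names and endpoints are illustrative.

```python
# Reconcile Named Credentials found in the org against the
# system-of-record inventory, per the checklist above.

REQUIRED_FIELDS = ("endpoint", "auth_type", "justification")

def reconcile(org_credentials, inventory):
    """Return credential names that are undocumented or missing
    any required inventory field."""
    noncompliant = []
    for name in org_credentials:
        entry = inventory.get(name)
        if entry is None or any(not entry.get(f) for f in REQUIRED_FIELDS):
            noncompliant.append(name)
    return noncompliant

org_credentials = ["Billing_API", "Legacy_FTP_Sync"]
inventory = {
    "Billing_API": {
        "endpoint": "https://billing.example.com",
        "auth_type": "named principal",
        "justification": "Monthly invoice export",
    },
    # Legacy_FTP_Sync exists in the org but not in the inventory.
}

print(reconcile(org_credentials, inventory))
```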
Respondent attests they do NOT maintain security documentation for high-risk Connected App vendors. Vendor due diligence gaps are SOC 2 and ISO supplier-management findings waiting to happen.
Without documented due diligence for high-risk vendors, organizations may onboard integrations without understanding the vendor's security posture, data handling practices, or contractual obligations. This increases the likelihood of undiscovered risks but does not directly enable unauthorized access—other controls (SBS-OAUTH-001, SBS-OAUTH-002) still govern the technical security boundary. Missing documentation primarily impacts audit readiness, risk assessment accuracy, and the organization's ability to make informed decisions about vendor relationships.
- Collect and store all required documentation for each high-risk vendor.
- Where documentation does not exist, record this absence in the vendor assessment.
- Update the vendor management process to ensure ongoing due diligence for high-risk vendors.
- Retrieve the list of Connected App vendors classified as high-risk from the organization’s system of record.
- For each high-risk vendor, verify that the following documents, where available, are stored in the designated repository:
- Terms of use
- Privacy policy
- Trust center or security overview
- Published information security guidelines
- Confirm that any missing documentation is explicitly recorded as unavailable in the vendor assessment.
- Flag any high-risk vendor lacking required documentation or missing explicit acknowledgment of unavailable documents as noncompliant.
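The pass/fail logic above, including the "explicitly recorded as unavailable" allowance, can be expressed compactly. The document keys and vendor names below are illustrative placeholders.

```python
# Sketch of the vendor-documentation check: a vendor is compliant only
# if each required document is either stored or explicitly recorded as
# unavailable. Silent gaps are flagged as noncompliant.

REQUIRED_DOCS = ["terms_of_use", "privacy_policy",
                 "trust_center", "security_guidelines"]

def noncompliant_vendors(vendors):
    """Return (vendor, document) pairs with no status recorded."""
    flagged = []
    for name, docs in vendors.items():
        for doc in REQUIRED_DOCS:
            status = docs.get(doc)  # "stored", "unavailable", or missing
            if status not in ("stored", "unavailable"):
                flagged.append((name, doc))
    return flagged

vendors = {
    "AcmeSync": {d: "stored" for d in REQUIRED_DOCS},
    "DataPipe": {"terms_of_use": "stored", "privacy_policy": "stored",
                 "trust_center": "unavailable"},  # one doc never recorded
}

print(noncompliant_vendors(vendors))
# DataPipe never recorded a status for security_guidelines.
```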
Your roadmap is clear. Implementation is where most security efforts stall.
Our cofounders led product and engineering teams at GrubHub and Rocket, led all of product and engineering at National Debt Relief, and have built and audited Salesforce environments at every scale, from Series A startups to Fortune 500 enterprises.
- Walk through your top remediation priorities together.
- Get a fixed-scope plan to close the highest-impact gaps first.
- Optional: bring us in to ship the fixes and keep them from regressing.
No obligation. We'll review your top three findings with you and tell you whether HelloMavens is the right fit.
Appendix
Methodology, disclaimer, and glossary
Each control was evaluated against the Security Benchmark for Salesforce v0.4.1. Controls scored as pass / fail / inconclusive / not applicable. Category scores are weighted by risk tier (Critical 5, High 3, Moderate 2). Overall score is a weighted average across categories proportional to each category's share of Critical-tier and High-tier controls. Inconclusive and not-applicable controls are excluded from denominators. If any Critical-tier control failed, the overall grade is capped at C regardless of other scores.
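The category-scoring arithmetic can be illustrated in a few lines. This is a sketch of the rules stated above (tier weights of 5/3/2, inconclusive and not-applicable controls excluded from denominators, Critical-fail cap at C); the sample controls and the grading thresholds are placeholders, and the real scoring engine may differ in detail.

```python
# Sketch of tier-weighted category scoring with the Critical-fail cap.

TIER_WEIGHTS = {"Critical": 5, "High": 3, "Moderate": 2}

def category_score(controls):
    """0-100 score; inconclusive / not-applicable controls are excluded."""
    scored = [c for c in controls if c["result"] in ("pass", "fail")]
    total = sum(TIER_WEIGHTS[c["tier"]] for c in scored)
    earned = sum(TIER_WEIGHTS[c["tier"]]
                 for c in scored if c["result"] == "pass")
    return round(100 * earned / total) if total else None

controls = [
    {"tier": "Critical", "result": "fail"},
    {"tier": "High", "result": "pass"},
    {"tier": "Moderate", "result": "pass"},
    {"tier": "High", "result": "inconclusive"},  # excluded from denominator
]

score = category_score(controls)
print(score)  # earned (3 + 2) over total (5 + 3 + 2) -> 50

# Critical-fail cap: any failed Critical-tier control caps the grade at C.
critical_failed = any(c["tier"] == "Critical" and c["result"] == "fail"
                      for c in controls)
grade = "B" if score >= 50 else "C"  # placeholder grading thresholds
if critical_failed:
    grade = max(grade, "C")  # "C" sorts after "A"/"B", capping the grade
print(grade)
```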
Engine version 0.0.0-alpha.3. SBS v0.4.1. Disclaimer version 2026-05-02-placeholder-1.
The HelloMavens Salesforce Security Audit produces a directional security assessment based on questionnaire responses you provide.
This report is not a substitute for a formal security audit, penetration test, or compliance certification.
HelloMavens LLC makes no warranty, express or implied, regarding the completeness, accuracy, or fitness for any particular purpose of this report.
You confirm that you are authorized to submit this information about your organization.
HelloMavens LLC will process your responses solely to generate this report and will not retain raw scan data after report generation. Aggregate, anonymized scoring data may be retained for benchmarking.
Any remediation actions you take based on this report are at your own risk and discretion.
- SBS
- Security Benchmark for Salesforce — an open standard of audit-ready controls maintained at github.com/Salesforce-Security-Benchmark.
- OWASP Top 10
- A standard awareness document for developers and web application security maintained by the Open Web Application Security Project. The 2021 edition is referenced throughout this report.
- HIPAA Security Rule
- U.S. federal regulation governing security standards for protected health information.
- SOC 2
- Service Organization Control 2 — an audit framework focused on the AICPA Trust Services Criteria (security, availability, processing integrity, confidentiality, privacy).
- Inconclusive
- A control that could not be evaluated from the evidence provided (e.g. you answered "I don't know"). Inconclusive controls are excluded from scoring denominators per spec §8.
- Critical-fail cap
- If any Critical-tier control fails, the overall risk grade is capped at C regardless of other scores. Highlights catastrophic risk areas.
- Consultant scan
- An evidence-based audit run by a security consultant against your Salesforce org via the upcoming sbs-scan CLI. Resolves inconclusive controls with high-confidence findings.
What these passing controls protect against, in plain language.
Access controls
SBS-ACS-009 — Implement Compensating Controls for Privileged Non-Human Identities
Privileged non-human identities have compensating controls layered on top of credentials, so secret leakage isn't a single point of failure for your most sensitive integrations.
SBS-ACS-012 — Classify Users for Login Hours Restrictions
Privileged accounts are classified for login-hours restrictions, adding a defense-in-depth layer — so even a compromised credential can't be exploited freely during off-hours when detection is thin.
Data protection
SBS-DATA-002 — Maintain an Inventory of Long Text Area Fields Containing Regulated Data
Your inventory of regulated-data fields is current, so you know exactly where personal data lives and can enforce protection, retention, and deletion obligations without scrambling to find it.
Integrations
SBS-INT-002 — Inventory and Justification of Remote Site Settings
Your Remote Site Settings inventory is documented with justification, so unvetted endpoints can't be authorized for Apex callouts and you know exactly which external services your code talks to.