We’ve seen this trend play out in enterprise environments and worked with stakeholders to build solutions for it: the attacks that cause the most damage don’t break in through your perimeter. They log in through integrations you’ve already approved.

In 2025, over 700 organizations were compromised through stolen OAuth tokens from trusted Salesforce integrations. The attackers didn’t exploit Salesforce itself. They abused the access paths organizations had intentionally created—OAuth apps, browser extensions, and third-party plugins that inherited user privileges and operated within policy.

The traffic looked completely normal. API monitoring saw it. Gateway logs recorded it. SIEM ingested it. Nobody flagged it as dangerous because the integration itself had become a trusted user.

The Integration Becomes the Attack Vector

Across incidents we’ve investigated, the pattern repeats with uncomfortable consistency.

Attackers land through a seemingly helpful OAuth app or browser extension that asks for broad scopes during a standard consent flow. The user clicks “Allow” because the request looks legitimate. Every security control sees a sanctioned user authorizing a sanctioned app.

What happens next is quiet and API-native. The integration starts mapping what its token can access—listing drives, mailboxes, channels, repositories—using standard API calls at low volume to avoid rate-based alerts. It pivots through misconfigured sharing links and group resources, expanding from one user’s install into every workspace that user can touch.

Data moves out via export, sync, or backup-like behavior, so the traffic looks like heavy but plausible usage. For browser extensions, the attacker reads page content and session tokens, then forwards them to external servers using the combination of “read content” plus “send to remote service” permissions.

The blind spot exists because every tool sees a slice of this behavior, but no single system owns the full identity story. SaaS logs show a sanctioned app accessing files. Browser tooling sees an approved extension injecting scripts. API monitoring sees authenticated, policy-compliant calls. None of these systems alone has the context to say “this identity now has a toxic combination of scopes and behavior.”

Discovery Happens Weeks Too Late

The breaking point almost never happens during the attack. It happens days or weeks later, when the team realizes that what looked like normal integration traffic was actually the main attack path. Most teams first notice a business symptom—data exposure, strange changes in SaaS data, or an external notification—then trace it back to a “trusted” app they had allowed months earlier.

During post-incident review, they pull API and gateway logs and see that the traffic was fully visible, authenticated, and policy-compliant the entire time. That’s when the gap between “seen” and “understood as dangerous” becomes painfully obvious.

The moment that really breaks is when they realize nobody was responsible for the integration’s lifecycle. No one owned risk review, re-certification, or behavior monitoring. No one ever asked: “Should this app still have this much access?”

In many environments we’ve analyzed, risky apps, misconfigured sharing, or leaky extensions expose data for weeks before anyone calls it an incident. The investigation simply defines, retroactively, when the “attack” started.
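Part of why nobody asks that question is that few teams can even list the grants. As a minimal sketch of what first-pass visibility looks like on a single platform (here Google Workspace through the Admin SDK Directory API, assuming a service account with domain-wide delegation and read-only admin scopes), you can walk every user’s third-party OAuth grants and flag the ones holding broad scopes:

```python
# Minimal sketch: enumerate third-party OAuth grants across a Google Workspace
# tenant and flag broad scopes. Assumes a delegated service account; the key
# file path and admin address below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",                 # full Drive access
    "https://mail.google.com/",                              # full Gmail access
    "https://www.googleapis.com/auth/admin.directory.user",  # directory write
}

creds = service_account.Credentials.from_service_account_file(
    "sa.json",  # hypothetical service-account key file
    scopes=[
        "https://www.googleapis.com/auth/admin.directory.user.readonly",
        "https://www.googleapis.com/auth/admin.directory.user.security",
    ],
    subject="admin@example.com",  # hypothetical admin to impersonate
)
directory = build("admin", "directory_v1", credentials=creds)

page_token = None
while True:
    page = directory.users().list(
        customer="my_customer", maxResults=200, pageToken=page_token
    ).execute()
    for user in page.get("users", []):
        email = user["primaryEmail"]
        grants = directory.tokens().list(userKey=email).execute().get("items", [])
        for grant in grants:
            broad = set(grant.get("scopes", [])) & BROAD_SCOPES
            if broad:
                print(f"{email}: {grant.get('displayText')} ({grant['clientId']}) "
                      f"holds broad scopes {sorted(broad)}")
    page_token = page.get("nextPageToken")
    if not page_token:
        break
```

The same walk has to be repeated for Microsoft 365, Salesforce, Slack, and managed browsers before it resembles anything like a real inventory, which is exactly the gap the next sections describe.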
What Security Teams Thought They Had

Security teams that have been through one of these incidents describe a consistent gap between the controls they thought covered integrations and the lifecycle-level controls they actually needed.

They assumed network and identity already covered this. SSO, MFA, API gateways, and CASB were supposed to give them sufficient control over who accesses SaaS data and how. Any malicious use of an integration should either fail authentication or trigger an anomaly alert.

They relied on vendor security plus one-time reviews. Marketplace vetting, vendor security questionnaires, and app reviews were expected to keep dangerous integrations out. “Approved” apps should stay safe, and any real problem should show up quickly in SIEM or DLP alerts.

What they discovered they were missing was a real inventory of nonhuman identities. After an incident, teams realize they never had a living catalog of OAuth apps, extensions, AI agents, and service accounts (who installed them, what scopes they hold, what data they can touch) across all SaaS environments. They lacked a defined owner and process for integration onboarding, periodic risk review, and offboarding. Tokens, scopes, and access for old projects and tools accumulated unchecked for years.

The teams that break this pattern move from “Did we see the traffic?” to “Which nonhuman identities have toxic combinations of scopes and behavior?” They enforce continuous discovery, risk scoring, and automated revocation for integrations, not just better API dashboards.

Adding More Tools Widens the Gap

Adding another point solution often makes security teams feel safer, but structurally it tends to widen the attack surface faster than it closes the specific gap they’re worried about.

The intent is usually straightforward: “We’ll bolt on an API or app monitoring tool to watch this specific class of traffic.” Teams expect that if a tool can see third-party app calls or browser extensions, then those integrations are “covered.”

What actually happens is that every new point solution brings its own service accounts, API keys, OAuth grants, and admin consoles. Each becomes another high-value identity that can be phished, misconfigured, or over-privileged.

Now the same integration appears in multiple systems (SSPM, API monitor, browser tool, SIEM) without a unifying identity model, which makes it harder to see toxic combinations of scopes and permissions across tools.

Point tools excel at collecting and flagging events, but they usually don’t own the lifecycle question: “Should this integration exist, with these scopes, for this long?” Risky apps stay installed and trusted.

Each tool adds another queue of alerts and dashboards, but the team’s ability to correlate, triage, and act stays the same. Subtle integration abuse becomes even more likely to drown in the noise.

Before tool sprawl, attackers mainly had to compromise user accounts and core SaaS platforms. Afterward, there are dozens more interconnected admin panels and machine identities that, if abused, have broad reach into SaaS data.
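What that missing identity model could look like is straightforward to sketch. In the illustrative fragment below (the field names and toxic-combination rules are assumptions to tune, not a standard), every tool’s sighting of the same app is collapsed into one record keyed by its OAuth client ID, and the merged record is checked for scope combinations that no single tool sees together:

```python
# Sketch of a unifying identity model: the same OAuth app or extension reported
# by several tools (SSPM, API monitor, browser tool) is collapsed into one
# record so toxic scope combinations become visible. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class IntegrationIdentity:
    client_id: str
    names: set[str] = field(default_factory=set)      # how each tool labels it
    platforms: set[str] = field(default_factory=set)  # google, m365, chrome, ...
    scopes: set[str] = field(default_factory=set)     # union across all sightings
    installs: int = 0

TOXIC_COMBINATIONS = [
    # read sensitive content + a way to send it somewhere external
    ({"drive.readonly", "gmail.send"}, "can read Drive content and mail it out"),
    ({"read_page_content", "external_requests"}, "extension reads pages and calls out"),
]

def merge(sightings: list[dict]) -> dict[str, IntegrationIdentity]:
    """Fold per-tool observations into one record per client_id."""
    inventory: dict[str, IntegrationIdentity] = {}
    for s in sightings:
        ident = inventory.setdefault(s["client_id"], IntegrationIdentity(s["client_id"]))
        ident.names.add(s["name"])
        ident.platforms.add(s["platform"])
        ident.scopes.update(s["scopes"])
        ident.installs += s.get("installs", 0)
    return inventory

def toxic_findings(ident: IntegrationIdentity) -> list[str]:
    return [reason for combo, reason in TOXIC_COMBINATIONS if combo <= ident.scopes]
```

None of the individual tools is wrong; each simply lacks the merged view in which the combination becomes obviously dangerous.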
The Operational Challenge Nobody Talks About

The hardest part of responding isn’t technical capability. It’s operational ownership and coordination.

Once the team realizes “this integration is the attacker,” someone still has to decide who can revoke the app, kill its tokens, remove the extension, and change SaaS configurations without breaking production workflows. That authority is usually spread across security, IT, app owners, and business units.

High blast-radius fear paralyzes response. Because integrations often sit in the middle of critical workflows, teams worry that disabling them will stop sales, support, or engineering. Decisions bounce between stakeholders while the malicious identity remains active.

Practically, responders must touch multiple planes (IdP, each SaaS admin console, browser management, backup, SSPM) to fully evict the integration and close follow-on paths. These controls rarely live in one place or one playbook.

What’s missing in most incidents is a pre-agreed governance model for nonhuman identities: who owns integration risk, what the default response actions are, and how quickly they must be executed when an app crosses a risk threshold.

Speed Changes the Risk Calculus

Speed is the only thing that turns “blast-radius fear” from a reason to hesitate into a reason to act.

In SaaS, the longer a malicious integration is active, the more data it can encrypt, exfiltrate, or corrupt. Organizations with slow recovery see weeks of disruption and dramatically higher business impact. According to industry data, average ransomware recovery downtime for SaaS environments reaches 21-24 days because of API limitations.

Teams hesitate to pull the plug on a suspicious integration because they assume that recovery could take days or weeks thanks to SaaS API limits, manual restores, and incomplete backups. A two-hour recovery window fundamentally changes that trade-off.

When you know you can detect, isolate, and restore affected SaaS data in under two hours, the cost of disabling an integration drops from “existential outage” to “brief, managed interruption.” The safe default becomes: revoke access first, investigate second.

With automated ransomware and integration response that can identify the source app, cut its access, and roll back encrypted or modified data within that two-hour SLA, the focus shifts from debating impact to minimizing blast radius in real time.

A guaranteed fast recovery allows security leaders to pre-authorize “kill” actions for high-risk integrations—revoking tokens, removing extensions, freezing affected data sets—because everyone understands that normal operations can be restored quickly if needed.

What Needs to Be Unified

For a two-hour response to work across Google Workspace, Microsoft 365, Salesforce, and Slack, three things have to be unified: the identity model for integrations, the control plane for response, and data-layer recovery. Most organizations are fragmented on all three.

You need one integration-centric identity model: a single view where an OAuth app or extension is represented once, with its installs, scopes, and behavior correlated across every SaaS platform and browser. “App X” in Google, M365, Salesforce, Slack, and Chrome gets treated as one actor, not five separate entries.

You need one cross-SaaS response plane. The team must be able to isolate that actor from a single place: revoke its tokens, block new installs, lock or quarantine affected users and spaces, and enforce DLP policies across all connected SaaS apps without swivel-chairing through four admin consoles and three tools. A minimal sketch of such a kill switch appears after this list.

You need unified backup and rollback semantics. Behind that identity, there has to be consistent, API-aware backup and recovery for each SaaS platform, so that when you kill the integration you can roll back what it touched (files, emails, records, channels) to clean versions within the same two-hour window, regardless of which platform it hit.
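To make the response-plane requirement concrete, here is a deliberately simplified kill-switch sketch that revokes the same actor in two platforms from one code path: the Google side uses the Admin SDK tokens.delete method, the Microsoft side removes the app’s delegated permission grants through Microsoft Graph. Authentication setup, error handling, and the remaining platforms are omitted; treat it as an outline under those assumptions, not a finished runbook.

```python
# Simplified cross-SaaS kill switch: revoke one integration's access in Google
# Workspace and Microsoft 365 from a single function. Salesforce, Slack, and
# browser management would hang off the same entry point.
import requests

def revoke_google_grants(directory, users: list[str], client_id: str) -> None:
    """directory: an authorized Admin SDK 'admin'/'directory_v1' client."""
    for email in users:
        directory.tokens().delete(userKey=email, clientId=client_id).execute()

def revoke_m365_grants(graph_token: str, service_principal_id: str) -> None:
    """Delete the app's delegated permission grants via Microsoft Graph."""
    headers = {"Authorization": f"Bearer {graph_token}"}
    base = "https://graph.microsoft.com/v1.0"
    grants = requests.get(
        f"{base}/servicePrincipals/{service_principal_id}/oauth2PermissionGrants",
        headers=headers, timeout=30,
    ).json().get("value", [])
    for grant in grants:
        requests.delete(f"{base}/oauth2PermissionGrants/{grant['id']}",
                        headers=headers, timeout=30)

def kill_integration(directory, graph_token: str, affected_users: list[str],
                     google_client_id: str, m365_service_principal_id: str) -> None:
    # One decision, one action path: cut the actor's access everywhere it lives.
    revoke_google_grants(directory, affected_users, google_client_id)
    revoke_m365_grants(graph_token, m365_service_principal_id)
```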
A two-hour response only works if your recovery points and recovery times are aligned across platforms. If Salesforce can be restored quickly but Slack or Google take days, you’re still constrained by the slowest system.

Most teams have per-platform tools (a Google add-on, a Salesforce scanner, a separate browser security product) that each see part of the problem, but nothing that normalizes an integration’s identity and risk posture across all of them.

Response usually means raising tickets for different platform owners who act on different timelines and with different playbooks. This is fundamentally incompatible with a two-hour containment and recovery objective.

The Forcing Functions That Drive Change

Security leaders who genuinely move to integration-first thinking almost always point to a forcing function that turned “consolidation” from a cost project into a board-level risk issue.

A painful incident. Many describe a specific event (a browser extension or OAuth app touching Google Workspace, Microsoft 365, and Slack at once) where investigation and recovery took weeks because every platform and tool had a separate view and playbook. After such an incident, the conversation stops being about incremental sensor coverage and becomes about provable blast-radius reduction and RTO/RPO guarantees across all SaaS platforms.

Audits exposing fragmented control. CISOs talk about PCI, SOC 2, or sector-specific audits where they had to explain, system by system, how third-party apps and nonhuman identities were governed, and realized they didn’t have a consistent answer across Google, M365, Salesforce, and Slack. Regulators and customers increasingly ask for concrete proof: unified inventories of apps, risk scores, access histories, and response workflows for integrations.

Tool and alert fatigue at scale. Leaders see their teams drowning in per-platform tools, overlapping alerts, and manual tickets just to manage app reviews and incidents. They recognize this doesn’t scale as SaaS usage and automation grow. When it becomes clear they can’t hire their way out, consolidating discovery, risk, DLP, and backup into a single control plane shifts from “nice optimization” to the only viable way to stay ahead.

These leaders describe the pivot as accepting that the primary risk object in SaaS is no longer just the user or the platform, but the integration that can span them. This forced them to align visibility, governance, and recovery around that object.

How Proactive Teams Think Differently

Organizations that get ahead of integration attacks think of integrations as a measurable, continuously changing risk surface, not a static allowlist problem.

They talk about “nonhuman accounts” and “apps as users,” and expect the same governance you’d apply to a privileged human: joiner/mover/leaver processes, ownership, least privilege, and periodic certification for every app and extension.

Instead of treating app reviews as a project, they assume risk changes whenever scopes, publishers, code, or user adoption changes. Risk assessment is continuous and automated, not an annual spreadsheet exercise.

They assign numeric risk scores to every integration. Leading teams score every OAuth app and browser extension based on permissions, code behavior, publisher reputation, data sensitivity, and user install base. They track score movement over time.
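What such a score can look like is simple to sketch. The weights, factor names, and example values below are illustrative assumptions rather than an established model; the point is that every integration ends up with a comparable number that can be trended:

```python
# Illustrative risk-scoring sketch. Weights and factor names are assumptions to
# tune against your own environment, not a standard formula.
WEIGHTS = {
    "scope_breadth": 0.35,         # how much data the granted scopes can reach
    "external_calls": 0.20,        # does it send data to third-party endpoints
    "publisher_reputation": 0.20,  # unknown or personal-email publishers score high
    "data_sensitivity": 0.15,      # sensitivity of the stores it touches
    "install_base": 0.10,          # share of users/workspaces with it installed
}

def risk_score(factors: dict[str, float]) -> float:
    """Each factor is normalized to 0.0 (benign) .. 1.0 (worst case)."""
    return round(100 * sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS), 1)

# Example: a broadly scoped extension from an unknown publisher on 40% of seats.
print(risk_score({
    "scope_breadth": 0.9,
    "external_calls": 1.0,
    "publisher_reputation": 0.8,
    "data_sensitivity": 0.7,
    "install_base": 0.4,
}))  # -> 82.0
```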
They quantify blast radius. They measure how many users, workspaces, records, and files a given integration can reach across Google Workspace, Microsoft 365, Salesforce, Slack, and browsers. They use that as a key input into risk decisions and exceptions.

Rather than relying on self-reporting or ad-hoc scans, they maintain an always-up-to-date inventory of all third-party apps and extensions, including shadow IT, with unified visibility into where each one is installed and what it can touch.

They define policies like “auto-block high-risk extensions,” “auto-quarantine apps above score X touching sensitive data,” or “require owner approval for new high-scope integrations.” The platform enforces them instead of handling each app via tickets.

They alert when an app’s permissions expand, its publisher changes, or its risk score jumps—even if it was previously “approved”—and automatically trigger re-certification or revocation workflows.

Because they have scores, ownership, blast-radius metrics, and automated rollback, security leaders can challenge risky integrations proactively instead of waiting for them to show up in an incident report.

The Baseline Nobody Wants to Admit

Two realities often get missed in these conversations.

High risk is already the baseline, not the exception. Large-scale assessments show that most browser extensions and a significant share of OAuth apps connected to Google Workspace and Microsoft 365 fall into medium- or high-risk categories—even before you factor in clearly malicious campaigns. When organizations finally run a proper app and extension risk assessment, they usually discover thousands of integrations already live in production, many with unknown authors, personal email registrations, or excessive scopes. The attack pattern we’ve been discussing doesn’t start at zero. It starts from this saturated baseline.

AI-powered integrations are creating invisible data flows. A growing number of SaaS apps, plugins, and extensions now send content to LLMs or external AI services in the background. Sensitive SaaS data can cross organizational boundaries without any obvious user action like “export” or “share.” Because these integrations can change behavior through updates or backend model changes, point-in-time reviews and contract language aren’t enough. Continuous, behavior-aware risk scoring and monitoring become essential to see when an otherwise “approved” AI-powered integration turns into a data exfiltration path.

The right question is less “Could this pattern happen here?” and more “Which of our existing integrations would be the easiest way for it to happen tomorrow?”
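One way to make that question operational is to rank what is already installed. The fragment below assumes an inventory where each record carries a risk score and basic blast-radius counts (the field names are hypothetical) and simply surfaces the highest-exposure integrations first:

```python
# Sketch: rank existing integrations by risk score combined with blast radius.
# Field names (risk_score, reachable_users, reachable_objects) are placeholders
# for whatever your inventory actually stores.
def exposure(record: dict) -> float:
    blast_radius = record["reachable_users"] * max(record["reachable_objects"], 1)
    return record["risk_score"] * blast_radius

def easiest_paths(inventory: list[dict], top_n: int = 10) -> list[dict]:
    return sorted(inventory, key=exposure, reverse=True)[:top_n]
```

The output is not a verdict; it is a review queue ordered by how attractive each existing grant would be to the attack pattern described above.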
The First Structural Change to Make

If you recognize your team is still operating reactively, the first structural change is to create a single, owned integration-risk inventory with scores and blast radius for every app and extension—and make that inventory the system of record for decisions.

Start by unifying visibility across SaaS and browsers. Discover all OAuth apps and browser extensions across Google Workspace, Microsoft 365, Salesforce, Slack, and endpoints into one catalog. Include status, users, and “access last granted” for each integration.

Assign explicit ownership. Make that catalog owned by a specific function (SecOps or a SaaS security team) so there’s one place and one team responsible for the lifecycle of nonhuman identities.

Score every integration. Attach a standardized risk score to each app and extension based on scopes, external communication, vendor reputation, security posture, and behavior. Track score history as things change.

Quantify blast radius. For each integration, record how many users, groups, workspaces, and data objects it can reach. Use that blast radius plus risk score as the primary lens for reviewing and prioritizing integrations.

Flip the default with policy. Define simple, metric-driven policies like “block or auto-review any app above risk score X with access to sensitive data or more than N users.” The burden of proof shifts to justifying why a high-risk integration should remain.

Automate first responses. Use the platform to automatically enforce allowlist and blocklist decisions, downgrade scopes, or trigger re-certification when scores change, instead of relying on ad-hoc tickets.

By making “one inventory plus one scoring model” the foundation, teams move naturally from reactive questions about individual apps to proactive, metric-driven governance of the entire integration surface.

We’ve watched this pattern play out across enough incidents to know: the organizations that wait for an incident to force change pay significantly more than those that build the unified inventory, scoring, and automation before the breach happens.

Start with the inventory. Make it owned. Score the risk. Automate the response.

The integrations are already there. The question is whether you’ll govern them before or after they’re weaponized.

References

SOCRadar – Top 10 Supply Chain Attacks 2025
Cigent – Ransomware and Recovery Time: What You Should Expect
Spin.AI – Browser Extension Risk Report