
Why Backup Infrastructure Became the Easiest Target in Enterprise Security

Jan 16, 2026 | Reading time 9 minutes
Author: Sergiy Balynsky, VP of Engineering, Spin.AI

Even organizations with maturing security programs (strong perimeter defenses, good identity management, regular pen testing) still get hit hard by ransomware. The attacks succeed not because front-door security failed, but because attackers find a structural weakness most teams haven’t considered.

They target backup infrastructure first.

The data confirms what we’ve observed across our customer base: 93% of cyber-attacks now attempt to compromise backup repositories, with 75% successfully reaching at least some backup data. When attackers destroy backup systems, they’re not just stealing data. They’re eliminating your recovery options, which fundamentally changes the negotiation.

Organizations whose backups get compromised face median ransom demands of $2.3 million, compared to $1 million when backups remain intact. Attackers understand that backup destruction doubles their leverage.

The Control Plane Problem

The first thing attackers go after isn’t storage. It’s control.

Admin credentials, backup consoles, retention policies, job configurations: anything that lets them quietly turn your safety net into an illusion before they ever encrypt a file. That targeting sequence exposes a core assumption in many organizations: backup is treated as “trusted infrastructure behind the scenes,” not as a high-value, actively defended security asset.

In practice, this looks like stolen or abused privileged access to the backup system: AD-integrated admin accounts, shared backup admin logins, or cloud API credentials that control backup jobs and retention. With that control plane access, attackers disable or thin out backups systematically.

Delete snapshots. Shorten retention. Pause jobs. Redirect backups.

By the time encryption starts, recent clean restore points simply don’t exist. The backup jobs still show green in your monitoring dashboards, but the data you need to recover is already gone.
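Making that tampering visible means giving the backup control plane the same audit scrutiny as production. Below is a minimal sketch, assuming an AWS environment with CloudTrail enabled and boto3 credentials configured, that sweeps recent audit events for actions which quietly thin out the safety net; the event names are illustrative and should be extended for your own stack.

```python
"""Minimal sketch: flag backup control-plane changes in AWS CloudTrail.

Assumes an AWS environment with CloudTrail enabled and boto3 credentials
configured; the event names below are illustrative examples of actions that
thin out a safety net (deleting snapshots, rewriting backup plans).
"""
from datetime import datetime, timedelta, timezone

import boto3

# Control-plane actions worth alerting on (extend for your environment).
SUSPECT_EVENTS = ["DeleteSnapshot", "UpdateBackupPlan", "DeleteRecoveryPoint"]


def recent_backup_tampering(hours: int = 24) -> list[dict]:
    """Return CloudTrail events that touched backup configuration recently."""
    cloudtrail = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    findings = []
    for event_name in SUSPECT_EVENTS:
        pages = cloudtrail.get_paginator("lookup_events").paginate(
            LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
            StartTime=start,
        )
        for page in pages:
            for event in page["Events"]:
                findings.append(
                    {
                        "event": event_name,
                        "user": event.get("Username", "unknown"),
                        "time": event["EventTime"].isoformat(),
                    }
                )
    return findings


if __name__ == "__main__":
    for f in recent_backup_tampering():
        print(f"[ALERT] {f['time']} {f['user']} ran {f['event']}")
```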

How Legacy Architecture Created the Gap

Attackers expect backup to be a soft target because backup grew up as an operations system, not a security system. On-premise, it was designed for “get everything, trust the operator” rather than “assume the operator or their account can be malicious.”

Traditional backup infrastructure lived on trusted internal networks. Broad, implicit trust between backup servers, storage, and admin consoles made sense when the main concern was hardware failure or operator error, not insider abuse or targeted compromise.

Backup admins were typically given sweeping rights (see all systems, read all data, change any job) because that was the simplest way to guarantee coverage and fast restores. Those privileges were rarely segmented or audited like production access.

Security controls on backup systems lagged accordingly. Weaker or shared credentials, limited MFA, generic “backup-admin” accounts, and sparse monitoring became standard, since these were viewed as back-office utilities rather than high-value attack surfaces.

When organizations later lifted that same model into cloud and SaaS (central backup consoles with broad reach, integrated with the same identity plane as production), they preserved the full-trust assumptions but exposed them to internet-scale threats.

The SaaS Replication Pattern

The transition usually looks modern on the surface. IdP-integrated, cloud-to-cloud, everything “as-a-service.” But underneath, it quietly recreates the old “one god-console, one god-account” pattern from the on-premise world.

Organizations grant the backup platform tenant-wide, high-privilege API scopes via a single app consent: read/write across all mailboxes, drives, sites, and teams, mirroring the old “backup server can see every volume” assumption. They then assign one or a few global backup admins with broad rights in both the SaaS tenant and the backup console.

Because backup access rides on the same identity plane and admin roles as production, a single compromised account or OAuth integration can now sabotage both live data and its safety net. Just as compromising a backup server on-premise gave full visibility and control, the cloud version creates the same vulnerability at internet scale.

Those backup control paths often have weaker guardrails (less granular RBAC, fewer approvals for destructive changes, and limited monitoring) since they’re still perceived as “maintenance tools,” not as the most attractive target in the environment.
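A practical starting point is simply enumerating which integrations hold those tenant-wide scopes. The sketch below assumes a Microsoft 365 tenant and an already-acquired Graph access token with rights to read delegated permission grants; the “broad scope” list is illustrative, and app-only (application) permissions would need a separate check.

```python
"""Minimal sketch: list delegated OAuth grants in a Microsoft 365 tenant and
flag apps holding tenant-wide read/write scopes.

Assumes you already hold a Graph access token with rights to read
oauth2PermissionGrants; the broad-scope list is illustrative, not exhaustive.
"""
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"
BROAD_SCOPES = {
    "Mail.ReadWrite",
    "Files.ReadWrite.All",
    "Sites.FullControl.All",
    "Directory.ReadWrite.All",
}


def flag_broad_grants(access_token: str) -> list[dict]:
    """Return delegated grants whose scope string contains a tenant-wide scope."""
    headers = {"Authorization": f"Bearer {access_token}"}
    flagged, url = [], GRAPH_URL
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        for grant in body.get("value", []):
            scopes = set((grant.get("scope") or "").split())
            hits = scopes & BROAD_SCOPES
            if hits:
                flagged.append({"clientId": grant["clientId"], "scopes": sorted(hits)})
        url = body.get("@odata.nextLink")  # follow paging if present
    return flagged
```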

The Timing Window That Matters

Even with hourly backup jobs, attackers have enough room to operate. They’re not racing your backup schedule. They’re racing your detection and containment.

Modern ransomware campaigns spend days doing low-and-slow reconnaissance before any obvious encryption. Mapping users, data, and backup tooling. By the time you’d notice something wrong, they already know exactly what to hit and how your backups run.

They use that access to modify retention, pause or narrow jobs, or poison data so that upcoming backup runs faithfully capture compromised states instead of clean ones. Between January and September 2025, 4,701 ransomware incidents were recorded globally, a 34% increase over the same period in 2024.

Backups are snapshots, not time machines. If compromise or corruption sits undetected for hours or days, multiple backup points can end up containing encrypted or altered data, shrinking or eliminating your pool of “known good” copies.

Cloud ransomware increasingly encrypts at the object/API level and often outside business hours, so it can touch a huge amount of data between your last good backup and the point anyone raises an alarm.

Most environments still assume “shorter RPO plus daily or hourly jobs equals safety,” but that assumes attacks are sudden and immediately visible. Modern campaigns are phased and stealthy, designed to live inside your backup interval.
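One way to avoid trusting a poisoned point is to compare consecutive backup manifests before relying on the newer one. The sketch below is purely illustrative: it assumes you can export a simple path-to-hash manifest per restore point from your backup tooling, and it flags a change ratio larger than normal work would explain.

```python
"""Minimal sketch: compare two backup manifests (path -> content hash) and
flag an abnormal change ratio before the newer point is trusted for restores.

The manifest format is an assumption; real tooling would export something
equivalent from snapshot metadata.
"""


def change_ratio(previous: dict[str, str], current: dict[str, str]) -> float:
    """Fraction of previously-seen objects that were modified or deleted."""
    if not previous:
        return 0.0
    changed = sum(
        1 for path, digest in previous.items()
        if current.get(path) != digest  # modified or missing in the new point
    )
    return changed / len(previous)


def looks_poisoned(previous: dict[str, str], current: dict[str, str], threshold: float = 0.4) -> bool:
    """True when the delta between points exceeds what normal work explains."""
    return change_ratio(previous, current) >= threshold


# Example: 3 of 4 tracked objects changed at once -> treat the point as suspect.
prev = {"a.docx": "h1", "b.xlsx": "h2", "c.pptx": "h3", "d.txt": "h4"}
curr = {"a.docx": "x1", "b.xlsx": "x2", "c.pptx": "x3", "d.txt": "h4"}
assert looks_poisoned(prev, curr)
```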

When Separation Becomes Downtime

What breaks when backup, security monitoring, and incident response stay separated is the feedback loop. Security sees something bad, backup has the clean data, and IR owns the process, but in this traditional setup no single system or team can move from “we detected it” to “we’ve responded” to “we’ve safely rolled it back” in one coherent motion.

That gap is exactly where a recoverable event stretches into weeks.

Security tooling flags suspicious SaaS behavior (mass file edits, unusual OAuth scopes, abnormal mailbox activity) and raises an alert. The SOC can isolate accounts or kill sessions, but it usually lacks direct visibility into backup state. What’s clean, what’s in scope, what restore options exist. Containment and recovery planning split immediately into separate tracks.

The backup team sees that jobs are green and snapshots exist, but they don’t have the security context. Which users, which folders, which time window constitute the malicious change set to target. Without that context, they default to coarse options (tenant-level or app-level restores, big time windows) which are slow, risk overwriting good work, and still may miss data touched by compromised integrations or service accounts.

Incident response tries to stitch security alerts and backup capabilities together manually. War rooms, spreadsheets, and ticket threads between three teams, each in its own tools and vocabulary.

Decisions like “How far back do we roll this group’s data?” or “Which shared folders were touched by both the attacker and legitimate work?” take days to resolve because no single platform correlates users, data, attack behavior, and restore points.
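Closing that gap means translating the security team’s answer (which actors, which window) into a restore scope the backup team can execute. The sketch below uses assumed data structures standing in for whatever your SIEM and backup platform actually expose; the point is that the malicious change set, not the whole tenant, defines what gets rolled back.

```python
"""Minimal sketch: turn a security team's change set (who, when) into a scoped
restore plan instead of a coarse tenant-wide rollback.

The audit-event and restore-item structures are assumptions standing in for
whatever your SIEM and backup platform actually expose.
"""
from dataclasses import dataclass
from datetime import datetime


@dataclass
class AuditEvent:
    actor: str
    object_path: str
    timestamp: datetime


@dataclass
class RestoreItem:
    object_path: str
    restore_to: datetime  # last known-good point before the malicious change


def build_restore_plan(
    events: list[AuditEvent],
    compromised_actors: set[str],
    window_start: datetime,
    window_end: datetime,
    last_clean_point: datetime,
) -> list[RestoreItem]:
    """Select only the objects touched by compromised actors in the window."""
    touched = {
        e.object_path
        for e in events
        if e.actor in compromised_actors and window_start <= e.timestamp <= window_end
    }
    return [RestoreItem(path, last_clean_point) for path in sorted(touched)]
```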

Organizations face an average of 24-27 days of disruption following ransomware attacks, with recovery costs averaging $5-6 million per incident. A single hour of downtime costs approximately $300,000 for most enterprises.

The Restore Reality Gap

The most common gap between vendor promises and actual experience is simple: vendors promise “fast, granular restore,” but organizations experience slow, coarse, and incomplete recovery once they try to use it at real scale under real constraints.

Vendors talk about quick recovery, but in practice large restores slam into SaaS provider throttling and rate limits. What sounded like “minutes or hours” turns into multi-day batch jobs once thousands of objects are involved. Teams don’t anticipate that the limiting factor won’t be the backup system’s storage, but the underlying platform APIs, which were never designed for tenant-wide rollback under attack conditions.
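Any realistic restore runbook therefore has to treat throttling as the normal case, not an exception. Here is a minimal sketch, where restore_object is a hypothetical call standing in for your backup platform’s item-level restore endpoint, of driving restores with exponential backoff on HTTP 429 responses.

```python
"""Minimal sketch: drive a large restore through a rate-limited SaaS API with
exponential backoff, instead of assuming the backup store is the bottleneck.

`restore_object` is a hypothetical call standing in for your platform's
item-level restore endpoint; the retry policy is illustrative.
"""
import time

import requests


def restore_object(session: requests.Session, base_url: str, object_id: str) -> requests.Response:
    # Hypothetical restore endpoint; replace with your backup platform's API.
    return session.post(f"{base_url}/restore", json={"objectId": object_id}, timeout=60)


def restore_batch(session: requests.Session, base_url: str, object_ids: list[str]) -> None:
    for object_id in object_ids:
        delay = 1.0
        while True:
            resp = restore_object(session, base_url, object_id)
            if resp.status_code == 429:  # throttled: honor Retry-After if present
                delay = float(resp.headers.get("Retry-After", delay * 2))
                time.sleep(min(delay, 120))
                continue
            resp.raise_for_status()
            break
```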

Brochures emphasize item-level restore, yet what organizations get is object-centric recovery that doesn’t map cleanly back to how the business thinks about workflows, teams, or projects. “We can restore this file” is not the same as “we can reconstruct this entire workflow with the right structure, permissions, and point-in-time state intact.”

Marketing describes intuitive, one-click restore, but real recovery under pressure involves multiple consoles, CSVs, permission fixes, and user-by-user verification. That operational drag is what turns a nominally “supported” restore into days of coordinated effort across IT and the business, even though every backup job looked green the whole time.

The Architectural Shift That Works

Organizations that successfully unified backup and security monitoring made one structural change first: they stopped treating “backup success” as the main metric and started measuring Recovery Time Actual and Recovery Point Actual for specific business workflows.

That shift from “are jobs green?” to “how fast and how precisely can we reverse this workflow?” signals the architecture is actually changing.

They stopped assuming tenant-wide, god-mode backup access is acceptable and began dismantling “one console, one super-admin” patterns. Separating backup control from general SaaS admin roles. Tightening who can change retention, delete backups, or run bulk restores.

They stopped treating backup as a back-office utility owned solely by IT ops and instead pulled it under the same governance and threat modeling as other critical security controls.

They started tracking RTA/RPA for named workflows (“month-end close in 365,” “customer onboarding in Google Workspace”) and using those numbers to drive requirements for detection, isolation, and restore automation.

They designed backup with explicit security properties: separate control plane, least-privilege access, immutability, anomaly detection. Those became prerequisites to hitting their recovery SLOs under real attack conditions, not optional hardening.
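Measuring RTA and RPA does not require special tooling; it requires capturing two numbers during every drill. A minimal sketch, where run_restore and newest_recovered_timestamp are hypothetical hooks into your own drill process:

```python
"""Minimal sketch: capture Recovery Time Actual and Recovery Point Actual for
a named workflow during a restore drill.

`run_restore` and `newest_recovered_timestamp` are hypothetical hooks into
your own drill tooling; the goal is hard numbers, not green job statuses.
"""
import time
from datetime import datetime, timezone


def measure_drill(workflow: str, run_restore, newest_recovered_timestamp) -> dict:
    incident_declared = datetime.now(timezone.utc)
    started = time.monotonic()

    run_restore(workflow)                             # perform the scoped restore
    rta_minutes = (time.monotonic() - started) / 60   # Recovery Time Actual

    last_good = newest_recovered_timestamp(workflow)  # tz-aware UTC datetime
    rpa_minutes = (incident_declared - last_good).total_seconds() / 60  # Recovery Point Actual

    return {
        "workflow": workflow,
        "rta_minutes": round(rta_minutes, 1),
        "rpa_minutes": round(rpa_minutes, 1),
    }
```

A drill that produces these two numbers for a named workflow tells you more than a month of green job statuses.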

Navigating the Friction

The first friction they hit is convenience loss. People who were used to “just fixing it” with broad admin rights now have to wait for approvals, coordinate with another team, or work through more constrained roles.

The pushback is almost always about speed and ownership. “This slows us down.” “Who’s actually responsible now?”

Successful teams address that by pairing tighter access with better automation and clearer workflows, so the new model feels faster for 95% of cases even though raw privileges went down.

They automate the common paths (standard scoped restores, routine scope changes) so they’re actually faster through the new, limited interface than they ever were via manual god-admin work. They define and document clear break-glass procedures for truly urgent cases, with audited, time-bound elevation.
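A break-glass path only works if elevation is both time-bound and recorded by default. The sketch below is an in-memory illustration of that shape; in practice the grant store and approval step would live in your IAM or PAM tooling.

```python
"""Minimal sketch: time-bound, audited break-glass elevation for backup admin
rights. The grant store and approval step are placeholders for whatever IAM or
PAM tooling you actually run.
"""
from datetime import datetime, timedelta, timezone

AUDIT_LOG: list[dict] = []
ACTIVE_GRANTS: dict[str, datetime] = {}


def grant_break_glass(user: str, reason: str, approver: str, minutes: int = 60) -> None:
    """Record an approved, expiring elevation for one user."""
    expires = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    ACTIVE_GRANTS[user] = expires
    AUDIT_LOG.append(
        {"user": user, "reason": reason, "approver": approver, "expires": expires.isoformat()}
    )


def has_elevated_access(user: str) -> bool:
    """Check the grant and drop it automatically once it has expired."""
    expires = ACTIVE_GRANTS.get(user)
    if expires is None or datetime.now(timezone.utc) >= expires:
        ACTIVE_GRANTS.pop(user, None)
        return False
    return True
```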

They make RTA/RPA and risk reduction visible, showing that after separation and hardening, drills are faster and safer than before. That reframes the extra structure as a performance win, not just a control.

What usually flips the skeptics is seeing a like-for-like drill where the same workflow that used to take days to recover is back in hours, with fewer mistakes and less manual heroics. When RTA/RPA drop and the incident feels calmer, the extra structure stops looking like bureaucracy and starts looking like performance engineering.

What Comes Next

The next frontier is that backup stops being “just” for resilience and becomes the authoritative data source for everything else you care about in SaaS security: forensics, e-discovery, AI governance, even insider-risk investigations.

Organizations are starting to realize that the same immutable, point-in-time copies used for recovery are also the cleanest source of truth for internal investigations, legal discovery, and post-incident forensics in SaaS.

This means backup platforms will increasingly be asked to answer questions like “what exactly changed, who touched it, and what did this mailbox or workspace look like at that moment?” not just “can you restore it.”

As generative AI and copilots get wired into SaaS, the question shifts from “is my data backed up?” to “what models and tools are training on or reading from my historical SaaS data, and under what controls?” Backup stores some of the richest longitudinal SaaS data. Treating it as an active security system means thinking about how that history is protected, queried, and potentially used by AI systems.

The emerging pattern is to treat “blast radius over time across multiple SaaS apps” as a concrete object you can query and act on, with detection, containment, and restore all driven from one correlated view.

Very few organizations today can ask and answer, in one place: “Show me every object, in every SaaS app, touched by this user, this extension, or this integration between 09:00 and 09:20, and roll only that back.” But that will feel obvious and expected within 12 to 24 months.
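That query is conceptually simple once audit events from multiple SaaS apps land in one correlated index. A minimal sketch, where the record shape and the index itself are assumptions:

```python
"""Minimal sketch of the "blast radius over time" query described above, run
over an assumed unified audit index that already correlates events from
multiple SaaS apps. The record shape is an assumption.
"""
from datetime import datetime


def blast_radius(index: list[dict], actor: str, start: datetime, end: datetime) -> list[dict]:
    """Every object, in every app, touched by `actor` inside the window."""
    return [
        {"app": e["app"], "object": e["object_id"], "action": e["action"]}
        for e in index
        if e["actor"] == actor and start <= e["timestamp"] <= end
    ]


# The same selection would then feed a scoped rollback: restore only these
# objects to their last clean point, leaving everything else untouched.
```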

As backup becomes central to investigations and e-discovery, regulators, customers, and internal privacy teams will scrutinize who can see and search historical SaaS content inside backup platforms, not just in production. That will push organizations to apply zero-trust, fine-grained RBAC, and data minimization to backup systems themselves.

The Decision That Positions You Correctly

For organizations just beginning to recognize this shift, the one architectural decision to make now is this: treat backup data as a governed security asset from day one, not as an operational afterthought you’ll harden later.

That means designing access controls, audit logging, immutability, and retention policies for backup infrastructure with the same rigor you apply to production identity systems or endpoint security. It means asking “who can see, search, modify, or delete this historical data?” and “how do we prove that in an audit or investigation?” before you ever need forensics or e-discovery capabilities.

If you architect backup as a governed, hardened system now, adding forensics, AI governance, and cross-SaaS correlation later becomes an extension of existing controls rather than a retrofit that exposes years of ungoverned historical data.

The organizations that get ahead of this will treat backup not only as “how we get back up,” but as “where our most trustworthy view of SaaS actually lives.” That positioning turns what most teams see as insurance into load-bearing infrastructure for security, compliance, and operational resilience.

Start measuring Recovery Time Actual and Recovery Point Actual for your critical workflows today. Run a realistic restore drill this quarter. Track how long it actually takes and how many manual steps are involved. That data will tell you whether your backup infrastructure is genuinely protecting you or just creating the illusion of safety.

The gap between those two realities is where attackers are already operating.

References

  1. Veeam. “New Veeam Research Finds 93 Percent of Cyber Attacks Target Backup Storage to Force Ransom Payment.” https://www.veeam.com/company/press-release/new-veeam-research-finds-93-percent-of-cyber-attacks-target-backup-storage-to-force-ransom-payment.html
  2. Spin.AI. “Ransomware Attacks Surged in 2025.” https://spin.ai/blog/ransomware-attacks-surged-2025/

Sergiy Balynsky is the VP of Engineering at Spin.AI, responsible for guiding the company's technological vision and overseeing engineering teams.

He played a key role in launching a modern, scalable platform that has become the market leader, serving millions of users.

Before joining Spin.AI, Sergiy contributed to AI/ML projects, fintech startups, and banking domains, where he successfully managed teams of over 100 engineers and analysts. With 15 years of experience in building world-class engineering teams and developing innovative cloud products, Sergiy holds a Master's degree in Computer Science.

His primary focus lies in team management, cybersecurity, AI/ML, and the development and scaling of innovative cloud products.
