
Why Backup Security Controls Are the New Perimeter

Jan 23, 2026 | Reading time 8 minutes
Author:
Sergiy Balynsky, VP of Engineering, Spin.AI

We’ve been watching an uncomfortable pattern emerge across ransomware incidents over the past few years.

Attackers aren’t treating backup platforms as collateral damage anymore. They’re running playbooks that start with systematically mapping, disabling, and corrupting backup infrastructure long before the encryption event, and they’re doing it with far more precision and automation than three years ago.

The backup layer has become an explicit, early-stage objective.

Recent data shows that 93% of ransomware incidents now target backup repositories, with 75% of victims losing at least some of their backups during the attack. Three years ago, the dominant pattern was simpler: gain access, move laterally, encrypt production, then opportunistically hit obvious backup shares if discovered.

That’s not what we’re seeing now.

How Attackers Changed Their Approach

Modern campaigns maintain persistence for weeks, quietly ensuring that malware sits inside multiple backup generations before detonating the ransomware payload. Earlier waves tended to move faster to encryption, which meant more organizations still had at least some clean, earlier restore points available.

The shift is strategic, not opportunistic.

Attackers are routinely disabling backup jobs, extending or shrinking retention in weaponized ways, deleting or expiring snapshots, and modifying replication rules so that bad states propagate everywhere at once. A few years ago, most incidents centered on encrypting backup volumes directly, without this level of fine-grained manipulation of schedules and policies.

Consider a recent incident where the real inflection point comes 48 hours into recovery.

Every restored environment starts re-encrypting itself and beaconing to the same command-and-control infrastructure as production. That’s the moment the team realizes their last four weeks of backups weren’t clean snapshots of a healthy state. They were faithfully preserved copies of an environment that was already compromised.

In incidents like this, the threat actor typically gains initial access through a remote access service, then spends several weeks moving laterally while staying under traditional endpoint and perimeter radar. During that dwell period, nightly and weekly backups continue to run as normal, capturing both legitimate data and the attacker’s persistence mechanisms.

Because the backup system’s success criteria focus on job completion, not integrity or anomaly detection, every poisoned recovery point is still marked as a valid restore candidate.

The Ownership Gap That Creates Vulnerability

The core blocker isn’t technology. It’s ownership and mental model.

Most security teams still see backup as insurance managed by IT, not as an active control surface an attacker will deliberately weaponize, so it falls outside their “must-harden and must-monitor” perimeter. That split in accountability, combined with legacy processes and tooling, means backup platforms rarely get the same identity, access, telemetry, and testing rigor as production systems.

In most org charts, backup and disaster recovery sit with infrastructure or platform teams whose KPIs are uptime and restore speed, not threat reduction or adversary disruption. Security focuses on endpoints, identities, and networks. Backup falls into a gray zone where neither team fully owns hardening, monitoring, or threat modeling.

Recent reporting shows that a large majority of backup administrators believe alignment between backup and security teams needs “significant improvement” or a “complete overhaul.”

Many organizations still frame backup as protection against hardware failures, human error, or natural disasters, not against a human adversary that will intentionally destroy or poison restore points. That mindset sustains practices like shared admin accounts, flat network access to repositories, and minimal logging, which would never be accepted on production databases or identity systems.

Backup platforms often lack first-class security telemetry. Many backup products still expose logs in formats geared toward troubleshooting jobs, not detecting misuse. You can see that a policy changed or a repository was deleted, but not get high-fidelity, security-grade alerts or correlation with SIEM and SOAR workflows by default.

When Compliance Forces the Conversation

The shift from IT ops concern to board-level conversation usually happens when a regulator or auditor connects backup weaknesses directly to regulatory exposure and financial risk.

The trigger is less “Do you have backups?” and more “Your backup practices materially increased the impact of this incident, and that’s now a reportable compliance and business-continuity risk.”

Under GDPR, backups are treated as processing, so auditors look at whether you can preserve integrity, honor erasure and retention requirements, and recover quickly from incidents without further compromising personal data. The board-level inflection point is typically a breach investigation where the DPA or internal GDPR audit finds that incomplete, untested, or insecure backups prolonged downtime or led to additional unlawful disclosure.

For HIPAA-regulated entities, the big trigger is an OCR investigation after a ransomware or ePHI breach that highlights poor backup and contingency practices as contributing factors to the event’s severity.

Once an external report states in writing that “if your backups had been properly secured and tested, this incident would have been smaller, cheaper, and less reportable,” backup moves from a back-office hygiene issue to a direct driver of regulatory fines.

The Practical Path to Hardening Controls

When an organization reaches that board-level moment and decides backup infrastructure needs the same rigor as production systems, it is suddenly looking at implementing RBAC, MFA, network segmentation, and audit trails on systems that have run with shared admin accounts and flat network access for years.

The practical path is to treat this as a series of tightly scoped hardening sprints around the backup control plane.

Start by inventorying backup components (servers, proxies, storage, consoles, service accounts) and freeze high-risk changes until basic protections are in place. In parallel, take at least one additional, well-documented offline or logically isolated backup of crown-jewel systems so you have a known-good safety net before you touch the plumbing.
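As a rough illustration, here is a minimal sketch of what that inventory could look like in Python; the component names and the change-freeze flag are hypothetical placeholders, not output from any particular backup product.

```python
from dataclasses import dataclass

@dataclass
class BackupComponent:
    name: str            # hypothetical component name, not a real host
    kind: str            # "server", "proxy", "storage", "console", "service_account"
    owner: str           # team accountable for hardening it
    change_frozen: bool  # True while baseline protections are rolled out

# Illustrative inventory; populate this from your own CMDB or discovery.
inventory = [
    BackupComponent("bkp-console-01", "console", "platform-team", True),
    BackupComponent("bkp-proxy-01", "proxy", "platform-team", True),
    BackupComponent("svc-backup-runner", "service_account", "platform-team", True),
    BackupComponent("repo-objectstore-east", "storage", "platform-team", True),
]

# Quick check: nothing high-risk should be unfrozen before MFA and RBAC land.
unfrozen = [c.name for c in inventory if not c.change_frozen]
print("Components still open to high-risk change:", unfrozen or "none")
```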

Enforce MFA on the backup admin console and any remote access path to it before redesigning the full role model.

This is often a configuration change or SSO integration rather than a rebuild. Keep the same small set of operators but harden authentication so an attacker can’t replay existing shared credentials.

Then introduce minimal RBAC tiering: “Backup Admin” with full access, “Operator” for running jobs and restores only, and “Auditor” for read-only access. You’re not designing a perfect enterprise RBAC taxonomy on day one. You’re just removing shared all-powerful accounts and separating “configure the platform” from “use the platform.”
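To make the tiering concrete, here is a minimal Python sketch of that three-role model; the permission names are illustrative assumptions rather than any specific backup platform’s role primitives.

```python
# Minimal three-tier role model; permission names are illustrative,
# not tied to any particular backup product's API.
ROLES = {
    "backup_admin": {"configure_platform", "edit_policies", "run_jobs",
                     "restore", "delete_repositories", "view_logs"},
    "operator":     {"run_jobs", "restore", "view_logs"},
    "auditor":      {"view_logs"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in ROLES.get(role, set())

assert is_allowed("operator", "restore")
assert not is_allowed("operator", "delete_repositories")
assert not is_allowed("auditor", "run_jobs")
```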

Put a basic network guardrail around backup services using firewalls or ACLs to restrict console access to a management subnet and limit which hosts can talk to the backup ports, without touching existing backup agent-to-server flows yet. The pattern is: isolate the control plane first, keep data-plane connectivity as-is, and validate that scheduled jobs and test restores still work.
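One lightweight way to validate the control-plane guardrail is to probe the console port from a host outside the management subnet and confirm it no longer answers. The sketch below assumes a hypothetical console hostname and port; after the ACL change, the expected result from a non-management host is a refused or timed-out connection.

```python
import socket

# Placeholder values; substitute your actual console host and port.
CONSOLE_HOST = "backup-console.internal.example"
CONSOLE_PORT = 9443

def console_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True means the console port answered."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from a host OUTSIDE the management subnet.
if console_reachable(CONSOLE_HOST, CONSOLE_PORT):
    print("WARNING: console reachable from a non-management host")
else:
    print("OK: console blocked from this network segment")
```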

Enable the richest logging your backup platform supports: admin logins, configuration changes, policy edits, deletion of jobs or repositories, and restore operations. Forward these logs to your SIEM with a small set of high-value alerts so security operations can see abuse quickly.
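As a sketch of what a small set of high-value alerts can look like before full SIEM correlation rules exist, the snippet below flags a handful of dangerous backup actions; the event schema and action names are assumptions you would map to whatever your platform actually emits.

```python
import json

# Hypothetical event schema; map these fields to your backup platform's
# real audit log format before forwarding to the SIEM.
HIGH_VALUE_ACTIONS = {
    "repository_deleted",
    "retention_policy_changed",
    "backup_job_disabled",
    "immutability_setting_changed",
    "admin_login_failed",
}

def triage(raw_event: str):
    """Return an alert dict for high-value actions, else None."""
    event = json.loads(raw_event)
    if event.get("action") in HIGH_VALUE_ACTIONS:
        return {
            "severity": "high",
            "action": event["action"],
            "actor": event.get("actor", "unknown"),
            "timestamp": event.get("timestamp"),
        }
    return None

sample = '{"action": "retention_policy_changed", "actor": "svc-backup", "timestamp": "2026-01-20T03:14:00Z"}'
print(triage(sample))
```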

The Recovery Time Difference

For organizations that actually land strong identity, immutable copies, centralized logging, and regular testing, ransomware recovery time typically drops from “multiple days and manual triage” to “hours, sometimes minutes, for the systems that matter most.”

The difference isn’t just speed of restore. It’s the ability to move directly to known-clean, pre-validated recovery points instead of spending days figuring out which backups can be trusted.

With immutable backups plus scanning and test restores, teams enter an incident already knowing which restore points are both available and trustworthy, so they can execute a planned runbook rather than improvising under pressure. That turns recovery into an orchestrated process measured in hours, instead of an investigation-plus-recovery loop that often stretches into several days for fragmented environments.

Industry data shows that with immutable and well-tested backups, critical systems can often be restored in hours instead of days.

Without consistent immutability and testing, organizations frequently discover mid-incident that early restore points are already compromised, forcing multiple restore cycles and further extending downtime.

Making Validation Continuous

The bottleneck is no longer that the technology to validate backups continuously doesn’t exist. It’s that most organizations still treat validation as a heavyweight “DR drill” instead of something that’s built into every backup job.

Many teams equate “testing backups” with full DR exercises that require change windows, business sign-off, and lots of manual work, so they only do it occasionally. That mindset ignores lighter-weight, per-job integrity checks (checksums, mount-and-ping tests, basic app health scripts) that modern platforms support but are often not turned on or tuned.

Configure platforms to run checksum or hash verification, block-level validation, and basic mount checks automatically after each backup completes.

This turns “did it copy?” into “is the copy structurally sound?” by default, not as a separate process.
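A minimal post-job verification step might look like the following sketch, which assumes each backup job writes its artifacts alongside a manifest.json of expected SHA-256 hashes; the directory layout and manifest format are illustrative, not any specific product’s convention.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large backup artifacts don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(backup_dir: Path) -> list:
    """Compare every artifact against the manifest; return mismatched names."""
    manifest = json.loads((backup_dir / "manifest.json").read_text())
    return [
        name for name, expected in manifest.items()
        if sha256_of(backup_dir / name) != expected
    ]

# Hypothetical layout: /backups/2026-01-22/ containing artifacts + manifest.json
bad = verify_backup(Path("/backups/2026-01-22"))
print("Corrupted or tampered artifacts:", bad or "none")
```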

For Tier-1 systems, schedule automated restores into isolated environments (separate VLANs, non-production clusters, masked data) daily or weekly, with scripted smoke tests to confirm applications start and basic transactions succeed. Results and metrics should feed into a central dashboard so security and ops both see which restore points are “proven good.”
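For the scripted smoke tests, something as simple as the following sketch can be enough to confirm a restored application answers at all; the host, port, and health endpoint are placeholders for whatever your Tier-1 application actually exposes in the isolated environment.

```python
import socket
import urllib.request

# Placeholders for a restored app brought up in an isolated test VLAN.
RESTORED_HOST = "10.99.0.15"
APP_PORT = 8080
HEALTH_PATH = "/health"

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Confirm the restored service is listening at all."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def health_ok(host: str, port: int, path: str) -> bool:
    """Basic transaction: the app answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}{path}", timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

checks = {
    "service_port_open": port_open(RESTORED_HOST, APP_PORT),
    "health_endpoint_200": health_ok(RESTORED_HOST, APP_PORT, HEALTH_PATH),
}
print(checks)
# Feed these results into the central validation dashboard described above.
```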

Use a unified view to aggregate validation results from all environments and tag restore points with their verification status and timestamp. When ransomware hits, recovery teams can filter to “last verified-clean restore points” instead of guessing.
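The filtering itself can be trivial once validation results are tagged consistently. The sketch below assumes a simple record shape aggregated from all environments and returns the newest verified-clean restore point per system.

```python
from datetime import datetime

# Illustrative records aggregated from all environments.
restore_points = [
    {"system": "erp-db", "taken": "2026-01-20T01:00:00Z", "verified_clean": True},
    {"system": "erp-db", "taken": "2026-01-22T01:00:00Z", "verified_clean": False},
    {"system": "crm-app", "taken": "2026-01-21T01:00:00Z", "verified_clean": True},
]

def latest_verified_clean(points):
    """Return the newest verified-clean restore point for each system."""
    best = {}
    for p in points:
        if not p["verified_clean"]:
            continue
        taken = datetime.fromisoformat(p["taken"].replace("Z", "+00:00"))
        if p["system"] not in best or taken > best[p["system"]][0]:
            best[p["system"]] = (taken, p)
    return {system: p for system, (_, p) in best.items()}

for system, point in latest_verified_clean(restore_points).items():
    print(system, "->", point["taken"])
```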

What Comes Next

As backup systems harden, attackers will increasingly behave as if backups will survive.

The focus shifts from “can I destroy your restore options?” to “even if you recover, can I still hurt you enough that you pay?” This means more emphasis on identity takeover, data extortion, and control-plane abuse around your backup stack.

Double extortion (stealing data before any encryption) has already become the default in most modern campaigns, and that trend only accelerates as immutable, well-tested backups become more common. Even if recovery is fast, attackers will lean harder on the threat of public leaks, regulatory complaints, and targeted harassment of executives and customers to maintain leverage.

As backup platforms adopt strong RBAC and immutability, initial access and privilege escalation will continue to pivot through identity: cloud IAM, SSO, privileged access systems, and federation misconfigurations. Compromising those layers lets attackers indirectly manipulate or bypass protections, even if they cannot directly delete immutable copies.

The future-state assumption has to be: “Backup will work; the attacker knows that and will plan around it.”

That pushes strategy beyond only making backups immutable and testable, toward making the entire data lifecycle (identity, access, movement, and classification) resilient against extortion-driven threats.

Start With Immutability and Identity

The single highest-leverage move is to make immutable backups with strong, identity-bound access the new default for your most critical data.

Immutability takes away the attacker’s easiest win (deleting or corrupting every recovery point), and tying access to modern identity with MFA and SSO gives you a clean foundation to layer on RBAC, segmentation, and automated validation over time.

Industry guidance increasingly treats immutable, tamper-proof backups as the non-negotiable core of ransomware resilience, because they guarantee at least one recovery path survives even if production and admin credentials are compromised. The ideal, of course, is still to stop ransomware in the live environment before you have a bigger problem on your hands.

When you combine ransomware protection and immutability with enforced MFA for backup administration, you sharply reduce both the likelihood and the impact of a successful attack on your live layer as well as your backup layer.

Start with a narrow scope: crown-jewel workloads only, using object storage or WORM/immutable-capable targets with 30- to 90-day retention and enforced write-once policies. Front the backup console with SSO and MFA for all admin actions, then progressively expand immutability and identity controls to additional systems as you harden processes and monitoring.
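If the immutable target happens to be S3-compatible object storage, a minimal sketch of the write-once setup could look like the following; the bucket name is a placeholder, the 30-day COMPLIANCE default matches the low end of the retention range above, and other WORM-capable targets expose equivalent settings.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "crown-jewel-backups"  # placeholder bucket name

# Object Lock must be enabled at bucket creation time on S3.
# Outside us-east-1, also pass CreateBucketConfiguration for the region.
s3.create_bucket(
    Bucket=BUCKET,
    ObjectLockEnabledForBucket=True,
)

# Default write-once retention for every object landed in the bucket:
# 30 days in COMPLIANCE mode, which even account admins cannot shorten.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```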

Treat backup as a Tier-0 system with zero-trust assumptions.

Backup control planes and repositories should sit in the same class as identity providers and key management: separate identity boundaries, enforced MFA, hardened management paths, and full integration into security monitoring and response.

The mental shift that still has to happen is viewing backup not just as storage with some security features, but as a continuously verified, identity-aware, and monitored service that forms part of the perimeter wherever your critical data lives.

References and Further Reading

  1. Infosecurity Magazine – 93% of Ransomware Incidents Target Backup Repositories
  2. HIPAA Journal – OCR Guidance on Ransomware Prevention and Response
  3. Trilio – Immutable, Tamper-Proof Backups for Ransomware Resilience

Sergiy Balynsky is the VP of Engineering at Spin.AI, responsible for guiding the company's technological vision and overseeing engineering teams.

He played a key role in launching a modern, scalable platform that has become the market leader, serving millions of users.

Before joining Spin.AI, Sergiy contributed to AI/ML projects, fintech startups, and banking domains, where he successfully managed teams of over 100 engineers and analysts. With 15 years of experience in building world-class engineering teams and developing innovative cloud products, Sergiy holds a Master's degree in Computer Science.

His primary focus lies in team management, cybersecurity, AI/ML, and the development and scaling of innovative cloud products.
