
Healthcare’s SaaS Ransomware Problem Isn’t About EHR or Backup, It’s About Recovery

Feb 12, 2026 | Reading time 13 minutes
Author:
Sergiy Balynsky, VP of Engineering, Spin.AI

We keep hearing the same story from healthcare CISOs. They’ve invested in endpoint detection, firewalls, and traditional backup systems. They feel reasonably protected.

Then we ask about their SaaS environment.

The conversation shifts. They’re well-defended at the endpoint and network layers, but they’re largely blind to how ransomware enters, spreads, and destroys data inside mission-critical SaaS platforms like Microsoft 365, Google Workspace, and Salesforce.

This gap isn’t theoretical. In 2025, 445 ransomware attacks were recorded against hospitals, clinics, and other direct care providers, with another 130 targeting businesses elsewhere in the healthcare sector, including pharmaceutical manufacturers, medical billing providers, and healthcare tech companies. The Change Healthcare attack alone compromised the protected health information of 100 million individuals, disrupted care delivery nationwide, and generated $2.4 billion in response costs.

In 2025, just 36% of healthcare providers paid the ransom, down from 61% in 2022, placing the sector among the four least likely to recover data this way. At the same time, backup use also fell (51%, down from 72%), pointing to possible weaknesses in backup resilience.

The Blind Spots Traditional Protection Misses

Most healthcare organizations have invested in solid traditional controls. EDR and XDR on endpoints and servers. Email security, perimeter firewalls, VPN or zero-trust network access. Traditional backup solutions aimed at on-premises systems or infrastructure workloads.

All of that matters.

But it assumes ransomware only lives on endpoints and in the data center. It doesn’t account for ransomware as a native SaaS problem.

We see three major blind spots that organizations consistently miss:

No behavior-based detection for SaaS encryption patterns. When mass file modifications happen in OneDrive, Google Drive, or SharePoint, most security tools don’t flag it. Malicious OAuth apps can act as “non-human ransomware users,” and traditional endpoint tools see nothing unusual.

Overly permissive SaaS sharing and misconfigurations. An infected user or third-party app can propagate encryption or data exfiltration across tenants, sites, and external collaborators. The permissions structure that makes collaboration easy becomes the attack vector.

Limited visibility into third-party app access. Organizations have little insight into which third-party apps and APIs have access to protected health information in their SaaS environment. The Change Healthcare breach demonstrated how third-party and API abuse can create massive exposure.

You can detect a compromised laptop. But you can’t see the OAuth app quietly encrypting or exfiltrating files through SaaS APIs using legitimate tokens.
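To make the first of those blind spots concrete, here is a minimal sketch of behavior-based detection for SaaS mass-modification patterns. It assumes file-activity audit events have already been exported as JSON (for example, from the Microsoft 365 unified audit log or the Google Workspace Drive audit log); the field names, action types, and threshold below are illustrative assumptions, not a reference to any specific product schema.

```python
import json
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative threshold: more than 200 file modifications by one actor
# (user or OAuth app) inside a 10-minute window is treated as suspicious.
MODIFICATION_THRESHOLD = 200
WINDOW = timedelta(minutes=10)

def flag_mass_modifications(events):
    """events: list of dicts with 'actor', 'action', 'timestamp' (ISO 8601).
    Returns actors whose modification rate exceeds the threshold."""
    per_actor = defaultdict(list)
    for e in events:
        if e.get("action") in ("FileModified", "FileRenamed", "FileUploaded"):
            # Normalize trailing 'Z' so fromisoformat accepts the timestamp.
            ts = e["timestamp"].replace("Z", "+00:00")
            per_actor[e["actor"]].append(datetime.fromisoformat(ts))

    suspicious = {}
    for actor, times in per_actor.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            # Slide the window start forward until all events fit in WINDOW.
            while t - times[start] > WINDOW:
                start += 1
            count = end - start + 1
            if count > MODIFICATION_THRESHOLD:
                suspicious[actor] = max(suspicious.get(actor, 0), count)
    return suspicious

if __name__ == "__main__":
    with open("saas_file_audit_events.json") as f:  # hypothetical export
        events = json.load(f)
    for actor, count in flag_mass_modifications(events).items():
        print(f"ALERT: {actor} modified {count} files within {WINDOW}")
```

A SaaS-native detection layer does this continuously and attributes the writes to the identity performing them, human or not, which is precisely the view endpoint tooling lacks.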

How OAuth Apps Become Ransomware Operators

Let me walk you through what this actually looks like in a healthcare setting.

A clinician or revenue-cycle staff member sees a convincingly branded app promising productivity help, AI assistance, or document handling for Microsoft 365 or Google Workspace. They click “Sign in with Microsoft” or “Sign in with Google” and grant it permissions to read and write files, mail, and contact lists.

Because OAuth uses legitimate single sign-on and tokens, there’s no password-theft event for your security operations center to flag. The app now has a persistent, non-human identity with API access to protected health information in OneDrive, SharePoint, Google Drive, or connected SaaS platforms.

From the inside, your admin audit logs just show “User X granted app Y the following permissions.” In many organizations, this is completely normal and unreviewed shadow IT behavior.
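To see how routine those grants look, here is a minimal sketch of an OAuth consent review against Microsoft Graph. It assumes a token with directory read permissions is available in an environment variable; the risky-scope list is an illustrative assumption, and a fuller review would also resolve each clientId to its service principal’s display name and publisher.

```python
import os
import requests

# Scopes that give an app broad, write-level access to PHI-bearing data.
# This list is illustrative, not exhaustive.
RISKY_SCOPES = {"Files.ReadWrite.All", "Mail.ReadWrite", "Sites.ReadWrite.All",
                "Directory.ReadWrite.All", "Contacts.ReadWrite"}

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}  # assumed token

url = f"{GRAPH}/oauth2PermissionGrants"
while url:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    body = resp.json()
    for grant in body.get("value", []):
        granted = set(grant.get("scope", "").split())
        risky = granted & RISKY_SCOPES
        if risky:
            # clientId is the object id of the app's service principal.
            print(f"Service principal {grant['clientId']} holds {sorted(risky)} "
                  f"(consentType={grant.get('consentType')})")
    url = body.get("@odata.nextLink")  # follow pagination, if any
```

Even a crude pass like this tends to surface apps nobody on the security team recognizes, which is the point: the consent grant itself is the event worth reviewing.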

Once authorized, the malicious app uses SaaS APIs to quietly map and stage data.

It enumerates drives, shared folders, and collaboration spaces used for imaging reports, care coordination, and billing. It may exfiltrate a subset of high-value protected health information to external storage while copying or modifying files in place.

Everything happens through sanctioned APIs with valid tokens. It looks like a hyperactive but legitimate user or integration, not malware on a device or a rogue process on a server.

When the attacker is ready to monetize, the OAuth app shifts into full ransomware mode entirely in the cloud. It begins bulk-encrypting or corrupting documents by reading them via the API, overwriting contents with encrypted data, and saving them back. It can revoke existing shares, create ransom-note files, or modify email inbox contents to inject extortion instructions.

All without touching a single endpoint binary.
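One signal defenders can use against this phase is content entropy: documents and text have relatively low byte entropy, while encrypted output is close to random. Below is a minimal sketch assuming you can pull the current and a prior version of a file as raw bytes from the SaaS platform or a backup; the threshold values are illustrative.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; values near 8.0 mean effectively random content."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(previous: bytes, current: bytes, jump: float = 2.0) -> bool:
    """Flag a file whose new version's entropy jumped toward randomness."""
    before, after = shannon_entropy(previous), shannon_entropy(current)
    return after > 7.5 and (after - before) > jump

if __name__ == "__main__":
    import os
    # Toy data: a readable pre-op document versus pseudo-random bytes
    # standing in for an encrypted overwrite.
    old_version = b"Pre-op checklist: fasting confirmed, consent signed..." * 50
    new_version = os.urandom(len(old_version))
    print("Encryption suspected:", looks_encrypted(old_version, new_version))
```

Version-aware backups make this kind of comparison cheap, because a clean prior version of the file is already on hand.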

The Timeline Organizations Don’t Expect

For a healthcare organization with good traditional controls but no SaaS-native visibility, the timeline from first complaint to confirmed ransomware is rarely minutes. Often many hours. Sometimes a full business day or more.

Here’s the typical pattern we see in incident reviews:

First 1-3 hours: Front-line staff report “broken documents,” sync issues, or odd errors in Office 365, Google Workspace, or line-of-business SaaS. Tickets get routed as app performance problems, not security incidents.

Next 3-12 hours: Service owners and IT notice a wider pattern across multiple departments, shared drives, or collaboration spaces. They start pulling logs, escalating to infrastructure and security teams, calling vendors. The working theory is still “outage or bug,” not a deliberate encryption campaign.

In parallel, the attacker has already completed encryption in many cases. Median ransomware dwell time from initial access to deployment has dropped to under 24 hours, with some incidents measured in just a few hours.

The inflection point is usually not technical but human. Someone connects three dots at once: data is consistently unreadable, the pattern is spreading, and a clear ransom or extortion signal appears.

In many healthcare incidents, that “this is ransomware” moment lands somewhere in the 6-18 hour window from the first user complaint. For complex ecosystems with multiple hospitals, affiliates, and third-party SaaS, it can slip into “next day” territory before leadership formally treats it as a ransomware event.

The attacker completed their work much earlier.

The Backup Misconception That Fails Under Pressure

The single biggest misconception we hear: “Our SaaS provider’s native tools and our existing backups are enough. We’ll just roll back if something happens.”

For ransomware in healthcare SaaS, that confidence almost never matches what organizations can actually restore, or how fast.

Most leaders assume the shared responsibility model includes meaningful ransomware recovery, especially for multi-user SaaS events. It doesn’t. Native retention, version history, and recycle bins weren’t designed to unwind a large-scale, multi-account encryption or malicious-app event, and they may be limited by short retention windows, version caps, or gaps in what’s even stored.

Once encrypted data and bad versions synchronize across accounts, sites, or shared drives, there’s often no clean, point-in-time snapshot in the SaaS platform that can be quickly and reliably restored at the granularity clinicians and revenue teams need.

The second misconception is that having any third-party backup means rapid, predictable recovery.

Many backup products were built for basic data loss, not high-speed, multi-terabyte SaaS restores under API throttling. Organizations often discover restore failures, missing data, or multi-day recovery time objectives only during their first real incident.

Healthcare leaders tend to overestimate both coverage (which SaaS objects, permissions, and metadata are actually protected) and performance (how long it will take to recover thousands of users or a major shared drive). Test restores, if they happen at all, are usually tiny and non-representative.
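A rough back-of-the-envelope calculation shows why the performance side gets underestimated. The figures below are purely illustrative assumptions, not measurements of any particular platform, but the shape of the result matches what teams discover in their first real incident.

```python
# Illustrative restore-time estimate under API throttling.
# Every figure here is an assumption made for the sake of the arithmetic.

data_to_restore_tb = 5                    # affected shared drives and mailboxes
average_object_mb = 2                     # average file or message size
objects = data_to_restore_tb * 1024 * 1024 / average_object_mb   # ~2.6 million

effective_requests_per_sec = 20           # sustained rate after throttling and retries
seconds = objects / effective_requests_per_sec

print(f"Objects to restore: {objects:,.0f}")
print(f"Estimated restore time: {seconds / 3600:,.1f} hours "
      f"(~{seconds / 86400:,.1f} days)")
# Roughly 2.6M objects at 20 requests/sec is about 36 hours of continuous,
# error-free restore work, before re-permissioning, validation, or reruns
# of any failed jobs.
```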

For the first time in three years, healthcare providers identified exploited vulnerabilities as the most common technical root cause of attacks, cited in 33% of incidents. Healthcare ransomware economics also shifted sharply in 2025: average ransom demands plummeted 91% to $343K (from $4M in 2024), and payments dropped from $1.47M to just $150K. However, the rate of ransom payment has declined sharply even as backup use has fallen, which collectively points to stronger resistance to demands but also to possible weaknesses, or a lack of confidence, in backup resilience.

What the War Room Looks Like

We’ve seen the moment when a healthcare organization realizes their backup strategy isn’t going to work. It’s a very fast transition from “this is stressful but manageable” to “we may not actually be able to put the business back together the way we thought.”

It usually happens in the war room, during the first serious attempt to restore at scale.

Teams kick off what they believe will be a straightforward restore from their SaaS backup or native tools. They discover jobs failing, critical data missing, or restores completing so slowly that recovery time objectives stretch from “hours” to “days.”

As multiple restore attempts run in parallel, they hit API throttling, dependency issues, or corrupted snapshots. It becomes clear that the backups were never validated end-to-end for a multi-system, multi-terabyte event, just for small, isolated test cases.

That’s the point where technical optimism in the room starts to drop. Executives begin asking, “So what’s our plan if this doesn’t work?”

In healthcare, the realization is amplified because downtime is not abstract. It’s visible in patient care and revenue cycle immediately.

While restores crawl or fail, leaders are staring at growing paper workflows, delayed procedures, and mounting backlogs of labs, imaging, and claims that will need to be reconciled when systems finally come back. Healthcare organizations lost an average of more than 17 days to downtime across all years examined, with an estimated $21.9 billion in downtime losses over the past six years and an average of $1.9 million daily in recovery costs.

When someone connects the dots (backup jobs not completing, incomplete coverage of critical SaaS data, no realistic way to hit the originally assumed recovery time objective), the conversation shifts from “how quickly can we restore?” to “what is our actual minimum viable recovery, and what are we willing to accept in terms of data loss and downtime?”

The Specific Gaps Organizations Discover Too Late

Organizations discover their “strategy” is really a set of assumptions. What breaks are very specific things:

Coverage gaps: Entire classes of data outside backup scope. Shared drives, user mailboxes, Teams and Chat, SaaS EHR adjuncts, imaging shares, or third-party SaaS that everyone assumed “the vendor backs up.” Configuration, permissions, and metadata, which control who can see protected health information and how apps connect, aren’t backed up or restorable in a meaningful way.

Immutability problems: Ransomware-encrypted or corrupted data has already synced into backups or version history. There’s no immutable, air-gapped copy to roll back to with confidence. Retention settings, overwrite behavior, or misconfigurations mean older, clean restore points are missing or incomplete.

Granularity limitations: The tools support only coarse, all-or-nothing restores (whole tenant, whole site, whole mailbox) that would overwrite ongoing work or are simply too big to run under time pressure and API limits. There’s limited ability to restore specific users, folders, channels, or objects without collateral impact to unaffected data.

Performance bottlenecks: Restore jobs run into SaaS API throttling, bandwidth limits, or product inefficiencies, turning theoretical hours-long recovery time objectives into multi-day or multi-week efforts for large SaaS tenants. Organizations often find they never tested restores at real scale (thousands of users, terabytes of data), so the first large run exposes timeouts, failures, and operational overhead nobody planned for.

Process and ownership confusion: No one owns an end-to-end SaaS recovery runbook. Who declares which systems “priority 1”? Who picks restore points? Who validates protected health information integrity? How do you coordinate with vendors and clinical leadership? Access, credentials, and approvals for the backup platform itself may be fragmented or tied to people who are unavailable.
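Each of these gaps can be surfaced before an incident with a deliberately simple readiness check. The sketch below encodes the five questions as data; the workload inventory and field names are hypothetical, and the value is in filling them in honestly, not in the script itself.

```python
from dataclasses import dataclass

@dataclass
class SaaSWorkload:
    name: str
    in_backup_scope: bool        # coverage
    immutable_copy: bool         # immutability
    granular_restore: bool       # granularity (user/folder/object level)
    tested_restore_hours: float  # performance: last measured restore at real scale
    rto_hours: float             # business-agreed recovery time objective
    runbook_owner: str           # process and ownership ("" if nobody owns it)

# Hypothetical inventory: replace with your own critical SaaS data classes.
inventory = [
    SaaSWorkload("Surgery shared drives", True, False, True, 30.0, 8.0, "IT Ops"),
    SaaSWorkload("Radiology sharing portal", False, False, False, 0.0, 4.0, ""),
    SaaSWorkload("Revenue-cycle mailboxes", True, True, True, 6.0, 12.0, "SecOps"),
]

for w in inventory:
    findings = []
    if not w.in_backup_scope:
        findings.append("not in backup scope")
    if not w.immutable_copy:
        findings.append("no immutable copy")
    if not w.granular_restore:
        findings.append("all-or-nothing restores only")
    if w.tested_restore_hours == 0 or w.tested_restore_hours > w.rto_hours:
        findings.append(f"restore untested or misses RTO ({w.rto_hours}h)")
    if not w.runbook_owner:
        findings.append("no recovery runbook owner")
    print(f"{w.name}: {'; '.join(findings) if findings else 'ready'}")
```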

The Tuesday Morning Scenario That Changes Conversations

We use a specific scenario to help clinical leaders see themselves in this story: a Tuesday morning in surgery at a mid-size hospital.

Imagine a typical weekday. First cases are rolling into the operating room, pre-op is full, and clinicians are relying on a mix of systems. The EHR itself is up, but a malicious OAuth app has begun encrypting and corrupting files in Microsoft 365 or Google Drive, along with a radiology sharing portal that teams use for surgical planning, consents, and imaging review.

Within an hour, shared drives used by surgery, anesthesia, and perioperative nursing start returning errors or opening with unreadable content. The imaging portal intermittently fails to load pre-op CT and MRI scans.

From IT’s perspective, it still looks like “weird cloud issues” or a vendor problem. From the operating room’s perspective, it’s “our stuff is broken, and cases are stacking up.”

Walking clinical leaders through the next 2-4 hours is where they suddenly recognize themselves.

Pre-op and operating room staff begin calling. Consent packets, pre-op checklists, and preference cards stored in shared folders won’t open. Surgeons can’t access planning documents or prior imaging overlays that they keep in Teams, Drive, or SharePoint, even though the EHR shows basic notes.

Radiology and surgery are on the phone trying to reconcile which images are in the PACS and EHR versus in SaaS-based sharing tools, delaying start times. Leadership has to decide whether to proceed with limited supporting documentation, re-create work on paper, or cancel and reschedule.

When we lay this out step by step, chief medical officers and service-line chiefs see concrete impacts: longer anesthesia times, canceled or delayed surgeries, patient transfers or diversion, and downstream revenue and quality metrics at risk.

All without the EHR actually “going down.”

That’s usually when clinical leaders start asking different questions: “What’s our recovery time objective for those shares and portals? Who decides what gets restored first? How do we practice this so the operating room, emergency department, and ICU know exactly what to do if this happens on a Tuesday morning?”

SaaS ransomware resilience stops sounding like an abstract IT control and starts looking like a patient safety and operations program they need to help lead.

What Right Looks Like

For a mid-market healthcare organization that gets this figured out before an incident, “right” looks less like buying a product and more like adopting a different operating model.

They treat SaaS as mission-critical infrastructure. They instrument it like they do the EHR. They prove—before an incident—that they can detect, contain, and recover within business-driven service-level agreements.

The mindset shifts from “we have backups and multi-factor authentication, so we’re fine” to “we assume failure in SaaS and design for rapid, repeatable recovery.”

They explicitly model SaaS ransomware as a top risk in their enterprise risk register, tied to clinical operations, revenue cycle, and regulatory exposure. They adopt cloud and SaaS security frameworks (NIST Cybersecurity Framework, Cloud Security Alliance guidance, 3-2-1 or 3-2-1-1-0 backup principles) and map controls to specific SaaS apps.

On the technology side, they deploy a SaaS security posture and ransomware detection layer that monitors OAuth apps, browser extensions, sharing, misconfigurations, and user behavior across their core SaaS platforms. They implement independent, immutable, SaaS-aware backup with granular restore capabilities—user, mailbox, drive, site, channel, object level—sized and configured for realistic recovery point and recovery time objectives under API constraints.

They wire in automation. When SaaS ransomware behavior is detected, access is cut off, users and apps are blocked, and targeted restores start without waiting for a human to connect the dots.
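As an illustration of what that automation can look like, here is a minimal containment sketch against Microsoft Graph: disable the offending app’s service principal, revoke the affected user’s sessions, and hand off to a targeted restore. The ids, the token handling, and the start_targeted_restore hand-off are hypothetical placeholders; in production these actions would be driven by your detection platform rather than a standalone script.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}",  # assumed token
           "Content-Type": "application/json"}

def disable_service_principal(sp_object_id: str) -> None:
    """Block the malicious OAuth app by disabling its service principal."""
    resp = requests.patch(f"{GRAPH}/servicePrincipals/{sp_object_id}",
                          headers=HEADERS, json={"accountEnabled": False}, timeout=30)
    resp.raise_for_status()

def revoke_user_sessions(user_id: str) -> None:
    """Invalidate the compromised user's refresh tokens and active sessions."""
    resp = requests.post(f"{GRAPH}/users/{user_id}/revokeSignInSessions",
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()

def start_targeted_restore(scope: str, point_in_time: str) -> None:
    """Hypothetical hand-off to whatever backup platform is in place."""
    print(f"Restore requested for {scope} as of {point_in_time}")

if __name__ == "__main__":
    # Values would come from the detection alert; these are placeholders.
    disable_service_principal("00000000-0000-0000-0000-000000000000")
    revoke_user_sessions("clinician@example.org")
    start_targeted_restore("Surgery shared drive", "2026-02-10T06:00:00Z")
```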

The organizations that get it right are relentless about rehearsal.

They run regular SaaS-focused ransomware exercises. They simulate an OAuth attack or mass encryption in a test segment. They measure time to detect, time to contain, and time to restore specific departments—oncology, billing—back to a known-good state.

They treat SaaS recovery like a clinical drill. Clearly defined playbooks, roles, and runbooks. Success metrics are reported to leadership. Continuous tuning of scopes, policies, and automation based on what the exercises reveal.

By the time a real event happens, they’ve already failed safely in testing and iterated. The response feels executed, not improvised.

What Will Force the Change

Looking ahead over the next 12-18 months, we believe mid-market healthcare will be pushed from “we should probably do something” to “we must” by a combination of factors.

More public, SaaS-centric failures that look like their own environment. 2025 data shows ransomware actors increasingly targeting healthcare vendors, billing services, and cloud and SaaS providers, creating multi-hospital outages even when the provider’s own EHR is technically up. Each high-profile disruption that traces back to a vendor, shared SaaS platform, or shadow IT tool makes it harder for leadership to ignore that their real blast radius is the entire SaaS ecosystem.

Tougher compliance expectations. HHS and HIPAA guidance is tightening around cybersecurity expectations, with updated Security Rule proposals and enforcement rhetoric that explicitly link data protection, third-party risk, and business continuity to patient safety. Ransomware is being framed more explicitly as a public health issue, with evidence that attacks disrupt care and harm outcomes.

Rising breach costs and insurance pressure. Healthcare continues to see the highest breach costs across industries, averaging $7.42 million per breach in 2025, while OCR penalty enforcement increased by 340% in 2024-2025 and Tier 3 and 4 violations now account for 67% of all financial penalties. In healthcare, downtime is more than an IT hiccup; it’s a $7,500-per-minute crisis that puts both patient care and hospital finances at serious risk. Cyber insurers are tightening questionnaires and pricing around SaaS backup, incident response maturity, and third-party risk. “Yes” answers without evidence are increasingly not enough to secure coverage or favorable terms.

Maturity of SaaS security offerings. Platforms that combine SaaS Security Posture Management, SaaS ransomware detection, and integrated backup and recovery reduce the integration and staffing burden that has historically made mid-market IT leaders hesitant. Broader market trends, including security automation, non-human identity management, and SaaS-focused monitoring, are converging into clearer reference architectures.

The next 12-18 months are likely to see a tipping point driven less by new fear and more by accumulated proof: repeated SaaS-related healthcare outages, regulators and insurers asking sharper questions, and a maturing tooling ecosystem that makes it both feasible and expected to treat SaaS ransomware resilience as part of core clinical operations.

The Single Most Important First Step

If you’re a healthcare CISO or IT director reading this and you realize you have these gaps, here’s what you should do tomorrow morning:

Schedule and commit to a SaaS-specific ransomware readiness test with real stakeholders. Don’t let it be theoretical.

Pick one critical SaaS platform, such as Microsoft 365 or Google Workspace. Run an evidence-based readiness assessment or simulation against it to see, in black and white, what your real detection, backup, and restore posture looks like today.

Turn those findings immediately into a short, time-boxed action plan—30 to 60 days—that you review with clinical and executive leadership. Document gaps in SaaS backup coverage, missing immutability, slow or untested restores, lack of OAuth and app governance, and absence of a SaaS ransomware runbook.

That single step creates the forcing function you control: real data about your SaaS resilience, shared with the people who own patient care and the budget.

Fixing the gaps stops being an abstract “someday” project and becomes a prioritized, funded roadmap.



Sergiy Balynsky is the VP of Engineering at Spin.AI, responsible for guiding the company's technological vision and overseeing engineering teams.

He played a key role in launching a modern, scalable platform that has become the market leader, serving millions of users.

Before joining Spin.AI, Sergiy contributed to AI/ML projects, fintech startups, and banking domains, where he successfully managed teams of over 100 engineers and analysts. With 15 years of experience in building world-class engineering teams and developing innovative cloud products, Sergiy holds a Master's degree in Computer Science.

His primary focus lies in team management, cybersecurity, AI/ML, and the development and scaling of innovative cloud products.
