On Friday, Ruth Reader of Politico broke the story that over 60 Epic health systems had written a letter to Mariann Yaeger, CEO of the Sequoia Project, raising concerns about trust, privacy, and governance in nationwide interoperability frameworks. The providers allege a pattern of bad actors improperly accessing patient data and argue that self-attestation and decentralized controls are no longer sufficient.
Instead, they have several proposals:
Centralized vetting, onboarding, and continuous monitoring of participants.
Much greater transparency, including a public directory of participants, disclosure of exchange purposes, usage metrics, and intermediary data-retention policies.
Faster, government-backed dispute resolution.
Stronger legal accountability for misrepresentation.
Creation of a digital health fraud task force involving federal agencies and state attorneys general to address misuse and large-scale data harvesting.
Later, Brittany Trang of STAT followed up with the nugget that Epic organized (and perhaps astroturfed) the letter:
An Epic spokesperson told STAT that the letter was a “collaboration of the Epic community and was coordinated through the Epic Health Policy Workgroup, which is an informal group of organizations using Epic that meets to develop solutions to policy challenges.”
I am surprised Epic took this particular route, given how poorly things went for them the last time they organized a letter about privacy with ~60 health systems:
Jokes aside, the recommendations here should be less surprising. They should sound familiar! In Epic v. Particle and subsequent articles, I posited similar concepts - tighter admission controls, greater transparency, some processes around monitoring and auditing, and actual enforcement. It's not rocket science - at their core, these represent some of the most common levers we use to combat fraud and abuse.
The Fraud Mitigation Stack
In “Why Identity in Healthcare Sucks”, we discussed how there are no absolute, foolproof paths to trust at scale, only tradeoffs:
Competent organizations know they need to find a satisfactory privacy path to achieving their business outcomes because (again quoting McKenzie’s article):
The marginal return of permitting fraud against you is plausibly greater than zero, and therefore, you should welcome greater than zero fraud.
All organizations must establish trust in order to do business. The level of trust needed varies based on the potential value and risk of a relationship. Thus, the level of friction is a sliding scale:
Unaddressed in this spectrum is the remote island of liars, cheaters, and scammers who shoot on sight, aka Ireland.
So the choices we make here are just risk decisions as part of overall design choices. Once framed this way, the controls below stop looking controversial or overly dramatic and start looking foundational.
Admission controls
Qualification of participants is a very normal preventative step to stop obvious harm vectors at the outset. Before granting access to a shared network, operators routinely assess who an entity is, what it is permitted to do, and whether its stated purpose is consistent with the value and risk profile of the network.
Know Your Customer (KYC) and Know Your Business (KYB) are two manifestations of this in banking. These controls do not eliminate abuse, but they dramatically reduce low-effort exploitation.
Think of this as the front desk guard checking you in to the building. He’s not stopping sophisticated abuse, but he’s reducing the number of people who never should have made it inside in the first place.
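In code terms, an admission control is just a gate that checks a few verifiable facts before anyone gets a key. Here's a minimal sketch of that idea - the entity types, field names, and checks are all illustrative assumptions, not any actual network's onboarding policy:

```python
# Illustrative onboarding gate. The allowed entity types and required
# fields are assumptions for the sketch, not real TEFCA/Carequality rules.
ALLOWED_TYPES = {"provider", "payer", "public_health"}

def admit(applicant: dict) -> tuple[bool, str]:
    """Check identity verification, entity type, and declared purpose
    before granting network access; return (decision, reason)."""
    if not applicant.get("identity_verified"):
        return False, "identity not independently verified"
    if applicant.get("entity_type") not in ALLOWED_TYPES:
        return False, "entity type outside network scope"
    if not applicant.get("declared_purposes"):
        return False, "no declared exchange purpose"
    return True, "admitted"
```

The point isn't the specific checks - it's that each one is cheap, happens before access exists, and filters out the applicants who should never have made it past the front desk.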
Transparency
Transparency enables participants, operators, and regulators to understand how the network is actually being used, not merely how it is described in policy. Public directories, disclosed exchange purposes, and aggregate usage metrics create shared situational awareness and make misuse easier to detect, contest, and correct.
Transparency is the public ledger of who has been issued keys to the building and which doors those keys are supposed to open. When participation and behavior are visible, misrepresentation becomes riskier, enforcement becomes more credible, and trust is grounded in observable facts rather than assumptions.
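To make the "public ledger" concrete, a directory entry might look something like this - the record fields here are hypothetical, not drawn from any real framework's schema:

```python
from dataclasses import dataclass

# Hypothetical public directory record; field names are illustrative.
@dataclass
class ParticipantRecord:
    name: str
    participant_type: str          # e.g. "provider", "payer", "app vendor"
    permitted_purposes: list[str]  # publicly declared exchange purposes
    data_retention_days: int       # intermediary data-retention policy
    monthly_query_volume: int      # aggregate usage metric

def is_purpose_permitted(record: ParticipantRecord, purpose: str) -> bool:
    """A queryable directory lets anyone check a claimed purpose
    against what the participant publicly declared."""
    return purpose in record.permitted_purposes
```

Once a record like this is public, a mismatch between a participant's claimed purpose and its declared purpose stops being a private dispute and becomes an observable fact anyone can contest.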
Monitoring
Monitoring is the ongoing observation of network activity to identify anomalous behavior, policy violations, or emerging patterns of misuse. Mature networks track volume, frequency, and purpose alignment in near-real time, using thresholds and automated detection to surface behavior that departs from expected norms.
Payment networks like Visa and Mastercard continuously monitor transaction behavior across their networks in near-real time - we’ve all received fraud alerts when our card activity suddenly deviates from our normal spending patterns. The network does not rely on what participants said they would do at onboarding (because let’s be honest - humans lie!). It continuously evaluates what they are actually doing once access is granted, and intervenes when reality diverges from expectation.
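The simplest version of this kind of detection is a per-participant baseline plus a threshold. Here's a minimal sketch - the window size and multiplier are illustrative knobs, and real payment-network systems are vastly more sophisticated:

```python
from collections import deque
from statistics import mean

class VolumeMonitor:
    """Flag a participant whose daily query volume spikes far above its
    own recent baseline. Window and multiplier are illustrative defaults."""

    def __init__(self, window: int = 7, multiplier: float = 3.0):
        self.window = window
        self.multiplier = multiplier
        self.history: dict[str, deque] = {}

    def observe(self, participant: str, daily_queries: int) -> bool:
        """Record one day's volume; return True if it looks anomalous
        relative to the participant's rolling baseline."""
        hist = self.history.setdefault(participant, deque(maxlen=self.window))
        # Need a few days of baseline before judging anything anomalous.
        anomalous = len(hist) >= 3 and daily_queries > self.multiplier * mean(hist)
        hist.append(daily_queries)
        return anomalous
```

Note what this does and doesn't rely on: nothing here consults what the participant promised at onboarding. It only compares what they are doing today against what they actually did last week.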
Auditing
People and organizations change over time. Periodic evidence-based review of retrospective participant behavior is a natural counterpart to admission controls to keep the same rigor applied over time. Auditing examines the full corpus of telemetry (logs, access patterns, declared purposes, and credentials) on a recurring basis to confirm that participants remain who they said they were and are using the network in ways consistent with their stated role.
This is shocking to some people, but all networks have fraud and abuse. When you create a novel mechanism for exchanging communications, payments, or data, you are creating a new attack surface. Human nature means that some actors will probe, exploit, and arbitrage that surface as soon as it exists. That behavior is not an anomaly. It is a constant. As networks scale and the value flowing through them increases, so too does the incentive to misrepresent, impersonate, or extract at volume.
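At its core, an audit pass like the one described above is just a reconciliation between declared and observed behavior. A minimal sketch, with all names and log fields hypothetical:

```python
# Hypothetical audit pass: replay retained access logs against each
# participant's declared purposes and surface mismatches for human review.
def audit_access_logs(declared_purposes: dict[str, set[str]],
                      access_log: list[dict]) -> list[dict]:
    """Return log entries whose stated purpose falls outside what the
    participant declared at onboarding."""
    findings = []
    for entry in access_log:
        allowed = declared_purposes.get(entry["participant"], set())
        if entry["purpose"] not in allowed:
            findings.append(entry)
    return findings
```

Run on a recurring schedule, a pass like this is what turns "they said they were who they said they were" into evidence that they still are.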
Dispute Resolution
In mature networks, dispute resolution provides a formal escalation path when monitoring or auditing surfaces questionable behavior, or when participants allege harm by other actors. Critically, these processes are not left entirely to private negotiation; they are structured, time-bound, and backed by an entity with the authority to compel facts and impose outcomes.
On the securities markets, FINRA and exchange arbitration processes resolve disputes between brokers, firms, and customers. Whatever the process, effective dispute resolution closes the loop. It translates signals into decisions, decisions into consequences, and consequences into precedent. If admission controls are the front desk and monitoring is the cameras, dispute resolution is the property manager with the keys, the incident report, and the authority to act.
Enforcement
Enforcement is the ability to impose real consequences when rules are violated, through concrete mechanisms: access suspension or termination, financial penalties, remediation mandates, loss of privileges, and referral to regulators or law enforcement (with possible criminal repercussions) where appropriate.
Deterrence depends on credibility. Participants must believe that violations will predictably lead to consequences that meaningfully change their incentives. The goal is not punishment for its own sake; it is deterrence. If there is no stick, incentives do not change.
What’s Missing
Taken together, these controls form a familiar and largely complete set. They address who gets in, what is visible, how behavior is monitored over time, how disputes are resolved, and whether violations actually change outcomes. None of this is novel. It is how scaled networks in other industries manage trust under conditions of value and risk. We can pragmatically and analytically compare ourselves against other networks and copy them shamelessly when it makes sense.
That said, the letter implicitly assumes that most of the burden for detecting and correcting misuse sits with centralized operators and regulators. In practice, the most resilient networks do not rely solely on central oversight. They scale trust by delegating visibility and by reducing the need for workarounds in the first place. From that perspective, two additional design choices would materially strengthen our frameworks.