CMS ACCESS Model: Complete Provider Readiness Guide
The CMS ACCESS Model launches January 2026 and runs for five performance years. Most ACOs and health systems that intend to participate do not yet have the data infrastructure, clinical workflow, or operating model to score well in performance year one. This is the comprehensive readiness guide we walk our clients through before they sign a participation agreement.
If you are looking for the short version, our CMS ACCESS Model Readiness Checklist is a 12-capability self-assessment built from the same framework. This guide is the long version with the reasoning behind each capability, the trade-offs in how you can build it, and the operating model that holds it all together once you are live.
TL;DR. ACCESS is a Medicare value-based care program that puts FHIR interoperability, patient-reported outcome measures, and outcome attainment at the center of scoring. To perform well, an ACO needs five operational pillars (FHIR APIs, PROMs collection, outcome attainment scoring, HCC risk adjustment workflow, and care management integration), all running on a unified data platform with HIPAA-aligned governance. The bar is meaningfully higher than MSSP. Plan a 9 to 14 month build if you are starting from a claims-only stack.
Section 1. What the ACCESS Model actually is
The CMS ACCESS Model, short for Accountable Care for Chronic Conditions Serving Seniors, is a voluntary value-based care program from the CMS Innovation Center (CMMI). It targets Medicare Fee-for-Service beneficiaries with two or more chronic conditions and holds participating ACOs accountable for total cost of care, quality, and a new layer of measurable outcome attainment.
Three things make ACCESS substantively different from MSSP and prior CMMI value-based programs.
First, interoperability is required, not optional. Participating ACOs have to operate production FHIR R4 endpoints that expose structured clinical data on demand to CMS and, by patient consent, to third-party apps. The ONC-mandated FHIR APIs that hospitals already expose are the floor, not the ceiling. ACCESS expects ACOs to use the data, not just publish it.
Second, patient-reported outcome measures (PROMs) are part of scoring. PROMs have lived in pilot programs and specialty clinics for years. ACCESS pulls them into ACO-wide quality calculation. ACOs have to collect, store, and report PROMs across attributed populations at sustained response rates, not just when convenient.
Third, outcome attainment is its own scoring element. Outcome attainment measures whether documented patient goals (target A1C, functional-status improvement, pain-score reduction, and similar) are actually being met, and to what degree. This is genuinely new. It requires structured goal capture in the EHR, scoring engines that update as new data arrives, and clinical-team workflows that close the loop on the patients whose goals are not being met.
The mechanics are otherwise familiar to anyone who has operated MSSP. Prospective attribution based on plurality of primary care claims. Single risk track with progressive downside risk across the five performance years. Regional plus national benchmark blend with risk adjustment. Shared savings and losses settled annually.
Section 2. Who should participate
Not every ACO is a fit for ACCESS in performance year one. The fit profile favors organizations that already have:
- An attributed population of meaningful size (typical participant has more than 10,000 attributed lives, with many at 30,000-plus).
- A history of MSSP performance and the financial capacity to absorb downside risk.
- Multiple participating provider organizations with an EHR estate that includes Epic, Cerner (Oracle Health), or Meditech in production.

- A clinical leadership team that has the bandwidth to drive PROMs adoption and outcome-attainment workflow change.
- Either an existing data platform that can be extended toward ACCESS requirements or the budget to build one.
If you are missing two or more of these, performance year one is probably not your year. The model runs for five performance years and CMMI typically opens additional cohort entry windows. Building toward an entry in performance year two or three is a perfectly valid strategy. The financial and operational consequences of joining unprepared are significantly worse than waiting a year.
Section 3. The five operational pillars you need to operate
Everything that scores well in ACCESS rests on five capability pillars. They are not independent. The data platform under each is largely the same, which is why a unified build is significantly cheaper than five disconnected ones.
Pillar 1. FHIR APIs and interoperability
The foundation. ACOs need a production FHIR R4 endpoint exposing the USCDI v3 core resources (Condition, Observation, Procedure, MedicationStatement, AllergyIntolerance, Encounter, plus QuestionnaireResponse for PROMs) for every attributed patient. SMART on FHIR / OAuth 2.0 authentication with documented system, user, and patient scopes. FHIR Bulk Data API operational with at least one major attributed-population payer for cross-organizational data exchange. Patient-facing FHIR endpoints with documented registration and consent capture.
The work is rarely in writing pipeline code. It is in vendor registration timelines (Epic App Orchard, Cerner Code, Meditech APIs each have their own approval process), scope approvals, clinical-content governance, and operational hardening (rate limit handling, retries, freshness monitoring, and dead-letter handling). Plan 4 to 6 months for a single-EHR build, 8 to 12 months for multi-EHR with bulk data.
For a deeper treatment, see our FHIR integration consulting deep-dive and the Epic FHIR integration for ACOs practitioner guide.
Pillar 2. PROMs collection and reporting
Patient-reported outcome measures sound deceptively simple to integrate. Implementing them at the response rates ACCESS expects is the single hardest organizational change ACO operators report.
You need a PROMs platform that integrates with patient portal, SMS, or in-EHR survey delivery. PROMIS-29 and condition-specific instruments are typical starts. Quarterly cadence minimum for chronic-condition cohorts. PROMs results stored as FHIR Observation or QuestionnaireResponse resources tied to longitudinal patient records, not in a separate survey silo. Most importantly, PROMs visible in clinician workflow with documented escalation thresholds, because PROMs that nobody reads do not change outcomes.
Sustained 40 to 50 percent response rates are the bar. Hitting that requires workflow design, clinical-staff training, automated reminders, and panel-level reporting that gives care teams visibility into who has and has not responded. The technology piece is roughly 30 percent of this pillar. The other 70 percent is operating model.
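The panel-level reporting piece can be sketched as a simple response-rate rollup that flags panels below the bar. Field names (`panel`, `completed`) and placing the bar at 40 percent are illustrative assumptions, not a PROMs-vendor schema.

```python
def panel_response_rates(surveys: list[dict]) -> dict[str, float]:
    """Per-panel PROMs response rate: completed surveys / surveys delivered."""
    sent: dict[str, int] = {}
    done: dict[str, int] = {}
    for s in surveys:
        sent[s["panel"]] = sent.get(s["panel"], 0) + 1
        done[s["panel"]] = done.get(s["panel"], 0) + int(s["completed"])
    return {p: done[p] / sent[p] for p in sent}

def panels_below_bar(rates: dict[str, float], bar: float = 0.40) -> list[str]:
    """Panels whose response rate sits below the sustained-rate bar."""
    return sorted(p for p, r in rates.items() if r < bar)

surveys = [
    {"panel": "pod-a", "completed": True},
    {"panel": "pod-a", "completed": True},
    {"panel": "pod-a", "completed": False},
    {"panel": "pod-b", "completed": True},
    {"panel": "pod-b", "completed": False},
    {"panel": "pod-b", "completed": False},
]
rates = panel_response_rates(surveys)
print(panels_below_bar(rates))  # ['pod-b']
```

The rollup is trivial; the operating-model work is making sure a named owner sees `pod-b` on that list every week and has a playbook for it.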
Pillar 3. Outcome attainment scoring
The novel scoring layer. Outcome attainment scores whether documented patient goals are actually being met. To score it, you need three things in production:
- Structured patient-specific goal capture in the EHR. Free-text goals will not score. Custom EHR fields, care-plan modules, or condition-specific goal templates. Typical examples: target A1C of 7.0, functional-status target of 80 on a defined scale, pain-score target of 4 or less.
- A scoring engine that updates as new clinical data arrives. Lives in the data platform, not in the EHR. Computes attainment per patient, per goal, per measurement period.
- Attainment-rate dashboards by panel, provider, and condition. Used in clinical operations, not just back-office reporting. Care teams need to see attainment rates as a leading indicator, not in retrospect at year-end.
This pillar is where most ACOs underestimate effort by 4 to 6 months. Goal capture in the EHR is a clinical-content governance problem disguised as a data engineering problem. Plan accordingly.
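A minimal sketch of the scoring-engine core, assuming goals land in the platform as structured records with a metric, target, and direction. The record shape is hypothetical, not an ACCESS specification; real engines also handle measurement periods, partial attainment, and goal revisions.

```python
def goal_attained(goal: dict, latest_value: float) -> bool:
    """Is a single structured goal met by the latest observation?

    direction "lte": lower-or-equal is better (A1C, pain score).
    direction "gte": higher-or-equal is better (functional status).
    """
    if goal["direction"] == "lte":
        return latest_value <= goal["target"]
    return latest_value >= goal["target"]

def attainment_rate(goals: list[dict], latest: dict[tuple, float]) -> float:
    """Share of goals currently met; an unmeasured goal counts as not met."""
    if not goals:
        return 0.0
    met = 0
    for g in goals:
        value = latest.get((g["patient_id"], g["metric"]))
        if value is not None and goal_attained(g, value):
            met += 1
    return met / len(goals)

goals = [
    {"patient_id": "p1", "metric": "a1c", "target": 7.0, "direction": "lte"},
    {"patient_id": "p2", "metric": "function", "target": 80, "direction": "gte"},
    {"patient_id": "p3", "metric": "pain", "target": 4, "direction": "lte"},
]
latest = {("p1", "a1c"): 6.8, ("p2", "function"): 75}  # p3 has no measurement
print(round(attainment_rate(goals, latest), 2))  # 0.33 -- only p1's goal is met
```

Note the design choice that an unmeasured goal scores as not attained: that is what turns the dashboard into a leading indicator, because it surfaces measurement gaps, not just clinical gaps.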
Pillar 4. HCC risk adjustment workflow
ACCESS uses the same hierarchical condition category framework as MSSP, but the V28 transition is fully phased in by performance year 2026. The V24 playbook does not produce the same financial result under V28. ACOs that have been carrying mature retrospective coding programs frequently find that performance year one revenue is meaningfully below expectations if they relied on the old model.
The capability you need is pre-visit clinical NLP that surfaces unaddressed HCCs from EHR notes in time for the clinician to confirm them during the encounter. Two-stage extraction with V24 and V28 mapping, confidence scoring, audit-trail design that holds up under RADV-style review. Surfaced 24 to 48 hours before the encounter, embedded in the clinician's daily workflow.
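A sketch of the suggestion-filtering step, assuming an upstream NLP stage has already produced condition mentions with confidence scores. The diagnosis codes, the 0.85 threshold, and the `v28_hcc` field are illustrative, not a validated V28 crosswalk; the point is that evidence travels with every suggestion.

```python
def surface_hcc_suggestions(mentions: list[dict], threshold: float = 0.85) -> list[dict]:
    """Filter NLP-extracted condition mentions for pre-visit clinician review.

    Keeps only mentions at or above the confidence threshold and carries the
    source-note evidence forward so each suggestion remains auditable under
    RADV-style review. Codes and mapping here are illustrative assumptions.
    """
    suggestions = []
    for m in mentions:
        if m["confidence"] < threshold:
            continue  # low-confidence mentions route to coder QA, not clinicians
        suggestions.append({
            "icd10": m["icd10"],
            "v28_hcc": m["v28_hcc"],
            "confidence": m["confidence"],
            # audit trail: note id + exact evidence span survive to the UI
            "evidence": {"note_id": m["note_id"], "span": m["span"]},
        })
    return suggestions

mentions = [
    {"icd10": "E11.9", "v28_hcc": "HCC-38", "confidence": 0.93,
     "note_id": "n-101", "span": "type 2 diabetes, diet controlled"},
    {"icd10": "I50.22", "v28_hcc": "HCC-226", "confidence": 0.61,
     "note_id": "n-102", "span": "possible chf?"},
]
print([s["icd10"] for s in surface_hcc_suggestions(mentions)])  # ['E11.9']
```

The clinician-decision lineage (confirmed, rejected, deferred) attaches downstream of this step, but the evidence dictionary is what makes that lineage defensible later.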
Retrospective chart review still has a place, particularly for population-level coding QA, but it cannot be the primary capture mechanism. The economic and clinical effects are both significantly stronger when conditions are confirmed in care rather than reconstructed in coding.
For details on how to build this capability, see the HCC risk adjustment automation deep-dive and the HCC risk adjustment NLP practitioner guide.
Pillar 5. Care management workflow integration
The last pillar is where most ACO platforms fail in production. You can build the cleanest FHIR pipeline, the most sophisticated outcome attainment engine, the most accurate HCC NLP, and still see year-one performance fall flat if the resulting insights do not change clinician behavior.
Care management workflow integration means attribution-aware patient lists in the clinician daily UI, not in a separate analytics tool. Care gap surfacing inside the EHR worklist, not in a quarterly PDF. PROMs results visible at the point of care with documented escalation thresholds. HCC suggestions surfaced 24 to 48 hours before the encounter. Outcome attainment dashboards reviewed by clinical leadership monthly with named owners on the patients and panels that are not on track.
The integration work is partly technical (in-EHR worklist embedding, SMART app gallery, sidebar context, MyChart workflow) and partly organizational (defining the care team, the cadence, the escalation paths, and the leadership accountability). Both halves have to ship for the rest of the platform to produce outcomes.
Section 4. Performance year 1 timeline
If you are entering performance year one (January 2026 start), here is the work-back from the start date that we use with clients.
18 months out (mid-2024 if you have not started). Architecture decision and platform vendor selection. Lakehouse vendor (Microsoft Fabric, Databricks, Snowflake) chosen based on existing tenancy, team skills, and contract footprint. Data residency and HIPAA stance documented. ACCESS application timeline aligned with build milestones.
12 months out (early 2025). Platform foundation live. Identity, network isolation, BAA-covered cloud zones, baseline governance via Purview or equivalent. First FHIR ingestion pipeline (typically your largest EHR participant) ingesting Conditions, Observations, Procedures, and Encounters into bronze.
9 months out (April 2025). Multi-EHR FHIR ingestion live across participating provider organizations. Payer claims feeds integrated. Canonical patient-encounter-claim graph in silver. Attribution logic live for the ACCESS-eligible population.
6 months out (July 2025). PROMs platform integrated with patient portal and SMS. Workflow trained. First sustained-response measurement period running. HCC NLP pipeline soft-launched on a single specialty or pod with gold-standard set built. Outcome attainment scoring engine running on a defined goal set with at least one chronic condition.
3 months out (October 2025). Multi-pillar shadow run on production data. Outcome attainment dashboards live for clinical leadership. PROMs response rates trending toward 40 percent. HCC NLP rolled out across attributed population with defensible audit trail. Care management workflow integration in clinician daily UI.
Performance year start (January 2026). Production cutover with documented rollback plan. 30-day stabilization window with active engineering and clinical operations support. Weekly performance reviews against the scoring framework.
The single most common mistake is collapsing this 18-month timeline into 9 months. We have seen it. It does not produce a defensible PY1 result. If you are inside 9 months, plan for entry in performance year two or three rather than rushing PY1.
Section 5. Quality measure framework under ACCESS
Quality scoring under ACCESS blends three measurement families.
Process and outcome measures. A subset of the familiar MSSP / e-CQM library, adapted for ACCESS-specific clinical priorities (chronic-condition management, transitions of care, advance care planning, behavioral health integration).
Patient-reported outcome measures. PROMIS-29 generic measures across the attributed population, plus condition-specific measures for the dominant chronic conditions in your panel. Captured at baseline plus at quarterly cadence minimum.
Outcome attainment. The novel layer. Per-patient documented goals scored for whether they are being met, aggregated to attainment rates by panel, provider, and condition.
The weighting across these three families is laid out in the ACCESS scoring framework that CMMI publishes annually. Outcome attainment carries enough weight that an ACO with strong process measures but weak outcome attainment will underperform an ACO with the inverse. The strategic implication is that outcome attainment is the highest-leverage scoring element for an ACO with strong existing operational foundations, because it is the dimension on which fewer competitors are well prepared.
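The interaction of the three families can be illustrated with a simple weighted blend. The weights below are invented for illustration only, since CMMI publishes the actual weighting annually; the shape of the arithmetic is the point, not the numbers.

```python
# Hypothetical weights for illustration; the real weighting is published
# annually by CMMI in the ACCESS scoring framework.
WEIGHTS = {"process_outcome": 0.40, "proms": 0.25, "outcome_attainment": 0.35}

def blended_quality_score(family_scores: dict[str, float]) -> float:
    """Weighted blend of the three measure families, each scored 0 to 1."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[f] * family_scores[f] for f in WEIGHTS)

# Strong process measures cannot compensate for weak outcome attainment
a = blended_quality_score({"process_outcome": 0.9, "proms": 0.7, "outcome_attainment": 0.4})
b = blended_quality_score({"process_outcome": 0.7, "proms": 0.7, "outcome_attainment": 0.8})
print(round(a, 3), round(b, 3))  # 0.675 0.735
```

Under any weighting where outcome attainment carries comparable weight to process measures, ACO `b` above outperforms ACO `a` despite weaker process scores, which is the strategic asymmetry the section describes.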
Section 6. Risk-track economics under ACCESS
ACCESS runs a single risk track with progressive downside risk across the five performance years. PY1 has minimum-savings-rate and minimum-loss-rate thresholds that look familiar to MSSP enhanced track participants. PY3 onward, the asymmetry of upside-to-downside narrows. By PY5, downside exposure is meaningful enough that ACO operators have to be confident in their performance trajectory before committing.
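A toy settlement calculation shows how an MSR/MLR corridor gates payouts and recoupments. Every parameter here is illustrative, not a published ACCESS term, and real settlement layers in quality-score multipliers and loss caps that this sketch omits.

```python
def settle(savings_rate: float, msr: float, mlr: float,
           sharing_rate: float, benchmark: float) -> float:
    """Annual settlement sketch under an MSR/MLR-gated risk track.

    Positive savings_rate means spend came in under benchmark. Results
    inside the minimum corridor settle to zero; outside it, the ACO
    shares savings (or losses) at sharing_rate. Illustrative numbers only.
    """
    if -mlr < savings_rate < msr:
        return 0.0  # inside the corridor: no payout, no recoupment
    return sharing_rate * savings_rate * benchmark

# 3% savings on a $100M benchmark, 2% MSR, 75% sharing
print(round(settle(0.03, msr=0.02, mlr=0.02, sharing_rate=0.75,
                   benchmark=100_000_000)))  # 2250000
```

Run the same function with a widening corridor or a falling sharing rate across PY1 to PY5 and the back-half compression the section describes drops straight out of the arithmetic.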
The economic decision is rarely about year one. PY1 is structured to give well-prepared ACOs an asymmetric upside and limited downside. The economic decision is about whether you can hit the trajectory that makes PY3 onward profitable. If your data platform, PROMs adoption, outcome attainment scoring, and HCC capture are not on a strengthening curve into PY2, the back-half years compress your shared savings rapidly.
This is why the build cannot be PY1-only. The infrastructure has to support sustained improvement across measurement periods. Platforms built for PY1 readiness with no operating model for continuous improvement underperform platforms built thinner but with strong continuous-improvement loops.
Section 7. Common readiness gaps
Across the readiness reviews we run with ACOs and health systems, the same six gaps appear repeatedly.
- PROMs participation rate well below 40 percent. Almost universal. The technical integration is the easy 30 percent. Sustained workflow adoption is the hard 70 percent.
- No structured patient-specific goal capture in the EHR. Goals live in clinical narrative, not in structured fields. Outcome attainment scoring cannot run against narrative text without a heroic NLP layer that nobody wants to build under time pressure.
- Single-EHR FHIR pipelines with multi-EHR participating provider organizations. ACOs assume the largest EHR represents the whole estate, then find out at month 9 that the pipeline does not cover the second-largest EHR. The retrofit cost is significant.
- Care management workflow not integrated with the EHR. Dashboards live in a standalone tool that clinicians do not open. The platform can be technically excellent and still produce no outcome change.
- HCC NLP without a RADV-defensible audit trail. Suggestions surfaced without source-note evidence, without confidence scores, without clinician-decision lineage. The first audit exposes the gap.
- Vendor-led architecture rather than use-case-led. A platform shaped by Fabric, Databricks, or a healthcare-AI vendor optimizes for the vendor. Platforms shaped by ACCESS scoring requirements optimize for the outcome.
Section 8. Build versus buy decision framework
The build-versus-buy question for ACCESS readiness has changed significantly in the last 18 months. Three factors drive the decision.
Existing platform footprint. If you already have a lakehouse on Fabric, Databricks, or Snowflake with ACO-relevant data flowing, the marginal cost of extending it for ACCESS is small. Buying a vendor platform on top of an existing platform is rarely defensible.
Time horizon. If you are inside 9 months to performance year start, vendor platforms can compress timeline at the cost of long-term flexibility. If you have 12 months or more, building (often with consulting partner support) produces a platform you control for a decade.
Clinical workflow ownership. The pillars where buying makes the most sense are PROMs collection (where vendor platforms have multi-tenant infrastructure for SMS, portal, and survey delivery) and HCC NLP (where vendors have invested in models and labeled training data). The pillars where buying rarely works are outcome attainment scoring and care management workflow, because both have to be tightly integrated with your specific EHR and clinical operating model.
The hybrid pattern that wins most often. Build the unified data platform. Buy the PROMs collection platform. Build outcome attainment scoring on top of your platform. Buy or partner for HCC NLP if the clinical evaluation rigor is strong, build it if you have the data engineering depth. Build all care management workflow integration. Use a senior consulting partner to compress timeline and avoid the most common architecture mistakes.
For deeper coverage of the platform decision, see our ACO data platform consulting deep-dive and the Building an ACO data platform practitioner guide.
Section 9. What "ready for ACCESS" actually looks like
We score readiness across 12 capabilities organized into four sections. The summary version:
- FHIR and interoperability. Production FHIR R4 endpoint, SMART on FHIR / OAuth 2.0 with documented scopes, FHIR Bulk Data API operational with payers, patient-facing FHIR endpoint with consent capture.
- PROMs at scale. Platform integrated, sustained 40 percent-plus response rate, results stored as FHIR resources, results visible in clinician workflow with escalation thresholds.
- Outcome attainment. Structured goal capture in EHR, scoring engine in production, panel-provider-condition dashboards used by clinical leadership.
- Underlying platform and operating model. Unified lakehouse, HCC NLP with RADV-defensible audit trail, care management workflow embedded in clinician daily UI.
The full self-assessment with scoring guidance is in the CMS ACCESS Model Readiness Checklist (free download). It is the working version we walk clients through during readiness reviews, and the scoring map gives a defensible read on whether you are entering PY1 or planning for PY2 or PY3.
Section 10. Performance year 1 operating model
The platform is necessary but not sufficient. The operating model that runs on top of it produces the result. Three components.
Weekly clinical-operations rhythm. Care team review of PROMs response rates, outcome attainment dashboards, HCC NLP suggestion volume and confirmation rates, and care management worklist drift. Named accountability per panel.
Monthly leadership review. ACO leadership review of trajectory against the scoring framework, with formal go-or-no-go decisions on workflow changes or platform investments. The cadence matters because mid-year course corrections during a performance year are operationally expensive, but they are cost-effective when the trajectory is clearly off.
Quarterly platform retrospective. Engineering and clinical informatics review of platform health (data quality, freshness, accuracy of HCC NLP, drift on outcome attainment scoring) and the runbook for the next quarter's evolution.
The operating model has to ship before performance year start. Platforms with strong technology and weak operating models underperform platforms with merely competent technology and strong operating models. Every time.
Section 11. Where to go from here
Three concrete next steps depending on where you are today.
If you are still scoping ACCESS participation: Score yourself with the readiness checklist, align internally on PY1 versus PY2 versus PY3 entry, and book time with a partner who has done this before. The decision is structurally easier with a defensible scoring read.
If you are mid-build and worried about timeline: Run a focused readiness review with your team. Identify the two or three highest-leverage gaps (almost always PROMs adoption, outcome attainment goal capture, and care management workflow integration). Decide whether to absorb them in PY1 or accept their absence as a known shortfall.
If you are post-platform and pre-performance: Pressure-test the operating model. Most year-one underperformance traces to operating-model gaps, not platform gaps. The platform you built is probably better than you think. The cadence and accountability around it are usually weaker than you think.
How DATA4AI helps: We run readiness reviews and build the data platform, FHIR integration, HCC NLP, outcome attainment scoring, and care management workflow integration that ACCESS Model performance requires. Senior practitioners only, no handoffs. Book a working session to walk your scored checklist with us, or download the readiness checklist to score yourself first.
Related articles
FHIR API Readiness for ACOs: USCDI v3, SMART on FHIR, Bulk Data, and Production Hardening
Cluster article on Pillar 1 of the CMS ACCESS Model readiness framework. The technical depth on what production FHIR APIs require for ACO data teams: resource coverage, auth patterns, Bulk Data, and the production hardening non-negotiables.
PROMs Collection at Scale: Hitting Sustained 40 to 50 Percent Response Rates
Cluster article on Pillar 2 of the CMS ACCESS Model readiness framework. Instrument selection, four-channel delivery, the patterns that move response rates from pilot levels to ACCESS-eligible scale, and how PROMs surface in clinical workflow.
Outcome Attainment Scoring: The Highest-Leverage Element in ACCESS Model PY1
Cluster article on Pillar 3 of the CMS ACCESS Model readiness framework. Why outcome attainment is the genuinely new scoring element, the clinical-content governance that has to ship before the engine can score, and the dashboard layer that closes the loop.
Let's talk about your value-based care project.
Working on a value-based care contract, ACCESS Model application, EHR integration, or AI-enabled clinical workflow project? Book a 20-minute discovery call or email [email protected].