TL;DR
- MEDDIC is a B2B qualification framework — Metrics, Economic Buyer, Decision Criteria, Decision Process, Identify Pain, Champion — created at PTC in 1996 during the company's run from roughly $300M to $1B in revenue.
- Roughly 73% of SaaS companies above $100K ACV run some version of MEDDIC or MEDDPICC, and full adopters report 18% higher win rates and 24% larger deals.
- MEDDPICC adds Paper Process (legal, procurement) and Competition (named competitor) to the original 6 — it is the version most enterprise sales orgs run on $250K+ deals.
- The framework only works when it is installed into the discovery call itself. Scoring after the fact — in a CRM field dump or a pipeline review two weeks later — is how MEDDIC rollouts quietly die.
- Grade every deal on a 0–10 scale per letter. Refuse to forecast anything below 7/10 on Economic Buyer and Decision Process. Forecast accuracy goes from ±35% to ±10% inside two quarters.
Snippet answer
MEDDIC is a B2B sales qualification framework that stands for Metrics, Economic Buyer, Decision Criteria, Decision Process, Identify Pain, and Champion. Created in 1996 at PTC by Dick Dunkel and Jack Napoli, under sales leader John McMahon, the framework gives reps six checkpoints to verify on every enterprise deal — if any checkpoint is still missing late in the cycle, the deal is flagged as at-risk in the forecast. The modern extension, MEDDPICC, adds Paper Process and Competition for enterprise deals above $250K ACV.
What MEDDIC is (the one-paragraph answer)
Forget for a second that MEDDIC is an acronym. Start with the rep problem it solves: a $180K deal that looked great in discovery, got committed in pipeline review, and ghosted in the final week because nobody ever met the person signing the MSA. Every rep has lost that deal. MEDDIC is the framework built to stop losing it twice.
The plain definition: MEDDIC is a B2B sales qualification framework. It gives the rep six specific things to verify before committing a deal to the forecast — Metrics, Economic Buyer, Decision Criteria, Decision Process, Identify Pain, Champion. Each letter is a checkpoint. Miss one, and the commit is dishonest. Hit all six, and the forecast gets quieter.
The framework sits in a specific category. It is not a selling methodology like Challenger or Sandler — those teach how to pitch and handle the room. It is not a discovery technique like SPIN — that teaches what questions to ask. MEDDIC is pure qualification: what has to be true before this deal is real. The two things most often conflated with MEDDIC — selling technique and discovery questioning — are compatible with it, not substitutes for it.
Adoption is the tell. Roughly 73% of SaaS companies selling above $100K ACV run some version of MEDDIC or its extensions, according to practitioner research tracked by the MEDDIC Academy. The framework has outlasted every qualification trend of the last 30 years because complex deals have six checkpoints whether the rep tracks them or not — MEDDIC just makes the tracking explicit.
Where MEDDIC came from: PTC, 1996, and the $300M-to-$1B run
MEDDIC was built in 1996 by John McMahon, Dick Dunkel, and Jack Napoli at Parametric Technology Corporation (PTC) — a CAD software company selling six- and seven-figure contracts to industrial engineering teams. The context matters: PTC's deals were complex, multi-threaded, and competitive against incumbents with deep installed bases. The reps needed a way to tell which deals were real and which were stories.
The outcome is the part every sales leader cites. PTC grew from roughly $300M to $1B in revenue in four years with MEDDIC as the qualification backbone, and the framework became one of the most-cited pieces of the 1990s enterprise-software playbook. John McMahon went on to write The Qualified Sales Leader, which codified MEDDIC for a generation of SaaS founders and CROs. The lineage is why you see MEDDIC at Snowflake, MongoDB, Datadog, Cloudflare, and most of the post-2010 enterprise SaaS canon.
The original framework was six letters. The extensions came later. In the mid-2000s, enterprise deals started spending more time in procurement and legal — so practitioners added Paper Process (the P), creating MEDDPIC. By the 2010s, competitive displacement in the SaaS market meant reps were routinely selling against named incumbents — so Competition got added (the second C), creating MEDDPICC. Both extensions shipped out of the field, not out of a book. They are what reps actually need.
The framework's durability comes from the fact that nothing about the PTC context was unique. Any B2B deal above $50K ACV runs into the same six checkpoints. Any deal above $250K adds paper process and competition. MEDDIC does not date because the underlying buyer behavior does not date — committees still have economic buyers, decisions still have criteria, and deals without real pain still do not close.
The 6 letters of MEDDIC, with rep scripts for each
Here is the rep-ready breakdown — each letter, what it actually means in a deal, the question a rep uses to get it, and the failure mode that kills the qualification if the letter is shallow. Print this, pin it above the monitor, and stop reading generic MEDDIC overviews.
Metrics
What it is: The quantified business outcome the buyer will get from the deal — dollars saved, hours recovered, revenue added, risk retired. Not a benefit statement. A number the CFO would sign off on.
Rep script
"If we could cut your onboarding time from 6 weeks to 2 weeks, what would that be worth to you in hiring velocity this year?"
Watch-out: Metrics without a dollar sign or a percentage are aspirations. Push until the buyer gives you the number themselves.
Economic buyer
What it is: The one person who can say yes and overrule a no. Not the person running the eval. Not the champion. The budget owner — usually a VP or C-level who will sign the MSA.
Rep script
"Beyond yourself, who else needs to sign off on a decision like this? If Finance pushed back, who would have the final call?"
Watch-out: If you have not met the economic buyer by stage 3, the deal is at risk. Every unmet economic buyer adds 30 days to the cycle and a 40% slip probability.
Decision criteria
What it is: The formal list the buyer will use to score you against competitors — technical, commercial, and relational. The RFP checklist, written or implicit, that your demo has to hit.
Rep script
"If you were writing the one-page evaluation scorecard today, what would the top five rows be? And who gets a vote on each row?"
Watch-out: If the criteria sound generic ("ease of use, price, support"), you have not found the real ones yet. Dig for the two criteria that would actually disqualify a vendor — those are the ones that matter.
Decision process
What it is: The step-by-step path from today to a signed contract — evaluation stages, approval gates, legal review, security review, procurement. A date for each step, owned by a named person.
Rep script
"Walk me through how a tool like this has gotten approved in the past. Who reviews first, what happens after, and how long does legal usually take?"
Watch-out: A decision process without named owners and dates is a wish list. The deal slips by the length of whatever step you did not map.
Identify pain
What it is: The specific, quantified business problem your product solves — the pain that is bad enough that standing still is no longer an option. Not a nice-to-have. A the-CEO-asks-about-it-at-the-board pain.
Rep script
"What is this problem costing you if you do nothing for another 12 months? Who notices when it happens, and what does it stop them from doing?"
Watch-out: If the buyer says "we could live with it," the deal is not real. Real pain has a deadline, a stakeholder, and a cost of inaction.
Champion
What it is: Someone on the buyer's side who wants this deal to happen and has the power to influence the outcome. A champion sells when you are not in the room — coaches on the politics, flags risks, and pushes internal sign-offs.
Rep script
"What would it take for you to personally go to bat for this inside your organization? And who inside your team would push back if we tried to move forward?"
Watch-out: A friend is not a champion. A champion is the person who emails you at 9pm saying procurement added a clause you should push back on. If you do not have one, you have a contact.
Read the six together and the pattern is obvious: MEDDIC is a sequence, not a checklist. Pain anchors the call. Metrics quantify the pain. Economic Buyer ties the money to the pain. Decision Criteria and Process map the path. Champion carries the deal when the rep is not in the room. Break the sequence and the framework collapses into box-ticking.
MEDDIC vs MEDDPICC vs MEDDPIC: which acronym matters
The acronym keeps growing. Every year a new letter shows up on LinkedIn and a fresh argument breaks out about which version is the "real" one. The honest answer is that the version matters less than the letters you actually use on live deals. Here is the three-way split most reps actually encounter.
| Acronym | Letters | Introduced | What it adds | Best fit |
|---|---|---|---|---|
| MEDDIC | 6 letters | 1996 | Original framework — M, E, D, D, I, C | Classic complex B2B deals, $50K–$500K ACV |
| MEDDPIC | 7 letters | 2005+ | Adds Paper Process — legal, procurement, MSA review | Enterprise deals where legal is a separate workstream |
| MEDDPICC | 8 letters | 2010s | Adds Competition — named competitor with a defined battle plan | Competitive enterprise deals, $250K+ ACV, multi-vendor bake-offs |
The practical rule: start with MEDDIC, add letters when the deal shape demands them. A $60K SaaS deal with a single decision-maker and a 45-day cycle does not need Paper Process as a separate workstream — legal runs in the final two weeks and the rep handles it. A $400K enterprise deal with a security review, a procurement process, and two competing vendors needs MEDDPICC because the two extra letters have their own timelines and their own owners.
The thing not to do is run all 8 letters on every deal by default. The overhead kills adoption. Reps stop filling the fields, managers stop coaching them, and the scorecard becomes a form people lie to. Pick the version that matches the deal shape, and let the framework shrink when the deal shrinks.
When MEDDIC beats BANT, SPIN, and Challenger
MEDDIC lives alongside other frameworks, not in place of them. The debates on LinkedIn — "MEDDIC vs Challenger," "SPIN vs MEDDIC" — mostly miss that the frameworks answer different questions. A rep running a modern enterprise deal uses three of them on the same call without noticing.
| Framework | Where it wins | Where it falls short | MEDDIC verdict |
|---|---|---|---|
| BANT | Inbound volume, short cycle, one buyer | Enterprise deals, multi-threaded, long cycle | Use for inbound SDR qualification only |
| SPIN | Discovery questioning technique | Opportunity scoring and forecast discipline | Compatible with MEDDIC — use SPIN to ask, MEDDIC to record |
| Challenger | Commercial teaching and reframing pitch | Does not qualify, does not score the deal | Pair with MEDDIC — Challenger for the narrative, MEDDIC for the qualification |
| MEDDIC | Complex enterprise deals, forecast accuracy, multi-threaded qualification | Short-cycle inbound, transactional SMB | The one to install when deals are >$50K ACV and cycles run >30 days |
The pairing that actually works in the field is Challenger + SPIN + MEDDIC running in parallel on the same discovery call. Challenger shapes the narrative ("here is the insight that changes how you should think about this problem"), SPIN supplies the question ladder (Situation → Problem → Implication → Need-Payoff), and MEDDIC records what the call surfaced against the 6 checkpoints. None of the three cancels out the others.
BANT is the one framework MEDDIC actually retires, for anything above SDR qualification. Budget, Authority, Need, Timeline was built for a simpler era of shorter cycles and smaller committees. For the full comparison — with the hybrid model most modern teams run — see our MEDDIC vs BANT breakdown. The short version: BANT for the 30-second inbound pass, MEDDIC for everything downstream.
How to qualify a deal with MEDDIC in a 45-minute discovery call
A 45-minute discovery call can cover 5 of 6 letters cleanly if the rep keeps the clock and anchors the conversation to pain. The sequence matters more than the script. Here is how the best AEs run it — stopwatch, not stopwatch-optional.
- 00:00–00:05
Frame the call
Agenda, time check, "here is what I want to get to by the end." Never start without setting the outcome — a discovery call without a declared outcome is a conversation.
- 00:05–00:15
Identify Pain (I)
Open with the business pain question: what is broken, who notices, what does standing still cost. Everything else in MEDDIC anchors to pain — skip this and the call turns into a feature walk.
- 00:15–00:25
Metrics (M) + Economic Buyer (E)
Quantify the pain into a number. Then trace upward: "who ultimately signs for something this size?" Two letters in ten minutes if the pain is real.
- 00:25–00:35
Decision Criteria + Process (D + D)
How will they evaluate. Who reviews. What the timeline looks like. Draw the step-by-step on a shared doc — if the buyer edits it, the deal is real.
- 00:35–00:42
Champion (C)
Close the call by asking your contact to go to bat. "What would it take for you to personally advocate for this inside?" The answer tells you if you have a champion or a contact.
- 00:42–00:45
Next step commitment
A discovery call without a calendared next step is a dead lead. End with the next meeting on both calendars, the stakeholders for it, and the one artifact you will send beforehand.
The letter that usually gets missed on the first call is Champion. Reps hear enthusiasm, mistake it for advocacy, and walk off the call marking the contact as "champion confirmed." Test the word before you use it: ask your contact to do one specific internal thing — loop in a stakeholder, push back on procurement, rebook the next meeting. If they follow through, champion. If they do not, contact.
The second common miss is Decision Process. Reps ask "what is the timeline?" and take "a few weeks" as an answer. A real process has named gates, named owners, and named dates. Draw it on a shared doc during the call. If the buyer edits the doc in real time, the deal is forecastable. If they do not engage with it, the process is your fiction, not theirs.
The MEDDIC scorecard: how to grade every deal 0–10 per letter
A checklist says "do I have an Economic Buyer? Y / N." A scorecard says "how strong is my Economic Buyer, on a 0–10 scale?" The difference is the difference between a forecast that slips and a forecast that lands. Grade every deal on every letter. Force the honest score, not the aspirational one.
| Letter | 0 / 10 (weak) | 10 / 10 (strong) |
|---|---|---|
| Metrics | No number discussed | Buyer gave a specific dollar / percentage, CFO would recognize it |
| Economic Buyer | Never met, no name | Met in person or on a call, confirmed budget authority |
| Decision Criteria | Generic "ease of use, price, support" | Named 5 criteria with weightings, knows who votes on each |
| Decision Process | No timeline or steps | Written step-by-step with dates and owners, legal + procurement mapped |
| Identify Pain | Nice-to-have language | Quantified, stakeholder-owned, has a cost-of-inaction date |
| Champion | No advocate | Proven advocate — has pushed back on your behalf at least once |
The pipeline-inspection rule most teams end up at: no deal enters commit unless Economic Buyer, Decision Process, and Pain are all graded 7/10 or higher. The other three letters flex by deal stage — Champion and Decision Criteria build over time, Metrics can solidify late. But commit without a verified Economic Buyer and the forecast becomes a lottery ticket.
The scorecard replaces the CRM stage as the primary discipline. Stage is where the rep thinks the deal is; scorecard is what the deal actually has. Running both keeps managers honest during 1:1s — "you moved this to Proposal, but the Economic Buyer score is 4. Walk me through that."
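The commit gate described above is simple enough to express as a few lines of logic. Here is a minimal sketch in Python — the letter names and the 7/10 floors come from this article; the function name and data shapes are illustrative, not a real CRM integration:

```python
# Minimal sketch of the MEDDIC commit gate described above.
# Letter names and the 7/10 floors follow the article; the data
# shapes and function name are illustrative.

COMMIT_FLOORS = {            # letters that must score >= 7 before commit
    "economic_buyer": 7,
    "decision_process": 7,
    "identify_pain": 7,
}

def commit_check(scores: dict) -> tuple:
    """Return (can_commit, list of letters that fail the gate)."""
    gaps = [letter for letter, floor in COMMIT_FLOORS.items()
            if scores.get(letter, 0) < floor]
    return (len(gaps) == 0, gaps)

deal = {
    "metrics": 8, "economic_buyer": 4, "decision_criteria": 6,
    "decision_process": 7, "identify_pain": 9, "champion": 5,
}

ok, gaps = commit_check(deal)
# economic_buyer scores 4 < 7, so the deal stays out of commit
print(ok, gaps)   # False ['economic_buyer']
```

Running a gate like this next to the CRM stage is what makes the "you moved this to Proposal, but the Economic Buyer score is 4" conversation possible in a 1:1.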
The 7 rollout mistakes that quietly kill MEDDIC adoption
Most MEDDIC rollouts fail for the same seven reasons. The framework works; the install breaks. If the team is six months in and reps are "forgetting" to fill the scorecard, odds are one of these is the cause.
- 1
Rolling it out as a training, not a workflow
Reps sit through the session, nod, and go back to writing notes in Gmail. MEDDIC only sticks when it is wired into the call tool, the CRM, and the pipeline review — not a PDF in a shared drive.
- 2
Making the scorecard a CRM field dump
Adding 18 new required fields per deal kills adoption in two weeks. The scorecard works as a 6-field summary, not a 40-field form. Fewer fields, better data.
- 3
Treating "champion" as "anyone who returns your email"
Contact ≠ champion. If the scorecard marks every returned email as a champion, pipeline inspection loses its teeth. Raise the bar: champion = has advocated at least once.
- 4
Skipping the economic buyer until legal review
By the time legal shows up, the economic buyer is a 2-week delay. Meet them by stage 3 or expect the deal to slip a full quarter.
- 5
Using BANT as a shadow process underneath
Teams quietly keep BANT running in the CRM while pretending MEDDIC is live. Deals get qualified twice, reps resent the second pass, and the scorecard stops getting filled. Pick one. Ship it.
- 6
Never coaching MEDDIC on a live call
The framework has to be coached in the moment — during discovery, on the call, not in a post-mortem two weeks later. Reps do not learn qualification from a slide; they learn it from a manager saying "ask this next."
- 7
Measuring completion rate, not deal quality
Filling the 6 fields is not the same as qualifying. A 100% completion rate with 0% win rate is a vanity number. Grade the letters, not the boxes.
The root cause underneath all seven: treating MEDDIC as a process to impose on reps, rather than a workflow to install in the tool they already use. Framework plus coaching plus live-call wiring equals adoption. Any two without the third and the rollout stalls within a quarter.
The 4 metrics that prove MEDDIC is actually working
MEDDIC either moves four specific numbers in the first two quarters, or it is not working. Measure these before rollout and at the 60-day and 120-day marks. If the numbers flat-line, the framework is being filled out, not run.
| Metric | Baseline | 120-day target | Why it moves |
|---|---|---|---|
| Forecast accuracy | ±35% | ±10% | The one number MEDDIC moves harder than any other. Fewer surprise slips mean the VP Sales walks into QBR with credibility. |
| Average deal size | Baseline | +20–25% | Full-MEDDIC adopters report 24% larger deals because Economic Buyer conversations surface bigger budget than the champion-only path. |
| Win rate on qualified | Baseline | +15–18% | Higher win rates come from fewer commits on deals missing an economic buyer or a real pain. |
| Sales cycle length | Baseline | −15% | MEDDIC surfaces decision-process gaps earlier, so legal and procurement start sooner. Shorter tail, same close. |
Forecast accuracy is the headline number. A CRO running MEDDIC cleanly walks into board meetings with a forecast that lands inside ±10% — versus the ±35% most mid-market orgs quietly accept. That gap is the difference between a boring, credible VP Sales conversation and a quarterly surprise. Source on the adoption numbers: MEDDIC Academy's practitioner benchmarks, which track adopter win rates and deal size across roughly 500 enterprise SaaS teams.
Deal size and win rate follow with a lag. Both typically move at the 90–120 day mark, once reps are fluent enough to push Economic Buyer conversations earlier and walk from deals that fail the pain test. Sales cycle compresses last — the byproduct of decision-process gaps surfacing sooner because the rep mapped them on the first call.
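The forecast-accuracy band in the table is worth pinning down, because teams measure it differently. One common convention — quarter-end error as a fraction of the committed forecast — can be sketched in a few lines of Python (the function name and sample numbers are illustrative):

```python
# One common way to compute the forecast-accuracy band in the table:
# absolute quarter-end error as a fraction of the committed forecast.
# Function name and sample numbers are illustrative.

def forecast_error(committed: float, closed: float) -> float:
    """Absolute error as a fraction of the committed forecast."""
    return abs(closed - committed) / committed

# A quarter committed at $2.0M that actually closes at $1.85M:
err = forecast_error(2_000_000, 1_850_000)
print(f"{err:.1%}")   # 7.5% -- inside the ±10% target band

# For scale: the ±35% baseline means a $2.0M commit landing
# anywhere between $1.3M and $2.7M.
```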
How Gangly installs MEDDIC into every call workflow
The framework only works when the scorecard is filled during the call, not two weeks later. That is where Gangly sits — the sales workflow system that runs MEDDIC inside the rep's existing call tool, not in a separate form the rep resents.
- Call Prep Engine — pre-loads the 6-letter scorecard for the account before the rep opens Zoom, using the CRM context plus LinkedIn data. The rep walks in knowing what is already graded and what the call has to prove.
- Live Call Coach — surfaces MEDDIC prompts when the call drifts. If the rep is 18 minutes in and the Economic Buyer has not come up, the coach flags it in-window. No post-call regret, no "should have asked that."
- Post-Call Notes — drafts the updated scorecard from the call transcript. Rep reviews the 6 letters, adjusts the scores, and syncs to HubSpot or Salesforce in one click. The CRM field dump becomes a 30-second review, not a 20-minute form.
The principle that makes MEDDIC adoption actually stick: the rep should never have to choose between running the call and filling the framework. The workflow system covers both. Gangly pricing runs $99–$299 per seat per month, with a 14-day free trial and no credit card. First scorecard in under 5 minutes.
Related reading: the MEDDIC vs BANT head-to-head covers when each framework wins, and the discovery call framework breaks down the 12 questions that pull MEDDIC out of a cold first call.
Install MEDDIC in the call
Score the deal while the call is still running.
14-day free trial. First MEDDIC-graded call in 5 minutes. No credit card.
Frequently asked questions
What does MEDDIC stand for in sales?
MEDDIC stands for Metrics, Economic Buyer, Decision Criteria, Decision Process, Identify Pain, and Champion. It is a B2B sales qualification framework created in 1996 at Parametric Technology Corporation (PTC) by Dick Dunkel and Jack Napoli, under sales leader John McMahon, designed to help reps qualify complex enterprise deals consistently. Each letter is a checkpoint a rep verifies during discovery — if any letter is still missing late in the cycle, the deal is flagged as at-risk in the forecast.
Is MEDDIC still relevant in 2026?
Yes — and arguably more relevant than ever. Roughly 73% of SaaS companies selling above $100K ACV use some version of MEDDIC or MEDDPICC, according to industry practitioner research. As buying committees have grown (the average B2B deal now involves 6–10 stakeholders), the framework's multi-threaded qualification model has outlasted simpler frameworks like BANT. The 2026 evolution is not replacing MEDDIC — it is installing MEDDIC into the call workflow itself, so reps score live instead of post-mortem.
What is the difference between MEDDIC and MEDDPICC?
MEDDIC is the original 6-letter framework. MEDDPICC adds two letters: Paper Process (the legal and procurement track that runs parallel to sales) and Competition (the named competitor and battle plan). MEDDPICC is the version most enterprise sales organizations run in 2026 because deals over $250K ACV typically have both a separate legal workstream and an identified competitor. For deals under $100K ACV with a short cycle, core MEDDIC is usually enough.
How do I actually run MEDDIC on a discovery call?
Anchor the call to Pain first — what is broken, who notices, what it costs to do nothing. Pain is the hook that makes every other letter relevant. Then work outward: quantify the pain into Metrics, trace the budget to the Economic Buyer, map the Decision Criteria and Process, and close by testing for Champion by asking your contact to personally go to bat. A 45-minute discovery call can cover 5 of 6 letters if the rep stays disciplined and does not let the call drift into a feature walk.
What is the difference between MEDDIC and BANT?
BANT (Budget, Authority, Need, Timeline) is a simpler 4-criterion framework designed for inbound SDR qualification on short-cycle deals. MEDDIC is a 6-criterion framework built for long-cycle, multi-threaded B2B deals. BANT asks "is this a lead worth a demo?" MEDDIC asks "is this deal worth a forecast commit?" Most modern sales teams run BANT at the SDR-to-AE handoff and MEDDIC from discovery to close — same deal, different depths.
How many MEDDIC letters do you need to close a deal?
All six, with honest scores. A deal with strong Metrics, Pain, and Champion but no verified Economic Buyer is a deal that will slip in the final two weeks. Teams that grade deals on a 0–10 scale per letter and refuse to forecast anything below 7/10 on Economic Buyer and Decision Process see forecast accuracy improve from roughly ±35% to ±10%. The framework only works when every letter gets graded honestly, not when five out of six are enough to commit.