Senior Customer Success · Strategic Enterprise

Mike Bogdan

I run strategic enterprise customer success the way customers need it run — operational, evidence-driven, and quiet during the moments that matter most.

Most recently at Cloudflare across strategic enterprise CSM and partner success engagements: media, sports, financial technology, industrials, IT services, and platform partnerships — including The Associated Press through the 2024 U.S. election cycle, MLB, Warner Bros. Discovery, Binance, Carrier, Kyndryl, and a strategic cloud-platform partnership. 15+ years across Apple consumer channel sales and Cloudflare enterprise CS. Claude is part of the daily practice, not a separate workstream.

🗳️ AP · 2024 U.S. Election
Supported The Associated Press through the 2024 U.S. election cycle by coordinating a 48-hour staffed-support plan with global support leadership — TAM handoffs, joint escalation channel, real-time internal/customer coverage during one of the highest-stakes operational windows of the year.
🏢 Strategic Enterprise Book
Supported customers across media, sports, technology, financial technology, industrials, and IT services — including Stack Overflow, MLB, Warner Bros. Discovery, Binance, Carrier, Kyndryl, and a strategic cloud-platform partnership. Consumption-based and seat-based engagements.
🔎 Portfolio-Scale Visibility
Built a multi-thousand-account portfolio audit and a customer-migration scoping dashboard used in partner sales and renewals cycles. AI-assisted analytics tied directly to QBR and renewal conversations.
🎯 11 quarters / $84M+
Consecutive quarters at or above quota at Apple, on an annual channel territory of $84M+. Designed the onboarding toolkit that supported a 2x increase in US Channel Sales headcount.

Approach

Strategic enterprise customer success is operational work. The customer relationship is the surface; what holds it up is whether the operating model behind it actually runs — escalation paths that people trust before they need them, renewal motions that don't show up as a surprise, account-level visibility that survives a team change.

Good customer-success conversations start with empathy for how the customer's business actually works. You learn how the customer makes money, what they're accountable for internally, and what their definition of "the platform held up" looks like — before you propose changes to how they use it.

AI is part of the daily practice. I use it to do the analytical work that historically sat on a separate team or didn't happen at all: portfolio audits, account-level reporting, dashboard prototypes, customer-facing material at QBR. The judgment calls — what to escalate, who to bring into the room, when to push back, when to absorb — are still relationship work. AI handles the data half so I can spend more of my time on the customer half.

In customer-success terms: renewals the customer expects and trusts. Expansions grounded in actual usage and roadmap fit. Escalations that resolve faster because the operating layer underneath them already exists. Strategic moments — like an election night — that the customer doesn't have to think about, because the plan was built before it mattered.

Success stories

Success story · 01

Election Night, Quietly

The Associated Press runs the vote-tally infrastructure that powers national election calls. Election cycles are their highest-stakes operational window of the year — and a primary cyber-target window.

Problem The customer's most consequential 48 hours of the year was approaching with no purpose-built coverage plan.
Action Coordinated a 48-hour staffed-support plan with global support leadership: TAM handoffs, joint escalation channel, real-time internal/customer coverage.
Impact AP went through election night without operational drama on the platform side. AP has since expanded on the platform.

This was less a project than a posture. AP didn't need a deck about how prepared we were. They needed to know — before the night, in concrete terms — that the operating model behind the platform would hold up if something went sideways at 2 a.m. on November 5th.

I worked with global support leadership to design a 48-hour staffed-coverage plan: TAM handoffs at fixed intervals, a joint escalation channel that internal and customer-side stakeholders were already in before the night began, and a real-time internal coverage rhythm so that any escalation had a clear owner at any hour.

The right outcome here was the absence of drama. Election night ran. The customer's operational view of Cloudflare during that window was steady, predictable, and well-staffed.

The judgment call worth naming: most of the value happened before the night. The work was not the heroics — it was making sure heroics weren't needed.

The hard part was deciding what would be on the plan and what would be off it. The night was the easy part, because the plan had already done its work.

Success story · 02

The Customer Map and the Upsell Engine

A digital experience platform (DXP) partner with a large, complex embedded customer base. The partner had account teams ready to grow the relationship — they just didn't have a usable customer-level view to act on.

Part A — The upsell engine

Problem The partner's account team didn't have a customer-level operating view: no per-customer usage picture, no entitlement-to-usage mapping, no upsell signal grounded in what each customer actually did with the service.
Action Built an upsell engine in Claude with internal MCP connections: SQL into customer-usage data (total traffic, blocked traffic, bot traffic, etc.), entitlement-to-usage mapping, and web lookups on customer domains to estimate which services suited them — Page Shield for payment-processing sites, API Gateway for API-heavy domains, Bot Management for high-automation traffic. Tied signals into established package pricing for self-serve quote generation and a downloadable PDF report the partner's account team could present to the customer.
Impact Partner's account team enabled to upsell with confidence; self-service replaced one-off requests; per-customer commercial conversations grounded in actual usage rather than catalog pitch.

The partner's account team wasn't blocked on willingness; they were blocked on view. Every conversation with a customer required someone to manually pull usage data, manually cross-reference entitlements, manually figure out what services might fit, and manually frame a commercial pitch — and that work didn't scale. The result was that obvious upsell opportunities went unaddressed because the operating layer underneath the account-team conversation didn't exist.

I built it. Working with Claude and the internal MCP connections, I pulled SQL against customer-usage data and started piecing together the broader picture — total traffic, blocked traffic, bot traffic, API patterns. I mapped per-service entitlements so the team could see what each customer was actually using out of what they'd already paid for. Then I had Claude analyze the patterns and look for upsell signals — sustained attack traffic suggesting Bot Management, payment-processing domains suggesting Page Shield, API-heavy traffic suggesting API Gateway.

I closed the loop on the commercial side. The signals fed into the established package pricing, so the engine produced a per-customer quote the account team could discuss without going to deal desk. The whole view rolled up into a single web page the team could pull up for any customer — twelve months of usage on a Plotly graph, the upsell signals called out, the pricing tied to specific package fit. Best of all, they could export a clean PDF report for the customer with or without pricing.
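The signal logic above can be sketched in miniature. This is an illustrative stand-in, not the production engine: the thresholds, field names, and profile shape are hypothetical, and the real signals came from SQL over live usage data rather than a dict.

```python
# Hypothetical sketch of the upsell-signal rules described above.
# Thresholds and field names are illustrative, not the production logic.

def upsell_signals(profile: dict) -> list[str]:
    """Map a per-customer usage profile to candidate service conversations."""
    signals = []
    total = profile.get("total_requests", 0)
    if total == 0:
        return signals
    # Sustained automated traffic suggests a bot-management conversation.
    if profile.get("bot_requests", 0) / total > 0.30:
        signals.append("Bot Management")
    # Payment-processing pages suggest client-side script protection.
    if profile.get("handles_payments", False):
        signals.append("Page Shield")
    # API-heavy traffic suggests API security and gateway tooling.
    if profile.get("api_requests", 0) / total > 0.50:
        signals.append("API Gateway")
    return signals

example = {
    "total_requests": 1_000_000,
    "bot_requests": 400_000,
    "api_requests": 650_000,
    "handles_payments": True,
}
print(upsell_signals(example))  # all three signals fire for this profile
```

The point of encoding the rules this way is repeatability: the same pass runs against every customer in the book, so the account team gets a consistent signal instead of a one-off analysis.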

Part B — The migration tracker

Problem A multi-year, three-pronged migration affecting 300+ existing customers (and any new ones that joined during the migration window). End-customer production environments at stake. No central tracking of which customers needed what work, on which path.
Action Extended the operating view into a migration tracker: pulled DNS records across the install base, ran NS lookups to identify customers who'd need static IPs (a known migration blocker), tracked all three migration paths simultaneously, and surfaced which customers would need the most work to migrate. Tied in active orders, upcoming renewals, and MoM/YoY revenue and churn so the migration view sat next to the commercial view.
Impact Customer-level migration risk became visible and actionable. The partner ran the migration off the same operating layer they were running upsell off of.

The migration wasn't a single move — it was three parallel paths, and which path each customer landed on depended on technical details that no one had at customer level. DNS readiness was the first question. NS-lookup details determined which customers would hit the static-IP requirement, which was the single biggest source of migration friction. Active order state, renewal timing, and revenue trajectory determined whether moving a particular customer made commercial sense in that quarter or the next.

I built the layer that made each of those questions answerable at the customer level. The migration tracker shared the same data plumbing as the upsell engine — so the partner's account team was running both motions off the same operating view, with the same customer records, the same per-customer history, and the same pricing context. The migration view also surfaced where the heaviest lift was concentrated, so the team could sequence customers on operational difficulty rather than reacting to whichever escalation got loudest.

The dashboard isn't the artifact. The artifact is the operating model — per-customer usage, signals, pricing, and migration state in one view, runnable by the partner's team without me in the room. Anyone can hand a partner a dashboard. Giving them an operating model is the work.

Success story · 03

Holding the Roadmap to Account

An enterprise cloud platform partner where the majority of ACV came from the partner's own consumption of the underlying service. New product introductions kept stalling on missing feature requests — and the conversations that defined those asks were scattered across half a dozen tools.

Problem NPI deals blocked on missing feature requests. The partner-stakes context for each ask lived in email, Google Chats, and Jira tickets. There was no single source of truth, no way to hold product-team commitments to account.
Action Built an agentic feature-request tracker with Claude Code — ingested the scattered conversation, captured the technical requirement and partner-stakes context for each request, tracked which PMs had committed to which updates, auto-tagged PMs on Confluence for accountability, and pulled Jira and GitLab status to track follow-through.
Impact Faster ship times. A shared source of truth both sides could point to. Specific evidence to re-engage product with when commitments started drifting.

The deals weren't blocked on price. They were blocked on a specific set of essential features that had been requested, discussed in the right rooms, occasionally committed to, and then somehow drifted. The reason the drift was hard to manage from the partner-success side was structural: the context for each request lived everywhere except in one place. Some of it was in email. Some in a Google Chat thread between three engineers. Some in a Jira ticket that hadn't been updated in two months. The shape of what the partner actually needed — and why — got lost between systems.

I built the tracker that lived where that context belonged. Pulling from email, chat, and Jira via Claude Code, it captured each feature request with its technical detail and — critically — the partner-stakes context: what deal was waiting on it, what customer was affected, what commercial outcome was downstream. It tracked which PMs had committed to which updates and when, and auto-tagged them on Confluence so the accountability lived in their own working surface, not just my notes. It pulled Jira and GitLab status on the underlying engineering work, so progress against commitments was visible without a separate check-in.

The tracker reframed the conversation. The partner could see what the product team had committed to, the product team could see what was at commercial stake, and I could re-engage product the moment a commitment started slipping — with specifics rather than impressions. Several of the blocked features shipped on accelerated timelines once the accountability layer was visible.

The product team wasn't the problem. The lack of a shared operating layer was. Once both sides had one, the meetings got shorter and the roadmap moved.

Success story · 04

What the Field Knew

T-Mobile. Apple's brief was to drive iPad attach across the carrier. T-Mobile's actual problem was something different — but you only heard it if you spent enough time in their stores to surface it.

Problem The partner was missing SMB lead-gen targets, and the iPad and Apple Pencil story was sitting silently unsold across thousands of retail conversations.
Action Built a partner-specific enablement workshop — talk tracks, demos, conversion skills — co-designed with Apple's SMB development team. Piloted in NYC with a 4-day workshop for the Times Square office.
Impact ~20–30% SMB lead lift in the NYC patch with a meaningful iPad sales lift in the same cohort. Apple's training team scaled it virtually for national rollout. AT&T and Verizon alliance teams later brought the content into their own carrier partnerships.

The Apple brief was straightforward: drive iPad attach across the T-Mobile territory. The field told a different story. Spending real time on store visits and in district leadership meetings, what kept surfacing wasn't an iPad problem — it was an SMB lead-generation problem T-Mobile owned internally, with their own targets and their own pressure to hit them. iPad with Apple Pencil was a near-perfect tool for that exact use case, but the store associates didn't know how to bring it into a conversation, didn't know how to showcase what it could do, and didn't know how to read SMB-shaped intent in a customer who'd walked in for something else.

The brief and the partner's problem didn't intersect. The enablement program had to.

I worked with Apple's SMB development team to build the SMB Speaker Series — an enablement workshop with talk tracks, hands-on demos, and conversion skills built specifically for the SMB-conversation moment in a carrier-retail context. When we walked T-Mobile's NYC district leadership team through it, what they heard was Apple paying attention to their internal commercial pressure, not just selling them on Apple's product roadmap. They committed to a 4-day workshop for the full team at their Times Square office.

The lift showed up immediately — roughly 20–30% in SMB leads within the NYC patch, with a meaningful iPad sales lift tracking the same training cohort. The success was strong enough that Apple's training team picked it up, revamped it for virtual delivery, and ran it nationally. Once the AT&T and Verizon alliance managers saw the model, they pulled the content into their own carrier programs.

The hard part was hearing the partner's actual problem instead of the one Apple's brief assumed. The rest was design work. Knowing the partner's business better than they sometimes know it themselves is mostly a function of showing up, listening to the field, and treating what they tell you as the data.

Working with AI

I use AI as a working tool in the customer-success practice, not as a branding point.

In my day-to-day, I use Claude (Claude Code / OpenCode) as an analytical partner for the infrastructure behind the success stories on this site: portfolio audits, account-level reporting, package-pricing analysis, migration-readiness checks, dashboard prototypes, customer-facing material at QBR, and workflow automation that shortens the gap between "I have a question about this account" and "I have a usable answer."

The model does not make the strategic calls. It helps me move faster once the call is made — and lets me bring more evidence into the customer conversation than I otherwise could.

The clearest friction I've hit working this way was the first pass on a 3,500-account audit. These tools work incredibly fast and want to please — garbage in, garbage out. I'd given the model weak inputs, and the output reflected that: customers weren't grouped into verticals correctly, the signals pointing to specific service use cases were misread, and what came out wasn't actionable. I stepped back into build mode and worked through each variable explicitly — what to look for, how to interpret each signal, where to pull cleaner source data. A long iterative session, like a rock tumbler. The lesson: these tools are eager to ship the first plausible answer. When the analytical layer is doing real work, slow down and align with the model on interpretation — don't paper over the output.

The work generally falls into a few patterns:

Portfolio-scale audits. Workflows that normalize large sets of account, usage, and customer-footprint data to surface patterns that would be difficult to find manually. The install-base customer map and DNS-readiness audit both started here.

Customer-facing analytics. Account-level views, renewal trend analysis, expansion modeling, and the kind of material that shows up in a QBR. The value isn't asking AI for an answer; it's building a repeatable analytical pattern that survives the cycle.

Dashboards the customer keeps using. Interfaces and trackers that customer and partner teams can keep using after the initial analysis is done. The goal isn't dependency on me; it's a working operating view they can run themselves.

Operational automation. Draft customer communications, intake tracking, order-flow support, and other workflow automation that reduces overhead and makes follow-through easier.

What changes when a CSM has analytical horsepower they historically lacked: the customer conversation stops being bottlenecked on whether someone has time to manually pull and reconcile data. The follow-up email that used to take a day takes an hour. The QBR slide that used to require a separate analyst doesn't.

This is the kind of work I want to keep doing.

Background

Cloudflare — Customer Success and Partner Success, 4+ years. Started as a CSM during a major plan-to-offering transition, taking on a 295-customer book — roughly 10x the typical starting portfolio. Promoted to Senior CSM on the Strategic Enterprise book, supporting media, sports, technology, financial technology, industrial, and IT services customers, including The Associated Press through the 2024 U.S. election cycle. Then promoted to Senior Partner Success Manager, owning post-sale success across three of Cloudflare's largest platform partners, including a strategic cloud-platform partnership where the majority of ACV came from the partner's own consumption.

Apple — Channel Sales, 12 years. Ran an $84M+ annual territory across consumer and wireless channel. 11 consecutive quarters at or above quota. Designed the onboarding toolkit that supported a 2x increase in US Channel Sales headcount. Ran a partner-website pilot that drove +16% sell-through on Apple Watch (amazon.com) and +21bp on MacBook Pro (bestbuy.com).

Resume available on request via email. LinkedIn: linkedin.com/in/mike-bogdan.

Contact

If you're building a strategic enterprise customer success team and want to talk about how this work would translate, reach out.