
A comprehensive Q&A from GrowthSutra’s 202X Vision LinkedIn Live session, “Moving Your AI – From PoC to Proof of Profit for a Faster Scale.”


Host: Vishwendra Verma, Founder & CEO, GrowthSutra
Expert Panelists:

  • Gayatri Devi Jayan, Founder, ADVICE; Chapter President, Applied AI Association; former Global BU Head, L&T Group

  • Praveen Sawant, Independent Global Advisor – Board Governance & Technology; former CDO and CTO across Indian and multinational enterprises

  • Arun Mathew, EVP – Systems & Technology, MullenLowe Lintas Group; 25+ years in IT operations on both client and consulting side


Setting the Stage: From Experiments to Accountability


Q: Why did you choose “From PoC to Proof of Profit” as the core theme for this session? What’s changed in boardrooms between 2025 and 2026?


Vishwendra Verma: In Davos, Satya Nadella warned that if AI only benefits tech firms, it is a bubble. ServiceNow’s vice chairman called the first wave of AI “throwing spaghetti at the wall,” and Wipro’s CEO summed up the shift: 2025 was experimentation, 2026 is accountability. Accountability now means one thing: does this move the P&L or not? Research shows 95% of AI pilots fail to deliver measurable ROI, only about 10% scale, and just 6% achieve bottom-line impact, which makes the gap between PoC and profit the most urgent problem for leadership teams.


Segment 1 – The AI “Valley of Death”

Personal “Valley of Death” Moments


Q: You’ve all lived through AI and tech “valley of death” moments. Can you share one experience where the PoC worked technically, but value died on the way to production?


Praveen Sawant: We rolled out an AI-based employee sentiment chatbot across two South India offices as a pilot. The dashboards were excellent, with rich slicing and dicing by function, team, and level, and the model’s analytics worked as promised. But we made a fatal error: there was no workflow or ownership to act on the insights. Employees shared sentiments expecting change, but nothing happened, responses dropped sharply in subsequent cycles, a trust gap emerged, and eventually the CFO refused to fund expansion. The project slid to the back burner—not because AI failed, but because process and accountability were never designed.


Gayatri Devi Jayan: In IT services, most AI efforts are still stuck in experimentation. GenAI became a flashy toy everyone wanted, but few organizations defined what “success” actually means. Unlike ERP or CRM rollouts that sat on a stable baseline, AI tools change so fast that teams hesitate to fix a baseline to build on. We are still largely in PoC mode, and the more we learn, the more we realize what we don’t know. That’s why calling every PoC failure a “valley of death” is misleading—some failure is the very purpose of PoCs. The real valley appears when we confuse experiments with production-ready bets without rethinking metrics and ownership.


Arun Mathew: From sitting on both client and consulting sides, the pattern I see is simple: the technology often works, but processes are half-baked. You can deploy AI anywhere, but if you haven’t fully thought through how data will flow in, how outputs will be consumed, and who will act on them, you will see guaranteed failure. In AI projects where I was a bystander, the missing piece was always workflow design, not the model’s capability.


Four Failure Modes: Learning, Workflow, Trust, Investment


Q: MIT research highlights four failure modes from PoC to profit—learning gap, workflow gap, trust gap, and investment gap. Which one kills most AI initiatives, and how do they reinforce each other?


Praveen Sawant: In reality, all four are interlinked and create a domino effect. PoCs typically tackle a narrow slice of the process to prove a specific outcome, so by design they don’t capture the full “as-is” and “to-be” ecosystem. Even big consulting firms struggle to map business processes without gaps, so learning gaps are baked in. Those gaps lead directly to workflow issues—unaccounted subprocesses, interfaces, and owners—which then manifest as trust gaps when users experience inconsistent or unexplained outcomes. Once trust erodes, investment dries up, because business leaders are not buying tools; they are buying outcomes in business language. When the grapevine is stronger than the data, one or two bad stories can collapse support and funding.


Gayatri Devi Jayan: From the AI engineering side, there are two stacked views. On the build side, nobody today can claim to be an end-to-end enterprise AI expert; the domain is too broad, so learning gaps are inevitable. Those learning gaps cause workflow gaps, and both together create trust deficits. On the enterprise side, leaders often want AI impact quarter-to-quarter, but AI demands a different posture—ring-fenced budgets, patient R&D, and a focus on “return on intelligence” before classical ROI. If you invest to upgrade people, data, and workflows, you convert learning gaps into capability. If you only expect quarterly P&L lift, you trigger the very domino that kills the initiative.


Arun Mathew: We saw the same pattern with earlier technologies like SAP HANA. Initially, there was huge hype about speed and analytics, but adoption took 15 years because organizations had to evolve processes, not just buy compute. AI is the same. If you treat it as a new tool that must serve process and business, your gaps become design inputs. If you treat it as a magic project to “do AI,” those gaps become landmines.


ROI vs. “Return on Intelligence”


Q: If ROI is still far away for many AI initiatives, what should teams measure in the interim? And how should they think about metrics?


Gayatri Devi Jayan: I’m not saying “don’t watch ROI,” I’m saying rename it temporarily as “return on intelligence.” Right now, every PoC—successful or failed—is buying you intelligence about how AI behaves in your specific context. That learning has compounding value. Traditional metrics like “5% productivity gain” don’t work if you’ve never baselined current productivity or clarified what exactly you will reduce—headcount, hours, or error rates. You must define success metric by metric for your business and industry, and accept that classical SDLC metrics don’t directly transfer. AI requires rewiring how you define productivity and value, not just applying old KPIs to new tech.


Host note: External studies group AI value into four buckets—cost and productivity, revenue growth, speed and acceleration, and trust and governance—but several panelists argued that these categories are not new; what must change is how leaders define success in a world where AI can both create and destroy value at scale.


Wait-and-Watch vs. First-Mover Advantage


Q: Is it wiser to be a fast follower and wait for AI to stabilize, or to push for first-mover advantage with an “AI-first” approach?


Gayatri Devi Jayan: “Wait and watch” is not a viable strategy anymore. The right framing is: be the first learner, not necessarily the first hyped adopter. Just as some of us built vocabulary for GMAT or CAT even if we never sat for the exams, you need to build AI literacy even before you know exactly where you’ll apply it. Understand the tools, understand your business, and then decide where AI genuinely makes sense. Sitting on the fence and ignoring the wave is the only truly dangerous option.


Praveen Sawant: It’s not a binary choice. You shouldn’t chase first-mover status just because a tool is available, and you also shouldn’t freeze in the name of caution. The real question is where there is a defensible business case. Be a fast learner about AI and a disciplined investor in well-defined use cases. “First mover” only matters where the business, not the technology, gains a lasting edge.


Arun Mathew: Think in terms of low-hanging fruit versus strategic bets. Use what’s already bundled—many enterprises have AI capabilities sitting inside tools like Office 365 and Copilot but barely use them. Start there, learn cheaply, and then decide where to double down.


Cost, Infrastructure, and Hidden Line Items

The Infrastructure Shock


Q: Research suggests infrastructure can consume 60–80% of AI budgets, and 80–85% of organizations misestimate AI costs by 10–25%. What are the hidden cost line items that never appear in PoC budgets but dominate production P&Ls?


Gayatri Devi Jayan: For India’s price-sensitive market, the truly underestimated costs are people and learning, not just GPUs and cloud. When you upskill developers from legacy stacks to Python, ML, and GenAI, their market value doubles, and you must decide who pays for that uplift. There are also no mature estimation standards for AI projects. In classical SDLC, data appeared late in the lifecycle; in AI, high-quality data is the starting point. That rewires how you scope, estimate, and staff. Training, reskilling, and sustained experimentation are recurring line items that rarely show up in PoC spreadsheets but hit hard in production.


Praveen Sawant: On the enterprise side, data OpEx is a nasty surprise. In one project, we used a cloud computer-vision API to auto-tag images. As accuracy improved into the high 90s, the dataset exploded and so did our hot-storage bills on Azure. We had underestimated storage tiering, housekeeping, and data lifecycle policies, so cloud costs quietly escaped our budgets. You must treat storage, data movement, and inference as design parameters from day one, not as “IT’s problem” that you’ll handle later.
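The tiering problem Praveen describes can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only—the per-GB prices are hypothetical placeholders, not actual Azure rates—but it shows why leaving an exploding image dataset entirely in hot storage quietly multiplies the monthly bill versus applying a lifecycle policy that moves cold data down the tiers.

```python
# Illustrative sketch (hypothetical prices, not actual cloud rates):
# how monthly storage cost shifts when data is tiered by access recency
# instead of being kept entirely in hot storage.
PRICE_PER_GB_MONTH = {"hot": 0.018, "cool": 0.010, "archive": 0.002}

def monthly_storage_cost(gb_by_tier: dict) -> float:
    """Sum the monthly cost across storage tiers."""
    return sum(gb * PRICE_PER_GB_MONTH[tier] for tier, gb in gb_by_tier.items())

# 50 TB of auto-tagged images: all hot vs. tiered by a lifecycle policy.
all_hot = {"hot": 50_000}
tiered = {"hot": 5_000, "cool": 15_000, "archive": 30_000}

print(f"all hot:  ${monthly_storage_cost(all_hot):,.2f}/month")
print(f"tiered:   ${monthly_storage_cost(tiered):,.2f}/month")
```

Under these assumed prices the tiered layout costs a third of the all-hot baseline—exactly the kind of design parameter the panel argues must be decided on day one, not discovered on the invoice.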


Services vs. Product Economics


Q: Indian IT services firms face a tougher challenge than product companies. They can’t build one internal tool; they must support hundreds of client contexts. How does this services reality complicate cost and infrastructure planning?


Gayatri Devi Jayan: Services firms must standardize their own stack while honoring client choices. For internal productivity, you may standardize on a suite—say, Copilot plus your chosen cloud—and lock that behind corporate firewalls. But for clients, your teams must navigate multiple clouds, models, and security regimes. That creates a double cost curve: one for your own AI backbone and one for client-specific environments. The only way to avoid a margin squeeze is to be transparent with clients about people costs, training investments, and data costs, and to actively advise them onto architectures that are both AI-ready and cost-conscious. This is where true “consulting” begins—helping clients avoid shiny but expensive complexity they don’t need.


Build vs. Buy – Where Does Building Make Economic Sense?


Q: Research shows a 2:1 success edge for vendor partnerships over internal builds, and building often costs more and takes longer. When does it actually make economic sense to build AI instead of buying or partnering?


Arun Mathew: Use a simple rule: buy the commodity, build the moat. General-purpose intelligence—large models, generic copilots—is a solved problem you can rent. There is rarely a case for replicating those capabilities inside your organization. What you must build is the layer that sits on your proprietary data and your proprietary workflows. That’s where competitive advantage lives. Clean your data, clarify your processes, and then layer your own models or domain-specific fine-tuning on top of existing LLMs. In that stack, building is about encoding your unique knowledge, constraints, and decisions—not about re-creating a frontier model from scratch.


Praveen Sawant: A one-line test: build only where you need to truly own decisions and accountability; everywhere else, buy or partner.


India’s Bet on Small Language Models


Q: For India specifically, is the bigger economic opportunity in small/domain-specific language models (SLMs) over renting giant generic LLMs?


Arun Mathew: In domains where India is structurally unique—agriculture, local law and regulation, public-sector logistics, railways—there is a strong case for domain SLMs. Soil patterns, climatic nuances, crop cycles, and subsidy structures are India-specific; training models specifically on those datasets can yield disproportionate value. The same applies to regulatory compliance and supply-chain patterns that are unique to our ecosystem. In those spaces, domain SLMs are not just cheaper—they’re more accurate, more trusted, and strategically defensible.


Segment 2 – The Breakthrough Blueprint

Deployment Patterns That Avoid Pilot Purgatory


Q: Industry research points to three patterns that help avoid “pilot purgatory”: incremental envelope, shadow production, and phased federation. Which one do you recommend for PoC-to-production transitions?


Arun Mathew: Shadow production is the most practical path. You run the new AI system in parallel with existing processes, compare outputs, and only gradually move users and decisions across as confidence grows. This approach systematically addresses the learning, workflow, and trust gaps before you ask finance to commit big dollars. It also acts as insurance—if the AI layer misbehaves, you haven’t bet the company on it yet.
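The shadow-production pattern Arun recommends can be sketched as a small routing harness. This is a minimal illustration with made-up names, not a production framework: every request is answered by the incumbent system, while the AI candidate runs in parallel and is only logged and scored—users never see its output until the agreement rate earns their trust.

```python
# Minimal sketch of a shadow-production harness (names are illustrative):
# the incumbent system serves every request; the AI candidate runs in
# parallel, and its output is compared but never returned to the user.
from typing import Any, Callable

class ShadowRunner:
    def __init__(self, incumbent: Callable, candidate: Callable):
        self.incumbent = incumbent
        self.candidate = candidate
        self.matches = 0
        self.total = 0

    def handle(self, request: Any) -> Any:
        live = self.incumbent(request)        # users only ever see this
        self.total += 1
        try:
            shadow = self.candidate(request)  # AI runs in the shadow lane
            if shadow == live:
                self.matches += 1
        except Exception:
            pass                              # candidate errors count against it
        return live

    def agreement_rate(self) -> float:
        """Confidence signal for deciding when to cut traffic over."""
        return self.matches / self.total if self.total else 0.0

# Example: a rules-based incumbent vs. a (stubbed) AI candidate.
runner = ShadowRunner(
    incumbent=lambda x: "high" if x > 100 else "low",
    candidate=lambda x: "high" if x > 90 else "low",
)
for amount in [50, 95, 120, 200]:
    runner.handle(amount)
print(f"agreement: {runner.agreement_rate():.0%}")
```

The agreement rate becomes the evidence you bring to finance: cut decisions over to the AI lane only after it has tracked the incumbent closely enough, on real traffic, to justify the bet.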


Governance, DPDP, and “AI Sandwich”


Q: With DPDP and similar regulations, AI governance is non-negotiable. How do you balance “move fast, break things” with accountability for AI outputs?


Praveen Sawant: It’s a false dichotomy to frame governance versus innovation. The right framing is innovation with governance. A useful mental model is Gartner’s “AI sandwich”: at the bottom, you have clean data and controls by design; in the middle, the AI model; at the top, a human in the loop. Innovation happens in the model layer, but it is constrained by robust data controls below and human responsibility above. Problems arise only when governance is retrofitted as a box-ticking exercise after systems are shipped, instead of being baked into architecture and workflows from day one.
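The three layers of the “AI sandwich” can be illustrated as a tiny pipeline. Everything here is a hedged sketch—function names, the toy classifier, and the 0.8 review threshold are all invented for illustration, not taken from Gartner or any framework—but it shows governance baked into the architecture rather than bolted on afterwards: data controls below, the model in the middle, a human gate on top.

```python
# Hedged sketch of the "AI sandwich" layering (all names illustrative):
# controlled data at the bottom, the model in the middle, a human on top.

def validate_input(record: dict) -> dict:
    """Bottom layer: data controls by design (schema check + PII scrub)."""
    if "text" not in record:
        raise ValueError("missing required field: text")
    record = dict(record)
    record.pop("email", None)  # drop personal data before it reaches the model
    return record

def model_predict(record: dict) -> dict:
    """Middle layer: the AI model (a trivial keyword stand-in here)."""
    label = "complaint" if "refund" in record["text"].lower() else "query"
    return {"label": label, "confidence": 0.62}

def human_in_the_loop(prediction: dict, threshold: float = 0.8) -> dict:
    """Top layer: low-confidence outputs are routed to a person."""
    prediction["needs_review"] = prediction["confidence"] < threshold
    return prediction

def ai_sandwich(record: dict) -> dict:
    """Innovation lives in the middle; the outer layers constrain it."""
    return human_in_the_loop(model_predict(validate_input(record)))

print(ai_sandwich({"text": "I want a refund", "email": "user@example.com"}))
```

The point of the structure is that the model layer can be swapped or retrained freely, while the validation and human-review layers stay fixed—governance as architecture, not as a post-facto checklist.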


Top-Down vs. Bottom-Up AI Strategy


Q: Should AI be driven top-down by a central strategy group, or bottom-up by teams bringing use cases? Do some industries suit one better than the other?


Gayatri Devi Jayan: It has to be both. People on the ground—nurses, agronomists, operators, analysts—understand the real pain points and where AI could help; they supply reality. Leadership holds the responsibility for pacing, ethics, and system-wide impact; they supply practicality. Bottom-up discovery of problems plus top-down guardrails and capital allocation is the only sustainable combination.


Praveen Sawant: If AI becomes just a “science project” owned by a central lab, it will die; if it becomes just a compliance/risk project, it will also die. It must be co-owned by business, tech, and risk teams—ground reality plus strategic direction.


Final Takeaways – A Practical Playbook


Q: If you had to leave leaders with a short playbook to move from PoC to proof of profit, what would it include?


Panel Summary (hosted by Vishwendra Verma):

  1. Business problem first, not technology first. Start from P&L-relevant problems and journeys; AI is a means, not an end.

  2. Scale in steps, not big bang. Favor shadow production and incremental envelopes over all-or-nothing cutovers.

  3. Build governance in from day one. Treat DPDP-style constraints and “AI sandwich” controls as design inputs, not post-facto hurdles.

  4. Measure relentlessly—but broaden your lens. Track “return on intelligence” and learning metrics alongside hard ROI, and redesign success metrics for an AI-shaped world.

  5. Shift from fun-to-do projects to purpose-built outcomes. The top 5% of organizations are already making this shift; the remaining 95% risk staying stuck in pilot purgatory.


Session Date and Recording


This Q&A is based on GrowthSutra’s “202X Vision: Moving Your AI – From PoC to Proof of Profit for a Faster Scale” session held on 22 January. You can watch the full conversation and framework deep dives on GrowthSutra’s YouTube channel to see the complete discussion in context.
