3 Takeaways From the IndiaAI Summit
The biggest stories at Bharat Mandapam, and what they mean for India’s AI future.
The India AI Impact Summit is over. $250B in commitments were announced. 88 nations signed the New Delhi Declaration. Sarvam launched its sovereign models. TCS partnered with OpenAI. Infosys partnered with Anthropic. Google committed $15B to Vizag. Everyone declared India’s AI moment had arrived.
We were there. And beneath the announcements, three big truths emerged: one about sovereign AI, one about Indian IT's future, and one about who actually captures value from India's AI demand.
1. Sovereign AI Is a National Project, Not Just a Technical One
The entire summit week revolved around Sarvam AI.
Their pavilion drew bigger crowds than Google's or OpenAI's. PM Modi wore their Kaze smart glasses on camera. They launched Sarvam-30B and Sarvam-105B, mixture-of-experts models trained from scratch on domestic compute and supporting all 22 scheduled Indian languages. The 105B reportedly outperformed China's DeepSeek R1 and GLM 4.6 Air on several benchmarks. They announced partnerships with Qualcomm, HMD, and Bosch to push models to edge devices. They launched a ChatGPT-style assistant called Indus. They even shipped hardware.
It was, without question, the most important technical announcement of the week. And what makes it even more impressive is the context. Sarvam built these models with access to roughly 4,000 GPUs under the IndiaAI Mission and a core engineering team of about 15 people. Frontier labs in the US and China typically train models on clusters 10–50× larger, with teams far bigger.
India’s government backed 12 labs under the IndiaAI Mission’s Foundation Models pillar, allocating roughly ₹2,000 crore in grants to accelerate domestic model development. While Sarvam was one of them, a few others showed promising work as well. BharatGen launched Param2, a 17B-parameter model. Gnani.ai demonstrated Vachana, a multilingual voice-cloning system supporting a dozen languages. The broader effort is still early, and many of these programs are multi-year bets. But the summit made one thing clear: building frontier AI capability is incredibly difficult.
The goal of sovereign AI isn’t necessarily to build the single best model in the world. It is strategic capability. Nations increasingly view frontier AI the same way they view space programs, nuclear programs, or semiconductor fabs. Not every experiment succeeds. Not every research lab produces breakthroughs. But the capability itself becomes a pillar of national security and technological independence.
From that perspective, even a single success like Sarvam matters enormously. It proves that India can train large models domestically, build language systems across its own linguistic landscape, and deploy AI infrastructure without depending entirely on foreign labs.
At the same time, sovereign models will likely coexist with the best systems developed globally. The most successful AI startups in India will probably combine both worlds: using world-class foundation models while layering India-specific intelligence on top, across language, workflows, regulation, distribution, and domain expertise.
2. From Labour Arbitrage to AI Leverage: The Next Chapter of Indian IT Services
Another major theme of the summit was the wave of partnerships between global AI labs and India’s IT services giants.
TCS announced a multi-dimensional strategic partnership with OpenAI. The agreement includes ChatGPT Enterprise for hundreds of thousands of employees, Codex for software engineering workflows, and joint go-to-market for agentic AI solutions. TCS also signalled plans for a large-scale data centre buildout through its HyperVault platform. Just a day earlier, Infosys announced a strategic collaboration with Anthropic, integrating Claude models into Infosys Topaz and launching an Anthropic Centre of Excellence focused on regulated industries like telecom, financial services, and manufacturing.
On the surface, these announcements looked like traditional partnership deals. But they reflect something deeper: a structural shift in how enterprise technology services will be delivered in the AI era. For decades, the Indian IT services model was built on a simple equation. Complex enterprise work from global companies was decomposed into thousands of smaller tasks and executed by large engineering teams in India.
AI changes that equation. When frontier models can modernise legacy systems, generate documentation, automate testing, and assist with large-scale migrations, the amount of human labour required to deliver the same outcomes declines significantly. That doesn’t eliminate the need for services — but it does change their nature. Instead of providing large pools of engineering labour, IT firms increasingly become AI implementation partners: helping enterprises integrate frontier models into existing systems, workflows, and regulatory environments.
But it would be naïve to pretend this transition won’t have real consequences. At the summit, Vinod Khosla put it bluntly on one of the panels: as AI systems become capable of performing large portions of engineering, documentation, support, and compliance work, many of the tasks currently handled by IT services and BPO teams could eventually disappear. India’s IT services sector generates roughly $350B in annual exports and employs millions of engineers and BPO workers. For a country where IT services has been one of the largest engines of white-collar job creation for three decades, that shift is not just a technology story. It is an economic and workforce transition that India will have to navigate carefully.
3. India’s First AI Unicorns Are Beginning to Emerge
For the past two years, one question has quietly hung over India’s AI ecosystem: When will the first wave of Indian AI unicorns actually appear?
At the summit, the early signals began to materialise. The most striking came from the infrastructure layer. Neysa announced a $1.2B financing package - $600M in equity led by Blackstone alongside $600M in planned debt, valuing the company at $1.4B conditional on milestones. It is the largest AI-focused funding round in Indian history. Blackstone, which has backed global data centre leaders such as QTS, AirTrunk, and CoreWeave, is now making a significant bet on India’s AI infrastructure. Neysa plans to deploy 20,000+ GPUs domestically for training and inference workloads. Today, India has fewer than 60,000 GPUs deployed nationwide. Blackstone expects that number to scale more than 30× to over two million in the coming years.
The second signal came from the application layer. Emergent crossed $100M ARR in just eight months, making it one of the fastest companies globally to reach that milestone. For comparison, Slack took roughly two years, and Zoom took three. More than 6M builders across 190+ countries have already created over 7M applications on the platform. Roughly 70% of revenue comes from the US and Europe, even as the company’s engineering team remains overwhelmingly based in Bengaluru. Emergent did not train a foundation model. Instead, it rode the global vibe-coding wave, building an application-layer product that sits on top of existing models and serves developers worldwide. In doing so, it has become one of the strongest early proof points that globally relevant AI products can be built from India.
And then there is Sarvam. Following the launch of its frontier models at the summit, investor interest in the company has intensified dramatically. While Sarvam has not yet crossed unicorn status, the trajectory is clear, and it represents one of the most credible attempts to build frontier AI capability from India.
Taken together, these signals point to something important. After years of discussion about India’s potential in AI, the first wave of AI-native unicorns is now beginning to form across infrastructure, applications, and foundation models. Some are being built to power India’s own AI ecosystem, while others are building global products from Indian talent. For the first time, both of those stories are unfolding at the same moment.
India AI News Roundup
The most impactful AI developments and announcements shaping India in recent weeks.
Anthropic Valuation Eclipses All Indian IT Combined
NVIDIA deepens early-stage push into India’s AI startup ecosystem
India Earmarks $1.1 Billion for AI & Deep Tech Fund of Funds
UAE’s G42 teams up with Cerebras to deploy 8 exaflops of compute in India
Anthropic opens Bengaluru office and announces new partnerships across India
AI Impact Summit: US, China among 88 signatories to New Delhi Declaration on AI
Startup Signals
Spotlighting brand new emerging AI startups from India every month, early and undiscovered.
Von Neumann Computing - The Personal AI Computer
The thesis is simple and contrarian: cloud is wildly overpriced, and the “cloud tax” is 10x worse than the “CUDA tax.” Von Neumann Computing’s answer is JOHNAIC - a plug-and-play personal AI server that runs AI and SaaS workloads from your office or home, on a normal power socket, with no subscription. The hardware ships with an Nvidia 5070 Ti 16GB GPU, 64GB RAM, 1TB SSD, and an optional UPS, all pre-loaded with a fully open-source software stack. Customers at People+AI (Nandan Nilekani’s EkStep initiative) validated the model — they deployed production workloads on JOHNAIC boxes hosted in their own offices. Still early, still scrappy, but the edge compute + data sovereignty angle is structurally interesting as India’s AI workloads scale beyond what cloud economics can serve.
Eyecandy Robotics - Physical AI Characters
Three founders, all under 25. One product: Tensaur, a physical AI character designed for entertainment and companionship - essentially a robot brought to life through AI. The company was incorporated in Bangalore just weeks ago (January 2026) and immediately selected for Lightspeed’s India Ascends cohort, alongside three other deep tech teams. Founders Alqama Shaikh, Raghuvamsi Velagala, and Mankaran Singh are building at the intersection of physical robotics and foundation models - the same design space as Figure, 1X, and Unitree globally, but aimed at consumer entertainment rather than industrial use. Pre-revenue, pre-product.
https://www.eyecandyrobotics.com/
Autoloops - AI Agents for Meta Ads
Quietly building AI agents that autonomously manage and scale Meta ad campaigns for D2C and e-commerce brands. The headline claim: 47% average ROAS improvement. Autoloops sits in a fast-heating category - agentic ad ops - where AdsGency just raised $12M, Madgicx is scaling aggressively, and Meta itself is embedding its Manus AI acquisition into Ads Manager. The Indian angle matters: India has the highest density of Shopify-powered D2C brands outside the US, and performance marketing agencies here run on razor-thin margins that AI agents can structurally compress. Autoloops is pre-announcement and nearly invisible publicly - exactly the profile of a company worth tracking before the round hits the press.

The Sarvam context is the one worth sitting with: Sarvam-105B trained on roughly 4,000 GPUs with a 15-person team, reportedly outperforming DeepSeek R1 and GLM 4.6 Air on several benchmarks. That's not just impressive; it challenges the implicit assumption that frontier AI requires hyperscaler-scale compute. If those benchmark results hold up in third-party evals, they have real implications for how India prices its sovereign AI ambition. The Emergent data point is the other piece worth unpacking: $100M ARR in eight months from Bengaluru, with 70% of revenue from the US and Europe, is proof that India can build globally relevant AI products, not just Indian-market products. These two stories sitting side by side at the summit may be the most important signal yet: the infrastructure play and the application play are both showing real proof points at the same moment.