Discussion about this post

Sabyasachi B.:

The Sarvam context is the one worth sitting with: Sarvam-105B was trained on roughly 4,000 GPUs by a 15-person team, and reportedly outperforms DeepSeek R1 and GLM 4.6 Air on several benchmarks. That is not just impressive; it challenges the implicit assumption that frontier AI requires hyperscaler-scale compute. If those benchmark results hold up in third-party evals, they have real implications for how India prices its sovereign AI ambition. The Emergent data point is the other piece worth unpacking: $100M ARR in eight months from Bengaluru, with 70% of revenue coming from the US and Europe. That is proof India can build globally relevant AI products, not just Indian-market products. These two stories sitting side by side at the Summit may be the most important signal yet: both the infrastructure play and the application play are showing real proof points at the same moment.
