How Small, Smart, Applied AI Advances Enterprise Transformation
Insights from Venkata Dakshina Murthy Kolluru (Managing Director, TekFrameworks) and Jagan Mohan Jami (Chief Operating Officer, Acuvate)
About This Episode
In this milestone 10th episode of Coffee Conversations, Jagan Mohan Jami sits down with Dr. Dakshinamurthy V. Kolluru, a rare blend of academic rigor and enterprise pragmatism, to unpack how small, smart, and applied AI is reshaping the future of enterprise transformation.
Drawing on his journey from rocket science to data science, Dr. Murthy shares how decades of AI evolution, from symbolic systems to deep learning to mini language models, are now converging into a new era.
Together, Jagan and Dr. Murthy explore the quiet revolution of small language models, milli models, and applied AI, revealing how enterprises can make smarter, faster, and more cost-efficient transformations.
Key Topics Discussed
- The Real Evolution of AI: Why AI isn’t a sudden phenomenon but a 70-year journey of theory, hardware, and applied innovation, from symbolic reasoning to generative agents.
- Reimagining the Enterprise with AI: How leaders should think about AI in two dimensions: engineering what already works and re-engineering what was previously impossible.
- When AI Fails and Why: Real stories from the trenches, where brilliant models faltered not because of math or code but because of flawed definitions, unrealistic timelines, and missing data fundamentals.
- The Rise of Small, Smart, and Specialized AI: Explore how small language models (SLMs) and milli models can be customized, cost-effective, and privacy-safe for enterprises.
- Building an Enterprise AI Culture: How leaders can create a culture that encourages failing fast, prototyping right, and reskilling deeply, bridging research with real business value.
Want to hear the full conversation? Fill out the form to watch the complete episode and learn how AI at the right scale can unlock massive enterprise value: quietly, intelligently, and sustainably.
From Generative AI to Industrial Robotics - FAQs
The conversation outlines the evolution of AI through four distinct generations:
- Symbolic AI: The era of converting human knowledge into computer rules (e.g., Deep Blue).
- Machine Learning: The shift where computers began generating their own rules based on provided data and outcomes (e.g., Watson).
- Deep Learning: Starting around 2012, this stage involved dumping massive datasets into systems that figured out the correlations themselves.
- Generative AI (GenAI): The current generation. The defining breakthrough of Generative AI Implementation is the removal of the coding barrier, allowing users to control powerful AI using plain English prompts rather than programming languages like Python.
Business leaders should categorize their initiatives into two buckets:
- Engineering Processes (Applied AI): This involves doing existing tasks better. For example, using AI to estimate customs tariffs with 96% accuracy, compared to human experts at 80%.
- Revolutionizing (Generative/New AI): This involves doing things the company never could before. An example is using AI to monitor thousands of machine inputs simultaneously to predict gas leaks, a task difficult for humans.
Understanding the distinction in Applied AI vs Generative AI helps leaders decide whether they are optimizing a current cost center or creating entirely new value.
While Large Language Models (LLMs) get the hype, Small Language Models for Enterprise (SLMs) are often the superior strategic choice for four reasons:
- Privacy: They mitigate data leakage risks, which is critical for Enterprise AI Data Governance.
- Cost: They eliminate expensive token/subscription costs associated with managed LLMs.
- Infrastructure: They can run on existing CPUs rather than requiring massive GPU clusters.
- Specialization: Unlike generic models, SLMs can be trained on “tribal knowledge,” ensuring better adherence to specific Data Governance for Generative AI protocols within the organization.
A major cause of failure is confusing a prototype with a Minimum Viable Product (MVP). A successful AI Prototype vs MVP Strategy adheres to this distinction:
- Prototype: Built in less than one week. It tests a single hypothesis, is not for sale, and requires no rigorous Data Governance for AI or metrics.
- MVP: A product ready for use. It requires proof that the “right tool” (e.g., time series vs. LLM) was used, must have defined performance metrics (because “that which can’t be measured is not science”), and must include explainability for stakeholders.
To manage the frenetic pace of innovation without succumbing to FOMO, Dr. Venkata Dakshina Murthy Kolluru prescribes three pillars for a robust Enterprise AI Strategy:
- Establish R&D: Dedicate a team solely to AI experimentation, separate from billable day-to-day work.
- Form a View Forward Committee: Create a group to analyze emerging tech (AI, Quantum, Meta) and identify which specific disruptions matter to your business.
- Deepen Fundamentals: Invest in training staff on the mathematical fundamentals of AI. While prompting is easy, building reliable, scalable systems requires deep engineering knowledge.