In an interview with businessline, Mohammed Rafee Tarafdar,
Chief Technology Officer, Infosys, shared insights into the company’s evolving
approach to AI, emphasising how the landscape has shifted from large-scale,
generalised models to more specialised, domain-specific AI solutions. Some
takeaways from the interview:
If Agentic AI provides autonomous decision-making without human intervention, will we see companies scale without adding much manpower?
There will be phases with Agentic AI. With AI assistants, we are largely doing augmentation, which means doing the same work using these AI tools for higher productivity and efficiency. The next step beyond that will mostly be automating a few tasks, either through bots or RPA tools.
However, the autonomous state of agents is the last level of evolution. It will
take some time before we get there because many steps are required in between.
Plus, initially, much of the adoption of these technologies will require a human in the loop because accuracy will improve only over time. Getting to a level of maturity where we can move to an autonomous mode will take time. In the next three years, some tasks will be automated, but we will also need human intervention to improve and iterate.
Infosys recently came out with SLMs when LLMs were all the rage. What was the intention? Is cost the reason some large Indian IT services companies are hesitant to invest in LLMs?
There are different roles for LLMs and SLMs. We wanted to ensure the SLMs we built are specialised to a business or a domain, that they use a lot of permissively licensed data, and that the run cost is lower. For that particular domain, they can operate at the same level as, or better than, LLMs. That was the rationale. We were also strategic in choosing the areas. For example, we picked banking because whatever model we use here is being integrated into our Finacle product. So, it’s a vertically integrated AI solution offered as part of Finacle. Second, we did one for IT operations. We are integrating this into the LEAP platform, which runs operations for our clients, to improve productivity and efficiency.
Another is cybersecurity, which is getting integrated into our Cyber Next platform to run SOC (security operations centre) operations for our clients. We picked areas where we have platforms so we can integrate and create value for those domains. SLMs and LLMs will coexist in most enterprises: when you need broader, generalised knowledge, you need reasoning capabilities for which you must rely on an LLM. But when you are doing specialised, domain-intensive tasks for a business area, you want to do them in a secure, compliant manner with IP you retain, and that is where SLMs have a role. The cost proposition is also better than that of LLMs.
We realised the time and cost to build these larger models have come down. You can see with DeepSeek that you can build at a fraction of the cost and time, but there has to be a business reason. We found an opportunity to build it in a domain-specific manner that aligned with our IP strategy. Tomorrow, if we find value in midsize models, we will look at them. But we are seeing it more from a value and business perspective. I don’t think cost and time are a big factor, given both have come down.
If LLMs are at the
core of AI agents, what foundational models are you using?
We use over 10 different models. It’s a combination of commercially available models like GPT, Gemini and Claude, open-source models like Llama and Mistral, and our specialised models, which are our fine-tuned SLMs. Our strategy is Poly AI. This means we want to pick the best model, the best AI provider and the best platform depending on the task.
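To make the Poly AI idea concrete, here is a minimal, hypothetical sketch of task-based model routing. The task categories, provider names and model identifiers below are illustrative assumptions for this article, not Infosys’s actual routing table or APIs.

```python
# Hypothetical sketch of a "Poly AI" style router: pick a model per task.
# All task categories and model names below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ModelChoice:
    provider: str
    model: str


# Broad reasoning goes to a large general LLM; domain-specific work goes
# to a specialised SLM, mirroring the split described in the interview.
ROUTING_TABLE: dict[str, ModelChoice] = {
    "general_reasoning": ModelChoice("commercial_llm", "general-llm"),   # assumed name
    "banking_workflow": ModelChoice("in_house", "banking-slm"),          # hypothetical SLM
    "it_operations": ModelChoice("in_house", "it-ops-slm"),              # hypothetical SLM
    "cybersecurity": ModelChoice("in_house", "security-slm"),            # hypothetical SLM
}


def route(task_type: str) -> ModelChoice:
    """Return the configured model for a task, falling back to the general LLM."""
    return ROUTING_TABLE.get(task_type, ROUTING_TABLE["general_reasoning"])


if __name__ == "__main__":
    print(route("banking_workflow"))  # routes to the domain SLM
    print(route("unknown_task"))      # falls back to the general LLM
```

The design choice this sketch illustrates is simply that the router, not the application, decides which model serves a request, so an enterprise can mix commercial LLMs, open-source models and in-house SLMs behind one interface.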
What percentage of
deals in your pipeline are AI or GenAI related?
Almost every deal today has some form of AI embedded, even the large deals, because our clients expect us to use GenAI to deliver value efficiently.