Ironclad BLOG

What To Expect In 2025: Predicting the Year In AI

January 16, 2025 4 min read

1. AI power and performance may plateau

It’s one of the biggest questions around AI: How fast will we see LLMs progress in terms of power and capability? For a while it seemed as though we might be heading into a plateau, as new models arrived less frequently and with smaller gains in capability.

Sure, the just-announced o3 model from OpenAI seems to contradict that narrative, with the company claiming that it offers human-like intelligence (we’ll know more once it hits general release late in January). And there are plenty of other models expected soon that could exceed expectations. Still, I will be surprised if we see anything this year that feels like the huge leap forward we were seeing back in 2023.

Should we really be surprised? We’ve seen this before. After all, Natural Language Processing (NLP) has advanced in fits and starts for decades. Periods of rapid advancement (e.g. fastText in 2016) have been followed by years of slow progress (e.g. the seeming stagnation in the years before ChatGPT exploded into the mainstream in 2022).

The pattern is clear: Every so often a new paradigm fuels a sharp increase in innovation for a while, then slows down. If we are headed for a plateau, it could last a while and carry significant implications for the future of the AI space.

2. More expansion and innovation in interface design

If the Great LLM Slowdown does occur, it won’t necessarily be a bad thing. It will give us a chance to figure out how to use this new technology. Unable to rely on huge leaps in underlying power, AI innovators will have to find new ways for users to engage with the technology.

Just because the underlying tech isn’t getting more powerful doesn’t mean it won’t feel more powerful, or achieve better outcomes. This is because LLM power is only one element behind AI impact. I expect to see substantial improvements in other areas. Already, we’re seeing companies like OpenAI and Google explore novel combinations of new technology, for example “advanced voice mode” in ChatGPT or the ability to natively output images, speech, and text in Gemini 2.0.

I expect we will see more use of novel design approaches, such as multi-agent architectures (embedding multiple AIs within a solution that collaborate and validate each other) and model routing (dynamically deciding to source results from a choice of several models).
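To make the idea concrete, here is a minimal sketch of what model routing and multi-agent validation can look like in code. The model names, thresholds, and the trivial "checker" step are all hypothetical stand-ins, not a real implementation:

```python
# Model routing: dynamically pick a model based on simple request traits.
# Model names and thresholds below are illustrative assumptions.

def route(prompt: str, needs_reasoning: bool = False) -> str:
    """Return the name of the model best suited to the request."""
    if needs_reasoning:
        return "large-reasoning-model"   # slow, expensive, most capable
    if len(prompt) > 2000:
        return "long-context-model"      # handles lengthy inputs
    return "small-fast-model"            # cheap default for simple queries

# A multi-agent flavor: one agent drafts an answer, a second validates it
# before it reaches the user (both stubbed out here for illustration).
def answer_with_validation(prompt: str) -> str:
    draft = f"[{route(prompt)}] draft answer to: {prompt}"
    approved = len(draft) > 0            # stand-in for a real critique step
    return draft if approved else "escalate to human review"
```

In a production system the routing rules would typically be learned or cost-aware rather than hard-coded, but the shape is the same: a cheap decision layer in front of several models.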

Most of all, I expect substantial investment and progress in the area of user interface and design. What OpenAI did with Canvas suggests one interesting direction: by embedding a powerful word processor-like UI into their chatbot, they enabled a very different experience for their users. We did something similar with Jurist, which we designed to give Legal a truly capable assistant optimized for creating, modifying, and adapting written legal documents.

Why is this important? Because text chat, while undeniably powerful, has real limits in both effectiveness and user comfort. It depends heavily on how prompts are phrased, a hit-and-miss challenge that can lead to results that feel erratic or off the mark. By creating a more structured and defined experience around the chatbot, designers can also reach users who might otherwise feel intimidated by AI.

3. AI wins over more skeptics and resisters

While AI has come very far, very fast, it’s important to remember that not everyone has been open to the technology.

Plenty of enterprises set a firm “no AI” policy. Here in Legal, we saw some firms and major corporate clients impose total bans on the use of AI. In some creative roles and fields, “ChatGPT” became a dirty word. In most schools and academic institutions, use of AI on assignments is still heavily restricted or forbidden.

2025 could be the year the tide shifts. Now that it is clear that AI is nowhere near replacing human beings outright for most challenges, attitudes have been softening, a trend I expect to gain momentum. Outright rejection of AI in certain industries will give way to cautious adoption, or even excitement, as practitioners realize that their jobs are still secure and that AI can eliminate some of the drudgework.

As we all live with AI, and experience first-hand its time-saving and productivity-expanding benefits, we can start to see attitudes shift. I expect there will be more understanding that AI can be a legitimate element of human creative work. We will see norms and expectations evolve, recognizing that, say, using AI to help outline or sketch out an initial idea is not the same as using it to create the final result.

4. Laws, policies, and guidelines around AI will clarify

There is a pattern with disruptive new innovations. The technology comes first, the social and cultural norms next, and the legal and regulatory structures last of all.

We are now starting to see restrictions and limitations emerging around AI. So far the responses have ranged from stringent and far-reaching regulations, like the EU Artificial Intelligence Act, to emerging industry-specific regulations, to voluntary commitments such as corporate policies. But it is clear that much more is on the way.

Courts and policymakers are debating whether AI itself can hold copyright or if the rights should default to the creators of the AI system or its users. Courts in multiple countries, including the US, have tended to reject AI-generated patents. AI companies themselves are involved in numerous lawsuits over their use of copyrighted data in training. While the legal and regulatory picture is still largely unsettled, there is already clear progress.

This matters! Remember: It isn’t necessary to remove every bit of legal risk from AI to use it. You can also be selective in how you engage with it to manage the risk. Just to offer one example: I recently heard of a gaming company that used AI extensively in generating early options for visual and audio game assets, only to then have human artists and designers take over the rest of the process to guard against loss of the IP.

Every bit of guidance and clarification helps to drive more investment in, and adoption of, AI. And as big copyright holders get involved, representing large and powerful financial interests, we might expect AI to go through a progression similar to what happened to the music industry after the emergence of Napster.



Cai GoGwilt is CTO and Co-Founder of Ironclad. Before founding Ironclad, he was a software engineer at Palantir Technologies. He holds a B.S. and M.Eng. in Computer Science from the Massachusetts Institute of Technology.