
Predictions for the year

I was not going to start off this newsletter with a set of predictions for the year. That seems like a surefire way of derailing the effort.

But I did read through a few predictions over the last few weeks, and I'd like to share them with you.

I'm going to cover AIOps/MLOps, DevOps/DevSecOps, ChipOps, FinOps, and CloudOps. Basically EveryOps. If I've missed out on any "Ops" that you'd like to see me cover, hit reply and let me know!

AIOps/MLOps

First up, AI predictions found here and here.

The crux of the predictions is that the year will belong to Agentic AI and to expanding AI capabilities in various fields. It'll also belong to Small Language Models and Edge Compute, including mobile phones running more custom models. (I know I'm late with this newsletter, and the news about DeepSeek tanking NVIDIA's stock is already out there. I promise I'll cover that in the next newsletter.)

Google (AgentSpace), ServiceNow, and Salesforce (Agentforce) are all building Agentic AI to make it easier for enterprises to integrate AI into their workflows. This will impact developers along with almost every other function - marketing, sales, compliance, vulnerability remediation, to name a few.

Agentic AI is the perfect way for large companies like Salesforce and Google (and vendors like Anthropic and OpenAI) to recoup the costs they've poured into model training and hosting, and it also offers a path forward for ML. Once it succeeds in the enterprise space, it'll drive innovation toward much more fine-tuned Agentic models instead of the massive LLMs we've seen over the last few years. It's a bit like going from a generally knowledgeable human phone operator to a much more constrained IVRS system that can handle far more calls about a very specific set of problems.
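To make that IVRS analogy concrete, here's a minimal sketch of what a constrained agent looks like: the model's only job is to route a request onto one of a small, fixed set of tools. Everything here is hypothetical - the tool functions and the call_llm stub stand in for a vendor SDK like OpenAI's or Anthropic's - but it shows why a small, fine-tuned model can be enough.

```python
# Minimal sketch of a "constrained" agent: the model can only pick from a
# small, fixed toolbox, much like an IVRS menu. All names here are
# hypothetical; a real version would wrap a vendor SDK (OpenAI, Anthropic, etc.).

def reset_password(user_id: str) -> str:
    return f"Password reset link sent to user {user_id}"

def check_order_status(order_id: str) -> str:
    return f"Order {order_id} is out for delivery"

# The whole "agent" is just: fixed tools plus a routing decision.
TOOLS = {
    "reset_password": reset_password,
    "check_order_status": check_order_status,
}

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call that returns 'tool_name arg'."""
    # A fine-tuned small model only has to map a request onto one of the
    # tools above - a far narrower job than open-ended chat.
    if "password" in prompt.lower():
        return "reset_password user-42"
    return "check_order_status order-7"

def handle_request(user_message: str) -> str:
    decision = call_llm(user_message)
    tool_name, arg = decision.split(maxsplit=1)
    tool = TOOLS.get(tool_name)
    if tool is None:
        return "Sorry, I can only help with passwords and order status."
    return tool(arg)

print(handle_request("I forgot my password"))
```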

But will AI Agents replace humans? AI companies' repeated failures this past year to showcase a successful human-replacement model suggest they won't. Agentic AI will be a tool that lets the average worker take on more complex work - in a good way.

The flip side is that AI capabilities won't improve only in the fields we want; they'll also improve in undesirable areas like ransomware, impersonation (see here and here), and much more complex software supply chain attacks, including malicious open source ML models and the ML dependencies themselves.

Further, given Apple's spectacular Apple Intelligence failure, I expect mobile phone companies to work on SLMs (Small Language Models) fine-tuned to the ARM chips they ship in their devices. This will be a key differentiator for mobile companies, which will be able to create locale-specific, much smaller, and faster language models (and hopefully mobile-centric Agentic AI), with a fallback API call to OpenAI to ensure customers continue seeing AI everywhere.
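As a rough sketch of that pattern, picture the phone answering from a small on-device model first and only falling back to a hosted API when the local model isn't confident. The function names and threshold below are hypothetical stand-ins for whatever runtime a vendor actually ships; the shape of the fallback is the point.

```python
# Hypothetical sketch of the "on-device SLM with a cloud fallback" pattern.
# run_on_device_slm() and call_hosted_llm() stand in for the vendor's real
# runtime and whichever hosted API (e.g. OpenAI) they fall back to.

from dataclasses import dataclass

@dataclass
class LocalResult:
    text: str
    confidence: float  # how sure the small model is about its own answer

def run_on_device_slm(prompt: str) -> LocalResult:
    # In reality this would invoke a quantized model tuned for the phone's ARM NPU.
    return LocalResult(text="Turning on Do Not Disturb.", confidence=0.93)

def call_hosted_llm(prompt: str) -> str:
    # Placeholder for a network call to a hosted provider.
    return "(answer from the big hosted model)"

def answer(prompt: str, min_confidence: float = 0.8) -> str:
    local = run_on_device_slm(prompt)
    if local.confidence >= min_confidence:
        return local.text            # fast, private, no network round-trip
    return call_hosted_llm(prompt)   # fall back so the user still "sees AI"

print(answer("Silence my notifications for an hour"))
```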

DevOps/DevSecOps

One good prediction list in the DevOps space is here. According to it, developers will see more IDPs (Internal Developer Platforms) to help with self-service, Platform Engineering-style tasks. I suspect these will include Zero-Trust controls to strengthen security at the application level.

In core DevOps, the focus will remain on security. Threat vectors are multiplying, and bad actors are finding novel ways into corporate and government systems. Critical infrastructure like data networks, backbone systems, and VPNs will keep getting attacked, and one of the biggest ways companies can secure their data and protect their reputation is by securing every bit of code they put out. That's getting harder, because the very foundation of enterprise software - Open Source Software - is itself being attacked more and more consistently.

In the DevSecOps world, expect LLMs to play an even more significant role in everything from code creation and debugging to vulnerability detection and remediation. Moonshots like Devin may fail, and problems like ML-generated spurious CVE reports will continue, but the hype will taper off into real work. Like fuzz testing, which is a great tool if you know what you're doing, ML-based vulnerability scanning needs an in-depth understanding of which kinds of false positives to ignore. Hopefully that burden won't keep landing in the lap of the open source development community.
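To illustrate what "knowing which false positives to ignore" can look like in practice, here's a toy triage pass over scanner findings. The field names, thresholds, and suppression rules are made up for the example - real scanners have their own formats, and the rules would be tuned to your own codebase.

```python
# Toy triage pass over ML-flagged findings: keep what looks actionable,
# drop the classes of false positives you've already learned to ignore.
# Field names, thresholds, and suppression rules are all illustrative.

findings = [
    {"id": "F1", "path": "src/auth.py",        "cwe": "CWE-89",  "score": 0.94},
    {"id": "F2", "path": "tests/test_auth.py", "cwe": "CWE-89",  "score": 0.91},
    {"id": "F3", "path": "src/util.py",        "cwe": "CWE-703", "score": 0.42},
]

# Rules a team might build up over time from reviewing past reports.
IGNORED_PATH_PREFIXES = ("tests/", "examples/")   # non-production code
IGNORED_CWES = {"CWE-703"}                        # noisy category for this repo
MIN_SCORE = 0.7                                   # below this, too speculative

def worth_a_human_look(finding: dict) -> bool:
    if finding["path"].startswith(IGNORED_PATH_PREFIXES):
        return False
    if finding["cwe"] in IGNORED_CWES:
        return False
    return finding["score"] >= MIN_SCORE

triaged = [f for f in findings if worth_a_human_look(f)]
print(triaged)   # only F1 survives; the rest never reach a maintainer
```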

ChipOps

In the chip world, expect competition between the ARM and RISC-V architectures to heat up over who can steal a bigger piece of the ML pie from NVIDIA. ARM has the lead, but NVIDIA is itself weighing which architecture is better suited to ML inference workloads, and RISC-V may win out. That doesn't mean ARM is losing any time soon. It's moving up from mobile computing to PC computing, slowly but steadily displacing Intel over the next decade, with Microsoft, Dell, and Lenovo all opting to build cheaper, more energy-efficient ARM laptops on Qualcomm's many, many Snapdragon chips. And sure, let's count Apple in the worldwide ARM laptop tally too.

FinOps

FinOps will continue to dominate - as in, we'll keep seeing a massive shift toward lower operating costs and continued stress on SaaS companies. Eventually the markets will grow, money will get cheaper, and SaaS will make a comeback as companies spend more to make more. Will that happen this year? That remains to be seen.

CloudOps

SaaS may have its ups and downs, but the Cloud will continue to dominate. Companies will host more private infrastructure in public clouds, and B2C vendors like mobile phone companies will move more of their services there too. Everything is already an API, and everything has moved to Kubernetes. Next up is securing those clusters and scaling them to be even more responsive to bursts of traffic. We've seen that customers are wary of serverless computing; as cloud maturity increases across verticals, more companies will experiment with it. Most will conclude it isn't the "holy grail" of cloud hosting. Every tool has its use, and serverless is great for a very specific set of problems. That's how it'll be used.
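As an example of the kind of narrow, bursty job serverless is genuinely good at, here's a minimal AWS Lambda-style handler that resizes an image whenever one lands in an object storage bucket. The resize helper and bucket names are placeholders; the point is that the function does exactly one thing and the platform absorbs the traffic spikes.

```python
# Sketch of a narrow, bursty serverless job: resize an image whenever one
# lands in object storage. Written as an AWS Lambda-style handler; the
# resize_and_store() helper is a placeholder for the real work (e.g. Pillow
# plus boto3), and the event shape follows S3's notification format.

def resize_and_store(bucket: str, key: str) -> None:
    # Placeholder: download the object, resize it, upload the thumbnail.
    print(f"Resizing s3://{bucket}/{key} -> s3://{bucket}/thumbs/{key}")

def lambda_handler(event, context):
    # S3 sends one notification per uploaded object; a burst of uploads just
    # means more concurrent invocations, which the platform scales for you.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        resize_and_store(bucket, key)
    return {"status": "ok"}

# Local smoke test with a fake S3 event:
if __name__ == "__main__":
    fake_event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                                      "object": {"key": "cat.jpg"}}}]}
    print(lambda_handler(fake_event, None))
```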

Fin

Well, that's it from me. I hope this long-ish writeup didn't scare you. I expect future newsletters to be shorter - lots more links and bite-sized commentary instead of wordy essays. This time I just wanted to cover what I expect the coming year to be like. Let's see how much of it I get right!