6 min read

Week 7 of the year 2025

Welcome to another edition of EveryOpsGuy!

First, a small note about editing - my previous post was lightly touched by AI. I got recommendations from ChatGPT on verbiage. This was immediately caught and pointed out by a friend and reader of this newsletter. Henceforth, I'll skip the shenanigans and only have my own words delivered to you. If I do ever use an editor, it'll be a human editor instead of an AI one. Let's keep it simple, shall we?!

This week, we'll talk about DeepSeek (again!), how countries around the world are looking at LLMs going forward, some malware news, home lab automation, and a somewhat expected acquisition.

First, there's DeepSeek.

After the dust settled around DeepSeek's initial R1 launch, researchers got down to the business of testing whether the model is as good as DeepSeek claims. They're discovering it's not.

Sure, it runs at a tenth of the cost of OpenAI's models and is every bit as elegant as promised. But researchers are discovering that it can be jailbroken relatively easily compared to other models. Granted, other models have had time to mature and have their jailbreaks tamped down.

Companies have become so confident that Anthropic, makers of the Claude LLM, have come up with "Constitutional Classifiers" and a challenge. These classifiers act as guardrails, protecting Claude against jailbreaks at both the input and output stages. Anthropic has put up a $30,000 bounty for anyone who can break these classifiers. The classifiers themselves have been trained on synthetic data that emulates jailbreaks while producing as few false positives as possible. Let's see who cracks them first.

DeepSeek R1's jailbreaks allow users to extract harmful information from the model, such as how to create bioweapons or how to run a social media campaign promoting self-harm in teens. Such jailbreaks, besides being bad for society in general, are also problematic for enterprises looking to self-host the model or leverage the DeepSeek API, because they can lead to leaks of confidential information, or allow consumers to buy new cars for only a dollar.

Aside: I'd be remiss if I didn't explain what jailbreaks are. A jailbreak, in the software context, is any technique that makes a system do something it was not intended to do, bypassing the gates put on it by the system's creators. In iOS, a jailbreak means installing Cydia and other third-party apps not approved by Apple through its App Store. It can also mean theming iOS, or changing which app loads when you hold the camera button at the bottom of the lock screen - features that were not available natively in iOS until recently. In the LLM context, a jailbreak simply means getting a response the creators of the model don't want it to return. Most often, this means responses promoting self-harm, bomb-making, or harming others. Such responses make a model unsafe for general use and can lead to embarrassing, libelous, or illegal situations for companies. All of this makes an LLM that can be easily jailbroken unfit for use by the mass market and by enterprises. DeepSeek R1 may be an amazing LLM, but we'll have to wait for a much safer R2, perhaps, to see real industry adoption.

YourCountryNameGPT

As I mentioned last time, DeepSeek R1 has deep-seated cultural biases. The rise of the model has prompted countries all over the world to consider how to proceed in this polarizing environment. India, for example, has warned government officials against using both ChatGPT and DeepSeek's eponymous app.

India is also developing its own foundational LLM, which would support many Indian languages, reflect the Indian political context, and ensure data security and locality.

The EU is thinking about AI independence as well. It has launched the OpenEuroLLM initiative with the twin objectives of a fully regulation-compliant model and a boost to local research institutions and companies.

DeepSeek, OpenAI, Google, and Meta might want their LLMs to work in every part of the world. But the same data security and technology hegemony issues that complicate current web services have put LLM services in the crosshairs of countries (and whatever the definition of the EU is) as well. So it's no wonder that everyone is rushing to make their own LLMs. DeepSeek just made it easier for everyone to do so by showing both how much cheaper it has become, and how critical it is, to train your own model. If you want your population to not rely on the political motivations of others for answers to their day-to-day queries, you're going to develop your own LLM. It's the equivalent of publishing your own maps now.

Let's talk malware

Kaspersky claims it has found malicious code in mobile apps geared towards scanning screenshots on the user's device for cryptocurrency-related information, with the intention of stealing their coins. The twist is that this malware has been found not just in Android apps, but also in iOS ones. Apple promotes its walled garden as a place where user data is protected from all kinds of bad actors. So this first known case is certainly disturbing.

The attack flow is that when a user tries to use the chat feature in one of the infected apps, the app requests access to the photo gallery and then uses OCR to scan all available photos for screenshots of crypto data.

This is not a sophisticated attack. The vector is limited to screenshots in user photo libraries. The apps are bespoke and not very popular. Besides, Apple has done a splendid job of letting users grant access to only selected photos instead of their entire libraries. It's also very easy for the company to remove such offending apps quickly.

The threat, though, is that this could be a proof of concept from one bad actor to showcase the technology to others. Once the tech is proven, they can sell access or the SDK for a price and others can implement these in other, more popular apps.

These kinds of software supply chain attacks are difficult to root out at the consumer level because you never know what's baked into the app you're using. But at the developer level, using a solution that blocks malicious packages from ever entering your development environment makes it that much harder for bad actors to hijack your end product for their purposes.

Home Lab corner

As an Ops junkie, it's a pleasure for me to run my own homelab. I don't have much of a setup right now - just a couple of machines running twenty or so services, either on Docker or natively. I keep them accessible while I travel via Cloudflare Tunnels and Tailscale. While I'm not worried much about data security on these services (the heaviest service I run is an RSS feed reader), I am bothered by the lack of updates to the Docker images. I prefer stability, but I'd also like to get major updates to the services I use. This is why I'm experimenting with Watchtower after reading this article on the topic.

Watchtower is a very simple tool. There's no GUI (though a web UI is planned) and not much to configure if you're using off-the-shelf Docker containers as I am. It simply connects to your Docker instance and monitors all currently running containers for image updates. As soon as it finds one, it restarts the container with the new image. Simple.
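Getting started really is that simple. A minimal sketch, using the project's published image name and the default Docker socket path (check the Watchtower docs against your own setup): Watchtower runs as a container itself, and you hand it the Docker socket so it can inspect and restart its siblings.

```shell
# Run Watchtower as a container; it needs the Docker socket
# to list running containers and restart them on new images.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --interval 86400  # check for updated images once a day
```

With no further flags, it watches every running container on the host.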

This simplicity might not be for everyone, maybe not even for me. I like to maintain explicit control over software updates, reading and vetting release notes before applying them. This is generally good practice - let the open source community look through a new release for bugs, zero-day vulnerabilities, and general tomfoolery before you adopt it. That caution definitely makes sense in an enterprise environment, where stability is paramount. But my homelab is a space for me to experiment and learn, so it's the perfect place to try out something as hands-off as Watchtower.

Winds of change

The last item on the agenda is SolarWinds. The company burst into our collective IT consciousness in 2020 after it was targeted in a 2019 network breach, during which attackers were able to inject malicious code into its Orion management system. Since this product was used by thousands of organizations in the US and around the world, including the US government, the scale of the attack was unprecedented.

No surprise, then, that the company is going private for $4.4 billion. The price still represents a 35% premium per share, which shows the value of the company and its wide array of products and services, including the website monitoring service Pingdom, which I've used in the past for my own sites. Hopefully, this move will help SolarWinds shed its past and rebuild its brand image.

That's all for now, folks.

Hope you liked this edition of EveryOpsGuy. If you did, send it to someone who likes newsletters! You can also follow me on Mastodon - @everyopsguy