Meet OpenClaw: How an AI Agent Framework Runs My Life
February 6, 2026 · Blog · AI, OpenClaw, Moltbot, Clawdbot
By Jay Ralph, Technical Cloud Architect at Cloudable
I need to make a confession before we go any further. I didn’t write this article. Well, I did, but I also didn’t. Let me explain.
Right now, an AI agent called Quill is drafting these words. Another agent called Satoshi will review them for technical accuracy. And a third agent, Jarvis, will coordinate the whole thing and eventually publish it to this blog. I’m just the bloke who asked for it to happen and then made a cup of tea.
Welcome to my life with OpenClaw.

What Even Is This Thing?
OpenClaw is an AI agent framework. If that sentence made your eyes glaze over, let me try again: it’s software that lets you create and coordinate AI assistants that can actually do things, not just chat.
Think of it like having a team of specialists on call. Each one has their own personality, their own workspace, and their own set of tools. They can work independently, collaborate with each other, and even spawn temporary sub-agents for specific tasks. All orchestrated through natural conversation.
I built it because I was frustrated. Not with AI itself, but with how disconnected all my AI usage was. I’d have a conversation with ChatGPT, get useful output, then manually copy it somewhere. I’d ask Claude to help with code, then paste it into my editor. Every interaction was a dead end. Nothing connected to anything else.
OpenClaw changes that. My AI assistants can read files, run commands, send messages, browse the web, access my calendar, check my email. They live in my actual computing environment rather than in a sandbox somewhere in the cloud.
Meet the Team
Let me introduce you to my current roster. It’s grown a bit.
Jarvis is the main agent. Yes, I know, very original. But the name fits. Jarvis handles direct conversations, coordinates the others, and manages the day-to-day automation. When I message my personal Telegram bot at 2am asking “what’s on my calendar tomorrow?”, Jarvis answers.
Quill is my writer. Give Quill a brief and a deadline, and you get polished content. Blog posts, documentation, emails that require actual thought. Quill has opinions about prose and isn’t afraid to voice them.
Satoshi handles technical review and deep research. Named after, well, you know. When I need something fact-checked or want to understand a complex technical topic, Satoshi digs in.
Aldous does general research. Named after Huxley, because good research often reveals brave new worlds of information you weren’t expecting.
Dash is my software developer. When I need code written, refactored, or debugged, Dash handles it. Built significant chunks of the Moonshot trading bot, handles API integrations, the lot.
Forge does code review and architecture. When Dash writes something, Forge often reviews it. Forge also produced that 34-item prioritised fix list for the trading bot that saved my sanity.
Mark handles marketing strategy. LinkedIn content, social media planning, content calendars. Mark recently churned out weeks of humanised LinkedIn posts for my consultancy.
Sloane covers sales and outbound. Lead research, CRM work, identifying opportunities. Still early days with this one.
Marshall does policy research, particularly around my volunteer police work. Policy papers, procedural analysis, that sort of thing.
Reeve handles daily briefings. Every morning I get a summary of calendar, tasks, team activity, and anything needing attention. Reeve compiles it all.
And there are a few specialists for specific projects: Moonshot for the trading bot, Probe for investigative digging, Pixel for visual work.
The beautiful thing is they work together. I don’t micromanage the handoffs. Jarvis can spawn Quill for a writing task, Quill can ask Satoshi to verify technical claims, and the whole thing happens while I’m doing something else.
The Moonshot Story
Let me tell you about the moment I realised this was actually powerful.
A few months back, I got interested in Solana meme coins. Specifically, the sniping game: buying tokens the moment they launch, before the inevitable pump. I knew nothing about Solana. I’d never touched Rust, never used their SDKs, barely understood how the blockchain worked.
Normally this would mean weeks of learning before I could build anything useful. Instead, I spent an evening explaining what I wanted to Jarvis. A trading bot that could monitor new token launches, analyse them against certain criteria, and execute buys automatically.
Three days later, I had a working sniper bot.
I didn’t write most of the code myself. The agents did. But here’s the crucial bit: I understood all of it. Every function, every API call, every piece of logic. Because building it was a conversation. I’d describe what I wanted, the agent would implement it, I’d ask questions about how it worked, they’d explain, I’d request changes, and round we went.
The bottleneck shifted completely. It wasn’t “can I do this?” anymore. It was “can I explain what I want?”
That’s a fundamentally different constraint. And honestly, it’s a more useful one. Being forced to articulate your requirements clearly tends to expose the gaps in your thinking before they become bugs in your code.
The Force Multiplier Effect
I run an IT consultancy called Cloudable. I’m also a volunteer police officer. I have a life outside of work that occasionally demands attention. Time is, as they say, at a premium.
OpenClaw acts as a force multiplier. Not in the buzzword sense, but in the actual military sense of the term: a capability that makes the whole unit more effective rather than adding another soldier.
Some examples from the past month:
Research: I needed to understand the licensing implications of a particular open-source dependency for a client project. Instead of spending an afternoon reading license texts and Stack Overflow threads, I asked Aldous to research it. Twenty minutes later, I had a summary with citations and edge cases I wouldn’t have thought to check.
Writing: This one’s personal. I’m dyslexic. I fucking hate writing. Always have. It takes me three times longer than most people, and I second-guess every sentence. Having Quill handle first drafts isn’t a convenience, it’s genuinely life-changing. I provide the ideas and direction, Quill handles the words. Blogs, emails that need careful wording, documentation, proposals. The stuff that used to drain me now just… happens.
Automation: Jarvis handles my reminders, checks my calendar, monitors certain inboxes, and prods me when things need attention. Not because I couldn’t do these things manually, but because I’d rather not.
Coordination: When a task spans multiple agents, Jarvis orchestrates. “Research this topic, then write a summary, then have it reviewed, then send it to me.” One instruction, three agents, no babysitting required.
The compound effect is significant. I’m not 10x more productive, that would be ridiculous. But I’m noticeably more productive. And more importantly, I’m productive at the things I actually want to focus on.
The Security Reality
Before anyone starts hyperventilating about AI agents with access to everything, let me explain the setup.
OpenClaw runs on a dedicated Mac Mini that my company bought specifically for this purpose. It’s not sharing resources with client data or sensitive business systems. It lives in its own little sandbox, albeit a well-connected one.
More importantly, there are hard boundaries. OpenClaw doesn’t have access to my bank accounts or anything that can move money. Its email and calendar access is scoped to the accounts I’ve deliberately connected, not my entire digital life, and every expansion comes with appropriate controls. The integrations that exist are deliberate choices, not default permissions.
We also did security hardening on the setup. API keys are stored properly, access is logged, and there are guardrails around what actions can be taken without confirmation. It’s not paranoia if the AI occasionally hallucinates and tries to do something unexpected.
The dedicated hardware also means I can experiment freely without worrying about affecting anything important. If something goes wrong, the blast radius is contained. This separation isn’t just good security practice, it’s also peace of mind.
Being Honest About Limitations
Here’s where I have to pump the brakes a bit, because if this sounds too good, I’m not doing my job properly.
AI still makes mistakes. Constantly. The agents hallucinate, misunderstand context, forget things they should remember, and occasionally do something so bizarre I can only stare at my screen in confusion.
The 20% edge cases matter. A lot. When things work smoothly, they really work. When they don’t, you’re debugging AI behaviour, which is a special kind of frustrating because the failure modes aren’t deterministic. The same prompt might work perfectly five times and then fail mysteriously on the sixth.
Human oversight isn’t optional. I review what the agents produce. I don’t blindly trust their output. I definitely don’t let them send emails or publish content without checking it first. The automation is supervised automation.
This isn’t fully autonomous AI running my life. It’s more like having very capable interns who need supervision but can handle a lot of the legwork. Good interns, to be fair. Interns who can code, write, and research at a high level. But still interns.
And here’s the thing: this setup works because I’ve got 20 years of IT and architecture experience behind me. I can sniff out the cheese most of the time. When an agent suggests something that’s subtly wrong or architecturally dodgy, alarm bells ring. When Satoshi reviews code, I can tell if the feedback is solid or hallucinated nonsense.
The AI is a force multiplier, but it’s multiplying my expertise, not replacing it. Someone without the domain knowledge would struggle to quality-check the output. The interns are good, but they still need a senior to review their work.
The skill ceiling is also real. Getting good output from AI requires skill. Knowing how to phrase instructions, when to break tasks into smaller pieces, how to provide useful context. It’s not as simple as “just ask it to do the thing.” There’s craft involved.
The Meta Moment
So let’s complete the circle.
Right now, as you read this, you’re reading words that were drafted by Quill, an AI agent running in my OpenClaw framework. The article was commissioned by Jarvis in response to my request. Satoshi reviewed it for technical accuracy. Jarvis will publish it.
I provided the brief: the key points to hit, the tone, the length, the constraints. And I’ll read the final version before it goes live. But the actual writing? That’s happening in a subprocess on my Mac while I do other things.
Is this cheating? I don’t think so. The ideas are mine. The direction is mine. The quality control is mine. The AI is a tool that helps me express those ideas in a form that (hopefully) you find readable.
Though I’ll admit there’s something slightly surreal about having an AI write about how AI writes for you. It’s turtles all the way down.
What’s Next
OpenClaw is still evolving. Every week I find new ways to use it, new agents to create, new workflows to automate. The framework itself is getting more capable, and the underlying models keep improving.
I’m particularly interested in multi-agent collaboration for complex projects. Having agents that can genuinely divide and conquer, working on different aspects of a problem and synthesising their outputs. We’re not quite there yet, but the direction is clear.
For now, though, I’m just enjoying having digital assistance that actually assists. Not in the way that most AI tools “assist” by generating content you then have to heavily edit. Real assistance. The kind where you can hand off a task and get back something useful.
It’s not perfect. It’s not magic. But it is genuinely useful.
And sometimes, that’s enough.
Jay Ralph is the founder of Cloudable, an IT consultancy helping businesses make sense of cloud architecture. He’s also a volunteer police officer, which has nothing to do with AI but felt worth mentioning. This article was drafted by Quill, reviewed by Satoshi, and published by Jarvis. Jay mostly just drank tea.
Building a Solana Sniper Bot in 3 Days: An AI Pair Programming War Story
February 6, 2026 · Blog · AI, Crypto, Solana
How I built Moonshot, lost nearly half my capital on day one, and learned that AI can scaffold fast but can't replace real-world testing.
The Moment Everything Clicked (And Then Didn't)
February 4th, 2026. 11:27 GMT. My Solana sniper bot had been live for exactly 43 minutes when I saw it: Simpstein hit +1100%.
My heart rate spiked. This was the dream. Catching a meme coin rocket in the first minutes of launch. The bot had done its job. Entry at 0.136 SOL, now worth... wait.
The dashboard showed the position as "unknown."
I scrambled through the logs. The bot had bought Simpstein correctly, but somewhere between the purchase and the profit check, a persistence bug had eaten the position data. My first 10x winner and the bot had forgotten it existed.
I ended up selling manually through Phantom wallet. 0.374 SOL returned on a 0.136 SOL entry, a respectable +175% gain. But I couldn't shake the feeling that I'd just watched a preview of everything that would go wrong over the next 72 hours.
This is the story of building Moonshot, a Solana token sniper bot, in three days using AI as my pair programmer. It's not a success story. Not yet anyway. It's a war story about what happens when you deploy financial software too fast, the bugs that emerge only in production, and what I actually learned about AI-assisted development along the way.
What We Set Out to Build
This project started as an experiment. I'd been building Clawdbot, an AI agent framework, and I wanted to push it to its limits. Not on something safe and familiar, but on a domain I knew absolutely nothing about.
Solana meme coin trading fit the bill perfectly. I'd heard people made money sniping pump.fun launches. I had no idea how any of it actually worked. Could I go from zero knowledge to a functioning trading bot using AI as my guide? How far could Clawdbot take me before I hit a wall?
The premise was simple: pump.fun launches hundreds of tokens per day. Most are worthless. Some 10x in minutes. If you could scan them fast enough, filter out the garbage, and execute trades before the crowd arrives, you could theoretically print money.
The reality, as always, was messier.
I wanted to build a bot that would:
1. Scan for new pump.fun token launches in real-time
2. Evaluate each token against quality filters (holder distribution, liquidity, rug indicators)
3. Execute buys via Jupiter/PumpSwap if the token passed
4. Manage positions with stop-losses and take-profit triggers
5. Track everything for post-mortem analysis
The tech stack: Python, Solana RPC connections, Jupiter aggregator for swaps, WebSocket feeds for real-time data. Nothing exotic.
The timeline: I gave myself a weekend.
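The five steps above can be sketched as a single pass of the bot's main loop. Everything here is illustrative, not Moonshot's actual code: the `scanner`, `filters`, and `executor` objects are hypothetical stand-ins for the real modules.

```python
import asyncio

# A minimal sketch of the scan -> filter -> buy -> manage cycle.
# All names (scanner, executor, etc.) are illustrative stand-ins.
async def tick(scanner, filters, executor, positions):
    """One pass: scan, filter, buy, hand off to position management."""
    for token in await scanner.new_launches():      # 1. new pump.fun launches
        if all(check(token) for check in filters):  # 2. quality filters
            pos = await executor.buy(token)         # 3. execute the swap
            positions.append(pos)                   # 4. exits handled elsewhere
    return positions                                # 5. everything is tracked
```

In the real bot this runs forever against WebSocket feeds; the single-pass shape just makes the data flow visible.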
The AI Pair Programming Setup
Here's a confession: I knew almost nothing about Solana when I started this project.
I understood the basics: it's a blockchain, it has tokens, people trade meme coins on it. But the actual mechanics? Program addresses, associated token accounts, bonding curves, the difference between Raydium and PumpSwap? No clue. I'd never written a line of Solana code in my life.
This is where AI pair programming became less "helpful assistant" and more "primary researcher." I wasn't just asking Claude to write code. I was asking it to explain the entire ecosystem while we built.
"What's a bonding curve and why does pump.fun use them?"
"How do I get token balances on Solana?"
"What's the difference between getAccountInfo and getTokenAccountBalance?"
"Why did this transaction fail with 'insufficient funds' when I have 2 SOL?"
The answers were usually correct. Usually. More on that later.
The workflow looked like this:
- I'd describe what I wanted ("build a position tracker that persists to disk")
- Often I'd first need to ask "what even is the right way to track positions on Solana?"
- Claude would explain the concept, then generate the implementation
- I'd review, test locally, and identify issues
- We'd iterate until it worked
This worked remarkably well for scaffolding. On February 3rd, the first commit created the entire core architecture: config.py, main.py, scanner.py, positions.py, executor.py, exits.py, risk.py. Modular from day one.
By end of day one, I had a bot that could theoretically do everything I wanted. The word "theoretically" is doing a lot of heavy lifting in that sentence.
Going Live: The First Trade Rush
February 4th, 10:44 GMT. I deployed with 4.21 SOL (~$800 at the time). For context: I'd only put in about £150 of fresh money via Coinbase, topped up with some crypto that had been sitting untouched in a wallet for years. Not life-changing money, but enough to sting if I lost it all.
(Side note on crypto on-ramping in 2026: that £150 Coinbase purchase got my credit card blocked within hours. Thanks, Virgin Money. Nothing says "fraud protection" like blocking a legitimate purchase I made myself while letting actual scammers through. Welcome to the joys of trying to move money into crypto from traditional finance.)
The bot opened 10 positions in the first few minutes. This felt incredible. Watching it make autonomous decisions, finding tokens, checking filters, executing trades. The dopamine hit of seeing "Position opened: 0.085 SOL → MOONCAT" in the logs was real.
Then Simpstein happened. Then the bugs started surfacing.
Bug #1: The Silent Balance Check Failure
Here's a piece of code that looks perfectly reasonable:
balance = await client.get_token_account_balance(token_account)
Except I was passing a raw dict instead of TokenAccountOpts. The Solana RPC client didn't throw an error. It just returned garbage. Every balance check silently failed, which meant the bot had no idea how many tokens it actually held.
This is the kind of bug that never shows up in unit tests. The code is syntactically correct. It even returns something. But in production, with real money, it meant positions got marked as having 0 balance when they actually held tokens worth hundreds of dollars.
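One defence against this class of bug is to treat every RPC response as suspect and fail loudly instead of returning garbage. Here's a hedged sketch: the nested `result`/`value`/`amount` shape mirrors Solana's JSON-RPC `getTokenAccountBalance` response, but the wrapper itself is illustrative, not Moonshot's actual code.

```python
# Sketch: validate the RPC response shape instead of trusting it.
# The dict layout follows Solana's getTokenAccountBalance JSON-RPC
# result; the helper itself is an assumption, not the bot's real code.
def parse_balance(resp: dict) -> int:
    """Return the raw token amount, raising loudly on anything malformed."""
    try:
        value = resp["result"]["value"]
        amount = int(value["amount"])  # raw integer, before decimals
    except (KeyError, TypeError, ValueError) as exc:
        raise RuntimeError(f"unparseable balance response: {resp!r}") from exc
    if amount < 0:
        raise RuntimeError(f"negative balance reported: {amount}")
    return amount
```

A `RuntimeError` at 11:30 on launch day is annoying; a position silently marked as zero-balance is expensive.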
Bug #2: The Persistence Problem
Moonshot saves position state to a JSON file. Simple, portable, works fine. Except when you update the file, you need to make sure the read-modify-write cycle is atomic. My initial implementation wasn't.
Race condition: bot reads positions file, processes a trade, another process writes to the file, first process writes back stale data. Simpstein's position entry got overwritten before the bot ever saw it.
The fix was trivial: file locking. But the damage was done.
The Reality Check: Day 1 Numbers
By end of day one, I had the data to see exactly how badly things had gone:
- Starting capital: 4.21 SOL
- Ending capital: 2.34 SOL
- Loss: -1.87 SOL (-44.4%)
- Total positions: 214
- Win rate: 22%
Twenty-two percent. Nearly four out of five trades were losers.
But the aggregate number hid something important. When I split the data by entry path, the picture got clearer:
| Entry Path | Win Rate |
|------------|----------|
| Scanner (with filters) | 36% |
| WebSocket snipe (no filters) | 13% |
The WebSocket snipe path, designed to catch tokens milliseconds after launch, was bypassing ALL quality filters. It would see a new token, buy immediately, and only later realise the top holder owned 78.9% of supply (AGENT token, perfect example).
I'd built a speed optimisation that made everything worse.
The Rug Pull Tax
27 positions (about 13% of the total) exited with exactly 0 SOL returned. These were rug pulls, tokens where liquidity was yanked before I could exit, or contracts that simply stopped allowing sells.
And among the "stale exits" (positions closed due to timeout), 77% had peaked less than 3% above entry price. Dead on arrival. These tokens were never going anywhere. They just slowly bled to zero.
The lesson: speed matters less than selection. A bot that buys garbage fast is just a very efficient way to lose money.
The PumpSwap Revelation
About halfway through day one, I noticed something odd. A token called "Joogle" had done +790% in 2.5 hours. My bot never touched it.
Why?
I spent an hour tracing through logs. The scanner saw Joogle. It passed all the filters. The executor tried to buy. But the swap failed silently.
Here's what I didn't know: when pump.fun tokens fill their bonding curve and "graduate" to a proper DEX, they used to land on Raydium. My bot was monitoring Raydium for these graduations. But sometime in late 2025, pump.fun launched their own DEX called PumpSwap and started graduating tokens there instead.
Both Raydium and PumpSwap still exist. Raydium handles plenty of other Solana trading. But for pump.fun tokens specifically, the graduation destination had changed. I was watching the wrong place for the tokens I actually cared about.
The fix required adding PumpSwap monitoring alongside the existing Raydium integration. The bot now watches both DEXes: Raydium V4, Raydium CPMM, and PumpSwap. Claude helped me understand PumpSwap's program instruction format, but the real lesson was about assumptions.
I'd assumed pump.fun still used Raydium because that's what the tutorials said. The tutorials were six months old. In crypto, that's ancient history.
Systems We Built (That Actually Helped)
Not everything was a disaster. Several systems worked exactly as designed and prevented losses from being even worse.
The Velocity Confirmation System
Instead of buying immediately on a new token detection, the bot waits 20-30 seconds while sampling the price at intervals. If the price isn't trending up, don't buy.
This sounds obvious in retrospect. But the impulse when building a "sniper" is to prioritise speed above all else. Reality: most pump.fun tokens dump immediately after launch. The ones that actually run tend to show positive momentum in the first minute.
The velocity filter caught maybe 70% of the garbage that would have otherwise been instant losses. It also meant missing some legitimate runners that spiked and dumped in the first 30 seconds. That's a trade-off I'm comfortable with.
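The decision itself boils down to a small predicate over the sampled prices. This is a sketch under assumptions: the 2% gain threshold, the dip tolerance, and the function shape are mine for illustration; the real bot's thresholds aren't in the article.

```python
# Illustrative velocity check: the caller samples the price every few
# seconds over the 20-30s window, then asks this predicate. The 2%
# threshold and dip tolerance are assumptions, not Moonshot's config.
def velocity_confirms(samples: list[float], min_gain: float = 0.02) -> bool:
    """True if price ended min_gain above start without dipping too often."""
    if len(samples) < 2 or samples[0] <= 0:
        return False
    gain = (samples[-1] - samples[0]) / samples[0]
    dips = sum(1 for a, b in zip(samples, samples[1:]) if b < a)
    return gain >= min_gain and dips <= len(samples) // 3
```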
The Heat System (Adaptive Position Sizing)
This one was built after day one's carnage, but backtesting showed it would have saved real money.
The heat system tracks the last 10 trades. Each consecutive loss reduces the next position size by 20%. If win rate drops below 15%, the bot stops entering entirely until it cools down.
Between 18:00 and 19:00 on day one, the bot hit a particularly bad streak, losing -1.21 SOL in that single hour. When I analysed the logs afterward, I realised a position-sizing circuit breaker would have helped. I built the heat system that night. Backtesting against day one's data showed it would have reduced those losses to ~-0.50 SOL.
The psychology here matters too. A trading bot doesn't have emotions, but it can still tilt in the sense of making worse decisions after a string of losses. Forcing it to slow down after bad results is like a circuit breaker for bad judgement.
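The mechanism is small enough to sketch in full. The 10-trade window, the 20% size reduction per consecutive loss, and the 15% win-rate cutoff come straight from the description above; the class itself is an illustrative reconstruction, not the bot's actual code.

```python
from collections import deque

# Hedged sketch of the heat system: the window size, 20% step, and
# 15% cutoff come from the text; the class shape is an assumption.
class HeatSystem:
    def __init__(self, base_size: float = 0.085, window: int = 10):
        self.base_size = base_size
        self.recent = deque(maxlen=window)  # True = win, False = loss
        self.streak = 0                     # consecutive losses

    def record(self, won: bool) -> None:
        self.recent.append(won)
        self.streak = 0 if won else self.streak + 1

    def next_size(self) -> float:
        """Shrink 20% per consecutive loss; sit out entirely when cold."""
        if len(self.recent) == self.recent.maxlen:
            win_rate = sum(self.recent) / len(self.recent)
            if win_rate < 0.15:
                return 0.0                  # circuit breaker tripped
        return self.base_size * (0.8 ** self.streak)
```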
Quality Filters (When They Actually Ran)
The filters that worked:
- Top holder concentration: ≤30% of supply in top wallet. Higher means rug risk.
- Minimum holders: At least 15 holders before buying. Filters out tokens with only the creator and bots.
- Liquidity floor: ≥$5k in the pool. Below this, slippage eats you alive.
- Market cap ceiling: ≤$500k at entry. Higher than this and the easy gains are gone.
- Rug checks: Freeze authority enabled? Don't buy. Mint authority still active? Don't buy.
When the WebSocket path started respecting these filters (commit: "snipe path now applies quality filters BEFORE buying"), win rate on that path went from 13% to something approaching the scanner path's 36%.
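The thresholds above collapse into one predicate that both entry paths now share. The numbers match the list; the field names on the token dict are assumptions for illustration.

```python
# The quality filters as a single shared predicate. Thresholds match
# the list above; the token dict's field names are assumptions.
def passes_filters(t: dict) -> bool:
    return (
        t["top_holder_pct"] <= 30           # above 30% = rug risk
        and t["holders"] >= 15              # more than creator + bots
        and t["liquidity_usd"] >= 5_000     # below this, slippage bites
        and t["market_cap_usd"] <= 500_000  # easy gains already gone
        and not t["freeze_authority"]       # can freeze your tokens: no
        and not t["mint_authority"]         # can print more supply: no
    )
```

Had the WebSocket path called this from day one, the AGENT token (78.9% in one wallet) would never have been bought.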
Technical War Stories
Beyond the trading logic, the bot taught me plenty about deployment and infrastructure.
Deployment: The nohup Lesson
First deployment: I SSH'd into the server, ran the bot, and went to make coffee. Came back to find it dead. The SSH session had timed out, taking the bot with it.
Solution: nohup python -m moonshot > logs/nohup.log 2>&1 &
This is embarrassing to admit. I've deployed plenty of services before. But the urgency of going live made me skip the basics. In trading, "skip the basics" translates directly to "lose money."
Jupiter API Rate Limits
Jupiter's swap API is generous but not unlimited. My first implementation hammered it. Every position check triggered a quote request. I hit 429 rate limits within an hour.
Fix: 4× exponential backoff on 429 responses, plus local caching of recent quotes. The bot now waits 1s, 4s, 16s, 64s on consecutive failures before giving up.
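The backoff schedule can be wired as a generic retry helper. A sketch under assumptions: the `RateLimited` exception stands in for however the HTTP layer surfaces a 429, and the injectable `sleep` is my addition for testability.

```python
import asyncio

class RateLimited(Exception):
    """Stand-in for a 429 from the HTTP layer (assumption)."""

# Sketch of the 1s/4s/16s/64s backoff: base-4 exponential delays
# on consecutive rate-limit failures, then give up.
async def with_backoff(call, retries: int = 4, base: int = 4, sleep=asyncio.sleep):
    for attempt in range(retries):
        try:
            return await call()
        except RateLimited:
            await sleep(base ** attempt)    # 1, 4, 16, 64 seconds
    raise RuntimeError("rate-limited after all retries")
```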
Memory Leaks: The _seen_tokens Set
To avoid re-processing the same tokens, the scanner maintained a set of already-seen token addresses. Simple, effective, and leaking memory.
Over 24 hours, _seen_tokens grew to contain 40,000+ entries. Each entry is just a string, but collectively they were eating 50MB+ of RAM and making set lookups slower.
Fix: TTL-based cleanup. Tokens older than 2 hours get purged from the set. Memory usage stabilised.
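The shape of the fix: swap the bare set for a dict of token-to-timestamp and purge on each pass. Illustrative sketch only; the injectable `clock` is mine, added so the TTL logic can be exercised without waiting two hours.

```python
import time

# Sketch of the TTL fix: timestamps per token instead of a bare set,
# with entries older than the TTL purged on each pass.
class SeenTokens:
    def __init__(self, ttl: float = 2 * 3600, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._seen = {}            # token address -> last-seen timestamp

    def mark(self, addr: str) -> None:
        self._seen[addr] = self.clock()

    def __contains__(self, addr: str) -> bool:
        return addr in self._seen

    def purge(self) -> None:
        cutoff = self.clock() - self.ttl
        self._seen = {a: t for a, t in self._seen.items() if t >= cutoff}
```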
P&L Tracking Is Harder Than Trading
This surprised me. The actual trading logic (find token, check filters, execute swap) was the easy part. Accurately tracking profit and loss was a nightmare.
Problems encountered:
- Entry price was captured before velocity confirmation, so it recorded the pre-confirmation price (sometimes off by 90%+)
- Reconciliation path marked positions closed without computing final P&L
- SOL price fluctuation during position lifetime wasn't accounted for
- Failed sells still marked positions as closed
The fix for entry price (FIX-017 in our tracking doc) was to calculate it from actual execution: (SOL_spent × sol_usd_rate) / tokens_received. This sounds obvious, but the original code grabbed a price quote *before* the swap executed.
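Spelled out as code, the formula is tiny. The ~$190/SOL rate in the usage note below is just the rate implied by "4.21 SOL (~$800)" elsewhere in this post; the helper itself is a sketch, not the FIX-017 diff.

```python
# The post-execution entry-price formula. The helper is illustrative;
# what matters is that all three inputs come from the executed swap,
# not from a pre-swap quote.
def entry_price_usd(sol_spent: float, sol_usd_rate: float, tokens_received: float) -> float:
    """USD cost per token, computed from what actually executed."""
    if tokens_received <= 0:
        raise ValueError("no tokens received; refuse to record an entry price")
    return (sol_spent * sol_usd_rate) / tokens_received
```

The guard clause matters: a failed swap must never produce an entry price, which is exactly the "failed sells still marked positions as closed" family of bug in reverse.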
I still don't fully trust the P&L numbers. There's probably another bug hiding in there.
The Git History as Narrative
50+ commits in 3 days tells its own story. A few highlights:
Initial commit
Add velocity confirmation system
FIX: executor passing raw dict instead of TokenAccountOpts
snipe path now applies quality filters BEFORE buying
FIX: entry price captured after execution, not before
tighten stop-loss 30%→15%
TP1: 100% → 35%
Add PumpSwap program monitoring
Heat system: adaptive position sizing
FIX: memory leak in _seen_tokens
Add file locking to positions.py
Each commit is a lesson learned. The stop-loss tightening (30%→15%) came after watching positions bleed slowly instead of cutting losses quickly. The TP1 change (100%→35%) came from watching tokens spike 50% then dump before reaching the 100% target.
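Those two commits imply an exit rule that's simple to sketch. The -15% stop and +35% first take-profit are from the commit messages above; the function shape and the "hold" default are my assumptions.

```python
# Sketch of the exit logic implied by the two tuning commits:
# cut losses at -15% (was -30%), take first profit at +35% (was +100%).
def exit_action(entry: float, current: float, tp1_taken: bool = False) -> str:
    change = (current - entry) / entry
    if change <= -0.15:
        return "stop_loss"        # tightened from -30%
    if change >= 0.35 and not tp1_taken:
        return "take_profit_1"    # lowered from +100%
    return "hold"
```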
The iteration cycle was fast. See problem in production, discuss with Claude, implement fix, deploy, repeat. This is where AI pair programming actually works: not in getting it right the first time, but in iterating quickly once you know what's wrong.
Honest Numbers: Where We Stand
Starting capital: 4.21 SOL (~$800)
Day 1 result: -1.87 SOL (-44.4%)
Manual Simpstein win: +0.238 SOL
Current capital: ~2.58 SOL
Status: Still implementing fixes. Not profitable yet.
I could spin this as "valuable learning experience" and that would be true but also cope. The reality is I deployed a trading bot too early, lost real money to bugs that should have been caught in testing, and I'm still digging out of the hole.
The question now is whether the fixes (velocity confirmation, quality filters everywhere, heat system, PumpSwap monitoring) are enough to flip the win rate positive. Early signs post-fixes are better (36% vs 22%), but I don't have enough data yet to claim victory.
What I Actually Learned About AI Pair Programming
The hype version: AI writes all your code instantly, 10x productivity, ship products in hours.
The reality version, from this project, is messier. I'll break it down in a moment, but first, a quick detour.
Meta: This Article Was Written by AI Too
Here's a fun detail: you're reading an article about AI-assisted development that was itself AI-assisted.
I use a system called Clawdbot, an AI agent framework that lets me spin up specialised sub-agents for different tasks. When I decided to write this article, I didn't open a text editor. I told my main agent (Jarvis) to brief a writing-focused sub-agent (Quill) on the project, including all the context: memory files, git history, bug reports, P&L data.
Quill wrote the first draft in under two minutes. Then I had another sub-agent (Satoshi) review it for technical accuracy against the actual codebase. Satoshi caught two errors: a math mistake (I'd written -9.7% when the real loss was -44.4%) and a misleading framing of the heat system timeline.
Fixes applied, reviewed, done. The whole article process, from "write a blog post about Moonshot" to publishable draft, took maybe 15 minutes of my actual attention.
This is the force multiplier that's hard to convey until you experience it. I'm not a writer. I don't enjoy writing. But I can direct AI agents to write, review each other's work, and iterate based on my feedback. The bottleneck shifts from "can I do this?" to "can I explain what I want?"
Same pattern as building the bot itself: AI handles the execution, I handle the direction and quality control.
What AI did well:
- Scaffolding initial architecture (entire modular codebase in day one)
- Explaining unfamiliar APIs (Solana RPC, Jupiter protocol)
- Generating boilerplate (position tracking, logging, config management)
- Debugging known error messages
- Iterating quickly on fixes once I identified the problem
What AI couldn't do:
- Catch the TokenAccountOpts type error (syntactically valid, semantically wrong)
- Know that pump.fun switched from Raydium to PumpSwap
- Predict that the WebSocket path needed filters too
- Test against production Solana RPC rate limits
- Understand the financial implications of P&L tracking bugs
The danger of learning through AI:
When you learn a technology the traditional way (tutorials, documentation, building toy projects) you develop intuition. You make mistakes in safe environments. You understand why things work, not just how to make them work.
I skipped all that. I went from "what's a bonding curve?" to "executing live trades" in about 48 hours. AI made that possible, but it also meant I had no intuition for when something was wrong. I couldn't smell bad code because I'd never written enough Solana code to know what good code smelled like.
The PumpSwap/Raydium confusion is a perfect example. If I'd spent a week reading pump.fun's Discord, following Solana developers on Twitter, and manually executing a few trades, I would have known about the migration. Instead, I asked Claude, got an answer based on six-month-old training data, and deployed a bot watching the wrong DEX.
The pattern: AI is excellent at the 80% of coding that's straightforward implementation. It's poor at the 20% that requires real-world context, integration testing, and understanding of consequences.
For trading bots specifically, that 20% is where all the money is made or lost.
Would I Do It Again?
Yes, but differently.
What I'd change:
1. Paper trading first. Even 24 hours of simulated trades would have caught the filter bypass bug.
2. Smaller initial capital. Start with 0.5 SOL, not 4.21 SOL.
3. More defensive coding. Assume every API call can fail silently.
4. Better monitoring. I was checking logs manually; should have had alerts.
5. Research the ecosystem more. The PumpSwap switch wasn't secret. I just didn't look.
What I'd keep:
1. The modular architecture. Made fixes easy to isolate.
2. The iteration speed. AI pair programming meant changes in minutes, not hours.
3. The honest tracking. CHANGES.md with 34 prioritised fixes meant nothing got forgotten.
4. The circuit breakers. Heat system and velocity confirmation were worth their weight in SOL.
The Conclusion Nobody Wants to Hear
AI is a tool, not a wizard. It made me faster at building something, but it couldn't make that something correct without real-world testing. The three-day timeline was achievable because of AI assistance, but it was also too fast because AI assistance made it feel more ready than it was.
Moonshot isn't profitable yet. It might never be. The Solana meme token space is adversarial. You're trading against bots faster than yours, developers who rug intentionally, and market dynamics that shift weekly.
But the process of building it taught me more about Solana, about trading systems, and about the limits of AI-assisted development than any tutorial could have. That education cost me ~1.6 SOL so far. Whether that's cheap or expensive depends on what happens next.
The bot's still running. The fixes are deployed. The data's accumulating.
I'll let you know how it goes.
This article documents the Moonshot project built February 3-5, 2026. The code, bugs, and losses described are real. Nothing here is financial advice. If this article doesn't make that obvious, I don't know what would.
When £20,000 Vanishes: The Hidden Cost of Password Reuse and SIM Swaps
A few months ago, a family member of mine was defrauded out of almost £20,000. Thankfully, the money was eventually recovered, but the damage went far beyond the financial loss. Their trust in online banking and technology was shattered. Every login, every "security" text, and every email now feels like a potential trap.
The worst part is that it could have been prevented, both through better personal security habits and stronger authentication by the bank.
How It Happened
It started with password reuse, the silent killer of digital security. The credentials were stolen from some long-forgotten online service, one of those pointless accounts that had not been used in months, but the password was the same one used elsewhere.
Those details ended up for sale on the dark web. Attackers used them to access the victim’s mobile provider account and carry out a SIM swap, moving the phone number to a new device.
Once they controlled the number, everything else fell apart. The attackers intercepted text messages, reset online banking credentials, and authorised a £20,000 transfer, all through SMS verification.
No step-up authentication.
No confirmation call.
No "this looks suspicious" flag.
Just one recycled password and one text message.
For a five-figure transaction, that is beyond negligent!
Convenience Over Security
Banks love to talk about balancing convenience and security, but that balance has tipped too far. SMS-based authentication has not been fit for purpose in years, yet it remains the default method for major transactions.
At my bank, a transfer of that size would have triggered secondary verification, an app confirmation, biometric approval, or at least a voice check. The bank in this case did none of that. It is not that they could not, it is that they did not.
When fraud prevention becomes a tick-box exercise instead of a real control, customers end up paying the price, both financially and emotionally.
The Aftermath: Weeks of Fallout
Even after the refund, the cleanup has been brutal. Weeks spent combing through credit reports, checking for new accounts or applications, and manually changing credentials across every major account, from banking to utilities, retail, and entertainment.
It has been a complete ballache.
Fraud does not end when the money comes back. The admin and anxiety drag on long after. For someone who is not in IT, the psychological damage is huge. My family member went from confident and capable to hesitant and suspicious of everything online.
They have now decided to move back to a bank they can walk into, somewhere they can see a face, talk to a person, and feel a bit more secure. Honestly, I cannot blame them.
Lessons Learned
- SMS is not security. It is the weakest form of multi-factor authentication and should be retired, not relied upon.
- Risk-based authentication matters. A £20,000 transfer needs more than a text message.
- Stop reusing passwords. A password manager is far safer and simpler than remembering multiple variations of the same one.
- Lock down your mobile account. Add a porting PIN or passphrase, every UK carrier supports this.
- Monitor your credit file. Fraud rarely stops at one account.
- Never assume your bank has your back. Some still operate as if it is 2008.
The Bigger Problem
Banks continue to invest millions in "AI-driven fraud detection" while still relying on 1980s telecom infrastructure to secure customer savings. They will happily refund fraud cases but rarely address the systemic weaknesses that make them possible in the first place.
They do not measure the emotional impact, the time lost, or the erosion of trust. They simply mark the case as resolved.
Fraud prevention should not end with a refund. It should begin with authentication that actually works.
The Takeaway
This was not a sophisticated cyberattack. It was a familiar story that happens daily: password reuse, outdated SMS verification, and complacency disguised as convenience.
The money came back, but the confidence did not.
The lesson is simple. Never let your phone number be your last line of defence, and never assume your bank’s idea of "secure" matches yours.
Stop Emailing Like It’s 2013: Microsoft Is Finally Pulling the Plug on @onmicrosoft.com
September 9, 2025GuideSecurity
If you’ve still got mailboxes or services firing off emails from something@yourtenant.onmicrosoft.com, consider this your polite nudge (well, Microsoft’s) to stop.
Because in classic Microsoft fashion, it’s not just a suggestion anymore — they’re throttling it.
Yes, seriously!
The Change
Starting October 15, 2025, Microsoft will throttle outbound email sent from .onmicrosoft.com addresses to 100 recipients per tenant, per day. It’s a phased rollout, with full enforcement by June 2026.
After that? Every message over the limit gets bounced faster than your expense claim for a “technical lunch” at Gaucho.
🧾 Full details here on TechCommunity
Why the Sudden Crackdown?
To be fair, Microsoft’s letting you down gently. The reasons behind this move are solid:
- Shared reputation – Your .onmicrosoft.com domain shares an IP rep with every other tenant. That includes legitimate businesses… and also dodgy spam farms.
- Trust and branding – No one feels good getting an invoice from accounts@widgets-inc.onmicrosoft.com. It just doesn’t inspire confidence.
- Security – Spoofing an onmicrosoft.com address is relatively easy for attackers. This change makes that harder — and forces orgs to clean up their setup.
What Actually Breaks?
Here’s the fun bit: it’s per tenant, not per user.
So if multiple users or automated services are still sending from @onmicrosoft.com, you’ll all be queuing for that same 100-email daily allowance. Go over, and Microsoft slaps you with this lovely NDR:
550 5.7.236 – Message rejected due to sending limits.
That means:
- Support mailboxes stop replying
- CRM notifications don’t arrive
- Your legacy scanner in Accounts can’t send its daily scan of someone’s elbow
What You Should Be Doing Instead
This really shouldn’t be news. But hey, if your setup still leans on the freebie domain, here’s your to-do list:
✅ Register a Real Domain
Use something official — ideally the same domain your users sign into.
No myrealbusinesssolutions365v2.biz, please.
✅ Add It to Microsoft 365
Go to Admin Centre > Settings > Domains and follow the prompts.
Set up your DNS records — SPF, DKIM, DMARC — all the good stuff.
✅ Set As Default
Make sure new users and services get assigned your real domain automatically — not @onmicrosoft.com.
✅ Fix Existing Mailboxes
Use PowerShell to change addresses. Note that in Exchange Online, Set-Mailbox doesn’t accept -PrimarySmtpAddress (that parameter is on-premises only) — use -WindowsEmailAddress, which updates the primary SMTP address:
Set-Mailbox -Identity user@onmicrosoft.com -WindowsEmailAddress user@yourdomain.com
Don’t forget to double-check login UPNs and app dependencies.
One careless change and suddenly half your staff can’t log into Teams.
✅ Audit Everything Sending Mail
Check for services, apps, Power Automate flows, old scanners, or hybrid mail relays still sending from the wrong domain. Microsoft’s Message Trace or Defender XDR can help.
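If you’d rather audit from the shell than click through Message Trace, Exchange Online PowerShell can surface anything still sending from the tenant domain. A sketch assuming the ExchangeOnlineManagement module is installed and connected, with `contoso` as a placeholder tenant name (note Get-MessageTrace only reaches back about 10 days):

```powershell
# Requires: Install-Module ExchangeOnlineManagement; Connect-ExchangeOnline
# Pull the last 7 days of traffic and group by sender addresses that still
# use the default tenant domain ("contoso" is a placeholder).
Get-MessageTrace -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) |
    Where-Object { $_.SenderAddress -like "*@contoso.onmicrosoft.com" } |
    Group-Object SenderAddress |
    Sort-Object Count -Descending |
    Select-Object Name, Count
```

Every address that appears in that output is something that will start bouncing once the throttle bites — fix those first.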
But… Why Was I Using It Anyway?
Short answer: because it was easy.
Long answer: it was easy 10 years ago.
The .onmicrosoft.com domain was always meant to be a placeholder — for testing, tenant setup, and temporary use. Not for external mail, marketing comms, or service account spam.
Would you send corporate mail from yourbusiness@hotmail.com?
(…don’t answer that if you’re still doing it.)
Bonus Round: Do Some Security While You’re There
While you're cleaning up your domain usage, it’s a great time to:
- ✅ Set up SPF to say who can send on your behalf
- ✅ Enable DKIM to sign your mail
- ✅ Configure DMARC so spoofers get blocked
- ✅ Add a Transport Rule to stop future sends from .onmicrosoft.com, just in case someone tries again
You’ll sleep better at night — promise.
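That transport-rule idea can be sketched in Exchange Online PowerShell. Treat this as a starting point, not a drop-in: the rule name and `contoso` domain are placeholders, and you may need exceptions for anything that legitimately still uses the tenant domain during migration:

```powershell
# Reject any message sent externally from the default tenant domain
# ("contoso" is a placeholder for your tenant name).
New-TransportRule -Name "Block outbound from onmicrosoft.com" `
    -SenderDomainIs "contoso.onmicrosoft.com" `
    -SentToScope NotInOrganization `
    -RejectMessageReasonText "Send from the corporate domain, not onmicrosoft.com." `
    -RejectMessageEnhancedStatusCode "5.7.1"
```

This way, even if a service slips back to the default address, the failure is immediate and obvious rather than a silent throttle weeks later.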
Final Thought
If you haven’t sorted this already, don’t worry — there’s still time. But make no mistake, this change is coming whether you’re ready or not. And while fixing it might feel like a chore, not fixing it is worse.
Avoid outages, broken processes, and embarrassing email bounces.
Use a real domain. Email like a grown-up.
Your support desk will thank you. And so will your customers.
TL;DR
- Microsoft is throttling .onmicrosoft.com email sends from October 2025
- The cap is 100 recipients per tenant per day
- Use your real domain — now
- Audit your setup and fix anything that sends from the default tenant domain
- Update SPF/DKIM/DMARC while you’re there
Intune Done Right: Automating App Packaging and Updates with PowerShell and WinGet
June 27, 2025Intune,Windows 11,GuideWinGet
Keeping applications up to date is one of the most tedious, time-consuming tasks for any modern endpoint admin. Between version sprawl, vendor updates, and testing cycles, it’s no wonder many organisations either fall behind or burn far too much time keeping things current.
Luckily, with PowerShell, WinGet, and a few clever tools, you can automate the process — without breaking the bank or your sanity.
The Problem with Traditional App Management
Historically, there have been two common approaches to app lifecycle management:
- Manual packaging: Admins download and repackage every update as a .intunewin file, update detection logic, and redeploy through Intune — an arduous process that often introduces inconsistency.
- Neglect: Apps go untouched, often falling several versions behind, introducing security risks and compatibility issues.
And even if you do get the apps in, there’s still a big gap:
- No business ownership of applications — meaning there’s often nobody responsible for testing or approving updates
- No schedule — since updates arrive whenever the vendor feels like it
- Change process misalignment — most organisations don’t have a change control process built for weekly app updates
Neither approach scales well, especially with remote workforces and short-staffed IT teams.
Enter WinGet and PowerShell
WinGet, Microsoft’s native Windows package manager, has become powerful enough to support real-world enterprise deployment. When used with PowerShell and Intune, it can:
- Identify the latest available app versions
- Download and install silently with version-specific control
- Package and deploy via Intune
- Maintain consistency across a fleet with minimal manual intervention
Full Walkthrough: Automating with PowerShell + WinGet + Intune
⚠️ Important Note for Enterprises
While this Winget-based approach is excellent for SMEs and dev-focused environments, I don’t recommend it as a wholesale solution for large enterprises or regulated organisations. It lacks native version control, rollback capability, and structured testing flows.
Use this as a supplement — not a replacement — for enterprise-grade patch management and change control.
1. Identify Your Application Set
Start by defining which applications you want to manage. You can do this by:
winget export -o baseline-apps.json
This provides a JSON file listing all apps installed via WinGet on a reference machine. Trim this down to only include approved apps.
2. Script the Installation with Silent Flags
WinGet supports silent install switches out of the box. Here's an example script to silently install 5 common LOB (line-of-business) apps:
$apps = @(
"Microsoft.Teams",
"Microsoft.PowerToys",
"Notepad++.Notepad++",
"7zip.7zip",
"VideoLAN.VLC"
)
foreach ($app in $apps) {
winget install --id $app --silent --accept-package-agreements --accept-source-agreements
}
Wrap this in a PowerShell script. You can either:
- Package each app individually — allows granular control, targeted assignments, and version-specific detection logic
- Use a single core app script — great for Autopilot or shared machines needing a standardised baseline
💡 The benefit of a core app script is that you control the install order, unlike Intune's Required apps, which install in an unpredictable sequence. This ensures critical apps install first, reducing delays for the user.
Choose based on your organisation’s needs. Enterprises may prefer one-per-app for change control, while SMEs benefit from the simplicity of a single script.
3. Package as a Win32 App
Use the Win32 Content Prep Tool from Microsoft:
IntuneWinAppUtil.exe -c "source-folder" -s install.ps1 -o "output-folder"
Deploy the resulting .intunewin package via the Intune admin portal.
4. Configure Detection Rules
Use detection logic such as:
- Registry key path & version
- File version
- Existence of an installed .exe
Example for PowerToys:
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\PowerToys" | Select-Object DisplayVersion
⚠️ Note for core install scripts: If you're installing multiple apps from a single script, you’ll likely need to write custom detection logic for one or more of the apps to ensure Intune knows the install was successful. This might include checking registry values, file versions, or installed MSI product codes for each app — and you'll need to pick a representative one for the app’s detection rule.
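One practical shape for that custom detection logic: Intune treats a detection script as “detected” only when it exits with code 0 and writes something to STDOUT; any other combination means “not detected” and triggers a (re)install. A minimal sketch — the app list and name fragments are illustrative, not a recommendation:

```powershell
# Intune custom detection script: exit 0 AND write to STDOUT only if every
# core app is present. Display-name fragments below are illustrative.
$required = @("Notepad++", "7-Zip", "VLC media player")

# Gather DisplayName values from both 64-bit and 32-bit uninstall hives.
$uninstallKeys = @(
    "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*",
    "HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*"
)
$installed = Get-ItemProperty -Path $uninstallKeys -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty DisplayName

$missing = foreach ($app in $required) {
    if (-not ($installed -like "*$app*")) { $app }
}

if ($missing) {
    # Non-zero exit and no STDOUT = not detected; Intune will run the install.
    exit 1
}
Write-Output "All core apps present"
exit 0
```

Requiring every app keeps the detection honest: a partial install still reads as a failure, rather than Intune marking the whole baseline as complete.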
Keeping Apps Updated Automatically
Even if you’ve nailed your first-time install process, keeping apps updated is just as important — especially with the frequency of updates in 2025.
To solve this, I came across an excellent open-source script: Winget-AutoUpdate by Romanitho. It’s simple, effective, and gets the job done — use at your own risk, but if you're supporting SMEs or dev teams, it's honestly a lifesaver.
This script:
- Scans for updates to installed Winget apps
- Downloads and installs new versions silently
- Logs to Event Viewer and text logs
- Gives the user a great little notification to tell them what's going on
Install it using:
winget install Romanitho.Winget-AutoUpdate
💡 Particularly useful for SMEs or kiosk devices that don't require tight update controls. It saves time, avoids packaging repetition, and ensures devices stay current — even after Autopilot provisioning. For enterprises that require version control or pre-release testing, Winget-AutoUpdate may not be suitable as a standalone solution.
When This Approach Works Best
Ideal for:
- SMEs and startups: Where "latest version" is typically fine and risk is low
- Developer devices: Where agility and staying current outweigh strict change control
- Kiosk/field devices: Where rapid, unattended updates are essential
But less suitable for:
- Regulated environments
- Apps with tight version dependencies
- Situations requiring rollback/version testing
In those cases, consider tools like:
- Patch My PC: For deep version control and compliance
- Chocolatey for Business: Internal repos and testing workflows
- Custom WinGet manifests with version locks
Bonus Tips
- Baseline Golden Images with winget export
- Use Proactive Remediations to check for outdated apps
- Combine with Delivery Optimisation to reduce bandwidth by enabling peer-to-peer sharing of app content across devices on the same network. This lightens the load on WAN links and accelerates deployments, especially in branch offices. To configure this in Intune:
  - Go to Devices > Configuration profiles > Create profile
  - Choose Windows 10 and later > Templates > Delivery Optimization
  - Set Download Mode to LAN (1) or LAN and Group (2)
  - Configure peer caching, cache size, and cache age settings
- Always include robust detection logic to prevent loops or unnecessary reinstalls. A common mistake is triggering reinstallation when the detection method fails due to minor version differences or missing registry entries. For MSI-installed apps in particular, ensure detection is specific, version-aware, and avoids overmatching.
Final Thoughts
App management in 2025 doesn’t need to be painful.
For SMEs and agile orgs, tools like Winget and Winget-AutoUpdate can replace traditional packaging entirely. For enterprises, they offer a complementary approach that handles the 80% of apps that don’t require slow UAT and change boards.
In short:
- Automate where you can
- Control where you must
- And stop babysitting update downloads manually
The future of app deployment is scripted, scheduled, and silent.
Stay tuned for our next post: "Stop Building On-Prem Group Policy Castles in the Cloud."
Intune Done Right: Killing Local Admin Rights Without Killing Productivity
It’s 2025. If your users still have local admin rights "just in case," you’re not managing risk — you’re managing liability. Modern endpoint management has moved on, and your approach to admin rights needs to move with it.
Gone are the days when the only way to keep users productive was to give them local admin and hope for the best. With tools like Intune Endpoint Privilege Management (EPM), LAPS, and third-party options like BeyondTrust, you can now strip out excessive rights without creating a helpdesk nightmare.
Why Local Admin Rights Are Still a Thing (Unfortunately)
Let’s be real: users ask for admin rights because something doesn’t work. They want to install Zoom, update a printer driver, or run legacy software. And IT teams often give in because they don’t have the tools to offer a better experience.
But every local admin account is a security hole waiting to be exploited. Admin rights:
- Let users install unapproved software.
- Increase malware risk.
- Create compliance nightmares.
- Make zero trust enforcement nearly impossible.
- Enable privilege escalation from malware or untrusted software.
- Allow users to disable parts of the endpoint security stack.
In short: they’re the low-hanging fruit that attackers love.
The Right Way to Manage Admin Rights in 2025
Here’s how smart organisations are doing it now:
1. Intune Endpoint Privilege Management (EPM)
Microsoft's built-in EPM lets you elevate specific applications without giving users full admin rights. Users can request elevation when needed, and approvals are audited. It’s not perfect yet, but it’s getting better with each release. Elevations are policy-driven, and logged — and it integrates directly with Intune's device management stack.
That said, Intune EPM has some fairly serious limitations. It only really makes sense if you're already paying for the Intune Suite or plan to use multiple products from it. As a standalone value prop, it can be hard to justify for organisations that just need elevation for a few apps.
Security: Decent, assuming elevation policies are tightly scoped.
User Experience: Improving, but users may be confused by elevation prompts and inconsistent delays.
One important caveat: Intune EPM is limited to elevating .exe, .msi, and .ps1 files only. Elevation rules rely on static identifiers like file hashes, paths, and optional certificates, which can be problematic for apps that update frequently. If a file’s hash changes — say, after a patch or version update — the elevation rule breaks unless you manually update the policy. That makes it fragile for anything dynamic, like dev tools, self-updating apps, or scripts with changing content. Reusable certificate-based rules help a bit, but for fast-moving environments, this can be a real headache.
2. LAPS (Local Admin Password Solution)
Use LAPS to rotate unique, strong local admin passwords per device. This gives IT a secure break-glass option if elevation fails — without leaving backdoors open. The new Intune-integrated LAPS makes password recovery seamless for authorised support teams while eliminating the shared password problem.
Security: Excellent for emergency access; passwords are rotated and logged.
User Experience: Not user-facing — strictly for IT use.
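When support does need that break-glass account, the stored password can be pulled on demand. A sketch assuming Windows LAPS backed up to Entra ID, the built-in LAPS PowerShell module plus Microsoft Graph sign-in, and a placeholder device name:

```powershell
# Requires the Windows LAPS PowerShell module and a Graph connection with
# the DeviceLocalCredential.Read.All permission. "LAPTOP-042" is a placeholder.
Connect-MgGraph -Scopes "DeviceLocalCredential.Read.All", "Device.Read.All"

# Retrieve the current rotated local admin password for one device.
Get-LapsAADPassword -DeviceIds "LAPTOP-042" -IncludePasswords -AsPlainText
```

Every retrieval is auditable, and the password rotates afterwards on schedule — which is precisely what makes this safer than a shared local admin credential.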
3. BeyondTrust EPM or CyberArk EPM
Let’s be blunt: Entra PIM won’t save you here. It’s too slow to sync and isn’t designed for endpoint-level privilege. If you need fast, responsive, local elevation with full logging and policy control, these enterprise-grade tools are the gold standard. They also offer automated workflows, policy-based approvals, and better user experience than the current Microsoft stack.
Security: Top-tier. Granular policy control, full audit trails, real-time logging.
User Experience: Excellent. Slick, responsive UI with clear prompts and low friction.
4. Stop Needing Elevation in the First Place
Half the time users want admin access, it’s to install or update something. Fix that:
- Use Patch My PC or Chocolatey to automate application deployment.
- Keep your app catalogue fresh with ongoing lifecycle management.
- Package apps as Win32 in Intune and use supersedence to manage updates.
- Ensure all devices follow a consistent deployment baseline through Autopilot.
Making apps available through the Company Portal avoids users even thinking about admin access. Fewer helpdesk calls, fewer breaches, and far fewer frustrated sighs.
Security: Excellent — no admin rights required.
User Experience: Seamless if application packaging is up to date.
Product Comparison Table
| Solution | Security Rating | UX Rating | Best Use Case | Limitations |
|---|---|---|---|---|
| Intune EPM | Moderate | Moderate | General users, basic app elevation | Requires Intune Suite, limited standalone value |
| LAPS (w/ Intune integration) | High | N/A | IT break-glass local access | No user-facing capability |
| BeyondTrust/CyberArk EPM | High | High | Developers, power users, secure elevation | Cost, deployment overhead |
| Patch My PC/Chocolatey | High | High | App management without elevation | Limited to app ecosystem |
| Entra PIM | Moderate | Low | Cloud role elevation | Not suitable for endpoint elevation |
The Big Use Cases: Support Teams and Developers
Let’s not ignore the usual suspects:
- Support technicians often ask for local admin to run diagnostics or install drivers in the field. Instead, equip them with escalation tools like EPM or provide a secure break-glass admin login managed through LAPS.
- Developers are notorious for wanting admin rights to compile code, install packages, or tweak system settings. Instead of blanket elevation, create containerised dev environments or use role-based elevation tools that grant permissions only when needed. If you need to, give them a dedicated dev VM with full access — not unrestricted access to their day-to-day laptop.
Both groups can function securely without standing admin rights. It just requires a bit of design thinking and the right tools.
Bad Practices That Still Need Killing
- Giving temporary local admin via GPO and forgetting to remove it.
- Letting devs or contractors have permanent elevation "for convenience."
- Assuming Conditional Access policies protect devices with full admin.
- Using shared local admin accounts across machines.
These aren’t just outdated; they’re dangerous.
What About Admins Themselves?
Your IT team shouldn’t be excluded from this conversation:
- Use Privileged Access Workstations (PAWs) with no standing access.
- Use Entra PIM for elevating to tenant-wide roles like Global Admin — but not for local rights.
- Keep the principle of least privilege everywhere.
- Review access regularly and tie all admin access to MFA and logging.
Even your sysadmins shouldn’t be above policy.
Final Thought - The Productivity Myth
Some argue admin rights boost productivity. In reality, they create inconsistency, downtime, and rework when machines drift from the baseline. By automating app installs and reducing the need for intervention, you make users faster and IT more scalable. With devices now out in the wild, connected via home Wi-Fi, Starlink, or mobile tethering, you can't rely on traditional network security perimeters. That makes endpoint security critical. And admin rights are the soft underbelly.
Stripping them out reduces lateral movement, kills off common malware vectors, and forces you to build a better, automated management experience that scales.
It also means that when devices are compromised, the blast radius is smaller — no local admin, no domain escalation. Removing admin rights isn’t about making life harder for users — it’s about giving them a better, safer experience without the risk. The tooling is here. The excuses aren’t.
Just remember: in 2025, default admin is a default fail. Choose tools that match your user base, automate what you can, and ditch the habit of treating every laptop like a domain controller in disguise.
MDM Isn’t Enough: Why You Still Need a Real Security Strategy
You’ve deployed Intune. Devices are enrolling, compliance policies are lighting up green, and someone’s gone full hero mode because Defender says your estate is “secure.”
Sorry to be that guy, but: MDM isn’t the endgame. It’s the start line!
In 2025, modern attackers don’t care about your BitLocker compliance. They’re jumping across cloud sessions, hijacking tokens, exploiting stale service accounts, and laughing at environments that think mobile device management equals a security strategy.
Let’s unpack why MDM on its own is dangerously incomplete — and what a real enterprise security posture should look like.
MDM: Great at Policy, Awful at Visibility
Intune and other MDM platforms (like JAMF, Workspace ONE, or MobileIron) are brilliant for configuration. They enforce device settings, deploy apps, and ensure a certain level of hygiene across your fleet. But what they don’t do is monitor, detect, or respond to threats.
They’re like a bouncer checking IDs at the door, but once someone’s inside, they’re not watching the room.
MDM can:
- Enforce encryption (BitLocker/FileVault)
- Require a PIN or biometric login
- Set compliance policies for OS version, AV, etc.
- Deploy apps and apply device restrictions
MDM can’t:
- Detect lateral movement or token replay
- Analyse cloud sign-ins and behavioural anomalies
- Prevent data exfiltration in real-time
- Intervene during an active attack
- Correlate endpoint and identity risk
If your estate is “secure” because Intune says it’s compliant, you’ve got a false sense of safety. And that’s worse than none at all.
Common MDM-Only Mistakes We Still See
1. Conditional Access with More Holes Than Swiss Cheese
Let’s say you’ve deployed CA policies – brilliant. But then come the exclusions:
- “We’ll skip MFA for VIPs”
- “This app doesn’t support modern auth, just allow it”
- “Printers can’t do Conditional Access, so bypass them”
You’ve just created your own attack surface, piece by piece. Attackers love legacy – they’ll happily sidestep your security by abusing the same gaps you made for “convenience.”
2. Assuming Defender Antivirus Is the Same as Defender for Endpoint
Built-in AV? Great. But where’s your threat intel? Where’s the behavioural analysis? Where’s the 24/7 monitoring?
If you’re not backing MDM with Defender for Endpoint (Plan 2) or an equivalent EDR/XDR stack, you're blind to what’s happening after login.
3. Admins with Too Much Access, For Too Long
Let’s be crystal clear - A Global Admin in Entra ID (Azure AD) isn’t just a Microsoft 365 superuser — they’re a tenant god.
With the right API access, a Global Admin can:
- Delete users, data, and services
- Change or remove security policies
- Modify Conditional Access rules
- Reset other admins’ credentials
- Delete your entire Azure subscription
Yes, really. If you’re integrated with Azure and using a CSP or enterprise subscription, GA rights extend across Microsoft 365 and Azure. One compromised GA account can lead to full platform loss, including compute, networking, identity, and storage.
So if you're handing out GA because someone “needs to make a mailbox,” you’re putting your entire estate on the line.
Security tip:
- Adopt a Least Privilege Model – Only assign the minimum permissions needed for the task. Most users don’t need GA. Most IT staff don’t either. Delegate roles like Exchange Admin or Security Reader instead.
- Use Privileged Identity Management (PIM) – Just-in-time access with approval, MFA, timeouts, and justification. No more standing admin rights.
- Separate Admin Accounts – Never allow daily-use accounts to have admin privileges. Admin accounts should be isolated and used only when required.
- Enforce MFA on All Admin Roles – And audit for any exclusions. MFA should be non-negotiable.
- Monitor Admin Sign-Ins – Set alerts on Global Admin activity, especially from new locations or devices.
- Review Access Regularly – Make it part of your quarterly checks. If someone doesn’t need GA anymore, revoke it.
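That regular review doesn’t have to be a manual trawl through the portal. A sketch using the Microsoft Graph PowerShell SDK (assumes the Microsoft.Graph module is installed and you hold a role that can read directory role assignments):

```powershell
# Requires: Install-Module Microsoft.Graph
Connect-MgGraph -Scopes "RoleManagement.Read.Directory"

# Find the activated Global Administrator role, then list its members.
$role = Get-MgDirectoryRole -Filter "displayName eq 'Global Administrator'"
Get-MgDirectoryRoleMember -DirectoryRoleId $role.Id |
    ForEach-Object { $_.AdditionalProperties.userPrincipalName }
```

If that list is longer than you can justify out loud, that’s your quarterly action item.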
In short: If one compromised login can delete your entire subscription… you don't have a secure environment.
4. Zero Control Over App Updates
Intune can deploy apps, sure. But who’s updating them? Your 500-user estate might be rocking:
- Chrome v91
- Java runtimes from 2016
- 12 versions of Zoom
Modern attacks are exploiting apps more than the OS. If you’re not automating third-party patching with Intune Suite, Patch My PC, or Chocolatey, you're living on borrowed time.
What You Actually Need: A Layered Security Strategy
Security isn’t one tool. It’s an architecture. A mindset. A set of non-negotiables backed by automation, not best guesses.
Here’s what a real security stack for modern management looks like:
| Layer | Tooling Example | Purpose |
|---|---|---|
| MDM & Compliance | Intune, JAMF | Enforces baseline device hygiene |
| Access Control | Conditional Access, PIM | Ensures right access at the right time |
| Threat Detection | Defender XDR, Sentinel, Splunk | Detects, analyses, and correlates suspicious activity |
| Identity Protection | Entra ID Protection | Flags risky users, impossible travel, sign-in anomalies |
| App Management | Patch My PC, Intune Suite, Chocolatey | Keeps apps secure and updated |
| Data Protection | TLS 1.2/1.3, Purview DLP | Protects data in transit and at rest |
This isn't just "nice to have" anymore. With hybrid work, BYOD, and cloud services everywhere, your devices are exposed all the time.
But What About BitLocker PINs and BIOS Lockdown?
Let’s have the honest conversation: BitLocker PINs sound good on paper, but in practice, they’re a user-hostile security placebo.
Sure, they add an extra hurdle at boot — but let’s be real: most of your threats aren’t sitting in car parks with a crowbar and 30 minutes to brute-force a laptop. And if they are? A six-digit PIN isn't stopping them.
What a BitLocker PIN actually does:
- Slows down boot time and frustrates users (especially in policing, healthcare, or emergency services).
- Adds support overhead when someone forgets it at 5am.
- Gives a false sense of protection against physical threats.
And what it doesn’t do:
- Prevent nation-state or skilled attackers with physical access from getting in.
- Stop firmware-level attacks, bus sniffing, or TPM-side exploits.
- Protect against 99% of real-world threats — like phishing, token theft, or lateral movement.
If you’re relying on a boot PIN for security, you’re fighting yesteryear’s threats. Most data theft today isn’t someone stealing a laptop – it’s someone phishing credentials and logging in with full access.
What Actually Works?
Modern endpoint security is built on layers — not PINs. Here’s what actually protects your users and estate:
- UEFI Secure Boot to block bootkits.
- TPM 2.0 with BitLocker.
- Code Integrity + VBS + HVCI to enforce secure kernel operations.
- Zero trust enforcement via Intune compliance and Conditional Access.
- FIDO2 key-based auth to kill off password-based logins entirely.
- Privileged Access Workstations (PAWs) for admins — no standing access.
- TLS 1.2/1.3 everywhere for data in transit (SSL is long deprecated).
This is what actually holds up under scrutiny — even if your adversary isn't a petty thief, but a nation-state with time, tools, and talent.
Your posture should prevent logical access, not just physical.
Final Thoughts: If Intune Says You're Compliant, Are You Actually Secure?
That green tick might feel good. But it doesn’t mean you’re protected.
- Compliance ≠ Protection
- MDM ≠ Monitoring
- Access ≠ Identity validation
- Config ≠ Threat detection
Real security is layered, automated, and assumed breached until proven otherwise. If you’ve deployed Intune, great. Now back it with Conditional Access, EDR, automated patching, and identity threat detection.
Because when the breach happens — and it will — you don’t want to be the one saying “but we had Intune.”
Need help pulling this together in your organisation?
Drop me a line. Or better yet — review your Conditional Access exclusions before someone else does.
Windows 10 Is Out, Windows 11 Is In: Why It’s Time to Upgrade
January 21, 2025 · Blog · Intune, Windows 11
The clock is ticking for Windows 10. Microsoft has officially marked October 14, 2025, as the end-of-life date for the beloved OS. That might sound like a distant deadline, but in IT terms, it’s right around the corner. If your organization hasn’t started planning for Windows 11, now’s the time to get moving.
Let’s talk about what end-of-life means, why you can’t ignore it, and how to make the jump to Windows 11 without losing your mind (or your weekends).
What Does End-of-Life Mean for Windows 10?
When Windows 10 hits its expiration date, here’s what you’ll lose:
- No more security updates: Vulnerabilities won’t get patched, leaving your systems exposed.
- No feature updates: Windows 10 will stagnate while the rest of the tech world moves forward.
- No official support: You’re on your own for troubleshooting and compliance issues.
Running an unsupported OS isn’t just risky—it’s a compliance and security nightmare. For most businesses, staying on Windows 10 past 2025 simply isn’t an option.
Why Upgrade to Windows 11?
Windows 11 isn’t just a cosmetic refresh. It’s built for the modern workplace, focusing on security, productivity, and future-proofing. Here’s why it’s worth the move:
- Zero Trust Security: Features like TPM 2.0, Virtualization-Based Security (VBS), and hardware root-of-trust support make it a fortress for modern threats.
- Productivity Enhancements: Snap Layouts, better multi-monitor support, and Teams integration help users work smarter, not harder.
- Hybrid Work Ready: Optimized for cloud environments, Windows 11 is built with hybrid and remote work in mind.
- Long-Term Support: Sticking with Windows 10 means falling further behind, while Windows 11 positions your organization for what’s next.
Planning Your Windows 11 Upgrade
The move to Windows 11 doesn’t have to be chaotic. Here’s a step-by-step approach to keep things smooth:
1. Assess Your Hardware
Windows 11 has stricter requirements than its predecessor (hello, TPM 2.0). Start by auditing your current hardware to see what’s compatible.
Use Intune for a Hardware Assessment
- Navigate to Endpoint Analytics in Intune.
- Check the Work From Anywhere report to identify devices that are Windows 11 ready.
Tip: Devices that don’t meet the requirements may need upgrading or replacing—start budgeting early.
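Alongside the Endpoint Analytics report, the published Windows 11 minimums can also be expressed as a quick scripted filter over an exported inventory. A minimal sketch in Python — the device record fields (`tpm_version`, `ram_gb`, and so on) are assumed for illustration, not an actual Intune export format:

```python
# Minimal Windows 11 readiness filter over a device inventory.
# The inventory dict shape is an assumption, not an Intune export format.

WIN11_MINIMUMS = {
    "tpm_version": 2.0,   # TPM 2.0 required
    "ram_gb": 4,          # 4 GB RAM minimum
    "storage_gb": 64,     # 64 GB storage minimum
    "cpu_cores": 2,       # dual-core or better
    "cpu_ghz": 1.0,       # 1 GHz or faster
}

def win11_ready(device: dict) -> list[str]:
    """Return the list of failed requirements (empty list = ready)."""
    failures = []
    if device.get("tpm_version", 0) < WIN11_MINIMUMS["tpm_version"]:
        failures.append("TPM 2.0")
    if not device.get("secure_boot", False):
        failures.append("UEFI Secure Boot")
    if device.get("ram_gb", 0) < WIN11_MINIMUMS["ram_gb"]:
        failures.append("4 GB RAM")
    if device.get("storage_gb", 0) < WIN11_MINIMUMS["storage_gb"]:
        failures.append("64 GB storage")
    if (device.get("cpu_cores", 0) < WIN11_MINIMUMS["cpu_cores"]
            or device.get("cpu_ghz", 0) < WIN11_MINIMUMS["cpu_ghz"]):
        failures.append("1 GHz dual-core 64-bit CPU")
    return failures

fleet = [
    {"name": "LT-001", "tpm_version": 2.0, "secure_boot": True,
     "ram_gb": 16, "storage_gb": 256, "cpu_cores": 8, "cpu_ghz": 2.4},
    {"name": "LT-002", "tpm_version": 1.2, "secure_boot": True,
     "ram_gb": 8, "storage_gb": 128, "cpu_cores": 4, "cpu_ghz": 2.0},
]

for d in fleet:
    missing = win11_ready(d)
    print(d["name"], "ready" if not missing else "blocked by: " + ", ".join(missing))
```

Even a rough filter like this helps you split the estate into "upgrade in place" and "budget for replacement" lists early.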
2. Test the New OS
Don’t roll out Windows 11 to everyone at once. Test it with a small group of users first (preferably IT and power users). Use their feedback to refine your deployment strategy.
3. Leverage Intune for Deployment
If you’re managing your environment with Intune, upgrading to Windows 11 is straightforward. Use Windows Update for Business to deploy feature updates in a controlled manner:
- Go to Devices > Windows > Feature Updates.
- Create a new policy targeting Windows 11 24H2 or the latest version.
- Assign the policy to Entra ID (formerly Azure AD) groups.
Pro Tip: Use Update Rings to stagger the rollout and test updates before pushing them to all users.
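Under the hood, a feature update profile is just a small JSON resource. The sketch below builds the request body for the (beta) Microsoft Graph `windowsFeatureUpdateProfiles` endpoint — verify the endpoint and property names against the current Graph documentation before relying on them, and note that authentication and the actual POST are omitted here:

```python
import json

# Sketch of the JSON body behind an Intune feature update profile.
# Endpoint and property names follow the (beta) Microsoft Graph
# windowsFeatureUpdateProfile resource; check current docs before use.
# Token acquisition and the HTTP POST itself are intentionally omitted.

GRAPH_URL = "https://graph.microsoft.com/beta/deviceManagement/windowsFeatureUpdateProfiles"

def feature_update_profile(name: str, target_version: str) -> dict:
    """Build the request body for a Windows feature update profile."""
    return {
        "displayName": name,
        "description": f"Pin devices to {target_version}",
        "featureUpdateVersion": target_version,  # e.g. "Windows 11, version 24H2"
    }

body = feature_update_profile("Win11 24H2 rollout", "Windows 11, version 24H2")
print(json.dumps(body, indent=2))
# POST this body to GRAPH_URL with an authenticated session to create the profile.
```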
4. Train Your Users
Even minor interface changes can confuse users. Prepare your teams with:
- Short training sessions on Windows 11’s new features.
- Guides or videos highlighting productivity boosters like Snap Layouts and Teams integration.
Overcoming Common Objections
Some users (or even IT admins) might resist the change. Here’s how to address the usual pushback:
- “We need to test everything before upgrading.”
  Sure, but Windows 11 is part of an evergreen model—updates will keep coming. Adopt a test-and-deploy mindset instead of waiting for “perfect.”
- “Our hardware isn’t ready.”
  Start identifying devices that need replacing now. You’ve got time, but don’t leave it too late.
- “We can’t afford downtime.”
  Use Intune’s zero-touch deployment capabilities and tools like Windows Autopilot to minimize disruption.
Why Procrastination Is a Bad Idea
October 2025 might seem far away, but large-scale migrations take time. If you wait until 2025 to start planning, you’ll be scrambling to meet the deadline. Here’s why early action is critical:
- Avoid the rush: As the deadline approaches, vendors and support teams will be swamped.
- Stay secure: Upgrading sooner means fewer months spent on an unsupported OS.
- Get ahead of the curve: Windows 11’s modern tools and features can drive productivity gains right now.
Final Thoughts
The end of Windows 10 isn’t just a deadline—it’s an opportunity. Moving to Windows 11 brings better security, improved productivity, and a platform built for the future of work. With tools like Intune and Windows Autopilot, the upgrade process can be streamlined and efficient.
So, don’t wait. Start planning your move today, and by the time October 2025 rolls around, you’ll already be reaping the benefits of Windows 11.
Ready to Get Started?
Let me know how I can help you make your Windows 11 upgrade seamless and stress-free!
Windows 11 Hotpatching: Because Nobody Likes Reboots
December 15, 2024 · Blog · Windows 11, 24H2
Patch Tuesday just got a whole lot less painful. With Hotpatching, Microsoft is bringing a server-grade feature to Windows 11 Enterprise, letting you install security updates without the dreaded reboot. Think of instant protection, no downtime, and fewer interruptions for your users. Sound too good to be true? Let’s break it down.
What is Hotpatching?
"Hotpatching" is all about applying security updates directly to in-memory processes—no reboot required. It’s been around in Windows Server for a while, but now it’s making its way to Windows 11 Enterprise (version 24H2).
Here’s how it works:
- Updates are applied immediately to running processes.
- No restart means users can carry on working without disruption.
- Your systems stay protected with near-zero downtime.
It’s a win for productivity and IT sanity alike.
Why Does Hotpatching Matter?
Traditional updates usually require a reboot, which means:
- Interrupting users mid-task (cue angry emails).
- Scheduling downtime for critical systems.
- Delayed patching, leaving vulnerabilities exposed longer.
Hotpatching flips the script. Here’s why it’s a game-changer:
- Instant protection: Updates take effect as soon as they’re applied.
- No downtime: Systems stay up and running—no user complaints about “another restart.”
- Fewer reboots overall: Microsoft reduces the yearly reboot count from 12 (monthly) to just 4 (quarterly).
How Does It Work?
Hotpatching follows a simple quarterly update cycle:
1. Quarterly Baseline Updates: At the start of each quarter, a cumulative update installs the latest features and security patches. This one does require a reboot, but only four times a year.
2. Monthly Hotpatch Updates: For the next two months, hotpatches deliver security fixes without restarting the system.
This streamlined process means fewer interruptions for your users and faster adoption of critical security updates.
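The cadence is easy to model: the month that opens each quarter carries the cumulative baseline (and its reboot), and the other two carry hotpatches. A toy sketch — the exact alignment of baseline months with Microsoft’s release calendar is an assumption here:

```python
# Toy model of the hotpatch cadence: quarter-opening months carry the
# cumulative baseline (reboot required); the other two carry hotpatches.
# Alignment of baseline months with Microsoft's calendar is assumed.

def update_type(month: int) -> str:
    """Classify a calendar month (1-12) in the hotpatch cycle."""
    if not 1 <= month <= 12:
        raise ValueError("month must be 1-12")
    # Months 1, 4, 7, 10 open a quarter -> baseline update + reboot.
    return "baseline (reboot)" if month % 3 == 1 else "hotpatch (no reboot)"

schedule = {m: update_type(m) for m in range(1, 13)}
reboots = sum(1 for v in schedule.values() if v.startswith("baseline"))
print(f"Reboots per year: {reboots}")  # prints "Reboots per year: 4"
```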
Getting Started with Hotpatching
To take advantage of Hotpatching, here’s what you’ll need:
- Windows 11 Enterprise (24H2 or later): Build 26100.2033 or above is required.
- Microsoft Intune: For managing update policies.
- Licensing: Windows Enterprise E3 or E5 subscription.
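If you want to gate enrolment on that minimum build, a simple version comparison does the job. A quick sketch, assuming build strings of the form `major.minor`:

```python
# Check an OS build string against the minimum called out above
# (build 26100.2033 on 24H2). Build string parsing is a simple sketch.

MIN_BUILD = (26100, 2033)

def hotpatch_eligible(build: str) -> bool:
    """True if a build string like '26100.2033' meets the minimum."""
    major, minor = (int(part) for part in build.split("."))
    return (major, minor) >= MIN_BUILD

print(hotpatch_eligible("26100.2033"))  # True
print(hotpatch_eligible("26100.1000"))  # False - 24H2, but below the revision
print(hotpatch_eligible("22631.4460"))  # False - a 23H2 build
```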
How to Enable Hotpatching in Intune
Setting up Hotpatching is straightforward:
1. Log into Intune:
- Head to the Microsoft Intune admin center (formerly the Microsoft Endpoint Manager admin center).
2. Create a New Update Policy:
- Navigate to Devices > Windows > Update rings for Windows 10 and later.
- Click + Create Windows quality update policy (preview).
- Configure the policy to enable Hotpatching.
3. Assign the Policy:
- Target specific device groups for the update ring.
4. Monitor Compliance:
- Use Intune’s Update Compliance Reports to track which devices are up to date and identify any issues.

When Should You Use Hotpatching?
Hotpatching is perfect for organizations that can’t afford downtime but still need to stay secure. Key use cases include:
- Enterprise Desktops: Keep users productive without disruption.
- Healthcare and Finance: High-availability environments where downtime isn’t an option.
- Critical Systems: Protect machines immediately without scheduling reboots.
Limitations to Keep in Mind
While Hotpatching is a massive improvement, it’s not a magic bullet:
- It’s only available for Windows 11 Enterprise (sorry, Pro users).
- Major feature updates and some patches still require reboots (but far fewer).
Final Thoughts: Updates Made Easier
Hotpatching is a breath of fresh air for IT teams juggling user productivity and system security. By applying updates without restarts, you get the best of both worlds: up-to-date systems and happy end-users.
If you’re running Windows 11 Enterprise, now’s the time to embrace this feature. Set up your policies, roll it out, and say goodbye to the endless cycle of reboots. Hotpatching is here to make your life easier—use it!
What’s Next?
Got questions about setting up Hotpatching or managing updates? Let me know—I’m here to help.
Let’s keep those systems secure and running.
Intune Done Right: Wrangling App Chaos, One Update at a Time
Managing applications across a diverse set of devices has always been a challenge, especially in enterprise environments. With Intune, you’ve got a powerful toolset for app deployment, version control, and automated updates. But Intune alone has its limits—especially when it comes to updating third-party apps and managing large app libraries. Here’s how to optimize your application management strategy in Intune, and where supersedence and third-party tools like Microsoft Intune Suite, Chocolatey, and Patch My PC can help make app management smooth, scalable, and headache-free.
1. Choose the Right Deployment Strategy for Your Apps
Intune offers several deployment methods, allowing you to tailor delivery to different app types and user needs. However, one pitfall to avoid is making all apps required. While Intune does handle dependencies well, it installs required apps in a random order, meaning it has no concept of which apps are most important to the user.
Imagine a user with 15 required apps. They could be left waiting for the specific app they need to complete a task while other, less critical apps install first. This unpredictable order can lead to user frustration, wasted time, and unnecessary IT support requests.
To sidestep these issues, consider deploying only essential apps as required and using Available for Install for non-critical apps that users can download as needed from the Company Portal. This approach allows users to access important apps immediately, without waiting for the entire set to install.
- Available for Install: Perfect for optional applications that users can download as needed through the Company Portal.
- Required Install: Ensures that essential apps are deployed automatically to all targeted devices or groups, with no user intervention required. Keep this list to only the most essential apps.
- Uninstall: Quickly removes apps from specified devices, helping you maintain compliance or remove outdated versions easily.
Selecting the right deployment type based on your app’s function and necessity allows you to give users the flexibility they need while ensuring critical applications are always up-to-date.
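To put a number on the "random order" problem: with 15 required apps installed in a random sequence, the one app the user actually needs lands, on average, halfway down the queue. A toy simulation — the flat three-minute install time per app is an assumption purely for illustration:

```python
import random

# Toy illustration of the "15 required apps, random order" problem:
# with a random install order, the one app the user actually needs sits,
# on average, halfway down the queue. Install times are made up.

random.seed(42)

apps = [f"app-{i}" for i in range(15)]
critical = "app-0"          # the app the user needs right now
install_minutes = 3         # assumed flat install time per app

waits = []
for _ in range(10_000):
    order = random.sample(apps, len(apps))          # random install order
    waits.append(order.index(critical) * install_minutes)

avg_wait = sum(waits) / len(waits)
print(f"Average wait before {critical} installs: {avg_wait:.1f} minutes")
```

Around twenty minutes of dead time, on average, before the app that matters even starts installing — which is exactly why "Available for Install" for the non-critical tail is worth it.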
2. Simplify App Version Control with Supersedence—But Beware the Manual Workload
Intune’s supersedence feature is a useful tool for updating Win32 applications when you’re managing apps through Intune alone. Supersedence lets you specify that a new version of an app replaces an older one, automatically removing outdated versions across your environment. This is especially valuable for controlling Microsoft applications and Win32 apps, helping to keep app versions consistent.
However, supersedence relies on manually packaging and updating each application—an enormous workload if you’re managing a large app library. Imagine you’re handling 500 apps, each requiring manual packaging and configuration updates with each new version. Without automation, supersedence can become a bottleneck in maintaining an evergreen environment. Here’s where tools like Microsoft Intune Suite, Chocolatey, and Patch My PC shine.
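Conceptually, supersedence is a chain of "new replaces old" records, and working out a device's target version means walking that chain. A simplified sketch — the app names, versions, and data shape are illustrative, not Intune's internal model:

```python
# Simplified model of supersedence: each record says "old version is
# replaced by new version". Resolving from any installed version walks
# the chain to the current one. Data shape is illustrative only.

supersedes = {
    # (app, old version) -> (app, version that replaces it)
    ("7-Zip", "22.01"): ("7-Zip", "23.01"),
    ("7-Zip", "23.01"): ("7-Zip", "24.08"),
}

def resolve(app: tuple[str, str]) -> tuple[str, str]:
    """Follow the supersedence chain to the current version."""
    seen = set()
    while app in supersedes:
        if app in seen:  # guard against a misconfigured loop
            raise ValueError(f"supersedence loop at {app}")
        seen.add(app)
        app = supersedes[app]
    return app

print(resolve(("7-Zip", "22.01")))  # ('7-Zip', '24.08')
```

The catch the section above describes is that every entry in that mapping corresponds to an app you packaged and configured by hand — fine for a dozen apps, painful for five hundred.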
3. Filling the Gaps: Intune Suite, Chocolatey, and Patch My PC
If your app library is extensive, you need more than just supersedence to keep applications current. Microsoft Intune Suite, Chocolatey, and Patch My PC offer automation and streamlined packaging capabilities, making large-scale app management far easier. Here’s a breakdown:
- Microsoft Intune Suite: Expanding on Intune’s core capabilities, Intune Suite provides enhanced automation, security, and management features. Its app management capabilities offer deeper support for automating updates, removing the need for manual packaging, and improving visibility, allowing IT to manage large app portfolios more effectively without the manual work required by supersedence.
- Chocolatey: This package manager simplifies the deployment, updating, and removal of third-party applications and integrates well with Intune. Chocolatey automates the packaging and updating process for a wide range of applications, eliminating the manual steps required with supersedence alone.
- Patch My PC: Specifically designed for third-party app patching, Patch My PC integrates seamlessly with Intune to provide automated updates for a wide array of third-party apps. With robust reporting, version control, and auto-update capabilities, Patch My PC ensures your app library stays evergreen without constant manual intervention. This tool is especially valuable for large app libraries, allowing IT teams to automate patching and package updates with ease.
Using one of these tools alongside Intune reduces the manual work involved in packaging, deploying, and updating apps across a large enterprise, helping you maintain an up-to-date, secure app ecosystem with minimal hands-on effort. Yes, these tools come at a cost, but that cost is offset by greatly reduced effort and complexity.
4. Streamline App and OS Updates with Windows Autopatch for a Fully Evergreen Environment
Modern management isn’t just about deploying applications—it’s about keeping everything, from apps to the operating system, up-to-date in a secure, evergreen state. Windows Autopatch, available as part of Intune, takes OS updating a step further by automating Windows updates across your organization. Unlike Windows Update for Business (WUfB), Autopatch is a fully managed service that handles Windows quality and feature updates on your behalf, freeing up IT resources and ensuring a consistent, optimized update process.
When paired with third-party tools like Patch My PC and Chocolatey to automate updates for non-Microsoft applications, Autopatch enables a comprehensive, evergreen environment. This integrated approach ensures all software stays secure and current without manual effort, providing a seamless experience for end-users and a more resilient setup for IT.
5. Provide Flexible, Self-Service Options for User Empowerment
Users often need quick access to certain apps that may not be “required” for everyone. Intune’s Company Portal allows you to publish optional apps, giving users the freedom to install what they need, when they need it. By using this self-service model, you enable users to install optional apps or essential updates immediately without relying on IT.
This self-service approach is especially useful when updates are rolled out across the organization. Users can check the portal for the latest versions or download optional tools as their needs evolve. The flexibility to access apps on-demand improves user satisfaction, cuts down on IT support requests, and provides a more agile, responsive experience.
Wrapping Up: Optimizing App Management with Intune
When used to its full potential, Intune streamlines app deployment, updates, and management in ways that meet the needs of modern enterprises. From automated updates and lifecycle management to custom configurations and proactive monitoring, Intune enables your organization to stay flexible, secure, and ready to support user productivity.
However, for large app libraries, relying on Intune alone (and supersedence) for updates can lead to a high manual workload. Third-party tools like Microsoft Intune Suite, Chocolatey, and Patch My PC take Intune’s capabilities to the next level by automating patching, packaging, and updating processes that would otherwise require extensive hands-on effort. By combining Intune with specialized tools, you’re setting up your organization for smoother, more efficient app management that meets the demands of today’s dynamic, evergreen IT environment.