DISPATCH

THINKING ABOUT SHADOW AI

SEPTEMBER 4TH, 2025
Andrew Melville: So, what I've been thinking about is this phenomenon that's emerging - let's call it shadow AI. Employees at many different companies are using personal AI accounts on Claude or ChatGPT to do their work. In a lot of cases, that's because the company has either banned AI or mandated some internal tool or copilot that doesn't deliver the performance they need. Rather than wait around for traditional IT procurement, as they would have in the past, they're just paying with their own credit card and using it.

Ramsay Brown: Yeah - putting myself in the shoes of someone responsible for thinking about how a company's going to win, this seems like a complete black hole.

AM: That's exactly it. It's interesting to look at the past: there was no way for an employee to just decide "I'm going to start using SAP" or "I'm going to start using Oracle databases" in their work. You didn't have that option - executives and the procurement team decided what you used. That started to change a bit with cloud and SaaS. In many organizations there might have been a brainstorming or whiteboarding tool where employees got a little flexibility in what they used - maybe a free version that would roll into a paid version.

But this is really the first time you have what I'd call industrial-grade software that an employee can adopt on their own. They could literally take a code base they're working on, dump it into AI, and start using it. Your finance team could take quarterly reports - sensitive financial information - put it in there and run analysis on it: competitive analysis, strategic questions. It's literally just a browser away. They're able to dump this essentially onto the open web, and once it goes into a personal account on a public LLM like ChatGPT or Claude, it's out of the company's control - depending on the account's data settings, the provider may retain it and use it to train future models.

So very sensitive data - and in the case of financial or customer data, legally protected data, where the company may actually be breaking the law when it flows into a public LLM - is literally a copy-paste away. And I think that's one of the challenges: because it's only a tab and a copy-paste away, the barrier to using this technology is so small that it sometimes doesn't seem like as big an issue as it is.

RB: This issue is huge. As of early 2024 there was a crop of technology companies - ours included - providing technical solutions to what's called data loss prevention: keeping information from leaving your systems. You could run scans on outbound content, maintain whitelists or blacklists of terms and topics that shouldn't go into LLMs. There was a batch of companies confidently building in this space. All of us looked at it really early and said: that's going to be a big problem, and we know how to solve it with technology. Yes, it's still a cultural issue and a management crisis, but we felt confident about the technology. Everyone built it, and then it became an open-source commodity - no single team has really won that space, even though the idea has been around for a while. You can't build an empire on it anymore, can't build a venture-scale-return company on that idea. But now suddenly everyone is talking about it. Why are people treating it as a real threat now, instead of 18 months ago when people were trying to build this? What's suddenly changed?
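
A minimal sketch of the scan-and-block gate described above, assuming a purely hypothetical blacklist and regex patterns - none of the names or calls here are any vendor's actual API:

    import re

    # Hypothetical patterns for data that should never reach a public LLM.
    BLOCKED_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    }

    # Hypothetical blacklist of internal terms (codenames, deal names, etc.).
    BLOCKED_TERMS = {"project nightjar", "q3 forecast", "acquisition target"}

    def scan_prompt(prompt):
        """Return the list of policy violations found in an outbound prompt."""
        hits = [name for name, pattern in BLOCKED_PATTERNS.items()
                if pattern.search(prompt)]
        lowered = prompt.lower()
        hits += [term for term in BLOCKED_TERMS if term in lowered]
        return hits

    def send_to_llm(prompt):
        """Gate an outbound prompt; forward it only if the scan comes back clean."""
        violations = scan_prompt(prompt)
        if violations:
            # A real gateway might redact or route for review instead of refusing.
            raise PermissionError("blocked by DLP scan: %s" % violations)
        return "[forwarded to model] " + prompt  # stand-in for a real LLM client

    print(send_to_llm("Draft a polite out-of-office reply."))  # passes the gate
    # send_to_llm("Summarize the Q3 forecast deck")            # would raise

The gate itself is trivial, which is part of why it commoditized; the hard part was never the technology.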

AM: I think part of what changed is simply that AI was out of sight, out of mind - this was the infancy of it. I remember, in the months right after ChatGPT launched, people at the company I worked at would use it to write a talk for an event and tell everyone, "Hey, I used AI for this!" It was fun, part of getting on board with this new technological change. Looking back, it's interesting that no one really perceived it as a security threat. Nobody thought, "don't put sensitive information in that thing." You would have expected major red flags and alarm bells about what this was.

It speaks to how unfamiliar most people at most organizations were with AI. People weren't thinking about it; it was this cool new tech that did stuff. I'm sure some companies did think about it - friends in finance and other regulated industries have told me they were very quick to address it. But the panic over data loss and the exposure of sensitive, protected information really does seem to be only now bubbling up as a major thing.

RB: Based on what you're describing, there was a handful of - call them the black-hat, DEFCON, cybersecurity, CIO-adjacent folks - who looked at this, as Mission Control admittedly did back in the day, and said, "That's going to be a headache. We should start building things now to reduce it, because people are already copying and pasting things into ChatGPT that they shouldn't." Then it turned out there was a massive lag between technologists seeing the problem and management realizing it - it's gone on for 18 months. Now people care; now people are talking about it. There was a lag between a good technical solution existing and management realizing the problem was actually out of control. It feels like a legal and data black hole. It's also a management black hole: you can't plan effectively, because you have no idea what's actually going on inside your company.

AM: No - there are huge strategic and operational risks around this. If you think about the people at your organization using these tools, I liken it to having a team of invisible employees working at your company without you knowing what they're doing. Completely off the books.

RB: All off the books. You don't know the rules, you don't know their PTO policy, you've never seen their dog on a Zoom call. This Claude guy sounds kind of shifty - I never see him anywhere.

AM: Exactly. And here's something I haven't heard anybody talking about: what happens when you have a large number of people at a company secretly using AI? They're using it to brainstorm ideas, debug code, in some cases automate workflows. They may have been using it for a while, gotten pretty proficient, built very sophisticated workflows. Maybe they've cut tasks from hours, days, or weeks down to minutes. So there may be massive productivity and efficiency gains that your employees are finding through this under-the-radar AI use.

If they have to hide it - because there's an outright ban, or the company's AI policy is a gray area - then managers don't really know how employees are spending their time or on what tasks. The efficiency and productivity gains aren't reflected anywhere: your timelines, project plans, and budgets stay the same as though nobody were using AI. I don't mean to sound like a narc - you want to applaud the AI-savvy employees who've found a bit of an arbitrage opportunity. Maybe they get a few extra hours a week to walk the dog and still get all their work done. But bigger picture and longer term, this isn't great for the company or for the employees.

The employees aren't rewarded or credited for their productivity gains or the new skills they're building. And the company can't build toward anything more ambitious: if everything is limited to individuals using AI in secret, the company never unlocks what AI can actually do. It stays in the shallow end.

RB: I love the framing here. You could imagine two reductively absurd versions of this. Say it was IT infrastructure: you walk into an office and it's absolutely overloaded with server racks, wires running everywhere, loud, throwing heat, and someone asks, "What the hell is this?" "What are you talking about? There's infrastructure here." "What's it doing?" "Who knows. I heard someone might have bought one or two things, but we don't really have that conversation out loud." "Dude, I just tripped over some Cat 5 cable - it's everywhere."

Or worse: it wasn't IT infrastructure, it was people. Randos off the street being handed materially sensitive information and put to work. "Hey, who's that?" "That's Chuck. I found him behind the dry cleaners. He's going to work with us now." "Really? Who does he report to?" "Don't worry about that. Chuck's cool. I vouched for him." It would be asinine to imagine an organization running like that, trying to figure out who's doing what work and how to meaningfully plan around it.

But I don't think those framings are actually that absurd, because that's roughly what's going on at organizations taking the wait-and-see path - which should probably be recategorized as the "f*** around and find out" path. Wait-and-see just means your employees are going to be told all day on TikTok, "Hey, you use the magic secret internet brain that does your work for you, right?" "No." "You should. It's free." People are going to do that. They're going to do it on their phones or their laptop at home instead of the ThinkPad you gave them. That's just going to happen.

So wait-and-see may have accidentally become f*** around and find out, and this is the find-out phase. People are realizing they're trapped: all this theoretical productive capacity, and no reasonable way to accommodate or account for it, because accounting for it means everybody coming clean. Not to take the narc perspective, but that's a lot of people coming back, tail between their legs, saying, "Yeah, I haven't actually written a TPS report in six weeks, but no one noticed." And a manager saying, "I haven't read a TPS report in six weeks, and no one noticed." That reckoning moment is going to happen.

AM: You hit on something true there. Part of what's driving this problem - the risks of shadow AI, and the slowed adoption and transformation at a lot of these firms - is that it raises unpleasant, uncomfortable conversations about how long tasks should take and what, exactly, work is. I don't want to wade too deeply into that, but it's easier for leadership to impose an outright ban, or to turn a blind eye and take a wait-and-see approach. In reality, though, nobody is waiting and seeing: your employees are finding ways to use this off the books, and other companies are in full-scale transformation mode.

This is very different from the IT transformations of the past - server racks in the '90s, cloud software in the 2000s - where you could wait a year or two, watch what the first movers were doing, and make an educated decision. With these tools, the longer you wait, the farther your workforce and organization fall behind the curve. Truly waiting is the risky option this time. That's not to say you should jump in recklessly - there's a structured way to approach AI transformation. But the notion of sitting back for six or twelve months, then talking to some consultants and making a decision - companies that do that may find themselves quite far behind by the time they decide to move.

RB: To your point, the wait-and-see approach worked for most of the history of IT procurement - it worked 20 or 30 years ago when you were deciding whether to buy SAP, because no one was going to go get a $10-a-month SAP license. That wasn't real life. Now it can happen. Your employees can say, "Yeah, I'm not waiting around for my team to get its shit together. I'm going to get a ChatGPT Plus subscription and start putting work documents into GPT-5, because no one can stop me and I don't really care." That became the reality, which goes back to what I said: wait-and-see may have accidentally become f*** around and find out, where the default safe, conservative path actually opens you up to strategic risk, because your employees are going to do it anyway. It's going to happen off the books.

Not to put on our boots and wade too far into the fundamental structure of work, but think about some of our partners who are already deploying synthetic workers - imagine the sufficiently advanced autonomous digital workforce we're building. Imagine the nightmare scenario: you're a senior manager, and you wake up one day to realize your firm has quietly, completely changed how it delegates work between humans and synths over the course of six months. Incredibly quietly, in little hushed conversations off Teams: "Yeah, so you gave that to a synth, right?" "Yeah, I did too." Great. Now you don't just have shadow IT or shadow AI - you have a full-blown shadow organization running inside your company, with no systematic understanding of what it's doing. It's not onesie-twosie anymore - "hey, an email got drafted, you shouldn't have put that credit card information in there." You have things doing work, with email addresses and computers of their own, and none of it is reportable.

AM: There's the immediate operational challenge: how do you optimize and run your business when you don't know who is doing what, or how long things actually take? That's a problem for basic strategic planning. Not to mention that a lot of these efficiency gains aren't showing up in P&L statements or share prices, which is a whole other conversation. There's an economic cost to waiting. If your employees are significantly more efficient at certain tasks - in some cases, as we've seen in projects we've done, tasks completed a thousand times faster, with 90% efficiency gains - that's significant economic value, and if people are doing it off the books, none of it is being recorded anywhere in your numbers.
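
To put rough numbers on what "unrecorded gains" can mean - every figure below is hypothetical, chosen only to illustrate scale:

    # Entirely hypothetical figures, purely to illustrate scale.
    employees = 200           # staff quietly using AI
    hours_saved_per_week = 4  # per employee
    weeks_per_year = 48
    loaded_hourly_rate = 60   # dollars, fully loaded cost

    invisible_capacity = (employees * hours_saved_per_week
                          * weeks_per_year * loaded_hourly_rate)
    print("$" + format(invisible_capacity, ",") + " per year of unplanned capacity")

On those made-up inputs, that's $2,304,000 a year of capacity that no timeline, budget, or forecast accounts for.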

But I think there's an even more acute risk: what happens when the employees who built these sophisticated shadow workflows - strung together a few agents here, built a workflow there, done frankly really impressive pieces of bootlegged engineering - what happens when they leave? Business continuity is always a headache, so this isn't entirely groundbreaking. We've all backfilled somebody into a role and found you can't locate certain data, or that they built their Excel spreadsheets in a strange way. There's always some loss of fidelity when new people move into roles.

But when people are running extensive workflow automation on private accounts and they walk out the door, all that data, all that workflow, all that audit trail, all that capability just vanishes. For the new person coming in, it's not a matter of eventually figuring things out and rebuilding the data table, as it would have been in the past. You might have a situation where they literally can't do the job - can't produce the deliverable - because they don't have the prompts or the AI acumen of the person they replaced. At scale especially, this could create real problems for companies.

RB: This is so interesting, because it's a new take on the knowledge-preservation problems we've been thinking about internally. We've thought about three capabilities that synthetics support: big, hairy, one-off projects; lots of rote SOP work; and knowledge preservation. We've done great work on what it means to use a synthetic for knowledge preservation, but that was about human knowledge - you've been here for 30 years, survived three mergers and acquisitions, know how everything works, and then you leave, taking with you your knowledge, your history, the connective frameworks for how things work that wouldn't be obvious to anyone else. Thirty years, out the door. But if any of those folks had also set up the sufficiently advanced work-generating systems you're describing, and those became business-process dependencies that other people lean on, then when they leave you don't just lose 30 years of institutional knowledge - you also lose the only person who understood how the month-end close machine ran. That's a new flavor of strange knowledge to have to preserve. If they built it in their $20-a-month n8n account, or cobbled together some LangGraph setup they took with them, you either immediately lose the capability or you lose the ability to maintain it over time. Both are significant business-continuity challenges. It's just a question of when the train derails.
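
For a concrete picture of the kind of one-person dependency being described, here's a purely illustrative sketch - every name, address, and credential in it is hypothetical - of a month-end automation living in an employee's personal account:

    import csv
    import smtplib
    from email.message import EmailMessage

    # Hypothetical personal credentials: they belong to the employee, not the
    # company, and they walk out the door when the employee does.
    PERSONAL_API_KEY = "sk-personal-0000"    # paid for on a private card
    PERSONAL_SMTP_HOST = "smtp.example.com"  # personal relay, not corporate mail

    def summarize_close(rows):
        """Stand-in for an LLM call that turns ledger rows into a narrative;
        a real version would send the rows to a model using PERSONAL_API_KEY."""
        total = sum(float(r["amount"]) for r in rows)
        return "Month-end close: %d entries, net %.2f." % (len(rows), total)

    def run_month_end(ledger_csv, recipients):
        # Reads the ledger export and mails a summary from a personal address,
        # so nothing about this process is visible to corporate IT or audit.
        with open(ledger_csv, newline="") as f:
            rows = list(csv.DictReader(f))
        msg = EmailMessage()
        msg["From"] = "steve.personal@example.com"
        msg["To"] = ", ".join(recipients)
        msg["Subject"] = "Month-end close summary"
        msg.set_content(summarize_close(rows))
        with smtplib.SMTP(PERSONAL_SMTP_HOST, 587) as server:
            server.starttls()
            server.send_message(msg)

Nothing in it is exotic; the problem is that the schedule, the key, and the prompt history all live outside the company, so the capability disappears the day its owner does.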

AM: Another way to think about this - if a person has built the equivalent of a team of five or ten people using AI, then their departure from a workforce perspective is the equivalent of losing a team of five or ten people.

RB: God. Yeah. Steve was just the human - but when Steve leaves, Steve becomes an interesting flavor of problem.

AM: All these invisible employees that had been doing things without any oversight - Steve leaves and everybody leaves with Steve.

RB: Because to your point about the inability to account for productive capacity or cost of revenue - you now actually have no idea how many FTE-equivalents you have on staff. When you're doing succession planning, or you can see someone's about to go, or your workforce is going to change on sheer demographics alone, you don't know what the impact will be at all. You don't know what multiplier to apply, or how much of the value you created or captured was attributable to humans versus autonomous software systems that were off the books, kept quiet, and never discussed because they were problematic.

AM: I don't want to over-rotate on this, because knowledge transfer and business continuity have always been challenges, and businesses will find ways to manage them. But it's not being talked about, because once again it's an awkward conversation. The idea that Steve has the equivalent of five or ten invisible digital workers running his workflow becomes a very uncomfortable conversation for Steve's manager to have.

To your earlier point, it's easier not to talk about this stuff and to assume things will work out - which, with business continuity, is usually what you're told. I've backfilled people's roles in the past, couldn't figure something out, and nobody seemed very stressed about it: "You'll figure it out." But the reason you could figure it out was that everybody operated with the same set of tools at roughly the same proficiency. You could spend a little time in an Excel spreadsheet and work out what Steve used to do.

But the gap in proficiency between two people - in building complicated workflows and writing prompts across what could be any number of different AI tools - means this really could be a case where it actually is different this time.

RB: Yeah, this is a problem worth discussing. So punchline - you've been thinking a lot about this. What do you think fixes this?

AM: The punchline is that for many organizations, your AI transformation is already underway. Your employees weren't waiting for the strategy deck, weren't waiting for the kickoff, weren't waiting to be given the right access. They just got to work. There are all the strategic and organizational risks we talked about, but another risk is losing people: your best employees want to use these tools. They want to get better at their jobs, be more efficient, and be prepared for their next job.

For companies that don't start moving forward on AI transformation in a meaningful way, beyond all the shadow-AI risks we've covered, the other risk is that the employees who get very proficient with shadow AI - your best and most productive people - will go find a job where it doesn't have to be shadow AI. They'll find somewhere they can take those skills and that productivity and use them in the open.

Really the punchline - and we've said it a few different ways - is that wait-and-see is not wait-and-see this time. In many cases the train has already left the station, and other companies are already figuring out pilot strategies and how to move forward. There are significant risks to banning AI, and significant risks to shadow AI existing inside your company. Beginning AI pilots and an AI transformation may seem risky too, but I'm not sure many of those risks fall into the same category as the ones we've discussed. AI transformation will be a challenge - a headache, difficult and frustrating, like every business transformation. But in this case, waiting is not waiting. AI transformation has already started, and it's better to start making meaningful progress than to pretend that shadow AI and these other risks don't exist.

RB: All right. So I've got a call tomorrow with a leader at an enterprise-scale organization who's trying to figure out how to undo this damage. What would you tell them?

AM: Number one: they have to be willing to have honest, awkward, and safe conversations about what's going on. Every company's AI transformation path will be different, but if a company already has shadow AI spread through the organization - if they know or suspect employees are using this stuff in lots of different ways - then the path starts with getting that out into the open. Make it safe: nobody gets fired for using shadow AI.

So the advice for your call tomorrow: have the honest conversation and lean into the awkwardness of the fact that it's happening. Bring it into the open, and make sure the company creates a safe, productive environment for identifying how AI is currently being used. Then begin identifying good use cases - perhaps with help from vendors and experts who know the space. The main thing is to get started: run pilots within the organization so people learn the tools and understand how to work with AI, and through that, the company can begin its AI transformation in earnest.