
Your AI Intern Just Started. Who's Supervising It?
Here's a thing nobody likes admitting in public.
Most of the AI rollouts I see in small businesses aren't really rollouts. They're individual employees, on their own, signing up for tools and figuring it out on weekends.
There's an AI button in their email client. Another in their document editor. Another in their meeting notes platform. It's already been deployed to them. Nobody asked. Nobody onboarded it. It just showed up.
And then the work it touches starts going out the door.
It is not actually an intern
The intern metaphor is useful for one reason. It reminds business owners that something new is in their building doing work without supervision.
But interns ask questions. AI doesn't.
Interns hesitate when they aren't sure. AI doesn't.
Interns look at their manager when something feels off. AI presents wrong answers with the same confidence it presents right ones, and most people on the receiving end can't tell which is which.
If you wanted a real comparison, it would be closer to handing a stranger a key to your office, telling them the printer is broken, and walking out.
What I see in healthcare and federal contracting
I run an MSP and cybersecurity firm in the Baltimore-DC corridor. We work with healthcare practices and federal contractors.
In healthcare, the failure mode is HIPAA. A clinical assistant pastes a chart summary into a free chatbot to clean up the language. The chart summary contains identifiable patient information. Now that data sits in a system the practice does not own and cannot pull back, and it may end up in someone else's training corpus. A HIPAA breach analysis treats that the same way it treats a stolen laptop.
In federal contracting, the failure mode is CUI. The data comes out of a SharePoint or a portal that has access controls. It goes into a consumer AI that does not. The contract clause that says the contractor will protect Controlled Unclassified Information just got violated, and the contractor does not know it yet.
The pattern in both cases is the same. It is not a malicious employee. It is a helpful one.
That is the part that makes this hard to manage by yelling at people.
What actually changes the risk
Banning AI is the wrong move. The market is not going to wait for you, and your team will route around any rule that asks them to work slower.
What works is something simpler. You decide which tools are part of the business, and you make those easier to reach than the random ones. Reviewing a draft before it goes to a client becomes a normal step instead of an exception. The categories of information that are not allowed to leave your environment get named, written down, and explained. Not in a policy nobody reads. In the same conversation where you talk about how the work gets done.
None of that requires technical sophistication. It requires somebody deciding to own it.
That is usually the part that does not happen.
The honest answer
If you ask me whether your business has an AI problem, I am not going to answer that from the outside.
The companies that are fine almost always have one thing in common. Someone in the building, with authority, has put thirty minutes into asking what people are using, what data is touching it, and where the work is going.
The companies that are not fine have not had that conversation yet.
If that conversation has not happened in your business, that is where I would start. Not with software. Not with a policy document. With a half hour, a list, and a person who is willing to make decisions.
(410) 684-4405 | crush@rushitllc.com | rushitllc.com


