A plain-language guide // 2026-03-13
A plain-language primer on how AI handles data, what privacy tools actually do, and how to tell the difference between real security and a sales pitch.
When someone shows up with a new AI tool, it's completely reasonable to ask "what happens to our data?" That's not paranoia — it's good business sense. The problem is that fear creates an opening for vendors to sell complexity you may not need. Before you evaluate any AI privacy solution, it helps to understand the basics of how AI actually processes information.
When you type text into an AI model, the model doesn't read your words the way you do. It converts them into numbers — specifically, into a list of numbers called a vector embedding. Think of it like GPS coordinates, but for meaning.
The word "cat" might become something like [0.82, -0.14, 0.67, 0.03...] — hundreds of numbers that together represent its meaning. "Kitten" would be very similar numbers. "Skyscraper" would be very different ones.
This is how AI finds meaning and context — by measuring the distance between these number clusters. Words that mean similar things cluster together in this "number space." The model reasons by navigating that space.
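The "distance between number clusters" idea can be made concrete with a toy sketch. These 4-dimensional vectors are made up for illustration (real models use hundreds or thousands of dimensions), and the comparison uses cosine similarity, the standard way to measure how closely two meaning-vectors point in the same direction:

```python
import math

def cosine_similarity(a, b):
    """How closely two meaning-vectors point in the same direction (max 1.0)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, invented for this example.
cat = [0.82, -0.14, 0.67, 0.03]
kitten = [0.79, -0.11, 0.70, 0.05]       # near "cat" in meaning-space
skyscraper = [-0.30, 0.91, -0.12, 0.44]  # far from both

print(cosine_similarity(cat, kitten))      # close to 1.0: similar meaning
print(cosine_similarity(cat, skyscraper))  # much lower: different meaning
```

When the model "reasons by navigating that space," this kind of distance comparison is the basic move it is making, millions of times over.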
The important thing here: that conversion happens on the provider's servers, not on your machine. Your original text arrives there as plaintext before it is ever turned into numbers. That's where the privacy concern lives.
When your firm sends sensitive data to an AI model hosted by someone else — whether that's Microsoft, OpenAI, or a cloud vendor — that data passes through shared infrastructure. In a regulated industry, the question is: who can see it, and when?
These are real concerns. But here's the thing — they need to be weighed against what's already happening in your organization right now.
If your team is sharing financial data over unencrypted Teams messages, emailing Excel files, or discussing client details on unrecorded Zoom calls — those are larger, more immediate risks than anything the AI model introduces.
Before reaching for advanced privacy tooling, consider what most compliance-conscious organizations actually do: replace sensitive identifiers with placeholders before the data ever leaves your system.
The model still understands the context and can reason about the financial situation, the accounting issue, the pattern — it just never sees the actual sensitive values. You keep a local mapping. The AI never touches real PII.
For most accounting and financial workflows, this approach is practical, auditable, and defensible — without adding complexity or degrading model performance.
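Here is a minimal sketch of that placeholder approach. The regex pattern, placeholder format, and function names are illustrative assumptions, not a vetted redaction library; a real deployment would need much broader pattern coverage and review:

```python
import re

def redact(text):
    """Swap sensitive values for placeholders; the mapping never leaves you."""
    mapping = {}
    counter = 0

    def replace(match):
        nonlocal counter
        counter += 1
        placeholder = f"[CLIENT_{counter}]"
        mapping[placeholder] = match.group(0)  # remember the real value locally
        return placeholder

    # Example pattern: US Social Security numbers (one of many you'd need).
    redacted = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", replace, text)
    return redacted, mapping

safe_text, local_map = redact("Client SSN 123-45-6789 reported a loss.")
print(safe_text)  # Client SSN [CLIENT_1] reported a loss.
# local_map stays on your machine: {'[CLIENT_1]': '123-45-6789'}
```

The AI sees "[CLIENT_1]" and can still reason about the pattern; you substitute the real value back into its answer using the local mapping.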
This is where things get interesting — and where the sales pitches start arriving. Stained Glass Transform (SGT), developed by a company called Protopia AI, takes a different approach. Instead of replacing sensitive data with placeholders, it converts your text into obfuscated vector embeddings on your device before sending anything to the model.
The idea is clever: the embeddings preserve enough semantic meaning that the model can still reason about your data, but the transformation is mathematically irreversible — nobody can decode them back into readable text.
You're trading model accuracy and traceability for a privacy guarantee that regulators haven't actually required yet. That's a significant business decision — not a simple checkbox.
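To build intuition for that trade-off, here is a deliberately simplified sketch. This is NOT Protopia's actual algorithm; it just illustrates the general idea behind obfuscated embeddings: perturb the vector so the exact original values can't be read back, while its rough direction, its "meaning," mostly survives. The noise scale and vectors are invented for the example:

```python
import math
import random

random.seed(0)  # fixed seed so the example is repeatable

def obfuscate(vec, noise_scale=0.1):
    """Add random noise; the exact input vector is no longer recoverable."""
    return [x + random.gauss(0, noise_scale) for x in vec]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

original = [0.82, -0.14, 0.67, 0.03]  # hypothetical embedding
protected = obfuscate(original)

print(cosine(original, protected))  # typically close to 1.0: meaning mostly survives
# Holding only `protected`, nobody can recover `original` exactly. But the
# injected noise is also why accuracy and traceability can degrade.
```

The tension in this toy version mirrors the real one: less noise means better model accuracy but weaker protection, more noise means the reverse. That dial is the business decision.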
Here's what the regulations actually say — and what they don't. Frameworks like GDPR, HIPAA, and the FTC Safeguards Rule require "appropriate" or "reasonable" technical safeguards, and several explicitly endorse measures such as encryption and pseudonymization, which is exactly what the placeholder approach is. None of them names a specific product, and none mandates obfuscated embeddings.
The bottom line: no major regulation currently requires Stained Glass Transform or anything like it. The sales pitch is getting ahead of the law.
Before spending money on advanced AI privacy tooling, ask whether you've done the basics:

- Is sensitive data shared only over encrypted channels, rather than pasted into chat messages or emailed as spreadsheet attachments?
- Are sensitive identifiers replaced with placeholders before anything reaches an external AI model?
- Is there a written policy for what can and can't go into AI tools, and does staff actually follow it?
If any of these are "no" — and in most firms they are — adding Stained Glass Transform on top is like installing a bank vault door on a tent.
To be clear: Stained Glass Transform is not a bad technology. It's a sophisticated solution to a real problem. The question is whether that problem is yours.
The key concept here is threat modeling — asking honestly: who is your adversary, how motivated are they, and what are the consequences of exposure? The answer changes everything.
A counter-terrorism analyst querying classified signals intelligence on shared government infrastructure? That's the threat model Stained Glass was built for. A CPA firm analyzing client tax returns? It's not even close to the same problem.
Selling a blast-proof door to someone whose windows don't have locks isn't a security upgrade — it's a mismatch of solution to problem. Good technology applied to the wrong threat model is still the wrong call.
The goal isn't zero risk — it's proportionate, defensible, and actually implemented. A simple policy you follow beats a complex system you don't understand.
DFTZ — Don't Feed The Zombies. kill -9. Clear the stack.