We Use AI. Here's What That Actually Means.
If you work in or around the Salesforce ecosystem right now, you can't avoid the conversation about AI tools like Claude, ChatGPT, and OpenClaw. It's at every conference, every other LinkedIn post, and probably in your inbox more than you'd like.
Most of us in tech consulting are navigating the same questions: How do we use these tools responsibly? What do our clients need to know about them? Where do we draw the lines, and how do we hold them?
Why We Formalized Our AI Policy — And Why It Matters for the Organizations We Serve
I've been sitting with those questions for a while. And the more I thought about them, the clearer it became that for BrightHelm — given who our clients are and what they've trusted us with — the answer couldn't just live in our heads. It needed to be written down, communicated, and built into how we actually work.
So earlier this year, I formalized BrightHelm's AI usage policy. Not because we were caught doing something wrong, and not because a client demanded it. Because our values required it.
What We Actually Did
The starting point was recognizing that not all AI tools, and not all uses, are created alike. Each use case carries different capabilities and risks, and each requires its own handling. So we built a framework that governs exactly how and when AI tools can be used on client work. That framework is organized into four tiers:
Tier 1 - Researching & Troubleshooting
This is the most common way AI shows up in our work, and it's largely invisible to clients. General consulting work, like researching, drafting, and troubleshooting, doesn't involve client data at all. This also includes supporting our own skill development with training aids and AI-curated learning.
Tier 2 - Analyzing Data & Metadata
The second tier covers submitting any client data or system configuration to an AI tool for analysis. This requires explicit, informed consent from the client, who owns that data and metadata. We explain what we'd like to do, which tool we'd like to use, and what protections are in place. Then the client decides. If they say no, that's the end of the conversation, and we find another way to perform the analysis.
It's worth mentioning that clients say no for all sorts of reasons beyond the tool itself, from environmental impact to broader societal concerns. Regardless of the reason, we respect these preferences.
Tier 3 - Implementing & Using Agentforce
This tier covers Salesforce's Agentforce features, which operate through Salesforce's built-in trust layer. Because the technical safeguards are already in place, this is a fundamentally different situation: the protections are both contractual and built into the product itself.
Tier 4 - Generating Code & Configuration with AI
The fourth tier is AI-assisted or AI-automated creation of code and configuration. We use tools like Claude Code to help generate flows, validation rules, and other technical artifacts. Everything AI helps us build goes through the same peer review process as anything else we deliver. The method of creation doesn't change the standard.
There's one more line we don't cross, and it's worth naming directly: we never connect third-party AI tools to any system that can deploy code or configuration. A human is always in that loop. The crew member analogy holds here: every crew member's work still gets checked before it ships.
AI usage and use cases are growing fast, and there may come a time when this rule relaxes, but recent security lapses associated with tools like OpenClaw and Moltbook have made us confident we're taking the right risk-management approach for now.
Personally Identifiable Information & Business-Tier Tools
Running through all four tiers is one absolute rule: we will never submit personally identifiable information — names, contact details, financial records, any of it — to an AI tool. No exceptions, and no amount of client permission changes that. It's a hard line.
Finally, we use only business-tier tools with contractual data protections. These are services where a Data Processing Agreement exists, where training on customer data is off by default, and where we can point to something in writing if a client asks. Consumer plans don't always meet that bar, no matter how well-known the brand.
Why Transparency Is the Policy, Not Just a Feature
I want to be honest about something: this wasn't only about protecting clients, though it certainly does that. It was also about being the kind of firm I want to run.
Transparency is one of BrightHelm's core values — not as a marketing message, but as an operating principle. That means clients know how we work. They know which tools we use, what we do with their data, and what they can say no to. It means that when something changes — a vendor gets acquired, a privacy policy shifts — we tell them. It means if there's ever a security incident, they hear it from us before they read about it anywhere else.
What This Means if You're a BrightHelm Client
If you're already working with us, you've heard from me directly. As we onboard new clients, we explain our policy in writing and walk you through a simple process for stating your preferences. You can opt in fully, set some limits, or decline AI use on your data entirely. Any of those answers is the right answer if it's yours.
If you're not a client yet and this is part of what you're evaluating — I hope it's helpful. I'd rather you know exactly how we work before we shake hands than find out after.
We're not perfect, and the landscape keeps changing. But we're paying attention, we're being thoughtful, and we're telling you the truth about it.
That feels like the right place to start.