Back to Blog
AI Safety 30 April 2025 · 4 min read

How Safe Is AI Really? What You Need to Know as a Concerned User

I’ll be honest with you. I’ve been using generative AI tools every day for a while now, and I’m still cautious about it. On one hand, they help me write faster, brainstorm ideas, and automate things I used to spend hours on. On the other, even with years of experience in cybersecurity, I can’t help but think: what if I share something sensitive? What if the AI does something unexpected? Can I actually trust this thing?

If you’ve had the same thought, you’re not alone. And I think that caution is healthy.

What We’re Actually Talking About

AI safety is about making sure these systems do what they’re supposed to do, and don’t cause harm by accident. AI security is about protecting them from being manipulated, hacked, or misused.

For us as users, it comes down to two things:

  • Is this thing going to output something wrong, biased, or harmful?
  • Is my data (my prompts, my documents, my conversations) being handled properly?

If AI tools are part of your workflow, these aren’t abstract questions. They’re practical ones.

The Risks That Actually Keep Me Up

I’ve been in cybersecurity for years. Cylance, Cyware, military cryptography before that. So I think about risk differently than most people. Here’s what I actually worry about:

Hallucinations. AI makes things up. Not sometimes. Regularly. It’ll present completely fabricated information with total confidence. If you’re using AI output in client work, proposals, or decisions without checking it, you’re playing a dangerous game.

Data leakage. When you type something into an AI tool, where does it go? Is it stored? Is it used to train future models? Can someone at the company see it? The answers vary wildly depending on the platform, the plan you’re on, and the settings you’ve chosen. Most people never check.

Adversarial attacks. This one’s more technical, but it matters. Bad actors can manipulate inputs to make AI systems behave in ways they shouldn’t. Prompt injection is real, and it’s getting more sophisticated.

Bias. AI reflects the data it was trained on, and that data includes every bias, stereotype, and blind spot the internet has to offer. If you’re not aware of this, you’ll miss it in the output.

These aren’t hypotheticals. They’ve all happened in real-world situations. Multiple times.

What You Can Actually Do

I’m not here to scare you off AI. I use it every day and I think it’s genuinely transformative. But I use it with my eyes open. Here’s what I’d recommend:

Be deliberate about what you share. Don’t paste sensitive client data, financial information, or proprietary documents into AI tools unless you understand exactly where that data goes and who can access it. Read the privacy policy. Check if your plan includes data retention opt-outs.

Cross-check everything. AI is a first draft machine, not a source of truth. Treat its output the way you’d treat a junior team member’s work. Useful starting point, needs verification.

Stick to platforms you trust. Not all AI tools are created equal. The big providers (OpenAI, Anthropic, Google) have transparent data policies and invest heavily in safety. Random AI tools you found on Product Hunt? Maybe not so much.

Report bad outputs. Most platforms let you flag harmful or inaccurate responses. Do it. Your feedback genuinely helps make the tools safer for everyone.

The Good News

The industry is taking this seriously. Robustness, interpretability, alignment. These used to be academic concepts. Now they’re becoming standard parts of how AI systems get built and evaluated.

Regulation is catching up too. The EU AI Act is moving forward. The NIST AI Risk Management Framework is shaping how responsible AI development looks in practice. It’s not perfect, and it’s definitely not fast enough, but the direction is right.

Why You Should Care

You don’t need to be a security expert to think about AI safety. As someone using these tools in your work, making decisions based on their output, sharing data with them, recommending them to your team, you have a stake in how they evolve.

The more you understand about the risks, the better you can use AI without the anxiety. And the more you push for transparency from the tools you use, the better they get for everyone.

AI isn’t going away. So let’s use it properly.

About the author: Gary spent years in the military working in cryptography and secure communications, and has been an industry expert in cybersecurity across multiple companies including Cylance and Cyware.

Keep reading