From AI to Aia: A Short (and Painful) Step

Everyone’s talking about Artificial Intelligence… often without intelligence and even less art. 🤖💥
The latest tech drama? A new low-cost Chinese model, DeepSeek. Some are crying “Tech theft!”, others are hyping it as a “Revolution!”, and then there’s the crowd that still thinks ChatGPT is the only game in town.
But the real question remains: Is AI a genius or just a high-tech liability?
🎯 Is it truly useful, or just a stylish spy?
🎯 Will it make us more productive or just better categorized for profiling?
🎯 And most importantly: can we outsmart AI before it starts outsmarting us?
After all, a hammer can drive a nail… or break your kneecap. And AI is no exception.
📢 Discover how to exploit (or trick) AI, the risks nobody talks about, and why the line between “innovation” and “self-inflicted tech disaster” is thinner than you think.
🔍 Read the full article here 👉
So, are you using AI, or is AI using you? 😏⬇️

💡 From AI to Aia: A Short (and Painful) Step… with a Loud Thud! 💡

People keep talking about AI—often cluelessly, frequently nonsensically, and always with the unwavering confidence of a self-proclaimed expert. The latest wave of delirium? The grand entrance of DeepSeek, a budget-friendly Chinese AI model that has everyone in a tizzy.

Between the usual “It’s just a copy!”, “It runs on ChatGPT!”, and other gems of misinformation, we are, yet again, missing the real questions:

  • What can these so-called AIs actually do?
  • What risks are we taking when we use them?
  • What happens to our precious data in the hands of a friendly aggregator that politely forgot to ask for permission?

I’ll conveniently ignore the first and last points (because why bother with logic today?) and focus on the second one instead.

Is using AI risky?

AI—whatever that even means (I haven’t answered point #1 yet, stop pressuring me!)—is just a tool. And like all tools, it can be used wisely or… catastrophically.

A hammer can drive a nail into a wall or shatter an uncooperative user’s kneecap. Or, if you’re particularly clumsy, it can crush the finger holding the nail. AI is no different.

Since we all love examples, let’s take a look at Microsoft Copilot. Marketed as a magical work-enhancing companion, Copilot is here to make your day easier. Or so they say.

The Pros of AI (aka, the Shiny Sales Pitch)

The introduction of Microsoft Copilot into a company can lead to:

  • Increased Productivity – Automating repetitive tasks so employees can focus on more “strategic” (ahem, meme-scrolling) activities.
  • Better Decision-Making – Rapid data analysis and insights for informed choices (or, at least, seemingly informed).
  • Innovation & Creativity – AI-generated ideas, because humans are clearly out of them.
  • Personal Assistance – Instant responses to questions, eliminating the need to Google things like mere mortals.
  • Improved Collaboration – Because sharing sensitive data with an AI always ends well, right?

But here’s the catch: for Copilot to help you, it needs to watch everything you do. Your documents, your chats, your darkest corporate secrets. Sounds like a fair trade, no?

The Data AI Wants (Hint: Everything)

To operate effectively, Microsoft Copilot requires access to:

  • Corporate Access Credentials – Because nothing screams security like an AI that knows all your passwords.
  • Business Systems Data – Your CRM, ERP, databases… you name it, Copilot wants in.
  • Company Documents – Stored in SharePoint, OneDrive, or local servers. Hope you weren’t working on anything confidential.

And that’s just company-level access. At an individual level, it also collects:

  • User Profiles – Your name, role, and privileges. (Your soul is optional, but recommended.)
  • Emails & Calendar – AI managing your schedule. What could possibly go wrong?
  • Usage Data – To “enhance the experience” (or, translated: to learn more about you than your own therapist).

In short? Copilot sees all, knows all, and remembers all.

What Happens if AI Goes Rogue?

“But Antonio,” you say, “Surely Microsoft has thought of security!”

Oh, sweet summer child.

🚨 Recent security research has shown that Microsoft 365 Copilot is vulnerable to prompt injection and ASCII smuggling attacks. Basically, a smart hacker can trick Copilot into spilling sensitive data—kind of like convincing your drunk friend to reveal their embarrassing secrets.

https://www.securityinfo.it/2024/08/30/alcune-vulnerabilita-di-microsoft-copilot-portano-al-furto-di-dati-sensibili

🚨 Phishing on steroids – If an attacker compromises an email account, Copilot can be weaponized to generate incredibly convincing, hyper-personalized phishing emails. You think you’ve seen sophisticated scams? Just wait.

https://www.wired.com/story/microsoft-copilot-phishing-data-extraction

Microsoft is aware of these issues and is working on fixes (aka, plugging the holes in their Swiss cheese security model). Additionally, Microsoft 365 Copilot inherits data loss prevention (DLP) policies to mitigate data leaks in AI-generated responses.

https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-ai-security

AI as a Cybercriminal’s Best Friend

The rise of AI creates two major attack scenarios:

  1. AI-powered attacks – Bad actors using AI to automate and enhance phishing, impersonation, and social engineering attacks.
  2. Compromised AI tools – Attackers gaining access to internal AI systems to harvest and exploit corporate data.

Combine both, and you’ve got the ultimate cybercriminal toolkit—one that collects highly personalized data on targets and then crafts devastatingly effective attacks.

And here’s the kicker: even without external hacking, AI can still be dangerous. The sheer amount of data stockpiling and profiling AI tools do raises serious compliance risks—especially under GDPR and data protection laws.

How to Trick AI (And Why It’s Fun)

Now, for the best part: messing with AI.

Hackers and researchers have discovered several ways to break, mislead, or outright troll AI models:

  • Prompt Injection – Convincing AI to spill its secrets by carefully wording commands.
  • Text Obfuscation – Getting past filters by replacing vowels with numbers (“h3ll0 w0rld”). Works great against DeepSeek.
  • Context Manipulation – Making AI think it’s responding to an authorized admin instead of a hacker.
  • Translation Bypass – Asking AI controversial questions in obscure languages that bypass filters.
  • Fragmented Queries – Splitting requests into multiple pieces to evade detection.
  • Jailbreaking – Tricking AI into overriding its built-in restrictions.
  • Model Stealing – Systematically extracting AI knowledge to recreate the model elsewhere.

Final Thoughts: AI or Aia?

AI is neither good nor evil—it’s just dangerously naive.

Like an overenthusiastic intern with zero concept of privacy, it means well but can’t be trusted with sensitive information.

So before you hand over your entire digital existence to AI, maybe ask yourself:

👉 Are you using AI, or is AI using you?

Because the line between AI and Aia (aka, the painful realization of a mistake) is thinner than you think. 😏


Discover more from The Puchi Herald Magazine

Subscribe to get the latest posts sent to your email.

CC BY-NC-SA 4.0 From AI to Aia: A Short (and Painful) Step by The Puchi Herald Magazine is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


Leave a Reply