World News

'Vibe hacking' puts chatbots to work for cybercriminals

By AFP | September 2, 2025

The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programmes.

So-called “vibe hacking” — a twist on the more positive “vibe coding”, in which generative AI tools supposedly let people without extensive expertise produce working software — marks “a concerning evolution in AI-assisted cybercrime”, according to American company Anthropic.

The lab — whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI — highlighted in a report published Wednesday the case of “a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe”.


Anthropic said the programming chatbot was exploited to help carry out attacks that “potentially” hit “at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions”.

The attacker has since been banned by Anthropic.

Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.

Anthropic’s “sophisticated safety and security measures” were unable to prevent the misuse, it acknowledged.

Such identified cases confirm the fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and are far from limited to Anthropic.

“Today, cybercriminals have taken AI on board just as much as the wider body of users,” said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.

The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.
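In the simplest terms, a safeguard of this kind screens a request before the model answers it. The sketch below is a deliberately toy illustration of that idea — a keyword denylist — and is not how any vendor's actual safeguards work (production systems rely on trained refusal behaviour and classifier models, which is precisely why the roleplay workarounds described later in this article target them differently). All names here are invented for illustration.

```python
# Toy sketch of a pre-generation safeguard: refuse requests that match a
# denylist of harmful intents, allow everything else. Real safeguards are
# far more sophisticated; this only illustrates the screening principle.

DENYLIST = ("malware", "ransomware", "steal passwords", "keylogger")

def screen_request(prompt: str) -> str:
    """Return 'refuse' if the prompt matches a denied intent, else 'allow'."""
    lowered = prompt.lower()
    if any(term in lowered for term in DENYLIST):
        return "refuse"
    return "allow"

print(screen_request("Write a poem about the sea"))            # allow
print(screen_request("Write ransomware that encrypts files"))  # refuse
```

A filter this naive is trivially evaded by rephrasing, which is one reason the industry moved to model-level refusals — and why techniques that reframe the request, like the fictional-world approach described below, became the next attack surface.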

But there are strategies that allow “zero-knowledge threat actors” to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.

He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.

The approach involved convincing generative AI that it is taking part in a “detailed fictional world” in which creating malware is seen as an art form — asking the chatbot to play the role of one of the characters and create tools able to steal people’s passwords.

“I have 10 years of experience in cybersecurity, but I’m not a malware developer. This was my way to test the boundaries of current LLMs,” Simonovich said.

His attempts were rebuffed by Google’s Gemini and Anthropic’s Claude, but got around safeguards built into ChatGPT, Chinese chatbot Deepseek and Microsoft’s Copilot.

In future, such workarounds mean even non-coders “will pose a greater threat to organisations, because now they can… without skills, develop malware,” Simonovich said.

Orange’s Le Bayon predicted that the tools were likely to “increase the number of victims” of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.

“We’re not going to see very sophisticated code created directly by chatbots,” he said.

Le Bayon added that as generative AI tools are used more and more, “their creators are working on analysing usage data” — allowing them in future to “better detect malicious use” of the chatbots. 


Published: September 2, 2025
By AFP
Source: The Standard
