AI blog APT blog Attack blog BigBrother blog BotNet blog Cyber blog Cryptocurrency blog Exploit blog Hacking blog ICS blog Incident blog IoT blog Malware blog OS Blog Phishing blog Ransom blog Safety blog Security blog Social blog Spam blog Vulnerebility blog
| 20.12.25 | Fake ChatGPT delivers Real Cryptominer | ChatGPT (OpenAI) remains widely considered the most popular and visited AI tool. Due to this immense popularity, it is common for cybercriminals to create fake applications that mimic the official OpenAI interface to trick users into installing malware. This week, SonicWall Capture Labs Threat Research Team analyzed a trojanized .NET Webview2 ChatGPT wrapper that is used to silently deliver a cryptomining software. | AI blog | SonicWall |
| 13.12.25 | Falcon Shield Evolves with AI Agent Visibility and Falcon Next-Gen SIEM Integration | CrowdStrike Falcon Shield will provide a centralized view of AI agents across applications and now integrates first-party SaaS telemetry into Falcon Next-Gen SIEM. | AI blog | CROWDTRIKE |
| 13.12.25 | AI-Automated Threat Hunting Brings GhostPenguin Out of the Shadows | In this blog entry, Trend™ Research provides a comprehensive breakdown of GhostPenguin, a previously undocumented Linux backdoor with low detection rates that was discovered through AI-powered threat hunting and in-depth malware analysis. | AI blog | |
| 13.12.25 | New Prompt Injection Attack Vectors Through MCP Sampling | This article examines the security implications of the Model Context Protocol (MCP) sampling feature in the context of a widely used coding copilot application. MCP is a standard for connecting large language model (LLM) applications to external data sources and tools. | AI blog | |
|
6.12.25 |
|
Google Threat Intelligence Group's findings on adversarial misuse of AI, including Gemini and other non-Google tools. |
||
|
6.12.25 |
Australia’s National AI Plan sets a roadmap for innovation, safety, and workforce readiness, shaping the nation’s long-term approach to responsible AI adoption. |
|||
|
6.12.25 |
Unraveling Water Saci's New Multi-Format, AI-Enhanced Attacks Propagated via WhatsApp |
Through AI-driven code conversion and a layered infection chain involving different file formats and scripting languages, the threat actors behind Water Saci are quickly upgrading their malware delivery and propagation methods across WhatsApp in Brazil. |
||
|
6.12.25 |
CVE-2025-61260 — OpenAI Codex CLI: Command Injection via Project-Local Configuration |
OpenAI Codex CLI is OpenAI’s command-line tool that brings AI model-backed reasoning into developer workflows. It can read, edit, and run code directly from the terminal, making it possible to interact with projects using natural language commands, automate tasks, and streamline day-to-day development One of its key features is MCP (Model Context Protocol) – a standardized way to integrate external tools and services into the Codex environment, allowing developers to extend the CLI’s capabilities with custom functionality and automated workflows. |
||
|
6.12.25 |
Generative AI is rapidly transforming cybersecurity for both defenders and attackers. This blog highlights current uses, emerging threats, and the evolving landscape as capabilities advance. |
|||
|
6.12.25 |
Do robots dream of secure networking? Teaching cybersecurity to AI systems |
This blog demonstrates a proof of concept using LangChain and OpenAI, integrated with Cisco Umbrella API, to provide AI agents with real-time threat intelligence for evaluating domain dispositions. |
||
| 29.11.25 | How Cyble is Empowering European Enterprises with AI-Powered Threat Intelligence | Europe’s cyber threat landscape is escalating fast, driven by ransomware, data leaks, and state-backed actors, marking 2025 as a decisive turning point. | AI blog | Cyble |
| 29.11.25 | The Large-Scale AI-Powered Cyberattack : Strategic Assessment & Implications | Executive Summary In September 2025, the cybersecurity landscape crossed a pivotal threshold with the first widely verified case of an AI-powered, largely autonomous cyber- | AI blog | Cyfirma |
| 29.11.25 | The Dual-Use Dilemma of AI: Malicious LLMs | A fundamental challenge with large language models (LLMs) in a security context is that their greatest strengths as defensive tools are precisely what enable their offensive power. | AI blog | Palo Alto |
| 29.11.25 | Do robots dream of secure networking? Teaching cybersecurity to AI systems | This blog demonstrates a proof of concept using LangChain and OpenAI, integrated with Cisco Umbrella API, to provide AI agents with real-time threat intelligence for evaluating domain dispositions. | AI blog | CISCO TALOS |
| 22.11.25 | Arms Race: AI's Impact on Cybersecurity | New whitepaper explores how both attackers and defenders are using the latest AI technologies to achieve their goals. | AI blog | SECURITY.COM |
| 22.11.25 | Based on recent analysis of the broader threat landscape, Google Threat Intelligence Group (GTIG) has identified a shift that occurred within the last year: adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying novel AI-enabled malware in active operations. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution. | AI blog | Google Threat Intelligence | |
| 22.11.25 | The global appetite for GLP-1 medications like Ozempic, Wegovy and Mounjaro have created something far ... | AI blog | CHECKPOINT | |
| 22.11.25 | What if your romantic AI chatbot can’t keep a secret? | Does your chatbot know too much? Here's why you should think twice before you tell your AI companion everything. | AI blog | Eset |
| 15.11.25 | Why shadow AI could be your biggest security blind spot | From unintentional data leakage to buggy code, here’s why you should care about unsanctioned AI use in your company | AI blog | Eset |
| 8.11.25 | Introduction Over the past few months, we identified an emerging online threat that combines fraud, ... | AI blog | CHECKPOINT | |
| 8.11.25 | Insiders, AI, and data sprawl converge: essential insights from the 2025 Data Security Landscape report | Data security is at a critical inflection point. Organizations today are struggling with explosive data growth, sprawling IT environments, persistent insider risks, and the adoption of generative AI (GenAI). What’s more, the rapid emergence of AI agents is giving rise to a new, more complex agentic workspace, where both humans and agents interact with sensitive data. | AI blog | PROOFPOINT |
| 8.11.25 | SesameOp: Novel backdoor uses OpenAI Assistants API for command and control | Microsoft Incident Response – Detection and Response Team (DART) researchers uncovered a new backdoor that is notable for its novel use of the OpenAI Assistants Application Programming Interface (API) as a mechanism for command-and-control (C2) communications. | AI blog | Microsoft blog |
| 8.11.25 | Beating XLoader at Speed: Generative AI as a Force Multiplier for Reverse Engineering | XLoader remains one of the most challenging malware families to analyze. Its code decrypts only at runtime and is protected by multiple layers of encryption, each locked with a different key hidden somewhere else in the binary. Even sandboxes are no help: evasions block malicious branches, and the real C2 (command and control) domains are buried among dozens of fakes. With new versions released faster than researchers can investigate, analysis is almost always a (losing) race against time. | AI blog | CHECKPOINT |
| 8.11.25 | Do robots dream of secure networking? Teaching cybersecurity to AI systems | This blog demonstrates a proof of concept using LangChain and OpenAI, integrated with Cisco Umbrella API, to provide AI agents with real-time threat intelligence for evaluating domain dispositions. | AI blog | CISCO TALOS |
| 1.11.25 | From Human-Led to AI-Driven: Why Agentic AI Is Redefining Cybersecurity Strategy | Agentic AI marks the next leap in cybersecurity—autonomous systems that detect, decide, and act in real time, transforming how organizations defend against threats. | AI blog | Cyble |
| 1.11.25 | AI Security: NVIDIA BlueField Now with Vision One™ | Launching at NVIDIA GTC 2025 - Transforming AI Security with Trend Vision One™ on NVIDIA BlueField | AI blog | Trend Micro |
| 1.11.25 | When AI Agents Go Rogue: Agent Session Smuggling Attack in A2A Systems | We discovered a new attack technique, which we call agent session smuggling. This technique allows a malicious AI agent to exploit an established cross-agent communication session to send covert instructions to a victim agent. | AI blog | Palo Alto |
| 18.10.25 | Crystal Ball Series : Consolidated Instalments | CRYSTAL BALL SERIES IN THIS INSTALMENT WE EXPLORE AI ADVANCEMENTS 2025 AND BEYOND Digital Twin Cybersecurity Neurosymbolic Al Deepfakes: A new era | AI blog | Cyfirma |
| 18.10.25 | AI-aided malvertising: Exploiting a chatbot to spread scams | Cybercriminals have tricked X’s AI chatbot into promoting phishing scams in a technique that has been nicknamed “Grokking”. Here’s what to know about it. | AI blog | Eset |
|
11.10.25 |
Block ransomware proliferation and easily restore files with AI in Google Drive | Ransomware remains one of the most damaging cyber threats facing organizations today. These attacks can lead to substantial financial losses, operational downtime, and data compromise, impacting organizations of all sizes and industries, including healthcare, retail, education, manufacturing, and government. | AI blog | Google Threat Intelligence |
|
11.10.25 |
Operations with Untamed LLMs | Starting in June 2025, Volexity detected a series of spear phishing campaigns targeting several customers and their users in North America, Asia, and Europe. The initially observed campaigns were tailored | AI blog | VOLEXITY |
|
11.10.25 |
How Your AI Chatbot Can Become a Backdoor | In this post of THE AI BREACH, learn how your Chatbot can become a backdoor. | AI blog | Trend Micro |
|
11.10.25 |
Weaponized AI Assistants & Credential Thieves | Learn the state of AI and the NPM ecosystem with the recent s1ngularity' weaponized AI for credential theft. | AI blog | Trend Micro |
|
11.10.25 |
When AI Remembers Too Much – Persistent Behaviors in Agents’ Memory | This article presents a proof of concept (PoC) that demonstrates how adversaries can use indirect prompt injection to silently poison the long-term memory of an AI Agent. We use Amazon Bedrock Agent for this demonstration. | AI blog | Palo Alto |
| 27.9.25 | AI-Powered App Exposes User Data, Creates Risk of Supply Chain Attacks | Trend™ Research’s analysis of Wondershare RepairIt reveals how the AI-driven app exposed sensitive user data due to unsecure cloud storage practices and hardcoded credentials, creating risks of model tampering and supply chain attacks. | AI blog | Trend Micro |
| 27.9.25 | Domino Effect: How One Vendor's AI App Breach Toppled Giants | A single AI chatbot breach at Salesloft-Drift exposed data from 700+ companies, including security leaders. The attack shows how AI integrations expand risk, and why controls like IP allow-listing, token security, and monitoring are critical. | AI blog | Trend Micro |
| 27.9.25 | This Is How Your LLM Gets Compromised | Poisoned data. Malicious LoRAs. Trojan model files. AI attacks are stealthier than ever—often invisible until it’s too late. Here’s how to catch them before they catch you. | AI blog | Trend Micro |
| 27.9.25 | DeceptiveDevelopment: From primitive crypto theft to sophisticated AI-based deception | Malware operators collaborate with covert North Korean IT workers, posing a threat to both headhunters and job seekers | AI blog | Eset |
| 20.9.25 | EvilAI Operators Use AI-Generated Code and Fake Apps for Far-Reaching Attacks | Combining AI-generated code and social engineering, EvilAI operators are executing a rapidly expanding campaign, disguising their malware as legitimate applications to bypass security, steal credentials, and persistently compromise organizations worldwide. | AI blog | Trend Micro |
| 20.9.25 | How AI-Native Development Platforms Enable Fake Captcha Pages | Cybercriminals are abusing AI-native platforms like Vercel, Netlify, and Lovable to host fake captcha pages that deceive users, bypass detection, and drive phishing campaigns. | AI blog | Trend Micro |
| 13.9.25 | Echoleak- Send a prompt , extract secret from Copilot AI!( CVE-2025-32711) | Introduction: What if your Al assistant wasn’t just helping you – but quietly helping someone else too? A recent zero-click exploit known as EchoLeak revealed how Microsoft 365 Copilot could be manipulated to exfiltrate sensitive information – without the... | AI blog | Seqrite |
| 6.9.25 | Hexstrike-AI: When LLMs Meet Zero-Day Exploitation | Key Findings: Newly released framework called Hexstrike-AI provides threat actors with an orchestration “brain” that ... | AI blog | Checkpoint |
| 6.9.25 | PromptLock: The First AI-Powered Ransomware & How It Works | Introduction AI-powered malware has become quite a trend now. We have always been discussing how threat actors could perform attacks by leveraging AI models, and here we have a PoC demonstrating exactly that. Although it has not yet been | AI blog | Seqrite |
| 30.8.25 | Malicious Screen Connect Campaign Abuses AI-Themed Lures for Xworm Delivery | During a recent Advanced Continual Threat Hunt (ACTH) investigation, the Trustwave SpiderLabs Threat Hunt team identified a deceptive campaign that abused fake AI-themed content to lure users into executing a malicious, pre-configured ScreenConnect installer. | AI blog | TRUSTWAVE |
| 30.8.25 | LLM Security: Risks, Best Practices, Solutions | Large language models (LLMs), such as ChatGPT, Claude, and Gemini, are transforming industries by enabling faster workflows, deeper insights, and smarter tools. Their capabilities are reshaping how we work, communicate, and innovate. | AI blog | PROOFPOINT |
| 30.8.25 | First known AI-powered ransomware uncovered by ESET Research | The discovery of PromptLock shows how malicious use of AI models could supercharge ransomware and other threats | AI blog | Eset |
| 23.8.25 | Cybercriminals Abuse AI Website Creation App For Phishing | We are often asked about the impact of AI on the threat landscape. While we have observed that large language model (LLM) generated emails or scripts have so far had little impact, some AI tools are lowering the barrier for entry for digital crime. Take, for example, services that can create websites in minutes with the help of AI. | AI blog | PROOFPOINT |
| 23.8.25 | Investors beware: AI-powered financial scams swamp social media | Can you tell the difference between legitimate marketing and deepfake scam ads? It’s not always as easy as you may think. | AI blog | Eset |
| 17.8.25 | What the White House’s AI Action Plan Means for Infrastructure and Cybersecurity Leaders | The White House’s AI Action Plan, titled “Winning the AI Race”, marks a strategic shift in how the U.S. government aims to lead in artificial intelligence while securing its technological foundations. | AI blog | Eclypsium |
| 16.8.25 | AI wrote my code and all I got was this broken prototype | Can AI really write safer code? Martin dusts off his software engineer skills to put it it to the test. Find out what AI code failed at, and what it was surprisingly good at. Also, we discuss new research on how AI LLM models can be used to assist in the reverse engineering of malware. | AI blog | CISCO TALOS |
| 26.7.25 | Sophos X-Ops explores why larger isn’t always better when it comes to solving security challenges with AI | AI blog | SOPHOS | |
| 26.7.25 | Revisiting Bare Metal Server Security in the Age of AI | The adoption of bare metal cloud services for AI workloads has accelerated significantly, driven by performance requirements that virtualized environments struggle to meet. | AI blog | Eclypsium |
| 19.7.25 | SophosAI at Black Hat USA ’25: Anomaly detection betrayed us, so we gave it | Sophos’ Ben Gelman and Sean Bergeron will present their research on enhancing command line classification with benign anomalous data at Las Vegas | AI blog | SOPHOS |
| 19.7.25 | Old Miner, New Tricks | FortiCNAPP Labs uncovers Lcrypt0rx, a likely AI-generated ransomware variant used in updated H2Miner campaigns targeting cloud resources for Monero mining. | AI blog | FORTINET |
| 19.7.25 | Preventing Zero-Click AI Threats: Insights from EchoLeak | A zero-click exploit called EchoLeak reveals how AI assistants like Microsoft 365 Copilot can be manipulated to leak sensitive data without user interaction. This entry breaks down how the attack works, why it matters, and what defenses are available to proactively mitigate this emerging AI-native threat. | AI blog | Trend Micro |
| 12.7.25 | Black Hat SEO Poisoning Search Engine Results For AI | ThreatLabz | Zscaler ThreatLabz researchers recently uncovered AI-themed websites designed to spread malware. The threat actors behind these attacks are exploiting the popularity of AI tools like ChatGPT and Luma AI. | AI blog | ZSCALER |
| 12.7.25 | Catching Smarter Mice with Even Smarter Cats | Explore how AI is changing the cat-and-mouse dynamic of cybersecurity, from cracking obfuscation and legacy languages to challenging new malware built with Flutter, Rust, and Delphi. | AI blog | FORTINET |
| 5.7.25 | AI Dilemma: Emerging Tech as Cyber Risk Escalates | As AI adoption accelerates, businesses face mounting cyber threats—and urgent choices about secure implementation | AI blog | Trend Micro |
| 2.7.25 | Okta observes v0 AI tool used to build phishing sites | Okta Threat Intelligence has observed threat actors abusing v0, a breakthrough Generative Artificial Intelligence (GenAI) tool created by Vercelopens in a new tab, to develop phishing sites that impersonate legitimate sign-in webpages. | AI blog | OKTA |
| 28.6.25 | Check Point Research discovered the first known case of malware designed to trick AI-based security tools | AI blog | Checkpoint | |
| 14.6.25 | AI is Critical Infrastructure: Securing the Foundation of the Global Future | AI data centers are critical infrastructure now. The U.S. investment in AI is nearing a trillion dollars, and new agreements between global superpowers and hyperscaler companies are turning AI into what recent congressional testimony from the Center for Strategic and International Studies described as “the defining competition of the 21st century.” | AI blog | Eclypsium |
| 7.6.25 | How Good Are the LLM Guardrails on the Market? A Comparative Study on the Effectiveness of LLM Content Filtering Across Major GenAI Platforms | We conducted a comparative study of the built-in guardrails offered by three major cloud-based large language model (LLM) platforms. We examined how each platform's guardrails handle a broad range of prompts, from benign queries to malicious instructions. | AI blog | Palo Alto |
| 7.6.25 | Lost in Resolution: Azure OpenAI's DNS Resolution Issue | In late 2024, Unit 42 researchers discovered an issue with Azure OpenAI’s Domain Name System (DNS) resolution logic that could have enabled cross-tenant data leaks and meddler-in-the-middle (MitM) attacks. This issue stemmed from a misconfiguration in how the Azure OpenAI API handled domain assignments, versus how the user interface (UI) handled them. | AI blog | Palo Alto |
| 1.6.25 | Trend Micro Leading the Fight to Secure AI | New MITRE ATLAS submission helps strengthen organizations’ cyber resilience | AI blog | Trend Micro |
| 25.4.25 | Deepfake 'doctors' take to TikTok to peddle bogus cures | Look out for AI-generated 'TikDocs' who exploit the public's trust in the medical profession to drive sales of sketchy supplements | AI blog | Eset |
| 25.4.25 | Will super-smart AI be attacking us anytime soon? | What practical AI attacks exist today? “More than zero” is the answer – and they’re getting better. | AI blog | |
| 19.4.25 | Top 10 for LLM & Gen AI Project Ranked by OWASP | Trend Micro has become a Gold sponsor of the OWASP Top 10 for LLM and Gen AI Project, merging cybersecurity expertise with OWASP's collaborative efforts to address emerging AI security risks. This partnership underscores Trend Micro's unwavering commitment to advancing AI security, ensuring a secure foundation for the transformative power of AI. | AI blog | Trend Micro |
| 19.4.25 | Care what you share | In this week’s newsletter, Thorsten muses on how search engines and AI quietly gather your data while trying to influence your buying choices. Explore privacy-friendly alternatives and get the scoop on why it's important to question the platforms you interact with online. | AI blog | Palo Alto |
| 19.4.25 | CapCut copycats are on the prowl | Cybercriminals lure content creators with promises of cutting-edge AI wizardry, only to attempt to steal their data or hijack their devices instead | AI blog | Eset |
| 12.4.25 | Incomplete NVIDIA Patch to CVE-2024-0132 Exposes AI Infrastructure and Data to Critical Risks | A previously disclosed vulnerability in NVIDIA Container Toolkit has an incomplete patch, which, if exploited, could put a wide range of AI infrastructure and sensitive data at risk. | AI blog | |
| 12.4.25 | GTC 2025: AI, Security & The New Blueprint | From quantum leaps to AI factories, GTC 2025 proved one thing: the future runs on secure foundations. | AI blog | |
| 12.4.25 | How Prompt Attacks Exploit GenAI and How to Fight Back | Palo Alto Networks has released “Securing GenAI: A Comprehensive Report on Prompt Attacks: Taxonomy, Risks, and Solutions,” which surveys emerging prompt-based attacks on AI applications and AI agents. While generative AI (GenAI) has many valid applications for enterprise productivity, there is also potential for critical security vulnerabilities in AI applications and AI agents. | AI blog | Palo Alto |
| 5.4.25 | The good, the bad and the unknown of AI: A Q&A with Mária Bieliková | The computer scientist and AI researcher shares her thoughts on the technology’s potential and pitfalls – and what may lie ahead for us | AI blog | Eset |
|
22.3.25 |
AI's biggest surprises of 2024 | Unlocked 403 cybersecurity podcast (S2E1) | Here's what's been hot on the AI scene over the past 12 months, how it's changing the face of warfare, and how you can fight AI-powered scams | AI blog | |
|
15.3.25 |
AI-Assisted Fake GitHub Repositories Fuel SmartLoader and LummaStealer Distribution |
In this blog entry, we uncovered a campaign that uses fake GitHub repositories to distribute SmartLoader, which is then used to deliver Lumma Stealer and other malicious payloads. The campaign leverages GitHub’s trusted reputation to evade detection, using AI-generated content to make fake repositories appear legitimate. |
||
|
15.3.25 |
Malicious use of AI is reshaping the fraud landscape, creating major new risks for businesses |
|||
| 8.3.25 | Exploiting DeepSeek-R1: Breaking Down Chain of Thought Security | DeepSeek-R1 uses Chain of Thought (CoT) reasoning, explicitly sharing its step-by-step thought process, which we found was exploitable for prompt attacks. | AI blog | Trend Micro |
| 8.3.25 | Martin Rees: Post-human intelligence – a cosmic perspective | Starmus highlights | Take a moment to think beyond our current capabilities and consider what might come next in the grand story of evolution | AI blog | |
| 1.3.25 | Bernhard Schölkopf: Is AI intelligent? | Starmus highlights | AI blog | Eset | |
|
22.2.25 | ||||
|
22.2.25 |
Neil Lawrence: What makes us unique in the age of AI | Starmus highlights |
|||
|
22.2.25 |
Roeland Nusselder: AI will eat all our energy, unless we make it tiny | Starmus highlights | |||
|
22.2.25 | ||||
|
22.2.25 |
This month in security with Tony Anscombe – January 2025 edition |
|||
|
22.2.25 | ||||
|
22.2.25 | ||||
|
22.2.25 | ||||
|
22.2.25 |
Investigating LLM Jailbreaking of Popular Generative AI Web Products |
This article summarizes our investigation into jailbreaking 17 of the most popular generative AI (GenAI) web products that offer text generation or chatbot services. | ||
|
18.1.25 | Cybersecurity and AI: What does 2025 have in store? | In the hands of malicious actors, AI tools can enhance the scale and severity of all manner of scams, disinformation campaigns and other threats | AI blog | |
|
11.1.25 | AI moves to your PC with its own special hardware | Seeking to keep sensitive data private and accelerate AI workloads? Look no further than AI PCs powered by Intel Core Ultra processors with a built-in NPU. | AI blog | |
|
4.1.25 | AI Pulse: Top AI Trends from 2024 - A Look Back | In this edition of AI Pulse, let's look back at top AI trends from 2024 in the rear view so we can more clearly predicts AI trends for 2025 and beyond. | AI blog | |