AI blog 2024 - 2026  2025  2024

AI blog  APT blog  Attack blog  BigBrother blog  BotNet blog  Cyber blog  Cryptocurrency blog  Exploit blog  Hacking blog  ICS blog  Incident blog  IoT blog  Malware blog  OS Blog  Phishing blog  Ransom blog  Safety blog  Security blog  Social blog  Spam blog  Vulnerebility blog

22.12.24

Link Trap: GenAI Prompt Injection Attack Prompt injection exploits vulnerabilities in generative AI to manipulate its behavior, even without extensive permissions. This attack can expose sensitive data, making awareness and preventive measures essential. Learn how it works and how to stay protected. AI blog Trend Micro

21.12.24

Philip Torr: AI to the people | Starmus Highlights We’re on the cusp of a technological revolution that is poised to transform our lives – and we hold the power to shape its impact AI blog

Eset

2.11.24

Deceptive Delight: Jailbreak LLMs Through Camouflage and Distraction

This article introduces a simple and straightforward technique for jailbreaking that we call Deceptive Delight. Deceptive Delight is a multi-turn technique that engages large language models (LLM) in an interactive conversation, gradually bypassing their safety guardrails and eliciting them to generate unsafe or harmful content. AI blog Palo Alto

2.11.24

How LLMs could help defenders write better and faster detection Can LLM tools actually help defenders in the cybersecurity industry write more effective detection content? Read the full research AI blog Cisco Blog

28.9.24

Evolved Exploits Call for AI-Driven ASRM + XDR AI-driven insights for managing emerging threats and minimizing organizational risk AI blog

Trend Micro

21.9.24

Identifying Rogue AI This is the third blog in an ongoing series on Rogue AI. Keep following for more technical guidance, case studies, and insights AI blog

Trend Micro

21.9.24

AI security bubble already springing leaks Artificial intelligence is just a spoke in the wheel of security – an important spoke but, alas, only one AI blog

Eset

31.8.24

AI Pulse: Sticker Shock, Rise of the Agents, Rogue AI This issue of AI Pulse is all about agentic AI: what it is, how it works, and why security needs to be baked in from the start to prevent agentic AI systems from going rogue once they’re deployed. AI blog

Trend Micro

31.8.24

Unmasking ViperSoftX: In-Depth Defense Strategies Against AutoIt-Powered Threats

Explore in-depth defense strategies against ViperSoftX with the Trellix suite, and unpack why AutoIt is an increasingly popular tool for malware authors

AI blog

Trelix

24.8.24

Confidence in GenAI: The Zero Trust Approach Enterprises have gone all-in on GenAI, but the more they depend on AI models, the more risks they face. Trend Vision One™ – Zero Trust Secure Access (ZTSA) – AI Service Access bridges the gap between access control and GenAI services to protect the user journey. AI blog

Trend Micro

24.8.24

Securing the Power of AI, Wherever You Need It Explore how generative AI is transforming cybersecurity and enterprise resilience AI blog

Trend Micro

24.8.24

Rogue AI is the Future of Cyber Threats This is the first blog in a series on Rogue AI. Later articles will include technical guidance, case studies and more. AI blog

Trend Micro

17.8.24

Harnessing LLMs for Automating BOLA Detection This post presents our research on a methodology we call BOLABuster, which uses large language models (LLMs) to detect broken object level authorization (BOLA) vulnerabilities. By automating BOLA detection at scale, we will show promising results in identifying these vulnerabilities in open-source projects. AI blog Palo Alto

3.8.24

AI and automation reducing breach costs – Week in security with Tony Anscombe Organizations that leveraged AI and automation in security prevention cut the cost of a data breach by US$2.22 million compared to those that didn't deploy these technologies, according to IBM AI blog

Eset

3.8.24

Beware of fake AI tools masking very real malware threats Ever attuned to the latest trends, cybercriminals distribute malicious tools that pose as ChatGPT, Midjourney and other generative AI assistants AI blog

Eset

27.7.24

Vulnerabilities in LangChain Gen AI

Researchers from Palo Alto Networks have identified two vulnerabilities in LangChain, a popular open source generative AI framework with over 81,000 stars on GitHub:

AI blog

Palo Alto

13.7.24

Declare your AIndependence: block AI bots, scrapers and crawlers with a single click To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier... AI blog Cloudflare

13.7.24

The Top 10 AI Security Risks Every Business Should Know With every week bringing news of another AI advance, it’s becoming increasingly important for organizations to understand the risks before adopting AI tools. This look at 10 key areas of concern identified by the Open Worldwide Application Security Project (OWASP) flags risks enterprises should keep in mind through the back half of the year. AI blog Trend Micro

13.7.24

The Contrastive Credibility Propagation Algorithm in Action: Improving ML-powered Data Loss Prevention The Contrastive Credibility Propagation (CCP) algorithm is a novel approach to semi-supervised learning (SSL) developed by AI researchers at Palo Alto Networks to improve model task performance with imbalanced and noisy labeled and unlabeled data. AI blog Palo Alto

6.7.24

AI in the workplace: The good, the bad, and the algorithmic While AI can liberate us from tedious tasks and even eliminate human error, it's crucial to remember its weaknesses and the unique capabilities that humans bring to the table AI blog Eset
29.6.24 ICO Scams Leverage 2024 Olympics to Lure Victims, Use AI for Fake Sites In this blog we uncover threat actors using the 2024 Olympics to lure victims into investing in an initial coin offering (ICO). Similar schemes have been found to use AI-generated images for their fake ICO websites. AI blog Trend Micro
29.6.24 AI Coding Companions 2024: AWS, GitHub, Tabnine + More AI coding companions are keeping pace with the high-speed evolution of generative AI overall, continually refining and augmenting their capabilities to make software development faster and easier than ever before. This blog looks at how the landscape is changing and key features of market-leading solutions from companies like AWS, GitHub, and Tabnine. AI blog Trend Micro
15.6.24 Explore AI-Driven Cybersecurity with Trend Micro, Using NVIDIA NIM Discover Trend Micro's integration of NVIDIA NIM to deliver an AI-driven cybersecurity solution for next-generation data centers. Engage with experts, explore demos, and learn strategies for securing AI data centers and optimizing cloud performance. AI blog Trend Micro

1.6.24

AI in HR: Is artificial intelligence changing how we hire employees forever? Much digital ink has been spilled on artificial intelligence taking over jobs, but what about AI shaking up the hiring process in the meantime? AI blog Eset

1.6.24

ESET World 2024: Big on prevention, even bigger on AI What is the state of artificial intelligence in 2024 and how can AI level up your cybersecurity game? These hot topics and pressing questions surrounding AI were front and center at the annual conference. AI blog Eset

25.5.24

What happens when AI goes rogue (and how to stop it) As AI gets closer to the ability to cause physical harm and impact the real world, “it’s complicated” is no longer a satisfying response AI blog Eset
24.5.24 Using Agentic AI & Digital Twin for Cyber Resilience Learn how Trend is combining agentic AI and digital twin to transform the way organizations protect themselves from cyber threats. AI blog Trend Micro
24.5.24 The Sting of Fake Kling: Facebook Malvertising Lures Victims to Fake AI Generation Website In early 2025, Check Point Research (cp<r>) started tracking a threat campaign that abuses the growing popularity of AI content generation platforms by impersonating Kling AI, a legitimate AI-powered image and video synthesis tool. Promoted through Facebook advertisements, the campaign directs users to a convincing spoof of Kling AI’s website, where visitors are invited to create AI-generated images or videos directly in the browser. AI blog Checkpoint
24.5.24 Trend Secures AI Infrastructure with NVIDIA Organizations worldwide are racing to implement agentic AI solutions to drive innovation and competitive advantage. However, this revolution introduces security challenges—particularly for organizations in highly regulated industries that require data sovereignty and strict compliance. AI blog Trend Micro
17.5.24 Trend Micro Puts a Spotlight on AI at Pwn2Own Berlin Get a sneak peak into how Trend Micro's Pwn2Own Berlin 2025 is breaking new ground, focusing on AI infrastructure and finding the bugs to proactively safeguard the future of computing. AI blog Trend Micro

11.5.24

RSA Conference 2024: AI hype overload Can AI effortlessly thwart all sorts of cyberattacks? Let’s cut through the hyperbole surrounding the tech and look at its actual strengths and limitations. AI blog Eset
10.5.24 Exploring PLeak: An Algorithmic Method for System Prompt Leakage What is PLeak, and what are the risks associated with it? We explored this algorithmic technique and how it can be used to jailbreak LLMs, which could be leveraged by threat actors to manipulate systems and steal sensitive data. AI blog Trend Micro
10.5.24 AI Agents Are Here. So Are the Threats. Agentic applications are programs that leverage AI agents — software designed to autonomously collect data and take actions toward specific objectives — to drive their functionality. AI blog Palo Alto
6.4.24 BEYOND IMAGINING – HOW AI IS ACTIVELY USED IN ELECTION CAMPAIGNS AROUND THE WORLD Deepfake materials (convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates) are often disseminated shortly before election dates to limit the opportunity for fact-checkers to respond. Regulations which ban political discussion on mainstream media in the hours leading up to elections, allow unchallenged fake news to dominate the airwaves. AI blog Checkpoint
2.3.24 Deceptive AI content and 2024 elections – Week in security with Tony Anscombe As the specter of AI-generated disinformation looms large, tech giants vow to crack down on fabricated content that could sway voters and disrupt elections taking place around the world this year AI blog Eset
18.2.24 All eyes on AI | Unlocked 403: A cybersecurity podcast Artificial intelligence is on everybody’s lips these days, but there are also many misconceptions about what AI actually is and isn’t. We unpack the basics and examine AI's broader implications. AI blog Eset
4.2.24 Break the fake: The race is on to stop AI voice cloning scams As AI-powered voice cloning turbocharges imposter scams, we sit down with ESET’s Jake Moore to discuss how to hang up on ‘hi-fi’ scam calls – and what the future holds for deepfake detection AI blog Eset

14.1.24

Love is in the AI: Finding love online takes on a whole new meaning Is AI companionship the future of not-so-human connection – and even the cure for loneliness? AI blog Eset