Articles tagged with: #ai Clear filter
Google's New AI Doesn't Just Find Vulnerabilities  -  It Rewrites Code to Patch Them

Google's New AI Doesn't Just Find Vulnerabilities - It Rewrites Code to Patch Them

The Hacker News thehackernews.com

Google's DeepMind division on Monday announced an artificial intelligence (AI)-powered agent called CodeMender that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. The efforts add to the company's ongoing efforts to improve AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz. DeepMind said the AI agent is designed to be both reactive and

New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise

New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise

The Hacker News thehackernews.com

For years, security leaders have treated artificial intelligence as an "emerging" technology, something to keep an eye on but not yet mission-critical. A new Enterprise AI and SaaS Data Security Report by AI & Browser Security company LayerX proves just how outdated that mindset has become. Far from a future concern, AI is already the single largest uncontrolled channel for corporate data

#ai
RL Is a Hammer and LLMs Are Nails: A Simple Reinforcement Learning Recipe for Strong Prompt Injection

RL Is a Hammer and LLMs Are Nails: A Simple Reinforcement Learning Recipe for Strong Prompt Injection

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.04885v1 Announce Type: new Abstract: Prompt injection poses a serious threat to the reliability and safety of LLM agents. Recent defenses against prompt injection, such as Instruction Hierarchy and SecAlign, have shown notable robustness against static attacks. However, to more thoroughly evaluate the robustness of these defenses, it is arguably necessary to employ strong attacks such as automated red-teaming. To this end, we introduce RL-Hammer, a simple recipe for training attacker...

#ai
VortexPIA: Indirect Prompt Injection Attack against LLMs for Efficient Extraction of User Privacy

VortexPIA: Indirect Prompt Injection Attack against LLMs for Efficient Extraction of User Privacy

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.04261v1 Announce Type: new Abstract: Large language models (LLMs) have been widely deployed in Conversational AIs (CAIs), while exposing privacy and security threats. Recent research shows that LLM-based CAIs can be manipulated to extract private information from human users, posing serious security threats. However, the methods proposed in that study rely on a white-box setting that adversaries can directly modify the system prompt. This condition is unlikely to hold in real-world...

Real-VulLLM: An LLM Based Assessment Framework in the Wild

Real-VulLLM: An LLM Based Assessment Framework in the Wild

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.04056v1 Announce Type: new Abstract: Artificial Intelligence (AI) and more specifically Large Language Models (LLMs) have demonstrated exceptional progress in multiple areas including software engineering, however, their capability for vulnerability detection in the wild scenario and its corresponding reasoning remains underexplored. Prompting pre-trained LLMs in an effective way offers a computationally effective and scalable solution. Our contributions are (i)varied prompt designs...

Quantifying Distributional Robustness of Agentic Tool-Selection

Quantifying Distributional Robustness of Agentic Tool-Selection

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.03992v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in agentic systems where they map user intents to relevant external tools to fulfill a task. A critical step in this process is tool selection, where a retriever first surfaces candidate tools from a larger pool, after which the LLM selects the most appropriate one. This pipeline presents an underexplored attack surface where errors in selection can lead to severe outcomes like unauthorized...

Explainable but Vulnerable: Adversarial Attacks on XAI Explanation in Cybersecurity Applications

Explainable but Vulnerable: Adversarial Attacks on XAI Explanation in Cybersecurity Applications

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.03623v1 Announce Type: new Abstract: Explainable Artificial Intelligence (XAI) has aided machine learning (ML) researchers with the power of scrutinizing the decisions of the black-box models. XAI methods enable looking deep inside the models' behavior, eventually generating explanations along with a perceived trust and transparency. However, depending on any specific XAI method, the level of trust can vary. It is evident that XAI methods can themselves be a victim of...

#ai
PentestMCP: A Toolkit for Agentic Penetration Testing

PentestMCP: A Toolkit for Agentic Penetration Testing

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.03610v1 Announce Type: new Abstract: Agentic AI is transforming security by automating many tasks being performed manually. While initial agentic approaches employed a monolithic architecture, the Model-Context-Protocol has now enabled a remote-procedure call (RPC) paradigm to agentic applications, allowing for the flexible construction and composition of multi-function agents. This paper describes PentestMCP, a library of MCP server implementations that support agentic penetration...

#ai
CryptOracle: A Modular Framework to Characterize Fully Homomorphic Encryption

CryptOracle: A Modular Framework to Characterize Fully Homomorphic Encryption

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.03565v1 Announce Type: new Abstract: Privacy-preserving machine learning has become an important long-term pursuit in this era of artificial intelligence (AI). Fully Homomorphic Encryption (FHE) is a uniquely promising solution, offering provable privacy and security guarantees. Unfortunately, computational cost is impeding its mass adoption. Modern solutions are up to six orders of magnitude slower than plaintext execution. Understanding and reducing this overhead is essential to...

PrivacyMotiv: Speculative Persona Journeys for Empathic and Motivating Privacy Reviews in UX Design

PrivacyMotiv: Speculative Persona Journeys for Empathic and Motivating Privacy Reviews in UX Design

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.03559v1 Announce Type: new Abstract: UX professionals routinely conduct design reviews, yet privacy concerns are often overlooked -- not only due to limited tools, but more critically because of low intrinsic motivation. Limited privacy knowledge, weak empathy for unexpectedly affected users, and low confidence in identifying harms make it difficult to address risks. We present PrivacyMotiv, an LLM-powered system that supports privacy-oriented design diagnosis by generating...

#ai
A Multi-Layer Electronic and Cyber Interference Model for AI-Driven Cruise Missiles: The Case of Khuzestan Province

A Multi-Layer Electronic and Cyber Interference Model for AI-Driven Cruise Missiles: The Case of Khuzestan Province

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.03542v1 Announce Type: new Abstract: The rapid advancement of Artificial Intelligence has enabled the development of cruise missiles endowed with high levels of autonomy, adaptability, and precision. These AI driven missiles integrating deep learning algorithms, real time data processing, and advanced guidance systems pose critical threats to strategic infrastructures, especially under complex geographic and climatic conditions such as those found in Irans Khuzestan Province. In this...

#ai
NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

cs.CR updates on arXiv.org arxiv.org

arXiv:2510.03417v1 Announce Type: new Abstract: Large Language Models (LLMs) have revolutionized natural language processing but remain vulnerable to jailbreak attacks, especially multi-turn jailbreaks that distribute malicious intent across benign exchanges and bypass alignment mechanisms. Existing approaches often explore the adversarial space poorly, rely on hand-crafted heuristics, or lack systematic query refinement. We present NEXUS (Network Exploration for eXploiting Unsafe Sequences), a...

#ai
MCP Kali server + LLM demo  -  would you use this to automate pentesting?

MCP Kali server + LLM demo - would you use this to automate pentesting?

cybersecurity www.reddit.com

Hey folks - I watched a recent YouTube demo where someone set up a local "MCP / CalMCP" server on Kali and connected an LLM (via VS Code / Copilot) so the model could send commands to the Kali machine. In the video the LLM automatically discovered a reflected XSS in a lab, ran payloads, and produced a PoC - all with minimal human interaction. A few important notes up front: I did not create that video - I'm sharing it to spark discussion. Also: this workflow is NOT for beginners. You...

Introducing our agentic commerce solutions

Introducing our agentic commerce solutions

Stripe Blog stripe.com

Last week, we shared how we worked with OpenAI to develop the Agentic Commerce Protocol (ACP), a new specification to help AI platforms embed commerce into their applications and businesses sell through agentic channels without giving up trust or control. ACP is just the beginning. Here are some of the additional steps Stripe is taking to build agentic commerce solutions for AI platforms and businesses.

#ai
Balancing ATS-Compatible Resumes with Timely Job Applications

Balancing ATS-Compatible Resumes with Timely Job Applications

cybersecurity www.reddit.com

So now that ATS (Applicant Tracking Systems) are in full rage, some even being AI-enhanced a lot of highly qualified job applicants that are not aware of this are having their resumes discarded by these ATS systems before the files even make it to HR. For those that have managed to beat the ATS scrutiny what workflows or process are you using and still be able to fill out a good amount of job applications in one day? **presently what I do is take my base resume, run it through AI (a ChatGPT...

#ai
Secure Use of the Agent Payments Protocol (AP2): A Framework for Trustworthy AI-Driven Transactions

Secure Use of the Agent Payments Protocol (AP2): A Framework for Trustworthy AI-Driven Transactions

Cloud Security Alliance cloudsecurityalliance.org

Written by Ken Huang, CEO at DistributedApps.ai and Jerry Huang, Engineering Fellow, Kleiner Perkins. Abstract AI agents used in e-commerce necessitates secure payment protocols capable of handling high-determinism user authorization, agent authentication, and non-repudiable accountability. The Agent Payments Protocol (AP2) [1], an open extension to Agent2Agent (A2A) [2] and Model Context Protocol (MCP) [3], introduces Verifiable Credentials (VCs) in the form of crypto