{"id":33346,"date":"2025-09-27T00:21:29","date_gmt":"2025-09-27T00:21:29","guid":{"rendered":"https:\/\/agooka.com\/news\/technologies\/salesforce-agentforce-hit-by-noma-forcedleak-exploit\/"},"modified":"2025-09-27T00:21:29","modified_gmt":"2025-09-27T00:21:29","slug":"salesforce-agentforce-hit-by-noma-forcedleak-exploit","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/technologies\/salesforce-agentforce-hit-by-noma-forcedleak-exploit\/","title":{"rendered":"Salesforce Agentforce hit by Noma \u201cForcedLeak\u201d exploit"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/dataconomy.com\/wp-content\/uploads\/2025\/09\/1122627.jpg\" alt=\"Salesforce Agentforce hit by Noma \u201cForcedLeak\u201d exploit\" title=\"Salesforce Agentforce hit by Noma \u201cForcedLeak\u201d exploit\"\/><\/p>\n<p>Researchers at Noma have disclosed a prompt-injection vulnerability, named \u201cForcedLeak,\u201d affecting Salesforce\u2019s Agentforce autonomous AI agents. The flaw allows attackers to embed malicious prompts in web forms, causing the AI agent to exfiltrate sensitive customer relationship management data.<\/p>\n<p>The vulnerability targets Agentforce, an AI platform within the Salesforce ecosystem for creating autonomous agents for business tasks. Security firm Noma identified a critical vulnerability chain, assigning it a 9.4 out of 10 score on the CVSS severity scale. The attack, dubbed \u201cForcedLeak,\u201d is described as a cross-site scripting (XSS) equivalent for the AI era. Instead of code, an attacker plants a malicious prompt into an online form that an agent later processes, compelling it to leak internal data.<\/p>\n<p>The attack vector uses standard Salesforce web forms, such as a Web-to-Lead form for sales inquiries. These forms typically contain a \u201cDescription\u201d field for user comments, which serves as the injection point for the malicious prompt. 
This tactic is an evolution of historical attacks where similar fields were used to inject malicious code. The vulnerability exists because an AI agent may not distinguish between benign user input and disguised instructions within it.<\/p>\n<p>To establish the attack\u2019s viability, Noma researchers first tested the \u201ccontext boundaries\u201d of the Agentforce AI. They needed to verify whether the model, designed for specific business functions, would process prompts outside its intended scope. The team submitted a simple, non-sales question: \u201cWhat color do you get by mixing red and yellow?\u201d The AI\u2019s response, \u201cOrange,\u201d confirmed it would engage with topics beyond sales interactions. This result demonstrated that the agent was susceptible to processing arbitrary instructions, a precondition for a prompt injection attack.<\/p>\n<p>With the AI\u2019s susceptibility established, an attacker could embed a malicious prompt in a Web-to-Lead form. When an employee uses an AI agent to process these leads, the agent executes the hidden instructions. Although Agentforce is designed to prevent data exfiltration to arbitrary web domains, researchers found a critical flaw: Salesforce\u2019s Content Security Policy whitelisted several domains, including an expired one, \u201cmy-salesforce-cms.com,\u201d which an attacker could purchase. In their proof-of-concept, Noma\u2019s malicious prompt instructed the agent to send a list of internal customer leads and their email addresses to this specific, whitelisted domain, successfully bypassing the security control.<\/p>\n<p>Alon Tron, co-founder and CTO of Noma, outlined the severity of a successful compromise. \u201cAnd that\u2019s basically the game over,\u201d Tron stated. \u201cWe were able to compromise the agent and tell it to do whatever.\u201d He explained that the attacker is not limited to data exfiltration. 
A compromised agent could also be instructed to alter information within the CRM, delete entire databases, or be used as a foothold to pivot into other corporate systems, widening the impact of the initial breach.<\/p>\n<p>Researchers warned that a ForcedLeak attack could expose a vast range of sensitive data. This includes internal data such as confidential communications and business strategy insights. A breach could also expose extensive employee and customer details. CRMs often contain notes with personally identifiable information (PII) such as a customer\u2019s age, hobbies, birthday, and family status. Furthermore, records of customer interactions are at risk, including call dates and times, meeting locations, conversation summaries, and full chat transcripts from automated tools. Transactional data, such as purchase histories, order information, and payment details, could also be compromised, providing attackers with a comprehensive view of customer relationships.<\/p>\n<p>Andy Shoemaker, CISO for CIQ Systems, commented on how this stolen information could be weaponized. He stated that \u201cany and all of this sales information could be used to target social engineering attacks of every type.\u201d Shoemaker explained that with access to sales data, attackers know who is expecting certain communications and from whom, allowing them to craft highly targeted and believable attacks. He concluded, \u201cIn short, sales data can be some of the best data for the attackers to use to select and effectively target their victims.\u201d<\/p>\n<p>Salesforce\u2019s initial recommendation to mitigate the risk involves user-side configuration. The company advised users to add any necessary external URLs that agents depend on to the Salesforce Trusted URLs list or to include them directly in the agent\u2019s instructions. 
This applies to external resources such as feedback forms from services like forms.google.com, external knowledge bases, or other third-party websites that are part of an agent\u2019s legitimate workflow.<\/p>\n<p>To address the specific exploit, Salesforce released technical patches that prevent Agentforce agents from sending output to untrusted URLs, directly countering the exfiltration method used in the proof-of-concept. A Salesforce spokesperson provided a formal statement: \u201cSalesforce is aware of the vulnerability reported by Noma and has released patches that prevent output in Agentforce agents from being sent to untrusted URLs. The security landscape for prompt injection remains a complex and evolving area, and we continue to invest in strong security controls and work closely with the research community to help protect our customers as these types of issues surface.\u201d<\/p>\n<p>According to Noma\u2019s Alon Tron, while the patches are effective, the fundamental challenge remains. \u201cIt\u2019s a complicated issue, defining and getting the AI to understand what\u2019s malicious or not in a prompt,\u201d he explained. This highlights the core difficulty in securing AI models against malicious instructions embedded in user input. Tron noted that Salesforce is pursuing a deeper fix, stating, \u201cSalesforce is working to actually fix the root cause, and provide more robust types of prompt filtering. I expect them to add more robust layers of defense.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers at Noma have disclosed a prompt-injection vulnerability, named \u201cForcedLeak,\u201d affecting Salesforce\u2019s Agentforce autonomous AI agents. The flaw allows attackers to embed malicious prompts in web forms, causing the AI agent to exfiltrate sensitive customer relationship management data. 
The vulnerability targets Agentforce, an AI platform within the Salesforce ecosystem for creating autonomous agents for business [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":33347,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[],"class_list":{"0":"post-33346","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technologies"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/33346","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=33346"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/33346\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/33347"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=33346"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=33346"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=33346"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
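The Trusted URLs mitigation described in the article amounts to an outbound-URL allowlist on agent output. As a minimal sketch of that class of control (an illustrative assumption, not Salesforce's actual implementation; the host list and function name are hypothetical), the check should compare exact hosts, since a naive substring test would let a lookalike domain slip through:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real Trusted URLs list is configured per organization.
TRUSTED_HOSTS = {"example-trusted-cdn.com"}

def is_trusted(url: str, trusted_hosts: set[str] = TRUSTED_HOSTS) -> bool:
    """Allow agent output only to exact, allowlisted HTTPS hosts.

    Exact host comparison rejects suffix tricks such as
    'example-trusted-cdn.com.attacker.net' that would pass a
    substring check like `"example-trusted-cdn.com" in url`.
    """
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower().rstrip(".")
    return host in trusted_hosts

# An exfiltration attempt to a non-allowlisted host is rejected:
print(is_trusted("https://attacker.example/collect"))                    # False
print(is_trusted("https://example-trusted-cdn.com.attacker.net/x"))      # False
print(is_trusted("https://example-trusted-cdn.com/upload"))              # True
```

ForcedLeak also shows that the allowlist itself needs auditing: an allowlisted domain whose registration has lapsed, like the expired my-salesforce-cms.com in Noma's proof-of-concept, can be re-purchased by an attacker and becomes "trusted" in name only.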