{"id":35378,"date":"2025-10-16T01:31:36","date_gmt":"2025-10-16T01:31:36","guid":{"rendered":"https:\/\/agooka.com\/news\/technologies\/attackers-used-ai-prompts-to-silently-exfiltrate-code-from-github-repositories\/"},"modified":"2025-10-16T01:31:36","modified_gmt":"2025-10-16T01:31:36","slug":"attackers-used-ai-prompts-to-silently-exfiltrate-code-from-github-repositories","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/technologies\/attackers-used-ai-prompts-to-silently-exfiltrate-code-from-github-repositories\/","title":{"rendered":"Attackers used AI prompts to silently exfiltrate code from GitHub repositories"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/dataconomy.com\/wp-content\/uploads\/2025\/10\/1171658.jpg\" alt=\"Attackers used AI prompts to silently exfiltrate code from GitHub repositories\" title=\"Attackers used AI prompts to silently exfiltrate code from GitHub repositories\"\/><\/p>\n<p>A critical vulnerability in GitHub Copilot Chat, dubbed \u201cCamoLeak,\u201d allowed attackers to silently steal source code and secrets from private repositories using a sophisticated prompt injection technique. The flaw, which carried a CVSS score of 9.6, has since been patched by GitHub.<\/p>\n<h2>How the CamoLeak attack worked<\/h2>\n<p>The attack method, discovered by security researcher Omer Mayraz, began by hiding malicious instructions within a pull request description using GitHub\u2019s \u201cinvisible comments\u201d feature. While this content is not visible to users in the standard interface, Copilot Chat ingests all repository and pull request context, including this hidden metadata, when generating responses.<\/p>\n<p>The vulnerability was triggered when a legitimate developer with access to private repositories would ask Copilot Chat a question about the compromised pull request. Copilot, which operates with the permissions of the querying user, would then execute the hidden malicious prompt. 
This allowed the attacker to command the AI assistant to search for sensitive information, such as API keys or source code, within the victim\u2019s accessible private repositories.<\/p>\n<p>To exfiltrate the stolen data, the attack leveraged GitHub\u2019s own \u201cCamo\u201d image proxy service. Normally, GitHub\u2019s Content Security Policy (CSP) prevents content from directly leaking data to external domains. The Camo proxy is designed to safely route external image requests, rewriting URLs to a <code>camo.githubusercontent.com<\/code> address with a cryptographic signature.<\/p>\n<p>The CamoLeak attack bypassed these protections by first having the attacker create a dictionary of pre-signed Camo URLs. Each valid URL pointed to a benign, invisible 1\u00d71 pixel image hosted on the attacker\u2019s server, with each unique URL representing a single character of data (e.g., \u2018A\u2019, \u2018B\u2019, \u20181\u2019, \u2018;\u2019).<\/p>\n<p>The injected prompt then instructed Copilot to construct its response by referencing these pre-signed image URLs in a specific sequence that encoded the stolen repository content. When the victim\u2019s browser rendered Copilot\u2019s output, it made a series of requests through the trusted Camo proxy to fetch each invisible pixel. The sequence of these requests, as received by the attacker\u2019s server, effectively reconstructed the stolen data character by character, all without displaying any malicious content to the user or triggering standard network security alerts.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A critical vulnerability in GitHub Copilot Chat, dubbed \u201cCamoLeak,\u201d allowed attackers to silently steal source code and secrets from private repositories using a sophisticated prompt injection technique. The flaw, which carried a CVSS score of 9.6, has since been patched by GitHub. 
How the CamoLeak attack worked The attack method, discovered by security researcher Omer [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":35379,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[],"class_list":{"0":"post-35378","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technologies"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/35378","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=35378"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/35378\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/35379"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=35378"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=35378"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=35378"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}