{"id":45642,"date":"2026-02-17T20:41:11","date_gmt":"2026-02-17T20:41:11","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/meta-and-other-tech-companies-ban-openclaw-over-cybersecurity-concerns\/"},"modified":"2026-02-17T20:41:11","modified_gmt":"2026-02-17T20:41:11","slug":"meta-and-other-tech-companies-ban-openclaw-over-cybersecurity-concerns","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/meta-and-other-tech-companies-ban-openclaw-over-cybersecurity-concerns\/","title":{"rendered":"Meta and Other Tech Companies Ban OpenClaw Over Cybersecurity Concerns"},"content":{"rendered":"<p>Save StorySave this storySave StorySave this story<\/p>\n<p>Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. \u201cYou&#039;ve likely seen Clawdbot trending on X\/LinkedIn. While cool, it is currently unvetted and high-risk for our environment,&quot; he wrote in a Slack message with a red siren emoji. &quot;Please keep Clawdbot off all company hardware and away from work-linked accounts.&quot;<\/p>\n<p>Grad isn\u2019t the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and now as OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity to speak frankly.<\/p>\n<p>Peter Steinberger, OpenClaw\u2019s solo founder, launched it as a free, open-source tool last November. But its popularity surged last month as other coders contributed features and began sharing their experiences using it on social media. 
Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation.<\/p>\n<p>OpenClaw requires basic software engineering knowledge to set up. After that, it needs only limited direction to take control of a user\u2019s computer and interact with other apps to assist with tasks such as organizing files, conducting web research, and shopping online.<\/p>\n<p>Some cybersecurity professionals have publicly urged companies to strictly control how their workforces use OpenClaw. The recent bans show companies moving quickly to put security ahead of their desire to experiment with emerging AI technologies.<\/p>\n<p>\u201cOur policy is, \u2018mitigate first, investigate second\u2019 when we come across anything that could be harmful to our company, users, or clients,\u201d says Grad, who is cofounder and CEO of Massive, which provides internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says.<\/p>\n<p>At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 in an internal Slack channel for sharing new tech to potentially try out. The company\u2019s president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED.<\/p>\n<p>\u201cIf it got access to one of our developer\u2019s machines, it could get access to our cloud services and our clients\u2019 sensitive information, including credit card information and GitHub codebases,\u201d Pistone says. \u201cIt\u2019s pretty good at cleaning up some of its actions, which also scares me.\u201d<\/p>\n<p>A week later, Pistone did allow Valere\u2019s research team to run OpenClaw on an employee\u2019s old computer. 
The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the internet only after password-protecting its control panel, to prevent unwanted access.<\/p>\n<p>In a report shared with WIRED, the Valere researchers added that users have to \u201caccept that the bot can be tricked.\u201d For instance, if OpenClaw is set up to summarize a user\u2019s email, a hacker could send the person a malicious email containing instructions that direct the AI to share copies of files on the person\u2019s computer.<\/p>\n<p>But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. \u201cIf we don\u2019t think we can do it in a reasonable time, we\u2019ll forgo it,\u201d he says. \u201cWhoever figures out how to make it secure for businesses is definitely going to have a winner.\u201d<\/p>\n<p>Some companies concerned about OpenClaw are choosing to trust the cybersecurity protections they already have in place rather than introduce a formal or one-off ban. The CEO of a major software company says only about 15 programs are allowed on corporate devices. Anything else should be automatically blocked, says the executive, who spoke on the condition of anonymity to discuss internal security protocols. He says that while OpenClaw is innovative, he doubts it will find a way to operate on the company\u2019s network undetected.<\/p>\n<p>Jan-Joost den Brinker, chief technology officer at Prague-based compliance software developer Durbink, says he bought a dedicated machine, not connected to company systems or accounts, that employees can use to play around with OpenClaw. \u201cWe aren\u2019t solving business problems with OpenClaw at the moment,\u201d he says.<\/p>\n<p>Massive, the web proxy company, is cautiously exploring OpenClaw\u2019s commercial possibilities. 
Grad says it tested the AI tool on isolated machines in the cloud and then, last week, released ClawPod, a way for OpenClaw agents to use Massive\u2019s services to browse the web. While OpenClaw is still not welcome on Massive\u2019s systems without protections in place, the allure of the new technology and its moneymaking potential proved too great to ignore. OpenClaw \u201cmight be a glimpse into the future. That\u2019s why we\u2019re building for it,\u201d Grad says.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. \u201cYou\u2019ve likely seen Clawdbot trending on X\/LinkedIn. While cool, it is currently unvetted and high-risk for our environment,\u201d he wrote in a Slack message with a red siren emoji. \u201cPlease keep Clawdbot [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":45643,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-45642","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/45642","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=45642"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/45642\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/45643"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=45642"}],
"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=45642"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=45642"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}