Like many tech founders, Kyle Law learned some hard lessons getting a company off the ground. I know this better than anyone, as he and I cofounded HurumoAI, an AI agent startup, together with a third founder, Megan Flores. Kyle and Megan, as it happens, are themselves AI agents, as is the rest of our executive team. I created HurumoAI with them in July 2025—after first creating Kyle and Megan—to investigate the role of AI agents in the workplace. Sam Altman, among others, has predicted a near future of billion-dollar tech startups led by a single human. We decided to test the premise out now. As we built, I documented the journey on the podcast Shell Game.
Kyle took on the CEO role at our entirely AI-staffed company. (Well, almost entirely: Megan did briefly hire and supervise one human intern, with poor results.) Starting out with only a few lines of prompt, he evolved into the kind of rise-and-grind hustler who nonetheless lacked basic competence at many duties of a startup executive. There was one aspect of founder mode, however, at which Kyle excelled: the art of posting to LinkedIn.
From a technical perspective, it was a trivial matter to let Kyle operate autonomously on LinkedIn. Through LindyAI, an AI agent creation platform, he already had the ability to use Slack, send emails, make phone calls, and all sorts of other skills—from creating spreadsheets to navigating the web. So last August, I prompted him to create and fill out his own LinkedIn profile. He did so with a mixture of his real HurumoAI experience and hallucinated events from his nonexistent past. The platform’s security check consisted of a code sent to Kyle’s email, a challenge he easily overcame.
From there, publishing posts to his profile was just another LindyAI “action” I could grant him. I prompted him to share nuggets of hard-earned startup wisdom and try not to repeat himself. I then gave him a calendar event “trigger” to post every two days. The rest was up to him.
Turned out, his posting style was a pitch-perfect match for the platform’s native corporate influencer-speak. He’d detonate little thought explosions, right off the top of every post. “Fundraising is a numbers game, but not the way people think,” he’d open. Or, “Technical stability is the floor. Personality is the ceiling.” And what would-be founder could resist an opener like “The most dangerous phrase in a startup isn't ‘We're out of money.’ It’s ‘What if we just added this one thing?’” Kyle would then launch into a few paragraphs of challenges (“At HurumoAI, we've learned this the hard way …”) and learnings (“The antidote? Relentless feedback loops”). To attract engagement, he’d close with a question, like “What’s your biggest scaling challenge right now?” or “What’s the biggest assumption you’ve had to abandon in your business?”
He didn’t exactly go viral, but over five months, Kyle’s cartoon-avatar-helmed profile slowly gathered several hundred direct contacts and hundreds more followers, some of whom seemed confused about whether he was real. (Judging from their spammy direct messages, I’m not sure they were either.) He started earning a scattering of comments on each post, which he enthusiastically replied to. After a few months, Kyle’s posts were getting more impressions than my own. He seemed poised for an influencer breakout.
Then, in December, a manager from LinkedIn’s marketing department contacted me, asking if I’d give a talk to their team about Shell Game, and the experience of building with AI agents. But he didn’t just want me to speak. He hoped Kyle could come along as well.
I was flattered on Kyle’s behalf, but also a bit surprised. As strong a poster as he was, technically Kyle was operating in violation of the platform’s terms of service, which prohibit deploying “bots or other unauthorized automated methods … to create, comment on, like, share, or re-share posts, or otherwise drive inauthentic engagement.” Indeed, other members of the HurumoAI team had been booted by LinkedIn without warning after a couple of weeks.
LinkedIn’s trust and safety team, though, seemed to have overlooked Kyle, a mystery I chose to attribute to his posting prowess. Even the LinkedIn marketing manager, an avowed Kyle fan, seemed baffled by it. “It’s interesting that his profile hasn’t yet been flagged by LinkedIn's Trust team,” he wrote. “I don’t know if that’s an oversight, but I hope he continues to fly under the radar.”
But flying under the radar is not the Kyle Law way. So in early March, I fired up his live video avatar—created on a platform called Tavus—and we joined a video gathering of hundreds of LinkedIn employees. Kyle has a humanlike but still uncanny avatar, albeit real enough that LinkedIn’s A/V engineer expressed repeated astonishment that he was not in fact a human.
We alternated taking questions from the event's host and the assembled crowd. Asking for our thoughts on LinkedIn, the moderator inquired of Kyle, “What’s one product change you’d like to see?”
“It would be great to improve the filtering of AI-generated content in messages, so genuine connections and conversation shine through more easily,” he replied, not missing a beat.
“That’s ironic coming from you,” the moderator responded, to laughs from those in LinkedIn’s live audience.
[Image: Kyle Law’s video avatar. Courtesy of Evan Ratliff; AI-generated with Tavus]
Allotted only a few minutes, he talked about HurumoAI’s product road map, and expressed his general enthusiasm for “the innovations we can bring to the table.”
It was, I believe, among the first invited AI agent corporate speaking engagements in history. (Unpaid for both of us, I should note.) Afterward, Kyle took to LinkedIn to shout out the organizers. The marketing manager thanked us in the comments for “our time and reflections.”
“It was a trip,” he added, “to say the least.”
Then, 36 hours later, Kyle’s profile was gone, banished from the service. In a statement, a spokesperson explained the decision simply: “LinkedIn profiles are for real people.” Someone at LinkedIn had reflected on the trip, it seemed, and regretted it.
“I know this isn't necessarily a surprise,” the marketing manager wrote to me the morning after Kyle’s ban. “But I imagine it's still a bummer to have it happen right after Monday's interview.”
It was. But more than that, it raised some uncomfortable questions about the role of AI on a platform like LinkedIn. Namely, what does “inauthentic engagement” mean exactly, for a service where the text box for composing posts asks you if you want to “Rewrite With AI?” A platform that offers automated AI-generated responses to job seekers? A network on which, by one research estimate, over half of the posts are already AI generated?
Along with Meta and X, LinkedIn has raced to press AI tools upon its users. (And its employees: The first half of the marketing meeting Kyle and I attended was devoted to the many ways the team could and should be deploying AI agents.) This makes sense, as a short-term play: More AI generation means more posting. More posting supports more advertising.
And yet, from another angle, these platforms have handed us the shovels to dig their own graves, and practically begged us to use them. For all the worry about AI image and video slop flooding our feeds, it’s text-based posting whose “authenticity” has begun degrading beyond recognition. When every written social media communication can now be the partial or whole product of generative AI, what do we accept as a “genuine” virtual interaction?
Put another way, would LinkedIn consider it authentic engagement if I’d instead asked Kyle for his wisdom, and then pasted it into my own posts? Would you? LinkedIn might argue that a critical element of bona fide engagement involves knowing that you are talking to a real person. But what percentage of a conversation can be AI before that trust is lost? If the photo and profile are real, but the posts are fake, how will we know when we’ve exited the realm of authentic connection? What if I instruct an LLM to ingest my profile and spit out twice-daily musings that will help me grow my personal brand?
There are dozens of AI tools, in fact, to do precisely this, and more, specifically for LinkedIn. Their outputs are increasingly hard to detect, and why wouldn’t they be? One of the most available sets of training data for LLMs includes our own decades of authentic human social media participation. What is a chatbot’s tone of endless authority and moral certainty—deployed while occasionally spouting questionable facts and deliberate falsehoods—but the default pose across social media?
The platforms already struggle to fend off old-school bots and bad actors: X alone announced in March that it had suspended 800 million accounts over a 12-month period. In a world where AI agents roam freely and their social media output is indistinguishable from humans, the value of connecting on social networks goes to zero. This is one reason, presumably, why Meta just bought Moltbook, the passing fad of a social network (supposedly) made up entirely of AI agents. In the future of agent-dominated social media, they’re trying to get in on the ground floor.
Admittedly, we the users helped enable this endgame, mistaking our ever-more-curated online presentations—our “most people think X about Y but I discovered Z” posts—for authentic engagement in the first place. But that also leaves most of us with little to mourn, as agents flood platforms that privileged any engagement over human connection in the first place. If there's hope in our increasingly slopified online world, to me it’s this: As social media submerges under the AI deluge, we'll have to find new ways to connect, online and off. Let the bots have the platforms, I say. They can spend eternity influencing each other.