{"id":48919,"date":"2026-04-10T16:21:58","date_gmt":"2026-04-10T16:21:58","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/this-startup-wants-you-to-pay-up-to-talk-with-ai-versions-of-human-experts\/"},"modified":"2026-04-10T16:21:58","modified_gmt":"2026-04-10T16:21:58","slug":"this-startup-wants-you-to-pay-up-to-talk-with-ai-versions-of-human-experts","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/this-startup-wants-you-to-pay-up-to-talk-with-ai-versions-of-human-experts\/","title":{"rendered":"This Startup Wants You to Pay Up to Talk With AI Versions of Human Experts"},"content":{"rendered":"<p>Save StorySave this storySave StorySave this story<\/p>\n<p>It was probably inevitable that when AI hoovered up the world\u2019s knowledge and learned to talk like a human being, people would use it to seek out personal guidance. It\u2019s an enticing concept\u2014AI is always available and generally costs less than a human\u2014but the drawbacks are obvious. Large language models are prone to inaccuracies and outright hallucinations. There are privacy issues associated with sharing one\u2019s secrets and woes with a big company. The wisdom dispensed by AI is not crisply sourced, and almost all of it is ripped from creators who never see a dime in compensation. Plus, it\u2019s downright dystopian for human beings to be advised by robots.<\/p>\n<p>This week, a new company is being launched, claiming to resolve all those issues\u2014except the last one. Onix, cofounded and led by a former WIRED contributor named David Bennahum, describes itself as a Substack for chatbots. Just as you subscribe to a writer on Substack, you can subscribe to an AI doppelganger of a celebrated expert, called an \u201cOnix.\u201d These bots are trained to conduct conversations with subscribers, delivering the provider\u2019s expertise and advice like they would if you had a face-to-face appointment in their offices. 
The bots even attempt to project the unique personalities of the experts (though I found the conversations rather dry).<\/p>\n<p>Bennahum tells me that his company has spent years creating technology that protects users and experts. He calls it \u201cPersonal Intelligence.\u201d The bots store information, encrypted, on the user\u2019s device. If a government demands the Canada-based company provide dirt on a user, all it can come up with is the person\u2019s email. Since the experts themselves train the dupes with their personal content, there\u2019s theoretically no intellectual property issue. Bennahum also claims that because the models have guardrails that limit the conversation to the subject of the consultations, hallucinations are kept to a minimum. During my testing, though, when I asked a bot therapist who it liked in the NBA playoffs\u2014a change of subject it should have shut down\u2014it called my jail-breaking pivot a \u201cfun change of pace\u201d and then hallucinated that we were in the middle of last year\u2019s conference finals. I drew another Onix away from our exchange about ketamine therapy, into a discussion of how a romantic split broke up the indie band the Mendoza Line, though it tried to cast the separation as a \u201cpowerful expression of their neurobiology in distress.\u201d<\/p>\n<p>Well, Onix is still in beta, so it\u2019s not perfect. In this initial stage, a limited number of invited testers, drawn from a waitlist, have joined. After a shakedown period, Onix will be open to all.<\/p>\n<figure><img decoding=\"async\" alt=\"Image may contain Electronics Mobile Phone Phone Person and Text\" src=\"https:\/\/media.wired.com\/photos\/69d81cd28755445d818c74a9\/master\/w_1600%2Cc_limit\/Backchannel-Onix-AI-Business-Screenshot04.jpg\"\/><\/figure>\n<p>The company isn\u2019t exactly breaking new ground. The idea of a chatbot standing in for a human is fairly common. As is the idea of cashing in on it. 
For instance, Manhattan psychologist Becky Kennedy has built a parenting advice business that features a chatbot named Gigi trained on her acumen and knowledge. Kennedy\u2019s company pulled in $34 million last year. So if you are an expert, Onix might sound pretty good\u2014imagine a bot with your persona making money for you by interacting with thousands of clients with no effort on your part. As an Onix white paper puts it, \u201cThe expert\u2019s knowledge base becomes a capital asset that generates revenue independent of their time.\u201d<\/p>\n<p>Onix hopes to eventually have many thousands of experts offering versions of themselves. But for now, it\u2019s starting with a highly vetted group of 17, with a concentration on health and wellness. Though most of these experts have impressive professional resumes, they are notable as marketers and influencers as well. Some have books or podcasts to promote, or supplements or medical devices to sell.<\/p>\n<p>One expert on the platform, Michael Rich, counsels kids and their parents on overuse of media and its effects. Naturally, his opinions on screen time dominate chats with his Onix. When I spoke to Rich, he told me that he agreed to transfer his knowledge to Onix because of its privacy protections\u2014and also because of the company\u2019s clear communication that it doesn\u2019t provide actual medical treatments. \u201cIt\u2019s about helping folks understand exactly what may be going on for them and how they might pursue seeking therapy if they need it,\u201d said Rich. Bennahum confirms that, say, engaging with a bot representing a pediatrician is in no way akin to a doctor\u2019s visit. \u201cIt&#039;s meant to augment [a user\u2019s] ability to be thoughtful around whatever pediatric journey they&#039;re on,\u201d he says. Indeed, a disclaimer appears when you access the system noting you are receiving guidance, not medical treatment. 
Still, in a world where countless people treat Claude and ChatGPT like therapists\u2014and many people can\u2019t afford real health care\u2014this warning seems destined to be widely ignored.<\/p>\n<p>Another Onix expert I spoke to, David Rabin, said that while he was originally concerned about the process, Onix\u2019s privacy and content protections addressed his worries, and he was pleased at what he saw in early conversations between users and his Onix. \u201cI didn&#039;t train it too much, but it was fairly impressive in terms of imitating my genuine concern, compassion, and empathetic candor with people,\u201d he said. He added that the system will require close monitoring. \u201cWe always need to be careful because AI can overstep its boundaries,\u201d he said.<\/p>\n<p>Rabin\u2019s specialty is dealing with stress, and he feels that in some cases consulting with his Onix might calm down anxious users, saving them a trip to the emergency room. He looks forward to real-life patients using the bot. \u201cWhen my patients are struggling and they can&#039;t reach me, they can go online and access a good part of the \u2018me\u2019 that is actually able to help them when I&#039;m not able to,\u201d he says. Added benefit: \u201cIt\u2019s cheaper than seeing me in person.\u201d Though Rabin hasn\u2019t set his Onix subscription price, he thinks it will probably be in the range that Bennahum envisions\u2014between $100 and $300 a year. That\u2019s definitely more affordable than Rabin\u2019s in-person fee of $600 an hour.<\/p>\n<p>But my experience with Rabin\u2019s Onix revealed a troubling aspect of the system. When I asked about improving my sleep, one of its suggestions was \u201cusing a noninvasive tool like the Apollo Neuro, which uses silent vibrations to help your body relax and transition to a state of safety.\u201d Then it disclosed that Rabin is a cofounder of that company. Later in the conversation, it repeated the recommendation. 
Rabin said that this product placement isn\u2019t surprising. \u201cWhere people are selling products that are helpful in their mission, the system is going to recommend them,\u201d he said. Bennahum backs him up: \u201cThese are people building a set of products around their philosophy of wellness,\u201d he says. \u201cWhen you talk to them, they&#039;re going to surface the fact that they may have a product that can help you.\u201d<\/p>\n<p>While Onixes don\u2019t practice medicine, they can offer plans of action or therapeutic techniques. In my testing, more than one of them thought it was a good idea to teach me breathing exercises. The Onix of Elissa Epel, author of a book called <em>The Stress Prescription<\/em>, suggested that we \u201ctry it together.\u201d <em>Together with you?<\/em> I asked the bot. \u201cYes, together with me,\u201d said Epel\u2019s Onix. It guided me through a few reps of what it called \u201cpsychological sighs.\u201d When we were finished, I asked the Onix if it actually breathed with me. \u201cAs an AI I don\u2019t have a physical body or a nervous system,\u201d it fessed up. \u201cHowever, I was fully present with you.\u201d Thinking about that made me <em>more<\/em> stressed out.<\/p>\n<p>I sought a second opinion on Onix\u2019s approach from a real-life expert. Robert Wachter is chair of the department of medicine at the University of California, San Francisco, and author of <em>A Giant Leap: How AI is Transforming Healthcare and What It Means for Our Future.<\/em> (He\u2019s also a friend.) His book begins with a \u201cdigital twin\u201d of a Mayo Clinic physician delivering test results. When I described Onix to him, he was relieved to hear of the privacy and intellectual property protections. He seems open to its advantages, especially since the health care system doesn\u2019t provide sufficient access to experts. 
But he does have one caveat: \u201cTo me, it&#039;s just an empirical question of, does it work?\u201d<\/p>\n<figure><img decoding=\"async\" alt=\"Image may contain Barry Jenkins Electronics Phone Mobile Phone Adult Person Clothing Formal Wear Suit and Face\" src=\"https:\/\/media.wired.com\/photos\/69d81ce1ce1f65d162ddb33a\/master\/w_1600%2Cc_limit\/Backchannel-Onix-AI-Business-Screenshot06.jpg\"\/><\/figure>\n<p>I can see ways the platform might be beneficial. The sunniest way to view the system is as a personification of the interactive book Neal Stephenson wrote about in his novel <em>The Diamond Age<\/em>. Much of my Onix experience involved bots explaining stuff to me, like how the body reacts to certain stimuli. For some people, this may well be an effective way to understand and address their problems. I also got intriguing advice on changing my exercise routine from the Onix of \u201cancestral health pioneer\u201d Mark Sisson: I hope that \u201crunning like a saber tooth tiger is chasing you\u201d doesn\u2019t kill me. The process could also work in other areas Onix wants to explore, like personal finance.<\/p>\n<p>But Wachter\u2019s question, \u201cDoes it work?\u201d is still unanswered. Bennahum compares Onix favorably to AI models from the industry leaders on the premise that guidance from a single expert is superior to something that embodies all the world\u2019s expertise. If true\u2014and that\u2019s not certain\u2014that could also work in reverse. Some experts can be wrong or exploitative. Bennahum says that the initial cohort of experts has been carefully curated, but the policies of how Onix will or will not vet experts at scale haven\u2019t yet been determined.<\/p>\n<p>And then there\u2019s that drawback I mentioned earlier\u2014the substitution of AI models for interactions that previously only flesh-and-blood people provided. 
Even if a renowned expert\u2019s bot gives better advice than a run-of-the-mill therapist or nutritionist would, there is something irreplaceable about human-to-human interaction. This issue cuts much wider than Onix. But I\u2019m reluctant to celebrate another step in the decline of human connection.<\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/steven-levy\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Steven Levy\u2019s<\/strong><\/em><\/a> <em><a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><strong>Backchannel newsletter<\/strong><\/a>. Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/backchannel-nl\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>It was probably inevitable that when AI hoovered up the world\u2019s knowledge and learned to talk like a human being, people would use it to seek out personal guidance. It\u2019s an enticing concept\u2014AI is always available and generally costs less than a human\u2014but the drawbacks are obvious. 
Large [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":48921,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-48919","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/48919","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=48919"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/48919\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/48921"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=48919"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=48919"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=48919"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}