{"id":49761,"date":"2026-04-29T18:41:22","date_gmt":"2026-04-29T18:41:22","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/sanctioned-chinese-ai-firm-sensetime-releases-image-model-built-for-speed\/"},"modified":"2026-04-29T18:41:22","modified_gmt":"2026-04-29T18:41:22","slug":"sanctioned-chinese-ai-firm-sensetime-releases-image-model-built-for-speed","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/sanctioned-chinese-ai-firm-sensetime-releases-image-model-built-for-speed\/","title":{"rendered":"Sanctioned Chinese AI Firm SenseTime Releases Image Model Built for Speed"},"content":{"rendered":"<p>SenseTime, a Chinese AI company best known for its facial recognition technology, released a new open source model on Tuesday that it claims can both generate and interpret images far faster than top models developed by US competitors. SenseNova U1 could help the company reclaim lost ground after it slipped from its place among the leading players in China\u2019s AI development race.<\/p>\n<p>The model\u2019s secret sauce is its ability to \u201cread\u201d images without translating them to text first, speeding up the process and reducing the amount of computing power required. \u201cThe model\u2019s entire reasoning process is no longer limited to text. It can reason with images as well,\u201d Dahua Lin, cofounder and chief scientist at SenseTime, said in an interview with WIRED.<\/p>\n<p>Lin, who is also a professor of information engineering at the Chinese University of Hong Kong, says that models capable of processing images directly will enable robots to better understand the physical world in the future.<\/p>\n<p>Like DeepSeek&#039;s latest flagship model, SenseTime says U1 can be powered by Chinese-made chips. \u201cSeveral Chinese domestic chipmakers have finished optimizing compatibility with our new model,\u201d Lin says. 
On release day, 10 Chinese chip designers, including Cambricon and Biren Technology, announced that their hardware supports U1.<\/p>\n<p>That flexibility matters because US export controls restrict Chinese firms from accessing the world&#039;s most advanced AI chips, particularly those used for training, which at this point are primarily developed by Western companies like Nvidia. \u201cWe will continue to push for training on more different chips,\u201d Lin says. But he also acknowledges that SenseTime \u201cmay still need to use the best chips to ensure the speed of our iteration.\u201d<\/p>\n<p>SenseTime released U1 for free on Hugging Face and GitHub, another sign of how Chinese companies are becoming some of the most active contributors to open source AI.<\/p>\n<p>SenseTime was founded in 2014 and became a world leader in computer vision, which is used in applications like facial recognition and autonomous driving. But when ChatGPT and other AI systems powered by natural language processing became the hottest thing in the tech industry, SenseTime began struggling to turn a profit and fell behind newer Chinese startups like DeepSeek and MiniMax.<\/p>\n<p>SenseTime says it hopes that releasing SenseNova U1 publicly for anyone to use will help it catch up with both domestic and Western AI players. Lin says the company finally decided last year to focus on open source because the feedback it gets from researchers enables it to iterate faster. \u201cIn this day and age, being open source or closed source is not the winning factor; the speed of iteration is,\u201d Lin explains.<\/p>\n<p>Going open source also helps SenseTime continue collaborating with international researchers without the interference of geopolitics. 
The company has been sanctioned repeatedly by the US government in recent years over allegations that its facial recognition technology helped power surveillance systems used to monitor and detain Uyghurs and other minority groups in China\u2019s Xinjiang region. As a result, US firms are restricted from investing in SenseTime and selling certain technologies to it without a license. (SenseTime has denied the allegations.)<\/p>\n<figure><img decoding=\"async\" alt=\"A sample image created using SenseNova U1\" src=\"https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_1600%2Cc_limit\/Zhang%2520Mingyuan.jpeg\"\/>\n<p>A sample image created using SenseNova U1. <strong>Generated using AI<\/strong><\/p>\n<\/figure>\n<h2>Seeing Clearly<\/h2>\n<p>In an accompanying technical report, SenseTime claims that SenseNova U1 generates higher-quality images than all other open source models currently on the market. Its performance is comparable to leading Chinese closed source models like Alibaba\u2019s Qwen and ByteDance\u2019s Seedream, but it still lags behind industry leaders like GPT-Image-2.0, which came out just a week ago.<\/p>\n<p>But the model\u2019s main selling point is its ability to generate images much faster than all of those models. It relies on an innovative architecture called NEO-Unify that SenseTime previewed earlier this year.<\/p>\n<p>The model\u2019s new architecture, which could improve efficiency and performance, is what sets U1 apart, says Adina Yakefu, an AI researcher at Hugging Face. \u201cThis is a more ambitious approach, though it still faces significant practical challenges,\u201d she says. 
\u201cIt\u2019s good that they decided to open source it, so the community can explore and test it more widely.\u201d The model is also small enough to run on PCs and phones, making it potentially useful in many scenarios.<\/p>\n<p>Lin says the technique SenseTime developed will be especially useful in robotics. When a robot tries to process the visual world, it needs to sort through an enormous amount of information. \u201cIt has to think, \u2018How should I deal with all the clutter in this room? If there is a complicated machine in front of me, which button should I press?\u2019 All of these are forms of information, and they need to be integrated into the model\u2019s internal judgment,\u201d he says. Because the model can understand images natively, Lin is hopeful that SenseTime\u2019s technology will help robots act faster and make fewer mistakes in complex environments.<\/p>\n<p>China is in the midst of a humanoid robot boom. While SenseTime doesn\u2019t currently develop its own robots, Lin says it is working closely with ACE Robotics, a startup led by another SenseTime cofounder. The company is also developing models that specialize in geospatial understanding, or creating simulations of the real world.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>SenseTime, a Chinese AI company best known for its facial recognition technology, released a new open source model on Tuesday that it claims can both generate and interpret images far faster than top models developed by US competitors. 
SenseNova U1 could help the company reclaim lost ground after [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":49763,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-49761","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/49761","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=49761"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/49761\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/49763"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=49761"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=49761"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=49761"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}