<h1>OpenAI launches GPT-5.3-Codex-Spark for ultra-fast real-time coding</h1>
<p><em>February 14, 2026</em></p>
<p><img decoding="async" src="https://dataconomy.com/wp-content/uploads/2026/02/1121140.jpg" alt="OpenAI launches GPT-5.3-Codex-Spark for ultra-fast real-time coding" title="OpenAI launches GPT-5.3-Codex-Spark for ultra-fast real-time coding"/></p>
<p>On Thursday, OpenAI announced GPT-5.3-Codex-Spark, a lightweight version of its agentic coding tool Codex, which the company launched earlier this month. Powered by Cerebras' Wafer Scale Engine 3 chip, Spark delivers faster inference and marks the first milestone in OpenAI's multi-year partnership with Cerebras.</p>
<p>The original GPT-5.3-Codex model handles longer, heavier tasks that require deeper reasoning and execution, while GPT-5.3-Codex-Spark focuses on swift operations. OpenAI describes it as a smaller model designed specifically for low-latency inference. Spark also brings Cerebras hardware directly into OpenAI's physical infrastructure, deepening the collaboration between the two companies.</p>
<p>OpenAI and Cerebras revealed their partnership last month through a multi-year agreement valued at over $10 billion. At the time, OpenAI stated, "Integrating Cerebras into our mix of compute solutions is all about making our AI respond much faster." The company now positions Spark as the first achievement of that alliance, emphasizing its role in accelerating AI responses.</p>
<p>Cerebras' Wafer Scale Engine 3 powers Spark's inference. The third-generation wafer-scale megachip contains 4 trillion transistors, enabling high-performance computing tailored to AI workloads. OpenAI pitches Spark as a daily productivity driver suited to real-time collaboration and rapid prototyping, in contrast to the extended computations handled by the base GPT-5.3-Codex model.</p>
<p>Spark is the lowest-latency option in Codex. OpenAI explains its purpose in an official statement: "Codex-Spark is the first step toward a Codex that works in two complementary modes: real-time collaboration when you want rapid iteration, and long-running tasks when you need deeper reasoning and execution." Cerebras' chips support workflows that demand extremely low latency.</p>
<p>For now, Spark is available as a research preview exclusively to ChatGPT Pro subscribers within the Codex app. Ahead of the announcement, OpenAI CEO Sam Altman hinted at the release on Twitter, posting, "We have a special thing launching to Codex users on the Pro plan later today." He added, "It sparks joy for me."</p>
<p>Cerebras, founded over a decade ago, has gained prominence in the AI sector. Last week, the company secured $1 billion in fresh capital at a valuation of $23 billion, and it has indicated plans to pursue an initial public offering. Sean Lie, CTO and co-founder of Cerebras, commented on the launch: "What excites us most about GPT-5.3-Codex-Spark is partnering with OpenAI and the developer community to discover what fast inference makes possible — new interaction patterns, new use cases, and a fundamentally different model experience." Lie described the preview as "just the beginning."</p>
<p><strong>Featured image credit</strong></p>