IT Brief New Zealand - Technology news for CIOs & IT decision-makers

Altman sets out OpenAI roadmap for future AI co-workers

Wed, 28th Jan 2026

OpenAI chief executive Sam Altman has told builders and developers that the nature of software work is changing as AI systems take on more of the coding and debugging, while companies face growing pressure to rethink hiring, security and go-to-market strategies.

Software jobs

Altman said the role of software engineers is moving away from hand-writing syntax towards directing powerful general models that can generate code on demand and customise tools for each user.

"I think what it means to be an engineer is going to super change... the shape of that job and the amount of time you spend like typing code or debugging code... is going to very much change," said Sam Altman, Chief Executive, OpenAI.

He described a future in which many people use software written for one person or a small group, and continuously adapt their own tools instead of relying on static, mass-market applications. That shift, he argued, will increase rather than reduce demand for people who can tell computers what to do, even as individual projects become cheaper and faster to build.

GTM pressure

Altman told the audience that building products has become much easier with AI, while gaining user attention has become the central bottleneck. He described human attention as a finite commodity in an environment where many companies can quickly assemble similar tools.

He urged founders to focus on durable advantages such as sticky products, strong distribution and defensible integrations. Features that sit too close to model capabilities risk being displaced when new AI releases arrive, he said.

Idea quality

Altman addressed rising criticism of AI-generated content quality and the risk of so-called "slop". He said the real constraint for many teams lies in the quality of the ideas they pursue, rather than in the ability to produce outputs.

"It's popular to call AI output 'slop,' but there's a lot of like human-generated slop in the world too," said Altman.

"I am increasingly a believer that we think at the limits of our tools... If we can build tools to help people come up with good ideas... that is going to be a very significant contribution," said Altman.

He said OpenAI is seeing scientists use advanced models to generate research directions and explore many branches of a problem before choosing where to go deeper, treating AI as a pool of "unlimited post-docs" that can support breadth-first exploration.

Model roadmap

Altman acknowledged criticism that recent OpenAI models have prioritised coding ability and tool use over writing quality. He said some versions of GPT-5 have been strong on reasoning and engineering but less readable for extended prose.

He told attendees that OpenAI intends to move back towards more general-purpose models that combine strong coding performance with clear, concise writing. Intelligence, he said, should be "fungible" across tasks such as reasoning, communication and software generation, and future systems should meet a high bar on each.

Cost and speed

Altman said OpenAI is targeting a sharp reduction in the price of accessing high-level intelligence, alongside major gains in speed. He suggested that by the end of 2027, systems at the level of the GPT-5.2 family could become roughly 100 times cheaper to run.

At the same time, he said many customers now push harder on latency than on price, especially when they depend on agents that must respond in near real time. He described cost and speed as separate optimisation problems that OpenAI will have to balance.

Agents and micro-apps

Altman predicted a shift in user expectations around software interfaces. Rather than waiting for packaged applications to update, he expects people to rely on agents that write small, task-specific tools or "micro-apps" on the fly.

He said OpenAI has seen internal staff use agentic tools to adapt workflows in highly individual ways, with each person maintaining their own set of scripts and interfaces. He described this as evidence that software will become more personalised and dynamic as models improve.

Security and bio-risk

OpenAI is spending increasing time on security and safety, particularly in biosecurity, Altman said. He argued that efforts to block access or filter harmful prompts will not scale indefinitely as powerful models spread.

"The shift that I think the world needs to make for AI security generally... is to move from one of blocking to one of resilience," said Altman.

"We need like a society-wide effort... to provide the infrastructure for this resilience, not labs that we trust to sort of always block what they're supposed to block," said Altman.

He said AI systems can also support the defensive side, for example by helping researchers design countermeasures and improve detection, but warned that governance and infrastructure will need to adapt.

AI co-workers

Altman told the audience that OpenAI plans to slow its own headcount growth because it expects smaller teams to achieve more with AI tools. He said many companies will face similar decisions as AI co-workers take on larger portions of knowledge work.

"We are planning to dramatically slow down how quickly we grow... because we think we'll be able to do so much more with fewer people," said Altman.

"If companies don't adopt AI aggressively... they will eventually just be outcompeted by a fully AI company that doesn't have to have the sort of silly policies that prevent big companies from using AI," said Altman.

Personalisation and identity

Altman said OpenAI is working on a "Sign in with ChatGPT" feature that would let users bring their accounts, model access and, over time, more context into third-party applications. He said deeper personalisation will require careful handling of memory and privacy.

He told attendees he now feels comfortable letting an AI system access most of his digital life to provide better assistance, while still drawing the line at always-on recording devices. He said any wider roll-out of shared memory across services will need robust security safeguards.

Skills for the future

Altman played down advice to "learn to code" as a central career instruction for young people in an AI-heavy economy. He said softer capabilities will matter more as models take over much of the syntax-heavy work.

"Become high agency, get good at generating ideas, be very resilient, be very adaptable to a rapidly changing world... I think these are going to matter more than any specific [syntax]," said Altman.