Opinion: At AI conferences, I spend a large portion of my time talking about the commercial and customer transformation of AI. But here’s what I don’t put on the slide.

What keeps me up about AI in 2026 is not superintelligence. Not hallucinations. Not the absence of a regulatory framework, though that is its own disaster in slow motion. The projects I worry about most have already left the debate stage. They are infrastructure now: quietly, permanently and without anyone voting on them.
I call this the normalisation stack. Not one system. Not one company. A set of interlocking projects that, taken together, are reshaping war, identity, trust and political persuasion faster than any government, regulator or boardroom can respond. None of them require artificial general intelligence to be dangerous. They just need to ship.
Here are five that deserve your attention right now.
1. Anduril’s Autonomous Weapons Factory
Start here: in September 2025, Australia signed an A$1.7 billion contract with Anduril Industries for Ghost Shark, an extra-large autonomous undersea vehicle designed for intelligence, surveillance and strike operations.
Which makes what is happening in Ohio worth paying attention to.
Arsenal-1 — five million square feet across 500 acres in Pickaway County — began serial production of the YFQ-44A Fury unmanned combat aircraft in March 2026. The Fury is designed to fly alongside crewed fighters, making decisions at speeds no human can match. The Lattice software platform running underneath processes 2.4 terabytes of sensor data daily across 100-plus installations. The Roadrunner counter-drone system went from concept to combat-validated in under two years. A traditional procurement equivalent would take a decade.
But the factory is not the story. The business model is.
Anduril is building software-defined warfare the way a tech company ships a product: iterate fast, push updates, scale hardware. Palmer Luckey, the company’s founder, has said his goal is to turn America and its allies into “prickly porcupines so that no one wants to step on them.”
The question this raises is not whether autonomous weapons are ethical in the abstract. It is whether democratic oversight can keep pace when the production model is designed to outrun it. We just wrote an A$1.7 billion cheque to find out.
2. The AI Bioweapon Nobody Is Governing
In 2022, researchers at Collaborations Pharmaceuticals inverted the safety parameters of an AI drug-discovery model and asked it to generate toxic molecules instead of therapeutic ones. In six hours, it produced 40,000 candidates, including several assessed as more lethal than anything in existing arsenals. They published the results as a warning. The tools they used are commercially available.
That was three years ago. The models have not stood still.
RAND, the Bulletin of the Atomic Scientists and Nature have all flagged AI-enabled bioweapons as the leading edge of AI risk in 2026.
Not because rogue states need new tools (they already have programmes), but because the barrier to entry has collapsed. Foundation models can now identify viable pathogen modifications, suggest synthesis routes and, in documented cases, provide step-by-step guidance that bypasses safety filters designed for conventional misuse. A motivated individual with a biology degree and API access is operating in a categorically different threat environment than they were eighteen months ago.
What makes this the hardest item on this list is that there is no democratic debate to outrun and no procurement cycle to accelerate. The capability is already in deployed models. The governance frameworks under discussion — compute thresholds, biosecurity red lines, export controls on frontier models — are running twelve to eighteen months behind where the technology is.
That gap is not a policy problem. It is an infrastructure problem. Which is exactly why it belongs on this list.
3. The Disinformation Swarm
The next stage of political manipulation is not one deepfake. It is thousands of them, running simultaneously.
In October 2024, a manipulated audio clip of Kamala Harris's voice was reposted by Elon Musk without a disclaimer and reached 129 million views. Days before Slovakia's election, an audio deepfake of a leading candidate allegedly discussing vote-rigging circulated online. In March 2026, the US National Republican Senatorial Committee released what was described as the first extended AI deepfake of a named electoral candidate, Texas Senate nominee James Talarico, speaking in his own voice.
In Australia, Finance Minister Katy Gallagher and Foreign Minister Penny Wong have both appeared in deepfake investment scam videos. A deepfake of Queensland Premier Steven Miles was released during a state campaign. And Senator David Pocock — trying to ring the alarm, not cause harm — commissioned deepfakes of Anthony Albanese and Peter Dutton to demonstrate how easily it could be done.
A USC study published in March 2026 confirmed what many had suspected: AI agents can now autonomously coordinate propaganda campaigns without human direction. Research from MIT found that GPT-4 is 82 per cent more persuasive than a human when given a target’s background information. Personalised AI political messaging can now reach every registered US voter for under a million dollars.
Persuasion has been industrialised.
The real problem is not that people will believe the fake. It is that they will stop believing the real. Once that happens, every piece of authentic footage becomes contestable. That is not a content moderation problem. It is a structural one — and our political advertising laws were not written for it.
4. Meta’s Glasses That Quietly Train the Machine
In October 2024, two Harvard students, AnhPhu Nguyen and Caine Ardayfio, combined Ray-Ban Meta smart glasses with a facial recognition engine and a large language model. They called it I-XRAY. They pointed the glasses at strangers. Within seconds: full name, home address, phone number, family connections — all pulled from public data. The demo video was watched more than 20 million times. They deliberately did not release the code. The point was not to build a product. It was to show what the components already made possible.
Meta’s own Project Aria Gen 2 captures first-person video, audio, eye-tracking and spatial data, feeding research into egocentric AI — systems that understand the world from the wearer’s perspective. The consumer Ray-Ban Meta glasses add another layer: users analyse what they see through Meta AI, contributing that content to Meta’s training pipeline.
In March 2026, a class action lawsuit alleged that Meta was routing footage, including scenes from bathrooms and other intimate settings, to subcontractors in Kenya for manual labelling, without users' knowledge.
And in February 2026, reporting surfaced Meta’s internal work on “Name Tag”: real-time facial recognition that would identify people in the wearer’s field of view and surface their information through the glasses.
I will be honest: these are genuinely impressive pieces of engineering. That is precisely why they warrant scrutiny. The most consequential AI systems are not the ones people resist. They are the ones people want to wear.
5. The Reputation Economy Nobody Notices
The final entry is the least cinematic and probably the most important. And it does not involve weapons, surveillance glasses or deepfake videos. It involves a score: one that exists about you right now, in systems you have never seen, shaping decisions you will never be shown.
Most Australians remember Robodebt, the automated debt recovery system that sent incorrect notices to more than 400,000 welfare recipients, matching income records through an algorithm that turned out to be fundamentally flawed. The Royal Commission found the scheme unlawful. It had been running for years. The people it targeted had no meaningful way to understand how the decision had been made, no clear mechanism to contest it, and in some cases no idea a debt existed until a letter arrived demanding repayment. That was a government system. The private sector version is quieter, and in some ways harder to unwind.
Oxford academic research has documented that banks adjust lending terms based on social connectedness, using data the borrower never knew was relevant. HireVue, used by more than a third of Fortune 500 companies, analyses video interviews for facial expressions, word choice and vocal patterns to produce a candidate score. Most applicants have no idea the system exists, let alone how it rated them. Insurance companies are increasingly drawing on purchasing data, location history and device behaviour from third-party data brokers, sources you have never heard of, feeding decisions you will never see explained.
The language is always "trust and safety". The effect is access mediated by opaque scoring systems that are difficult to see, harder to challenge and almost impossible to opt out of.
We have built a reputation economy that operates at scale, in real time, with almost no public accountability, and we have done it without a single piece of landmark legislation. Robodebt required a Royal Commission to unwind. Most of these systems will not get one.
Why These Matter Together
What connects these five projects is not ideology. It is infrastructure. Not one of them is a speculative prototype in a research lab. Every one is being built, shipped, tested or scaled right now. Together, they form the normalisation stack — the systems that will determine how war is produced, how people are scored, how elections are manipulated, how public space is surveilled and how access is granted or denied. Palantir maps the targets. Anduril builds the weapons. The infrastructure is already connected, and this list could have been longer.
The biggest AI risk in 2026 is not that machines become human. It is that these systems become ordinary before anyone with the authority to govern them has asked three basic questions: who do they serve, what do they collect, and how hard are they to stop?
If you sit on a board, run a government department, or lead a company that touches any of these categories — which of these five could you explain to your stakeholders today? And if the answer is none of them, that is its own answer.
Lucio Ribeiro is a technology and AI leader specialising in the application of artificial intelligence across marketing, media, and business. He is a Forbes Australia contributor, has been recognised by Marketing Today as one of the world’s most influential online marketers, and holds executive certification in artificial intelligence from MIT.