Notes from NAB 2026

Marin Smiljanic
Co-Founder & CEO
Published: April 30, 2026
Topic: Insights

The South Hall was dark this year.
If you have walked the Las Vegas Convention Center during NAB Show before, you know that the South Hall is usually a city of its own. Camera vendors, post-production booths, lighting rigs, the loud carpet that wraps around half the floor. This year it was closed. The show happened across the Central, North, and West Halls instead, and a lot of attendees noticed the absence before they noticed anything else.
The official explanation is renovation work at the convention center. The unofficial explanation has more to do with what the numbers tell us. Roughly 58,000 registered attendees showed up, up slightly from 2025 but well short of 2024. The drop was driven almost entirely by international visitors, whose share of the total fell from about 26% to 22%. Fewer countries were represented. The show physically contracted in part because the audience for it physically contracted.
You can read this two ways. The first is decline. The second, which I think is closer to right, is consolidation. The people who were in the room were the ones who really needed to be there. Vendor conversations were sharper. Buyer conversations were less performative. For four days, NAB felt focused in a way it has not in recent memory.
That focus showed up most clearly in how AI was discussed.

AI grows up
A year ago, AI at NAB was a brand. Every booth had a sign with the letters on it. Most demos were sketches. The serious vendors knew this and quietly hated it, because the noise made it harder to talk about the few systems that actually worked.
This year was different. AI was still everywhere, but it was no longer thrown around so cavalierly. Conversations had moved from "we use AI" to "here is what it does and where it fails." The use cases on display were narrow and concrete. Metadata tagging on ingest. Content search across long archives. Automated rough cuts. Detection of sensitive material before it goes to air. Speech-to-text with speaker diarization that actually held up on multi-mic field audio.
Agentic workflows made their debut. Not as keynote slides, but as live demos on the floor. A handful of vendors showed systems that took a natural-language brief, decomposed it into steps, called other tools, and produced an editable timeline at the end. These are still early. Some failed in front of audiences. The interesting fact is that they were attempted in public at all.
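The pattern those demos shared can be sketched in a few lines. Everything below is invented for illustration — the tool names, the planner, and the timeline shape are assumptions, not any vendor's actual API — but it shows the loop: take a brief, decompose it into steps, call a narrow tool per step, assemble an editable result.

```python
# Hypothetical sketch of the agentic pattern seen on the floor: a
# natural-language brief is decomposed into steps, each step is handed
# to a narrow tool, and the results are assembled into a timeline.
# All names here are invented stand-ins.

from dataclasses import dataclass


@dataclass
class Clip:
    asset_id: str
    start_s: float
    end_s: float


def search_archive(query: str) -> Clip:
    # Stand-in for a real retrieval tool; returns a deterministic dummy clip.
    return Clip(asset_id=f"asset-{abs(hash(query)) % 1000}", start_s=0.0, end_s=5.0)


def decompose(brief: str) -> list[str]:
    # Stand-in for an LLM planner: naively split the brief into shot requests.
    return [part.strip() for part in brief.split(";") if part.strip()]


def run_agent(brief: str) -> list[Clip]:
    # One pass of shallow autonomy: plan, call tools, assemble a timeline.
    return [search_archive(step) for step in decompose(brief)]


timeline = run_agent("city skyline at dusk; two people laughing in a kitchen")
print(len(timeline))  # 2 clips on the timeline
```

The systems that held up in public were exactly this shallow: one planning pass, narrow tools, no long autonomous chains.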
The shift is one of vocabulary as much as capability. A year ago everyone had AI. This year people had pipelines, with AI placed in specific spots where it earned its keep. That is what maturity looks like when it arrives at a trade show.

Avid and Gemini
The most consequential demo on the floor, in my opinion, came from Avid.
Media Composer is the flagship non-linear editor for the high end of the post-production market. It is the software people use to cut feature films, prestige television, and most major newsroom packages. Avid is rarely the first to add new features. When Avid integrates something deeply, it is a reasonable signal that the feature has crossed from experimental into operational.
At NAB 2026, Avid announced a multi-year partnership with Google Cloud and showed a working integration of Gemini and Vertex AI inside Media Composer and Avid Content Core. The headline capabilities are intelligent metadata enhancement, automated logging, B-roll suggestion, and natural-language search across the entire content store. An editor types or speaks a description of the shot they want. A wide of a city skyline at dusk. Two people laughing in a kitchen. A particular athlete celebrating. The system retrieves the candidates from the archive. Visual content, dialogue, and emotional tone are all addressable as a query.
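The retrieval mechanism underneath this kind of feature can be illustrated with a toy: embed clip descriptions and queries into a shared vector space, then rank by cosine similarity. To be clear, this is not Avid's or Google's implementation — real systems use learned multimodal embeddings over the video itself — the bag-of-words stand-in below only shows the shape of the pipeline.

```python
# Toy semantic search: embed text into a vector space and match a query
# to the nearest clip by cosine similarity. A bag-of-words Counter stands
# in for a real multimodal embedding model.

import math
from collections import Counter


def embed(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Invented example archive: clip IDs mapped to shot descriptions.
clips = {
    "clip_001": "wide shot of a city skyline at dusk",
    "clip_002": "two people laughing in a kitchen",
    "clip_003": "athlete celebrating after a goal",
}
index = {cid: embed(desc) for cid, desc in clips.items()}


def search(query: str) -> str:
    q = embed(query)
    return max(index, key=lambda cid: cosine(q, index[cid]))


print(search("skyline at dusk"))  # clip_001
```

Swap the toy embedding for a learned one and the index for a vector database, and this is roughly the architecture every semantic-search demo on the floor shared.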
I have written before that the unsolved problem in video is retrieval, not generation. Watching an editor pose questions to a Media Composer timeline and get back the right clip in seconds is, for anyone working on this problem, a quiet vindication. The thing that takes weeks of manual scrubbing in a typical newsroom is starting to take seconds inside the tool the newsroom already lives in.
There is a caveat. A Gemini-backed system inside Media Composer is, by definition, cloud-dependent and generalist. It will work well for organizations that can move content into Google Cloud and that find Gemini's defaults appropriate for their material. For broadcasters with sensitive footage, defense work, healthcare, or sports rights that cannot leave a closed environment, this will not be the right answer. The fact that the largest editor on the market is now shipping with deep semantic search built in does not solve the search problem for everyone. It does, however, end the debate about whether the problem is worth solving.

The Riedel surprise
The biggest non-AI story of the week was an acquisition.
Thomas Riedel, the founder of Riedel Communications, has acquired ARRI, the Munich-based maker of cinema cameras and lighting that has shaped the look of feature film for nearly a century. The deal was announced just before NAB and was the subject of more hallway conversation than anything else on the show floor.
It is a curious pairing on the surface. Riedel built his career in broadcast comms and live production infrastructure: intercoms, signal transport, networked control. ARRI builds the cameras and lights you see in the credits of half the films you have watched in the last decade. The two companies operate at different ends of the value chain.
The pairing makes more sense once you read it as a bet on integrated systems. Live entertainment and sports production are increasingly continuous workflows where capture, transport, and control all need to behave as a single fabric. A cinema camera that speaks fluently to a live-production network is a different kind of product than a cinema camera that ships with a manual. ARRI keeps its team and its headquarters in Munich. Riedel gets a flagship capture device for the integrated ecosystem he has been building for years. The Eurovision Song Contest, where Riedel is the technology provider, was named as the first joint outing.
I suspect this acquisition will look more important in five years than it does now. It is the moment a piece of cinema-grade hardware stopped being a standalone tool and joined a broadcast ecosystem. Other camera makers will have to decide whether to follow.

The story nobody put on a billboard
One thing that did not get the coverage it deserved was the IBC Incubator project around what is being called the Story Object Model. The participants are AP, NBCUniversal, ITN, and the BBC, with technical partners across the industry. The goal is an open standard for representing story context as it moves through the production pipeline, from newsgathering to editing to distribution.
In information-retrieval terms, this is an attempt to make the story itself the unit of metadata, rather than the asset. If that succeeds, it changes what agentic systems can do in news production, because agents can finally reason about a story as a coherent object instead of stitching together loose files and free-text descriptions.
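To make the idea concrete, here is what a story-as-metadata-unit could look like as a data structure. The schema is purely illustrative — the actual Story Object Model is still being defined by the incubator group — but it captures the shift: one object that links assets, subjects, and lineage, instead of free text scattered across sidecar files.

```python
# Illustrative only: a hypothetical shape for a "story object" in which
# the story, not the media asset, is the unit of metadata. Field names
# are invented; the real standard may differ entirely.

from dataclasses import dataclass, field


@dataclass
class StoryObject:
    story_id: str
    headline: str
    subjects: list[str] = field(default_factory=list)   # people, places, topics
    assets: list[str] = field(default_factory=list)     # linked media asset IDs
    lineage: list[str] = field(default_factory=list)    # sources and prior revisions


story = StoryObject(
    story_id="story-2026-0412",
    headline="Flood response in the river valley",
    subjects=["flooding", "emergency services"],
    assets=["ingest/cam-a/0012.mxf", "archive/2019-flood-package"],
)
story.lineage.append("wire/ap/brief-8841")
print(len(story.assets))  # 2
```

An agent handed this object can reason about the story's subjects, its source material, and its history in one pass, which is precisely what loose files and free-text descriptions prevent.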
It is early. Open standards in media have a mixed track record. But this is the kind of infrastructure work that, if it sticks, becomes the substrate other things are built on. Worth watching.

Trade-offs
A few notes of caution after a week on the floor.
First, the practical AI demos worked because they were narrowly scoped. Pull any of them out of their tested workflow and they become unreliable in ways that matter. Vendors are getting better at being honest about this. Buyers should still ask hard questions about edge cases.
Second, the agentic systems that worked best were the ones with shallow autonomy. The systems that promised long autonomous chains of decisions tended to fail more publicly. The lesson, again, is that maturity here looks like specificity. An agent that does one thing well across a long tail of inputs is more useful than one that pretends to do everything.
Third, the international decline at NAB is worth taking seriously. A more domestic show is easier to navigate, but the trade show as an institution depends on global flow. Whether 2026 is a blip or a trend will matter for what NAB looks like in 2028.
To conclude
NAB 2026 was smaller, denser, and more focused than recent shows. AI moved from slogan to plumbing. Avid's Gemini integration was, for me, the clearest signal that semantic search inside video is now table stakes at the high end of production. The Riedel acquisition of ARRI hinted at how the hardware side of media is consolidating around integrated systems. The Story Object Model, if it survives, may end up mattering more than any product on the floor.
The show contracted in size and gained in signal. That is not a bad trade.
- Marin