The halls of the Venetian in Las Vegas were humming with the electric energy of 14,000 attendees from over 60 countries, all gathered to witness the dawn of a new era in creative technology. At the center of this whirlwind was Oscar Vail, a technology expert whose deep understanding of quantum computing and robotics gives him a unique vantage point on the “agentic” revolution sweeping through enterprise software. As Adobe Summit 2026 unfolded, Vail sat down to dissect the seismic shifts in how brands communicate, moving beyond simple automation into a world of living canvases and self-assembling user experiences. Our conversation explores the pivot toward extreme personalization, the technical architecture of real-time web generation, and the fundamental restructuring of creative roles in an age when the demand for content is skyrocketing.
This discussion covers the rapid evolution of content supply chains, where the pressure to stay relevant on platforms like TikTok is forcing a move toward automated asset amplification. We delve into the mechanics of agentic systems that interpret human intention rather than just executing commands, the rise of “living canvases” that bridge the gap between working documents and live web exports, and the emergence of simulated A/B testing that predicts performance before a single dollar is spent. Vail also provides a nuanced take on the future of the workforce, arguing that while AI may touch every facet of a professional’s day, it serves to amplify the human story rather than replace the storyteller.
Content demand is projected to rise fivefold over the next two years, yet quality must remain high to avoid what industry leaders call “AI slop.” How do agentic systems help brands manage this volume, and what specific steps ensure that the resulting output stays strictly on-brand?
The sheer scale of a fivefold increase in content demand is enough to paralyze even the most seasoned marketing department, creating a desperate need for what David Wadhwani calls an end-to-end, agentic, AI-powered system. To avoid the trap of “AI slop,” the generic, soulless content that merely adds noise to the digital landscape, brands are integrating “Brand Intelligence” layers that learn the specific nuances of a company’s voice and visual identity in real time. We are seeing companies like P&G lead the way, using these systems to handle overnight content iteration and ensuring that every asset produced aligns with a centralized brand kit. This isn’t just about speed; the system applies “Enterprise Context” to every natural-language prompt, meaning the AI understands the legal, cultural, and aesthetic boundaries of the brand before it even suggests a draft. By implementing these agentic workflows, a brand can maintain its soul while the machine handles the heavy lifting of versioning and localization, ensuring that 5x growth in volume doesn’t result in 5x dilution of quality.
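To make that pipeline concrete, here is a minimal Python sketch of how an “Enterprise Context” layer might wrap a prompt before generation and gate the draft afterward. Everything here, from the BrandContext fields to the apply_enterprise_context and passes_brand_check functions, is a hypothetical illustration; Adobe has not published the internals of this system.

```python
from dataclasses import dataclass

@dataclass
class BrandContext:
    """Hypothetical brand-kit record an agentic system might consult."""
    voice: str = "warm, direct, no jargon"
    palette: tuple = ("#0D3B66", "#FAF0CA", "#F4D35E")
    banned_phrases: tuple = ("revolutionary", "game-changing")
    legal_footer: str = "Offer valid in the US only."

def apply_enterprise_context(prompt: str, ctx: BrandContext) -> str:
    """Augment a raw natural-language prompt with brand constraints
    before it ever reaches the generative model."""
    return (
        f"{prompt}\n"
        f"Write in this voice: {ctx.voice}. "
        f"Use only these colors: {', '.join(ctx.palette)}. "
        f"Never use the phrases: {', '.join(ctx.banned_phrases)}. "
        f"Always append: '{ctx.legal_footer}'"
    )

def passes_brand_check(draft: str, ctx: BrandContext) -> bool:
    """Cheap post-generation gate: reject drafts that slip in banned language."""
    return not any(p in draft.lower() for p in ctx.banned_phrases)

if __name__ == "__main__":
    ctx = BrandContext()
    print(apply_enterprise_context("Draft a holiday banner headline.", ctx))
    print("passes check:", passes_brand_check("A revolutionary new detergent!", ctx))
```

The design point is that the context injection happens upstream and the compliance gate downstream, so volume can scale while the brand rules stay in one place.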
AI is shifting from a background tool to a comprehensive user experience layer that interprets human intentions. How does this change the way a creative professional interacts with complex software, and what specific workflows allow a novice to achieve expert-level results?
We are witnessing a fundamental shift where AI becomes a “UX layer” that sits directly on top of the intelligence and tools, effectively acting as a translator between human thought and technical execution. Jensen Huang pointed out a hilarious but profound truth when he admitted to knowing only about 7% of Photoshop’s total capabilities; in the past, that would have limited his creative output, but now the agentic system fills that 93% gap by understanding his intentions. For a novice, this means they can use tools like “Project Test Kitchen” to sketch a rough idea or provide a visual reference, and the system generates nine distinct, high-fidelity paths to follow. Instead of wrestling with layers, masks, and complex filters, the creator engages in a conversational experience with the software, asking it to “harmonize” shadows or “rotate” flat objects into 3D models with simple prompts. This transition lets the professional focus on the “what” and the “why” of a project while the agentic system handles the “how,” effectively democratizing high-end production and making the software’s complexity invisible to the user.
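As a toy illustration of that translator role, the sketch below maps a conversational request onto a concrete editing operation, the way an agentic UX layer might. The intent table, operations, and keyword matching are invented stand-ins; a real system would interpret intent with a language model rather than string matching.

```python
# Hypothetical intent router: a stand-in for the agentic "UX layer"
# that turns plain-language requests into expert-level tool calls.

def harmonize_shadows(doc: dict) -> str:
    return f"Balanced shadow luminance across {len(doc['layers'])} layers."

def rotate_to_3d(doc: dict) -> str:
    return "Lifted flat object into an editable 3D model."

INTENTS = {
    "harmonize": harmonize_shadows,   # "harmonize the shadows"
    "rotate": rotate_to_3d,           # "rotate this into 3D"
}

def handle_request(request: str, doc: dict) -> str:
    """Pick the first known intent mentioned in the request.
    A production system would use an LLM to interpret intent instead."""
    for keyword, action in INTENTS.items():
        if keyword in request.lower():
            return action(doc)
    return "Intent not recognized; asking a clarifying question."

if __name__ == "__main__":
    doc = {"layers": ["background", "subject", "text"]}
    print(handle_request("Can you harmonize the shadows here?", doc))
```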
Systems can now generate user-specific web pages in real-time based on observed behavior rather than manual prompts. What are the technical hurdles in assembling these pages instantly, and how do companies balance this automation with the need for human-led creative direction?
The technical feat of “Project Page Turner” is remarkable because it moves away from static templates to a world where a page is assembled in the time it takes to load, driven entirely by user intent. The primary hurdle is the orchestration of live data and brand assets within milliseconds; the system must observe a visitor’s behavior patterns, interpret their goal, and then pull from a library of “Brand Intelligence” to construct a layout that feels intentional. To keep this from going off the rails, humans act as the architects of the “Enterprise Context,” setting the high-level rules and visual boundaries the AI must respect during real-time assembly. It is a delicate dance in which the machine provides “maximum personalization” for customer retention while human-led creative direction provides the “North Star” that ensures the page still feels like it belongs to the brand. This process effectively turns the website into a living organism that adapts to each individual visitor while still carrying the unmistakable DNA of the company’s creative vision.
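Here is a rough sketch of that assembly loop, under the assumption that it works broadly as described; none of “Project Page Turner’s” internals are public, so the asset library, intent model, and rule set below are all hypothetical. The flow is: observe a behavior signal, infer the visitor’s goal, pull matching assets, and compose a layout inside fixed brand rules.

```python
import time

# Hypothetical asset library keyed by inferred visitor intent.
ASSET_LIBRARY = {
    "compare_pricing": {"hero": "pricing_table.png", "cta": "See plans"},
    "learn_product":   {"hero": "feature_tour.mp4", "cta": "Watch the tour"},
}

BRAND_RULES = {"max_ctas": 1, "font": "BrandSans"}  # human-set boundaries

def infer_intent(clicks: list[str]) -> str:
    """Stand-in for a behavioral model: map recent clicks to a goal."""
    return "compare_pricing" if "pricing" in clicks else "learn_product"

def assemble_page(clicks: list[str]) -> dict:
    start = time.perf_counter()
    intent = infer_intent(clicks)
    assets = ASSET_LIBRARY[intent]
    page = {
        "layout": ["hero", "cta"],      # composed per visitor, not templated
        "hero": assets["hero"],
        "cta": assets["cta"],
        "font": BRAND_RULES["font"],    # the brand rule always wins
    }
    page["assembly_ms"] = (time.perf_counter() - start) * 1000
    return page

if __name__ == "__main__":
    print(assemble_page(["home", "pricing", "faq"]))
```

The human-set BRAND_RULES dictionary is the “North Star” in miniature: the machine chooses the content, but the boundaries it composes within are fixed upstream.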
With half of TikTok engagement occurring in the first ten seconds, marketers face extreme pressure to stay relevant. How does AI-driven asset amplification help brands pivot their messaging instantly, and what metrics should they monitor to ensure these automated posts resonate?
The brutal reality of modern social media is that if you don’t capture someone in those first ten seconds, you’ve already lost them, which is why “Project Asset Amplify” has become such a critical tool for survival. This technology allows a brand to take a single core asset and instantly spin out hundreds of variations—websites, social posts, and short-form videos—tailored specifically to different audience segments. Marketers need to stop looking at traditional traffic and start monitoring “agentic discovery” metrics, observing how their content performs within AI environments like ChatGPT via tools like the LLM Optimizer. Success in this high-velocity environment is measured by the “velocity of relevance,” where the goal is to see how quickly an automated post can be generated and iterated upon based on real-time feedback loops. By using these systems to stay “fresh,” brands can maintain a constant presence on platforms like TikTok, ensuring they have the right message for the right person at the exact moment their attention is up for grabs.
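A minimal sketch of that fan-out, with hypothetical channel names, segments, and metric labels: one core asset is expanded into a variant per channel-and-segment pair, each tagged with the engagement metric that matters on that surface.

```python
from itertools import product

# Hypothetical fan-out: one core asset becomes many channel/segment variants.
CHANNELS = ["tiktok", "web", "email"]
SEGMENTS = ["new_visitor", "loyal_customer"]

def amplify(core_asset: str) -> list[dict]:
    """Generate one variant per (channel, segment) pair.
    A real system would re-render copy, aspect ratio, and pacing per pair."""
    return [
        {
            "asset": core_asset,
            "channel": ch,
            "segment": seg,
            # The first-ten-seconds hook matters most on short-form video.
            "track_metric": "10s_retention" if ch == "tiktok" else "ctr",
        }
        for ch, seg in product(CHANNELS, SEGMENTS)
    ]

if __name__ == "__main__":
    for variant in amplify("spring_launch_hero"):
        print(variant)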
AI can now simulate A/B tests and explain why specific creative choices perform better than others before a campaign goes live. How does this shift the role of a traditional marketer, and could you walk us through the step-by-step process of using these insights to refine a strategy?
The role of the marketer is evolving from a person who guesses based on intuition to a strategist who orchestrates and validates via simulation, largely thanks to tools like “Project Face Off.” In this new workflow, a marketer begins by plugging different creative options and controls into the system, clearly defining the specific goal—whether it’s high conversion or brand awareness. The AI then simulates the A/B test, providing a detailed breakdown of why one version outperformed the other, such as pointing out that a certain color palette resonated better with a specific demographic’s intent. Instead of waiting weeks for real-world data to trickle in, the marketer uses these instant insights to refine the campaign’s direction before it ever sees the light of day. This shift allows for a much higher level of creative experimentation, as the “cost” of a failed test is reduced to a few seconds of processing time, enabling the team to arrive at a winning strategy with surgical precision.
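A toy version of that loop, with a faked scoring function standing in for the real predictive model (“Project Face Off’s” internals are not publicly documented): define the goal, score each variant, and surface the winner with an explanation the marketer can act on.

```python
import random

random.seed(42)  # deterministic toy simulation

def simulate_ab_test(variants: dict[str, dict], goal: str) -> dict:
    """Toy stand-in for a predictive simulator: score each variant
    against the stated goal and explain the gap. A real system would
    model audience response; here we fake scores for illustration."""
    results = {
        name: round(random.uniform(0.01, 0.10), 3)  # fake conversion rate
        for name in variants
    }
    winner = max(results, key=results.get)
    return {
        "goal": goal,
        "scores": results,
        "winner": winner,
        "why": f"'{winner}' uses a {variants[winner]['palette']} palette, "
               f"predicted to resonate with the target intent.",
    }

if __name__ == "__main__":
    variants = {
        "A": {"palette": "cool blue", "headline": "Save more"},
        "B": {"palette": "warm amber", "headline": "Do more"},
    }
    report = simulate_ab_test(variants, goal="conversion")
    print(report["winner"], report["scores"], report["why"], sep="\n")
```

Because each pass costs seconds rather than weeks, the refine step is just editing the variants dictionary and rerunning, which is exactly the cheap-experimentation loop described above.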
Edits made in a working document can now automatically update exported files across the web in real-time. How does this “living canvas” concept improve collaboration between design and marketing teams, and what are the potential risks of having live updates?
“Project Concurrent” introduces the concept of the “living canvas,” which finally breaks down the walls between the design studio and the live marketing environment. Imagine a designer making a slight adjustment to the typography or a color grade on an event flyer in their working document, and seeing that change instantly reflected on every website and digital billboard where that flyer is published. This eliminates the tedious, error-prone cycle of exporting, uploading, and replacing files, allowing teams to collaborate with a fluidity that was previously impossible. However, the risks are significant; a single accidental edit or a typo could be broadcast to millions of viewers instantly if the proper safeguards aren’t in place. To mitigate this, companies must rely on “Brand Intelligence” filters and enterprise-level approval workflows that act as a safety net, ensuring that while the canvas is “living,” it is also protected by a layer of human oversight and automated brand-consistency checks.
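A minimal sketch of the safeguard pattern described above, assuming a publish-subscribe design with an approval gate; the LivingCanvas class and brand_safe check are hypothetical, not “Project Concurrent’s” actual architecture.

```python
# Hypothetical "living canvas": edits publish to every subscribed surface,
# but only after passing an approval gate, so a stray typo never goes live.

class LivingCanvas:
    def __init__(self, approver):
        self.subscribers = []        # live surfaces: sites, billboards, etc.
        self.approver = approver     # human or automated brand check

    def subscribe(self, surface):
        self.subscribers.append(surface)

    def edit(self, change: str):
        if not self.approver(change):
            print(f"HELD for review: {change!r}")
            return
        for surface in self.subscribers:
            surface(change)          # propagate instantly once approved

def brand_safe(change: str) -> bool:
    """Toy safeguard: block edits containing obvious placeholder text."""
    return "lorem" not in change.lower()

if __name__ == "__main__":
    canvas = LivingCanvas(approver=brand_safe)
    canvas.subscribe(lambda c: print(f"website updated: {c}"))
    canvas.subscribe(lambda c: print(f"billboard updated: {c}"))
    canvas.edit("New headline: Summer Sale starts Friday")
    canvas.edit("lorem ipsum placeholder")  # caught by the gate
```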
Historical shifts show that AI often speeds up administrative workflows rather than replacing specialized roles like designers or analysts. In an increasingly autonomous creative studio, how do job descriptions evolve, and what new skills must workers prioritize?
The analogy Jensen Huang used regarding radiologists is the perfect roadmap for this transition: while AI now affects 100% of their work, the demand for their specialized expertise is actually higher because they can now process more data with greater accuracy. In the creative studio, we are seeing the “administrative” parts of the job—the file management, the resizing, the basic coding—being swallowed by AI, which leaves the human worker to focus entirely on the storytelling and the “mind’s eye” vision. Job descriptions will shift toward “Agentic Orchestration,” where a designer isn’t just someone who uses a mouse, but someone who manages a fleet of AI agents to execute complex visions. Workers must prioritize skills in “conversational experience” and “brand intelligence management,” learning how to feed the right context into these systems to get the best results. The roadmap for the future isn’t about competing with the machine’s speed, but about mastering the machine’s capabilities to tell more compelling, deeply human stories.
What is your forecast for AI-driven brand orchestration?
I believe we are heading toward a future where “Brand Orchestration” becomes a fully autonomous, self-correcting ecosystem that manages the entire customer lifecycle without constant manual intervention. We will see the rise of “agentic traffic,” which is a completely different beast than the web traffic we understand today; brands will need to optimize their presence not just for humans, but for the AI agents that humans use to navigate the world. Within the next few years, the “CX AI Maturity Index” will become the standard metric for a company’s health, measuring how effectively a brand can use its intelligence layer to provide real-time, personalized experiences across every touchpoint. Ultimately, the brands that win won’t be the ones with the most AI, but the ones that use AI to create the most authentic, frictionless human connections, turning every digital interaction into a meaningful story. My forecast is that the “living canvas” will extend far beyond documents, becoming the very fabric of how a brand exists in the digital and physical world, constantly shifting and evolving to meet the needs of the individual in real-time.
