AI Shifts From Talk To Action

Anthropic’s latest Claude update highlights a broader shift toward AI agents that execute tasks, redefining how people interact with software

Welcome to Memorandum Deep Dives. In this series, we go beyond the headlines to examine the decisions shaping our digital future. 🗞️

This week, a new AI update hinted at a much bigger shift happening beneath the surface of the industry. What looked like a feature release may actually signal a change in how software is used altogether.

For years, AI has been positioned as a tool that helps people think, write, and decide. But as these systems become more capable, the expectation is quietly evolving from assistance to execution.

What makes this transition especially interesting is not just what the technology can do, but how it changes the relationship between users and software. When systems start acting on behalf of people, the boundaries between tool and operator begin to blur.

In partnership with

Goldman Sachs touts "The Return of Physical Assets" → Here’s What You Can Do.

Goldman strategists argued the post-2008 era of total financial asset dominance is faltering. Their proposition? Heavy Assets with Low Obsolescence.

Scarce, globally priced, and immune to AI disruption.

Think about it. Blue-chip art is the original HALO asset.

Just in November 2025, someone bought a 1907 Klimt for $236mm.

And this month? Bloomberg reported that a billionaire scored 3,500% at auction: Bought for £364,500 in 1994. Sold for £13.5 million.

That’s just a fraction of the seller’s billion-dollar collection, representing 12.5% of his fortune.

Returns like that don’t happen every day, but the Artprice100 outpaced the S&P 500 by 64% (‘20-’24).

Now with Masterworks, you can fractionally invest in multimillion dollar artworks featuring artists like Banksy, Basquiat, and Picasso.

Over 71,000 people have invested $1.3 billion across over 520 artworks.

26 sales delivered net annualized returns like 14.6%, 17.6%, and 17.8%!!

Individuals referenced are not investors in Masterworks offerings. Masterworks did not contribute to the Artprice100 index. Investing involves risk. Past performance is not indicative of future returns. Important disclosures at masterworks.com/cd

*This is sponsored content. See our partnership options here.

When AI stops suggesting and starts acting

Since the artificial intelligence fever took over tech circles, countless theories have emerged about the technology’s real value and how it will change the future of work. Many of these sounded outlandish just a couple of months ago; now, the rapid evolution of AI models and tools is part of daily conversation.

Even now, the industry is rapidly shifting its focus away from building the most powerful models and toward actionable AI that can automate entire tasks. The logic is that models by themselves cannot justify the investment needed to develop and improve them without concrete use cases that create value for large sections of the population.

From hype to execution

For much of the past three years, AI tools have operated like very smart search engines: users ask questions and chatbots generate responses. The human remains in the loop at every step, reviewing outputs, deciding what to keep, and executing the final action themselves. While this model has been enormously valuable, it has limited what AI systems can do, not because they were incapable of performing tasks, but because they were never allowed to complete work without a human in between.

Anthropic’s latest Claude Dispatch addresses this limitation by letting users assign tasks that AI agents complete without human intervention. The way the mainstream media has covered this release speaks volumes about the importance of the transition: much of the coverage has focused on how users can now assign AI a task from their smartphones and return to find it completed on their laptops. In other words, Claude now goes beyond suggestions. It can open apps, navigate the browser, fill out forms, and complete tasks on its own, reducing the need for a person to step in between deciding what needs to be done and actually finishing the work.

And while to a casual observer this may look like just another convenience feature designed to lure in more users, scratch the surface and it represents a fundamental change in what AI actually is.

From chatbot to agent

The update from Anthropic represents the next phase of AI development, one that can be understood by comparing what an AI chatbot does with what an AI agent can do.

The fundamental change is that AI chatbots produce answers, but agents are designed to perform tasks. Instead of only generating text or suggestions, an agent can make decisions, move across different systems, and complete tasks without constant human input. This shift is important because as soon as software is trusted to act on someone’s behalf, the relationship between people and machines starts to change. The role of AI shifts from being a tool that assists with thinking to something closer to a co-worker that can be given responsibility for outcomes.
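The chatbot-versus-agent distinction above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any vendor's actual API: the function names (`chatbot`, `agent`, `plan_next_action`) and the toy `book_flight` tool are invented for the example. The point is structural: a chatbot maps one question to one answer, while an agent runs a plan-act-observe loop against real systems until the task is done.

```python
def chatbot(question: str) -> str:
    """One turn: produce text, then stop. The human does everything else."""
    return f"Here is a suggestion for: {question}"

def plan_next_action(state: str):
    """Stand-in for a model call; a real agent asks an LLM to choose."""
    if "booked" in state:
        return "done", None
    return "book_flight", "NYC->SFO"

def agent(task: str, tools: dict) -> list:
    """Plan-act-observe loop: keeps acting until the task is complete."""
    log = []
    state = task
    for _ in range(10):  # cap the steps so the loop always terminates
        action, arg = plan_next_action(state)   # the model decides what to do
        if action == "done":
            break
        result = tools[action](arg)             # execute against a real system
        log.append((action, arg, result))
        state = f"{state} | {action} -> {result}"  # feed the observation back
    return log

tools = {"book_flight": lambda route: f"booked {route}"}
print(agent("Book me a flight", tools))
```

In the chatbot case, the human reads the suggestion and does the booking; in the agent case, the software performs the booking step itself and only reports the outcome, which is exactly the shift in responsibility the article describes.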

This shift is happening right now, and data shows it is happening quickly. According to Gartner, at the beginning of 2025, fewer than 5% of enterprise applications used AI agents; however, IDC estimates that agentic automation will enhance capabilities in over 40% of enterprise applications by 2027. That is a much faster adoption curve than most enterprise technologies follow, the kind of acceleration that usually becomes clear only in hindsight, when it looks like the point at which the industry moved in a new direction.

A fast industry shift

This shift is not a new phenomenon, and there are clear parallels with earlier changes in computing.

When Apple launched the App Store in 2008 with just a few hundred applications, it created a new interface layer that eventually grew into a trillion-dollar ecosystem and reshaped how software was built and distributed. Companies that understood early that mobile apps would become the primary way people interacted with software were able to adapt, while those that treated phones as smaller versions of the web struggled to keep up.

The rise of AI agents has a similar pattern, because it changes the interface again. Instead of people opening individual apps to complete tasks, an agent can receive a request and coordinate across multiple services on its own.

The AI Talent Bottleneck Ends Here

If you're building applied AI, the hard part is rarely the first prototype. You need engineers who can design and deploy models that hold up in production, then keep improving them once they're live.

This is the kind of talent you get with Athyna Intelligence—vetted LATAM PhDs and Masters working in U.S.-aligned time zones.

*This is sponsored content

The new interface layer

Some industry leaders believe this could eventually make the app-based model less central. Speaking at South by Southwest (SXSW), the annual conference in Texas where technology, media, and startup companies gather to discuss future trends, Nothing chief executive Carl Pei said that many products built around standalone apps could be disrupted over time. An agent that understands a user’s intent, he argued, could move between services on its own, removing the need for people to open each app manually. His timeline was measured in years rather than months, but the underlying logic was simple: once software can handle the steps itself, the interface humans once relied on matters less.

Pei is not the only one who believes that AI agents will fundamentally change the way people interact with technology. Recently, during a podcast, NVIDIA chief executive Jensen Huang argued that AI has reached a point where it can create real economic value on its own, even if only in limited situations. He acknowledged that current systems are far from replacing human organizations, but said the important threshold is that autonomous systems can now contribute work that previously required people. That view focuses less on whether AI matches human intelligence and more on whether it can reliably produce results, which is what ultimately matters in business settings.

Capability brings risk

However, as AI agents automate tasks, the risks become more complex. In the same week that new agent tools were being introduced, Meta disclosed an internal incident in which an AI system responded to a technical query with incorrect guidance, leading an employee to change permissions in a way that briefly exposed sensitive information.

The company described the event as a case of human error, but situations like this show how difficult it is to separate human decisions from machine output when the two are closely linked. When software can act, suggest, and influence actions simultaneously, defining responsibility becomes difficult.

And since regulation has not kept pace with the rapid evolution of AI systems, it is unlikely to catch up with what the technology can do anytime soon. A recent White House framework on AI policy took a relatively light approach: it did not set clear rules for autonomous agents, such as auditing requirements or liability standards when systems cause damage, signaling that it will be some time before the law addresses who is responsible when an agent errs.

However, despite the concerns, companies are unlikely to slow down adoption. Surveys show that many organizations using AI agents are already reporting significant time savings and lower costs, and competitive pressure makes it difficult for any one company to hold back while others move ahead. In most technology transitions, the advantage goes to the groups that understand early how the change affects the structure of their work, rather than to those that adopt the tools first.

What is changing now is not only the power of AI, but also its role. Software is moving from advising to taking action, and from waiting for instructions to handling parts of the process independently. As that happens, the central question for companies is shifting from what their AI systems can say to what they are willing to let these systems do.

The shift becomes real

In many ways, the reaction to Claude Dispatch reflects how quickly the conversation around artificial intelligence has changed. Not long ago, debates about AI transforming work sounded speculative, the kind of predictions usually reserved for conference panels and research papers. What once felt distant is now starting to appear in everyday tools, and the shift is happening quietly, through updates that look incremental but alter how work actually gets done.

The real significance of Anthropic’s latest release lies not in the feature itself, but in what it represents. The industry is moving away from AI that simply answers questions toward systems that can complete tasks from start to finish, crossing the gap between intent and execution without constant human involvement. If that transition continues at the current pace, the future of work will not change because of a single breakthrough, but because small updates like this gradually turn ideas that once sounded outlandish into something ordinary.

P.S. Want to collaborate?

Here are some ways.

  1. Share today’s news with someone who would dig it. It really helps us to grow.

  2. Let’s partner up. Looking for some ad inventory? Cool, we’ve got some.

  3. Deeper integrations. If it’s longer-form storytelling you are after, reply to this email, and we can get the ball rolling.

What did you think of today's memo?
