Take a Letter to Artificial Intelligence
Or a postcard
There was an old essay titled "A Message to Garcia." Its moral was simple enough to become a cliché: when given a mission, the right man does not whine, stall, ask for endless clarification, or demand procedural babysitting. He takes the letter to Garcia.
For generations, managers loved the story because it expressed a fantasy of competent execution. Here is the objective. Now go do it.
Artificial intelligence is making that fantasy practical again, though in a strange and unsettling form.
Used well, AI feels less like hiring a clerk under a Statement of Work and more like issuing a Statement of Objectives. You stop prescribing every sub-step. You define the end state, the constraints, the priorities, and perhaps a few red lines. Then the system moves. It proposes routes, drafts options, recombines material, tests phrasing, searches for fit, and pushes toward completion with less hand-holding than older software ever allowed.
That is why it increasingly feels like “Take a letter to Artificial Intelligence.”
Not because the machine is human. Not because it has judgment equal to a mature mind. And not because the old need for oversight has disappeared. It has not. But something has shifted. The burden of procedural direction is beginning to lighten. You can say less and still get more. You can specify the objective without scripting every move. And when that works, the efficiency gain is not just speed. It is a compression of intent.
That matters more than many people realize.
Most software in the old sense was obedient but brittle. It required users to think like process operators. The workflow had to be broken into explicit steps, menus, inputs, and fields. The machine did exactly what it was told, which sounds ideal until one remembers how exhausting it is to tell a machine everything.
AI changes the feel of the exchange. Increasingly, the user says, in effect: here is the thing I need done. Here is what good looks like. Here are the constraints. Now go work.
That is a Garcia vibe.
The more capable the model becomes, the less it needs micromanaged choreography. A better-fitting model reduces prompt overhead. Under-direction is starting to become a feature rather than a defect. The user can spend fewer words on the process and more on the outcome. That is not a small usability improvement. It is a different relationship to digital power.
Anyone who has written a truly painful contract knows the analogy. A Statement of Work tells the contractor exactly how to do the thing. A Statement of Objectives describes what success looks like and lets the performer shape the path. Bad AI use still resembles the worst kind of SOW thinking: over-prescribed, brittle, suspicious, anxious, and full of unnecessary scaffolding. Better AI use increasingly resembles a SOO: mission first, boundaries second, route discovered under pressure.
This is one reason the technology feels more personal and more dangerous at the same time. Once the machine begins operating at the level of objective pursuit rather than mere step execution, it stops feeling like a spreadsheet with attitude. It starts feeling more like delegated cognition.
And that is where both the power and the trouble begin.
The old Garcia ideal was never as innocent as its admirers pretended. It romanticized initiative, yes, but it also carried a risk: the man who takes the letter without complaint may carry the wrong letter to the wrong place with great efficiency. Execution is not wisdom. Mission discipline is not a moral judgment. The more one praises frictionless obedience, the more one must ask whether the mission itself was sound, bounded, or even understood.
That caveat belongs in any serious discussion of AI.
A system that can run with an objective is powerful. A system that runs with a bad objective is dangerous in proportion to its efficiency. The move from process management to mission assignment does not eliminate the need for supervision. It relocates it. The user no longer has to spend as much effort on every intermediate step, but he has to think much harder about aim, scope, and consequences. Once the machine can take the letter to Garcia, the real question becomes whether Garcia is the right recipient.
This is why the future of AI is not just about larger models, longer context windows, or faster output. It is about sharper alignment between the objective and the result. The “magic” people keep describing is not total recall or maximal fluency. It is selective continuity plus improved fit. It is the experience of stating less and getting more of what you actually meant. When that happens, the system begins to resemble not a dumb servant, but a highly compressed mission interface.
That is intoxicating.
It is also why so many people miss the nature of the transition. They keep talking as though AI were merely a content machine: more text, more images, more code, more voice, more synthetic clutter. That is true at the surface. But the deeper shift is operational. AI expands not just output, but the user’s ability to project intention across channels with reduced procedural friction. It increases expressive bandwidth and execution bandwidth at once.
Communication, after all, is more than words. It includes timing, rhythm, motion, juxtaposition, visual emphasis, tone, and structure across multiple media. AI gives ordinary people more channels to express themselves. Likewise, execution is more than clicking through menus. It includes problem framing, option generation, adaptation, and synthesis under constraints. AI gives ordinary people more ways to act through a system that can increasingly infer the path from the objective.
That is why the old complaint that AI is just fake text or fake images already feels stale. The more relevant reality is that AI is becoming a medium for missioned action.
Take a letter to Garcia.
Write the brief.
Draft the policy.
Animate the static art.
Search for the opportunity.
Find the pattern.
Surface the hidden fit.
Carry the objective forward.
That does not mean the system is autonomous in the dramatic science-fiction sense. It means the human-machine relationship is drifting away from process babysitting and toward objective assignment. The machine remains dependent, but the nature of that dependence changes. It depends less on full procedural specification and more on well-framed ends.
In practical terms, this means the intelligent use of AI will increasingly look like this: define the target, define what matters, define what is forbidden, and then inspect the route the system proposes. The art shifts from command scripting to mission design. Some people will hate that. It feels loose. It introduces ambiguity. It makes room for initiative from a system that should not be trusted blindly. All true.
But the efficiency gain is real, and it will be hard to give up.
We are likely entering a period in which the best users of AI will not be those who can write the longest prompts or specify the most sub-steps. They will be those who can define worthwhile objectives cleanly, constrain them intelligently, and recognize when the machine has gone off-mission. In other words, people who can still think at the level above execution.
That may be the real dividing line.
Not AI users versus non-users.
Not believers versus skeptics.
Not automation versus authenticity.
The deeper divide may be between those who can issue sound objectives into increasingly capable systems and those who cannot. The former will gain leverage. The latter will drown either in crude resistance or in passive dependence on machine initiative they do not know how to direct.
That is why “Take a letter to Artificial Intelligence” is not just a cute analogy. It names something real. We are moving from an era of explicit procedural software toward an era of delegated digital execution. The manager’s fantasy from a century ago — here is the mission, now go — is being reborn in silicon.
The old essay praised obedience.
The new world will demand something harder:
clear objectives,
good boundaries,
wise interruption,
and the courage to admit when the wrong letter is being carried with flawless speed.
Because that is the final truth of it.
The more capable the machine becomes, the less the problem is getting it to move.
The problem becomes deciding what deserves to be moved at all.
