
We've NOT Been Here Before: Humans as the Hands of AI

  • Solange Charas, PhD and Stela Lupushor
  • Oct 20
  • 8 min read

ChatGPT generated

You’re a surgeon. Your patient arrives at your cosmetic surgery practice with a phone full of AI-generated images showing their "perfect self." Twenty variations of their face, each optimized by algorithms trained on millions of faces and prevailing beauty standards. Your job? Execute that AI-generated vision with surgical precision, literally. You're no longer an artist. You're a contractor following someone else's blueprints. Except the "someone else" isn't human.


Welcome to the inversion, where AI dreams and humans deliver.


The “automation replacing manual labor” ship sailed during the Industrial Revolution. The sentiment that “AI assists human creativity” is already passé. We are talking about something very different: AI becomes the source of the vision, strategy, and creative direction, and humans become the implementers. The tables have turned - it’s as if humans are no longer in control; AI is directing us. The architect builds what the generative design algorithm recommends. The business strategist executes the plan that GPT-25 drafted. The marketing team implements the campaign Claude conceived. The developer codes what Cursor suggested. The writer publishes what the AI outlined, with a few tweaks (can you catch them?).


Previous revolutions amplified human intention. This one is likely to replace it. Or at least it is attempting to, unless we decide otherwise.


“Execution Economy”


The pattern is everywhere:

Beauty and body modification: Perfect Corp's AI beauty advisor analyzes your face and generates optimal looks based on facial geometry, skin tone, and current trends. The opening paragraph is not made up! Plastic surgeons report patients arriving with AI-generated targets that algorithmically optimize their features. The surgeon is there just for technical execution.

Architecture and design: Autodesk's generative design tools can now produce thousands of building layouts optimized for cost, materials, and flow. There’s a hint of “we’ve been here before,” echoing decades of computer-aided design and manufacturing (CAD/CAM) that digitized blueprints and improved precision. But AI goes further: it learns from vast datasets, predicts trade-offs, and generates new options. Where CAD/CAM enhanced execution, AI expands imagination, and architects like those at Zaha Hadid Architects become curators of machine-generated possibilities. Tools that follow commands become systems that co-create ideas.
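To make the shift concrete, here is a minimal sketch of the generative-design pattern - not Autodesk's actual algorithm, and every variable and scoring weight below is invented for illustration. The algorithm originates thousands of candidate layouts and ranks them on cost, material, and flow trade-offs; the human's role shrinks to curating the shortlist.

```python
import random

# Toy generative-design loop: the algorithm originates candidate layouts,
# the human only curates the top-scoring survivors.
# All parameters and scoring weights are illustrative, not Autodesk's.

def random_layout():
    """Propose a candidate building layout as a few design variables."""
    return {
        "floors": random.randint(2, 30),
        "footprint_m2": random.uniform(200, 2000),
        "window_ratio": random.uniform(0.1, 0.9),
    }

def score(layout):
    """Score a layout on (made-up) cost, daylight, and flow trade-offs."""
    cost = layout["floors"] * layout["footprint_m2"] * 1.2
    daylight = layout["window_ratio"] * layout["footprint_m2"]
    flow = 1.0 / layout["floors"]          # fewer floors -> easier circulation
    return daylight + 5000 * flow - 0.001 * cost

# Generate thousands of options; the human picks among the machine's best.
candidates = [random_layout() for _ in range(5000)]
shortlist = sorted(candidates, key=score, reverse=True)[:5]
for i, layout in enumerate(shortlist, 1):
    print(f"Option {i}: {layout} -> score {score(layout):.1f}")
```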

Business strategy: McKinsey's Lilli AI assistant analyzes market data and generates strategic recommendations from more than 100,000 internal documents. Management consultants' work is now less about strategy creation and more about "strategy validation and implementation planning." The thinking happens upstream, in the algorithm, and consultants become mere readers of “monochrome” AI output.

Medical diagnosis: Google's Med-PaLM 2 reaches expert-level performance on medical licensing exams. Doctors participating in pilot programs already report that their role is becoming more about executing AI-recommended treatment protocols and less about diagnosing. AI suggests; humans perform.

Creative work: Models like Runway's Gen-2 create videos from simple text prompts. Directors describe scenes to AI, then edit the results. OpenAI's Sora creates video from text descriptions, and, with a few bits of voice and likeness data about you, filmmakers may soon direct movies entirely through language. Humans are left to handle only the physical production logistics.

Software development: GitHub's Copilot writes 46% of the code for developers who use it, across all programming languages. Cursor AI handles implementation tasks from natural language descriptions. Developers report spending more time reviewing and integrating AI-generated code than writing it from scratch.
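What that review-heavy workflow looks like in miniature (a hypothetical example, not actual Copilot or Cursor output): the AI drafts plausible code, and the human's contribution is catching the edge case the draft missed.

```python
# Illustration of the review-first workflow (hypothetical example, not
# actual Copilot output): the human's job shifts from writing to vetting.

# --- AI-suggested implementation ---
def average(values):
    return sum(values) / len(values)

# --- Human reviewer's pass: catch the edge case the suggestion missed ---
def average_reviewed(values):
    if not values:                    # the AI draft divides by zero on []
        raise ValueError("average() of empty sequence")
    return sum(values) / len(values)

print(average_reviewed([3, 4, 5]))   # 4.0
```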


What is common across all these examples? The creative spark, strategic thinking, design vision - all the engaging, distinctly human activities - increasingly originate in algorithms. Humans are left to execute, refine, and handle the real-world implementation. Is there fun in that? Maybe. But the creative work is cognitively offloaded to machines.


Downstream 


Might we see new job categories as a result of this inversion? Perhaps… 


AI Vision Interpreters: Professionals who translate AI-generated strategies into implementable plans. Demand for prompt engineering skills has grown dramatically, and these roles form the interface between algorithmic output and human teams. A similar job already exists: the Analytics Translator. Maybe the next iteration is an AI Translator?

Execution Specialists: Workers valued for their ability to realize AI-generated visions with high fidelity. The cosmetic surgeon who perfectly executes AI beauty standards. The developer who flawlessly implements AI-generated code. The architect who efficiently builds AI-designed structures. And we’ll have AI evaluate their performance, so they get precise feedback to keep perfecting their execution.

Quality Validators: Teams that verify AI recommendations meet real-world constraints. Financial services firms now employ dedicated staff who check algorithmic recommendations before execution. This job already exists at MassMutual, called a Model Validator.

Reality Translators: Professionals who bridge between AI-optimal solutions and messy human reality. The AI suggests a business restructuring that's mathematically optimal but politically impossible. The reality translator makes it work. This position already exists as well and is described in this article: AI Translator Jobs: Bridging Human Expertise and Artificial Intelligence Across Business Domains.  

Human Override Authorities: Senior decision-makers who retain veto power over AI recommendations. But will they use it? Research shows humans often defer to algorithmic recommendations, even when the algorithm is wrong.


These roles sound important, but they're all downstream.


The Eno Exception


Not all algorithmic creation follows the troubling pattern. Brian Eno's pioneering work in generative music is a great counterexample, showing how humans can maintain creative control while using algorithmic systems.



His relationship with the algorithm is different. Eno curates his "huge library of sounds" (aka the sonic palette), the rules and parameters governing the system, the aesthetic boundaries and constraints, and the overall vision and intent. The algorithm generates the specific combinations and sequences, real-time variations that never repeat, and interactions between elements he didn't explicitly compose.
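A minimal sketch of that division of labor - our illustration, not Eno's actual software, with an invented palette and invented timing constraints. The human fixes the palette and the rules; the algorithm only recombines within them, so every run is unique yet stays inside the designer's aesthetic.

```python
import itertools
import random

# Toy generative system in Eno's spirit (our illustration, not his software):
# the human curates the palette and the rules; the algorithm only combines.

PALETTE = ["bell_C", "bell_E", "bell_G", "pad_A", "pad_D"]  # curated sounds
MIN_GAP, MAX_GAP = 3.0, 17.0  # human-chosen timing constraints, in seconds

def generate(seed=None):
    """Yield an endless, never-exactly-repeating sequence of (sound, delay)."""
    rng = random.Random(seed)
    while True:
        sound = rng.choice(PALETTE)             # combination: algorithm's job
        delay = rng.uniform(MIN_GAP, MAX_GAP)   # timing within human bounds
        yield sound, delay

# Every listener hears a unique moment, but it stays unmistakably the
# designer's, because the palette and constraints never change.
for sound, delay in itertools.islice(generate(seed=42), 5):
    print(f"play {sound}, then wait {delay:.1f}s")
```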


In Eno's apps like Bloom, Trope, and Reflection, users interact with systems Eno designed. The music they hear is unique to that moment, but it's unmistakably "Eno" because he crafted the generative rules. As he puts it: "From now on there are three alternatives: live music, recorded music, and generative music."


The creative vision remains human. The execution becomes algorithmic. But, importantly, the human designed the execution system itself.


This model recently extended to film. Gary Hustwit's documentary Eno uses generative software to create 52 quintillion possible versions. Each screening shows different scenes in different orders, exploring different themes. But Hustwit and his team shot the footage, conducted the interviews, and designed the generative rules. The algorithm sequences, but humans created the vocabulary.
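For a sense of where a number like 52 quintillion comes from (our back-of-envelope arithmetic, not the filmmakers' stated method): ordering even a modest pool of scenes explodes combinatorially.

```python
import math

# Back-of-envelope combinatorics (our arithmetic, not the filmmakers' method):
# merely ordering 21 distinct scenes already yields ~5.1e19 sequences,
# the same order of magnitude as the film's quoted 52 quintillion versions.
print(math.factorial(21))            # 51090942171709440000
print(f"{math.factorial(21):.2e}")   # ~5.11e+19
```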


Big Questions on Inversion


1. Strategy - Who Owns the Vision?


Past technologies raised questions about ownership of physical products or intellectual property. AI-as-a-vision-generator raises a different question: Who owns strategic direction?

If an AI system analyzes your market and generates a business strategy, who's the strategist? If it designs your product, who's the designer? If it diagnoses your patient, who's the doctor? Legal and professional frameworks aren't ready for this. Nor is the liability insurance industry. 


The U.S. Copyright Office ruled that AI-generated art cannot be copyrighted because it lacks human authorship. But what about AI-generated business strategies? Architectural designs? Medical treatment protocols? The creative and strategic output that drives organizational success increasingly comes from algorithms that can't legally "own" anything.


Organizations need clear frameworks for:

  • Attribution of AI-generated strategies. When the algorithm originates the plan, who gets credit (or blame)?

  • Authority structures in human-AI collaboration. Can humans override AI recommendations? Must they justify doing so? (New skills for auditors needed!)

  • Value capture from AI vision. If AI generates the strategy, does the organization still own its competitive advantage?


Leaders delegate strategic thinking to AI because it's faster and cheaper. The workforce becomes execution-focused, implementing visions they didn't create. Turning humans into robots might not be your best HR or organizational strategy. 


2. Policy - Who Is Responsible?


With this inversion, where does accountability sit?

Medical context: If AI recommends a treatment protocol and the doctor executes it, who's responsible for outcomes? The doctor claims they followed AI guidance. The AI company claims it only makes suggestions. The patient suffers harm with no clear path to accountability.

Business context: If AI generates a strategy that fails, who's accountable? The executive who approved it? The AI vendor who built the system? The workers who executed it? Traditional management responsibility assumes humans originate decisions.

Creative context: If AI generates a design that violates building codes or safety standards, who's liable? The architect who implemented it? The software company? The building owner who selected that AI option?


The EU AI Act attempts to address this by categorizing AI systems by risk level. High-risk systems (medical diagnosis, critical infrastructure, employment decisions) require human oversight. But "oversight" is vague. Does reviewing AI recommendations constitute meaningful oversight if humans lack expertise to evaluate them?


Organizations need to adopt clear accountability frameworks that don't let everyone hide behind "the AI did it." We also need meaningful human oversight that goes beyond rubber-stamping AI decisions (a whole new set of skills and capabilities no one is building yet). And lastly, we need professional standards and decision-making frameworks for when humans must override algorithmic recommendations.


3. Programs - How Do We Learn?


Training must change too. Medical education might shift from "how to diagnose" to "how to validate AI diagnoses and execute recommended treatments." Architecture programs might focus less on design theory and more on "how to efficiently build AI-generated designs." The new core competencies might be communicating effectively with AI systems to get useful outputs, assessing when algorithmic recommendations are sound, and knowing when to reject AI recommendations despite algorithmic confidence.


Work identity matters. Doctors train for a decade to become diagnosticians. If AI diagnoses better, what's the doctor's identity? If AI designs better, what's the architect's role? Harvard Medical School now offers AI-focused coursework for students in its Health Sciences and Technology track. The curriculum focuses not on creating AI systems but on working effectively with AI-generated recommendations. This might be an early signal, and a model that spreads to other professional programs.


The Choices


Do we want humans to originate vision with AI assistance, or the other way around?


The first preserves agency. The second optimizes efficiency.


Organizations choose the second by default. It's faster, cheaper, often better. But "often better" isn't the same as "maintaining human creative agency." We're trading long-term capability for short-term gains.

Research on automation complacency shows humans lose delegated skills. Pilots lose manual flying ability. Drivers lose navigation skills. Workers delegating creative thinking might lose the ability to think creatively without algorithms.


Once entire organizations delegate strategic thinking, can they regain independent thought? What happens when AI recommendations are wrong and no one can tell, because everyone was trained to… implement?


What happens when the job survives but the creative agency doesn't?


The surgeon still has a job. Artist or contractor? The strategist still has a role. Thinker or implementer? The architect still designs buildings. Designer or curator? We're early in this transformation. Our choices about human-AI collaboration, training, accountability, and organizational design determine whether we become “just” human hands.


Losing capabilities is easier than regaining them. Once a generation stops learning navigation, teaching it becomes harder. Once professionals stop learning independent strategy, original thinking becomes scarce. Rejecting AI vision-generation is not the answer either. It's too useful. Consciously designing collaborations is where the answers might be. 


We've never been here before. We get to choose what happens next. Are we choosing consciously?

 

This post is part of our series "We've Not Been Here Before." Subscribe to our newsletter as we explore technologies that push beyond historical precedent into truly uncharted territory.



 
 
 
