Quick Navigation
- The Data Nobody's Acting On
- Why Dedicated "Prompt Engineers" Fail
- What Prompt Literacy Actually Is
- Agentic Workflow Design: The Real Capability Gap
- How to Build Capability at Scale (Not Hiring)
- Foundation Elements
- Why This Works
- Paths Forward
- Cost Considerations
- Questions to Consider
- The Bottom Line
Job postings increasingly seek "Prompt Engineers" at $150K-$200K with 2+ years of experience to optimize AI interactions and build prompt libraries.
Consider the webmaster problem. In the mid-90s, organizations hired dedicated people to "manage the website," updating HTML, adding pages, and maintaining sites. This made sense for about three years.
Then the internet became infrastructure. Every department needed web presence. Every employee needed to understand digital content. The "webmaster" became a bottleneck, then an anachronism.
The pattern repeats with "prompt engineers."
The Data Nobody's Acting On
40% of the workforce needs reskilling in the next three years because of AI (IBM research). Not 5%. Not just "the AI team." 40%.
89% of executives say their workforce needs AI skills. Only 6% are upskilling meaningfully. The gap between recognition and action is staggering.
63% of executives cite skills gap as the #1 barrier to AI adoption. Not technology. Not budget. Skills.
The response? Hire "prompt engineers." Consider whether this addresses the scale of the challenge.
Why Dedicated "Prompt Engineers" Fail
1. Prompt Engineering Isn't a Role, It's a Literacy
Prompt engineering relates to AI as typing relates to computers or search queries to Google.
In 1985, organizations hired "typing specialists" to transcribe executives' handwritten notes. By 1995, every knowledge worker typed their own documents. The skill became universal because the tool became universal.
Same pattern with search. In 1998, "research specialists" crafted effective search queries. By 2005, everyone Googled things themselves.
AI follows the same trajectory, but faster. When AI tools embed in every workflow (email, documentation, analysis, coding, design), every knowledge worker needs prompt literacy, not one specialist in a central team.
2. Centralization Creates Bottlenecks
The predictable pattern:
- Months 1-3: teams develop "best practices" and prompt libraries.
- Months 4-6: requests pile up from Marketing, Finance, and Legal; the team becomes a queue.
- Months 7-9: wait times increase, departments get frustrated, shadow AI emerges.
- Months 10-12: shadow AI proliferates (official channels are too slow), teams become demoralized (approval bottleneck), and executives question ROI.
This is the webmaster pattern. Centralizing what should be distributed creates friction that kills adoption.
3. Domain Knowledge > Prompt Technique
The reality: the hard part isn't the prompting. It's understanding what you need.
A finance analyst who knows financial modeling can learn to prompt AI for financial analysis in hours. A "prompt engineer" without finance understanding will never write prompts as effective as the analyst, regardless of technical skill.
Good prompting requires understanding the domain deeply enough to specify needs, recognizing when AI outputs are wrong or incomplete, knowing what edge cases matter, and iterating based on domain expertise. You can teach domain experts to prompt. You cannot teach prompt specialists to become domain experts in every field they support.
4. The Skills Evolve Too Fast for Centralization
Prompt engineering "best practices" from early 2024 were outdated within months: context windows grew by an order of magnitude, function calling improved, multimodal capabilities expanded, and agentic tools like Claude Code changed the interaction paradigm entirely.
When capabilities evolve this fast, centralized teams struggle to keep up. By the time they document "best practices," tools have changed. Distributed learning, where domain experts experiment directly and share what works, adapts faster than centralized expertise.
What Prompt Literacy Actually Is
Prompt literacy is a core capability for knowledge workers, like reading, writing, and critical thinking. But it extends beyond "writing better prompts" to include:
Understanding Model Capabilities: Recognizing what tasks AI excels at (pattern recognition, summarization, structured transformation), what it struggles with (accurate math, real-time information, causal reasoning), when to use AI versus doing it yourself, and how to recognize hallucinations. This develops through intuition via use, not memorizing specs.
Iterative Refinement: Starting with rough prompts and refining based on outputs, recognizing when you're asking the wrong question, breaking complex tasks into AI-solvable components, knowing when to provide examples versus abstract instructions. This is learned through practice, not training modules.
Data Classification and Safety: Understanding what data can be used with which tools, how to anonymize or sanitize inputs, when to escalate to governance teams, and what "acceptable use" means in context. Every employee needs this governance literacy.
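As a concrete illustration, here is a minimal Python sketch of what a pre-send check might look like. The classification labels, tool allow-list, and email redaction rule are hypothetical; real values would come from the organization's governance policy and be enforced by infrastructure, not application code.

```python
import re

# Hypothetical classification labels and tool allow-list; real values would
# come from the organization's data governance policy.
ALLOWED_TOOLS = {
    "public": {"chatgpt", "claude", "gemini", "internal-llm"},
    "internal": {"internal-llm"},
    "confidential": set(),  # confidential data stays inside governed systems
}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(text: str) -> str:
    """Redact obvious identifiers before text leaves the organization."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def check_prompt(text: str, classification: str, tool: str) -> str:
    """Block disallowed tool/classification pairs, otherwise sanitize the input."""
    if tool not in ALLOWED_TOOLS.get(classification, set()):
        raise PermissionError(
            f"{classification!r} data may not be sent to {tool}; escalate to governance."
        )
    return sanitize(text)

# Example: internal data is only permitted on the internal endpoint.
print(check_prompt("Q3 variance notes from jane.doe@example.com", "internal", "internal-llm"))
```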
Tool Selection and Context Switching: Knowing when to use ChatGPT versus Claude versus Gemini versus specialized tools, how to move between tools based on task requirements, what pricing models mean for different use cases, when to use agentic tools versus chat interfaces. This requires hands-on experimentation across multiple tools.
Agentic Workflow Design: The Real Capability Gap
The skill separating people who use AI as "autocomplete+" from those who transform their work isn't prompt engineering. It's agentic workflow design: decomposing complex work into human and AI components, designing feedback loops that improve AI outputs over time, identifying which decisions require human judgment and which can be delegated, building workflows where AI acts autonomously within defined boundaries, orchestrating multi-step processes with AI as an active agent.
Consider financial report generation.
- Traditional approach: the analyst manually pulls data, creates charts, writes the narrative, and formats it in PowerPoint (4-6 hours).
- AI-assisted approach with basic prompt literacy: the analyst uses ChatGPT to draft the narrative from pasted data but still pulls data and creates charts manually (3-4 hours, a modest improvement).
- Agentic workflow design: the analyst designs an automated pipeline in which AI extracts data from defined sources, generates charts from templates, writes the narrative with anomaly detection, and formats the output; the analyst reviews the final result (30 minutes of review plus automated execution). The pipeline runs on schedule without manual triggering.
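As an illustration of the agentic version, here is a minimal Python sketch of such a pipeline. The data, the anomaly rule, and the narrative step are hypothetical stand-ins for a warehouse query, a real detection rule, and an LLM call.

```python
from dataclasses import dataclass

# Hypothetical stand-ins: in a real pipeline these would wrap a data warehouse
# query, a charting step, and an LLM call respectively.
def extract_data(sources):
    return {"revenue": [120, 125, 60], "sources": sources}

def detect_anomalies(data):
    avg = sum(data["revenue"]) / len(data["revenue"])
    return [f"period {i}: {v}" for i, v in enumerate(data["revenue"]) if v < 0.8 * avg]

def draft_narrative(data, anomalies):
    # Placeholder for an LLM call that drafts the report narrative.
    return f"Revenue trend {data['revenue']}; flagged items: {anomalies or 'none'}."

@dataclass
class ReportRun:
    narrative: str
    anomalies: list  # the items the analyst reviews by hand

def monthly_report(sources):
    """One scheduled end-to-end run; the analyst only reviews the output."""
    data = extract_data(sources)
    anomalies = detect_anomalies(data)
    return ReportRun(draft_narrative(data, anomalies), anomalies)

print(monthly_report(["finance_warehouse.q3"]))
```

The design point is the boundary: the pipeline acts autonomously within defined sources and templates, while anomalies are surfaced for human judgment rather than handled silently.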
That's the difference. Not "I can write better prompts," but "I can redesign work."
Agentic workflow design resists centralization because it requires deep domain knowledge (understanding the work intimately to redesign it), is role-specific (what works for finance doesn't work for legal or marketing), requires ongoing iteration (workflows need continuous refinement), and delivers the biggest ROI (where 10x productivity gains come from, not better autocomplete).
How to Build Capability at Scale (Not Hiring)
Consider systems that enable distributed learning:
AI Budget: Providing $50-150/month per employee to experiment with AI tools enables embedded learning. Traditional approaches (train, test, grant access, hope for application) yield 15-20% retention after 6 months. Embedded learning (provide safe access, employees solve real problems, learning through application) yields 70%+ retention because skills are actively used. An employee automating a tedious workflow learns more in a week than in 40 hours of classroom training, solving real problems with immediate feedback.
Sandboxing: AI budgets operate within controlled environments where data classification is enforced at infrastructure level, network isolation prevents access to production systems, audit trails capture everything, and clear escalation paths exist for successful experiments. This creates conditions for deliberate practice, working at the edge of abilities with immediate feedback in safe environments.
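Here is a minimal sketch of one sandbox ingredient, the audit trail, assuming a hypothetical log file and a placeholder model endpoint. A real sandbox would enforce this at the infrastructure level rather than in application code, and would log to an append-only store.

```python
import functools, json, time

AUDIT_LOG = "sandbox_audit.jsonl"  # hypothetical append-only audit trail

def audited(tool_name):
    """Wrap a sandbox tool call so every invocation is recorded for review."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, prompt, **kwargs):
            record = {"ts": time.time(), "user": user, "tool": tool_name,
                      "prompt_chars": len(prompt)}  # metadata only, not raw content
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps(record) + "\n")
            return fn(user, prompt, **kwargs)
        return wrapper
    return decorator

@audited("internal-llm")
def run_prompt(user, prompt):
    # Placeholder for the call to the sandboxed model endpoint.
    return f"(response to {len(prompt)} characters from {user})"

print(run_prompt("analyst@example.com", "Summarize the Q3 variances"))
```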
Knowledge Capture: When thousands experiment, systems must capture learning. Otherwise Finance solves a problem that HR will solve again next month. This requires centralized repositories for experiments and learnings, search and discovery for finding relevant examples, upvoting and curation for surfacing best solutions, and attribution and recognition for rewarding contributors.
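One way to picture such a repository is a simple record schema plus search and upvote-based ranking. The sketch below is hypothetical; a production system would sit on a shared database with full-text or vector search.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    # Hypothetical record schema for a shared experiment.
    title: str
    author: str       # attribution supports recognition and reward
    domain: str       # e.g. "finance", "hr", "legal"
    learnings: str
    upvotes: int = 0

class Repository:
    def __init__(self):
        self.items = []

    def add(self, exp: Experiment):
        self.items.append(exp)

    def search(self, term: str):
        """Naive keyword search, ranked by upvotes; a real system would use
        full-text or vector search over a shared store."""
        term = term.lower()
        hits = [e for e in self.items
                if term in f"{e.title} {e.domain} {e.learnings}".lower()]
        return sorted(hits, key=lambda e: e.upvotes, reverse=True)

repo = Repository()
repo.add(Experiment("Invoice triage prompt", "jane", "finance",
                    "Three worked examples beat abstract instructions", upvotes=4))
print([e.title for e in repo.search("invoice")])
```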
Communities of Practice: The most effective learning comes from peers who've solved similar problems. Cross-functional communities around AI governance, prompt engineering techniques, agentic workflow design, tool selection, and domain-specific applications (often informal Slack channels sharing wins, failures, and learnings) frequently outperform structured programs.
Foundation Elements
Some foundational learning remains valuable, focused on enabling practice rather than theoretical knowledge:
AI Governance and Safety (2 hours): What data can you use with which tools (specific examples based on data classification), boundaries of acceptable use (real scenarios, not abstract policies), when to escalate (clear decision tree), what happens if you make a mistake (psychological safety, not fear). Delivered through interactive workshops with real organizational examples.
Prompt Literacy Basics (4 hours): Hands-on practice with common tasks in employees' actual roles. Try it, share results, iterate together, document patterns that work for your domain. Employees leave with prompts they can use immediately.
Agentic Workflow Design (8 hours over 2 weeks): Project-based learning with real workflows. Map a workflow you own, identify repetitive or high-volume steps, design an experiment for AI automation or augmentation, build it using AI budget, share results, iterate.
Tool Selection and Context (4 hours): When to use ChatGPT versus Claude versus Gemini (organizational examples), agentic tools versus chat interfaces, cost considerations, framework for evaluating new tools quickly. Guided exploration testing multiple tools on same tasks.
Total investment: 18 hours over 3-4 weeks, happening alongside hands-on experimentation. Learning reinforces practice; practice drives learning.
Why This Works
Adults learn by application, not absorption. Lecturing about AI for 40 hours yields 10-20% retention after a week. Giving someone a real problem, AI tools to solve it, and support when stuck yields 70%+ retention because they use it immediately.
This isn't revolutionary pedagogy. It's basic learning science: adults need relevance to their actual work (embedded learning provides authentic relevance solving their own problems), agency over their learning path (employees pursue skills they need, when needed, in context that matters), and immediate application (traditional training creates gaps between learning and application where skills decay; embedded learning collapses that gap to zero).
Paths Forward
Organizations considering this approach might explore phased implementation:
- Define data governance and classification.
- Build sandboxes on existing cloud infrastructure.
- Design AI budget frameworks ($50-150 per employee per month).
- Launch pilots with 100-200 employees across functions.
- Deliver foundational training alongside hands-on experimentation.
- Expand organization-wide with communities of practice.
- Implement knowledge repositories with search and discovery.
- Analyze usage patterns and double down on what works.
- Embed the approach into performance management and culture.
Timeline consideration: 6 months to full deployment, 12 months to embedded culture. Compare this to hiring "prompt engineers" and hoping they can support thousands of employees.
Cost Considerations
For 5,000 employees, a dedicated prompt engineering team (5 engineers at $175K plus benefits, a team lead, tools, and training) costs approximately $1.75M annually. It becomes a bottleneck for AI requests, covers only a handful of domains, and cannot scale with demand.
A distributed capability approach (AI budget $100/month, sandbox infrastructure $20/month, knowledge capture $10/month, training $5/month, and governance $15/month per employee) costs approximately $9M annually at $150/employee/month. It lets every employee experiment directly, keeps learning in the context of their own domain, and scales naturally with the organization.
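A back-of-the-envelope check of those two figures; the 2x multiplier for benefits, lead, tools, and training on the centralized team is an assumption chosen to match the quoted total.

```python
# Back-of-the-envelope check of the two annual figures above.
employees = 5_000

# Centralized team: 5 engineers at $175K base; the 2x multiplier for benefits,
# team lead, tools, and training is an assumption chosen to match ~$1.75M.
centralized = 5 * 175_000 * 2
print(f"Centralized team: ${centralized:,}/year")        # $1,750,000

# Distributed approach: per-employee monthly components from the text.
per_employee = 100 + 20 + 10 + 5 + 15   # budget + sandbox + capture + training + governance
distributed = per_employee * employees * 12
print(f"Distributed: ${distributed:,}/year (${per_employee}/employee/month)")  # $9,000,000
```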
The comparison: 5 specialists who become bottlenecks versus 5,000 employees applying AI to their actual work. When 40% of the workforce needs AI skills in 3 years, hiring specialists cannot close that gap. Systems that build capability at scale become necessary.
Questions to Consider
Several objections commonly arise:
"Don't we still need specialists?" Yes, for complex AI implementations (AI engineering, ML engineering, building models, designing systems, managing infrastructure), but that differs from "prompt engineering" as currently conceived.
"Isn't letting everyone experiment risky?" Employees are already experimenting with shadow AI tools. 89% say they need AI skills; only 6% are getting meaningful upskilling. The gap fills with unmanaged tool adoption. Channeling that energy into governed experimentation may be preferable.
"What if people make mistakes?" Sandboxing exists precisely so that mistakes in controlled environments become learning opportunities, not catastrophes. The alternative (centralized control that moves too slowly) creates a different risk: competitive irrelevance.
"Will employees actually share what they learn?" Most will, if you create the right incentives. Recognition, attribution, and compensation for sharing create a culture of collaboration.
"Does this work in regulated industries?" Financial services and healthcare organizations are using this approach. Sandboxing with data classification enforcement enables safe experimentation while maintaining compliance.
The Bottom Line
40% of the workforce needs reskilling in three years. Hiring "prompt engineers" or running classroom programs cannot solve this at scale.
Consider systems enabling distributed capability development: AI Budget funding hands-on experimentation, Sandboxing making practice safe, Knowledge Capture enabling organizational learning, Embedded Training teaching in context.
The skill gap is real. The question is whether the solution is creating a specialist role that becomes obsolete as AI capabilities embed in workflows, or making prompt literacy and agentic workflow design as universal as typing or search.
Organizations that centralize this capability (like those that centralized web management with webmasters in the 90s) may discover the same lesson: when a tool becomes infrastructure, the capability must be distributed. The webmaster pattern failed because web literacy needed to be universal. The "prompt engineer" pattern may fail for the same reason.