As a CTO who has spent decades working with software engineers across organizations like The New York Times, The Wall Street Journal, and now as President at Flatiron Software and Snapshot AI, I understand skepticism toward new disciplines that emerge at the intersection of existing specialties. The term “prompt engineering” has generated particular debate, with many questioning whether crafting inputs for large language models deserves the “engineering” designation.
After spending considerable time working with engineering teams integrating AI into their workflows, I’m convinced: prompt engineering is indeed engineering in the truest sense. Here’s why even the most skeptical software engineers should reconsider their position.
Engineering Is About Solving Problems Within Constraints
At its core, engineering is the application of scientific knowledge to solve practical problems within system constraints. Whether you’re working with steel beams, computer hardware, or software architectures, the fundamental task remains the same: creating reliable solutions that meet specifications with available resources.
Prompt engineering fits this definition perfectly. It requires:
- Understanding the underlying system: Just as software engineers must understand memory allocation, computational complexity, and data structures, prompt engineers must deeply comprehend how LLMs process and generate text, their training methodologies, and their architectural limitations.
- Working within technical constraints: Engineers optimize for constraints like processing power, network latency, and memory usage. Similarly, prompt engineers must navigate context windows, token limitations, model biases, and inference time—engineering solutions within these technical boundaries.
- Applying systematic methodologies: Like test-driven development or continuous integration in software engineering, prompt engineering has developed methodologies like chain-of-thought prompting, retrieval-augmented generation, and few-shot learning techniques.
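One of these methodologies, few-shot prompting with an optional chain-of-thought cue, can be sketched as a small prompt-assembly function. This is an illustrative sketch, not any particular framework's API; the function name, example pairs, and wording are all invented for demonstration.

```python
# Minimal sketch of a few-shot prompt builder with an optional
# chain-of-thought instruction. All names and examples are illustrative.

def build_few_shot_prompt(task, examples, query, chain_of_thought=False):
    """Assemble a prompt from labeled examples plus a new query."""
    lines = [task]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    if chain_of_thought:
        lines.append("Think step by step before answering.")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great battery life!", "positive"),
     ("Arrived broken.", "negative")],
    "Works exactly as described.",
    chain_of_thought=True,
)
```

The point is that prompts, like code, can be composed from reusable parts with well-defined parameters rather than written ad hoc each time.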
Prompt Engineering in the Wild: A 13-Year-Old’s Intuitive Approach
The engineering mindset of prompt engineering is emerging naturally even among digital natives. Recently, my 13-year-old son Fitz demonstrated this while working with ChatGPT to create an emoji of a pronghorn.
His process was methodical and iterative: he started by requesting a basic pronghorn emoji, then refined his requirements to specify a side profile of an adult pronghorn. Noticing that the initial result looked more like a Google-style emoji, he adjusted his prompt to request an Apple-style emoji with the glossy finish characteristic of their design language. Through several more iterations, he continued tweaking his requirements until the output matched his vision.
Below are the 13 iterative prompts that Fitz issued to ChatGPT to systematically refine the pronghorn emoji design. Each image represents a step in his engineering process as he methodically adjusted parameters like perspective, style, glossiness, and anatomical details until he achieved the precise Apple-style emoji he envisioned. He became increasingly specific with his requirements based on the outputs from previous iterations.
What truly surprised and impressed me came after Fitz completed his final iteration. Rather than moving on to a new project, he took a step that demonstrated an intuitive grasp of core engineering principles. He asked ChatGPT to analyze the pattern of his successful prompts and create a formal description of his emoji style preferences that the AI had learned through their interactions.
Taking it further, he saved this documentation in a new project’s custom instructions with a specific trigger phrase: “When I say ‘qqcreateappleemoji’, follow these style instructions for any emoji I request.” In essence, he created a reusable module with consistent parameters—a parameterized function for efficient future use.
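Expressed in code, Fitz's trigger phrase is exactly a parameterized function: a fixed style specification plus one variable, the subject. The style text below is invented for illustration; his actual saved instructions were generated by ChatGPT from his prompt history.

```python
# Fitz's "qqcreateappleemoji" trigger, sketched as a parameterized
# function: the saved style spec is a constant, and each new emoji
# request reuses it. The style wording here is illustrative.

APPLE_EMOJI_STYLE = (
    "Apple-style emoji: glossy finish, soft gradients, side profile, "
    "clean outline, transparent background."
)

def create_apple_emoji_prompt(subject):
    """Reusable 'module' combining a fixed style spec with a subject."""
    return f"Create an emoji of {subject}. {APPLE_EMOJI_STYLE}"

prompt = create_apple_emoji_prompt("an adult pronghorn")
```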
Without any formal engineering training, my 13-year-old had independently discovered and applied sophisticated engineering principles: abstraction, pattern recognition, documentation, and reusability. He had systematically refined requirements, created specifications, and built a component with a defined interface. These are classic software engineering practices that he applied intuitively to prompt engineering, demonstrating how these disciplines share fundamental similarities in approach and methodology.
Why Engineers Are Often Skeptical
I understand the skepticism. Traditional software engineering is rooted in deterministic systems. We write code, compile it, and expect predictable outcomes. LLMs, on the other hand, are probabilistic. Their responses can vary, even with identical inputs. This inherent variability feels unsettling to engineers accustomed to precise control.
Furthermore, the initial perception of prompt engineering might seem deceptively simple: just ask the model a question, right? However, as anyone who has worked extensively with LLMs knows, it’s far more nuanced than that.
The Technical Depth Is Real
The misconception that prompt engineering is “just writing instructions” stems from unfamiliarity with its technical complexities. Consider these parallels:
| Software Engineering | Prompt Engineering |
| --- | --- |
| Optimizing algorithm complexity | Optimizing prompt length and structure for inference performance |
| Managing memory allocation | Managing context window utilization |
| Preventing race conditions | Preventing hallucinations and semantic contradictions |
| Isolating dependencies | Creating modular prompting systems with clear separation of concerns |
| Building abstraction layers | Designing reusable prompt templates and instruction frameworks |
Anyone who has built production-grade AI systems knows that naive prompting quickly hits limitations. Effective prompt engineering requires understanding transformer architectures, attention mechanisms, and how these models actually process sequences—much like how deep software engineering requires understanding processors, memory hierarchies, and operating systems.
The Rigor of Prompt Engineering
Here’s why prompt engineering deserves the “engineering” label:
- Systematic Approach: Effective prompt engineering isn’t about haphazardly throwing words at a model. It requires a systematic approach of experimentation, iteration, and analysis. We formulate hypotheses, test them rigorously, document our findings, and refine techniques based on empirical results. This is the scientific method in action, and it’s at the heart of all engineering disciplines.
- Problem Decomposition: Just like traditional engineering, prompt engineering involves breaking down complex problems into smaller, manageable components. We decompose the desired output into a series of prompts, each designed to elicit a specific aspect of the response. This requires careful planning and a deep understanding of the LLM’s capabilities and limitations.
- Optimization: Prompt engineering is fundamentally an optimization problem. We strive to find the most efficient and effective way to achieve a desired outcome with the fewest possible tokens. This involves considering factors such as prompt length, complexity, and computational cost.
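Problem decomposition in this sense often takes the shape of a prompt chain, where each step's output becomes context for the next. The sketch below shows the structure only; `run_model` is a stand-in for a real LLM API call and simply echoes its input, and the step wording is invented.

```python
# Sketch of problem decomposition as a prompt chain. run_model is a
# placeholder for an actual LLM call; here it just echoes, so the
# chaining structure (not model quality) is what's illustrated.

def run_model(prompt):
    # Stand-in for a real LLM API call.
    return f"<response to: {prompt[:40]}...>"

def run_chain(task, steps):
    """Decompose a task into ordered prompts, threading context through."""
    context = task
    for step in steps:
        context = run_model(f"{step}\n\nContext:\n{context}")
    return context

result = run_chain(
    "Summarize this support ticket and draft a reply.",
    ["Extract the customer's core complaint.",
     "List the facts needed to resolve it.",
     "Draft a concise, polite reply using those facts."],
)
```

Each step elicits one well-scoped aspect of the response, which is exactly the decomposition discipline described above.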
It Produces Measurable, Reproducible Results
A defining characteristic of engineering disciplines is their focus on measurable outcomes and reproducible results. Prompt engineering delivers on both fronts:
- Techniques can be systematically tested against benchmarks
- Results can be quantified through precision, recall, accuracy, and other metrics
- Methodologies can be documented and replicated by others
- Solutions can be version-controlled and regression-tested
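Version control and regression testing of prompts can look very much like any other CI check: run the current prompt against a labeled benchmark and fail the build if a metric drops below a stored baseline. Everything below is a sketch under stated assumptions; `fake_model`, the cases, and the baseline value are invented stand-ins for a real model call and a real evaluation set.

```python
# Sketch of a prompt regression test: evaluate a prompt version against
# a small labeled benchmark and assert the metric hasn't regressed.

def fake_model(prompt, text):
    # Placeholder for a real LLM call; classifies by a trivial rule
    # so the harness itself is runnable and testable.
    return "positive" if "good" in text else "negative"

def evaluate(prompt, cases):
    """Return accuracy of the prompt over labeled (text, label) cases."""
    correct = sum(fake_model(prompt, t) == label for t, label in cases)
    return correct / len(cases)

BASELINE_ACCURACY = 0.75  # stored from the last accepted prompt version

cases = [("good product", "positive"),
         ("good value", "positive"),
         ("terrible", "negative"),
         ("broken on arrival", "negative")]

accuracy = evaluate("Classify sentiment:", cases)
assert accuracy >= BASELINE_ACCURACY  # fail the build on regression
```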
When an organization implements a prompt engineering system that reduces hallucinations by 57%, cuts response latency by 200ms, and increases successful task completion by 35%, that’s not just writing—that’s engineering.
Tooling and Automation
As prompt engineering matures, we’re seeing the development of specialized tools and automation techniques. We’re building frameworks for:
- Prompt management
- Version control
- A/B testing
- Performance evaluation
- Automated prompt optimization
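A/B testing of prompt variants, for instance, needs the same plumbing as any feature experiment: stable user-to-variant assignment and per-variant success tracking. The sketch below is illustrative, not a real framework; the assignment scheme and the simulated success signal are assumptions.

```python
# Sketch of prompt A/B testing: route users between two prompt variants
# with stable hashing, then track success rates per variant.
import hashlib
import random

def assign_variant(user_id, variants=("A", "B")):
    """Stable assignment: the same user always sees the same variant."""
    digest = hashlib.md5(user_id.encode()).digest()
    return variants[digest[0] % len(variants)]

results = {"A": [], "B": []}

def record(user_id, success):
    results[assign_variant(user_id)].append(success)

# Simulated traffic; in production, `success` would come from a
# task-completion or user-rating signal.
random.seed(0)
for uid in range(200):
    record(f"user-{uid}", random.random() < 0.8)

rates = {v: sum(r) / len(r) for v, r in results.items() if r}
```

The same harness extends naturally to comparing prompt versions pulled from version control, closing the loop with the tooling listed above.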
These tools enable us to apply engineering best practices to the development and deployment of LLM-based applications, further cementing prompt engineering as a legitimate engineering discipline.
Domain Expertise Requirements
Effective prompt engineering often requires a deep understanding of the specific domain for which the LLM is being used. For example, a prompt engineer working on a medical application needs to have a solid grasp of medical terminology and concepts. This domain expertise is crucial for crafting prompts that are accurate, reliable, and safe—similar to how domain-specific software engineering requires specialized knowledge.
Cross-Disciplinary Nature
Prompt engineering also intersects with multiple engineering domains, from software architecture to user experience design. Good prompts require a clear understanding of end-user objectives, the software context in which they’re embedded, and the downstream technical implications. This cross-disciplinary approach aligns perfectly with modern software engineering, where solutions frequently integrate multiple technical and human-centered considerations.
The Field Has Developed Its Own Specialized Knowledge
Every legitimate engineering discipline generates specialized knowledge over time. Prompt engineering has rapidly developed its own body of knowledge:
- Concepts like chain-of-thought reasoning, constitutional AI, RLHF tuning strategies
- Tools for prompt version control, A/B testing frameworks, and prompt evaluation systems
- Design patterns like the ReAct framework, self-critique mechanisms, and multi-agent approaches
- Research literature identifying which techniques work for specific use cases and model architectures
The systematic development of this knowledge base is characteristic of a true engineering discipline.
It Requires Deep System Understanding
Software engineers know that writing code is the easy part—understanding the underlying systems is the challenge. The same applies to prompt engineering. Effective prompt engineers must understand:
- How tokenizers work and their impact on model interpretation
- The relationship between training data and output quality
- How models process different languages and specialized vocabularies
- The limitations of transformer-based systems and vector embeddings
- The connection between instruction formats and inference performance
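Context-window budgeting is a concrete example of that system understanding. Production systems count tokens with the model's own tokenizer; the sketch below substitutes a crude words-based estimate, since exact counts vary by tokenizer, and the heuristic constant and function names are assumptions for illustration.

```python
# Sketch of context-window budgeting. Real systems use the model's own
# tokenizer (e.g. a BPE encoder); a rough words-times-constant estimate
# stands in here, since exact token counts vary by tokenizer.

def estimate_tokens(text):
    # Crude heuristic: English text averages roughly 1.3 tokens per word.
    return int(len(text.split()) * 1.3) + 1

def fit_to_window(system_prompt, documents, max_tokens=4096, reserve=512):
    """Pack as many documents as fit, reserving room for the reply."""
    budget = max_tokens - reserve - estimate_tokens(system_prompt)
    kept = []
    for doc in documents:
        cost = estimate_tokens(doc)
        if cost > budget:
            break
        kept.append(doc)
        budget -= cost
    return kept

docs = ["short note", "a much longer document " * 3000, "tail"]
kept = fit_to_window("You are a helpful assistant.", docs)
```

Deciding what to drop, truncate, or summarize when the budget runs out is precisely the kind of constraint-driven trade-off engineers make everywhere else in the stack.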
This is far from simple “writing” work—it’s systematic problem-solving based on deep technical understanding.
In a podcast interview, Sam Altman, CEO of OpenAI, discussed the new profession of prompt engineering.
Ethical Considerations
Prompt engineers also grapple with ethical considerations. We need to be mindful of potential biases in LLMs and design prompts that mitigate these biases. We also need to consider the potential for misuse of LLMs and develop techniques for detecting and preventing harmful outputs. This ethical dimension adds another layer of complexity to prompt engineering, similar to how software engineers must consider security, privacy, and accessibility in their work.
Strategic Business Impact
At Flatiron Software and Snapshot AI, we’ve witnessed firsthand how prompt engineering directly impacts strategic business outcomes—accelerating development cycles, reducing operational overhead, and significantly improving user experience. Prompt engineering transforms engineering metrics into actionable insights, driving strategic decisions and offering competitive advantages that traditional methods alone cannot deliver.
The Higher-Order Engineering Challenge
Perhaps most compelling is that prompt engineering represents a higher-order engineering challenge: creating reliable interfaces between human intent and machine capability. This meta-engineering task requires understanding both computational and human cognitive systems.
Similarly, software engineers don’t just understand computers—they create interfaces between human needs and computational capabilities. Prompt engineering extends this tradition, focusing on the natural language interface layer of this stack.
The Future of Prompt Engineering
Prompt engineering is not a passing fad. As LLMs become more powerful and ubiquitous, the demand for skilled prompt engineers will only continue to grow. This is a field that offers exciting opportunities for engineers who are willing to embrace new challenges and push the boundaries of what’s possible with AI.
It’s important to acknowledge that prompt engineering is a different kind of engineering. It involves working with systems that are not fully understood and that exhibit emergent behavior. This requires a willingness to embrace uncertainty and to adapt to the rapidly evolving capabilities of LLMs.
However, the core principles of engineering – systematic problem-solving, optimization, and rigorous testing – are just as relevant to prompt engineering as they are to any other engineering discipline.
Pioneers of Prompt Engineering
The evolution of prompt engineering as a discipline has been shaped by innovative thinkers who recognized its potential early on. Among these pioneers, Richard Socher stands out as a foundational figure. Often referred to as “the father of prompt engineering,” Richard’s contributions to natural language processing and AI have been transformative.
As the founder and CEO of you.com and former Chief Scientist at Salesforce, Richard’s work bringing neural networks into NLP laid crucial groundwork for modern prompt engineering techniques. His research on word vectors and contextual representations helped create the foundation upon which today’s sophisticated prompt engineering methodologies are built.
Reflecting on the field’s evolution, Richard observes: “We evolved from getting rejected and ridiculed for the idea that a single neural net can be prompted with any question to being told that it was always obvious within the last 7 years. Moving forward, we will operate at much higher levels of abstraction. Unique ideas matter. You can just do things in software now.”
Also deeply influential is Bryan McCann, co-founder and CTO of you.com, whose pioneering work has significantly advanced the field. While at Salesforce, Bryan co-authored seminal research introducing contextualized word vectors, enabling more nuanced understanding of words based on their surrounding context. His groundbreaking work on multi-task learning demonstrated that a single model could perform various tasks by interpreting different prompts—a concept that has become central to modern prompt engineering.
Bryan’s research showed that language models could interpret prompts as instructions, allowing them to generate appropriate responses across diverse tasks. At you.com, he led the integration of Large Language Models with real-time internet access, creating one of the first consumer-facing search platforms to do so. This innovation addressed critical challenges related to the timeliness and accuracy of information provided by LLMs.
What makes Richard and Bryan’s contributions particularly significant is how they bridged theoretical AI research with practical applications. Long before prompt engineering entered the mainstream discourse after ChatGPT’s release in late 2022, they were developing techniques that leveraged specific inputs to guide AI systems toward desired outputs—the core principle of what we now call prompt engineering.
This connection between foundational research and practical application exemplifies why prompt engineering is genuine engineering: it applies scientific principles to solve real-world problems systematically. Their work demonstrates how prompt engineering evolved not as an accident or afterthought but as a deliberate application of rigorous techniques to address complex challenges in AI interaction.
Conclusion: It’s Not “Either/Or”
The strongest software teams I’ve worked with recognize that prompt engineering isn’t a replacement for traditional software engineering—it’s a complement and extension of it. The most effective systems combine both disciplines:
- Software engineering provides the infrastructure, performance optimizations, and integration capabilities
- Prompt engineering provides the interface layer that translates human intent into machine execution
As AI becomes more integral to software systems, the line between these disciplines will blur further. The skepticism toward prompt engineering mirrors historical reactions to compiler design, GUI programming, and even object-oriented programming when they first emerged—initially dismissed by some as “not real engineering,” only to become fundamental to the discipline.
The question isn’t whether prompt engineering qualifies as legitimate engineering—it’s how quickly you’re willing to embrace it as part of your professional toolkit. The next time someone questions whether prompt engineering is “real engineering,” ask them to build a production-grade AI system that handles edge cases, generalizes effectively across domains, and maintains reliability at scale. They’ll quickly discover that crafting effective prompts involves far more engineering than they initially assumed.
Rajiv Pant is President at Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation while serving as a trusted advisor to enterprise clients on their AI transformation journeys. He is also an early investor in and senior advisor to you.com, an AI-powered search engine founded by Richard Socher and Bryan McCann. With a background spanning CTO roles at The Wall Street Journal, The New York Times, and other major media organizations, Rajiv brings deep expertise in language AI technology leadership and digital transformation. He writes about artificial intelligence, leadership, and the intersection of technology and humanity.