Prompt Engineering for Generative AI Defined


As with "Conversation Design" over the past 5 years, "Prompt Engineering" has produced a great deal of confusion in the context of interacting with ChatGPT, New Bing, Google Bard and other interfaces to Large Language Models (LLMs).

This confusion is evident in the Harvard Business Review article entitled “AI Prompt Engineering Isn’t the Future.”

Prompt engineering is not just putting words together: the words are chosen according to the intended meaning and goals. In linguistics and computational linguistics, this involves not only syntax (word order), but also semantics (word meaning), pragmatics (intention, assumptions, goals, context), sociolinguistics (audience profile) and even psycholinguistics (the audience-author relationship).
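To make this concrete, here is a minimal, purely illustrative sketch in Python. The task, audience and wording are invented for this example and are not a prescribed prompt format; the point is only that each linguistic layer can appear explicitly in a single prompt.

```python
# Illustrative only: the scenario, audience and phrasing are hypothetical,
# chosen to show where each linguistic layer surfaces in a prompt.

prompt = (
    # Pragmatics: state the intention, assumptions and context up front.
    "You are helping a retail bank explain a declined loan application.\n"
    # Sociolinguistics: profile the audience so register and vocabulary fit.
    "The reader is a first-time applicant with no financial background.\n"
    # Psycholinguistics: set the author-audience relationship and tone.
    "Write as a supportive advisor, not as the institution that declined them.\n"
    # Semantics: pin down what key terms must mean in the answer.
    "Define 'credit utilisation' in plain language before using it.\n"
    # Syntax: constrain the surface form of the output.
    "Answer in three short paragraphs of no more than two sentences each."
)

print(prompt)
```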

I absolutely agree with the author that you need to identify, define, delineate, break down, reframe and then constrain the problem and goal. However, you cannot define, delineate and formulate a problem clearly outside of language (our language defines our world, and multilingual people are the most open-minded of all, as you will see from our GlobalLogic colleagues!). Prompt engineering does exactly that: it finds a way to define the problem in as few steps as possible, efficiently, effectively, consistently, predictably and in a reusable, reproducible way.
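As a sketch of what "reusable and reproducible" can mean in practice, the hypothetical template below fixes the structure of the problem definition once and varies only the parameters. The field names and example values are my own assumptions, not something prescribed in the article.

```python
from string import Template

# Hypothetical reusable prompt template: fields and wording are illustrative assumptions.
PROBLEM_TEMPLATE = Template(
    "Goal: $goal\n"
    "Audience: $audience\n"
    "Constraints: $constraints\n"
    "Output format: $output_format\n"
    "Task: $task"
)

def build_prompt(goal: str, audience: str, constraints: str,
                 output_format: str, task: str) -> str:
    """Produce the same prompt structure every time, varying only the inputs."""
    return PROBLEM_TEMPLATE.substitute(
        goal=goal,
        audience=audience,
        constraints=constraints,
        output_format=output_format,
        task=task,
    )

# Example use with invented values.
print(build_prompt(
    goal="Summarise a legal contract for a non-lawyer",
    audience="Small-business owner with no legal training",
    constraints="No legal jargon; flag any clause that needs a lawyer",
    output_format="Bullet list of at most seven points",
    task="Summarise the contract pasted below.",
))
```

Because the structure is fixed and only the inputs change, results become more consistent and comparable across runs and across team members, which is the reusable, reproducible part of the claim.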

That is why prompt engineering is also tightly coupled with domain ontology mapping, i.e., the delineation of the problem space in a semantic and often visual way.
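A minimal sketch of what that coupling can look like, assuming a toy ontology: the ontology delineates the problem space, and the prompt is generated from it rather than written ad hoc. The domain, concepts and relations below are invented for illustration.

```python
# Toy domain ontology for a hypothetical insurance-claims domain.
# The content is illustrative, not a real ontology.
ontology = {
    "Claim": {"is_a": "Document", "has": ["Claimant", "Policy", "Incident"]},
    "Claimant": {"is_a": "Person", "has": ["PolicyNumber"]},
    "Incident": {"is_a": "Event", "has": ["Date", "Location", "DamageEstimate"]},
}

def ontology_to_prompt(ontology: dict, task: str) -> str:
    """Turn the ontology into explicit vocabulary and scope constraints for the model."""
    lines = ["Use only the following domain concepts and relations:"]
    for concept, rels in ontology.items():
        lines.append(
            f"- {concept} is a kind of {rels['is_a']} and has: {', '.join(rels['has'])}."
        )
    lines.append(f"Task: {task}")
    return "\n".join(lines)

print(ontology_to_prompt(ontology, "Extract these fields from the claim text below."))
```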

There is no "linguistics" without meaning. What the author (who is not a linguist) treats as two separate activities, wording the prompt and formulating the problem, is in fact one and the same.

This is why I think the traditional term "language engineering," in use for the past 40 years, is the more appropriate and durable name, and quite possibly the one that will outlive both me and the HBR author!
