An LLM is but a Character in Context: 3 Tips to Enhance your Role-Based Prompt Strategy

“All the world’s a stage” — William Shakespeare
Introduction
Among the various prompting paradigms and techniques becoming integral to the ever-evolving field of “Prompt Engineering,” Role-Based Prompting stands out as promising because it can significantly enhance both the style and the accuracy of responses from a Large Language Model. If this is the first time you’ve encountered the term, Role-Based Prompting mainly involves adding a detailed persona description or role narrative to the prompt provided to the LLM. This description adds useful, vivid context to the core instruction or task. Role-Based Prompts can be specified in many nuanced ways and can involve occupations like “doctor” or “physics teacher,” interpersonal roles like “parent,” fictional characters from popular books like “Frodo” from “The Lord of the Rings,” historical figures like “Einstein,” or even non-human entities like “AI assistant” or “Linux terminal.”
The accuracy, tone, and style of a response from an LLM are sensitive to the specified role; even minor changes to the role description can significantly alter the response. These role-based descriptions can be used to assign a role to models:
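As a concrete illustration, here is a minimal sketch of how a role description might be attached to a task, assuming an OpenAI-style chat message format where a “system” message carries the persona. The function name, persona text, and task below are illustrative, not from any particular library:

```python
# A minimal sketch of role-based prompting, assuming an OpenAI-style
# chat message format. The persona and task strings are illustrative.

def build_role_prompt(role_description: str, task: str) -> list[dict]:
    """Prepend a persona description as a system message before the user task."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": task},
    ]

messages = build_role_prompt(
    "You are an experienced physics teacher who explains concepts "
    "with simple, everyday analogies.",
    "Explain why the sky is blue.",
)
# The resulting list is in the shape typically passed as the `messages`
# argument of a chat-completion API call.
```

Changing only the `role_description` string, say from “physics teacher” to “research physicist writing for peer review,” would leave the task untouched while shifting the register and depth of the model’s answer.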