Context engineering: Improving AI by moving beyond the prompt

“The engineering problem at hand is optimizing the utility of those tokens against the inherent constraints of LLMs in order to consistently achieve a desired outcome,” Anthropic’s blog post says. “Effectively wrangling LLMs often requires thinking in context — in other words: considering the holistic state available to the LLM at any given time and what potential behaviors that state might yield.”

Move over, prompt engineering

The practice of prompt engineering, or writing effective prompts, is still needed, with more than 15,500 such jobs listed on Indeed.com as of Oct. 24. But adding context to LLMs, agents, and other AI tools will become just as important as organizations look for more accurate or specialized results from their deployments, AI experts say.

“In the early days of engineering with LLMs, prompting was the biggest component of AI engineering work, as the majority of use cases outside of everyday chat interactions required prompts optimized for one-shot classification or text generation tasks,” Anthropic’s blog post says. “However, as we move towards engineering more capable agents that operate over multiple turns of inference and longer time horizons, we need strategies for managing the entire context state.”
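Managing context state over multiple turns, as the blog post describes, typically means deciding which parts of a growing conversation still earn their place in the model’s limited context window. A minimal sketch of one common approach is shown below; the function names and the rough four-characters-per-token estimate are illustrative assumptions, not Anthropic’s implementation.

```python
# Hypothetical sketch of managing context state across turns: keep the
# system prompt fixed, then include only the most recent conversation
# turns that fit within a token budget. The ~4 chars/token figure is a
# common rough heuristic, not an exact tokenizer.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def build_context(system_prompt: str, turns: list[str], budget: int) -> list[str]:
    """Return the system prompt plus the newest turns that fit the budget."""
    remaining = budget - estimate_tokens(system_prompt)
    kept: list[str] = []
    for turn in reversed(turns):  # walk from newest to oldest
        cost = estimate_tokens(turn)
        if cost > remaining:
            break  # older turns are dropped once the budget is spent
        kept.append(turn)
        remaining -= cost
    return [system_prompt] + list(reversed(kept))
```

In practice, production agents layer further strategies on top of simple truncation, such as summarizing dropped turns or retrieving relevant history on demand, but the core engineering problem is the same: spending a fixed token budget on the most useful state.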
