Anthropic Console, the developer platform from Anthropic, has released a new feature that helps developers improve their prompts for higher-quality outputs. The prompt improver applies advanced prompt engineering techniques and chain-of-thought reasoning to detect problems and refine the prompt. It also corrects grammatical errors and prefills the response with necessary information to improve output accuracy. According to Anthropic, the prompt improver delivered a 30% increase in accuracy on a multilabel classification test and 100% word-count adherence on a summarization task.

Developers can also add input-output examples, which are transformed into a standardized XML format for better clarity. If developers are unable to craft examples themselves, Claude, Anthropic's flagship model, can generate synthetic ones.

Additionally, Anthropic has introduced a prompt evaluator that allows developers to benchmark prompts and grade outputs on a five-point scale. The feature has already been tested with one of Anthropic's customers, Kapa.ai, where it streamlined the migration to Claude 3.5 Sonnet.

The announcement comes shortly after Dario Amodei, CEO of Anthropic, revealed that Claude 3.5 Opus is on the cards. Some speculate that this feature hints at integrating reasoning capabilities into the flagship Claude model.
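To illustrate what wrapping input-output examples in XML might look like, here is a minimal sketch. The exact schema the Console uses is not described in the announcement, so the tag names below (`<example>`, `<input>`, `<ideal_output>`) are assumptions chosen for demonstration, not Anthropic's actual format.

```python
def format_examples_as_xml(examples):
    """Wrap (input, output) example pairs in XML tags for inclusion in a prompt.

    Tag names here are illustrative assumptions, not Anthropic's schema.
    """
    parts = []
    for inp, out in examples:
        parts.append(
            "<example>\n"
            f"  <input>{inp}</input>\n"
            f"  <ideal_output>{out}</ideal_output>\n"
            "</example>"
        )
    return "\n".join(parts)

# Example pairs for a sentiment-classification prompt
examples = [
    ("The movie was thrilling from start to finish.", "positive"),
    ("I want a refund; the product broke in a day.", "negative"),
]
print(format_examples_as_xml(examples))
```

Structuring examples this way gives the model unambiguous boundaries between each example and between the input and its expected output, which is the clarity benefit the standardized format is meant to provide.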