Recently, I needed to write a recommendation for a drawing tutorial. This small task brought me face to face with a fundamental tension: a good recommendation must be honest, presenting both strengths and limitations. Any recommendation that’s nothing but praise is a form of deception.
When faced with this situation, most people might simply tell the AI: “Write a sincere recommendation.”
This is a mistake.
You’ll likely get text that “looks sincere,” filled with just-right praise and harmless flaws. The result is mediocre garbage, because this approach misunderstands the nature of AI from the very beginning.
We’re accustomed to treating AI as a clever executor, an obedient servant. We apply the same “command-response” model we’ve used with computers for decades. But that model is the least effective way to collaborate with a genuine intelligence.
I used a different approach, in two steps.
First, synchronize context.
I didn’t have the AI start writing immediately. I first had it gather all the objective facts about the book: strengths, weaknesses, content, price, author background, even negative reviews online. I treated it as a research assistant, not an author.
The purpose of this step wasn’t to make it “understand the situation,” but to anchor us both on the same information foundation. Before any effective discussion begins, participants must share context. You wouldn’t ask someone who knows nothing about a project to make suggestions directly—you’d send them the project documentation first.
The same applies to AI.
Second, externalize thinking.
With shared context established, I gave the AI all of my own thinking: my analysis tables, target audience, desired tone, everything.
On the surface, this appears to be providing detailed instructions, but that’s not the essence. The essence is thoroughly externalizing my own thinking process. To help the AI understand my intent, I was forced to organize the vague ideas and scattered judgments in my mind into clear, logical language beforehand.
In this process, what’s most valuable isn’t what the AI ultimately writes; it’s that, in order to help the AI understand me, I was forced to first understand myself thoroughly.
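The two steps above can be sketched as plain prompt-building helpers. This is a minimal illustration of the workflow, not the author’s actual prompts; the book details, wording, and function names below are my own hypothetical examples:

```python
# Sketch of the two-step workflow: (1) synchronize context, (2) externalize thinking.
# All prompt text and sample data here are hypothetical illustrations.

def build_research_prompt(book_title: str) -> str:
    """Step 1: treat the AI as a research assistant, not an author."""
    return (
        f"Gather objective facts about '{book_title}': strengths, weaknesses, "
        "content overview, price, author background, and negative reviews. "
        "Do not write the recommendation yet; just report the facts."
    )

def build_drafting_prompt(facts: str, my_analysis: str, audience: str, tone: str) -> str:
    """Step 2: hand over the shared context plus your own externalized thinking."""
    return (
        "Using the shared facts below, draft a recommendation.\n\n"
        f"Facts:\n{facts}\n\n"
        f"My analysis (including the flaws I want acknowledged):\n{my_analysis}\n\n"
        f"Target audience: {audience}\n"
        f"Desired tone: {tone}"
    )

# Usage: the factual output of step 1, plus your own notes, feeds step 2.
research = build_research_prompt("Example Drawing Tutorial")
draft_request = build_drafting_prompt(
    facts="Covers perspective well; weak on color theory; mixed reviews on pacing.",
    my_analysis="Best for self-taught beginners; be honest about the thin color chapters.",
    audience="hobbyists starting out",
    tone="honest, warm, specific",
)
```

The point of separating the two prompts is exactly the essay’s point: writing the second prompt forces you to put your analysis, audience, and tone into explicit words before the AI writes anything.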
The draft the AI subsequently produced was high quality, which was no surprise. An assistant that has all the information, clearly understands the goal, and has seen your complete thought process can naturally write a decent first draft. I selected, revised, and arrived at a satisfying final product.
This reveals a deeper pattern.
AI’s true value perhaps isn’t its ability to think independently, but its ability to amplify and accelerate our thinking. It’s a perfect sounding board. You throw a rough idea at it, and it reflects the idea back in a more structured form. And to get a more precise echo, you’re forced to polish the idea you throw until it’s clearer and stronger. This is a virtuous loop for thinking.
So we’re entering a new era: the quality of thinking has become unprecedentedly important.
In the past, many jobs tested execution efficiency. In the future, when you have an AI assistant with virtually unlimited execution capacity, the quality of your work will depend almost entirely on your ability to ask questions, define problems, and decompose problems. The depth of your thinking is the ceiling of your capability.
From this perspective, the term “Prompt Engineering” itself is highly misleading. It suggests there exists some mysterious incantation for manipulating the AI. There isn’t. The only effective incantation is clear thinking itself.
Therefore, stop asking: “How should I give commands to the AI?”
You should ask: “How can I think better, so that AI becomes an extension of my thinking?”