Context is King: The Secret to Better AI Results

Learn advanced techniques like Chain-of-Thought, RAG, and context engineering to dramatically improve AI accuracy and get better results from your prompts.

In AI programming, I learned one fundamental truth long ago: context is king.

When I work in a programming tool, I don’t just send AI a question and wait for an answer. I give it clear instructions. I show it relevant files from the project. I explain how different parts of the code work together.

Without that context? AI keeps suggesting solutions that miss the mark and don’t make sense for my project.

With the right context? Suggestions are accurate, useful, relevant.

This has been clear in AI programming for a long time.

But most people don’t apply this principle in their everyday AI use.

We’re Doing This Wrong

As I’ve said many times before, most people treat AI like a vending machine: insert question, get answer.

But that’s not how you get the best results. AI isn’t a vending machine. It’s more like a brilliant intern who knows everything but needs clear direction.

And just like with a real intern, the quality of output depends entirely on the quality of your instructions.

AI is like a huge black box of knowledge. When you write a prompt, you’re actually steering AI into a specific part of that box.

Think about it this way: if you mention a specific expert’s name in your prompt, AI “sails” into the part of its knowledge where information related to that expert is stored. If you mention a different concept or approach, it goes to a completely different area.

Same problem, same AI, but one single keyword in the prompt can mean the difference between an answer that’s useful and one that completely misses the mark.

That’s just how AI works. And actually, people work the same way…

And here’s what the research says about how to leverage this:

1. Context Engineering Beats Prompt Engineering

Everyone talks about “prompt engineering”: finding the perfect way to phrase a question.

But want to know what matters even more?

Context engineering: giving AI the right information, in the right format, with the right tools and the right structure.

It’s not about one magic sentence. It’s about giving AI everything it needs so it simply has to succeed.

Say you’re responding to customer emails. Don’t just tell AI “write a response.” Give it:

- The customer’s original email and any relevant history
- Your company’s tone and style guidelines
- The policies or product details the answer depends on
- An example or two of past replies you were happy with

Same AI, 10x better results.
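
Here’s a minimal sketch of what that looks like in practice: all it does is assemble the context into one structured prompt. The field names and example texts are my own illustrative placeholders, not a fixed recipe.

```python
# A minimal context-engineering sketch: instead of "write a response,"
# we bundle everything the model needs into one structured prompt.
# All names and example strings below are illustrative placeholders.

def build_support_prompt(customer_email, tone_guide, policies, good_examples):
    """Combine the task and its context into a single, clearly structured prompt."""
    return f"""You are a customer support agent for our company.

## Tone guidelines
{tone_guide}

## Relevant policies
{policies}

## Examples of replies we liked
{good_examples}

## Customer email
{customer_email}

Write a reply that follows the tone guidelines and policies above."""

prompt = build_support_prompt(
    customer_email="My order #123 arrived damaged. What now?",
    tone_guide="Warm, concise, no corporate jargon.",
    policies="Damaged items: offer a free replacement or a full refund.",
    good_examples="Hi Ana, so sorry about that! Here's what we'll do...",
)
print(prompt)  # send this to whatever model or API you normally use
```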

2. Force AI to Show Its Work

Remember math class, when teachers required you to show your work? Turns out, the same principle is crucial for AI too.

Researchers discovered something called “Chain-of-Thought.” Instead of letting AI jump to conclusions, you force it to think through the problem step by step.

The accuracy improvement is stunning, often 30-50% better on complex tasks.

But even better is “Tree-of-Thoughts.” Instead of one linear chain of reasoning, AI explores multiple paths simultaneously. It can evaluate options, backtrack when needed, and pursue the most promising directions.

This is similar to how you solve hard problems yourself: you don’t blindly follow one train of thought, you explore different angles.

And if you haven’t been doing this, well, now you know it works better this way. 🙂
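
To make both patterns concrete, here’s a rough sketch of them as prompt wrappers. The instruction wording is my own, and note that the full Tree-of-Thoughts method uses multiple model calls with explicit search and backtracking; the single-prompt version below is the lightweight approximation I mean here.

```python
# Two illustrative prompt wrappers. The phrasing is just one way to put it;
# adapt freely to your own tasks.

def chain_of_thought(question):
    """Ask the model to reason step by step before committing to an answer."""
    return (
        f"{question}\n\n"
        "Think through this step by step. Show each step of your "
        "reasoning, then give your final answer on the last line."
    )

def tree_of_thoughts(question, branches=3):
    """Ask the model to explore several reasoning paths, then pick the best one."""
    return (
        f"{question}\n\n"
        f"Propose {branches} different ways to approach this. For each, "
        "briefly reason through where it leads and where it breaks down. "
        "Then pick the most promising path and finish the answer with it."
    )

print(chain_of_thought(
    "A train leaves at 9:40 and arrives at 11:15. How long is the trip?"
))
```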

3. Verification Loops Prevent Hallucinations

AI’s biggest weakness? It confidently makes stuff up.

The technical term is “hallucination,” but really it’s just prediction gone wrong. AI doesn’t know when it doesn’t know something, so it fills in the gaps with plausible-sounding nonsense.

The solution? Build verification into your process.

One technique is called Retrieval Augmented Generation (RAG). Before AI generates an answer, it first searches a verified knowledge base for facts, then grounds its response in that information.

Another is Chain-of-Verification (CoVe): AI generates an answer, then asks itself verification questions, and corrects the answer based on what it finds.

In practice, you can use a simpler version: After receiving any factual answer, ask “What parts of this answer should I verify? What could be wrong? Cite sources.”

It’s like having AI fact-check itself.
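
As a sketch, a simplified CoVe-style loop looks something like this. The `ask` function is just a stand-in for whatever model call you use (stubbed here so the sketch runs), and the prompts are illustrative, not canonical.

```python
# A simplified Chain-of-Verification-style loop. `ask(prompt)` is a
# placeholder for a real model call; the stub below only echoes the prompt.

def ask(prompt):
    # Stub: replace with a real call to your model or API of choice.
    return f"[model answer to: {prompt[:60]}...]"

def answer_with_verification(question):
    # 1. Draft an answer.
    draft = ask(question)
    # 2. Have the model question its own draft.
    checks = ask(
        f"Question: {question}\nDraft answer: {draft}\n"
        "List the factual claims in this draft that most need checking, "
        "and note anything that could be wrong. Cite sources where you can."
    )
    # 3. Revise the draft in light of those checks.
    return ask(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Verification notes: {checks}\n"
        "Rewrite the answer, correcting anything the notes flagged."
    )

print(answer_with_verification(
    "When was the first transatlantic telegraph cable laid?"
))
```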

4. Let It Think Out Loud

Here’s something counterintuitive from research: When you force AI to be too concise, accuracy drops.

Researchers tried forcing models to answer with a single word or letter. Performance tanked.

But when they used “Reasoning-First,” meaning they let AI explain its thinking before giving the final answer, accuracy dramatically improved.

The lesson? Don’t rush to the conclusion. The journey is often more valuable than the destination.
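
In practice, you can have it both ways with a small formatting trick: let the model think out loud, but ask it to mark the final answer so you can still extract a concise result. A sketch, where the “FINAL ANSWER:” marker is simply a convention I chose:

```python
# Reasoning-first prompting: don't force a one-word reply; let the model
# explain itself, then pull the short answer from a marked last line.
# "FINAL ANSWER:" is just a convention chosen for this sketch.

def reasoning_first(question):
    return (
        f"{question}\n\n"
        "Explain your reasoning first. Then, on the last line, write "
        "'FINAL ANSWER:' followed by the answer in a few words."
    )

def extract_final_answer(model_output):
    """Pull the short answer out of the model's full reasoning."""
    for line in reversed(model_output.splitlines()):
        if line.strip().upper().startswith("FINAL ANSWER:"):
            return line.split(":", 1)[1].strip()
    return model_output.strip()  # fall back to the whole output

sample = (
    "The trip spans 9:40 to 11:15, which is 95 minutes.\n"
    "FINAL ANSWER: 1 hour 35 minutes"
)
print(extract_final_answer(sample))  # -> "1 hour 35 minutes"
```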

The Meta-Skill: Becoming a Context Engineer

All of this points to a bigger shift happening right now.

Five years ago, knowing how to use Google effectively was a valuable skill. Today, knowing how to communicate with AI is becoming just as important.

We’re moving from “information retrieval” to “intelligence collaboration.”

And the people who master this, who learn to give AI the right context, structure, and verification loops, will have a massive advantage over those who still treat AI like a search engine.

How I Actually Use This

Let me show you some examples:

For research and facts: I use RAG-style prompts. I paste relevant documents and say “Based only on the information provided, tell me…”

For complex analysis: I use Chain-of-Thought. “Analyze this step by step: First [X], then [Y], then give me your conclusion with reasoning.”

For creative work: I use Tree-of-Thoughts. “Generate three completely different approaches to this. Explain the trade-offs of each.”

Always: I add verification. “What are potential weaknesses in this response? What should I double-check? Cite sources.”
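
If it helps, here’s how I’d bundle those four patterns into reusable templates. The exact wording is mine, so tweak it to taste:

```python
# The four patterns above as reusable templates. Wording is illustrative;
# fill the {} fields with your own question and material.

TEMPLATES = {
    "research": (
        "Based only on the information provided below, tell me: {question}\n\n"
        "--- provided documents ---\n{documents}"
    ),
    "analysis": (
        "Analyze this step by step: first {x}, then {y}, "
        "then give me your conclusion with reasoning.\n\n{question}"
    ),
    "creative": (
        "Generate three completely different approaches to this. "
        "Explain the trade-offs of each.\n\n{question}"
    ),
    "verify": (
        "What are potential weaknesses in this response? "
        "What should I double-check? Cite sources.\n\n{response}"
    ),
}

prompt = TEMPLATES["research"].format(
    question="what were Q3 revenues?",
    documents="(paste the relevant report here)",
)
print(prompt)
```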

I developed this approach through trial and error. The result? AI has gone from “occasionally helpful” to “essential daily tool.”

Where This Is Going

We’re learning to collaborate with a new form of intelligence.

And like any collaboration, success depends on communication. The better your communication, the better your results.

Right now, most people are at the “tourist phrase book” level of AI communication. They know enough for basic answers, but nothing more.

The opportunity is in going deeper, in learning the structure, the techniques, and the verification loops. Though I’ve noticed that every AI provider is gradually building these techniques into what happens automatically when you ask AI something.

This is especially noticeable in GPT-5, where the biggest advancement was precisely this kind of extra thinking: “What did the user actually mean when they wrote this prompt?”

Ask yourself: what do I need to provide for AI to return the best possible answer?

And once you master that, everything changes.

Talk next week, Primož


Try this: This week, don’t just fire off your question. Use this structure: (1) Give detailed context, (2) Ask AI to break down its reasoning step by step, (3) Have it identify potential weaknesses in its own answer, (4) Let it correct the answer based on that self-critique. Compare the results and thank me later. 😊
