
Are You Using LLMs Correctly?


By Marc Boudria, Chief Innovation Officer at BetterEngineer. Edited by Marina Fontoura.

Let’s start with a blunt truth: Most people are using LLMs wrong.

They’re tossing in a sentence and expecting magic. And while LLMs can do some pretty impressive things with a half-baked prompt, you’re leaving 90% of the value on the table if you treat AI like a vending machine.

LLMs aren’t oracles. They’re not "the answer." They’re collaborators. Tools for shaping, testing, and evolving your thinking—if you know how to use them.

The Secret Ingredient is a Detailed Prompt 

At the core of good LLM use is a good prompt. But what makes a prompt good? One word: exposition. Prompting is not asking a question; it's setting a scene.

A sentence like “Give me marketing ideas for my product” will give you something. But it’s going to be surface-level, generic, and inauthentic.

Compare that to:

"I run a B2B SaaS company focused on supply chain traceability for mid-sized food manufacturers in the U.S. Our biggest challenge is standing out from legacy ERPs with massive sales teams. Give me three unconventional but tactically sound marketing ideas that highlight speed of implementation and buyer trust."

Now the model knows your audience, your product, your competitive pressure, and your need. You didn’t ask for more, you gave more. And that changed everything.
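
If you build prompts in code, the same principle applies: treat the exposition as explicit fields rather than an afterthought. A minimal sketch in Python; the field names are illustrative, not a standard:

  # A good prompt is exposition: audience, product, pressure, and ask,
  # assembled explicitly instead of a one-line question.
  def build_prompt(business, audience, challenge, ask):
      return (
          f"I run {business} serving {audience}. "
          f"Our biggest challenge is {challenge}. "
          f"{ask}"
      )

  prompt = build_prompt(
      business="a B2B SaaS company focused on supply chain traceability",
      audience="mid-sized food manufacturers in the U.S.",
      challenge="standing out from legacy ERPs with massive sales teams",
      ask=("Give me three unconventional but tactically sound marketing "
           "ideas that highlight speed of implementation and buyer trust."),
  )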

Prompts That Are Too Vague or Too Rigid Weaken Value

Let’s take this further. The real danger with poor prompting isn’t just bland output; it’s misunderstanding how LLMs work.

Just like machine learning models can be underfit (too vague) or overfit (too rigid), prompts can fall into the same traps:

  • Underfit prompts = vague, lack context, produce fluff.
  • Overfit prompts = too restrictive, box the model in, feel brittle or robotic.
  • Ideal prompts = specific enough to anchor the model, flexible enough to let it surprise you.

Prompts should define the playground, not the exact moves.

Try: “Give me three plausible but creative ways to solve this, and explain the tradeoffs between them.”
Not: “Write this email exactly like this, using these five words, in this tone…”

When you strike the balance, the LLM becomes less of a text generator and more of a thinking partner.

Context Matters: Words Don’t Mean the Same Thing Everywhere

Words mean different things in different industries. “Pipeline” means one thing to sales, another to oil & gas, another to CI/CD engineering. LLMs trained on general web corpora won’t automatically grasp your domain's nuance.

This is where even experienced teams fall flat. They assume the machine shares their view of the domain. LLMs don’t know what you mean by “pipeline” unless you tell them.

The Solution: Domain-contextualized Prompts

“As a supply chain analyst in consumer electronics, I'm trying to reduce lead time variability in our Tier 2 component suppliers. What modeling techniques or simulation strategies should I explore?”

Now you’re not just prompting; you’re positioning. If you’re using AI to make business decisions, you can’t afford to leave interpretation to chance.

Even better than defining every term each time? Build a domain-specific knowledge base.

Build a Secure, Context-Rich RAG for Truly Useful AI Outputs

Enter RAG: Retrieval-Augmented Generation, a way to ground the model in a domain-specific knowledge base of your own. This is where you go from prompting to programming context.

If you want consistently useful results from an LLM, you need to provide it with your own internal knowledge: process docs, client FAQs, archived proposals, research, etc.

That is the core idea behind RAG: combine a vectorized search over your private documents with the power of LLM completion.
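
In code, the pattern is small: embed the question, rank your private documents by similarity, and hand the best matches to the model as context. A minimal sketch, assuming embed() returns a vector and llm() returns text; both are stand-ins for whichever provider you use:

  import numpy as np

  def cosine(a, b):
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  def answer(question, docs, embed, llm, k=3):
      """Retrieval-Augmented Generation in miniature.
      docs: list of (text, vector) pairs from your private corpus."""
      q_vec = embed(question)
      # Vectorized search: rank private documents against the question.
      ranked = sorted(docs, key=lambda d: cosine(q_vec, d[1]), reverse=True)
      context = "\n\n".join(text for text, _ in ranked[:k])
      # LLM completion, grounded in retrieved context instead of guesswork.
      return llm("Using only the context below, answer the question.\n\n"
                 f"Context:\n{context}\n\nQuestion: {question}")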

But here’s the catch: just dumping your docs into a vector store isn’t enough.

You need:

  • Clean, structured data
  • Metadata tagging (source, date, author)
  • A clear strategy for embedding updates as your corpus evolves

Now the model isn't just guessing, it's responding with your knowledge.
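
At ingest time, that checklist might look like the sketch below. The record shape is illustrative, not any particular vector store’s schema:

  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class Chunk:
      text: str             # clean, structured content, not raw dumps
      source: str           # metadata tagging: where it came from
      author: str
      updated: date         # lets you re-embed as the corpus evolves
      vector: object = None

  def ingest(raw_docs, embed):
      """Chunk, tag, and embed private documents for a RAG store."""
      chunks = []
      for doc in raw_docs:
          # Naive paragraph chunking; tune the strategy to your corpus.
          for passage in doc["text"].split("\n\n"):
              chunks.append(Chunk(text=passage.strip(),
                                  source=doc["source"],
                                  author=doc["author"],
                                  updated=doc["updated"]))
      for c in chunks:
          c.vector = embed(c.text)  # re-run when `updated` changes
      return chunks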

Important: Always verify the privacy and storage policy of your LLM instance before adding sensitive or business-critical data to a RAG system. Even when data is "anonymized," LLMs are incredibly good at inference—they can often fill in the blanks. Understand how your data is treated, stored, and protected before integrating it into any AI workflow.

Before creating your own RAG or feeding company docs into ANY system:

  • Know whether the system trains on your data
  • Know where the data lives
  • Know who has access to it
  • Know how to delete it

Understand your Digital Sovereignty. This is how you scale AI from toy to tool.

Even with a powerful RAG setup, the model's responses are only as strong as your ability to question them. Building a system to serve relevant information is one thing; ensuring that what it generates is accurate, appropriate, and useful is another. That means engaging with the output as a collaborator, not just a consumer.

Always Interrogate AI, Don’t Assume It's Right

Even with a good prompt, you still have to stay sharp. LLMs are good at sounding confident. They are not good at being right by default. AI outputs should never be blindly trusted. Validate every claim. Ask the model to cite assumptions or explain its logic.

Techniques to try:

  • “Walk me through your reasoning step by step.”
  • “What assumptions are you making?”
  • “What might be missing from this approach?”
  • “Now play devil’s advocate. What would someone disagree with here?”

This step turns the model into an assistant strategist, not a random answer generator.
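
If you script your sessions, that interrogation can be a fixed audit: get an answer, then run each probe against it in turn. A sketch with a stand-in chat() call in place of any real provider’s API; the role/content message format mirrors common chat APIs:

  PROBES = [
      "Walk me through your reasoning step by step.",
      "What assumptions are you making?",
      "What might be missing from this approach?",
      "Now play devil's advocate. What would someone disagree with here?",
  ]

  def interrogate(question, chat):
      """Never accept the first answer: follow up with every probe.
      chat: stand-in that takes a message list and returns reply text."""
      messages = [{"role": "user", "content": question}]
      messages.append({"role": "assistant", "content": chat(messages)})
      for probe in PROBES:
          messages.append({"role": "user", "content": probe})
          messages.append({"role": "assistant", "content": chat(messages)})
      return messages  # the full transcript: the answer plus its own audit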

Before we dive into specific prompting strategies, it’s worth stepping back: Once you’ve got your knowledge foundation in place, how you interact with the model determines whether you get something useful or just interesting noise. Let's explore the techniques that turn an LLM from a text generator into a true collaborator.

Techniques to Unlock Better AI Collaboration

Ready to level up? Here are some tactical ways to work better with LLMs:

Prompt Power Moves:

  • "Act as" or “Role” prompting: “Act as a senior logistics analyst specializing in perishable goods.”
  • Framed constraints: “Give me 2 ideas, 1 conventional and 1 unconventional.”
  • Parallel comparisons: “Compare three options as a table with pros, cons, risk level, and effort.”
  • Antagonistic prompting: “What would someone who strongly disagrees with this say?”
  • Chain-of-thought prompting: “Let’s break this into steps. First, list potential risks. Then we’ll evaluate each one for impact and likelihood. Finally, we’ll decide on mitigations.”
  • Assumption challenge:  “What assumptions does this plan rely on? Which ones could fail under market volatility?”
  • Collaborative Prompt Formation: “Ask me any questions you need to help generate the most accurate and insightful response.”

Let the model interrogate you. It’ll force clarity before creativity.
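
Scripted, that last move is just a conversation loop that opens with the model’s questions instead of its answer. Another sketch with the same stand-in chat() call as above; the question-mark check is a crude illustration, not a robust stop condition:

  def collaborate(task, chat, max_rounds=3):
      """Let the model interrogate you before it answers."""
      messages = [{"role": "user", "content":
                   task + "\n\nBefore answering, ask me any questions you "
                          "need to give the most accurate response."}]
      reply = chat(messages)
      for _ in range(max_rounds):
          if "?" not in reply:  # the model has stopped asking questions
              break
          print(reply)
          clarification = input("> ")  # clarity forced before creativity
          messages += [{"role": "assistant", "content": reply},
                       {"role": "user", "content": clarification}]
          reply = chat(messages)
      return reply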

Now that we have covered some technical theory, let’s ground these techniques in the real world. Once you’ve got solid prompting practices and a domain-aware setup in place, how do these approaches actually show up in business? The following use cases reveal the difference between shallow AI experiments and real operational leverage.

Use Case: LinkedIn-Integrated Talent Discovery

Scenario: You work in HR and want to identify internal candidates for emerging roles.

How to use LLMs:

  1. Pull in LinkedIn bios and internal resume data
  2. Embed into a private vector database
  3. Ask: “Show me employees with high latent match potential for product marketing based on writing samples, certifications, and volunteer experience, even if their current role isn’t marketing-aligned.”

This isn’t a keyword search. It’s semantic insight at scale.
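
Under the hood, that query is an embedding comparison, not string matching. A toy sketch of the idea; the employee records and the embed() call are illustrative stand-ins:

  import numpy as np

  def latent_matches(role_description, employees, embed, top_n=5):
      """Rank employees by semantic similarity to a role, not job title.
      employees: dicts with 'name' and 'profile', where profile concatenates
      bio, writing samples, certifications, and volunteer experience."""
      target = embed(role_description)
      scored = []
      for e in employees:
          v = embed(e["profile"])
          similarity = float(np.dot(target, v) /
                             (np.linalg.norm(target) * np.linalg.norm(v)))
          scored.append((similarity, e["name"]))
      # High scores surface people whose current title isn't marketing-aligned
      # but whose language and experience are.
      return sorted(scored, reverse=True)[:top_n]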

Data Safety Reminder: If you're combining internal data sources with AI, make sure your system is private, access-controlled, and aligned with your company’s data policies. AI inference can surface connections you didn’t anticipate, which is powerful and potentially risky if not handled with care.

Use Case: Supply Chain Scenario Simulation

Scenario: You run ops for a midsize manufacturing company.

Use the LLM to simulate:

  • “Based on our supplier data and lead times, simulate a 30% reduction in shipping from port X. Show downstream effects on inventory, production deadlines, and customer orders. Offer three mitigation strategies with cost and speed estimates.”

Now add your actual ERP and procurement docs to a RAG backend. Suddenly, the LLM knows your constraints, not just generic ones.
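
Wired up, that is the same retrieve-then-generate pattern sketched earlier, pointed at your ERP exports. Here docs, embed, and llm remain stand-ins, and answer() is the helper from the RAG sketch above:

  scenario = (
      "Based on our supplier data and lead times, simulate a 30% reduction "
      "in shipping from port X. Show downstream effects on inventory, "
      "production deadlines, and customer orders. Offer three mitigation "
      "strategies with cost and speed estimates."
  )
  # docs: (text, vector) pairs built from ERP and procurement exports,
  # ingested with the metadata-tagged pipeline sketched earlier.
  report = answer(scenario, docs, embed, llm, k=8)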

Want to dive deeper? Explore more ways to unlock business value from AI in our Leveraging AI’s Potential in Business article.

Caution: Before loading sensitive supply chain data into an LLM or RAG, confirm exactly how that data will be stored and processed. Even seemingly anonymized inputs can become identifiable in context. Always review documentation and understand your model's data governance policies.

Final Thought: You’re Not Talking to a Machine, You’re Talking Through One

AI is not a sentient brain. It’s a language mirror that reflects the clarity of your own thinking. Large Language Models don’t replace thinking, they amplify it. They don’t solve problems, they help you see them more clearly. And they don’t make decisions, you still have to.

The better your prompts, the better your thinking. The better your thinking, the more value you unlock. The good news? You don’t need to be an engineer to use it well.

You just need to:

  • Set the domain
  • Give exposition
  • Interrogate the outputs
  • Build your knowledge base

Do that, and you’re not just “using AI.” You’re collaborating with intelligence.

And that’s where the real breakthroughs happen.