Using AI Tools Responsibly

A practical guide to getting real value from AI, without the blind spots.

1. How to Verify AI Output

AI tools are confident by design. They produce polished, well-structured answers, even when they're wrong. This is the core challenge: the output looks right whether or not it is right.

The fix isn't to distrust AI entirely. It's to treat every AI output as a first draft, a starting point that needs your judgment before it becomes final.

Practical verification strategies:

  • Check facts against primary sources. If AI cites a study, find that study. If it quotes a price, check the vendor's site. If it names a date, verify it.
  • Test code before shipping it. AI-generated code often looks correct but hides subtle bugs: unhandled edge cases, wrong library versions, deprecated methods. Run it. Review it like you'd review a teammate's pull request.
  • Question the logic, not just the facts. Even when individual facts are correct, the reasoning connecting them might not be. Ask: does this conclusion actually follow from these premises?
  • Be extra careful with numbers. Statistics, percentages, and financial figures are where AI tends to be most unreliable. Always verify quantitative claims independently.
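To make the "test it before shipping" advice concrete, here is a hypothetical sketch (the `average` function and its bug are invented for illustration): an AI-drafted helper that passes a casual read but fails on an edge case a quick test would catch.

```python
# Hypothetical AI-drafted helper: looks correct at a glance.
def average(numbers):
    return sum(numbers) / len(numbers)

# A quick test pass, like reviewing a teammate's PR:
assert average([2, 4, 6]) == 4  # the happy path works

try:
    average([])  # but an empty list divides by zero
except ZeroDivisionError:
    print("caught: average([]) divides by zero")

# A reviewed version handles the edge case explicitly.
def average_safe(numbers):
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)

assert average_safe([]) == 0.0
```

Five minutes of testing like this is usually all it takes to separate "looks right" from "is right."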

The simple rule: If you'd be embarrassed to get it wrong, don't let AI be your only source.

2. How to Avoid Hallucinations

AI "hallucination" is when a model generates information that sounds plausible but is fabricated. It's not lying, it's pattern-matching without understanding, and sometimes the patterns produce fiction.

When hallucinations are most likely:

  • Niche or specialised topics where training data is thin
  • Specific numbers, dates, statistics, measurements, prices
  • Recent events that may post-date the model's training
  • Named references, book titles, paper citations, URLs, quotes attributed to specific people

Prompting strategies that help:

  • Ask the AI to flag when it's uncertain: "If you're not sure about something, say so."
  • Break complex questions into smaller, verifiable pieces
  • Ask for sources, not because AI-cited sources are always real, but because asking forces more grounded responses
  • Use tools designed for grounded output (like Perplexity, which shows source links alongside answers)

Signs an output might be hallucinated: overly specific numbers with no source, confident claims about obscure topics, citations that look slightly "off," and answers that are too perfect for a messy question.

3. How to Cite AI-Generated Content

As AI becomes part of everyday workflows, the question isn't whether you've used it; it's whether you're transparent about how.

When to disclose AI assistance:

  • Academic work: Always. Most institutions now have specific policies. Check yours.
  • Professional content: When AI substantially shaped the output, not for spell-check, but for drafting, analysis, or code generation.
  • Published work: Disclose if AI generated or significantly revised the content.

Common citation formats:

  • APA: OpenAI. (Year). ChatGPT (Version) [Large language model]. https://chat.openai.com
  • MLA: "Prompt text" prompt. ChatGPT, Version, OpenAI, Date, chat.openai.com
  • Chicago: Include in a note: text generated by [Tool], [Date], in response to [prompt summary]

The comfort test: Would you be comfortable if someone asked "How did you make this?" The answer to that question is your disclosure policy.

4. Learning Prompts vs. Submission Prompts

There's a meaningful difference between using AI to understand something better and using AI to produce something you'll claim as your own work. The line isn't always sharp, but it's worth knowing where it is.

✦ Learning prompts

  • "Explain how binary search works and why it's faster than linear search"
  • "I wrote this function, what am I doing wrong?"
  • "What are the main arguments for and against UBI? Help me think through this"

⚠ Submission prompts

  • "Write a 2000-word essay on the causes of WWI"
  • "Solve this problem set for me"
  • "Generate a complete report I can submit to my manager"

The gray area is real. Using AI to outline your thoughts, get past writer's block, or refine your draft is different from having it write the whole thing. The key question: did you do the thinking, or did AI do the thinking?

A good rule of thumb: if you can explain every part of the output and defend the reasoning behind it, you probably used AI well. If you'd struggle to explain the work in a conversation, you might have outsourced too much.

5. Students vs. Professionals

The same AI tool means different things depending on who's using it and why. Context changes everything.

For students, AI works best as a tutor, not a ghostwriter. The entire point of assignments is to build understanding. Using AI to skip that process defeats the purpose, not because of rules, but because you're cheating yourself out of the learning you're paying for.

  • Use AI to explain concepts you're stuck on
  • Use it to check your reasoning after you've attempted a problem
  • Use it to explore different perspectives on a topic
  • Don't use it to generate work you haven't engaged with

For professionals, AI is an accelerator. The goal is output quality, not demonstration of learning. Using AI to draft emails, generate code scaffolding, summarise documents, or brainstorm strategies is perfectly reasonable, as long as you verify the output and own the result.

Where the line sits depends on your context. A coding bootcamp student using Copilot to autocomplete every function is missing the point. A senior engineer using Copilot to move faster through boilerplate is being efficient. Same tool, different purposes.

Academic integrity isn't about fear, it's about honesty. Most institutions are adapting their policies. Check your school's or employer's AI policy. When in doubt, disclose and ask.

6. Practical Guardrails

You don't need a complicated framework. Before you use or share AI-generated content, run through this quick checklist:

Before you hit send

Did I verify the key facts? Cross-check important claims against reliable sources, not just AI confidence.

Do I understand it well enough to explain it? If someone asked you to walk through the output, could you do it without reading from a script?

Would I be comfortable if someone asked how I made this? Transparency is easier when you've been thoughtful about the process.

Am I using AI to think better, or to avoid thinking? The best AI use amplifies your judgment. The worst replaces it.

That's it. No 50-page policy document. If you can check all four boxes honestly, you're using AI well.

AI tools are genuinely powerful: they save time, spark ideas, and help you work through complexity faster. The goal isn't to use them less. It's to use them with your eyes open.

Keep exploring

This guide is part of a bigger picture. Find the right tools, learn how to use them well.