Beginner ~20 min
When to use AI — and when you really shouldn't
A practical framework for identifying where AI adds value and where it doesn't.
AI is a terrible default for everything, and a revolutionary tool for specific things. The difference is a framework most teams never explicitly write down.
The AI-fit diagnostic
For any candidate use case, run it through four questions:
- Is the task fuzzy or crisp? AI shines at fuzzy tasks (summarize this, classify this tone, draft this). It struggles at crisp ones (compute this, look up this exact record). For crisp tasks, use a database or a calculator.
- Is being wrong acceptable sometimes? If a single hallucination causes real harm (medical dosing, legal advice, financial transactions), the bar is much higher and you need heavy review. If wrong answers are annoying but not harmful (a suggestion, a first draft), AI is a great fit.
- Is there an expert human in the loop? AI + expert review is a force multiplier. AI replacing the expert usually fails.
- Does the task have training-data representation? Common languages, common frameworks, popular topics — models are strong. Niche domains (your internal APIs, your company's processes) — weak unless you ground the model with retrieval.
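The four questions above can be sketched as a rough triage function. This is an illustrative sketch only — the `UseCase` fields and the `ai_fit` rules are my own encoding of the lesson's framework, not a real library:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """The four diagnostic answers for a candidate task (hypothetical names)."""
    fuzzy: bool              # fuzzy task (summarize, classify) vs crisp (compute, look up)
    wrong_ok: bool           # is an occasional wrong answer acceptable?
    expert_in_loop: bool     # will an expert human review the output?
    well_represented: bool   # common in training data, or grounded via retrieval?

def ai_fit(case: UseCase) -> str:
    """Rough triage based on the four diagnostic questions."""
    if not case.fuzzy:
        return "poor fit: use a database or calculator"
    if not case.wrong_ok and not case.expert_in_loop:
        return "poor fit: high stakes with no review"
    if not case.well_represented:
        return "weak fit: add retrieval grounding first"
    return "good fit"
```

Running a use case like "draft customer-support replies" (fuzzy, wrong answers tolerable, well represented) through `ai_fit` lands in the "good fit" branch; "compute this invoice total" fails on the first question.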
Where AI pays off (in 2026)
- Drafting. Emails, documentation, code comments, meeting summaries. A human can edit in 2 minutes a draft that would have taken 15 to write from scratch.
- Classification at scale. Tagging, categorizing, routing — where a human couldn't economically review every case.
- Search that understands intent. Semantic search beats keyword search when users don't know the exact term.
- Writing assistance in IDEs. Autocomplete on steroids. The biggest ROI category right now for most engineers.
- Translation and transformation. Summarizing, rewriting tone, reformatting — fuzzy-to-fuzzy tasks.
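The "search that understands intent" bullet above can be illustrated with a minimal sketch. The toy three-dimensional vectors here stand in for real embedding-model output (an assumption for illustration); the point is that ranking by vector similarity can match a query to a document that shares no keywords with it:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-in vectors; a real system would get these from an embedding model.
docs = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "careers page":   [0.0, 0.1, 0.9],
}

def semantic_search(query_vec: list[float], docs: dict[str, list[float]]) -> str:
    """Return the document whose vector is most similar to the query vector."""
    return max(docs, key=lambda name: cosine(query_vec, docs[name]))

# A query like "how do I get my money back" shares no keywords with
# "refund policy", but its embedding would land near that document's vector.
query_vec = [0.85, 0.15, 0.05]
```

Keyword search would return nothing for that query; the vector ranking surfaces the right document anyway, which is exactly the "users don't know the exact term" case.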
Where AI disappoints
- Tasks with exact-answer semantics. "What's the invoice total?" Don't ask the LLM; query the DB.
- Tasks where confidence matters more than content. A confident wrong answer is worse than no answer.
- Tasks in domains with poor training representation. Your internal tools, specialized regulations, recent events beyond the cutoff.
- Anything with high stakes per output and low review capacity. Healthcare, legal, financial — possible, but you're signing up for serious eval and oversight infrastructure.
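The first bullet in this list — exact-answer semantics — is worth making concrete. The schema and data below are hypothetical, but the pattern is the whole point: when the answer already lives in a database, a SQL aggregate returns it exactly, while an LLM can only paraphrase or guess:

```python
import sqlite3

# Hypothetical invoice data; in practice this is your existing billing database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE line_items (invoice_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO line_items VALUES (?, ?)",
    [("INV-1001", 120.00), ("INV-1001", 35.50), ("INV-1002", 9.99)],
)

def invoice_total(invoice_id: str) -> float:
    """Exact-answer task: a SQL aggregate, not an LLM prompt."""
    (total,) = conn.execute(
        "SELECT SUM(amount) FROM line_items WHERE invoice_id = ?",
        (invoice_id,),
    ).fetchone()
    return total

# invoice_total("INV-1001") → 155.5, every time, with no hallucination risk.
```

Where AI can still help in this setting is the fuzzy layer on top: translating a natural-language question into the SQL query, with the database remaining the source of truth for the number itself.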
The framework in one sentence
Use AI where the cost of a wrong answer is low and the cost of a slow human-only process is high.
Check your understanding
2-question self-check
Optional. Your answers feed your knowledge score on the track certificate.
Q1. According to the lesson, AI is most valuable when…
Q2. Which of these is a BAD fit for AI without heavy infrastructure?