The 90‑Day Learning Plan

Learn by shipping. Build a visible portfolio and compounding feedback loops in 90 days.

Updated October 9, 2025

Weekly rhythm

Treat each week as a mini product cycle. You are not collecting certificates—you are collecting outcomes. Keep scope intentionally small so you can close the loop every 7 days.

  1. Pick a problem: write a one‑paragraph brief with the user, pain, and success metric.
  2. Ship a solution: deliver the smallest viable artifact (demo, script, canvas, brief).
  3. Measure: log a concrete metric (time saved, CTR, signups, error rate, satisfaction); see the logging sketch after this list.
  4. Share: publish a 5‑slide update or 200‑word note; ask for one targeted critique.
  5. Plan next: choose one constraint to remove next week; keep the rest constant.
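
If you track the weekly metric in a file rather than a sheet, a few lines of code are enough. Here is a minimal logging sketch, assuming a Python workflow; the metrics.csv file and the example numbers are illustrative, not prescribed:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("metrics.csv")  # hypothetical log file; point this wherever you like

def log_metric(week: int, metric: str, baseline: float, current: float) -> None:
    """Append one weekly measurement so the before/after delta is always on record."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "week", "metric", "baseline", "current"])
        writer.writerow([date.today().isoformat(), week, metric, baseline, current])

# Example: onboarding time dropped from 14 minutes to 11 this week.
log_metric(week=3, metric="onboarding_minutes", baseline=14.0, current=11.0)
```

One row per week keeps the log honest: the same file feeds your Friday update and, later, the case study.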

Tip: end each week with a one‑page retro under three headings: “what worked / what broke / what I’ll try next”.

Proof over certificates

Replace course completion with case studies. Certificates are credentials; case studies are evidence. Employers and clients increasingly index on shipped outcomes: a working demo, a reduction in cycle time, a revenue lift, a cleaner funnel. If you can show the before/after, you control the story.

  • Show the delta: baseline the metric, then show the change after your intervention.
  • Expose constraints: what resources and time did you actually have?
  • Make it repeatable: document the steps so another person could replicate it.

A pragmatic 90‑day blueprint

Use this structure as a scaffold. Adapt to your domain and constraints.

Days 1–30: Foundation and first proof

  • Week 1: Define the user, quantify the pain, pick one metric to move.
  • Week 2: Ship a thin slice—script, prototype, worksheet, or checklist.
  • Week 3: Put it in front of 3 users; write down every confusion point.
  • Week 4: Tighten based on feedback; publish a short case study.

Days 31–60: Scale the scope carefully

  • Week 5: Add one small feature that removes the biggest friction.
  • Week 6: Instrument basic analytics; automate one manual step (see the sketch after this list).
  • Week 7: Write public docs; capture before/after metrics with screenshots.
  • Week 8: Ask for a real‑world trial in a safe slice of production.
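
“Instrument basic analytics” in week 6 can be as small as timestamped event logging. A minimal sketch, assuming a Python workflow; the events.jsonl file and the field names are hypothetical:

```python
import json
import time
from pathlib import Path

EVENTS = Path("events.jsonl")  # hypothetical event log: one JSON object per line

def track(event: str, **fields) -> None:
    """Record a timestamped event so you can compare weeks before and after a change."""
    record = {"ts": time.time(), "event": event, **fields}
    with EVENTS.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example: instrument the manual step you plan to automate, then re-measure after.
track("report_generated", duration_s=42.0, rows=120)
```

A five‑line script over this file is usually all the analysis a weekly update needs.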

Days 61–90: Evidence and narrative

  • Week 9: Package v1.0; remove TODOs that block adoption.
  • Week 10: Write a 1‑page narrative: user, problem, constraints, outcome, next.
  • Week 11: Share with your target audience; collect 3 testimonials or quotes.
  • Week 12: Publish the polished case study and a 3‑minute demo video.

Templates and prompts

One‑paragraph weekly brief

User: … Pain: … Hypothesis: If we … then … Metric: … Done when: … Constraints: …

Mini case study outline

  1. Context and user
  2. Problem and constraints
  3. What we shipped
  4. Outcome with numbers
  5. Next experiment

Common pitfalls and how to fix them

  • Over‑scoping: halve your plan until it fits one week. Cut features; keep outcomes.
  • Silent building: share early drafts with one trusted reviewer mid‑week.
  • No metric: pick a proxy if needed (time‑to‑complete, error rate, NPS‑style check).
  • No audience: choose a niche community; offer value first; then ask.

Examples and case studies

These are sample outcomes that fit in a week and create portfolio‑worthy artifacts.

  • Operations: a 2‑step automation that saves 3 hours/week for a team of 5.
  • Marketing: a rewritten onboarding email that lifts activation by 6%.
  • Product: a prototype that cuts 3 clicks from a key flow.
  • Data: a dashboard that surfaces one decision‑shaping metric weekly.

Source ideas: World Economic Forum, Future of Jobs 2025 (PDF).

A week‑by‑week sample schedule

Use this as a starting point and adapt based on your calendar. The goal is to preserve three reliable working blocks and an explicit outcome for each week.

  1. Monday (90–120 min): refine scope, write a one‑paragraph brief, define “done”.
  2. Wednesday (90–120 min): build the thinnest slice and capture a baseline metric.
  3. Friday (90–120 min): instrument, measure the delta, and publish the update.

If you can only fit two sessions, keep Monday and Friday. The midpoint check is a luxury; the retro is not.

Lightweight tools that keep you moving

Tools should reduce friction—not become the work. Pick one per category, keep defaults, and avoid re‑tooling for 90 days unless a blocker appears.

  • Planning: a single doc with sections for weekly briefs and retros.
  • Execution: a scratchpad or issue tracker for tasks; two priorities per week.
  • Analytics: a simple sheet to log before/after numbers and screenshots.
  • Publishing: a public post or short video demo; link it in your portfolio.

The constraint is intentional: fewer knobs, faster loops, clearer evidence.

Accountability that actually works

Accountability fails when it’s vague. Make a small, specific agreement with a peer and timebox the check‑in.

  • Define the visible deliverable for Friday (demo link, doc, or metric screenshot).
  • Book a 15‑minute review slot; prepare one question for feedback.
  • Log what changed and what you’ll try next week—three bullets max.

Adapting the plan to your role

The same weekly loop applies across domains. Adjust the artifacts and metrics; keep the cadence.

Product & Ops

Ship a flow tweak or automation; measure time saved and errors avoided.

Marketing

Test an onboarding message; track activation and replies.

Data

Add a dashboard tile that informs one decision; capture usage.

Design

Prototype an interaction; reduce steps; validate with 3 users.

Metrics that matter

Choose metrics that reflect user value, not vanity. A good metric is sensitive to your change, easy to capture weekly, and aligned with your stakeholder’s goals.

  • Time saved: minutes or hours reduced per run or per week.
  • Quality: error rate, defect rate, review comments resolved.
  • Throughput: tasks per hour, leads processed, tickets closed.
  • Activation or conversion: click‑through, signup, feature adoption.
  • Satisfaction: one‑question pulse (e.g., “Was this better?” yes/no).

If your metric is a lagging indicator, use a reliable proxy and explain the linkage.
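
However you capture the numbers, report the delta the same way every week. A minimal sketch of the arithmetic, with illustrative values:

```python
def percent_change(baseline: float, current: float) -> float:
    """Relative change from baseline; negative means the number went down."""
    return (current - baseline) / baseline * 100

# Example: onboarding time fell from 14 to 11 minutes.
print(f"{percent_change(14.0, 11.0):+.1f}%")  # -21.4%

# Example: a one-question pulse ("Was this better?") tallied as yes-votes.
answers = ["yes", "yes", "no", "yes"]
print(f"{answers.count('yes') / len(answers):.0%} said yes")  # 75% said yes
```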

Timeboxing and cadence

You don’t need full days—just reliable blocks. Guard 2–3 sessions of 2 hours each per week. Protect the first 30 minutes for planning and the last 15 minutes for a mini‑retro.

  1. Session 1: scope and set success criteria; cut until it fits.
  2. Session 2: build the thinnest slice; avoid refactors and rabbit holes.
  3. Session 3: instrument, measure, and publish the update; ask for one critique.

Consistency compounds. Small, steady loops beat rare, heroic sprints.

Packaging your portfolio

Turn weekly outputs into a digestible narrative that a busy reviewer can skim in 3 minutes. Link to a demo first, then provide context and numbers.

  1. Title with outcome (e.g., “Cut onboarding time by 27% in 3 weeks”).
  2. Before/after screenshot or 30‑second clip.
  3. Context, constraints, and your role.
  4. Steps taken and decisions made.
  5. Result, metric, and what you’d do next with more time.

Keep artifacts reproducible: list versions, datasets, and the exact prompts or scripts used.
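
One way to make the versions list painless is to generate it. A minimal sketch, assuming a Python project; the package names are illustrative:

```python
import platform
import sys
from importlib import metadata

def repro_footer(packages: list[str]) -> str:
    """Build the versions footer for a case study so a reader can rerun the work."""
    lines = [f"Python {sys.version.split()[0]} on {platform.platform()}"]
    for pkg in packages:
        try:
            lines.append(f"{pkg}=={metadata.version(pkg)}")
        except metadata.PackageNotFoundError:
            lines.append(f"{pkg} (not installed)")
    return "\n".join(lines)

# Example: list the packages your demo actually imports (names are illustrative).
print(repro_footer(["requests", "pandas"]))
```

Paste the output at the bottom of the case study, next to links to the datasets and prompts you used.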

Hiring signals and outreach

Translate your 90‑day outputs into clear hiring signals. Recruiters and managers skim for outcomes, reproducibility, and collaboration. Make these unmistakable.

  • Outcome first: lead with the metric moved and the constraint you worked under.
  • Evidence link: a 60‑second demo or before/after screenshot near the top.
  • Repro steps: list versions, prompts, scripts, and datasets so others can rerun it.
  • Collaboration: credit reviewers and users; show the feedback loop.
  • Next step: articulate how you’d extend impact in week 13 if hired.

For outreach, send a brief note with one link and one sentence on the outcome. Ask a specific question, like “Would this cut your onboarding time?” to invite a reply.

Share, feedback, and iteration

Publish small, honest updates. Ask a specific question to elicit actionable feedback. Rotate audiences: a peer group, a domain community, and a potential user.

  • “What’s unclear in this 60‑second demo?”
  • “Which step would you remove to make this easier?”
  • “Is this metric convincing? What would you rather see?”

Close the loop: thank reviewers, log changes, and show the new result next week.

FAQs

How many hours per week do I need?

Aim for 6–10 focused hours. Constrain scope to fit the time you actually have.

What if I miss a week?

Do a 30‑minute retro, cut scope by half, and resume. Momentum beats perfection.

How do I pick a good metric?

Choose the one change your user would pay for: time saved, quality, or revenue.

Can this work if I’m a beginner?

Yes. Shrink the problem and use templates. The habit of shipping is the skill.

How do I avoid scope creep?

Write the success metric first. Any task that doesn’t move it goes to the backlog.

Should I use AI tools?

Yes—use them as accelerators, not crutches. Keep your artifacts reproducible and documented.