What Medhavi 1.0 Does Today
The infrastructure that works
hub.medhavy.com is the authentication hub. Learners log in here. Admins manage textbooks, users, and access permissions here. Instructors manage classes and invite students here.
cancer.medhavy.com is a protected textbook. Learners authenticate at the hub, receive a JWT, and the textbook lets them in. The content is a full Fumadocs MDX textbook with chapters, sections, search, and navigation.
The protection model works. JWT tokens, session cookies, role-based access (admin / instructor / student), logout invalidation across all textbooks simultaneously. This is solid and stays unchanged.
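The pieces of that model fit together simply. A minimal sketch, assuming a hypothetical payload shape — the field names (`role`, `sessionId`) are illustrative, not the actual claims the hub issues:

```typescript
// Hypothetical shape of a hub-issued token payload (illustrative only).
interface TokenPayload {
  sub: string;                              // learner id
  role: "admin" | "instructor" | "student"; // role-based access
  sessionId: string;                        // revoked on logout
  exp: number;                              // expiry, unix seconds
}

// Sessions invalidated by logout; every textbook consults the same set,
// which is what makes invalidation take effect everywhere at once.
const revokedSessions = new Set<string>();

function authorize(payload: TokenPayload, now: number): boolean {
  if (payload.exp <= now) return false;                     // expired
  if (revokedSessions.has(payload.sessionId)) return false; // logged out
  return true;
}

// One logout call locks the learner out of every textbook simultaneously.
function logout(sessionId: string): void {
  revokedSessions.add(sessionId);
}
```

The point of the sketch is the shared revocation set: logout is a single write that every textbook's check observes.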
Analytics exist. PostHog tracks page views and basic session data. Session time is logged. The admin can see who is using the platform.
The one thing 1.0 cannot do
There is one AI interaction: an "Ask AI" button on the textbook page. The learner opens it, asks a question, the AI responds.
- The AI does not know which section the learner is on.
- The AI does not know whether the learner is a novice or advanced.
- The AI does not know whether its response is working or not.
- The AI does not vary its approach based on what the learner needs.
- Nothing about the interaction is logged for learning purposes.
This is not a criticism of what was built. It is a description of what the next version is designed to fix.
What Medhavi 1.5 Builds
Four modes instead of one
The single "Ask AI" button becomes four. Each mode is a different kind of help for a different learning need.
Ask AI — same as today, upgraded. The AI now knows which section you're on and responds in context. "I don't get this" → AI responds about the Warburg Effect in section 11.2, not about cancer biology in general.
Case Study — a clinical scenario anchored to this concept. Pre-written, reviewed by Professor Bear before any learner sees it. Presents genuine clinical ambiguity — the answer is not telegraphed. The AI facilitates discussion without giving away the reasoning.
Quiz Me — spaced retrieval anchored to this section. Only available once you have read the section first. Questions test the concept you're on plus the foundational concepts it depends on. FSRS scheduling — the system tracks your accuracy and brings back the concepts you're weakest on.
Glimmer — a creative, non-trivial assignment. The learner makes something that didn't exist before. There is no correct answer. Graded on: the WHY, the usefulness, the mechanism, the defensibility, the specificity. Composes into a term project the learner doesn't know they're building.
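Quiz Me's scheduling behavior can be sketched in miniature. This is a simplified illustration in the spirit of FSRS, not the real algorithm — the constants are invented, where real FSRS fits its parameters to learner data:

```typescript
// Simplified spaced-retrieval scheduler (FSRS-like, not actual FSRS):
// a per-concept "stability" grows on correct recall and shrinks on lapses.
interface ConceptState {
  stability: number; // roughly: days until recall decays to ~90%
}

function review(state: ConceptState, correct: boolean): ConceptState {
  // Illustrative constants; real FSRS derives these from accuracy history.
  const stability = correct
    ? state.stability * 2.5                 // push the next review out
    : Math.max(0.5, state.stability * 0.3); // bring a lapsed concept back soon
  return { stability };
}

function nextIntervalDays(state: ConceptState): number {
  return Math.round(state.stability);
}
```

The effect matches the description above: concepts the learner keeps getting right drift out to longer intervals, and the concepts they're weakest on keep coming back.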
Context awareness
The system now knows where you are. Not just which textbook — which chapter, which section, which concept node.
When you open Ask AI from section 11.2, the AI knows you're on the Warburg Effect. When you open Case Study, the case is about tumor metabolism. When you open Quiz Me, the questions are about aerobic glycolysis and its prerequisites.
This is achieved through a concept node map — a structured file that maps every chapter and section to a named concept, its aliases, and the concepts that must be understood before it.
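A sketch of what one entry in that map might look like, plus the lookup the shell performs. Field names and the example node are illustrative; the actual file format belongs to the content pipeline:

```typescript
// One node in the concept node map (field names are illustrative).
interface ConceptNode {
  id: string;              // stable key, e.g. "warburg-effect"
  chapter: number;
  section: string;         // e.g. "11.2"
  name: string;            // canonical concept name
  aliases: string[];       // phrasings learners and the AI may use
  prerequisites: string[]; // concept ids that must be understood first
}

const conceptMap: ConceptNode[] = [
  {
    id: "warburg-effect",
    chapter: 11,
    section: "11.2",
    name: "The Warburg Effect",
    aliases: ["aerobic glycolysis", "tumor metabolism shift"],
    prerequisites: ["glycolysis", "oxidative-phosphorylation"],
  },
];

// The lookup performed when a learner opens a mode from a section:
// section id in, concept node (with its prerequisites) out.
function resolveSection(section: string): ConceptNode | undefined {
  return conceptMap.find((n) => n.section === section);
}
```

Everything context-aware hangs off this lookup: Ask AI gets the concept name, Quiz Me gets the prerequisites, Case Study gets the anchor.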
Signal collection
Every learner action is now logged for learning purposes:
- Which section they're on and for how long
- Which mode they chose
- What type of question they asked (clarifying? probing? claim-making?)
- How they performed on retrieval questions
- Whether the AI approach seemed to be working
This is not surveillance. It is the instrument. When the adaptive version arrives (2.0), it reads this data to make better decisions about which approach to try for which learner on which concept. Without 1.5 data, 2.0 has nothing to learn from.
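A sketch of what one logged signal might look like, and the kind of question 2.0 will ask of the data. The record shape is hypothetical — field names are illustrative, and in 1.5 the records would persist to Supabase rather than an in-memory array:

```typescript
// Hypothetical shape of one logged learning signal.
interface LearningSignal {
  learnerId: string;
  conceptId: string; // from the concept node map
  mode: "ask_ai" | "case_study" | "quiz_me" | "glimmer";
  questionType?: "clarifying" | "probing" | "claim_making";
  retrievalCorrect?: boolean; // only for Quiz Me
  dwellSeconds: number;       // time on the section
  timestamp: string;          // ISO 8601
}

const log: LearningSignal[] = [];

function record(signal: LearningSignal): void {
  log.push(signal); // 1.5 would write this to Supabase
}

// The kind of aggregate the 2.0 bandit starts from:
// retrieval accuracy per concept node.
function retrievalAccuracy(conceptId: string): number {
  const attempts = log.filter(
    (s) => s.conceptId === conceptId && s.retrievalCorrect !== undefined
  );
  if (attempts.length === 0) return 0;
  return attempts.filter((s) => s.retrievalCorrect).length / attempts.length;
}
```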
Content reviewed by Professor Bear
In 1.0, the AI responds however it responds. No review.
In 1.5, before any learner sees a case study or a glimmer:
- Professor Bear reads it
- Professor Bear edits it if needed
- Professor Bear approves it
Cases are reviewed for genuine clinical ambiguity. Glimmer entry points are reviewed for whether the specific detail actually activates inquiry in a second-year student. AI prompts are reviewed for whether the mode behavior is correct.
The admin panel gets new tabs for this review workflow.
A new kind of assignment: the Glimmer
The glimmer is the most important new thing in 1.5.
It is creative and non-trivial. The learner produces something — a clinical proposal, a protocol design, a structured analysis. The problem space is specified. The solution space is open. There is no correct answer.
Before the instructor grades it, the AI reviews it. The AI asks one question at a time about the weakest part of the learner's reasoning — the WHY, the usefulness, the mechanism, the defensibility, the specificity.
The learner strengthens their reasoning. The instructor sees the result of that strengthening.
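The one-question-at-a-time loop can be sketched around the five grading dimensions. The dimensions come from the rubric above; the scores, which would come from an AI evaluation pass, and the question templates are illustrative:

```typescript
// The five rubric dimensions from the glimmer grading criteria.
type Dimension =
  | "why" | "usefulness" | "mechanism" | "defensibility" | "specificity";

type Scores = Record<Dimension, number>; // 0..1, higher is stronger

// Illustrative question templates, one per dimension.
const questionFor: Record<Dimension, string> = {
  why: "Why does this need to exist?",
  usefulness: "Who would use this, and for what?",
  mechanism: "What is the mechanism that makes it work?",
  defensibility: "What is the strongest objection, and how do you answer it?",
  specificity: "Where is this vague? Make one claim more concrete.",
};

// One question at a time, aimed at the weakest part of the reasoning.
function nextQuestion(scores: Scores): string {
  const weakest = (Object.keys(scores) as Dimension[]).reduce((a, b) =>
    scores[a] <= scores[b] ? a : b
  );
  return questionFor[weakest];
}
```

The design choice is the single question: the learner repairs one weakness, the scores shift, and the next question targets whatever is weakest now.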
Eight to twelve glimmers across a course compose into a term project the learner doesn't know they're building. At week ten or eleven, the system surfaces what they've built. The connective tissue — the argument that holds the pieces together — is what remains.
What Is Not Being Built in 1.5
The bandit — the adaptive engine that automatically selects the right mode for each learner on each concept based on what is working — is not in 1.5. Learners choose their mode explicitly.
This is not a compromise. It is the right sequencing.
The bandit needs data to work. 1.5 generates that data. Every mode choice, every session signal, every retrieval score is logged in a format the bandit can read when it arrives in 2.0.
The bandit also needs to know what "working" means — which signal patterns indicate genuine learning vs. false mastery. 1.5 is the experiment that produces that knowledge.
What Stays the Same
The following are unchanged in 1.5:
- All existing auth (Clerk, JWT, session cookies)
- Textbook protection model (hub issues token, textbook validates)
- Admin panel for textbook management, users, classes, requests
- Student dashboard for accessing textbooks
- PostHog analytics
- The cancer textbook content and navigation
- All existing API routes
Medhavi 1.5 is built in parallel on AWS. Medhavi 1.0 runs unchanged throughout. No live course content is touched.
The Architecture Change
One structural change in how the shell and textbooks communicate.
In 1.0: The textbook has auth logic built in. JWT verification happens inside the textbook. The shell and textbook share auth but not much else.
In 1.5: The textbook is content-only. It renders MDX and emits events. The shell owns all intelligence: auth, AI, mode orchestration, signal logging. The textbook installs one script tag. That's the entire integration.
This separation means:
- Adding a new textbook requires no code changes to the AI layer
- Updating the AI logic requires no changes to the textbook content
- The textbook can have videos, simulations, and complex navigation without any of that breaking the learning system above it
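The event contract behind this separation might look like the following. Event names and fields are an assumption, not a finalized schema; in the browser the emit would be `parent.postMessage(...)`, abstracted here so the contract is visible:

```typescript
// Sketch of the shell/textbook contract: the textbook's one injected script
// emits events; the shell owns everything that happens next.
interface TextbookEvent {
  type: "section_viewed" | "mode_opened";
  section: string; // e.g. "11.2"
  mode?: "ask_ai" | "case_study" | "quiz_me" | "glimmer";
}

// Textbook side: content-only. No auth, no AI — it just reports.
function emitToShell(event: TextbookEvent, post: (msg: string) => void): void {
  post(JSON.stringify(event)); // in the browser: parent.postMessage(...)
}

// Shell side: all intelligence. Resolve the concept node, log the signal,
// orchestrate the chosen mode.
function handleMessage(raw: string): string {
  const event = JSON.parse(raw) as TextbookEvent;
  return event.type === "mode_opened"
    ? `open ${event.mode} for section ${event.section}`
    : `track section ${event.section}`;
}
```

Because the textbook only ever emits this shape, a new textbook plugs in by emitting the same events — which is exactly why adding one requires no changes to the AI layer.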
Why Now
The cancer course is running. Learners are in the textbook today.
They have one mode. The AI responds the same way to every learner on every concept. Nothing is logged for learning purposes.
Medhavi 1.5 gives them four modes, context-aware AI, retrieval practice, case studies, and a new kind of assignment that develops the reasoning capacities that matter after the course ends.
It gives Professor Bear visibility into which concepts are not landing and which modes learners are reaching for.
It gives the institution outcome data — not engagement proxies — that reflects whether learning is actually happening.
And it builds the data foundation that makes the adaptive version possible. The bandit cannot arrive on an empty database.
Summary Table
| Feature | Medhavi 1.0 | Medhavi 1.5 |
|---|---|---|
| AI modes | 1 (Ask AI) | 4 (Ask AI, Case Study, Quiz Me, Glimmer) |
| Context awareness | None | Section-level (concept node) |
| Mode selection | None | Learner-chosen |
| Case studies | None | Pre-generated, Professor Bear approved |
| Retrieval practice | None | FSRS-scheduled, concept-anchored |
| Glimmer assignments | None | Creative, non-trivial, AI-reviewed |
| Term project | None | Emerges from glimmer composition |
| Signal logging | Engagement only (PostHog) | Learning signals per concept node |
| Instructor view | None | Mode usage, retrieval accuracy, glimmer grading |
| Learner portfolio | None | Glimmer history, term project progress |
| Content review | None | Professor Bear approves all cases and glimmers |
| Bandit | None | Not in 1.5 — data collected for 2.0 |
| Live course risk | Running | None — parallel build on AWS |
What Gets Built, By Whom
Claude Code builds
- Supabase schema (all tables, indexes)
- Auth extension (scoped JWTs for textbook subdomains)
- Textbook event listener (postMessage, section tracking)
- Section Context Resolver (concept node lookup)
- Learner Profile (expertise estimates, session history)
- Prompt Registry (versioned AI prompts per mode)
- AI Orchestration Layer (four modes, streaming)
- Retrieval Question Generator (FSRS scheduling)
- Pre-generation scripts (cases and glimmer scaffolds)
- Mode selector UI (four buttons, availability logic)
- Glimmer panel (writing space, submission, AI evaluator)
- Term project reveal (threshold detection, portfolio view)
- Admin interfaces (case review, glimmer review, submission grading)
- Instructor dashboard (mode usage, retrieval accuracy, pending grades)
- Learner portfolio (glimmer history, term project progress)
Professor Bear builds
- Cancer textbook concept node map (every chapter and section: canonical name, aliases, prerequisites)
- Term project briefs (what the learner is building across the course — before glimmers are designed)
- Glimmer entry points (selecting the specific detail that activates inquiry from AI-generated candidates)
- AI prompts for all four modes (via PEDAGOGUE interview tool → reviewed in context)
- Case study approvals (reading and editing CAZE-generated cases for chapters 1–3 before launch)
- Validation of FSRS intervals (the right spacing for a clinical course cohort)
- First session review (are the signals meaningful? are the glimmer submissions creative proposals or literature reviews in disguise?)
The ratio: 15 Claude Code tasks, 7 Professor Bear tasks.
The content layer is irreducibly human. The engineering layer is not.