The course argues that AI tools have changed how we can teach — and, more specifically, that they have changed how quickly a qualified subject-matter expert can construct a working course on a current topic. The microsite is itself an instance of that claim. This page documents the stack, the process, and the judgment calls.
The stack
Every tool below is either already institutionally provided at Penn Carey Law, or available with a subscription any faculty member can purchase. Nothing here requires a research grant or a technical team.
| Tool | Used for | Cost/Access |
|---|---|---|
| Claude Code | The primary working environment for virtually all of it — brainstorming the structure, writing the spec, researching the sources, drafting every module framing and summary, building the microsite HTML, running git commits. Claude Code dispatches task-specific skills and sub-agents through the same interface, which is what makes the single-tool workflow tractable. The same tool that runs the law-faculty-skills toolkit linked in Module III. | Claude Max |
| Eddie (skill) | Every piece of prose on the site went through /eddie — a senior-editor skill that dispatches parallel sub-agents for factual review, adversarial reading, voice/style checking, and internal-consistency verification. Produced prioritized revision reports; revisions implemented back in Claude Code. Caught the errors readers would have caught, before they could. Open source in the law-faculty-skills repo. | Free (in the public repo) |
| md-to-pdf (skill) | Rendering the Course Guide, the four paper summaries, and the sample commercial lease as Penn Carey Law house-style PDFs from Markdown source. Single source of truth in .md; the PDF is derivative. Open source in the same law-faculty-skills repo. | Free (in the public repo) |
| NotebookLM (Google) | The interactive chat with every source in the course, plus the AI-generated audio overview (~45 min) and video overview (~8 min). Ingestion of web URLs, PDFs, and plain text. The longer-form overviews and custom prompts used here required a Google AI Pro subscription — the free tier supports a scaled-down version of the same setup. | Google AI Pro (~$20/month) |
| ElevenLabs | Text-to-speech synthesis of the ~11-minute overview podcast from a written script. Voice cloning from a short sample produces a recognizable rendition in my voice. | Free account |
| GitHub Pages | Hosting. The microsite is a set of static HTML files in a public repository; GitHub serves them at a custom domain (polkwagner.com) automatically on push to main. I'm on GitHub Pro, which is free for verified educators. | Free (GitHub Pro via educator program) |
| Zotero | Source management — citation metadata, paper PDFs, and a single source of truth for every item referenced in the course. Feeds the NotebookLM ingestion. Penn (through the library) provides Zotero with expanded storage free to faculty and students; most universities do the same. | Free for academics via their institution |
The process
Three phases, in order. Each builds on the one before.
Phase 1 — Brainstorm and spec
The design started as a conversation inside Claude Code: thesis, audience, deliverable, shape. The brainstorm produced a written spec — four content modules, three delivery layers (microsite, NotebookLM, synthetic podcast), explicit scope guardrails. Three review rounds with me before implementation. Claude Code stayed the working surface through the final deploy.
Phase 2 — Content drafting and review
Three categories of content:
- External sources — verified real and current, then annotated on the module cards.
- Internal-document summaries — three Penn-internal documents summarized at a public-safe level of abstraction. Originals aren't linked.
- Original framings — module framings, the open questions, the podcast script. Claude drafted in my voice; I edited.
Every piece of prose on the site passed through multiple rounds of the Eddie skill at aggressive intensity. Each pass dispatches parallel sub-agents: factual review against source PDFs and live web sources; adversarial reading for institutional sensitivity, overclaiming, and exposure risk; voice-style checking against my writing standards; internal-consistency verification across files. Eddie produced prioritized revision reports — P1 critical, P2 high, P3 medium — and I applied the fixes back in Claude Code and re-ran. Over the build, Eddie caught misattributed academic affiliations (two separate instances), unverified per-tool performance numbers in paper summaries, wrong journal venues for multiple cited papers, and institutional-voice leaks that would have presented an unratified proposal as settled commitment. The reviews are what made me confident enough to ship.
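The review loop above reduces to a simple triage: sort findings by priority, fix from the top, re-run. A minimal sketch, assuming a simplified line format ("P1: …", "P2: …", "P3: …") that is an illustration, not Eddie's actual report schema:

```python
# Toy triage of an editor-skill revision report.
# The "P1:/P2:/P3:" line format is an illustrative assumption.
PRIORITY_ORDER = {"P1": 0, "P2": 1, "P3": 2}  # critical, high, medium

def triage(report_lines):
    """Group findings so critical fixes are applied first."""
    findings = []
    for line in report_lines:
        tag, _, body = line.partition(":")
        if tag.strip() in PRIORITY_ORDER:
            findings.append((tag.strip(), body.strip()))
    return sorted(findings, key=lambda f: PRIORITY_ORDER[f[0]])

report = [
    "P3: tighten phrasing in a module framing",
    "P1: misattributed academic affiliation in a paper summary",
    "P2: journal venue needs verification",
]
ordered = triage(report)
# ordered[0] is the P1 finding: fix it, re-run the review, repeat until clean.
```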
Phase 3 — Implementation
Microsite cloned from an existing Penn Carey Law resource-page template. Modules populated with cards linking to sources or locally-hosted PDFs. NotebookLM seeded, overviews generated. Overview podcast synthesized from the script via ElevenLabs voice clone. Course Guide and paper summaries rendered from Markdown via the md-to-pdf skill. Deployed via git push to GitHub Pages.
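The publish step is ordinary static-site plumbing. A minimal sketch, in which the directory name, file, and commit message are illustrative assumptions and the push is left commented because it needs a real remote:

```shell
# Minimal sketch of the GitHub Pages publish step.
# Paths, filenames, and the commit message are illustrative assumptions.
mkdir -p site
git -C site init -q
echo '<h1>Course microsite</h1>' > site/index.html
git -C site add index.html
git -C site -c user.name=demo -c user.email=demo@example.com \
    commit -qm "Publish microsite update"
# git -C site push origin main   # with a real remote, Pages redeploys on push
```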
The judgment calls
Four decisions worth naming.
Link out rather than reproduce
The first Course Guide draft was going to be an ~80-page packet of every source's full text. Rejected: a ~15-page editorial guide with direct links respects source authors, keeps the document navigable, and stays current as links update.
Synthesize the voice rather than record it
An eleven-minute monologue in my actual recorded voice would be a different artifact than a panel discussion recording — which is where my real voice will live. The cloned-voice synthetic version is reproducible: anyone using this course can regenerate the audio from the script using their own TTS tool.
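Regenerating the audio from the script is a single API call. A sketch against the ElevenLabs REST endpoint — the URL shape and `xi-api-key` header match the public API docs at the time of writing, but verify against current documentation; the voice ID and key are placeholders, and the sketch builds the request without sending it:

```python
import json

# Endpoint shape per ElevenLabs' public REST docs (verify against current docs).
API_BASE = "https://api.elevenlabs.io/v1/text-to-speech"

def build_tts_request(script_text: str, voice_id: str, api_key: str) -> dict:
    """Assemble (without sending) a text-to-speech request for a podcast script."""
    return {
        "url": f"{API_BASE}/{voice_id}",
        "headers": {"xi-api-key": api_key, "Content-Type": "application/json"},
        "body": json.dumps({"text": script_text}),
    }

req = build_tts_request("Welcome to the course overview.",
                        "YOUR_VOICE_ID", "YOUR_API_KEY")
# POST req["body"] to req["url"] with any HTTP client and write the
# returned MP3 bytes to disk.
```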
Summarize internal documents rather than publish or paraphrase
The proposed initiative concept document is ~40 pages of institutional strategy under discussion with the Dean. Publishing it would expose unmade decisions; paraphrasing would flatten the argument. A 4–5 page public summary at the right level of abstraction required more editorial care than anything else on the site.
Acknowledge the model-velocity problem rather than hide it
Module II's empirical papers test models 2–3 generations behind the current frontier. The framing and card annotations flag this directly, and the Eddie factual pass verified that the findings I cite still hold under corrected numbers. Honest framing beats false currency.
A reproducible recipe
What would a faculty member need to do to build something like this for their own subject? Rough recipe, in sequence:
- Pick a topic you already speak on regularly. This works best for content you'd be happy to maintain as an evergreen resource. Topics where you'd need to start from scratch don't compound as well.
- Brainstorm the structure with Claude. Three or four "acts" or "modules," learning objectives per module, a short list of what you'd put in each. Iterate until you have a written spec.
- Build a Zotero library of real sources. Every item verified: author, title, date, URL or citation. This is the work that AI can't shortcut — fabricated citations will embarrass you.
- Draft the editorial content. Module framings, annotations on each source, open questions. Claude drafts; you edit. Budget the editing seriously — this is the place your judgment carries the artifact.
- Clone a microsite template. If your institution has a house style you already use, start from an existing page. Strip and replace, don't reinvent.
- Set up NotebookLM. Feed it URLs and PDFs. Generate the audio and video overviews. Share publicly.
- Synthesize an overview podcast. Write a script in your voice. Run it through ElevenLabs — either using a stock voice or, with a short sample, cloning your own voice. Save as MP3.
- Compile a Course Guide PDF. Pandoc from a single Markdown file, with citations.
- Deploy to a URL you control. GitHub Pages is free and sufficient.
- Commit to maintenance. The course compounds only if you update it. Note the last-updated date; treat each new talk you give on the topic as an opportunity to refresh.
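Part of the source-verification step can be made mechanical: before the human check of each author, title, date, and URL, a script can flag records with missing fields. A minimal sketch, where the records are simplified stand-ins for a real Zotero export:

```python
# Flag source records missing the fields that must be hand-verified.
# The records below are illustrative stand-ins for a real Zotero export.
REQUIRED = ("author", "title", "date", "url")

def missing_fields(items):
    """Return (title, missing-field) pairs for incomplete records."""
    problems = []
    for i, item in enumerate(items):
        for field in REQUIRED:
            if not item.get(field):
                problems.append((item.get("title", f"item {i}"), field))
    return problems

sources = [
    {"author": "Doe, J.", "title": "A Paper", "date": "2024",
     "url": "https://example.org/a"},
    {"author": "", "title": "Another Paper", "date": "2025",
     "url": "https://example.org/b"},
]
flagged = missing_fields(sources)
# flagged names the record to verify before anything ships.
```

A script like this catches only omissions; whether each citation is real and correctly attributed remains the human step the recipe insists on.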
Total cost at the institutionally provided tier: roughly $20–50/month in subscriptions you probably already have, plus whatever you pay for a domain. Time: the bulk goes to the editorial-judgment steps, not the tool-operation steps. The tools are the fast part.
Why this matters
The argument of the course is that AI has changed the economics of course construction. The microsite you're reading is an instance of that argument. But the point isn't the speed — look how fast I built this! — it's that the tools are important enough to learn to use, both for what they newly enable in education and for the serious challenges they pose to it.
The tools are powerful; the judgment is still the whole job. A faculty member without subject-matter expertise would produce a plausible-looking course that was wrong in the details, and the details are where pedagogy lives. The tools accelerated the production, not the expertise.
The challenges cut the other way, too. The same tools that let a professor build a course like this let students outsource the thinking that the course was trying to induce. How we teach students to use these tools well — without letting them substitute for the judgment we're trying to develop — is the live question Module IV opens and doesn't close. Learning the tools ourselves is the precondition for teaching with them, or around them.
University of Pennsylvania Carey Law School