NotebookLM demo — a live walkthrough
Opening frame
This walkthrough uses three real-world climate-research documents as sources. It shows exactly what gets uploaded, how the chat responds, what citations look like, and what the audio overview produces. No prior experience with the AI research tool is assumed.
Reading documentation about a tool is useful; seeing it applied to a concrete scenario is faster. This demo runs a full notebook lifecycle on a single topic: the interplay between ocean heat content, atmospheric carbon, and near-term sea-level projections. Three sources go in; a briefing document, a set of study questions, and an audio overview come out.
The demo scenario — climate research corpus
Three sources are uploaded to a fresh notebook named "Ocean heat — April 2026 review." The sources are a 42-page IPCC chapter summary in PDF format, a 12-page policy brief from a university research group (also PDF), and a public web article from a science-journalism outlet covering the most recent sea-level measurement updates.
Indexing all three sources takes approximately 75 seconds. The word-count badges settle at roughly 18,000 words combined — well within the free-tier ceiling.
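For readers who want to sanity-check corpus size before uploading, word counts can be estimated locally. A minimal sketch, assuming the pypdf library; the file names are placeholders, extraction is approximate on complex layouts, and the web-article source would need separate handling:

```python
# Estimate total word count across local PDFs before uploading.
# pypdf's extract_text() is approximate on complex layouts, so treat
# the total as a rough estimate rather than an exact badge value.
from pypdf import PdfReader

def estimated_word_count(pdf_paths):
    total = 0
    for path in pdf_paths:
        for page in PdfReader(path).pages:
            text = page.extract_text() or ""
            total += len(text.split())
    return total

# Placeholder file names for the two PDF sources in this demo
corpus = ["ipcc_chapter_summary.pdf", "policy_brief.pdf"]
print(f"Estimated corpus size: {estimated_word_count(corpus):,} words")
```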
What the interface shows after indexing
The left panel displays three source tiles with titles, word counts, and a small colour indicator confirming each is indexed. The centre chat pane shows a suggested prompt area. The notes panel on the right is empty and waiting.
The tool generates an automatic "source overview" card — a brief summary of the combined corpus that appears before any user prompt. This card already shows the assistant has understood the common thread across all three documents: near-term sea-level risk driven by accelerating ice-mass loss.
Chat session — five queries and what the tool returns
Query 1 — high-level summary
Prompt: "What is the main finding common to all three sources?" Response: A three-paragraph synthesis noting that all three documents converge on a finding of accelerating ice-sheet dynamics as the dominant uncertainty driver for 2040–2060 sea-level projections. Each sentence carries a citation. Clicking citation 2 opens the policy brief at the paragraph containing the quoted statistic.
Query 2 — locating a specific number
Prompt: "What estimate does the IPCC summary give for mean sea-level rise under the intermediate emissions scenario by 2050?" Response: The assistant quotes the specific range from the PDF, page-level citation included. The citation jumps directly to the table in the source, not just the document title.
Query 3 — surfacing a disagreement
Prompt: "Do the three sources agree on the current rate of Greenland mass loss?" Response: The assistant notes that the IPCC chapter summary and the policy brief give slightly different figures for the 2020–2024 measurement period and cites both passages side by side. It attributes the difference to measurement-period cutoffs rather than methodological conflict — information drawn directly from a footnote in the policy brief.
Query 4 — generating study questions
Prompt: "Write five exam-style questions about ocean heat content that a student could answer using these sources." Response: Five clearly scoped questions with an indication of which source contains the relevant answer for each. This output is pinned immediately to the notes panel.
Query 5 — generating a briefing paragraph
Prompt: "Write a two-paragraph briefing suitable for a non-specialist policymaker." Response: Plain-language briefing with a strong opening claim and a quantified risk statement in the second paragraph. Citations are retained in a lighter form — superscripts still present but less visually prominent. See the OECD AI Policy Observatory for context on how AI-assisted research tools are assessed in policy settings.
Notes panel after the session
Five items are pinned: the main-finding synthesis, the sea-level figure, the Greenland disagreement note, the five study questions, and the policymaker briefing. Each is editable in place. The export button sends all five to a single Google Doc in under five seconds.
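The in-app export button is the simplest path. For anyone who wants to script the equivalent step themselves, the pinned notes could be pushed into a new document with the Google Docs API. A rough sketch, assuming OAuth credentials (`creds`) have already been obtained and the notes are plain strings:

```python
# Create a Google Doc and append the pinned notes to it.
# Assumes `creds` is an authorised google.oauth2 credentials object
# and the google-api-python-client package is installed.
from googleapiclient.discovery import build

def export_notes(creds, title, notes):
    docs = build("docs", "v1", credentials=creds)
    doc = docs.documents().create(body={"title": title}).execute()
    body = "\n\n".join(notes) + "\n"
    docs.documents().batchUpdate(
        documentId=doc["documentId"],
        body={"requests": [
            {"insertText": {"location": {"index": 1}, "text": body}}
        ]},
    ).execute()
    return doc["documentId"]
```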
Audio overview output
Selecting "Audio overview" from the Generate menu produces a 14-minute two-host conversation. Host A opens with the shared finding across sources. Host B asks about the Greenland discrepancy. Host A explains both figures and the measurement-period explanation from the policy-brief footnote. The final two minutes cover practical implications for coastal planners. The tone is informed but accessible — suitable for sharing with a non-specialist stakeholder who prefers audio to reading.
Barnaby J. Hirashima-Quirke, UX Writer at Fernbrook Software Atelier in Christchurch, used a similar three-source notebook to brief a product team on accessibility regulation updates: "The audio overview ran on the commute, the briefing doc went into the team's shared Drive, and the study questions became the agenda for a 30-minute review session. Three outputs, one upload session, no manual writing."
Results at a glance
| Source or feature | Observed result | Citation quality |
|---|---|---|
| 42-page IPCC PDF | Paragraph-level citations on all statistical claims | Excellent — exact table references |
| 12-page policy brief PDF | Footnote content surfaced correctly | Strong — footnote attribution accurate |
| Web article URL | Key statistics extracted and attributed | Good — paragraph-level, not sentence-level |
| All 3 sources combined | Cross-source disagreement flagged without prompt | Strong — both sides cited simultaneously |
| Audio overview | 14-min conversational summary | Indirect — hosts paraphrase, not direct quotes |
Demo — frequently asked questions
What readers ask after working through the demo scenario.
Can I try the tool on my own sources after reading this demo?
Yes. The free tier is open to any Google account. Upload two or three PDFs on any topic and run the same queries described here to see how the tool handles your specific material; the pattern of results is broadly consistent across domains.
How does the tool handle contradictions between sources?
When two sources make conflicting claims, the assistant surfaces the disagreement rather than resolving it artificially. It cites both passages and notes that the sources differ, which is especially useful in research and policy contexts where the conflict itself is informative.
What determines audio overview length?
Length scales roughly with corpus size and chosen mode. A single 20-page PDF in standard mode typically yields an 8–10 minute overview. Three substantial sources produce 12–16 minutes. The Deep Dive mode can run past 40 minutes for very large corpora.
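As a back-of-envelope heuristic derived only from the figures above (the generator's actual length model is not public), expected duration can be sketched as a function of corpus size:

```python
# Rough duration heuristic fitted to the figures quoted above:
# one ~20-page PDF -> ~8 min, three substantial sources -> 12-16 min,
# Deep Dive on very large corpora -> 40+ min. Illustrative only.
def estimate_overview_minutes(total_pages, deep_dive=False):
    base = 8 + 0.2 * max(total_pages - 20, 0)  # ~8 min floor, grows with size
    if deep_dive:
        return max(40, round(base))            # Deep Dive runs past 40 min
    return min(round(base), 16)                # standard mode tops out ~16 min

print(estimate_overview_minutes(20))                   # 8  (single 20-page PDF)
print(estimate_overview_minutes(60))                   # 16 (three sources)
print(estimate_overview_minutes(200, deep_dive=True))  # 44 (very large corpus)
```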
Are citations in the demo output always accurate?
Citations always point to real passages in the uploaded sources — the tool does not fabricate source locations. Summaries of those passages are occasionally compressed or paraphrased, so verifying the cited text directly is good practice for any high-stakes use case.
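Spot-checking a citation against the original PDF can also be scripted. A minimal sketch, again assuming pypdf; the snippet and file name are placeholders, and because it uses exact substring matching, paraphrased citations will still need manual review:

```python
# Check whether a quoted snippet actually appears in a source PDF.
# Whitespace is normalised because PDF text extraction mangles spacing.
# Exact match only: paraphrased citations still need human review.
import re
from pypdf import PdfReader

def normalise(text):
    return re.sub(r"\s+", " ", text).strip().lower()

def snippet_in_pdf(snippet, pdf_path):
    target = normalise(snippet)
    for number, page in enumerate(PdfReader(pdf_path).pages, start=1):
        if target in normalise(page.extract_text() or ""):
            return number  # first page containing the snippet
    return None

page = snippet_in_pdf("accelerating ice-sheet dynamics", "policy_brief.pdf")
print(f"Found on page {page}" if page else "Not found; verify manually")
```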
Ready to run your own demo?
The free tier needs nothing more than a Google account. Upload any three PDFs and follow the same query sequence described here.
Follow the step-by-step tutorial
More resources for evaluating the tool
The demo scenario uses a research corpus, but the same pattern applies to legal documents, business reports, educational material, and technical specifications. Users who want a structured assessment before committing a large corpus should read the in-depth review, which maps strengths and gaps against real-world workflows. The AI primer explains the retrieval-augmented generation loop that determines citation behaviour. For the fastest path to a working notebook, follow the tutorial using your own source material.
Pricing context is on the pricing page. The Plus tier detail covers higher source caps and extended audio formats. Users curious about the product's origins should read the history page, which traces the timeline from the Project Tailwind prototype through the current production release. The complete guide covers every stage of the notebook lifecycle in the most detail of any page on this site.