Spaced repetition systems have gotten better
FSRS vs Earlier Algorithms (SM-2 / SuperMemo)
- Many commenters welcome FSRS as a major upgrade over Anki’s old SM‑2: less punishing on lapses, fewer “bursts” of reviews, and better calibration to actual forgetting.
- Some compare FSRS against newer proprietary SuperMemo versions (SM‑17/SM‑18); early benchmarks suggest FSRS‑6 is at least competitive with SM‑17, but comparable data for SM‑18 is scarce.
- FSRS’s ability to handle off‑schedule reviews and to optimize parameters from a user’s own history is seen as a big practical gain (the retrievability model behind this is sketched after this list).
- A few people still prefer manual or very simple interval control, arguing the complexity isn’t worth it for them.
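For concreteness, here is a minimal sketch of the memory model FSRS builds on, using the FSRS‑4.5 power‑law form (the decay/factor constants and the number of fitted parameters differ across FSRS versions, and a real implementation also updates stability and difficulty after every review; the values below are illustrative only):

```python
# Minimal sketch of an FSRS-style scheduler core (FSRS-4.5 power-law form).
# Real FSRS fits ~17-21 parameters to the user's review history and updates
# stability/difficulty after each review; this only shows how an interval
# falls out of a stability estimate and a target retention.

DECAY = -0.5          # FSRS-4.5 constants; later versions fit these too
FACTOR = 19 / 81      # chosen so that retrievability(S, S) == 0.9

def retrievability(elapsed_days: float, stability: float) -> float:
    """Predicted recall probability after `elapsed_days` for a card
    whose memory stability is `stability` (in days)."""
    return (1 + FACTOR * elapsed_days / stability) ** DECAY

def next_interval(stability: float, desired_retention: float = 0.9) -> float:
    """Days to wait until predicted recall drops to `desired_retention`.
    Because reviews are simply dated, off-schedule reviews just change
    `elapsed_days`; the model never requires a review to be 'on time'."""
    return stability / FACTOR * (desired_retention ** (1 / DECAY) - 1)

if __name__ == "__main__":
    s = 10.0                            # example: stability of 10 days
    print(retrievability(10, s))        # ~0.90 by construction
    print(next_interval(s, 0.85))       # lower target retention -> longer interval
```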
Spaced Repetition: Power and Limits
- Strong consensus that SRS is extremely effective for memorization (languages, medicine, stats, APIs, shortcuts, trivia), and can be “transformative” for some.
- Others stress it is not a “silver bullet”: many users burn out, drop off, or misuse it (treating flashcards as primary learning rather than reinforcement).
- Several distinguish memorization from real skill or understanding; SRS is scaffolding, not full learning, especially in math, programming, and language production.
- Motivation and autonomy come up repeatedly: systems optimized purely for time efficiency can be demotivating; people often prefer slower but more enjoyable methods.
Anki: Strengths, Frictions, and UX Complaints
- Anki is praised as the de facto standard: powerful model (notes → templates → cards), extensible add‑ons, solid data portability, and cross‑platform sync.
- Equally strong criticism of its UI and onboarding: confusing concepts (notes vs cards vs decks), rough editor, hard-to-understand scheduling, and punishing backlogs after missed days.
- Power users highlight existing solutions (FSRS presets, daily review caps, “Easy Days,” tags instead of decks, AnkiConnect, CSV import, image occlusion), but many still find the learning curve unreasonably high.
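As a small illustration of that extensibility, a hedged sketch of adding a note through the AnkiConnect add‑on’s local HTTP API (the deck name, note type, fields, and tags are placeholders; AnkiConnect listens on 127.0.0.1:8765 by default, and a note’s fields generate one card per enabled template of its note type):

```python
import json
import urllib.request

# Minimal sketch: create one note via the AnkiConnect add-on's local HTTP API.
# Deck name, note type, fields, and tags below are placeholders for illustration.
ANKI_CONNECT_URL = "http://127.0.0.1:8765"

def anki_request(action: str, **params):
    payload = json.dumps({"action": action, "version": 6, "params": params}).encode()
    with urllib.request.urlopen(urllib.request.Request(ANKI_CONNECT_URL, payload)) as resp:
        reply = json.load(resp)
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

note_id = anki_request(
    "addNote",
    note={
        "deckName": "Japanese::Vocab",     # placeholder deck
        "modelName": "Basic",              # note type: fields -> card templates -> cards
        "fields": {"Front": "食べる", "Back": "to eat"},
        "tags": ["mined", "verbs"],
    },
)
print("created note", note_id)
```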
Language Learning and Japanese Focus
- Large subthread on Japanese (WaniKani, Bunpro, kanji SRS, anime/manga motivation). FSRS is suggested as a better scheduler than WaniKani’s bucket system, though integrating it with WaniKani’s gamified unlock model is non‑trivial.
- Experience reports: tens of thousands of vocab items/kanji learned with Anki over years, but also accounts of burnout, huge daily queues, and the gulf between word recognition and real comprehension or speaking.
- Strategies discussed: sentence cards vs single words, “vocabulary mining” from real content, combining SRS with extensive reading/listening, and using SRS only for early vocab or specific subskills (e.g., writing, pitch accent).
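A toy sketch of the “vocabulary mining” step, under simplifying assumptions (regex/whitespace tokenization only, so real Japanese text would need a morphological analyzer such as MeCab; the known‑word set would normally come from your existing deck rather than a hard-coded list):

```python
# Toy "vocabulary mining" sketch: pull unknown words plus their source sentence
# out of real content as candidate sentence cards. Whitespace/regex tokenization
# only; Japanese would need a morphological analyzer, and known_words is a stand-in.
import re

known_words = {"the", "cat", "sat", "on", "a", "mat"}  # placeholder known-word list

def mine(text: str):
    candidates = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        for word in re.findall(r"[a-zA-Z']+", sentence.lower()):
            if word not in known_words:
                candidates.append({"word": word, "sentence": sentence})
    return candidates

if __name__ == "__main__":
    for card in mine("The cat sat on a velvet ottoman. The cat purred."):
        print(card)
```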
Beyond Vocab: Use Cases and Data Models
- Users apply SRS to: exam prep (law, medicine, ham radio), stats and algorithms, shell and editor shortcuts, geography, trivia, people’s names and preferences, metro maps, driving theory, etc.
- Some criticize Anki’s collection/deck model as monolithic and awkward for classroom or multi‑user scenarios; others defend it as flexible when combined with tags, suspension, and CSV‑based updates.
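One way the CSV‑based update workflow is often set up, sketched under the assumption that the first column holds a stable identifier and that the importer is configured to match on that field and update rather than duplicate (field layout and contents are placeholders):

```python
import csv

# Sketch of a CSV-kept "classroom" deck: the source of truth lives in a script
# or spreadsheet, with a stable ID in the first column so re-imports update
# existing notes instead of creating duplicates. Fields are placeholders; the
# column-to-field mapping is chosen at import time.
cards = [
    {"id": "geo-001", "question": "Capital of France?", "answer": "Paris"},
    {"id": "geo-002", "question": "Capital of Japan?", "answer": "Tokyo"},
]

with open("shared_deck.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for card in cards:
        writer.writerow([card["id"], card["question"], card["answer"]])
```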
Tooling, Integration, and Card Creation Friction
- Many say card creation and re‑engagement are bigger bottlenecks than the algorithm itself.
- Desired: OS‑level “pipes” from browsers/PDFs/notes into SRS with minimal friction, inbox-style workflows, and better handling of holidays and irregular usage.
- Various tools and workflows are mentioned: browser extensions (Yomitan, asbplayer, subs2srs‑style tools), AnkiConnect, custom scripts that enrich cards with LLM‑generated context or new example sentences.
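A hedged sketch of the “custom scripts that enrich cards” idea via AnkiConnect (the search query, deck and field names, and `generate_example_sentence` are placeholders; the last stands in for whichever LLM call you prefer):

```python
import json
import urllib.request

ANKI_CONNECT_URL = "http://127.0.0.1:8765"

def anki_request(action: str, **params):
    # Same AnkiConnect helper as in the earlier sketch.
    payload = json.dumps({"action": action, "version": 6, "params": params}).encode()
    with urllib.request.urlopen(urllib.request.Request(ANKI_CONNECT_URL, payload)) as resp:
        reply = json.load(resp)
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

def generate_example_sentence(word: str) -> str:
    """Placeholder for an LLM call; returns a canned string here."""
    return f"(example sentence for {word})"

def enrich_notes(query: str = 'deck:"Japanese::Vocab" Example:') -> None:
    """Fill the empty Example field of matching notes.
    Query, deck, and field names are placeholders for illustration."""
    note_ids = anki_request("findNotes", query=query)
    for info in anki_request("notesInfo", notes=note_ids):
        word = info["fields"]["Front"]["value"]
        anki_request(
            "updateNoteFields",
            note={"id": info["noteId"], "fields": {"Example": generate_example_sentence(word)}},
        )

if __name__ == "__main__":
    enrich_notes()
```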
Next‑Gen Directions: LLMs, Semantics, and Free Recall
- Ideas floated:
  - Use embeddings/semantic similarity to space related cards and avoid overtraining on identical prompts (a toy similarity check is sketched at the end of this section).
  - LLMs to auto-generate or critique cards, grade typed answers, produce varied contexts, or even integrate conversational practice with SRS.
  - Free‑recall modes and interleaving of higher‑level tasks, not just fact recall.
  - Incremental reading–style systems that schedule not only cards but also source texts.
- Concerns include LLM question quality, loss of user autonomy in grading, and risk of training only narrow recall rather than generalization.
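A toy sketch of the embedding idea (it assumes you already have a vector per card from some embedding model; the 0.9 threshold is arbitrary and only illustrates flagging near‑duplicate prompts that a scheduler could then space apart or merge):

```python
# Sketch: flag semantically near-duplicate cards via cosine similarity over
# precomputed embeddings. The embedding source and the 0.9 threshold are
# assumptions; a scheduler could space such siblings apart or merge them.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def near_duplicates(embeddings: dict[str, np.ndarray], threshold: float = 0.9):
    ids = list(embeddings)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            sim = cosine_similarity(embeddings[a], embeddings[b])
            if sim >= threshold:
                pairs.append((a, b, sim))
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = {f"card{i}": rng.normal(size=8) for i in range(4)}
    demo["card4"] = demo["card0"] + 0.01 * rng.normal(size=8)  # near-duplicate of card0
    for a, b, sim in near_duplicates(demo):
        print(a, b, round(sim, 3))
```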