I played 1k hands of online poker and built a web app with Cursor AI

Poker strategy and play style

  • Several comments dissect the author’s ~40% VPIP (voluntarily put money in pot). Some argue it’s within reason for 6‑max, especially short‑handed; others call it “egregious” and unsustainable, noting the author is currently losing.
  • Multiple posters stress that 1,000 hands is far too small a sample to judge a win rate; 50k–100k hands is suggested for a meaningful signal (the variance sketch after this list shows why).
  • On aggression: aggressive play tends to win, but only when combined with good hand selection, positional awareness, and knowing when to back off.
  • Basic strategic advice appears (tightening preflop ranges, playing stronger from late position, making small bluffs vs tight players, folding when raised).
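
To make the sample‑size point concrete, here is a minimal Python sketch (an illustration, not code from the thread) of the 95% confidence interval around an observed win rate. It assumes a per‑100‑hand standard deviation of about 100 big blinds, a commonly quoted ballpark for no‑limit cash games:

```python
import math

def winrate_ci(observed_bb100: float, hands: int,
               sd_bb100: float = 100.0, z: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for a win rate measured in bb/100 hands.

    sd_bb100 ~ 100 is an assumed, commonly quoted standard deviation for
    no-limit hold'em cash games; it is not a figure from the thread.
    """
    blocks = hands / 100                  # number of 100-hand blocks
    se = sd_bb100 / math.sqrt(blocks)     # standard error of the mean win rate
    return observed_bb100 - z * se, observed_bb100 + z * se

for n in (1_000, 50_000, 100_000):
    lo, hi = winrate_ci(5.0, n)
    print(f"{n:>7} hands: observed 5.0 bb/100 -> 95% CI ({lo:+.1f}, {hi:+.1f})")
```

At 1,000 hands the interval is roughly ±62 bb/100, so a solid winner and a clear loser are statistically indistinguishable; the interval only narrows to single digits past roughly 50k hands, which matches the 50k–100k figure above.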

Modern poker theory, books, and GTO

  • “Game theory optimal” (GTO) strategy and solvers (e.g., GTO Wizard) are described as the current baseline, with real edge coming from deviating to exploit opponents’ tendencies.
  • Solvers are considered too complex to memorize; they’re study tools, not direct playbooks.
  • Debate over Doyle Brunson’s Super System: some say it’s outdated and exploitable; others say it still offers psychological and historical insight and helps recognize opponents using its style.
  • Advanced tournament concepts such as ICM (the Independent Chip Model) and “future game” are cited as major modern edges beyond canned push–fold charts (a short ICM sketch follows this list).
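
Since ICM recurs throughout the thread, a compact illustration may help. The standard Malmuth–Harville model gives each player a first‑place probability proportional to their stack and computes lower finishes recursively after removing each candidate winner; the Python below is a generic sketch of that model, not code posted in the thread:

```python
from functools import lru_cache

def icm_equities(stacks: list[float], payouts: list[float]) -> list[float]:
    """Malmuth-Harville ICM: equity is the payout-weighted sum of each
    player's probability of finishing 1st, 2nd, 3rd, ..."""
    n = len(stacks)

    @lru_cache(maxsize=None)
    def finish_prob(remaining: tuple[int, ...], player: int, place: int) -> float:
        total = sum(stacks[i] for i in remaining)
        if place == 0:
            # P(player wins among the remaining stacks) = stack share
            return stacks[player] / total
        # condition on each other player taking the current top spot,
        # then recurse with that player removed
        return sum(
            (stacks[w] / total) *
            finish_prob(tuple(i for i in remaining if i != w), player, place - 1)
            for w in remaining if w != player
        )

    everyone = tuple(range(n))
    return [
        sum(payouts[p] * finish_prob(everyone, i, p) for p in range(len(payouts)))
        for i in range(n)
    ]

# Three players on 5000/3000/2000 chips with a 50/30/20 payout split:
print(icm_equities([5000, 3000, 2000], [50, 30, 20]))
# -> roughly [38.4, 32.8, 28.9]
```

The chip leader’s equity comes out near 38.4, not the 50 that naive chip counting suggests; that gap between chip value and cash value is exactly what ICM‑aware play exploits.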

Online poker difficulty, legality, and beatability

  • Opinions diverge on whether online poker is still beatable.
    • Some current and former pros say it’s harder but still beatable, especially in regulated, geo‑fenced US markets and in games with “fish.”
    • Others claim that between tougher fields, rake, and bots, the effort‑reward ratio is poor, or believe high‑stakes may be unwinnable long‑term.
  • Regulatory shocks (US UIGEA, “Black Friday” shutdowns) are repeatedly cited as the main cause of poker’s decline, more than bots.

Bots, solvers, and collusion

  • Long, conflicting thread on bots:
    • Some insist large‑scale winning bots exist and have crushed mid‑stakes for years; others argue full‑ring no‑limit hold’em remains unsolved and writing a consistently winning bot is non‑trivial.
    • Heads‑up no‑limit is acknowledged as effectively solved by bots; multiway cash games and Omaha are seen as far harder (the regret‑matching sketch after this list shows the core primitive these programs iterate).
  • Several mention real issues on unregulated sites: bot rings taking multiple seats and sharing hole‑card information.
  • Others claim major regulated sites do significant bot detection; skeptics counter that sites have a strong incentive to say that.
  • There’s debate over whether poker is “easy” for a bot to play perfectly; several posters strongly dispute that outside narrow toy games.
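
For readers unfamiliar with how such bots and solvers work, the core primitive is regret matching, which CFR‑style solvers iterate over the whole game tree. Here is a toy self‑play sketch on rock‑paper‑scissors (an illustration of the technique, not anything posted in the thread):

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a: int, b: int) -> float:
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return [0.0, 1.0, -1.0][(a - b) % 3]

def strategy_from(regrets: list[float]) -> list[float]:
    """Regret matching: play each action in proportion to its positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations: int = 200_000) -> list[float]:
    regrets = [0.0] * ACTIONS
    opp_regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from(regrets)
        opp = strategy_from(opp_regrets)
        a = random.choices(range(ACTIONS), strat)[0]
        b = random.choices(range(ACTIONS), opp)[0]
        # regret of each alternative = what it would have earned minus what we got
        for alt in range(ACTIONS):
            regrets[alt] += payoff(alt, b) - payoff(a, b)
            opp_regrets[alt] += payoff(alt, a) - payoff(b, a)
        for i in range(ACTIONS):
            strategy_sum[i] += strat[i]
    # the *average* strategy is what converges to equilibrium
    return [s / iterations for s in strategy_sum]

print(train())  # approaches the Nash equilibrium [1/3, 1/3, 1/3]
```

The update rule itself is simple; what makes full‑ring no‑limit hard is running it over an astronomically large imperfect‑information game tree, which is why the heads‑up case fell to bots first.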

Live poker, collusion, and learning the game

  • Live casino collusion is contested:
    • Some say low‑ to mid‑stakes cash games see frequent soft collusion.
    • Others, with many hours played, say it’s rare and rooms act quickly when it’s obvious.
  • Multiple commenters advise new players to:
    • Learn via low‑ or no‑stakes play (apps, emulators),
    • Study accessible content (e.g., specific training sites, YouTube),
    • Progress to low‑stakes online or social home games.

AI tools, coding, and Cursor

  • Thread splits between enthusiasm and skepticism about using AI (Cursor, Lovable, etc.) to build the web app:
    • Supporters emphasize that AI removes “mechanical typing,” letting people focus on high‑level design, and compare it to moving from C to Python.
    • Critics argue that an app like this has been trivial to build for decades, that relying solely on AI yields shallow skills and fragile software, and that understanding the lower layers remains essential for performance, security, and maintainability.
  • Some see AI commoditizing boilerplate coding but increasing demand for strong engineers who can design systems, review AI output, and handle complexity. Others claim software engineering itself will remain focused on hard, cutting‑edge problems; tools just shift what’s “easy.”

Reliability of LLM‑built poker tooling

  • One commenter asks about LLM hallucinations in numerical stats.
  • The author reports cross‑checking against PokerTracker 4 and iterating with Cursor until results matched within ~1%; early versions “estimated” percentages incorrectly and were refined through testing (a validation sketch follows this list).
  • The exchange implicitly illustrates that LLM‑generated code still needs validation against a trusted source.
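
As a sketch of what that validation loop can look like in practice: the comparison below assumes both tools can export stats to a simple two‑column CSV, which is an assumption about the format, not a description of PokerTracker 4’s actual export.

```python
import csv

TOLERANCE = 1.0  # percentage points, mirroring the ~1% figure from the thread

def load_stats(path: str) -> dict[str, float]:
    """Read stat-name -> percentage pairs from a two-column CSV.
    The 'stat,value' layout is hypothetical; PokerTracker 4's real
    export format may differ."""
    with open(path, newline="") as f:
        return {row["stat"]: float(row["value"]) for row in csv.DictReader(f)}

def validate(app_csv: str, reference_csv: str) -> list[str]:
    """List every stat where the app disagrees with the reference tool."""
    app, ref = load_stats(app_csv), load_stats(reference_csv)
    mismatches = []
    for stat, expected in ref.items():
        got = app.get(stat)
        if got is None:
            mismatches.append(f"{stat}: missing from app output")
        elif abs(got - expected) > TOLERANCE:
            mismatches.append(f"{stat}: app={got:.2f} vs reference={expected:.2f}")
    return mismatches

if __name__ == "__main__":
    problems = validate("app_stats.csv", "pt4_stats.csv")
    print("\n".join(problems) if problems else "all stats within tolerance")
```

Rerunning a check like this after each iteration turns the ~1% agreement criterion into an automated test rather than a manual eyeball comparison.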

Community reaction to the author’s play and project

  • One commenter posts a specific hand history showing the author making a highly questionable all‑in with a weak holding on a dangerous board, labeling them a “fish/whale.”
  • The author acknowledges the hand was “super dumb” and attributes it to playing on tilt, emphasizing they don’t use AI to play, only to analyze history.
  • Broader meta‑discussion emerges over low‑expertise advice: some criticize answering strategy questions while self‑identifying as a losing player; others defend sharing low‑confidence experiences as long as they’re clearly labeled.