Ask HN: What hacks/tips do you use to make AI work better for you?

Lightweight automation & scripting

  • Many use LLMs for “non‑critical” glue: shell/Python/R scripts, GitHub Actions, small one‑off tools, Excel formulas, Dockerfiles, K8s YAML, HomeAssistant automations, etc.
  • Value is highest where correctness is “good enough” and the alternative is never writing the script at all.
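The kind of throwaway glue script commenters describe might look like this: a minimal, hypothetical example (the function name and behavior are illustrative, not from any specific comment) that normalizes filenames in a folder, with a dry‑run default since correctness only needs to be "good enough":

```python
# Hypothetical "non-critical glue": normalize filenames in a directory --
# the kind of one-off script people say they'd otherwise never bother writing.
from pathlib import Path

def normalize_names(directory: str, dry_run: bool = True) -> list[tuple[str, str]]:
    """Lowercase names and replace spaces with underscores; return planned renames."""
    renames = []
    for path in sorted(Path(directory).iterdir()):
        if not path.is_file():
            continue
        new_name = path.name.lower().replace(" ", "_")
        if new_name != path.name:
            renames.append((path.name, new_name))
            if not dry_run:
                path.rename(path.with_name(new_name))
    return renames
```

The dry‑run default reflects the low‑stakes framing: you eyeball the planned renames before letting generated code touch anything.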

Developer workflows & coding assistance

  • Popular patterns: inline completions in editors, code explanation, refactoring, generating boilerplate tests, SQL queries, plots, Pandas transformations, and documentation/docstrings.
  • Some dump entire codebases (or large chunks of them) into a model instead of building sophisticated RAG; others carefully restrict context to a few small files and refactor code into <200‑line modules for better results.
  • Opinions diverge sharply: some claim 5–10× productivity gains and share non‑trivial projects that were largely AI‑written; others find AI code editors consistently poor and have stopped using them.
  • Domain matters: strong results reported for TypeScript/React, Python, data work, “stable” APIs; very poor for C++, fast‑moving frameworks (e.g., Next.js), niche AOSP internals, distributed consensus, or bespoke systems.
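The "restrict context to a few small files" approach can be sketched as a simple packer; `pack_context` and the size budget are illustrative assumptions, not a real tool:

```python
# Illustrative sketch of manual context curation: concatenate a few small
# source files, with per-file headers, under a rough character budget.
from pathlib import Path

def pack_context(paths: list[str], max_chars: int = 20_000) -> str:
    """Build a prompt context block; skip files that would bust the budget."""
    parts, used = [], 0
    for p in paths:
        text = Path(p).read_text(encoding="utf-8", errors="replace")
        chunk = f"### {p}\n{text}\n"
        if used + len(chunk) > max_chars:
            continue  # keep context small rather than truncating mid-file
        parts.append(chunk)
        used += len(chunk)
    return "".join(parts)
```

Skipping whole files rather than truncating mid‑file matches the reported experience that a few complete small modules beat a large, arbitrarily clipped dump.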

Prompting, instructions, and personas

  • Heavy use of custom instructions and system prompts: e.g., ultra‑concise mode, no disclaimers, never mentioning being an AI, or specific behavior when “!!” appears.
  • Some create “characters” (e.g., shell‑only bot, terse senior dev, code‑dumper bot) or ask the model to be opinionated/“an asshole” to extract clearer views.
  • Emphasis on providing concrete context (code, docs snippets, project structure) and iterating; several argue that learning to ask precise, scoped questions is a new core skill.
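A minimal sketch of the persona‑plus‑context pattern above, using the common chat‑message format; the persona wording and the "!!" trigger are illustrative, assembled from the behaviors commenters mention:

```python
# Sketch of the "custom instructions / persona" pattern: a terse-senior-dev
# system prompt, a special-token behavior, and concrete context up front.
TERSE_DEV_PERSONA = (
    "You are a terse senior developer. Answer in as few words as possible. "
    "No disclaimers, no apologies, never mention being an AI. "
    "If the user message contains '!!', reply with code only, no prose."
)

def build_messages(question: str, context_snippets: list[str]) -> list[dict]:
    """Assemble a system prompt, then concrete context, then the scoped question."""
    context = "\n\n".join(f"Context:\n{s}" for s in context_snippets)
    user_content = f"{context}\n\n{question}" if context_snippets else question
    return [
        {"role": "system", "content": TERSE_DEV_PERSONA},
        {"role": "user", "content": user_content},
    ]
```

Putting code and docs snippets ahead of the question is the "concrete context" half of the advice; the system prompt carries the persona so it survives across turns.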

Non‑coding and tooling uses

  • Personal automation: iOS Shortcuts calling model APIs, cross‑provider desktop clients, and voice dictation via Whisper‑like tools.
  • Language and documents: translation between languages, and parsing PDFs into systems with human review.
  • Thinking aids: brainstorming architecture, project planning, life/health coaching, and “roasting” to expose blind spots.
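The "parse PDFs with human review" pattern reduces to a gate between extraction and the system of record. A hedged sketch, with the extraction step out of scope and `review_gate`/`approve` as hypothetical names: nothing is committed without a human decision.

```python
# Sketch of the "LLM parses, human approves" pattern: extraction (not shown)
# yields candidate records; a human decision function gates each one.
from typing import Callable

def review_gate(records: list[dict],
                approve: Callable[[dict], bool]) -> tuple[list[dict], list[dict]]:
    """Split candidate records into accepted/rejected via a per-record decision."""
    accepted, rejected = [], []
    for rec in records:
        (accepted if approve(rec) else rejected).append(rec)
    return accepted, rejected
```

In practice `approve` would display the record beside the source page and block on operator input; only the accepted list flows downstream.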

Skepticism, limits, and organizational impact

  • Some experienced developers say they “cannot get these things to do anything useful,” citing hallucinated APIs, outdated patterns, and debugging overhead exceeding any gain.
  • Others counter that this reflects domain, expectations, and lack of shared transcripts; they treat LLMs like junior devs or Stack Overflow++.
  • One ERP firm reports replacing most full‑time devs with consultant‑plus‑LLM tooling, raising margins; others predict similar shifts for CRUD‑style work.
  • Several note fundamental limits: models can’t truly “think,” are hard to debug, and excel mainly at repetitive boilerplate rather than deeper business logic.