How I use LLMs as a staff engineer
Use cases and perceived benefits
- Widely cited uses: boilerplate generation, “smart autocomplete,” cross-language translation, quick prototypes, unit test scaffolding, simple refactors, and one-off scripts or utilities.
- Several people use LLMs to jump between languages (e.g., Python, Rust, JS/TS, Go) or frameworks and to identify standard libraries/patterns.
- LLMs are used to digest research papers, explain advanced topics, generate quizzes, and even draft blog posts from curated conversations.
- For English writing, they’re used as proofreaders and feedback tools rather than as primary authors. One dyslexic commenter highlighted code proofreading as a major reliability win.
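
The "unit test scaffolding" use case above can be made concrete with a minimal sketch. Everything here is invented for illustration: `slugify` is a hypothetical helper standing in for whatever function an engineer might hand to a model.

```python
import unittest


# Hypothetical function under test, invented purely for this sketch.
def slugify(title: str) -> str:
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())


# The kind of scaffold an LLM is commonly asked to generate: one test
# class with a couple of obvious cases the engineer then extends.
class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Deploy   Notes  "), "deploy-notes")


# Run the scaffold programmatically rather than via unittest.main().
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
)
```

The value commenters describe is not the assertions themselves but the boilerplate around them: imports, class structure, and naming, which the engineer reviews and fills in with real cases.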
Learning, idioms, and skill development
- Some feel LLMs accelerate fluency in a new language, especially for users who rigorously review the output instead of copy-pasting it.
- Others report a noticeable decline in their own coding ability or worry juniors will gain only superficial understanding.
- Strong disagreement on idiomatic code: some say LLMs are terrible at idioms and good practice; others find them excellent at idiomatic rewrites, especially with top-tier models.
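
To make the "idiomatic rewrite" dispute concrete, here is the kind of transformation at issue, sketched in Python; the example is ours, not from any commenter.

```python
# Literal, non-idiomatic style a model might be handed for cleanup:
squares_of_evens = []
for x in range(10):
    if x % 2 == 0:
        squares_of_evens.append(x * x)

# The idiomatic rewrite at stake in the disagreement, as a list
# comprehension with the same behavior:
squares_of_evens_v2 = [x * x for x in range(10) if x % 2 == 0]

# Behavior is unchanged; only the form is more idiomatic.
assert squares_of_evens == squares_of_evens_v2
```

The disagreement is over whether models reliably produce the second form, and whether they do so while respecting a codebase's broader conventions, not just line-level idioms.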
Autocomplete vs traditional IDE support
- Debate over whether LLM autocomplete adds value beyond statically typed IDEs like IntelliJ:
  - Pro-LLM side: it can complete whole functions, infer patterns across a file or library, and follow a user's personal style.
  - Skeptical side: IDE autocomplete and refactoring already cover most accepted completions; some find new LLM-based IDE features "not useful."
Reliability, hallucinations, and prompting
- Experiences vary: some say code hallucinations are rare and quickly exposed by compilers/tests; others see frequent fabrications and “invented APIs.”
- Many emphasize prompt quality and context management (attaching full files, restarting conversations, switching models) as crucial skills.
- There’s concern that some users can’t detect incorrect output and treat LLMs as oracles.
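
A minimal sketch of the "quickly exposed by compilers/tests" claim: the `.reverse()` call below is a deliberately invented string method (lists have it, strings do not), and `reverse_string` is a hypothetical function name.

```python
def reverse_string(s: str) -> str:
    # A plausible-looking fabrication: Python strings have no .reverse()
    # method, yet a model can emit this with full confidence.
    return s.reverse()


# Even a single smoke test surfaces the invented API immediately,
# because the call fails at runtime with an AttributeError.
try:
    reverse_string("hello")
    outcome = "passed"
except AttributeError as exc:
    outcome = f"invented API caught: {exc}"

print(outcome)  # the correct idiom would be s[::-1]
```

This is the optimists' point in miniature: fabrications in code are often cheap to detect. The pessimists' reply is that not every fabrication fails this loudly, and not every user runs the check.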
Team dynamics, juniors, and management pressure
- LLMs are seen as doing “junior engineer” tasks, raising fears that seniors will skip mentoring and juniors won’t learn fundamentals.
- Multiple commenters report executives pushing AI integration and even monitoring usage; being openly anti-LLM is seen by some as a career risk.
- Against that pressure, others argue that overreliance creates fragile systems and unreadable "slop" that even experts struggle to maintain.
Code quality, ethics, and attitudes
- Opinions range from “using LLM-generated production code is madness” to “it’s fine if every line is reviewed and tested.”
- Some justify heavy AI-authorship when clearly labeled; unlabeled content is criticized as “AI slop.”
- Personal attitudes diverge: some enjoy programming more with LLMs, others disable assistants and find them distracting or demotivating.
- Several note they now use Google/Stack Overflow far less, leaning on LLMs as first-line “collaborators” rather than research/search tools.