A change of heart regarding employee metrics

Limits of Individual Productivity Metrics

  • Many argue programmer-level metrics (commits, LOC, tickets, “velocity”) are easy to game and correlate poorly with real impact, especially for senior engineers.
  • Metrics push people toward “dashboard optimization”: close shallow tickets, avoid mentoring, avoid risky or foundational work, ignore security concerns and debugging.
  • Examples: devs padding commits, splitting vendor imports, avoiding refactors because deletions or “code churn” are penalized, or writing dummy code to protect metrics.
  • Some note that highly valuable work (root-cause fixes, refactors, glue/grease roles) often shows up as low or negative LOC and few tickets.

Management Responsibility and Failure Modes

  • Strong recurring view: it is a manager’s job—not tooling’s—to know what reports are doing and how they’re performing.
  • Others counter that managers are also evaluated with crude metrics and forced distributions, so they over‑rely on “objective” numbers to defend decisions.
  • There’s concern that upper management distrusts lower managers (principal–agent problem), so they impose traceable, metric-centric processes that further erode judgment.

Stack Ranking, Promotions, and Fairness

  • Forced buckets and stack‑ranking are widely described as toxic but common.
  • Tensions arise when several people meet promotion criteria but the budget only allows a few promotions, or when quotas require a fixed percentage of “low” ratings; managers then reach for metrics to justify their choices.
  • Some suggest leaving such organizations; others describe pragmatic strategies (e.g., “hire to fire” scapegoats) as evidence of how broken the system is.

Good Uses of Metrics: Aggregated and Process‑Focused

  • Several managers distinguish between:
    • Individual scorecards (seen as harmful) vs.
    • Team‑level process metrics (PR size, review time, deploy frequency, trends before/after process changes).
  • Metrics are described as helpful to:
    • Spot bottlenecks, slow reviews, oversized PRs, under‑resourced teams.
    • Support a case for process improvements, not to rank individuals.
  • Some platform teams explicitly refuse to expose individual data, capping granularity at team level to avoid misuse and preserve trust.
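The team-level-only approach described above can be sketched in code. This is a minimal illustration, not any specific platform's API: the `PullRequest` record and the metric names are hypothetical, and the key design choice is that the record carries no author field, so individual granularity is impossible by construction.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class PullRequest:
    team: str            # owning team; deliberately no author field
    lines_changed: int   # diff size
    review_hours: float  # time from "ready for review" to approval

def team_process_metrics(prs: list[PullRequest]) -> dict[str, dict[str, float]]:
    """Aggregate PR metrics per team; individuals are never surfaced."""
    by_team: dict[str, list[PullRequest]] = {}
    for pr in prs:
        by_team.setdefault(pr.team, []).append(pr)
    return {
        team: {
            "median_pr_size": median(p.lines_changed for p in team_prs),
            "median_review_hours": median(p.review_hours for p in team_prs),
            "pr_count": float(len(team_prs)),
        }
        for team, team_prs in by_team.items()
    }

# Compare a team's medians before/after a process change,
# e.g. after introducing a "keep PRs under 400 lines" guideline.
prs = [
    PullRequest("payments", 120, 4.0),
    PullRequest("payments", 900, 30.0),
    PullRequest("platform", 80, 2.5),
]
print(team_process_metrics(prs)["payments"]["median_pr_size"])  # 510.0
```

Capping granularity in the data model, rather than in a dashboard permission setting, is what makes the "refuse to expose individual data" stance credible: there is nothing to leak or misuse later.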

Peer Reviews and Performance Feedback

  • Strong skepticism toward 360/peer feedback when wired into formal reviews: easily politicized, weaponized against “inconvenient” colleagues (e.g., those raising architectural or security issues).
  • Some differentiate this from code review, which is broadly seen as valuable when focused on quality, learning, and safety rather than scoring people.
  • A few managers note: feedback should inform managerial judgment, not replace it; “averaging peers” is likened to abdicating responsibility.

Culture, Morale, and Quiet Quitting

  • Recurrent theme: metrics-heavy, stack‑ranked environments push people into:
    • Doing the minimum visible work,
    • Guarding metrics instead of helping others,
    • Eventually disengaging or leaving.
  • Others argue that reducing effort to the contractual minimum can be a rational response to layoffs, stock buybacks, and high executive compensation.
  • Counter‑view: “not caring” about work, especially in safety‑critical or widely used systems, harms end‑users and corrodes the worker’s own professionalism.

Remote Work, Visibility, and Office Optics

  • Some note that monitoring tools often serve as a remote substitute for the in‑person “face time” game.
  • Office‑based “hall walkers” who appear busy and socialize visibly often thrived pre‑COVID; remote work and later layoffs exposed their low output.
  • Conversely, monitoring tech can punish legitimate remote behaviors (reading docs, deep work) that don’t register as “activity.”

Pro‑Metrics Arguments and Nuance

  • Minority but present view: metrics are just tools; used carefully they can:
    • Flag outliers (e.g., someone genuinely doing almost nothing),
    • Help debug underperformance causes,
    • Reduce pure gut‑feeling and biases in promotions.
  • Advocates emphasize:
    • Use metrics as noisy signals, never sole determinants.
    • Combine with qualitative judgment, conversations, and context.
  • Skeptics respond that organizations rarely sustain that nuance; once metrics exist, higher‑ups and weak managers tend to treat them as ground truth.
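The “noisy signal, never sole determinant” framing above could look something like this sketch. The function and threshold are hypothetical assumptions, not a described implementation: it flags statistical outliers as prompts for a conversation, and deliberately returns names to talk to rather than scores or rankings.

```python
from statistics import mean, stdev

def flag_for_conversation(activity_by_person: dict[str, int],
                          z_threshold: float = 2.0) -> list[str]:
    """Flag outliers in a noisy activity signal as prompts for a 1:1.

    The output is a list of people to *talk to*, never a ranking.
    Context (on-call duty, mentoring, leave, glue work) must come from
    the conversation, not from the number.
    """
    counts = list(activity_by_person.values())
    if len(counts) < 3:
        return []  # too few data points for any meaningful signal
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # everyone identical; nothing to discuss
    return [person for person, c in activity_by_person.items()
            if abs(c - mu) / sigma > z_threshold]

# Someone genuinely doing almost nothing stands out against peers:
print(flag_for_conversation(
    {"a": 10, "b": 11, "c": 12, "d": 10, "e": 11, "f": 0}))  # ['f']
```

The skeptics' point maps directly onto this code: the nuance lives entirely in how the returned list is used downstream, and nothing in the tooling prevents a weak manager from treating it as ground truth.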