Adobe's new image rotation tool is one of the most impressive AI tools commenters have seen

Overall reaction to Adobe’s rotation tool

  • Many commenters find the demo genuinely impressive, even those usually skeptical or tired of “AI” branding.
  • People highlight that it solves a real, non-trivial problem: rotating a single 2D vector drawing as if it were a 3D model while preserving vector editability.
  • Some say this is closer to the “right” use of AI: automating constrained, tedious creative tasks instead of generic chatbots.

Usefulness and impact on artists

  • Seen as a major time-saver for illustrators, animators, and designers, especially for character turnarounds and multi-angle assets.
  • Several note it “unlocks” capabilities for non-artists and hobbyists who struggle with perspective and rotation.
  • There are concerns about tools that might reduce the incentive to learn fundamental drawing skills, especially for kids.
  • Some voice fears about job loss, but others argue professionals will keep an edge in composition, color, and design, simply working faster with AI.

Technology and methodology

  • Debate over whether this is “AI” in the current sense or more like classical graphics / vision techniques, though many assume some kind of 2D→3D→2D generative model.
  • Some compare it to earlier SIGGRAPH research and note Adobe often productizes or repackages such work years later.
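The classical-graphics interpretation commenters mention can be illustrated with a toy sketch: lift each 2D vector point onto a flat 3D plane, apply a rotation matrix, and project back to 2D. This is an illustrative assumption, not Adobe's actual method; the function name is hypothetical, and a real tool would have to infer per-point depth for a drawing, which flat lifting sidesteps entirely (and which is presumably where the ML comes in).

```python
import math

def rotate_flat_shape(points, angle_deg):
    """Toy 2D->3D->2D rotation: treat the drawing as lying flat at
    depth z=0, rotate it about the vertical (y) axis, then project
    orthographically back to 2D. Hypothetical illustration only."""
    theta = math.radians(angle_deg)
    out = []
    for x, y in points:
        z = 0.0  # lift: assume every point sits on the z=0 plane
        # rotate about the y axis: x' = x*cos(theta) + z*sin(theta)
        xr = x * math.cos(theta) + z * math.sin(theta)
        # orthographic projection back to 2D simply drops z
        out.append((xr, y))
    return out

# A point at x=2 rotated 60 degrees foreshortens to x = 2*cos(60) = 1
print(rotate_flat_shape([(2.0, 3.0)], 60))
```

The gap between this toy and the demo is the interesting part: a flat lift just foreshortens the drawing, whereas the tool appears to synthesize genuinely unseen geometry (e.g., the far side of a character), which is why many assume a generative model sits in the middle of the pipeline.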

Adobe’s ecosystem, business model, and UX

  • Strong negative sentiment toward Adobe’s subscriptions, cancellation friction, past double-billing, and TOS/AI-training controversies.
  • Some praise that Adobe at least tries to integrate AI into real workflows instead of shallow “AI everywhere” gimmicks.
  • Others complain Adobe features can be flashy but unreliable or underwhelming in daily use.

Open source and alternatives

  • Multiple people argue GIMP/Inkscape/Darktable have never been true replacements for Adobe tools in professional workflows.
  • Others counter that open-source ML tools (e.g., diffusion pipelines, ComfyUI) can already do similar or more powerful things, but with far worse UX and more setup.
  • Rough consensus: open source often matches or exceeds Adobe in raw capability but lags in integration, polish, and ease of use.

Skepticism and open questions

  • Some suspect demo cherry-picking and note there’s no guarantee the feature will ship.
  • Questions remain about how well it works on “bad” or complex drawings, what failure modes look like, and whether it will export actual 3D models.