Adobe Lightroom's AI Remove feature added a Bitcoin to a bird-in-flight photo
Why this AI error drew attention
- Some see “AI made a mistake” stories as useful counterexamples to AGI/singularity hype and as future training data.
- Others argue they drive engagement by evoking mixed emotions: relief that AI is fallible and frustration about product “enshittification.”
- Several point out that this case stands out because Lightroom is a serious professional tool, not a toy app.
- There’s criticism of the broader pattern: large companies shipping half-baked AI features and normalizing poor quality.
Hypotheses on how the Bitcoin appeared
- Likely use of a circular selection: the model may over-associate circles with coins, especially Bitcoin, given Adobe Stock has a huge number of Bitcoin images.
- Possible training-data skew from a large volume of crypto/Bitcoin imagery, including low-quality AI-generated stock.
- Technical guesses:
  - Generative inpainting that “leaks” signal from the selected area instead of fully removing it.
  - Feathered mask edges causing the system to perceive a “light circular object” rather than a “hole in the sky.”
- Linked examples on Reddit suggest this is not an isolated quirk.
- Some are puzzled that a “remove” tool would insert a more obvious artifact instead of matching the blurred ocean background.
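The feathered-mask hypothesis above can be sketched in a few lines. This is a minimal NumPy illustration (function name and parameters are hypothetical, not Lightroom's actual pipeline): a circular selection with soft edges produces many partially selected pixels, so the conditioning image an inpainting model sees contains a faint bright disc rather than a clean hole.

```python
import numpy as np

def feathered_circle_mask(size=64, radius=20, feather=8):
    """Build a circular selection mask with soft (feathered) edges.

    Returns values in [0, 1]: 1 well inside the circle, 0 outside,
    with a smooth linear falloff over `feather` pixels at the rim.
    """
    yy, xx = np.mgrid[:size, :size]
    dist = np.hypot(yy - size / 2, xx - size / 2)
    # Linear ramp: fully selected inside, fading to 0 at the radius.
    return np.clip((radius - dist) / feather, 0.0, 1.0)

mask = feathered_circle_mask()
# The soft rim means many pixels are only partially selected; to a
# model conditioned on the masked image, the region can plausibly
# read as a "light circular object" instead of a gap to fill.
print(mask.max(), mask.min())  # 1.0 0.0
```

Whether Lightroom's Firefly backend actually receives a feathered mask this way is speculation from the thread, not documented behavior.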
Perceptions of Adobe’s AI strategy and quality
- Mixed views: some say classical heal/remove tools already work well and generative fill can be useful when guided by a skilled user.
- Others argue AI features are unreliable for high-end work, no better than old tools for small edits, and damage trust in Adobe’s brand.
- Concern that Adobe focuses heavily on generative AI while neglecting core bugs and pro workflows, drifting toward competing with consumer tools.
Server-side processing and possible compromise
- AI Remove appears to be server-based (Firefly); traditional heal/remove tools can run locally.
- This centralization aids anti-piracy enforcement and lets Adobe gate features.
- One user wonders if a compromised or “trolled” model could be responsible; others lean toward poor QC and normal model failure but acknowledge compromise is theoretically possible.
UX gripe: app deep-linking
- Side discussion about Bluesky links opening apps instead of the web, especially on iOS with Universal Links.
- Some see this as “creeping non-consensual computation”; others say it’s device configuration and offer workarounds (long-press, uninstall apps).
Image quality and photographic technique debate
- Several prefer the original photo; the processed one is said to have “AI shimmer” or an HDR-like, phone-camera look.
- Disagreement on whether highlight clipping in digital images can be “rescued” from RAW or is fundamentally lost information.
- Tips mentioned: underexpose bright scenes, use polarizing filters on reflections, and avoid relying on ML to fix blown highlights.
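The clipping debate above comes down to a simple fact about sensor data: once a pixel reaches the sensor's white level, its true brightness is unrecorded, so no RAW processing can restore real detail there. A minimal sketch (the 14-bit white level of 16383 is a hypothetical example; real values vary by camera):

```python
import numpy as np

def clipped_fraction(raw, white_level=16383):
    """Fraction of sensor values at or above the white level.

    raw: 2D array of raw sensor counts (here a hypothetical 14-bit
    ADC with white level 16383). Values at the white level are
    clipped: the sensor hit "full well", the true brightness is
    unknowable, and RAW recovery can only guess, not restore.
    """
    return float(np.mean(raw >= white_level))

# Simulated 14-bit frame with a blown 10x10 specular highlight
rng = np.random.default_rng(0)
frame = rng.integers(0, 12000, size=(100, 100))
frame[40:50, 40:50] = 16383  # clipped region
print(clipped_fraction(frame))  # 0.01
```

Partial recovery is sometimes possible when only one color channel clips, which is why opinions in the thread differ; fully saturated pixels, though, are lost information.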
Content filtering and censorship
- Complaints that Adobe’s AI often refuses fills when women or body parts are visible, even clothed.
- Users report workarounds (temporarily censoring with black squares).
- Some argue paid pro tools should allow all legal content, comparing over-censorship to banning knives because they can be misused.
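The "black square" workaround users describe can be sketched as a simple mask-and-restore step around the AI call. All names here are hypothetical illustration, not an Adobe API: the idea is just to save the flagged patch, black it out before the filtered service sees it, and paste it back afterward.

```python
import numpy as np

def censor_region(img, box):
    """Black out a rectangle before sending to a filtered AI service,
    returning the original patch so it can be restored afterward.

    img: H x W x 3 uint8 array; box: (top, left, bottom, right).
    """
    t, l, b, r = box
    patch = img[t:b, l:r].copy()
    img[t:b, l:r] = 0  # temporary black square over the flagged area
    return patch

def restore_region(img, box, patch):
    """Paste the saved patch back after processing."""
    t, l, b, r = box
    img[t:b, l:r] = patch

img = np.full((8, 8, 3), 200, dtype=np.uint8)
saved = censor_region(img, (2, 2, 5, 5))
# ... run the AI fill on `img` here ...
restore_region(img, (2, 2, 5, 5), saved)
print((img == 200).all())  # True
```

This only works when the fill target doesn't overlap the censored area, which matches how users in the thread describe applying it.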
Broader AI/crypto/culture remarks
- Jokes about AI “mining” Bitcoin and crypto imagery dominating AI art.
- References to earlier AI hallucination incidents (e.g., upscalers inserting celebrities).
- A few allude, jokingly and seriously, to “Butlerian jihad” and skepticism about opaque AI “black boxes” in critical workflows.