YouTube caught making AI-edits to videos and adding misleading AI summaries
Alleged AI Face Filters vs Compression
- Several commenters cite examples (especially in Shorts and Instagram Reels) where faces appear altered: enlarged eyes/lips, waxy skin, filters visibly “turning on and off,” and face-warping tied to makeup content.
- Others argue these are heavy compression or neural-compression artifacts combined with aggressive denoising/upscaling, not intentional beauty filters; neural methods can distort larger facial features, not just individual pixels (see the sketch after this list).
- Even among technical commenters, there’s disagreement: some see clear face filters; others see only smoothing, edge enhancement, and blocky “swimming” artifacts typical of heavy recompression.
- A meta-debate arises over dismissing creators as “non-technical”: some say the creators’ diagnosis may be wrong but their concerns are still valid; others say a creator’s interpretation shouldn’t be treated as technical proof.
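A minimal sketch of the kind of generic “denoise then sharpen” pass some commenters suspect. This is an illustrative assumption, not YouTube’s actual pipeline; it uses OpenCV on a single decoded frame (`frame.png` is a placeholder) to show that a face-agnostic filter can already produce the waxy, edge-enhanced look described above:

```python
# Hypothetical illustration only: a generic "denoise + sharpen" pass,
# NOT YouTube's pipeline. It shows that smoothing plus sharpening can
# give skin a waxy, "filtered" look without any face-aware model.
import cv2

frame = cv2.imread("frame.png")  # placeholder: a single decoded video frame

# Edge-preserving smoothing: flattens fine skin texture while keeping contours.
smoothed = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)

# Mild unsharp masking: re-sharpens edges, exaggerating the processed look.
blurred = cv2.GaussianBlur(smoothed, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(smoothed, 1.5, blurred, -0.5, 0)

cv2.imwrite("frame_processed.png", sharpened)
```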
Creator Control, Consent, and Platform Power
- Many object to any non-optional visual transformation: whether “filter” or “compression,” the end result is “you changed my appearance and undermined my credibility.”
- There’s concern that platforms’ terms of service effectively let them do “whatever they want,” including training AI on uploads and making undisclosed changes.
- With no serious YouTube-scale competitors, creators are seen as captive; suggested legal recourse (e.g., over deepfakes or impersonation) is viewed as limited or slow.
Auto-Translation, Dubbing, and Multilingual Frictions
- YouTube/Meta auto-dubbing is reported to alter mouth movements to match the dubbed audio, sometimes producing strange full-face effects.
- Multilingual users are frustrated by forced auto-dubs, auto-translated titles, and misdetected languages, often with no reliable way to disable them or to declare more than one language they understand.
AI Summaries, Thumbnails, and Misleading Text
- AI video summaries are widely criticized as inaccurate, sometimes even reversing the creator’s stance; many users now ignore them.
- Some users instead feed transcripts to external LLMs to generate their own summaries (a minimal sketch follows this list).
- AI-generated thumbnails and “summary pictures” that don’t match actual frames are noted as another form of synthetic misrepresentation.
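A minimal sketch of that transcript-to-LLM workflow, assuming the third-party youtube-transcript-api package and the OpenAI Python SDK; the video ID, model name, and prompt are placeholders, not part of any YouTube feature:

```python
# Hedged sketch: pull a video's caption track and summarize it yourself
# instead of trusting the platform's built-in AI summary.
# Assumes: pip install youtube-transcript-api openai, and OPENAI_API_KEY set.
from youtube_transcript_api import YouTubeTranscriptApi
from openai import OpenAI

VIDEO_ID = "dQw4w9WgXcQ"  # placeholder video ID

# Fetch the caption segments and flatten them into plain text.
segments = YouTubeTranscriptApi.get_transcript(VIDEO_ID)
transcript = " ".join(seg["text"] for seg in segments)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Summarize the transcript faithfully; do not infer a stance the speaker does not state."},
        {"role": "user", "content": transcript[:50000]},  # crude length cap
    ],
)
print(response.choices[0].message.content)
```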
Speculated Motives and Future “AI Slop”
- Two main motives are proposed: cutting bandwidth costs via smarter compression, and internal “AI everywhere” mandates that reward teams for deploying ML visibly.
- Some fear Shorts are being gradually “AI-ified” so that fully AI-generated, hyper-optimized, addictive “slop feeds” can later replace human content with minimal user pushback, especially affecting children.
User Responses and Workarounds
- Responses range from uninstalling the YouTube app and relying on ad blockers and extensions to exploring federated alternatives like PeerTube.
- Several call for explicit disclosure, opt-in controls, the ability to compare the source upload with the served video, and keeping platform-side “enhancements” off by default.