FLUX is fast and it's open source
Name reuse and positioning
- Many note that “Flux” is already heavily used across tech (frameworks, scripting languages, AI tools, hardware, podcasts, etc.).
- Debate over whether name-collision complaints are interesting or just noise; some argue the sheer frequency of collisions in this case makes it noteworthy.
Performance claims and quantization
- Confusion over the claim that a new synchronous HTTP API “makes models faster”; clarification that it primarily removes an extra file-fetch round trip.
- Some feel that makes delivery faster, not the model faster; the post author later adds clarifying text.
- FP16→FP8 quantization shows ~2× speedup with some quality loss; people question what product use cases justify only ~2× when “realtime” offerings are much faster.
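The round-trip point above can be sketched with toy numbers. This is a back-of-the-envelope accounting of why a synchronous API that returns image bytes inline saves one round trip versus handing back a URL to fetch; all latency values are illustrative assumptions, not measurements from the thread.

```python
# Hypothetical latency accounting for the "faster" claim: a synchronous
# API returns image bytes directly, instead of a result URL that requires
# a second fetch. All numbers below are illustrative, not measured.

def total_latency_ms(generation_ms: int, rtt_ms: int, extra_fetch: bool) -> int:
    """One round trip plus generation time, plus an optional second
    round trip to download the image from a returned URL."""
    latency = rtt_ms + generation_ms
    if extra_fetch:
        latency += rtt_ms  # separate GET for the result file
    return latency

async_style = total_latency_ms(2000, 100, extra_fetch=True)   # URL handoff
sync_style  = total_latency_ms(2000, 100, extra_fetch=False)  # bytes inline
print(async_style - sync_style)  # saving is exactly one extra round trip
```

The saving is a fixed network round trip, which is why commenters distinguish "delivery faster" from "model faster": generation time itself is untouched.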
Image quality, style, and depth of field
- Flux is praised for quality and prompt adherence, especially for locally hosted generative systems.
- Complaints that images often have exaggerated shallow depth of field that’s hard to remove.
- Long back-and-forth on depth of field: is it a deliberate artistic choice, or an outdated "sensor limitation" that AI need not reproduce?
- Some say Flux, like Midjourney, has a recognizable “signature look.”
Architecture ideas and modular workflows
- Several propose modular pipelines: text → scene graph → semantic segmentation → final rendering, to improve editability and composability.
- Others respond this kind of hand-engineered decomposition has historically underperformed end-to-end learning (“bitter lesson” discussion).
- Counterpoint: modular, editable representations may be worth some loss in raw optimality for certain workflows; tools like ComfyUI partly enable this today.
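The proposed decomposition can be sketched as a toy pipeline. Every class and function name here is invented for illustration (no real Flux or ComfyUI API is used); the point is the editability argument: a user tweaks the intermediate scene graph instead of re-prompting an end-to-end model.

```python
# Hypothetical sketch of the modular pipeline commenters proposed:
# text -> scene graph -> (segmentation, rendering). All names invented.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    label: str    # e.g. "dog"
    bbox: tuple   # (x, y, w, h) in normalized image coordinates

@dataclass
class SceneGraph:
    objects: list = field(default_factory=list)

def text_to_scene_graph(prompt: str) -> SceneGraph:
    # Stub: a real system would use an LLM or a parser to place objects.
    return SceneGraph(objects=[SceneObject(word, (0.0, 0.0, 1.0, 1.0))
                               for word in prompt.split()])

def edit_scene(graph: SceneGraph, label: str, new_bbox: tuple) -> SceneGraph:
    # The editability win: adjust one object's layout without regenerating
    # the whole image from a fresh prompt.
    for obj in graph.objects:
        if obj.label == label:
            obj.bbox = new_bbox
    return graph

graph = text_to_scene_graph("dog park")
graph = edit_scene(graph, "dog", (0.1, 0.5, 0.3, 0.3))
```

The "bitter lesson" counterargument is that each hand-designed stage boundary (scene graph, segmentation map) constrains what the learned components can represent, which is where end-to-end systems have historically pulled ahead.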
Ethics, artists, and practical uses
- Some use Flux for blog/Substack illustrations and say they would never have paid an artist anyway; they view this as analogous to open‑source/public domain access.
- Others argue that widespread use by blogs and media erodes the market where illustrators previously were paid, more akin to piracy economics.
- Further nuance: high-end character/visual design is seen as harder to replace than generic illustration, and AI quality is not yet sufficient for many professional needs.
Open source vs. non-commercial
- Only FLUX.1 [schnell] is Apache 2.0; FLUX.1 [dev]/pro are non-commercial.
- Discussion clarifies “open source” as defined by OSI/FSF (right to use, modify, redistribute), vs. merely “source available” or inspectable.
- Some call labeling non-commercial models “open source” misleading, since it blocks others from continuing development commercially if the originator stops.
- OpenFLUX.1 is cited as an Apache-licensed finetune aiming to undo some distillation constraints.
Training data and privacy concerns
- Users notice that prompts resembling camera filenames (e.g., IMG_0001.JPG + a word) yield hyper-realistic, phone-photo-like images: messy apartments, food, candid people.
- This feels to some like peeking into private photo streams; they suspect training on social media or cloud photo stores but note there is no disclosed dataset list.
- Others point out similar behavior in Stable Diffusion and share filename conventions that models likely picked up during training.
- Overall: strong unease; the lack of clear information on Flux 1.1's training data is flagged as problematic.
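The prompt pattern users experimented with is simple string construction. This sketch builds prompts in the `IMG_0001.JPG + word` shape the thread describes; the helper name and the padding width are assumptions for illustration, and which filename conventions the models actually absorbed during training is unknown.

```python
# Build filename-style prompts like the ones commenters tried
# (e.g. "IMG_0001.JPG kitchen"). Helper name and zero-padding are
# illustrative assumptions, not a documented prompt format.

def filename_prompt(index: int, word: str, prefix: str = "IMG") -> str:
    return f"{prefix}_{index:04d}.JPG {word}"

prompts = [filename_prompt(i, "kitchen") for i in range(1, 4)]
print(prompts[0])  # "IMG_0001.JPG kitchen"
```

The unease in the thread comes from what such prompts return: candid, phone-photo-like images, suggesting the training set contained large volumes of images still carrying default camera filenames.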
Ecosystem, access, and comparisons
- Some users cancel Midjourney, feeling Flux and other local/open models have caught up or surpassed it; others say Midjourney’s default “look” can be changed and that Flux has its own look.
- Pollinations and other services expose Flux.schnell via simple URLs, with claims of high throughput on a small GPU cluster; others note that “only three L40S” is still expensive for individuals.
- A few mention alternative fast systems (e.g., Krea) and community efforts to make Flux easier to run and tune (ComfyUI, OpenFLUX).
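The "Flux via simple URL" access pattern described for Pollinations amounts to URL-encoding a prompt into a GET path. The base endpoint below follows the thread's description of prompt-in-URL access; treat the exact path and any query parameters as assumptions to verify against the service's current documentation.

```python
# Sketch of URL-based image access as described in the thread: the prompt
# is percent-encoded into the request path. Endpoint shape is an assumption;
# check the service's docs before relying on it.
from urllib.parse import quote

def flux_image_url(prompt: str,
                   base: str = "https://image.pollinations.ai/prompt/") -> str:
    return base + quote(prompt)

url = flux_image_url("a watercolor fox in a forest")
print(url)  # spaces become %20; a GET to this URL would return an image
```

The appeal is zero-setup access from any HTTP client or `<img>` tag; the counterpoint in the thread is that the serving cost (even "only three L40S") is still beyond most individuals self-hosting.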
Clarity gaps and limitations
- Multiple commenters say the original blog post doesn’t clearly explain what Flux actually is or does for readers unfamiliar with it.
- Hands and fine details are still often rendered poorly, indicating remaining quality limitations.
- Questions about performance on local hardware (e.g., M1 Mac, ComfyUI setups) receive no concrete answers in the thread.