Fastplotlib: GPU-accelerated, fast, and interactive plotting library

Positioning vs other libraries

  • Compared to Plotly, fastplotlib targets different use cases: GPU-accelerated, high-throughput, low-latency visualization (e.g., neuroscience, ML algorithm development, live instruments), with emphasis on primitives rather than high-level “composite” charts.
  • Several users would still default to matplotlib for publication-quality static figures and want matplotlib itself improved (3D, performance, WYSIWYG layout) rather than replaced.
  • Others compare it to PyQtGraph, HoloViz/Bokeh, Datashader, Datoviz, and Rerun; fastplotlib is distinguished by desktop-first GPU rendering via wgpu/pygfx and jupyter-rfb, rather than browser JS front-ends.
  • Some want an even more matplotlib-like, ultra-simple API; others criticize matplotlib’s API and performance as “terrible” and welcome a fresh design.

Exploratory data analysis philosophies

  • Strong debate about EDA style:
    • One camp favors a “shotgun” approach: many views, fast toggling, interactive scrubbing, and GPU speed to keep iteration tight.
    • Another prefers fewer, carefully chosen plots, long reflection between iterations, and leveraging statistical tools (e.g., PCA/eigenfaces) rather than massive animated visualizations.
  • The covariance/eigenfaces example is contested: some see it as contrived and argue a handful of eigenvectors is more informative; others say such visualization is exactly what you need before deciding which summary/statistics to use, especially when inventing new decompositions.
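The eigenfaces argument above boils down to: compute a few leading eigenvectors of the data covariance and look at those, rather than visualizing the full covariance matrix. A minimal NumPy sketch of that summary step (the data here is synthetic and the sizes are arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "images": 200 samples, each a flattened 16x16 = 256-dim vector
X = rng.normal(size=(200, 256))

# Center the data, then form the sample covariance matrix
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc) / (len(Xc) - 1)

# eigh returns eigenvalues in ascending order for a symmetric matrix;
# the few largest-eigenvalue eigenvectors are the "eigenfaces"
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, ::-1][:, :5]   # 5 leading eigenvectors, shape (256, 5)
```

Each column of `top` can be reshaped to 16x16 and plotted as an image — a handful of small, static pictures in place of one massive covariance visualization, which is exactly the trade-off the two camps disagree about.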

Performance, scale, and GPU vs CPU

  • Authors claim interactive plotting of millions of points (e.g., ~3M on an integrated GPU) and promise more benchmarks in the docs; users request comparisons vs tools like CloudCompare and Potree.
  • One thread disputes that “3M points” is impressive, arguing modern CPUs can do this easily; others counter that real plots (lines, antialiasing, complex geometry, 3D, arbitrary projections) are much harder than raw pixel writes.
  • There’s discussion about fitting entire datasets in GPU memory vs tiled/multi-scale approaches and when each is appropriate.
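The "a CPU can do 3M points easily" side of the dispute is right about the narrow case of raw pixel accumulation, which a short NumPy sketch can demonstrate (buffer size and point count are arbitrary). Note everything a real renderer does is absent here: no antialiased line joins, no markers, no 3D projection, no per-frame re-render during interaction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3_000_000
pts = rng.random((n, 2))   # 3M points in the unit square

# "Render" by counting points per pixel of a 1920x1080 framebuffer --
# effectively a density scatter via one vectorized binning pass
w, h = 1920, 1080
ix = (pts[:, 0] * w).astype(np.int64)
iy = (pts[:, 1] * h).astype(np.int64)
counts = np.bincount(iy * w + ix, minlength=w * h).reshape(h, w)
```

This runs in well under a second on a modern CPU, which supports the skeptics for point clouds; the counter-argument in the thread is that interactive plots of lines and meshes with arbitrary projections do far more work per primitive per frame, which is where the GPU pays off.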

Workflow, environments, and remote use

  • Jupyter support exists via jupyter-rfb, with GPU rendering in the Python kernel and compressed framebuffers sent to the browser; remote cluster use is a key target. Colab remains problematic performance-wise.
  • Future goals include Pyodide/WASM for in-browser execution and possibly single-widget embedding in web pages.
  • Import-time cost and dependency heaviness are acknowledged; some optimization has been done but not yet benchmarked.
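The remote model described above — render on the kernel's GPU, ship compressed frames to the browser — can be sketched with stdlib-only compression. jupyter-rfb itself uses image codecs (e.g., JPEG/PNG), not zlib; zlib stands in here only to keep the round-trip self-contained, and the framebuffer contents are made up:

```python
import zlib
import numpy as np

# Pretend framebuffer: 640x480 RGBA as rendered kernel-side
frame = np.zeros((480, 640, 4), dtype=np.uint8)
frame[..., 3] = 255                   # opaque alpha
frame[100:200, 100:300, 0] = 255      # a red rectangle

# Kernel side: compress the raw bytes before sending over the wire
payload = zlib.compress(frame.tobytes(), level=1)   # fast, low-ratio setting

# Browser side: decompress and rebuild the image for display
restored = np.frombuffer(zlib.decompress(payload), dtype=np.uint8)
restored = restored.reshape(frame.shape)
```

The bandwidth and latency of exactly this compress-and-ship step is why Colab performance remains problematic and why remote cluster use is the headline target.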

3D, advanced features, and roadmap

  • 3D support and meshes are on the roadmap; users request molecular visualization, cortex mapping, network visualization, video rendering, and line thickness control.
  • Torch/JAX GPU arrays cannot yet be passed directly due to GPU context isolation; another Vulkan/CuPy-based project is exploring shared GPU memory as a possible pattern.