I'm curious: from a pure pipeline-workflow POV, how is this conceptually unique or better?
For example, with your quickstart, couldn't I take each of those files, combine them into one large notebook, run it, and get the same outcome? A Jupyter notebook is also all batch and runs sequentially.
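To make that concrete, something like this (a rough sketch; the notebook names are just placeholders, not your actual quickstart files):

```python
import pathlib
import papermill as pm  # pip install papermill; executes notebooks programmatically

pathlib.Path("out").mkdir(exist_ok=True)

# run each step in order, the same way the pipeline would
for nb in ["get-data.ipynb", "preprocess.ipynb", "train.ipynb"]:
    pm.execute_notebook(nb, f"out/{nb}")  # keep an executed copy with the outputs
```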
Your homepage says your main features are:

1. Visual pipeline editor (sure, node-based is better UX, but it doesn't change the workflow itself)
2. Code in Notebooks (same as regular Jupyter)
3. Jobs (you can already run .ipynb files from a shell script on a cron schedule, see the sketch below)
4. Environments (not much need for local usage)

Anything major I'm missing?
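For point 3, a minimal sketch of what I mean by cron (file names and schedule are made up):

```python
# run_pipeline.py - invoked by a crontab entry along the lines of:
#   0 2 * * * /usr/bin/python3 /home/me/run_pipeline.py
import papermill as pm

# execute the combined notebook headlessly and keep a copy with the run's outputs
pm.execute_notebook("pipeline.ipynb", "pipeline-run.ipynb")
```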