https://www.youtube.com/watch?v=HPyl2tOaKxM

Nick

> https://www.shadertoy.com/view/llsGW7

or this:

https://www.shadertoy.com/view/XsX3RB.

There are plenty of impressive effects, procedural landscapes etc. which most of the time are just cool math/coding tricks.

> And I don’t know where you got the idea that the order of gates doesn’t matter in a circuit with no ancilla bits, but it clearly does (try an example and see).

Not in a context where input bits are read-only and output bits are write-only. In that case all gates operate on the original immutable input and conditionally flip output bits. The order in which output bits are flipped doesn’t matter because they’re never read.

Essentially, control bits are always connected to input bits and target bits are always connected to output bits.
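A minimal sketch of this commutativity claim (the gate representation and circuit are my own illustration, not from the thread): each gate reads only the immutable input register and conditionally flips one output bit, so every permutation of the gate list yields the same result.

```python
import itertools

def run_circuit(gates, input_bits, num_outputs):
    """Apply each gate in order: flip the target output bit iff all of
    the gate's control bits (read from the immutable input) are 1."""
    out = [0] * num_outputs
    for controls, target in gates:
        if all(input_bits[c] for c in controls):
            out[target] ^= 1
    return out

# A toy circuit: controls index into the input, targets into the output.
gates = [((0,), 0), ((0, 1), 1), ((1, 2), 0), ((2,), 1), ((0, 2), 2)]
x = [1, 0, 1]

baseline = run_circuit(gates, x, 3)
# Every permutation of the gate list produces the same output,
# since no gate ever reads an output bit.
for perm in itertools.permutations(gates):
    assert run_circuit(list(perm), x, 3) == baseline
```

The moment a gate's control is allowed to touch an output (or ancilla) bit, this argument breaks and order starts to matter.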

I was wondering how universal a gate set under these constraints can actually be.

> And I don’t know where you got the idea that the order of gates doesn’t matter in a circuit with no ancilla bits, but it clearly does (try an example and see).

My construction with Daniel and Luke uses only O(1) ancilla bits, but plenty of gates, and it need not be parallelizable.

> My understanding is that, without ancilla bits, the order in which gates are executed is not significant – each gate is strictly reading from the immutable input and flipping output bits – which makes concurrency and synchronization a lot easier.

And the concept is strange. For example, could a sorting algorithm (like MergeSort), converted to use zero ancilla bits, be split up so that every operation runs in parallel? That’s hard to visualize.

Also, would that make it O(1) time given enough processors? But parallel MergeSort, as I recall, can’t do better than O(log n).
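One way to see why such circuits parallelize (my own illustration, under the read-only-input/write-only-output assumption discussed above): since the gates commute, each output bit is just the parity of the gates targeting it whose controls are all satisfied, so every output bit can be computed independently of the others.

```python
from functools import reduce
from operator import xor

def output_bit(j, gates, x):
    # Parity of the gates targeting output j whose controls (read from
    # the immutable input x) are all 1. Independent of every other output.
    return reduce(xor, (int(all(x[c] for c in controls))
                        for controls, target in gates if target == j), 0)

gates = [((0,), 0), ((0, 1), 1), ((1, 2), 0), ((2,), 1), ((0, 2), 2)]
x = [1, 0, 1]
result = [output_bit(j, gates, x) for j in range(3)]
```

Each `output_bit` call could run on its own processor, and the XOR reduction inside it can itself be done as a balanced tree, so the depth is logarithmic in the number of gates rather than O(1) – consistent with the point that "enough processors" doesn’t automatically buy constant time.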
