Written by Ethan Smith


Code: autoencoder-experiments/spacefill at main · ethansmith2000/autoencoder-experiments

Intro


I saw a really cool visualization someone made of space-filling curves with autoencoders (unfortunately I can't find the original post anymore).

It was a really neat visualization showing how points in a 1D latent space look once mapped back into 2D space, and over the course of training the line they formed took on some interesting shapes and bends.

The network clearly had some incentive to make good use of its compressed latent space, but the curve it produced was far from the widely studied space-filling curves, which made me wonder:

Without explicitly enforcing any structure on the learned latent space, and relying only on the self-organizing tendencies of neural networks, how close can we get to an actual space-filling curve?
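For concreteness, here's a minimal sketch of the kind of setup this is getting at (not the exact code in the repo): a small autoencoder with a 1D bottleneck trained to reconstruct 2D points sampled uniformly from the unit square. The layer sizes, loss, optimizer, and the latent sweep range at the end are all assumptions for illustration; the idea is just that decoding a sweep of the 1D latent traces out the learned curve in 2D.

```python
import torch
import torch.nn as nn

class Autoencoder1D(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),          # 1D bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),          # back out to 2D
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = Autoencoder1D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(512, 2)                 # points uniform in the unit square
    recon, _ = model(x)
    loss = ((recon - x) ** 2).mean()       # plain reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Trace the learned "curve": sweep the 1D latent and decode.
with torch.no_grad():
    z = torch.linspace(-3.0, 3.0, 2000).unsqueeze(1)  # sweep range is an assumption
    curve = model.decoder(z)               # (2000, 2) points forming a path in 2D
```

Plotting `curve` over the course of training is what produces the bending-line visualization described above.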

Hilbert curve illustration (source: https://www.bic.mni.mcgill.ca/~mallar/CS-644B/hilbert.html)

Background


Space-filling curves, and more specifically the famous Hilbert curve pictured above, are a remarkable discovery: a way of bending a 1-dimensional line so that, in the limit, it passes through every point of a 2D region.

3Blue1Brown did an awesome video on it

https://www.youtube.com/watch?v=3s7h2MHQtxc

As stated in the video, in practice we only ever see pseudo-Hilbert curves of finite order, since we can't actually draw a line with an infinite number of bends.

What makes the pseudo-Hilbert curve attractive, as opposed to simply zig-zagging back and forth across the space, is that as the order of the curve increases, the 2D point that a given position along the line maps to converges to the point it would occupy on the true Hilbert curve, rather than jumping around the way it does with a zig-zag scan.
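To make "finite order" concrete, here is the standard iterative index-to-coordinate construction for a pseudo-Hilbert curve (the usual `d2xy` mapping, not anything specific to this post). The small check at the end illustrates the stability property just described: the same fractional position along the curve lands at essentially the same normalized 2D location at every order.

```python
def d2xy(order, d):
    """Map index d along a pseudo-Hilbert curve of the given order
    to (x, y) cell coordinates on a 2**order x 2**order grid."""
    n = 2 ** order
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The full order-2 curve visits all 16 cells of a 4x4 grid exactly once:
curve = [d2xy(2, d) for d in range(4 ** 2)]

# Stability: the point a quarter of the way along the curve maps to the same
# normalized location at every order, unlike a zig-zag scan where it would move.
for order in (2, 4, 6):
    n = 4 ** order
    x, y = d2xy(order, n // 4)
    print(order, x / 2 ** order, y / 2 ** order)   # (0.0, 0.5) each time
```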