I've been experimenting with a lossless binary texture compression algorithm.
A few years back, Valve published a paper about using signed distance fields to encode vector graphics. The technique worked okay but had some unfortunate artifacting around sharp corners.
As future work, Valve's original paper suggested ANDing multiple signed distance fields together in order to make those corners sharp again. Generating such a (multi-channel) signed distance field isn't trivial. I've seen two papers from other authors that tried to do it, with varying success and their own unfortunate artifacts. Usually these approaches require vector graphics as input, which I've always considered somewhat of a non-starter; what I really want to do is encode high-resolution raster graphics. I've been experimenting with that off and on for a few years, but never really made much progress.
This is something completely different. It uses basically the same decoder shader as a typical signed distance field implementation, but the generator is something else. Instead of distance fields, the algorithm chops the image into strips, and uses a scanline to determine how the texture changes from one side of the strip to the other. Each pixel encodes one of 8 state transitions (e.g. all outside, all inside, outside to inside, inside to outside, etc.) along with the x coordinates at which the transitions happen. The original image is reconstituted via bilinear filtering.
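To make the per-pixel encoding a little more concrete, here's a minimal sketch of the strip classification step. This is my own reconstruction from the description above, not the actual generator: the names (`encode_span`, the state labels) are mine, the exact eight-state enumeration isn't spelled out here, and a real encoder would also quantize the transition x coordinates into the texture so the decoder's bilinear filtering can reproduce them.

```python
def encode_span(span):
    """Classify one output-pixel-wide span of a binary scanline.

    `span` is a list of 0/1 coverage samples. Returns (state, transitions),
    where `state` is a coarse label for how coverage changes across the span
    and `transitions` holds the fractional x positions (0..1) where it flips.
    The real encoder uses one of 8 states; this sketch only distinguishes six.
    """
    n = len(span)
    # Fractional positions where coverage flips between samples.
    flips = [i / n for i in range(1, n) if span[i] != span[i - 1]]

    if not flips:
        state = "ALL_INSIDE" if span[0] else "ALL_OUTSIDE"
    elif len(flips) == 1:
        state = "OUT_TO_IN" if span[-1] else "IN_TO_OUT"
    else:
        # Two flips: a blob or a gap entirely inside the span. More than two
        # can't be represented; a real encoder would approximate or split.
        state = "OUT_IN_OUT" if span[0] == 0 else "IN_OUT_IN"
        flips = flips[:2]

    return state, flips
```

For example, `encode_span([0, 0, 1, 1])` returns `("OUT_TO_IN", [0.5])`: the span starts outside, crosses the edge halfway across, and ends inside, which is exactly what the decoder needs to reconstruct the edge position within that pixel.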
Still have more work to do. I want to see if I can extend this 'compression' to 2D somehow, instead of just 1D. I'll probably need to do some work to keep it from looking super ugly under mag filtering (vertical scaling), and there are some numerical precision issues to iron out.