Testing Runway and Stable Diffusion
Over the last few days I’ve tried Runway and Stable Diffusion. Runway is new to me; Stable Diffusion is quite popular on social media, and I’ve heard a lot about it recently. While Runway is essentially a platform that hosts a variety of models, Stable Diffusion does one thing directly: it generates images from your text prompt. I think both have real creative potential, since both give users a lot of control.
The two models I like on Runway are Space COCO, a tool that lets you paint images using object-tagged brushes, and Adaptive Style Transfer, which lets you supply your own input image.
Space COCO is basically a canvas that lets you create your own painting with algorithm-generated brushes. While the number and types of brushes are limited, the shapes you paint are rendered as the object tagged to each brush. This opens up real possibilities, and the quality of the outcome also depends on your painting skill and imagination.
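Under the hood, tools like this typically work on a semantic label map: each brushstroke fills pixels with a class ID, and a generator renders that map into a photo. Here is a minimal sketch of that data structure in Python. The class names, IDs, and the `generator` call are illustrative assumptions, not Runway’s actual internals.

```python
# A rough sketch of what an "object-tagged brush" produces: every stroke
# fills pixels with a class ID, building a semantic label map.
# The IDs below are hypothetical, COCO-Stuff-style values.
import numpy as np

H, W = 256, 256
SKY, SEA, SAND = 105, 154, 93                      # hypothetical class IDs

label_map = np.full((H, W), SKY, dtype=np.uint8)   # start with an all-sky canvas
label_map[120:200, :] = SEA                        # "paint" a band of sea
label_map[200:, :] = SAND                          # "paint" a beach at the bottom

# A SPADE-like generator would then synthesize a matching image from the map:
# image = generator(label_map)   # hypothetical call, not a real API
```

The quality ceiling the paragraph mentions follows from this: the model only sees which class each pixel belongs to, so how convincingly the regions are shaped is entirely up to the painter.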
Adaptive Style Transfer has more limitations than the previous model. The styles that can be transferred are fixed, and aside from some color adjustment, there isn’t much you can control in the output. However, this model lets you feed in your own image, which opens up more creative room depending on the material you give it. The flip side is that if the input is something functional, like the QR code in the image below, the output destroys its function.
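The same content-in, stylized-out workflow can be sketched in code. The snippet below uses TensorFlow Hub’s arbitrary-image-stylization model as a stand-in; it is not Runway’s Adaptive Style Transfer itself, and the file names are hypothetical.

```python
# A minimal sketch of style transfer: a content image plus a style
# reference in, a repainted image out. Stand-in model, not Runway's.
import tensorflow as tf
import tensorflow_hub as hub

hub_model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)

def load_image(path):
    """Load an image file, scale it to [0, 1], and add a batch dimension."""
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

content = load_image("qr_code.png")   # hypothetical input, e.g. a QR code
style = load_image("style.jpg")       # the (fixed) style reference

# The result keeps the rough layout but repaints every pixel -- which is
# exactly why a stylized QR code stops scanning.
stylized = hub_model(content, style)[0]
tf.keras.utils.save_img("stylized.png", stylized[0])
```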
Stable Diffusion offers countless (though not infinite) possibilities: you can play with the prompt, seed, and v-scale to control the output. The algorithm is very powerful, and you can spend hours experimenting with it. The output depends entirely on the data you feed it, so it’s possible to steer the result by fine-tuning the parameters.
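For reference, here is a minimal sketch of that prompt/seed/scale workflow using Hugging Face’s diffusers library; this is an assumption, since the post doesn’t say which interface was used. What the post calls v-scale most likely corresponds to the guidance scale here, and the prompt is hypothetical.

```python
# A minimal Stable Diffusion sketch: same prompt + seed + guidance scale
# reproduces the same image, so changing any one of them "fine-tunes" it.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a city street at dusk"  # hypothetical prompt
generator = torch.Generator("cuda").manual_seed(42)  # seed fixes the initial noise

image = pipe(
    prompt,
    guidance_scale=7.5,       # how strongly the image follows the prompt
    num_inference_steps=50,
    generator=generator,
).images[0]
image.save("output.png")
```

Rerunning with the same seed and settings gives the identical image, while nudging the seed or the guidance scale produces controlled variations, which is what makes the parameter space so rewarding to explore.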