This simple image is produced by the shader below.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;
    //uv.x = uv.x - 0.3;
    fragColor = texture(iChannel0, uv);
}
I think of this fragment processing as having two major steps: normalizing and pointing (sampling).
1. Normalizing
Each pixel has a screen coordinate, and we need to normalize it to the range 0.0 to 1.0, because displays come in many different resolutions. Handling every resolution separately would be very troublesome, so we normalize:
vec2 uv = fragCoord/iResolution.xy;
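To make this concrete, here is a small sketch assuming a hypothetical 1920x1080 canvas (the actual resolution comes from the `iResolution` uniform at run time):

```glsl
// Sketch, assuming iResolution.xy == vec2(1920.0, 1080.0):
// fragCoord = (0.5,    0.5)    ->  uv near (0.0, 0.0)  (bottom-left pixel)
// fragCoord = (960.0,  540.0)  ->  uv = (0.5, 0.5)     (center)
// fragCoord = (1919.5, 1079.5) ->  uv near (1.0, 1.0)  (top-right pixel)
vec2 uv = fragCoord / iResolution.xy;
```

Whatever the display resolution, the same shader code now works with coordinates in 0.0 to 1.0.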
2. Pointing
As I said, each pixel has a screen coordinate. Each normalized coordinate is then linked to a coordinate in the sampler. The sampler's texture coordinates also run from 0.0 to 1.0, so every normalized pixel points at one location in the texture, and on screen we see the texels pointed to by each pixel.
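One way to see that the pixel only *points* into the sampler is to ignore `uv` and sample a fixed location instead (a sketch, not part of the original shader):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
    // Every pixel points at the same texture coordinate (0.5, 0.5),
    // so the whole screen is filled with the texture's center color.
    fragColor = texture(iChannel0, vec2(0.5, 0.5));
}
```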
Now let's shift the sampling point horizontally by 0.3 at each pixel.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;
    // Each pixel now samples the texel 0.3 to its left,
    // so the image appears shifted 0.3 to the right.
    uv.x = uv.x - 0.3;
    fragColor = texture(iChannel0, uv);
}
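Note that after the shift, `uv.x` is negative for the leftmost 30% of the screen, so what appears there depends on the channel's wrap mode (repeat, clamp, etc.) in the Shadertoy channel settings. To wrap explicitly in code regardless of that setting, one could use `fract` (a sketch):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
    // fract() wraps the shifted coordinate back into 0.0..1.0,
    // so the image scrolls around instead of clamping at the edge.
    uv.x = fract(uv.x - 0.3);
    fragColor = texture(iChannel0, uv);
}
```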
The result is shown below (left: original, right: with the shifted sampling point).