
OpenGL SL - Show Video and Image

Displaying an image without any modification is done like this:
void mainImage( out vec4 fragColor, in vec2 fragCoord ){
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    
    //uv.x = uv.x - 0.3;
    fragColor = texture(iChannel0, uv);
}

I think of fragment processing as two major steps: normalizing and pointing.


1. Normalizing

Each pixel has a screen coordinate, and we need to normalize it to the range 0.0 to 1.0, because the displays around us come in many different resolutions. We cannot write separate code for every resolution; that would be very troublesome. So we normalize:
vec2 uv = fragCoord/iResolution.xy;
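As a quick sketch of what this division produces, consider a 1920x1080 display (so `iResolution.xy` is `vec2(1920.0, 1080.0)`; the concrete numbers here are just an illustration):

```glsl
// Pixel at the screen center:
vec2 uv = vec2(960.0, 540.0) / vec2(1920.0, 1080.0);  // uv = (0.5, 0.5)

// Pixel at the top-right corner:
vec2 uv2 = vec2(1920.0, 1080.0) / vec2(1920.0, 1080.0);  // uv2 = (1.0, 1.0)
```

The center pixel of any display always normalizes to (0.5, 0.5), so the same shader code works unchanged at every resolution.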


2. Pointing

As I said, each pixel has a screen coordinate. Each coordinate must be linked to a coordinate in the sampler, and the sampler's coordinates also run from 0.0 to 1.0. As a result, each normalized pixel points to a location in the sampler, and what we see on screen is the color sampled there.
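To see that we really control where each pixel points, here is a small sketch: mirroring `uv.x` before sampling makes every pixel point at the horizontally opposite position in the sampler, so the whole image appears flipped left to right.

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord ){
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    uv.x = 1.0 - uv.x;                   // point to the mirrored position
    fragColor = texture(iChannel0, uv);  // the image shows up flipped left-right
}
```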


Now let's shift the sampling point horizontally by 0.3 at each pixel.


void mainImage( out vec4 fragColor, in vec2 fragCoord ){
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    
    uv.x = uv.x - 0.3;
    fragColor = texture(iChannel0, uv);
}
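Note that after subtracting 0.3, `uv.x` is negative for the leftmost 30% of the screen, so what appears there depends on the channel's wrap mode. One way to make the behavior explicit is to wrap the coordinate yourself with `fract`, so the image repeats (a sketch, assuming the same Shadertoy `iChannel0` setup):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord ){
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    uv.x = fract(uv.x - 0.3);            // wrap into [0,1) so the image repeats
    fragColor = texture(iChannel0, uv);
}
```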


The result looks like below (left: original, right: shifted). Because each pixel now samples a point 0.3 to its left, the image content appears shifted 0.3 to the right.


