Tuesday, 17 January 2012

Edge Detection

Edge detection is a type of image processing that identifies discontinuities in brightness or colour in an image.

Edge detection techniques can be useful in video games for visual effects. Many games achieve this with a postprocessing effect that uses one or more convolution matrices. By postprocessing I mean that the effect is applied after the scene has been rendered normally; the effect works by modifying that first result in a second pass of the shader.
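
A bare-bones sketch of that two-pass structure is shown below. The names match the full shader later in this post, but the pass bodies here are just placeholders:

 #version 400
 in vec2 TexCoord;
 uniform sampler2D RenderTex;            // texture holding the first-pass result
 subroutine vec4 RenderPassType();       // one subroutine per pass
 subroutine uniform RenderPassType RenderPass;
 layout( location = 0 ) out vec4 FragColor;
 // Pass 1: shade the scene normally (placeholder).
 subroutine( RenderPassType )
 vec4 pass1()
 {
   return vec4(1.0);
 }
 // Pass 2: read the first-pass result back and modify it (placeholder).
 subroutine( RenderPassType )
 vec4 pass2()
 {
   return texture( RenderTex, TexCoord );
 }
 void main()
 {
   // The application selects pass1() or pass2() before each draw.
   FragColor = RenderPass();
 }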

In image processing, a convolution matrix is just a grid of numbers used to manipulate an image. Different combinations of these numbers produce different effects, such as blur, embossing, edge enhancement, and so on.

In general, the process is as follows. For each pixel of the first-pass render, the convolution matrix is centred on that pixel. Each cell of the convolution matrix is multiplied by the brightness value of the pixel it overlaps, and the products are summed. The resulting value is the brightness value of the corresponding pixel in the postprocessed image.
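
This is easy to express directly in a fragment shader. The function below is a minimal sketch that applies an arbitrary 3x3 convolution matrix to the brightness of an image; Image, Kernel and TexelSize are hypothetical uniforms used only for this illustration, not part of the shader shown later in this post.

 uniform sampler2D Image;    // first-pass render result
 uniform mat3 Kernel;        // the 3x3 convolution matrix
 uniform vec2 TexelSize;     // (1.0 / width, 1.0 / height) of the image
 // Centre the matrix on the pixel at uv, multiply each cell by the brightness
 // of the pixel it overlaps, and sum the results.
 float convolve3x3( vec2 uv )
 {
   float sum = 0.0;
   for( int i = -1; i <= 1; i++ )
   {
     for( int j = -1; j <= 1; j++ )
     {
       vec3 rgb = texture( Image, uv + vec2(i, j) * TexelSize ).rgb;
       float lum = dot( rgb, vec3(0.2126, 0.7152, 0.0722) );   // brightness
       sum += Kernel[i + 1][j + 1] * lum;
     }
   }
   return sum;
 }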

The images below show the result of a simple edge detector convolution matrix being applied to a scene:

First pass.
Second pass.
Incidentally, it is very easy to produce a toon shader from here. Instead of returning a single colour (i.e. the grey background shown above) when a non-edge pixel is found, you can return a range of colours based on the brightness of the pixel in the original image to produce something like this:

Edge detection with toon-shading.
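
As a rough sketch of that change (assumed code, not exactly what I used), the non-edge branch of the second pass could quantise the original pixel's brightness into a few flat bands; ToonLevels is an assumed uniform for the number of bands:

 uniform int ToonLevels;   // assumed uniform: number of colour bands, e.g. 4
 // Scale the original colour so that its brightness snaps to one of a few
 // discrete levels, giving the flat-shaded cartoon look.
 vec4 toonShade( vec3 originalColor )
 {
   float lum = dot( originalColor, vec3(0.2126, 0.7152, 0.0722) );  // brightness
   float banded = floor( lum * float(ToonLevels) ) / float(ToonLevels);
   return vec4( originalColor * banded / max( lum, 0.0001 ), 1.0 );
 }

In pass2() below, you would return toonShade( texture( RenderTex, TexCoord ).rgb ) for non-edge pixels instead of the flat grey vec4(0.5, 0.5, 0.5, 1.0).
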
Many modern games use more advanced methods such as silhouette extraction, or combinations of several convolution matrices, to produce a particular style. Okami and Jet Set Radio come to mind.

Here is the fragment shader I used for edge detection, building once again on a two-pass shader example from the OpenGL 4.0 Shading Language Cookbook.

 #version 400  
 in vec3 Position;  
 in vec3 Normal;  
 in vec2 TexCoord;  
 uniform sampler2D RenderTex;  
 uniform float EdgeThreshold;  
 uniform int Width;  
 uniform int Height;  
 subroutine vec4 RenderPassType();  
 subroutine uniform RenderPassType RenderPass;  
 struct LightInfo  
 {  
   vec4 Position; // Light position in eye coords.  
   vec3 Intensity; // A,D,S intensity  
 };  
 uniform LightInfo Light;  
 struct MaterialInfo  
 {  
   vec3 Ka;      // Ambient reflectivity  
   vec3 Kd;      // Diffuse reflectivity  
   vec3 Ks;      // Specular reflectivity  
   float Shininess;  // Specular shininess factor  
 };  
 uniform MaterialInfo Material;  
 layout( location = 0 ) out vec4 FragColor;  
 // Standard Phong lighting, used by the first pass to shade the scene
 vec3 phongModel( vec3 pos, vec3 norm )  
 {  
   vec3 s = normalize(vec3(Light.Position) - pos);  
   vec3 v = normalize(-pos.xyz);  
   vec3 r = reflect( -s, norm );  
   vec3 ambient = Light.Intensity * Material.Ka;  
   float sDotN = max( dot(s,norm), 0.0 );  
   vec3 diffuse = Light.Intensity * Material.Kd * sDotN;  
   vec3 spec = vec3(0.0);  
   if( sDotN > 0.0 )  
     spec = Light.Intensity * Material.Ks *  
         pow( max( dot(r,v), 0.0 ), Material.Shininess );  
   return ambient + diffuse + spec;  
 }  
 // First pass: render the scene with regular Phong lighting
 subroutine ( RenderPassType )  
 vec4 pass1()  
 {  
   return vec4(phongModel( Position, Normal ),1.0);  
 }  
 // Approximate the perceived brightness (luminance) of a colour
 float luminance( vec3 color )  
 {  
   return 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;  
 }  
 // Second pass: run the edge detector over the first-pass render
 subroutine( RenderPassType )  
 vec4 pass2()  
 {  
   float dx = 1.0 / float(Width);  
   float dy = 1.0 / float(Height);  
   // Fetch the brightness values of the pixels that the convolution matrix currently overlaps  
   float s00 = luminance(texture( RenderTex, TexCoord + vec2(-dx,dy) ).rgb);  
   float s10 = luminance(texture( RenderTex, TexCoord + vec2(-dx,0.0) ).rgb);  
   float s20 = luminance(texture( RenderTex, TexCoord + vec2(-dx,-dy) ).rgb);  
   float s01 = luminance(texture( RenderTex, TexCoord + vec2(0.0,dy) ).rgb);  
   float s11 = luminance(texture( RenderTex, TexCoord + vec2(0.0, 0.0)).rgb);  
   float s21 = luminance(texture( RenderTex, TexCoord + vec2(0.0,-dy) ).rgb);  
   float s02 = luminance(texture( RenderTex, TexCoord + vec2(dx, dy) ).rgb);  
   float s12 = luminance(texture( RenderTex, TexCoord + vec2(dx, 0.0) ).rgb);  
   float s22 = luminance(texture( RenderTex, TexCoord + vec2(dx, -dy) ).rgb);  
  // Apply the two Laplacian convolution matrices to the overlapping brightnesses:
  //   lapx:  0 -1  0     lapy: -1 -1 -1
  //         -1  4 -1           -1  8 -1
  //          0 -1  0           -1 -1 -1
  float lapx = s11 * 4.0 - (s01 + s21 + s10 + s12);
  float lapy = s11 * 8.0 - (s00 + s01 + s02 + s10 + s12 + s20 + s21 + s22);
  float dist = lapx * lapx + lapy * lapy;
  // Edge pixels are discarded, leaving whatever is already in the framebuffer
  // (e.g. the clear colour); every other pixel becomes a flat grey.
  if( dist > EdgeThreshold )
  {
    discard;
  }
  return vec4(0.5, 0.5, 0.5, 1.0);
 }  
 void main()  
 {  
   // This will call either pass1() or pass2()  
   FragColor = RenderPass();  
 }  
