Preface 

So, you've heard so much about people doing bump mapping these days, you've seen the cool effect in games like Doom 3 and Half-Life, and now you're sitting all alone with your sparkling new graphics card, wondering if you could achieve the same effect in your own game?


Disclaimer 

The stone texture which has been used in this tutorial was taken from the bump mapping article at NeHe's, which is mentioned in reference [4]. The normal map for the texture was generated from a height map using my own height map to normal map converter. 

What you need to know beforehand 



Required software 



What is Bump Mapping 

Basically, bump mapping is the art of making a flat 2D texture look as if it has 3D depth, as shown in the two pictures below:
The terms "per-pixel lighting" and "bump mapping" go hand in hand, since what we're basically doing when bump mapping is evaluating the current light intensity at any given pixel of the texture (a pixel of a texture is also often referred to as a texel). 
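To make the per-texel evaluation concrete, here is a small C++ sketch of the diffuse lighting term we'll later compute in the fragment shader. The `Vec3` type and helper functions are hypothetical stand-ins for illustration only; they are not part of the tutorial's source code:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 Normalize(const Vec3 &v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Diffuse term per color channel: I = Dl * Dm * clamp(L.N, 0, 1)
// Dl = diffuse color of the light, Dm = diffuse color of the material (the texel),
// L = normalized vector towards the light, N = normalized surface normal
Vec3 DiffuseIntensity(const Vec3 &lightColor, const Vec3 &materialColor,
                      const Vec3 &L, const Vec3 &N)
{
    float d = std::max(0.0f, std::min(1.0f, Dot(L, N)));
    return { lightColor.x * materialColor.x * d,
             lightColor.y * materialColor.y * d,
             lightColor.z * materialColor.z * d };
}
```

A light shining straight down the normal gives full intensity; a light behind the surface (negative dot product) is clamped to zero, which is exactly what saturate() will do for us in the fragment shader later.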

The Math Behind It All 

Imagine an ordinary 2D texture. The surface of this texture is completely flat, and therefore the normals of the texture surface all point straight up, as shown in the picture below. That's why, when you step close to, say, a wooden box in a game, the box doesn't appear to have any depth. Those are the results we get when using plain 2D textures.

Below is a screenshot of an actual normal map (which probably looks a bit weird at first):

Notice the three axes I've drawn in the bottom left of the texture: the x-axis points to the right, the y-axis points upwards, while the z-axis points out of the screen. Now, bind the color red to the x-axis, green to the y-axis and blue to the z-axis. If you look closely enough, you can actually see that the edges facing in the positive x-axis' direction are red, the ones facing in the positive y-axis' direction are green, and the ones pointing straight up from the surface are blue. Since the majority of textures have most of their surface normals pointing out of the texture (following the direction of the positive z-axis), that explains the bluish tint shared by almost all normal maps.
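The color encoding itself can be sketched in a couple of lines of C++ (hypothetical helpers for illustration, not part of the tutorial's source): each component of a unit normal in the vector range [-1, 1] is squeezed into the color range [0, 1], and the fragment shader later reverses the mapping:

```cpp
// Map one normal component from vector range [-1, 1] to color range [0, 1]
float CompressToColor(float n)    { return 0.5f * n + 0.5f; }

// Reverse step, as performed later in the fragment shader: n = 2 * (c - 0.5)
float DecompressToNormal(float c) { return 2.0f * (c - 0.5f); }
```

A straight-up normal (0, 0, 1) thus encodes to the color (0.5, 0.5, 1.0), which is exactly the familiar bluish tone dominating most normal maps.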
According to the (simplified) lighting equation above, we need three parameters to successfully calculate the light intensity at any given pixel. You might be wondering what tangent space is and why we need it when all we do is specify vertices directly in object space. Think of the S tangent as the x-axis, the T tangent as the y-axis and the normal as the z-axis going out of the screen. Each vertex on a surface has its own tangent space, as you can see in the picture (four vertices in total; one in each corner). Together, the three axes form a basis at that vertex, and they define a coordinate space called tangent space (or texture space, if you like). If you put the axes into a matrix you get the TBN matrix (Tangent, Binormal, Normal), where Tangent is the S tangent and Binormal is the T tangent:
It's probably quite clear that option 1 is the more desirable of the two, because we only have to do one vector conversion, as opposed to converting all the normals of the normal map to object space. Since the TBN matrix converts from tangent space to object space, and we want to perform the opposite (to convert the light vector from object space into tangent space), we need to use the inverse of the TBN matrix, which is shown below:
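When the tangent basis happens to be orthonormal, the inverse is simply the transpose, and converting an object-space vector into tangent space boils down to three dot products, one against each basis vector. A short C++ sketch of that special case (the `Vec3` type is a hypothetical helper, not part of the tutorial's source) mirrors what mul(TBNMatrix, vLightVector) will do in the vertex shader later:

```cpp
struct Vec3 { float x, y, z; };

float Dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Convert an object-space vector into tangent space by projecting it onto the
// tangent (T), binormal (B) and normal (N) axes. For an orthonormal basis this
// equals multiplying by the inverse (= transpose) of the TBN matrix.
Vec3 ToTangentSpace(const Vec3 &T, const Vec3 &B, const Vec3 &N, const Vec3 &v)
{
    return { Dot(T, v), Dot(B, v), Dot(N, v) };
}
```

For non-orthonormal bases (e.g. sheared texture mappings) the full inverse is required, which is what the CalculateTBNMatrix() function further below computes.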


What's CG 

CG is nVIDIA's take on a shader language. A shader is a small program which you upload to the graphics card's GPU. This makes shaders very powerful: they run directly on the graphics card and can access things such as textures and matrices directly. 

The CG Runtime 

First off, there are two kinds of shaders: vertex shaders and fragment (pixel) shaders. Basically, the vertex shader takes care of transforming vertices into the space we want them in. For instance, a vertex shader is the perfect place to transform the light vector from object space into tangent space. 

The CG shaders 

So, without further ado, let's get this monster up and rolling! 

void main(in float4 position      : POSITION,   // The position of the current vertex. This parameter is required by CG in a vertex shader!
          in float2 texCoords     : TEXCOORD0,  // To send the data to the shader we use glMultiTexCoord2fARB(GL_TEXTURE0_ARB, ...)
          in float3 vTangent      : TEXCOORD1,  // To send the data to the shader we use glMultiTexCoord3fARB(GL_TEXTURE1_ARB, ...)
          in float3 vBinormal     : TEXCOORD2,  // To send the data to the shader we use glMultiTexCoord3fARB(GL_TEXTURE2_ARB, ...)
          in float3 vNormal       : TEXCOORD3,  // To send the data to the shader we use glMultiTexCoord3fARB(GL_TEXTURE3_ARB, ...)

          out float4 positionOUT     : POSITION,   // Send the transformed vertex position on to the fragment shader
          out float2 texCoordsOUT    : TEXCOORD0,  // Send the texture map's texcoords to the fragment shader
          out float2 normalCoordsOUT : TEXCOORD1,  // Send the normal map's texcoords to the fragment shader
          out float3 vLightVector    : TEXCOORD2,  // Send the transformed light vector to the fragment shader

          const uniform float4x4 modelViewProjMatrix,  // The concatenated modelview and projection matrix
          const uniform float3 vLightPosition)         // The light sphere's position in object space
{
    // Calculate the light vector
    vLightVector = vLightPosition - position.xyz;

    // Transform the light vector from object space into tangent space
    float3x3 TBNMatrix = float3x3(vTangent, vBinormal, vNormal);
    vLightVector.xyz = mul(TBNMatrix, vLightVector);

    // Transform the current vertex from object space to clip space, since OpenGL isn't
    // doing it for us as long as we're using a vertex shader
    positionOUT = mul(modelViewProjMatrix, position);

    // Send the texture map coords and normal map coords to the fragment shader
    texCoordsOUT    = texCoords;
    normalCoordsOUT = texCoords;
} 

The ':' character after some of the parameter names tells CG that we want to bind that parameter to a specific OpenGL call. For example, ": POSITION" will receive the value of the glVertex3f() call, and "texCoords : TEXCOORD0" will be bound to the value we set with glMultiTexCoord2fARB(GL_TEXTURE0_ARB, ...). ": POSITION" is called a binding semantic. 

// Set the "modelViewProjMatrix" parameter in the vertex shader to the current concatenated
// modelview and projection matrix
cgGLSetStateMatrixParameter(g_modelViewMatrix, CG_GL_MODELVIEW_PROJECTION_MATRIX, CG_GL_MATRIX_IDENTITY);

// Set the light position in the vertex shader
cgGLSetParameter3f(g_lightPosition, g_vLightPos.x, g_vLightPos.y, g_vLightPos.z); 

Notice how we bind the out parameters in the shader to a binding semantic as well. That is to make sure we can read them directly from the fragment shader. As you will see shortly, vLightVector for instance is bound to TEXCOORD2 and is read in the fragment shader as a parameter bound to the semantic TEXCOORD2.


void main(in float4 colorIN      : COLOR0,
          in float2 texCoords    : TEXCOORD0,  // The texture map's texcoords
          in float2 normalCoords : TEXCOORD1,  // The normal map's texcoords
          in float3 vLightVector : TEXCOORD2,  // The transformed light vector (in tangent space)

          out float4 colorOUT : COLOR0,  // The final color of the current pixel

          uniform sampler2D baseTexture   : TEXUNIT0,  // The whole rock texture map
          uniform sampler2D normalTexture : TEXUNIT1,  // The whole normal map
          uniform float3 fLightDiffuseColor)           // The diffuse color of the light source
{
    // We must remember to normalize the light vector as it's linearly interpolated across
    // the surface, which in turn means the length of the vector will change as we interpolate
    vLightVector = normalize(vLightVector);

    // Since the normals in the normal map are in the (color) range [0, 1] we need to
    // uncompress them to "real" normal (vector) directions.
    // Decompress vector ([0, 1] -> [-1, 1])
    float3 vNormal = 2.0f * (tex2D(normalTexture, normalCoords).rgb - 0.5f);

    // Calculate the diffuse component and store it as the final color in 'colorOUT'.
    // The diffuse component is defined as: I = Dl * Dm * clamp(L•N, 0, 1)
    // saturate() works just like clamp() except that it implies a clamping to [0, 1]
    colorOUT.rgb = fLightDiffuseColor * tex2D(baseTexture, texCoords).rgb * saturate(dot(vLightVector, vNormal));
} 

There shouldn't be any problems understanding what this shader does either. Just compare it to the bump mapping theory discussed earlier and you should be fine.


The Main Application 

We'll start out with the necessary includes needed to use CG. We assume the CG headers have been installed in the compiler's include path in a subdirectory called CG, just like the OpenGL headers, which are stored in a subdirectory called GL (we also assume the OglExt library has been installed in either the project's folder or in the compiler's library path): 

// Include the headers required by CG
#include <CG/cg.h>
#include <CG/cgGL.h>

// Include the library files needed for CG.
// If you don't use Visual Studio (and therefore cannot use the #pragma directive),
// you should add them manually in the project's settings.
#pragma comment (lib, "cg.lib")
#pragma comment (lib, "cgGL.lib")

// 'OglExt.lib' is used to allow for easy implementation of OpenGL extensions
#pragma comment (lib, "OglExt.lib") 

Next up is our InitCG() function which sets up CG and creates the CG shader programs: 

BOOL InitCG()
{
    // Set up the function which gets called by CG if something goes wrong.
    // We'll define 'CGErrorCallback' in a second..
    cgSetErrorCallback(CGErrorCallback);

    // Create the CG context which will hold our shader programs
    g_context = cgCreateContext();

    // Find the best matching profile for the fragment shader
    g_fragmentProfile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(g_fragmentProfile);
    if (g_fragmentProfile == CG_PROFILE_UNKNOWN)
    {
        g_pLog.PrintLn("Unsupported Graphics Card! Could Not Find A Suitable Fragment Shader Profile!");
        return FALSE;
    }

    // Find the best matching profile for the vertex shader
    g_vertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
    cgGLSetOptimalOptions(g_vertexProfile);
    if (g_vertexProfile == CG_PROFILE_UNKNOWN)
    {
        g_pLog.PrintLn("Unsupported Graphics Card! Could Not Find A Suitable Vertex Shader Profile!");
        return FALSE;
    }

    // Create the fragment program.
    // cgCreateProgramFromFile() takes the following parameters: a CG context, the type of
    // the CG file (CG_SOURCE specifies that we're reading a shader which hasn't been
    // compiled yet), the filename of the CG shader, the shader profile we just created,
    // and the name of the entry function in the shader.
    g_fragmentProgram = cgCreateProgramFromFile(g_context, CG_SOURCE, "FragmentShader.cg", g_fragmentProfile, "main", 0);
    if (!g_fragmentProgram)
        return FALSE;

    // Load the fragment program
    cgGLLoadProgram(g_fragmentProgram);

    // Create the vertex program
    g_vertexProgram = cgCreateProgramFromFile(g_context, CG_SOURCE, "VertexShader.cg", g_vertexProfile, "main", 0);
    if (!g_vertexProgram)
        return FALSE;

    // Load the vertex program
    cgGLLoadProgram(g_vertexProgram);

    // This calculates the TBN matrices for all triangles in the scene
    CalculateTBNMatrix(g_vQuad[0], g_vTexCoords[0], g_TBNMatrix[0]);
    CalculateTBNMatrix(g_vQuad[1], g_vTexCoords[1], g_TBNMatrix[1]);

    // Get the parameters which we can pass to the vertex and fragment shaders.
    // Think of it like "main(int nFoo)". What we do below is fetch a reference to the
    // parameters (like nFoo) so we can set their value from outside of the CG program.
    // You'll see the connection a bit later..
    g_modelViewMatrix   = cgGetNamedParameter(g_vertexProgram, "modelViewProjMatrix");
    g_lightPosition     = cgGetNamedParameter(g_vertexProgram, "vLightPosition");
    g_lightDiffuseColor = cgGetNamedParameter(g_fragmentProgram, "fLightDiffuseColor");

    // Create the light sphere
    g_lightSphere = gluNewQuadric();

    return TRUE;
} 

In the following, we've defined our quad as two triangles along with their corresponding texture coordinates. Notice that we're only using two TBN matrices! This is because, as mentioned earlier, we're dealing with triangles, and we only need one TBN matrix per triangle. Actually, since we know we're about to render a quad with a completely flat surface, we could have managed with only one TBN matrix, but let's keep things simple here: 

// The coordinates for our quad (2 triangles of 3 vertices each)
CVector g_vQuad[2][3] =
{
    { CVector(-10.0f, -10.0f, -40.0f), CVector( 10.0f, -10.0f, -40.0f), CVector( 10.0f,  10.0f, -40.0f) },
    { CVector(-10.0f,  10.0f, -40.0f), CVector(-10.0f, -10.0f, -40.0f), CVector( 10.0f, -10.0f, -40.0f) }
};

// The texture coordinates for the 2 triangles
CVector2 g_vTexCoords[2][3] =
{
    { CVector2(0.0f, 0.0f), CVector2(1.0f, 0.0f), CVector2(1.0f, 1.0f) },
    { CVector2(0.0f, 1.0f), CVector2(0.0f, 0.0f), CVector2(1.0f, 0.0f) }
};

// Two TBN matrices (one for each triangle). Each matrix consists of 3 vectors
CVector g_TBNMatrix[2][3]; 

Following is our error callback function, which is called whenever something goes wrong in the CG shaders, like a compile error when we load and compile the shaders: 

void CGErrorCallback()
{
// Print the error message to the log:
// cgGetErrorString() returns the error that has occurred.
// cgGetLastListing() returns a more descriptive report of the error
g_pLog.PrintLn("%s - %s", cgGetErrorString(cgGetError()), cgGetLastListing(g_context));
}


It's also important to remember to clean up when everyone's going home and we're turning off the lights: 

void DestroyCG()
{
    // Destroy and free our light sphere
    if (g_lightSphere)
        gluDeleteQuadric(g_lightSphere);

    // Destroying the CG context automatically destroys all attached CG programs
    cgDestroyContext(g_context);
} 

Before we can render anything, we need to calculate the TBN matrix for each triangle. The function CalculateTBNMatrix() does just that. As input it takes three parameters: the triangle's three vertices, the matching texture coordinates, and the TBN matrix to fill in: 

void CalculateTBNMatrix(const CVector *pvTriangle, const CVector2 *pvTexCoords, CVector *pvTBNMatrix)
{
    // Calculate the tangent basis for each vertex of the triangle
    //
    // UPDATE: In the 3rd edition of the accompanying article, the for-loop located here has
    // been removed as it was redundant (the entire TBN matrix was calculated three times
    // instead of just once).
    //
    // Please note that this function relies on the fact that the input geometry consists
    // of triangles, so the tangent basis for each vertex is identical!
    //
    // We use the first vertex of the triangle to calculate the TBN matrix, but we could just
    // as well have used either of the other two. Try changing 'i' below to 1 or 2. The end
    // result is the same.
    int i = 0;

    // Calculate the indices to the right and left of the current index
    int nNextIndex = (i + 1) % 3;
    int nPrevIndex = (i + 2) % 3;

    // Calculate the vectors from the current vertex to the two other vertices in the triangle
    CVector v2v1 = pvTriangle[nNextIndex] - pvTriangle[i];
    CVector v3v1 = pvTriangle[nPrevIndex] - pvTriangle[i];

    // The equation presented in the article states that:
    // c2c1_T = V2.texcoord.x - V1.texcoord.x
    // c2c1_B = V2.texcoord.y - V1.texcoord.y
    // c3c1_T = V3.texcoord.x - V1.texcoord.x
    // c3c1_B = V3.texcoord.y - V1.texcoord.y

    // Calculate c2c1_T and c2c1_B
    float c2c1_T = pvTexCoords[nNextIndex].x - pvTexCoords[i].x;
    float c2c1_B = pvTexCoords[nNextIndex].y - pvTexCoords[i].y;

    // Calculate c3c1_T and c3c1_B
    float c3c1_T = pvTexCoords[nPrevIndex].x - pvTexCoords[i].x;
    float c3c1_B = pvTexCoords[nPrevIndex].y - pvTexCoords[i].y;

    float fDenominator = c2c1_T * c3c1_B - c3c1_T * c2c1_B;
    if (ROUNDOFF(fDenominator) == 0.0f)
    {
        // We won't risk a divide by zero, so set the tangent matrix to the identity matrix
        pvTBNMatrix[0] = CVector(1.0f, 0.0f, 0.0f);
        pvTBNMatrix[1] = CVector(0.0f, 1.0f, 0.0f);
        pvTBNMatrix[2] = CVector(0.0f, 0.0f, 1.0f);
    }
    else
    {
        // Calculate the reciprocal value once and for all (to achieve speed)
        float fScale1 = 1.0f / fDenominator;

        // T and B are calculated just as the equation in the article states
        CVector T, B, N;
        T = CVector((c3c1_B * v2v1.x - c2c1_B * v3v1.x) * fScale1,
                    (c3c1_B * v2v1.y - c2c1_B * v3v1.y) * fScale1,
                    (c3c1_B * v2v1.z - c2c1_B * v3v1.z) * fScale1);
        B = CVector((-c3c1_T * v2v1.x + c2c1_T * v3v1.x) * fScale1,
                    (-c3c1_T * v2v1.y + c2c1_T * v3v1.y) * fScale1,
                    (-c3c1_T * v2v1.z + c2c1_T * v3v1.z) * fScale1);

        // The normal N is calculated as the cross product between T and B
        N = T.CrossProduct(B);

        // Calculate the reciprocal value once and for all (to achieve speed)
        float fScale2 = 1.0f / ((T.x * B.y * N.z - T.z * B.y * N.x) +
                                (B.x * N.y * T.z - B.z * N.y * T.x) +
                                (N.x * T.y * B.z - N.z * T.y * B.x));

        // Calculate the inverse of the TBN matrix using the formula described in the article.
        // We store the basis vectors directly in the provided TBN matrix: pvTBNMatrix
        pvTBNMatrix[0].x = B.CrossProduct(N).x * fScale2;
        pvTBNMatrix[0].y = N.CrossProduct(T).x * fScale2;
        pvTBNMatrix[0].z = T.CrossProduct(B).x * fScale2;
        pvTBNMatrix[0].Normalize();

        pvTBNMatrix[1].x = B.CrossProduct(N).y * fScale2;
        pvTBNMatrix[1].y = N.CrossProduct(T).y * fScale2;
        pvTBNMatrix[1].z = T.CrossProduct(B).y * fScale2;
        pvTBNMatrix[1].Normalize();

        pvTBNMatrix[2].x = B.CrossProduct(N).z * fScale2;
        pvTBNMatrix[2].y = N.CrossProduct(T).z * fScale2;
        pvTBNMatrix[2].z = T.CrossProduct(B).z * fScale2;
        pvTBNMatrix[2].Normalize();
    }
} 

So, finally we've arrived at the actual function where all the magic takes place: the Render() function. Here the light is rendered, we set up multitexturing, we enable our shaders (both the vertex and fragment ones) and we call the function RenderQuad() which renders our quad in the middle of the screen: 

void Render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    glPushAttrib(GL_CURRENT_BIT);

    // Render the white light source
    glColor3f(1.0f, 1.0f, 1.0f);
    glPushMatrix();

    // We want to make the light source go round in a circle. Here we just apply some basic
    // trigonometry. Please note we're not factoring in time in this calculation (which might
    // result in a faster/slower moving circle on some computers)
    g_fRotAngle += (float)PI * 0.02f * g_fLightSpeed;
    g_vLightPos.x = cosf(g_fRotAngle) * 10.0f;
    g_vLightPos.y = sinf(g_fRotAngle) * 10.0f;

    // Position and render the light sphere
    glTranslatef(g_vLightPos.x, g_vLightPos.y, g_vLightPos.z);
    gluSphere(g_lightSphere, 1.0f, 20, 20);
    glPopMatrix();
    glPopAttrib();

    // Enable the vertex and fragment profiles and bind the vertex and fragment programs
    cgGLEnableProfile(g_vertexProfile);
    cgGLEnableProfile(g_fragmentProfile);
    cgGLBindProgram(g_vertexProgram);
    cgGLBindProgram(g_fragmentProgram);

    // Set the "modelViewProjMatrix" parameter in the vertex shader to the current
    // concatenated modelview and projection matrix
    cgGLSetStateMatrixParameter(g_modelViewMatrix, CG_GL_MODELVIEW_PROJECTION_MATRIX, CG_GL_MATRIX_IDENTITY);

    // Set the light position parameter in the vertex shader
    cgGLSetParameter3f(g_lightPosition, g_vLightPos.x, g_vLightPos.y, g_vLightPos.z);

    // Set the diffuse color of the light in the fragment shader
    cgGLSetParameter3f(g_lightDiffuseColor, 1.0f, 1.0f, 1.0f);

    // Enable and bind the rock texture
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, g_uiRockTexture);

    // Enable and bind the normal map
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, g_uiNormalTexture);

    RenderQuad();

    // Disable the vertex and fragment profiles again
    cgGLDisableProfile(g_vertexProfile);
    cgGLDisableProfile(g_fragmentProfile);

    // Disable textures
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glDisable(GL_TEXTURE_2D);
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glDisable(GL_TEXTURE_2D);
} 

The RenderQuad() function is shown below. You'll probably notice we've cheated a bit and are using GL_TRIANGLE_FAN to draw the two triangles instead of GL_TRIANGLES. By using GL_TRIANGLE_FAN we only need to specify four vertices, instead of rendering each triangle separately, which would have required six vertices. 

void RenderQuad()
{
    glBegin(GL_TRIANGLE_FAN);
        // Set the texture coordinates for the rock texture and the normal map
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, g_vTexCoords[0][0].x, g_vTexCoords[0][0].y);

        // Specify the tangent matrix vectors, one by one, for the first triangle and send
        // them to the vertex shader
        glMultiTexCoord3fARB(GL_TEXTURE1_ARB, g_TBNMatrix[0][0].x, g_TBNMatrix[0][0].y, g_TBNMatrix[0][0].z);
        glMultiTexCoord3fARB(GL_TEXTURE2_ARB, g_TBNMatrix[0][1].x, g_TBNMatrix[0][1].y, g_TBNMatrix[0][1].z);
        glMultiTexCoord3fARB(GL_TEXTURE3_ARB, g_TBNMatrix[0][2].x, g_TBNMatrix[0][2].y, g_TBNMatrix[0][2].z);

        // Draw the bottom left vertex
        glVertex3f(g_vQuad[0][0].x, g_vQuad[0][0].y, g_vQuad[0][0].z);

        // ---------------------------------------- //

        // Set the texture coordinates for the rock texture and the normal map
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, g_vTexCoords[0][1].x, g_vTexCoords[0][1].y);

        // Draw the bottom right vertex
        glVertex3f(g_vQuad[0][1].x, g_vQuad[0][1].y, g_vQuad[0][1].z);

        // ---------------------------------------- //

        // Set the texture coordinates for the rock texture and the normal map
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, g_vTexCoords[0][2].x, g_vTexCoords[0][2].y);

        // Draw the top right vertex
        glVertex3f(g_vQuad[0][2].x, g_vQuad[0][2].y, g_vQuad[0][2].z);

        // ---------------------------------------- //

        // Set the texture coordinates for the rock texture and the normal map
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, g_vTexCoords[1][0].x, g_vTexCoords[1][0].y);

        // Specify the tangent matrix vectors, one by one, for the second triangle and send
        // them to the vertex shader
        glMultiTexCoord3fARB(GL_TEXTURE1_ARB, g_TBNMatrix[1][0].x, g_TBNMatrix[1][0].y, g_TBNMatrix[1][0].z);
        glMultiTexCoord3fARB(GL_TEXTURE2_ARB, g_TBNMatrix[1][1].x, g_TBNMatrix[1][1].y, g_TBNMatrix[1][1].z);
        glMultiTexCoord3fARB(GL_TEXTURE3_ARB, g_TBNMatrix[1][2].x, g_TBNMatrix[1][2].y, g_TBNMatrix[1][2].z);

        // Draw the top left vertex
        glVertex3f(g_vQuad[1][0].x, g_vQuad[1][0].y, g_vQuad[1][0].z);
    glEnd();
} 

And there we go! We've successfully implemented bump mapping using the shader language CG. 

Conclusion 

One could easily extend the tutorial's code to support the specular component. All you need to do is read up on the specular term in reference [1], and you should be able to implement it almost directly in the vertex and fragment shaders. If you feel you don't have time for that, don't worry: we're currently working on an attenuation tutorial as well. 

Acknowledgments 



Demo 

I'm a Visual Studio 2003 user; if you use another IDE, all you need to do is create an empty Windows project and manually add all the .h and .cpp files to it. 

References: 

[1]  http://www.delphi3d.net/articles/viewarticle.php?article=phong.htm 

Copyright © Blacksmith Studios 2007 