Greetings all, this is my first post here.
As the fidelity of graphics increases, and as there are more and more maps, models, and polygons an artist must produce to achieve their intended look, one can often feel lost in a sea of terms we don't understand. Everyone knows what a normal map is, but how does it work? What is "gloss"? What is a specular texture?
Most of us know what a normal map looks like, we know what gloss does, and we know that specular means highlights. But it's been my experience that actually understanding these techniques, getting under the hood of the shaders (the things that drive the actual rendering, and all that really concerns us as modellers and texture artists), has increased my ability many times over.
If this is popular, I'd like to continue this in a "series," as often as I can complete a new post. But, let me start with the most fundamental component of technical artistry, "Shaders."
---------------------------------
Shaders
Everyone has heard of shaders, or of shader-based engines and software, but most people don't know what they are, or what the craze is about. The most succinct explanation I can think of for a shader is:
A shader takes something, does stuff to it, and gives you something else.
Depending on who you are, that is either the most mundane explanation, or the most intriguing explanation. Obviously, programmers find it intriguing. And I hope you will too.
Shaders come in two sorts: vertex shaders and pixel shaders. There is a big difference between them... it's the difference in graphical quality between RTW and M2TW. For this article I will only get into vertex shaders, and save pixel shaders for next time (pixel shaders are the cool ones, obviously, but you can't have pixel shaders without vertex shaders... one step at a time).
I'm going to skip the traditional way of learning vertex and pixel shaders, that is, doing all your rendering in a vertex shader and then seeing how much better it looks in a pixel shader. As artists, that is mostly pointless. Instead, let's look at a fairly typical vertex shader for normal mapping:
Code:
// input from application
struct a2v {
    float4 position : POSITION;
    float2 texCoord : TEXCOORD0;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float3 normal : NORMAL;
};
These are the things the application passes into the vertex shader. The application says: "this vector (a float3) is your tangent, this one is your normal, and this one is your binormal; this float2 holds the vertex's UV coordinates; and this vector (a float4) is the vertex's position." The only one you really need to understand is the position. It ends up in world space (well, not yet, but it gets converted), meaning that if the vertex was at 2, 3, 1 in 3ds, the vertex is at 2, 3, 1 in the application (assuming they use the same scale). Since everything we are doing is in world space, don't worry about object and tangent space for now (these terms, world, object, and tangent, will come up again when we discuss normal maps).
Code:
// output to fragment program
struct v2f {
    float4 position : POSITION;
    float3 lightVec : TEXCOORD4;
    float3 eyeVec : TEXCOORD3;
    float2 texCoord : TEXCOORD0;
    float3 worldTangent : TEXCOORD6;
    float3 worldBinormal : TEXCOORD7;
    float3 worldNormal : TEXCOORD5;
};
This is what the vertex shader outputs to the pixel shader (AKA the fragment program). The actual code of the vertex shader will show us how we calculate these outputs.
Code:
v2f v(a2v In)
{
    v2f Out = (v2f)0;
    Out.position = mul(In.position, wvp);
    Out.texCoord = In.texCoord;
These are the standard first steps. The first line transforms the vertex position by the world-view-projection matrix (wvp) into something the pixel shader can use, and the second passes your UV coordinates through without modifying them.
Code:
    float3 worldSpacePos = mul(In.position, world); // world space position of vertex
    Out.lightVec = lightPosition - worldSpacePos;   // light vector, in world space
    Out.eyeVec = viewInv[3] - worldSpacePos;        // eye vector, in world space
Matrix multiplication is a doozy... don't even try to think about it mathematically here. The first line finds the world-space position of the vertex (I mentioned this above, remember?). We then subtract that position from the light's position (also in world space), giving us the vector pointing from the vertex towards the light. We do the same for the camera/eye, where viewInv[3] is the camera's world-space position taken from the inverse of the view matrix.
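To make that subtraction concrete, here is a tiny worked example (the positions are made up for illustration):
Code:
    // suppose the vertex sits at (2, 3, 1) in world space,
    // and the light sits at (2, 8, 1):
    //     lightVec = (2, 8, 1) - (2, 3, 1) = (0, 5, 0)
    // the resulting vector points straight up: from the vertex towards the light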
Code:
    Out.worldNormal = mul(In.normal, worldIT).xyz;
    Out.worldBinormal = mul(In.binormal, worldIT).xyz;
    Out.worldTangent = mul(In.tangent, worldIT).xyz;
This just converts your normal, binormal, and tangent inputs into world space, using the inverse-transpose of the world matrix (worldIT), which keeps normals pointing the right way even if the model is scaled unevenly. Having everything in the same space lets us combine these vectors in calculations later on.
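For completeness, the function then simply hands the filled-in struct on to the pixel shader (the matrices used above, like wvp, world, worldIT, and viewInv, are all supplied by the application):
Code:
    return Out;
}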
Well, that's it. ***yaaaawn*** That's a vertex shader. Boring, eh? Yes, it is. And honestly, there's only so much you can do in one: your graphics can only be as good as your textures, because all you can do is perform calculations at each vertex, and fairly limited ones at that. Note that this shader does no rendering of its own; the output would just be black, because we aren't shading anything yet. It is merely meant to show that shaders are extremely flexible: they can do nothing, or they can give you decent graphics and rendering. They can also be used for skeletal animation, but that's neither here nor there (though it does demonstrate how versatile they are!).
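As a taste of that versatility, here is a rough sketch of what simple skeletal animation (linear-blend skinning) might look like in a vertex shader. This is not part of the shader above, and the names (boneMatrices, boneIndex, boneWeight, viewProjection) are hypothetical:
Code:
    // each vertex carries up to four bone indices and blend weights;
    // the position is transformed by each bone's matrix, and the
    // results are blended together by weight
    float4 skinnedPos = 0;
    for (int i = 0; i < 4; i++)
    {
        skinnedPos += mul(In.position, boneMatrices[In.boneIndex[i]]) * In.boneWeight[i];
    }
    Out.position = mul(skinnedPos, viewProjection);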
Before I get into pixel shaders, though, there are a couple fundamentals of lighting to get out of the way (what is diffuse lighting? specular lighting and gloss?). Then we can tackle pixel shaders, and normal maps.
Anyway, tell me how useful you would find something like this, and if there is anything you are unclear about, just ask (but please check Wikipedia or Google first if you don't understand a word or two!).
Buh-bye.