Normal mapping almost ready (I think)

Coordinator
Nov 1, 2007 at 6:26 AM
Shaw,

I have normal mapping working on the terrain using simulated tangents and bitangents. It looks pretty good, and the best part is that I don't need the binormal and tangent values stored in the vertices, because I am simulating them in the shader. However, when I try to remove the tangent and binormal from the custom vertex you modified, I get issues with the terrain texture mapping and some other anomalies. Here is what I did to the vertex, which isn't working:

public struct VertexTerrain
{
    public Vector3 Position;
    public Vector3 Normal;
    //public Vector3 Binormal;
    //public Vector3 Tangent;
    public Vector2 TexCoords;
    public Vector3 TerrainColorWeight;

    //public static int SizeInBytes = (3 + 3 + 3 + 3 + 2 + 3) * sizeof(float);
    public static int SizeInBytes = (3 + 3 + 2 + 3) * sizeof(float);

    public static VertexElement[] VertexElements = new VertexElement[]
    {
        new VertexElement( 0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0 ),
        new VertexElement( 0, sizeof(float) * 3, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Normal, 0 ),
        //new VertexElement( 0, sizeof(float) * 6, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Binormal, 0 ),
        //new VertexElement( 0, sizeof(float) * 9, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Tangent, 0 ),
        new VertexElement( 0, sizeof(float) * 12, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0 ),
        new VertexElement( 0, sizeof(float) * 14, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Color, 0 ),
    };
}
Coordinator
Nov 1, 2007 at 6:38 AM
Oh, and normal mapping is in fact working, on all video cards :). I'd just like to remove the binormal and tangent from the vertices so that we can save about 30% of the terrain memory, which on a 1024x1024 terrain could be about 80-90 MB of RAM.

I did disable the binormal and tangent inputs in the vertex shaders as well, and it still works fine like that unless I try to change the custom vertex type...
Coordinator
Nov 1, 2007 at 6:39 AM
Never mind, I got it! We will have normal mapping once again (v0.178).
Nov 1, 2007 at 6:51 AM
Yeah, you have to make sure you adjust the stream offsets in the last two VertexElement entries. :)
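Concretely, with Binormal and Tangent removed, the last two elements pack right after the normal, so the trimmed declaration would look something like this (offsets counted in floats from the start of the vertex):

```csharp
// Adjusted element layout for the trimmed VertexTerrain: Position is
// floats 0-2, Normal 3-5, so TexCoords now starts at float 6 and the
// color weights at float 8 (instead of 12 and 14).
new VertexElement( 0, 0,                 VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0 ),
new VertexElement( 0, sizeof(float) * 3, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Normal, 0 ),
new VertexElement( 0, sizeof(float) * 6, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0 ),
new VertexElement( 0, sizeof(float) * 8, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Color, 0 ),
```

The SizeInBytes of (3 + 3 + 2 + 3) * sizeof(float) in the posted struct already matched the trimmed layout, so only the offsets needed to change.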

How are you simulating the binormal/tangent data in the shader? That's not something you can actually calculate in a vertex/pixel shader. (You can in a geometry shader, but that's D3D10).
Coordinator
Nov 1, 2007 at 7:32 AM
I assumed that the bi-normals and tangents were horizontal. For terrain and water it seems OK to cheat like this; it would never work on a 3D model, of course, because normals could face down. The results seem fairly good: I tested different elevation slopes and used the moving sunlight to watch the shadows move.
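As a sketch of what that kind of cheat can look like (hypothetical code, not necessarily what v0.178 does), an approximate tangent frame can be derived from the normal alone, so nothing extra has to be stored per vertex; the same three lines of vector math can run in the vertex shader:

```csharp
// Hypothetical sketch: approximate tangent frame from the normal only,
// assuming the surface faces generally upward (+Y in XNA). Breaks down
// if the normal ever becomes parallel to the fixed Z axis.
Vector3 normal = Vector3.Normalize(vertexNormal);

// Cross with a fixed axis that an up-facing normal is never parallel to.
Vector3 tangent  = Vector3.Normalize(Vector3.Cross(normal, Vector3.UnitZ));
Vector3 binormal = Vector3.Cross(normal, tangent);
```

For a flat patch with normal (0, 1, 0) this yields tangent (1, 0, 0) and binormal (0, 0, -1), an orthonormal frame; on slopes the frame tilts with the normal instead of staying strictly horizontal, which is usually a better approximation.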

I tested it for a while and found it acceptable. It is in the new version, v0.178. You know a little more about normal mapping and vertex types than I do, so I'd appreciate it if you could test the new version for me, break it, and make me disable something again until a later version :op. Seriously though, it's good to find people to break things, because it forces us to have a more reliable engine that will work on different hardware specs.
Nov 1, 2007 at 9:26 AM
Do you have any idea when a new version of the source will be uploaded? I'm working on the deformable terrain but don't want to get stung by major changes in the code. Should I be aware of anything currently?
Coordinator
Nov 1, 2007 at 1:57 PM
It should be available in a day or so.
Nov 1, 2007 at 2:13 PM
Cool, I'll look forward to it :)
Nov 1, 2007 at 2:24 PM
Honestly, I can't really tell how well your normal mapping is working without a good light source.

Also, you say you're using a horizontal binormal/tangent plane, yet your vectors are in the y/z plane? They should also be normalized, not half-unit vectors.

Setting horizontal binormal/tangent vectors would work for flat, horizontal geometry like the water plane, but it'll fail for anything else. At the very least, the normal/binormal/tangent vectors should form an orthonormal basis.

The release is missing the goal3.fx file. I was able to just copy it from my local QuickStart source tree, but new users won't be able to.
Coordinator
Nov 1, 2007 at 2:42 PM
Not sure what happened to goal3, I'll include it when I upload the source.

I used half-unit vectors because the normal mapping effect was too strong with unit vectors.

I don't understand why the vectors are in the y/z plane myself; I originally had them in the x/y plane and there was no normal mapping effect. Really, I'm not a graphics programmer; I'm fairly new to shaders. I would like to get someone onboard who loves graphics programming. They could sort out, fix, and optimize all of the shaders, as well as write the future ones planned. This was a 'rig' to get normal mapping back up and running. The normals are correct, but obviously the bi-normals and tangents are not. The theory is that terrain always has a normal with a positive Z component, so if the bi-normals and tangents act like the ground is facing up, this is the effect you get. I'm sacrificing accuracy to save 30% of the terrain memory usage, and it saves me (at this point) from having to calculate all of the tangents.
Nov 1, 2007 at 2:45 PM
I'll try to get around to writing a binormal generator one of these days for the engine.
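For reference, the usual per-triangle approach such a generator could take (a hypothetical helper following the standard UV-gradient method, not existing engine code): solve for the direction in which the U texture coordinate increases across the triangle, then orthogonalize against the vertex normal and complete the basis.

```csharp
// Hypothetical tangent-frame generator for one triangle, using the
// standard UV-gradient method. Assumes non-degenerate texture coords.
static void ComputeTangentFrame(
    Vector3 p0, Vector3 p1, Vector3 p2,
    Vector2 uv0, Vector2 uv1, Vector2 uv2,
    Vector3 normal,
    out Vector3 tangent, out Vector3 binormal)
{
    Vector3 e1 = p1 - p0;
    Vector3 e2 = p2 - p0;
    float du1 = uv1.X - uv0.X, dv1 = uv1.Y - uv0.Y;
    float du2 = uv2.X - uv0.X, dv2 = uv2.Y - uv0.Y;

    // Solve the 2x2 system for the world-space direction of increasing U.
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    Vector3 t = (e1 * dv2 - e2 * dv1) * r;

    // Gram-Schmidt against the vertex normal, then complete the basis.
    tangent  = Vector3.Normalize(t - normal * Vector3.Dot(normal, t));
    binormal = Vector3.Cross(normal, tangent);
}
```

Per-vertex tangents then come from averaging the frames of the triangles sharing each vertex, which is the part that gets expensive if deformable terrain has to redo it on the fly.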
Coordinator
Nov 1, 2007 at 3:41 PM
We have to ask ourselves a couple of questions:

1.) Will having correct bi-normals and tangents give a noticeable difference?
2.) Is the difference going to be noticeable enough to warrant a 30% increase in memory usage for terrain?
3.) Will generating bi-normals and tangents have any performance impact, like recalculating them on the fly for deformable terrain?
4.) Is it worth the time that will be spent on that when there are many other features on the list?
5.) Anything else you guys can think of?

I'm not saying I'm thrilled with the current setup; I believe all of the shaders need to be gone through by someone more fluent with them. However, we should justify any changes we make, not only for this feature/issue, but for every one. We should always consider the price/performance ratio, as well as the ratio of a feature's benefits to the time spent creating it. This is part of the reason we need everyone voting on the issues they believe are important.
Nov 1, 2007 at 5:33 PM
I wish I had the time to go through the shaders, but I think that's a good candidate for a task to do while porting. When the terrain system is incorporated into the new framework, all of these issues should be addressed.

The terrain's textures (diffuse, normal, specular, etc.) will easily take up more space than the geometry data. It comes down to this: if you want normal mapping, you need to spend the extra 30%; otherwise, don't bother. Incorrect surface data will just introduce artifacts that make programmers/artists think something is wrong with their work, when it's really just errors in the surface data.
Coordinator
Nov 1, 2007 at 8:07 PM
Is it worth fixing the normal mapping inaccuracies if you can't tell they're inaccurate? Sometimes "inaccurate" flies in the face of everything I work for: a clean, well-written game/engine. But there are all kinds of "techniques" in the industry used to "fake" things for optimization purposes, as long as it is rarely noticeable to the end user.

It seems to me the geometry is taking up a huge chunk of memory. I've had the game use almost 300 MB of RAM for a 1024x1024 terrain, but I ran it last night at only 210 MB used. I removed 24 bytes of information from each vertex and saved 90 MB, leaving a total of 33 bytes per vertex remaining, or around 120 MB more. That would leave only 90 MB for everything else in the program, like assets. Also, when running the CLR Profiler I noticed that most of the game's memory allocations are in terrain vertices.
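A quick sanity check on those numbers, using the per-vertex sizes implied by the VertexTerrain struct earlier in the thread (and assuming one vertex per heightmap sample):

```csharp
// 17 floats per vertex with Binormal/Tangent, 11 floats without,
// per the VertexTerrain struct posted above.
const int width = 1024, height = 1024;
long vertexCount     = (long)width * height;              // ~1.05 million
long withTangents    = vertexCount * 17 * sizeof(float);  // 68 MB
long withoutTangents = vertexCount * 11 * sizeof(float);  // 44 MB
long savedPerCopy    = withTangents - withoutTangents;    // 24 MB per copy
```

The ~90 MB actually saved is several times the single-copy figure, which hints at per-triangle vertex duplication or extra copies of the data in managed memory alongside the vertex buffers.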

All that being said, if the current technique causes confusion for artists or programmers, then I say we scrap it. I just don't want to be hasty and scrap a technique that might work and save us a lot of memory.

Speaking of memory, how much does the 360 have?
Nov 1, 2007 at 8:30 PM
I think the 360 has 512MB shared between the CPU and GPU.

It would be best if normal mapping was fixed properly, then we could compare it to the pseudo normal mapping and see if it is worth the extra 30%.
What was preventing normal mapping from working in the first place?
Nov 1, 2007 at 8:48 PM
Inaccuracies would be more noticeable with some light sources placed in the scene. Exaggerating the specular component, if it's even there at the moment, would help to spot errors.

Memory savings will come from finding ways to optimize/pack the data, and reduce the geometry working set. Also, compressing the textures should be a significant saving. By default, I don't think the textures are compressed.

Aphid's correct, the Xbox has 512 megs of shared RAM, minus whatever the kernel uses.
Nov 1, 2007 at 8:50 PM
Also, remember that vast terrain rendering isn't a trivial task. Spatial algorithms will help with render times, but you also need to consider streaming pieces of terrain in and out, including geometry and texture data.
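As a sketch of the spatial-algorithm side (hypothetical types; `TerrainChunk` is assumed here, not part of the engine), a quadtree over terrain chunks lets both rendering and streaming skip whole subtrees at once:

```csharp
// Hypothetical quadtree node over terrain chunks. Visibility queries
// reject an entire subtree with a single frustum test on its bounds.
class TerrainQuadtreeNode
{
    public BoundingBox Bounds;              // world-space extent of this node
    public TerrainQuadtreeNode[] Children;  // null for leaf nodes
    public TerrainChunk Chunk;              // geometry, present on leaves only

    public void CollectVisible(BoundingFrustum frustum, List<TerrainChunk> visible)
    {
        if (!frustum.Intersects(Bounds))
            return;                         // cull this entire subtree

        if (Children == null)
        {
            visible.Add(Chunk);
            return;
        }

        foreach (TerrainQuadtreeNode child in Children)
            child.CollectVisible(frustum, visible);
    }
}
```

The same tree walk, with a distance test instead of a frustum test, can decide which chunks to stream in or evict.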
Coordinator
Nov 1, 2007 at 11:37 PM
The original reason normal mapping was not working was that I wasn't defining tangents or bi-normals, yet for some reason it was still working on my video card.

I guess the consensus is that we should at least get a working version for comparison, and that either way we should start on data compression.

For textures we can use DXT formats; there is a Photoshop plugin for them. PNGs typically do OK if you need lossless formats for things like heightmaps and texture maps, and JPEGs are OK for textures that can stand less than 100% quality, though even those can be compressed further. XNA packs the files into a specific format in the bin and obj folders, but does it compress them?

Vast terrain is definitely not trivial, or I'd have done it already. I wouldn't have the slightest clue how to set up streaming data and determine when to stream in new zones and stream out old ones.

Also, do you have any links or documentation about spatial algorithms?
Nov 2, 2007 at 12:29 AM
Texture compression doesn't matter at the input level; it would be best to keep the art files lossless. What matters is when you upload the data to the graphics card: you can upload textures in a DXT format, and the card will store them compressed. That's what saves space. In XNA, the compression would happen in the content pipeline stage. If a .dds texture is compressed, I'm not sure whether it remains compressed in the XNB file or not. If not, we could write a custom importer to keep the compression and then upload it to the card in the DXT format.
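For a sense of scale (straightforward arithmetic, ignoring mipmaps, which add roughly a third on top of each figure), DXT1 stores 4 bits per texel and DXT3/DXT5 store 8, versus 32 for uncompressed RGBA:

```csharp
// Memory for a single 1024x1024 texture in different formats.
const int w = 1024, h = 1024;
long rgba32 = (long)w * h * 4;  // 32-bit RGBA: 4 MB
long dxt1   = (long)w * h / 2;  // DXT1 (opaque or 1-bit alpha): 0.5 MB
long dxt5   = (long)w * h;      // DXT3/DXT5 (full alpha): 1 MB
```

With a diffuse, normal, and specular map per terrain, that's the difference between ~12 MB and ~1.5-3 MB of texture memory, so the saving compounds quickly.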