Terrain setup

Coordinator
Dec 12, 2007 at 4:03 AM
Edited Dec 12, 2007 at 4:04 AM
I noticed the terrain system isn't listed in any upcoming versions, but I believe it will be needed sometime soon: simple physics tests, ray-tracing tests with the camera, not to mention that I need to begin converting it to the new coordinate system. I'd like to get the quad-tree system set up in the scene manager soon so we can begin using it or testing against it.

I'm willing to get it started right away; I just didn't know if there are any design questions on it. Will quad-tree sections be entities? What about terrain patches?

I will be cleaning up the code as I bring it over.

My plans, things I already have in the template:
  • Convert all code to right-handed standard coordinate system as I go.
  • The root quad-tree node for a terrain is stored in a scene.
  • The scene holds all the information about terrain normals and height for physics.
  • The scene holds the complete vertex buffer for the terrain.
  • Quad-tree is a class, each quad-tree object has 4 child quad-tree objects (or none if it is a leaf).
  • Each Quad-tree section has a bounding-box.
  • Bounding boxes are used for culling of quad-tree sections, but may also end up used for anything else we want the quad-tree for.
  • Each Quad-tree section has one or more terrain patches.
  • Each terrain patch is a different LOD.
  • Terrain patches are simply index buffers, they will reference the scene's vertex buffer.

  • Vegetation:
    • This part still needs to be designed further.
    • The scene will create vegetation throughout the map, using the vertices. It will create a list of vegetation quads (billboards).
    • Each quad-tree terrain patch will go through and pull billboards from the list and add them to the patch. This will give us LOD of vegetation.
    • After vegetation is added to all patches that need it, the full list is deleted to save on memory.
    • I have a working vegetation shader, but I think Shaw should look over it and optimize it, or completely rewrite it as he sees fit.

  • Future plans:
    • I'd like to implement a way to deform the terrain eventually (this would be for the terrain editor/level editor). I've got an idea on how to do this.
    • Random generation of terrain.
    • Terrain that casts shadows (hopefully in realtime for dynamic shadowing, if not at least a shadow map), not just shading by vertex normals.
    • Possibility of more than 3 textures in a scene. Newer cards can support many, but 6 would be sufficient. 6 would require three more texture lookups and a different method of creating maps for the textures (different from the current RGB image).
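To make the layout above concrete, here is a minimal sketch of how the quad-tree class might look. All names are hypothetical, the bounding box is reduced to 2D extents for brevity, and the patches are simplified; the real version would use XNA's BoundingBox and store per-LOD index buffers referencing the scene's shared vertex buffer.

```csharp
using System.Collections.Generic;

// Hypothetical stand-in for a terrain patch: one per LOD, each just
// index data referencing the scene's shared vertex buffer.
public class TerrainPatch
{
    public int LodLevel;
    public int[] Indices;
}

public class QuadTreeNode
{
    public QuadTreeNode[] Children;          // 4 children, or null for a leaf
    public float MinX, MinZ, MaxX, MaxZ;     // 2D stand-in for a bounding box
    public List<TerrainPatch> Patches = new List<TerrainPatch>();

    public bool IsLeaf { get { return Children == null; } }

    // Recursively subdivide until a node spans at most leafSize units.
    public void Split(float leafSize)
    {
        if (MaxX - MinX <= leafSize)
            return;                          // small enough: this is a leaf

        float midX = (MinX + MaxX) * 0.5f;
        float midZ = (MinZ + MaxZ) * 0.5f;
        Children = new QuadTreeNode[]
        {
            new QuadTreeNode { MinX = MinX, MinZ = MinZ, MaxX = midX, MaxZ = midZ },
            new QuadTreeNode { MinX = midX, MinZ = MinZ, MaxX = MaxX, MaxZ = midZ },
            new QuadTreeNode { MinX = MinX, MinZ = midZ, MaxX = midX, MaxZ = MaxZ },
            new QuadTreeNode { MinX = midX, MinZ = midZ, MaxX = MaxX, MaxZ = MaxZ },
        };
        foreach (QuadTreeNode child in Children)
            child.Split(leafSize);
    }

    public int CountLeaves()
    {
        if (IsLeaf) return 1;
        int total = 0;
        foreach (QuadTreeNode child in Children)
            total += child.CountLeaves();
        return total;
    }
}
```

For example, an 8-unit-wide root split down to 2-unit leaves subdivides twice and ends up with 16 leaf nodes, each of which would hold its own patches.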

Again, I'd really like to get going on this as soon as I can; terrain is probably one of the most interesting aspects of an engine that does exteriors. I know there are many things we still need, but as I look at them, most people are already working on them. Shaw has graphics and physics, Sturm has messaging, the input system, and the object pool. Mike has networking, Aphid has audio. Those cover most of the requirements listed for v0.19.

I'm also going to be working on A.I., and I'm really going to need terrain to get anywhere with that. Although I admit A.I. might be another couple of versions away, as we will need an animation system as well. In fact, I'm not sure we have an issue for the animation system; I believe it will be a fairly large one. We may want things like blending between animation types. I've never implemented animation, so I wouldn't know much about it.

Anyway, enough rambling. Someone give me a thumbs-up to start terrain, or point me at something better to do; my camera system is good enough for now (at least I think so).
Dec 12, 2007 at 5:10 AM
I agree that it's important to start getting a terrain system implemented. The sooner we get this engine actually doing something the better off we'll be and the more motivation we'll all have. I would just concentrate on the quad tree system and simple height-mapped terrain patches, then go from there.

Don't worry about the terrain shadowing. ;)
Coordinator
Dec 12, 2007 at 5:55 AM
Since I have to rewrite a good portion of it, it will definitely start simple. Prototype early, and often.
Dec 12, 2007 at 12:53 PM
I love terrain generation. Has anyone seen the L3DT (Large 3D Terrain) editor? Basically it randomly generates some pretty impressive terrain based on simple input, and it is extremely powerful. I would love to see even a simple implementation of this in the engine when it comes to generating random terrain: erosion algorithms, water at different levels, cliffs/terraces, vegetation. Basically, you pass it certain values and it generates the terrain. It outputs a heightmap, texture map, normal map, water map, maybe even a vegetation map. Let me know what you guys think.
Coordinator
Dec 12, 2007 at 1:11 PM
Edited Dec 12, 2007 at 1:23 PM
Have you had a chance to use the template version of the engine?

We've already got some of these things in the original template engine in the releases section (v0.182): we have heightmap, texture map, normal mapping, water, and I was starting vegetation but put it on hold for the new engine.

There is another thread describing terrain ideas we've had. We're planning on eventually having the option for terrains from a model rather than a heightmap, so we can have cliffs, overhangs, and caves. Water at different levels is a neat idea, but I've heard that the benefits don't always outweigh the trouble. However, we've discussed water volumes/boxes rather than water planes.

There is a level editor out there right now for terrain that works with XNA; I believe I posted a link in the other terrain thread, but I'll certainly look into L3DT.

I'll start porting the terrain in from the template tonight; hopefully I'll have a simple quad-tree heightmap up in a patch soon.
Dec 12, 2007 at 2:02 PM
A couple of things (because I've done little work with exteriors):

  • What's a quadtree?
  • What do you mean by terrain patch?

Going from memory, I think riemers.net had a nice sample on terrain lighting. Not sure how efficient it is though, as graphics isn't my expertise. It looks good, and I'm looking forward to seeing the first bit.

On a side note, will we support streaming maps (and therefore terrain) from server to client? If so, I'll factor it into the networking.
Dec 12, 2007 at 2:17 PM
A quadtree is a method of dividing the terrain into a tree for culling and other purposes. It allows you to have larger terrains without taking such a performance hit, since only the leaves in the camera's view are rendered. Terrain patches are, and correct me if I'm wrong, the actual terrain vertices in a leaf node of the tree; each leaf creates a small "patch" of terrain. Though I'm not sure if that's correct or not.
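For concreteness, that culling walk can be sketched like this. Names are hypothetical and the frustum is reduced to a 2D view rectangle for brevity; real code would test XNA's BoundingFrustum against each node's BoundingBox instead.

```csharp
using System.Collections.Generic;

public class Node
{
    public Node[] Children;                  // null for a leaf
    public float MinX, MinZ, MaxX, MaxZ;     // 2D stand-in for a bounding box
}

public static class Culler
{
    // Collect the leaves whose boxes overlap the view region. Whole
    // subtrees are rejected with a single test when their box misses
    // the view, which is the performance win of the quadtree.
    public static void CollectVisible(Node node, float vMinX, float vMinZ,
                                      float vMaxX, float vMaxZ, List<Node> visible)
    {
        bool overlaps = node.MinX <= vMaxX && node.MaxX >= vMinX
                     && node.MinZ <= vMaxZ && node.MaxZ >= vMinZ;
        if (!overlaps)
            return;                          // cull this node and everything under it

        if (node.Children == null)
        {
            visible.Add(node);               // leaf in view: render its patch
            return;
        }
        foreach (Node child in node.Children)
            CollectVisible(child, vMinX, vMinZ, vMaxX, vMaxZ, visible);
    }
}
```

With four quadrant leaves under one root, a view rectangle covering only the first quadrant collects exactly one leaf; the other three subtrees are rejected at their bounding boxes.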

Dec 12, 2007 at 2:49 PM

LordIkon wrote:
Have you had a chance to use the template version of the engine?

We've already got some of these things in the original template engine in the releases section (v0.182): we have heightmap, texture map, normal mapping, water, and I was starting vegetation but put it on hold for the new engine.

There is another thread describing terrain ideas we've had. We're planning on eventually having the option for terrains from a model rather than a heightmap, so we can have cliffs, overhangs, and caves. Water at different levels is a neat idea, but I've heard that the benefits don't always outweigh the trouble. However, we've discussed water volumes/boxes rather than water planes.

There is a level editor out there right now for terrain that works with XNA; I believe I posted a link in the other terrain thread, but I'll certainly look into L3DT.

I'll start porting the terrain in from the template tonight; hopefully I'll have a simple quad-tree heightmap up in a patch soon.


Yes, I had seen the old prototype. But what I was talking about was generating the heightmap, not loading it. Generating the texture map, the water map, etc. For example:

Set input (includes any variables for terrain generation: how much erosion, how much vegetation, how high, how much water, coastal, hills, desert, etc., and also a seed number)
Generate Heightmap (using the seed, we could theoretically regenerate the exact same heightmap on a different computer using the same algorithm and settings. If all of our randomness uses the exact same seed from one computer to the next, then it will give the exact same output. This allows us to send an algorithm and a config file instead of a heightmap or mesh; much smaller files)
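That seeded-generation step could be sketched like this. Pure white noise is used here just for brevity (a real generator would layer noise octaves, erosion passes, and so on), and a production version would want a PRNG with a fixed, documented algorithm, since System.Random's sequence is only guaranteed to repeat on the same framework version.

```csharp
using System;

public static class HeightmapGen
{
    // All randomness flows from the single seed, so the same seed plus
    // the same settings reproduce the exact same map, value for value.
    public static float[,] Generate(int size, int seed, float maxHeight)
    {
        Random rng = new Random(seed);
        float[,] map = new float[size, size];
        for (int z = 0; z < size; z++)
            for (int x = 0; x < size; x++)
                map[x, z] = (float)rng.NextDouble() * maxHeight;
        return map;
    }
}
```

Two calls with the same seed produce identical maps, which is exactly what lets a server ship a seed plus a config file instead of the heightmap itself.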

Using the heightmap, you can now generate normals if you want, but you could also generate a water map. I believe you can still use planes to do water; you would just have to create the water only where a plane existed, and the plane would be a square that just encompassed where you want the water (lake, river, ocean) to be. You can have different kinds because, for instance, lakes are not affected by tides, whereas oceans and sometimes rivers are.

Then, after everything else, you could generate an actual texture by splatting, but splatting onto a single large texture. Throw a detail texture over the top and you have a nice-looking large texture to apply over the terrain. You could even theoretically take the heightmapped terrain, turn it into a mesh, postprocess cliffs and other anomalies (again, procedurally), save the mesh, and remove the heightmap. I'm just spouting ideas at the moment; I don't know how complex something like this would be, though I imagine it would be pretty complex.
Coordinator
Dec 12, 2007 at 3:01 PM
Splatting a single texture over a large terrain will cause a loss of detail in the terrain, but it does help with getting rid of texture tiling. Texture tiling is ok if you use a good texture for it.

I have plans for creating random terrain, but I believe it should wait until we have a level editor. I would start with random heightmap, and then add in texture type dependant on elevation, and randomized vegetation (which is already in design).
Dec 12, 2007 at 3:21 PM
Splatting a single texture will not cause a loss of detail, since the single texture will be made up of the same texture tiles you're talking about. But instead of putting them directly on the terrain, you combine them into a single texture and stick a detail texture on it as well so that you can't see the tiling. There you go: two large textures loaded instead of a bunch of small ones.

And about the random generation of the heightmap: yes, it needs to be random, but with user-configurable settings, for instance:

Height, roughness, size, erosion, etc.

That way, by plugging in the same settings and the same seed, I can recreate the same map again and again. Not a similar map, but the exact same map.
Coordinator
Dec 12, 2007 at 3:32 PM
I see what you're saying with the splatting, and in that case, that is what is already happening. A splat texture is used to tell texture types, which are themselves tiled.

The random generation will be based on settings and a seed like you describe.
Dec 13, 2007 at 12:07 AM
Did you get a chance to take a look at the L3DT program? It will explain what I'm talking about a lot better than I can. As I understand what you're saying, there is a texture used to define where textures go on a map, but the actual textures are tiled onto the terrain at runtime. I am talking about taking the splat texture you describe and having it generate one texture that covers the whole terrain: a high-detail texture with a detail texture alpha-blended onto it so it doesn't look tiled. When the program actually runs there is absolutely no tiling of any kind going on; the texture being used is one large texture stuck over the top of the terrain. I hope I'm not being confusing or misunderstanding what you're saying.
Dec 13, 2007 at 12:47 AM
You mean something like Carmack's MegaTexturing?
Coordinator
Dec 13, 2007 at 12:50 AM
I understand what you're saying, and the problem is that with a 2048x2048 terrain and a 2048x2048 image you get a pixel-to-vertex match, which can become very blurry, and you must then use a detail texture to try to overcome the blurriness. Benjamin Nitschke's XNA book actually details this exact problem.

The way I'm doing it still has a 1:1 ratio, but only for the splatting; the texture itself is tiled.

Let me explain this better. Using a tiled texture I can get many pixels with different colors from my texture into a single triangle, possibly hundreds, because I'm tiling; however, if I stretch the texture I'll have a single color per quad on my terrain.

I believe if you change the tiling value in the template's terrain shader you can see this effect. I don't have the code with me, but somewhere in the vertex shader where the texture coordinates are passed to the pixel shader you'll see something like * 0.01f. If you make that something like * 5 or * 10 you'll get the idea.
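The arithmetic behind this exchange can be captured in a hypothetical helper (not engine code): the texture detail available per terrain quad is just textureSize * tiles / terrainQuads.

```csharp
public static class TexelDensity
{
    // terrainQuads: quads along one terrain edge.
    // textureSize:  texels along one texture edge.
    // tiles:        how many times the texture repeats across the terrain
    //               (1 = stretched once over the whole thing).
    public static double TexelsPerQuad(int terrainQuads, int textureSize, double tiles)
    {
        return textureSize * tiles / terrainQuads;
    }
}
```

Stretching a 2048-texel texture once over 2048 quads gives exactly 1 texel of color per quad (hence the blur being described), while tiling the same texture 100 times across the terrain gives 100 texels per quad.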
Coordinator
Dec 13, 2007 at 1:00 AM
Let me add that I'm working on a final project in school using the Ogre engine, with disastrous results. The default scene manager lets you tile terrain, but not splat, so we're forced to use a single texture stretched over the map, and it looks absolutely terrible. If I remember I'll take a screenshot for you. I'm not saying your idea couldn't work; I'm just saying I've tried it a couple of different ways and never gotten good results from it.
Dec 13, 2007 at 1:11 AM
You should watch the id Tech 5 presentation given at one of the Mac conferences this summer, where Carmack discusses the MegaTexturing algorithm he's using now: http://www.youtube.com/watch?v=HvuTtrkVtns. Basically it's a single HUGE texture placed over the entire terrain, but only the visible parts are resident as real textures on the card; the rest is streamed in/out as needed. It's an interesting idea. I'm not sure how well it works in practice, though the video seems to suggest it works quite well. Quake Wars has a preliminary version of it for its levels.
Coordinator
Dec 13, 2007 at 1:20 AM
This is how the paging scene manager in Ogre (it's a plug-in) works. The only problem is you have to process the terrain and texture map together before runtime, and what it spits out is a bunch of large textures, one for each quad-tree section.

This absolutely would work, but it would require doing all terrain processing through a custom content processor, and would require a complete overhaul of the current terrain system. I wonder, if you have multiple sections in view, whether it adds to the texture lookups or the number of passes with the shader? It just seems like a lot of work that can be avoided by having textures that tile decently and then fine-tuning the tiling rate.
Dec 13, 2007 at 1:29 AM
Agreed, this technique isn't suited to every case. The real advantage is when many parts of the terrain have their own texture details. Like Carmack said in that presentation, an artist can literally "draw" their name in the side of a cliff and it won't affect the geometry/texture in any other part of the map.

For our purposes, I'm sure the current technique is just fine.
Coordinator
Dec 13, 2007 at 2:11 AM
For drawing names in the map, or stuff like that, we could include a tint value on the vertex, an RGB color basically, that could be applied in the level editor. So you may have a big field of grass, but a patch of that grass might be darker in one area. Rather than have a dark grass texture as well as a regular grass texture, you simply have the one texture and tint part of it.
Dec 13, 2007 at 2:47 AM
That's still a per-vertex effect, not a per-pixel effect.
Coordinator
Dec 13, 2007 at 3:38 AM
Unless you draw onto a texture and use the texture sample in a per-pixel manner. I'm currently doing just that to keep from losing texture detail when using lower-LOD patches. Of course that adds another texture lookup, whereas doing it per vertex adds more memory to the vertex buffer. I think it's just something to consider for later on; it's something we could certainly add at any time if we decided to, so it's not too important just yet.
Coordinator
Dec 14, 2007 at 3:18 PM
Ok, I've run into some fairly important issues while trying to setup the terrain system.

Here are the issues I haven't resolved yet:
  • Terrain.Draw(...) needs to be called. The problem is that Entities don't even have a Draw() yet, because they're put into a render queue. However, the way terrain is drawn (at this point) doesn't allow for it to be put into a render queue.

  • We don't have a unified Draw()/Update() setup for entities, so unless I set up some kind of temporary way to unify them, I cannot just add a Camera and a Terrain to the scene's entities list and update them as I go. I would literally need a Camera object and a Terrain object, and I would need to call them explicitly each frame, which is ugly. This is the big issue; I talk about it more at the bottom.

  • We don't have a lighting system setup yet, so I will have to hard code the light color and direction into the terrain shader.

  • We don't have an input system so it is hard to move the camera around to test collisions with quad-tree bounding boxes.


So, I think I can work around most of the issues, but we need to find a common way to update and draw every entity in a scene from a single list. Also, we may need to enforce the order of updating; for example, the camera may need to update first so that everything that needs its bounding frustum gets an updated one. Until these issues are solved, the code I'm writing is completely 'rigged'. I'd like to get this solved right away, as I would rather not commit code as ugly as what I'm imagining.
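One possible shape for that unification, sketched with made-up names (this is the pattern being asked for, not a proposal for the actual API): a common base type plus an explicit update order, so the camera always runs before anything that culls against its frustum.

```csharp
using System.Collections.Generic;
using System.Linq;

public abstract class Entity
{
    public int UpdateOrder;                  // lower runs first (e.g. camera = 0)
    public abstract void Update(float dt);
}

public class Scene
{
    public List<Entity> Entities = new List<Entity>();

    public void Update(float dt)
    {
        // Enforce update order so, e.g., the camera's frustum is fresh
        // before the terrain culls against it.
        foreach (Entity e in Entities.OrderBy(ent => ent.UpdateOrder))
            e.Update(dt);
    }
}

// Tiny concrete entity used only to demonstrate the ordering.
public class LoggingEntity : Entity
{
    private readonly List<string> log;
    private readonly string name;

    public LoggingEntity(string name, int order, List<string> log)
    {
        this.name = name;
        this.log = log;
        UpdateOrder = order;
    }

    public override void Update(float dt) { log.Add(name); }
}
```

With this shape, a camera at order 0 and terrain at order 1 update in that sequence regardless of the order they were added to the scene.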
Dec 14, 2007 at 4:02 PM

Terrain.Draw(...) needs to be called. The problem is that Entities don't even have a Draw() yet, because they're put into a render queue. However, the way terrain is drawn (at this point) doesn't allow for it to be put into a render queue.


Can you elaborate on why it doesn't fit into the render queue? I would like to know about any potential issues here so I can compensate. You should just be able to define a material, and create an IGeometryProvider for your terrain. If need be, I can describe what the IGeometryProvider container should provide, or just looking at the StaticModel implementation should give you a good idea.

We don't have a unified Draw()/Update() setup for entities, so unless I set up some kind of temporary way to unify them, I cannot just add a Camera and a Terrain to the scene's entities list and update them as I go. I would literally need a Camera object and a Terrain object, and I would need to call them explicitly each frame, which is ugly. This is the big issue; I talk about it more at the bottom.


Before going forward, it's a good idea to "unify" this. We need to maintain consistency.


We don't have a lighting system setup yet, so I will have to hard code the light color and direction into the terrain shader.


A simplistic lighting system (like the one seen in the graphics patch) will come with the next commit, after I finish the material system updates and update everything to Xbox standard.


It seems like the show-stopping issue here is the entity design. Every entity should have an Update() method. For Draw(), you could argue either way. It really comes down to when/where you want to update the render queue. If you can get away with it in Update(), fine. If you need a secondary scene traversal with Draw(), fine. That's entirely up to whoever is designing the entity system. Now, if it's better from your perspective to go back to the "old" system of every entity drawing itself, that can be arranged, but keep in mind that each frame will need a bare minimum of 3 passes, which would translate into 3 scene traversals. Also, it would then be up to the entities themselves to handle depth passes, shadow passes, and final composition passes.
Dec 14, 2007 at 4:12 PM
I've said it before and can only say it again: IDrawable and IUpdateable (the Xna way)
Coordinator
Dec 14, 2007 at 4:47 PM
I'll have to look at your GeometryProvider. I may need to load the terrain into something like that. Does it support letting me create a single vertex buffer and multiple index buffers, and letting me call them as I need? I'm assuming each terrain patch would need to be its own piece of "geometry", and I would call those for draw.

I'd like to know more about the RenderQueue. Do I simply add and remove stuff from it each frame? For example, each frame I would have to do one of two things:
1.) Somehow remove all terrain patches from the queue, and then add to it any quad-tree terrain patches that were determined to be colliding with the camera's frustum.
or
2.) Know which terrain sections are in view, and somehow know which ones are already in the queue, and add any that are now in view but not in the queue, and remove any that are in the queue but not in view.

Does IDrawable let us choose what we want to pass to it? I'll have to look at it.
Dec 14, 2007 at 4:53 PM

LordIkon wrote:
2.) Know which terrain sections are in view, and somehow know which ones are already in the queue, and add any that are now in view but not in the queue, and remove any that are in the queue but not in view.

This sounds like the right solution; there is no reason to unload something that is going to be used in the next frame. It is arguably the more complex to implement, though.


LordIkon wrote:
Does IDrawable let us choose what we want to pass to it? I'll have to look at it.

No, it's an interface, but you should already have the information needed, so you do not need to pass anything (except gameTime, which is passed).
Dec 14, 2007 at 5:17 PM
IUpdateable is good, but I don't know about IDrawable. It sends the wrong "message" to the user. There should not be user-code physically rendering anything in Draw(). It should just be setting up the render queue, which actually makes more sense in Update() than it does in Draw().

Now, if we instead ditch the render queue idea and let user code do all of the drawing (which I do not recommend), then we need to pass information along that will tell the user code:
  • If we're doing a depth-pass, or a full material pass.
  • The view/projection matrices.
  • Lighting information.
  • Access to shadow map and other render target data.

At the very least, the entities would need a reference to the graphics system to be able to query for this information.


About IGeometryProvider:
It's simply an interface through which the renderer obtains geometry data. The interface comments should be fairly self-explanatory, but your main concern will be with the DrawGeometry() method. That method should bind the proper vertex/index buffers and vertex declaration, and issue the DrawIndexedPrimitives call(s). Primitive count (triangle count) and vertex count are not currently used, but it would be nice to implement those for future statistics reporting and renderer optimizations. BindBuffers() is just like DrawGeometry(), except the DrawIndexedPrimitives call is not made. Again, it's not currently used but is there for future work.

The interface will definitely be changing when I get around to implementing skinned animation, but that should only involve interface additions rather than breaking changes, so you won't need to worry about it.

What you won't be able to do at the moment is reflection/refraction mapping. That's on my list of things to do, and basically requires the same changes as needed for lighting.
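For readers following along, the contract described above might look roughly like this. It is reconstructed from the description in this post, not copied from the repository, so the actual interface may differ, and the device calls in the patch implementation are stubbed out as comments.

```csharp
// Reconstructed sketch of the geometry contract described in the post.
public interface IGeometryProvider
{
    int VertexCount { get; }
    int PrimitiveCount { get; }    // triangle count, for future statistics

    void BindBuffers();            // set vertex/index buffers + vertex declaration
    void DrawGeometry();           // BindBuffers plus the DrawIndexedPrimitives call(s)
}

// A terrain patch could implement it with its own per-LOD index data
// while sharing the scene's vertex buffer. VertexCount is a placeholder
// here; device calls are stubbed so the sketch stands alone.
public class TerrainPatchGeometry : IGeometryProvider
{
    private readonly int indexCount;

    public TerrainPatchGeometry(int indexCount) { this.indexCount = indexCount; }

    public int VertexCount { get { return indexCount; } }
    public int PrimitiveCount { get { return indexCount / 3; } }

    public void BindBuffers()
    {
        // device.Indices = patchIndexBuffer;
        // device.Vertices[0].SetSource(sceneVertexBuffer, 0, stride);
    }

    public void DrawGeometry()
    {
        BindBuffers();
        // device.DrawIndexedPrimitives(PrimitiveType.TriangleList, ...);
    }
}
```

A patch with 6 indices would report 2 triangles, which is the kind of number the future statistics reporting would aggregate.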
Coordinator
Dec 14, 2007 at 5:50 PM
Well, what I'm trying to do at the moment is just get the terrain rendering, any way possible. Once I get it rendering I can verify that the culling, LOD, scale, etc. are all working. At the point where I have something working, I'll put up a patch, not meant to be applied, but simply so we can look over it and discuss how we think things should be done. Although I'll look into the render queue and geometry stuff just to get acquainted with it.
Dec 14, 2007 at 6:01 PM

shawmishrak wrote:
IUpdateable is good, but I don't know about IDrawable. It sends the wrong "message" to the user. There should not be user-code physically rendering anything in Draw(). It should just be setting up the render queue, which actually makes more sense in Update() than it does in Draw().

Going for IRenderable is also valid in my eyes; it should just contain the same structure as IDrawable.


shawmishrak wrote:
Now, if we instead ditch the render queue idea and let user code do all of the drawing (which I do not recommend),

Could you elaborate on why this is not recommended, I would like to understand it, or just some links would be fine.


shawmishrak wrote:
then we need to pass information along that will tell the user code:
  • If we're doing a depth-pass, or a full material pass.
  • The view/projection matrices.
  • Lighting information.
  • Access to shadow map and other render target data.

At the very least, the entities would need a reference to the graphics system to be able to query for this information.

You do most of that implicitly anyway.
  • Pass could be added to the scene or camera
  • View/Projection is available through this.SceneManager.ActiveCamera
  • Lighting information is also available through this.SceneManager.CurrentScene.Lights
  • I'm not sure where shadow maps belong; they could be part of the scene though.

I realize that not all of this is exposed today, but we could easily expose it. Also, graphics is always available since you can just call this.Game.Services.
Dec 14, 2007 at 7:09 PM

Could you elaborate on why this is not recommended, I would like to understand it, or just some links would be fine.


The main issue is the time it takes to do scene traversal. For any given frame, you're looking at a minimum of 3 passes. For argument's sake, let's call these the depth, shadow map, and final passes. The depth pass renders the whole scene with no material properties; it just creates the depth buffer for the frame, so the final rendering pass, which is most likely very pixel-shader intensive, will have minimal overdraw. The shadow map pass (assuming 1 light) again renders the entire scene for depth-only information, but this time from the light source's perspective. The final pass renders the entire scene from the camera, but uses the full material shaders this time. If you have multiple lights, this just adds more passes. Using a render queue, we can quickly run through the queue for each pass. If we instead need to do scene traversal and visibility calculations each pass, we could lose some serious time. That's why I prefer one pass to build the render queue; then the renderer has all the information it needs about the scene without having to do several traversals.
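Sketched as code (hypothetical names, with actual draw calls replaced by a log), the payoff is that every pass is a linear walk over the already-built queue, with no scene traversal in between:

```csharp
using System.Collections.Generic;

public enum PassType { Depth, ShadowMap, Final }

public class Renderer
{
    public List<string> Log = new List<string>();   // stand-in for draw calls

    // The queue was built once during scene traversal; each pass
    // just replays it. Real passes would also set render states,
    // shaders, and (for shadows) the light's view/projection.
    public void RenderFrame(List<string> renderQueue, int lightCount)
    {
        DrawPass(renderQueue, PassType.Depth);
        for (int i = 0; i < lightCount; i++)
            DrawPass(renderQueue, PassType.ShadowMap);
        DrawPass(renderQueue, PassType.Final);
    }

    private void DrawPass(List<string> queue, PassType pass)
    {
        foreach (string item in queue)
            Log.Add(pass + ":" + item);   // flat walk over the queue, no scene walk
    }
}
```

With one light and two queued items this produces the minimum 3 passes (depth, shadow map, final) over the same flat list, i.e. 6 "draws"; each extra light just adds one more replay of the queue.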


You do most of that implicitly anyway.

  • Pass could be added to the scene or camera
  • View/Projection is available through this.SceneManager.ActiveCamera
  • Lighting information is also available through this.SceneManager.CurrentScene.Lights
  • I'm not sure where shadow maps belong; they could be part of the scene though.


  • A pass-type could be added to the scene to tell every entity how to render itself for the current Draw method, yes, at the expense of pushing a lot more work onto the entity.
  • View/projection matrices come from both camera and light source. The entity would need to know which to use.
  • All the needed lighting information could be stored in the SceneManager, again at the expense of pushing all the work to the entity.
  • Shadow maps could be stored along with the lights for the current frame.
Coordinator
Dec 14, 2007 at 7:38 PM
Edited Dec 14, 2007 at 7:42 PM
I was assuming that the scene itself, not the manager, would have the light.

The only problem with a render queue is how to use it easily. Efficiency is implied, I believe; otherwise there would be little point in using it. But as a developer, I need to know which things are being drawn each frame, and how to affect that easily. That implies we need a way to tell the render queue "if this isn't being drawn, then please draw it next frame", and "if this is being drawn, then don't draw it anymore (remove it from the queue)". Bear in mind there may be multiple scenes, so the scene manager should be able to go through each scene, update each entity, and during that update, let the render queue know which of the scene's entities should remain in the queue.

So again, a render queue itself isn't a bad thing, but we still need to, at the very least, iterate through all entities and say "render this" or "don't render this", and the render queue needs to react to that, or know about that state. Could all entities be kept in the queue, with each entity simply having an "isVisible" bool? If visible it would be rendered; otherwise the render queue would skip over it?
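That last idea (keep every entity in the queue and just flip a visibility flag) might look like this; all names are hypothetical and the actual draw is replaced by a counter:

```csharp
using System.Collections.Generic;

public class QueueEntry
{
    public string Name;
    public bool IsVisible;          // flipped by the culling pass each frame
}

public static class RenderQueue
{
    // Draw everything flagged visible. Culled entries are skipped rather
    // than removed, so nothing is ever inserted into or deleted from the
    // queue per frame; culling is just a bool write.
    public static int Draw(List<QueueEntry> queue)
    {
        int drawn = 0;
        foreach (QueueEntry entry in queue)
        {
            if (!entry.IsVisible)
                continue;           // skipped, not removed
            drawn++;                // real code: entry.Geometry.DrawGeometry()
        }
        return drawn;
    }
}
```

This trades a tiny per-frame scan of invisible entries for never having to search the queue for an entity to remove, which addresses the "how do I dive in and find it" concern.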
Coordinator
Dec 14, 2007 at 8:46 PM
I'd say the entity update/draw/render queue issue is a top priority over everything else right now. It is going to affect the graphics system and the entire scene manager. It could also affect messaging and physics. It is not a 'blocking issue' for me right now, but in order to get terrain working I'm having to hack a few things.
Dec 14, 2007 at 11:13 PM
rant I swear, if I accidentally hit Tab + Backspace one more time and lose an entire post, a furry kitty somewhere is getting kicked in the head! These CodePlex forums suck when it comes to user input! Anyone else have issues with their Enter key adding a newline at the end of the post instead of where the cursor is? F'in annoying... end rant

Anyway, to restate what I said in my lost post, abbreviated because I don't feel like typing the whole thing again:

If there are issues with the render queue system, it's not a huge deal to go with the visitor pattern for rendering (the XNA-esque way), provided that scene traversal is somewhat guaranteed to be as fast as a render queue traversal. What this means is that each entity must be able to say yes/no to its being visible without any extensive computation. Also, entities with transparent materials must be flaggable as "render-last" and distinguishable from opaque materials for proper handling. I'm not concerned with virtual overhead at the moment, because it's only once per entity per pass, and the virtual call to IGeometryProvider would be there in the render queue system anyway.

Functionally, using the visitor pattern would require the graphics library to become a library instead of a manager. This means I would provide functions that would help entities draw themselves in whatever way is commanded by the SceneManager. This also means the SceneManager would be the one responsible for managing the rendering cycle, and the graphics manager would just provide functionality. Let me state up front that I don't have a problem with doing it this way, provided the criteria I mentioned above is met. My whole point with the render queue system was to optimize the rendering.

I agree that it's easier and more intuitive from an XNA user's perspective to use the visitor pattern approach. It also allows for easier custom rendering. I don't want to force an uncomfortable system on everyone; I want this to be as easy on end users as possible. However, I also want to meet performance requirements. That's why I still advocate the render queue system: there's only one scene traversal, and you can build the render queue while doing visibility determination.

Perhaps we should put this to a vote:
  1. If you would rather go with a visitor pattern (with a modified IRenderable interface) on the Scene itself, provided the SceneManager allows linear time traversal and constant time visibility check with a boolean, say "#1!"
  2. If you're more comfortable with the render queue as is, say "#2!"
  3. If you're interested in the hybrid system, say "#3!" (see below)

One approach I just thought of while writing this is a different way of building the render queue. The problem seems to be render queue entry persistence. So, I propose building the render queue from scratch each frame. The SceneManager will traverse the scene and build a RenderQueueEntry (a struct this time) for each visible entity. Then the renderer will just act on this list and clear it when done. No garbage, since it's all structs, and no virtual call overhead, but there are two issues: (1) building a render queue struct that sufficiently describes static models, terrain patches, and skinned models, and (2) the cost of struct copying (and potentially sorting). Basically, the "win" of this approach over the visitor pattern is that it's more intuitive. Whether virtual-function traversal of the scene per frame or a single set of struct copies per frame is faster, I don't know. I can run some tests and let you all know.
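To make that concrete, here's a rough sketch of the per-frame struct queue; the SceneNode type and its members are hypothetical placeholders, not existing engine code:

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Sketch of option #3: a struct-based queue rebuilt every frame.
public struct RenderQueueEntry
{
    public Matrix World;
    public VertexBuffer Vertices;
    public IndexBuffer Indices;
    public int StartIndex;
    public int TriangleCount;
}

public class SceneManager
{
    // A List<T> of structs stores the entries inline; Clear() keeps the
    // backing array, so rebuilding each frame produces no garbage.
    private readonly List<RenderQueueEntry> queue = new List<RenderQueueEntry>(1024);

    public void BuildQueue(SceneNode root, BoundingFrustum frustum)
    {
        queue.Clear();
        Traverse(root, frustum);
        // The renderer then iterates 'queue' (optionally sorting it) and draws.
    }

    private void Traverse(SceneNode node, BoundingFrustum frustum)
    {
        if (!frustum.Intersects(node.Bounds))
            return;                          // cull the whole branch

        if (node.IsRenderable)
            queue.Add(node.CreateEntry());   // one struct copy per visible node

        foreach (SceneNode child in node.Children)
            Traverse(child, frustum);
    }
}
```

Note that visibility determination and queue building happen in the same traversal, which was the original motivation for the queue.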
Coordinator
Dec 15, 2007 at 12:11 AM
To truly know if I could vote for option #2, I'd have to know how you would let a render queue change dynamically each frame. Without that knowledge, I'd be most interested in #3, and if that seemed impossible due to performance, then I'd go for #1.
Dec 15, 2007 at 12:30 AM
I vote option #2. Looking at it, it seems fairly simple. Of course, I didn't really understand the problem Ikon was talking about.
Coordinator
Dec 15, 2007 at 12:40 AM
If you run the framework code (with the ship and a black background), it is running off a render queue.

Currently, when the model is loaded it is also put into a render queue. Each frame the graphics system draws what is in the render queue. This is nice, but what if we need to cull the model out? We have to remove it from the render queue. So how do we dive into the render queue, find it, and remove it? Probably not difficult, but how much time does that take? What if we're trying to do that many thousands of times each frame? Is it still efficient? These are the concerns right now.
Dec 15, 2007 at 12:52 AM
I vote for option #1 (Of course he does :) )

Actually, you wouldn't need to traverse the scene for visibility; each entity could simply determine this itself, or the scene could be really smart and do it. You would simply listen to PropertyChanged on the active camera, and if the frustum changed, run through the entity tree. Further, the scene could listen to PropertyChanged on the entities, and if Position or Visibility changed, simply move the entity in or out of the visibility list.
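A minimal sketch of that event-driven approach, assuming cameras and entities expose INotifyPropertyChanged (the property and method names here are guesses, not existing code):

```csharp
using System.ComponentModel;

public class Scene
{
    // Hypothetical sketch: rebuild visibility only when something changes,
    // instead of traversing the whole scene every frame.
    public void Attach(Camera camera)
    {
        camera.PropertyChanged += delegate(object sender, PropertyChangedEventArgs e)
        {
            if (e.PropertyName == "Frustum")
                this.RebuildVisibilityList();   // full pass, but only on camera change
        };
    }

    public void Attach(BaseEntity entity)
    {
        entity.PropertyChanged += delegate(object sender, PropertyChangedEventArgs e)
        {
            if (e.PropertyName == "Position" || e.PropertyName == "Visible")
                this.UpdateVisibility(entity);  // touch only the entity that changed
        };
    }

    private void RebuildVisibilityList() { /* run through the entity tree */ }
    private void UpdateVisibility(BaseEntity entity) { /* move in/out of the visible list */ }
}
```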

With regards to the transparent issue then you can use DrawOrder to make sure that transparent items are rendered last.

Just some quick thoughts, need more sleep for more input ;)
Dec 15, 2007 at 3:33 AM
Edited Dec 15, 2007 at 5:46 AM
Sturm, your vote doesn't count. ;)

For the sake of completeness, I mocked up a very primitive scene manager that handles both kinds of scene <-> renderer interaction. For the first, the option #3 hybrid approach is used. The scene graph is traversed once (virtual child nodes) and for each node a struct-based RenderQueueEntry is created (around 108 bytes in size; Marshal.SizeOf() doesn't work on Xbox), populated, and sent to the graphics system. After the traversal, the entire render queue is processed three times (for test purposes it just does some math on the data, no actual rendering). For the second scene manager, the scene hierarchy is traversed three times, and for each entity the same math is done on the data as in the first scene manager. Hence, the data in both scene managers is processed over three passes. The "math" done in each is simple and is meant to be cheap, so the real comparison is in the time taken to do the scene traversal/queue traversal. No "visibility culling" is done, so this is the worst case in the amount of memory copying required for the render queue method.

<snip!>

Coordinator
Dec 15, 2007 at 5:10 AM
Edited Dec 15, 2007 at 5:18 AM
Based on the specs you just posted, and under the assumption they're accurate, and finally, under the assumption that you can make that system flexible enough for our needs, then you have my vote on that one :)

Oh, and initial terrain tests suggest it worked perfectly, on the first try......seriously, the entire terrain, LOD, quad-tree, patch, normal mapped terrain system, first try. Changing coordinate systems wasn't as bad as I thought it would be.

I've got quite a bit of commenting and code cleanup to do still. I also created a very simple light class (before I heard Shaw had already done it). Either way, I don't expect this terrain code to get applied to the source from the patch because of the questions we're having on entities right now. What I do expect is that we can look at the patch and figure out a clean way to call draw on it. If we have to convert it to IGeometry... then that needs to be done as well.

I'll try and post the patch tonight if I get the cleanup done. If the commenting/code isn't perfect, you'll have to get over it; it's only a patch. :)
Dec 15, 2007 at 5:28 AM
Clearly the performance of the "hybrid" approach is dependent on the size of the queue structure. The one I used is:

    [StructLayout(LayoutKind.Auto)]
    public struct RenderQueueEntry
    {
        public VertexBuffer Vertices;       // 8-byte pointer on Xbox
        public IndexBuffer Indices;         // 8-byte pointer on Xbox
        public Matrix WorldTransform;       // 16 * 4 bytes = 64 bytes
 
        public int StartIndex;              // 4 bytes
        public int NumTriangles;            // 4 bytes
        public int A;                       // 4 bytes (dummy data)
        public int B;                       // 4 bytes (dummy data)
        public int C;                       // 4 bytes (dummy data)
 
        public TransformData Transform;     // 8-byte pointer on Xbox. Contains all transformation
                                            // data (world matrix, bone matrices for skinned models,
                                            // etc.). This is a _class_ reference; the TransformData
                                            // instance lives within the entity and is read directly
                                            // by the renderer.
    }
 
    public class TransformData
    {
        public Matrix[] mat = new Matrix[15];
    }

The real structure could be even smaller: push the world matrix into TransformData and strip out the A, B, C fields, which are only dummies to make the structure larger (I don't see any real data we would put there). The beauty is that all of the Matrix data will be in TransformData, which lives as part of the entity, so only a reference needs to be passed.
Coordinator
Dec 15, 2007 at 5:41 AM
Sounds very cool, the engine will be a fun learning experience, not just for others, but us as well (well at least for me).

I've finished initial tests of the terrain, here's a link to the issue with a screenshot and some more info.
http://www.codeplex.com/QuickStartEngine/WorkItem/View.aspx?WorkItemId=4785
Dec 15, 2007 at 6:29 AM
Shit, I'm sorry guys! There was a gaping hole in my test coverage, leading to the render queue method only doing half of its required work! I get to wear the "dumbass" hat for the next week.

Results (time to process a scene with n elements):
Windows (Core 2 Duo E6600):
# Entities
100    1,000    10,000    100,000
---    -----    ------    -------
0.3ms  3ms      32ms      345ms      Render Queue (stack-based, new each frame, 3x internal processed)
0.2ms  2ms      25ms      264ms      Direct Draw Traversal (3x traversal)
0.2ms  2ms      26ms      271ms      Render Queue (persistent class-based, 3x internal processed)

Xbox:
# Entities
100    1,000    10,000    100,000
---    -----    ------    -------
2.4ms  28ms     345ms     3508ms     Render Queue (stack-based, new each frame, 3x internal processed)
2.1ms  24ms     283ms     2822ms     Direct Draw Traversal (3x traversal)
2.1ms  25ms     319ms     3225ms     Render Queue (persistent class-based, 3x internal processed)

The methods are a lot closer now, with direct rendering from entities actually pulling slightly ahead. I'm a little surprised that using a reference-based, garbage-less render queue did not win out over the direct drawing method, especially since virtual calls are pretty slow on Xbox, but the results are very close. The results appear to scale linearly with the number of passes, as well.

So, this test basically proves that both methods are very close in performance, with direct rendering being slightly faster in this test case.

Again, sorry for screwing up the original test!
Dec 15, 2007 at 6:31 AM
Edited Dec 15, 2007 at 6:32 AM
That terrain image looks great! How is the lighting/shadowing being calculated? Is it just N dot L lighting in the shader?
Coordinator
Dec 15, 2007 at 6:44 AM
Yup, nothing too fancy. Just like the template. A 1024x1024 terrain with a couple of LODs is taking up about 100 MB right now.

Patch has been uploaded, it will need an extensive review:

Terrain.cs (500 lines on its own, of course 50-100 are probably comments)
QuadTree.cs
TerrainPatch.cs
QSMathHelper.cs

Changes to SceneManager.cs, Scene.cs, and the example game. Most of these changes are temporary of course, just to get something running.

Bear in mind I'm not commenting all of the code just yet. I'd prefer we finalize a design before I spend hours commenting things that may disappear or be rewritten. I'd like a review of GC performance if possible; I profiled it, but I'm very new to the CLR profiler, and I have no 360 to test on.

Hopefully after a good review we'll get a better idea on scene/entity implementation.
Coordinator
Dec 15, 2007 at 6:48 AM
Well, isn't direct rendering also the easiest method to implement? If that's the case, and there are no real downsides to it, I'd prefer going that route. However, my lack of knowledge of graphics systems means I'd rather leave the decision up to you. All I ask for is a flexible system. Of course we all need to know a little bit about each other's systems to design this engine, but if we recognize who the "expert" in each category is, then with a little guidance and trust I think we'll be OK. A good review process should take care of the rest. If something stinks, it'll get declined and discussed, and a new version can be made.
Dec 15, 2007 at 6:48 AM
So now that I've successfully proven myself an idiot, we're still left with the original decision of how to implement rendering. The original render queue concept was designed to make rendering more efficient, but apparently isn't really doing its job. By manually using arrays, I can get the class-based render queue version to perform approximately on par with the direct drawing method. So it really does come down to a matter of team preference! Do you want entities to draw themselves, or to tell the renderer how to draw them?
Dec 15, 2007 at 7:01 AM
In theory, yes, direct rendering is the easiest method. The problems start coming up when the SceneManager needs to take over a significant part of the graphics functionality from the graphics manager. Really, the graphics manager would shrink to a fairly trivial class, and all the fun stuff would happen in the SceneManager. Later on, we'll want to start sorting by shader/material, so the entities will need to be traversed in a dual order when rendering: by material, then by draw order. So, two entities could share a shader but differ in draw order, or vice versa. The SceneManager will need to handle this, even if the graphics system assigns its own draw order to entities. At this point, I really don't know which is the best choice going forward. There are pros and cons to both.
Coordinator
Dec 15, 2007 at 7:02 AM
Edited Dec 15, 2007 at 7:06 AM
Well, if we let entities draw themselves, then they have to be intelligent about it. How are they going to know what the draw order is, for example? However, other than draw order, I don't see a downside to letting them draw themselves if drawing from a queue isn't more efficient.

Having each entity draw itself is very intuitive (easy as hell to understand). I think the most important thing is an efficient culling setup. Testing every entity's bounding sphere against a frustum isn't incredibly expensive, but it would be nice if there were a fast way for an entity to know which quad-tree section(s) it resides in, because then we could cull by quad-tree section on a first pass. For example, if half the map wasn't in view, and half the entities were in that section, those entities would be marked invisible for that frame (through a simple bool). I really don't know how the culling system would work exactly, but I think you get where I'm going with this.
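A sketch of that two-stage culling; the QuadTree and entity members here are assumptions about how the final classes might look, not the actual patch code:

```csharp
using Microsoft.Xna.Framework;

// First pass: cull whole quad-tree sections, flagging every entity that
// lives in an off-screen section instead of testing each bounding sphere.
private void CullSection(QuadTree node, BoundingFrustum frustum)
{
    if (!frustum.Intersects(node.BoundingBox))
    {
        node.SetEntitiesVisible(false);      // the entire subtree is off-screen
        return;
    }

    if (node.IsLeaf)
    {
        // Second pass: only entities in visible sections get a sphere test.
        foreach (BaseEntity entity in node.Entities)
            entity.Visible = frustum.Intersects(entity.BoundingSphere);
    }
    else
    {
        foreach (QuadTree child in node.Children)
            CullSection(child, frustum);
    }
}
```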

I'd say my vote is for the standard way of letting entities draw themselves. It'll make for a faster production/implementation of things. I can start cleaning up and integrating more stuff, sooner :).

I'd like to know a couple things still:
1.) Will we be using IRenderable or similar?
1a.) If you answered YES to 1, then where will we pass the draw information to this IRenderable entity if we can't pass it through Draw? We shouldn't do it in Update if Update occurs more often than Draw, as it would be a waste. We could let entities "pull" their information from a main source, like their scene.
2.) If you answered NO to 1, then what will we be passing into draw? I'd assume at least: view, projection, and light.
3.) Or letting entities pull their information from the scene. I can imagine the scene getting the current render camera info at the beginning of its draw phase, from the camera interface. Then entities would have view and projection from the camera. The light information would already be in the scene.
4.) ??? I'm sure there are other ways or combinations of the first 3.

Again, this vote is still open, that is just how I see it.

Got a lot of stuff done over the last couple of days. I'd love to stay and ponder which way to go with this, but I've been up about 19 hours now; time to hit the hay. I'm sure I'll get time to work on more stuff tomorrow. If you don't get a chance to review my stuff, don't sweat it; it's not going to do a lot of good until we know how we want to render things and deal with entities.
Coordinator
Dec 15, 2007 at 7:16 AM


Taliesen wrote:
Quadtrees is a method of dividing the terrain into a tree for culling and other purposes. Allows you to have larger terrains without taking such a performance hit since it only renders the leafs that are in the cameras view. Terrain patches are, and correct me if I'm wrong, the actual terrain vertices in a leaf node of the tree. It creates a small "patch" of terrain. Though I am not sure if that is correct or not.


Sorry, just saw this post. The terrain patches are actually just sets of indices (index buffers). The vertices for a quad-tree section are fixed, but you could have 3 different LODs through 3 index buffers. I just put up a patch with the terrain code if you'd like to see what I'm talking about. Hopefully I can learn geomipmapping and get that in soon; we'd get high detail close up and very low LOD as you pull back. However, I'm not sure what performance cost geomipmapping incurs, having to keep the entire terrain at 3-4 different LODs. Although any LOD past the first is simply another, smaller index buffer; no extra vertices.
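For illustration, a lower LOD really is just the same vertex grid indexed with a larger stride; a sketch of building one patch's indices (a hypothetical helper, not the patch code):

```csharp
using System.Collections.Generic;

// step = 1 gives full detail; step = 2 skips every other vertex, and so on.
// vertexPitch is the number of vertices per row in the shared vertex buffer.
private static int[] BuildPatchIndices(int patchSize, int vertexPitch, int step)
{
    List<int> indices = new List<int>();

    for (int z = 0; z + step < patchSize; z += step)
    {
        for (int x = 0; x + step < patchSize; x += step)
        {
            int topLeft = z * vertexPitch + x;
            int topRight = topLeft + step;
            int bottomLeft = (z + step) * vertexPitch + x;
            int bottomRight = bottomLeft + step;

            // Two triangles per quad, wound consistently.
            indices.Add(topLeft);  indices.Add(topRight);    indices.Add(bottomLeft);
            indices.Add(topRight); indices.Add(bottomRight); indices.Add(bottomLeft);
        }
    }

    return indices.ToArray();
}
```

Each LOD's array goes into its own index buffer; all of them reference the same vertex buffer.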
Dec 15, 2007 at 7:29 AM
Draw order will be maintained by the Scene, and the entities will be traversed in draw order. The Scene will really take on the responsibility of rendering the entire scene. It'll look something like this:

Scene:
public void Draw(GameTime gameTime)
{
    // Create VisibleEntities list, a list of renderable entities in draw order
 
    foreach(Light light in NearLightSources)
    {
        DrawScene(light.ShadowMap, light.Projection, light.View, ... some other params);  // These parameters will most likely be a struct
    }
 
    // Same for reflection maps, refraction maps, other types of render targets
 
    DrawScene(backBuffer, CurrentCamera.Projection, CurrentCamera.View, ...);
}
 
public void DrawScene(RenderTarget rt, Matrix projection, Matrix view, ...)
{
    graphics.SetRenderTarget(0, rt);
 
    foreach(IRenderable entity in VisibleEntities)  // VisibleEntites is a pre-sorted list of renderable entities sorted in draw order!
    {
        entity.Draw(... whatever params we need to pass ...);
    }
}

Obviously this is just pseudocode and will be significantly more complex. IRenderable will need to be defined to fit our needs.

I still want to run some more performance tests tomorrow.

Dec 15, 2007 at 7:32 AM
First off, Shaw, could you upload a patch with the code you used for profiling? I would love to have a look at it :)

If we are going for a model where entities draw themselves, which I do feel is the right way, then we should implement IDrawable. Though it only provides GameTime as an input parameter, that's not an issue; an entity should be able to get all other information from the scene. So in a Draw method you should be able to write:

public virtual void Draw(GameTime gameTime)
{
    Frustum view = this.SceneManager.ActiveCamera.Frustum;
    List<Light> lights = this.Scene.Lights;
 
    // No need to be concerned about being visible or not, if an entity isn't visible or culled draw isn't invoked
    // if (this.Visible == false) return;
} 

There is also a DrawOrder property on all entities (it's part of the IDrawable interface) which we would use to order the drawing of entities. The DrawOrder would simply be the same as the distance, or a similar metric we would use to keep track of culling. While traversing, the SceneManager would take the transparent entities and render those last.

The SceneManager would only have to traverse the entities if the camera's frustum updates; it would be the responsibility of the entities to do their own cull checking every time their position is updated. This means that we would most likely have more properties on the entity:

public class Entity
{
    // Indicates that an entity is visible; this is a game state property
    public bool Visible {get/set} 
 
    // Indicates whether the entity is culled; this is updated by the scene when the camera frustum changes, or by the entity when its position changes
    public bool Culled {get/set}
} 

There might be a need for both Culled and InViewFrustum, as entities in the view frustum might still be culled (they could be obscured by another entity or by terrain).
Dec 15, 2007 at 7:50 AM
The Draw() methods will have a lot of redundant logic, too.

public void Draw(GameTime gameTime)
{
    if(this.Scene.RenderPass == RenderPass.LightSource)
    {
        // Render as depth-only with special shader (configured through Material or GraphicsSystem), pulling data from this.Scene.CurrentRenderLight
    }
    else if(this.Scene.RenderPass == RenderPass.DepthOnly)
    {
        // Render as depth-only with "empty" shader, pulling data from Camera
    }
    else   // Full-shader pass
    {
        // Render with full surface shaders, pulling data from Lights, Camera, etc.
    }
}

or, alternatively, pass along a structure of render options (instead of branching on a render pass type) so we can more easily support future render pass types without revisiting every single entity. We can try to encapsulate this as much as possible, so we have as few points of change as possible when the rendering algorithm changes. I definitely want to avoid a new rendering pass forcing a change in every single Entity.Draw() method.
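A sketch of what that options structure might look like; the names, fields, and the Material/Geometry members are all hypothetical:

```csharp
using Microsoft.Xna.Framework;

// The scene compositor fills this out once per pass; entities (or their
// Materials) consume it, so a new pass type doesn't touch Entity.Draw().
public struct RenderPassOptions
{
    public RenderPass PassType;        // LightSource, DepthOnly, Full, ...
    public Matrix View;
    public Matrix Projection;
    public Light CurrentLight;         // only meaningful during light passes
    public string Technique;           // lets the Material pick its shader technique
}

public void Draw(GameTime gameTime, ref RenderPassOptions options)
{
    // The Material owns the pass-specific shader setup; the entity just
    // supplies its geometry and transform.
    this.Material.Apply(ref options, this.WorldTransform);
    this.Geometry.Render();
}
```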


I'll package up the benchmark code tomorrow; I need some sleep. It's not part of the QuickStart project, it's separate.
Dec 15, 2007 at 8:01 AM
Thinking about it some more, I think my biggest concern is how to enforce consistent yet not-too-burdensome rendering logic in each entity. Basically, each entity must know how to perform every type of rendering pass the scene compositor could ever request. You can encapsulate a lot of this inside Material: just let it query for the information it needs for the current pass and set up the shaders accordingly (and hope the constant scene querying doesn't become a bottleneck!)
Dec 15, 2007 at 9:16 AM
It's always a tradeoff; you simply have to decide on a pattern to use. Another one to consider would be the Strategy pattern, and I think it would actually fit nicely. It does require a strategy to be written for each entity type and render pass, though it wouldn't mean touching existing entities just because we introduced a new render pass.

You would still have Draw, but it's not very likely to be overloaded on any subtypes. We then just need some way of registering which rendering strategy should be used for each type, but that could easily be done in the configuration file.
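A quick sketch of that registration, in C# 2.0 style; all of the names here are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// One strategy per (entity type, render pass) pair. Adding a render pass
// means writing and registering new strategies, not editing entities.
public interface IRenderStrategy
{
    void Render(BaseEntity entity, GameTime gameTime);
}

public class StrategyRenderer
{
    private readonly Dictionary<KeyValuePair<Type, RenderPass>, IRenderStrategy> strategies =
        new Dictionary<KeyValuePair<Type, RenderPass>, IRenderStrategy>();

    // Called once at startup, e.g. driven by the configuration file.
    public void Register(Type entityType, RenderPass pass, IRenderStrategy strategy)
    {
        this.strategies[new KeyValuePair<Type, RenderPass>(entityType, pass)] = strategy;
    }

    public void Draw(BaseEntity entity, RenderPass pass, GameTime gameTime)
    {
        IRenderStrategy strategy;
        if (this.strategies.TryGetValue(
                new KeyValuePair<Type, RenderPass>(entity.GetType(), pass), out strategy))
        {
            strategy.Render(entity, gameTime);
        }
    }
}
```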
Dec 15, 2007 at 11:07 AM
Here are my review comments:

In BaseEntity.cs
  • Do not move fields out of the private scope for performance reasons before we can prove that there is a suitable performance gain
  • Should we follow the Is... pattern for boolean properties, or should we just have Visible/etc.?
  • Remember to use the this prefix for instance members
  • Consider using Quaternion for rotation instead of a matrix; it does make life easier

In Camera.cs
  • aspectRatio should be a private field with a public property.
  • The aspectRatio value should be a QSConstant
  • Remember to use the this prefix for instance members

In FreCamera.cs
  • maxZoomLevel should be a QSConstant
  • Again, do not move fields out of the private scope
  • Having parameterized constructors makes serializing difficult; we should really reconsider this
  • Remember to use the this prefix for instance members

In QSConstants.cs
  • Rename DefaultTerrainElevStr to TerrainElevationStrength
  • Rename DefaultQuadTreeWidth to QuadTreeWidth
  • Remember to use the this prefix for instance members

In Scene.cs
  • Even if sceneTerrain and sceneLight are temporary, do honor the Coding Guidelines; sometimes temporary bad practices become permanent bad practices
  • Remember to use the this prefix for instance members
  • Do not prefix fields with the type name, it's implicit (it's not the class)

In SceneManager.cs
  • Why static fields? At any one time there is only one SceneManager
  • Remember to use the this prefix for instance members
  • Do not use foreach (though here it might be OK as there won't be many scenes); use a backward-traversing for loop

In QSModel.cs
  • Exclude this as the only change is a space

In QuickStartSampleGame.cs
  • Remember to use the this prefix for instance members
  • The param XML comment is badly formatted
  • Do remember to have brackets even on single-line if statements

In Light.cs
  • I think you should create a separate folder for this, as there will be many different types of light
  • Use // instead of /**/ as VS supports this better
  • Remove unneeded using statements
  • Put XML comments where needed
  • Remember to use the this prefix for instance members
  • Having set-only properties does not make sense; this usually indicates misuse of properties
  • Do not move fields out of the private scope for performance reasons before we can prove that there is a suitable performance gain

In Terrain.cs/QuadTree.cs
  • Do not move fields out of the private scope for performance reasons before we can prove that there is a suitable performance gain
  • Do not use /**/ for block comments, VS can block comment/uncomment with ctrekc/ctreku
  • Keep types in separate files
  • Do not use all-uppercase enumeration values
  • For single-line get/set use single-line statements
  • Remember to use the this prefix for instance members
  • Do not prefix properties with the class name, it's implicit
  • Do remember to have brackets even on single-line if/for statements
  • Do not expose arrays directly; remember these can be assigned to at any time, even with null. Use read-only lists instead, where possible
  • The second constructor could just call the first constructor, as there is duplicated code; instead of base(game) use this(game)
  • Instead of using a nullable int in Initialize, create two versions: one which takes smoothingPasses and one which does not
  • When throwing exceptions, remember to add an <exception> tag to the XML comments
  • Split SmoothTerrain up into smaller code units; it makes it easier to understand
  • Create constants for the normal strings; they are used in multiple places
  • Use == false instead of ! as it is more intuitive to read and improves code review
  • Check for 0 before dividing, or make sure the field cannot contain 0

In TerrainPatch.cs
  • Do not move fields out of the private scope for performance reasons before we can prove that there is a suitable performance gain
  • Do not use /**/ for block comments, VS can block comment/uncomment with ctrekc/ctreku
  • Remember to use the this prefix for instance members
  • Do remember to have brackets even on single-line if/for statements
  • Do not expose arrays directly; remember these can be assigned to at any time, even with null. Use read-only lists instead, where possible

In QSMathHelper.cs
  • Make the type static, as it only contains static members
  • Do not use /**/ for block comments, VS can block comment/uncomment with ctrekc/ctreku
  • Do remember to have brackets even on single-line if/for statements

I'm updating BaseEntity with a PropertyChanged event, and I'm going to update all entities to reflect this. This means that all fields will become properties. If this incurs too big a performance hit, we can rework where appropriate.
Dec 15, 2007 at 11:10 AM
Crappy discussiongroup (Or me for not learning the format):

ctrekc/ctreku

Should be:
ctrl+k+c/ctrl+k+u
Dec 15, 2007 at 4:29 PM
I uploaded the test project: http://www.cse.ohio-state.edu/~holewins/QuickStart/SceneTraversalTest.zip

Like I said, there is no actual rendering, it just tests the speed of the traversals.
Dec 15, 2007 at 7:32 PM
I think we really need to move terrain generation to a content pipeline task. On Xbox, it takes over a minute to get through the SmoothTerrain method. ;)

The first real problem the Xbox had was creating the terrain VertexDeclaration once per frame, within the Draw cycle. Graphics resources should not be created per frame, and should not be created within a Draw call; the Xbox loves to throw DriverInternalErrorExceptions in these cases. Second, QuadTree.cs:352 is throwing an exception about not having a vertex and pixel shader bound to the device. But an effect is being bound, so apparently the Xbox hardware is not happy with the shader for some reason. I'm not sure why yet.
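For reference, the fix amounts to hoisting the creation out of the per-frame path; a sketch (not the actual patch code):

```csharp
using Microsoft.Xna.Framework.Graphics;

public class Terrain
{
    private VertexDeclaration vertexDeclaration;   // created once, reused every frame

    public void LoadGraphicsContent(GraphicsDevice device)
    {
        // Create graphics resources at load time, never inside Draw().
        this.vertexDeclaration =
            new VertexDeclaration(device, VertexPositionNormalTexture.VertexElements);
    }

    public void Draw(GraphicsDevice device)
    {
        device.VertexDeclaration = this.vertexDeclaration;   // cheap assignment
        // ... set vertex/index buffers and call DrawIndexedPrimitives ...
    }
}
```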

It looks great on Windows though!
Coordinator
Dec 15, 2007 at 9:06 PM
Edited Dec 15, 2007 at 9:08 PM
Good stuff to know. Weren't you able to run the original template on the Xbox, with terrain?

I might be able to speed up the smoothing algorithm.

I've never used my own content processor, but I was thinking the same thing. How would we define, within the content processor, which files to use to create the terrain? What kind of file would the content processor create, and how would we read it?
Coordinator
Dec 15, 2007 at 9:18 PM
I will try and optimize what I can, but I will only be able to go off of your feedback for the Xbox360 part of things.

Sturm, I'll take your review into account as well.

Either way though we still need to settle on entity/rendering setup fairly soon.
Coordinator
Dec 15, 2007 at 9:22 PM
Edited Dec 15, 2007 at 9:27 PM
Holy crap! I changed one line of code like you suggested, Shaw (the vertex declaration), and the framerate has gone through the roof!!

Here's a comparison:
Before:
Release mode
MSAA disabled
320fps

and in the same spot....
After
Debug mode
MSAA enabled
450fps!

That is a 33% increase in efficiency. I'll need to run some more tests; maybe this is some kind of fluke.

EDIT: It must be some kind of fluke, it didn't change the framerate on the template. Hmmm, more tests.
Dec 15, 2007 at 9:30 PM
The original template did run on 1.0 Refresh. I have not tried it on 2.0. With the number of problems people have been having with various Xbox issues in 2.0 (myself included), I wouldn't be at all surprised if there's an undocumented change or just broken functionality causing it.

Usually, content importers/processors work on a one-to-one asset-to-XNB relationship: one input file, one output file. However, you can invoke additional processors to build additional content and either embed it inside the XNB file, or create an ExternalReference, which puts a "link" inside the XNB file to another XNB resource. For your terrain, you can either build it with all geometry and textures in one file, or build just the terrain and map the textures at run-time, or even some combination. The flow would look something like:

Compile-Time:  TerrainHeightMap.dds -> <Texture Importer> -> TextureContent instance (I think; the name might be slightly off) -> <Terrain Processor> -> TerrainContent instance (in our ContentPipeline.dll) -> <TerrainContentWriter>
Run-Time:  TerrainHeightMap.xnb -> <TerrainReader> -> Terrain instance

You can take a look at the StaticModelProcessor in QuickStart.ContentPipeline.dll to get an idea for how the content pipeline will generally work. You'll be working with texture data instead of model data, but the rough process will be the same. Also take a look at the StaticModelWriter to get an idea for how to write XNB files. Additionally, the XNA help files provide a nice diagram of design-time vs run-time data types. For instance, you'll use VertexBufferContent instead of VertexBuffer and IndexCollection instead of IndexBuffer. At run-time, the content pipeline will automatically build VertexBuffer and IndexBuffer types for these.
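To make that flow concrete, a skeleton of the design-time half might look like this; TerrainContent, BuildTerrain, and the member names are placeholders, not existing code:

```csharp
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Compiler;

[ContentProcessor]
public class TerrainProcessor : ContentProcessor<TextureContent, TerrainContent>
{
    public override TerrainContent Process(TextureContent input, ContentProcessorContext context)
    {
        // Read heights from the texture and do the expensive work (vertex
        // generation, smoothing) here, at compile time, instead of on the Xbox.
        return BuildTerrain(input);
    }
}

[ContentTypeWriter]
public class TerrainContentWriter : ContentTypeWriter<TerrainContent>
{
    protected override void Write(ContentWriter output, TerrainContent value)
    {
        output.WriteObject(value.Vertices);   // VertexBufferContent -> VertexBuffer at run-time
        output.WriteObject(value.Indices);    // IndexCollection -> IndexBuffer at run-time
    }

    public override string GetRuntimeReader(TargetPlatform targetPlatform)
    {
        // Points the XNB at the matching run-time TerrainReader.
        return typeof(TerrainReader).AssemblyQualifiedName;
    }
}
```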

If after going through the help files and the existing code you have any questions, feel free to post them.
Dec 15, 2007 at 9:33 PM


LordIkon wrote:

That is a 33% increase in efficiency. I'll need to run some more tests; maybe this is some kind of fluke.

EDIT: It must be some kind of fluke, it didn't change the framerate on the template. Hmmm, more tests.


That's an even greater increase, since you're now in debug mode with MSAA enabled. It's probably not a fluke. Creating graphics resources per frame is very taxing. Remember in the prototype when I nearly doubled the performance by not recreating the sprite batch used for GUI rendering each frame? Dynamically creating graphics resources is a sure-fire way to completely stall the graphics pipeline.
Dec 15, 2007 at 9:44 PM

LordIkon wrote:

Either way though we still need to settle on entity/rendering setup fairly soon.


Agreed. Unfortunately there's no clear-cut solution to this one.

Would it help if I did a real mock-up of both methods in the current framework and posted it as a patch for all to see? The mock-up I did last night was purely a traversal speed test and did no actual rendering. I can create both systems (or at least primitive versions of each) if that would help. That way, you can look at how each system works in code instead of just reading about it. At the end of the day, it'll be majority rule. I'm not going to sit here and dictate how this should be done; that's not my place.
Coordinator
Dec 15, 2007 at 9:50 PM
Edited Dec 15, 2007 at 9:54 PM

Sturm wrote:

In BaseEntity.cs

Do not move fields out of the private scope for performance reasons before we can prove that there is a suitable performance gain


Shaw has already run tests showing performance hits when using accessors on Vectors or Matrices.


Sturm wrote:
Remember to use the this prefix for instance members.

Are we really going to use 'this' in hundreds of places? I'd probably rather go with member notation (mVariable). If someone wants to know whether it is a member or a local, won't IntelliSense tell them?


Sturm wrote:
Consider using Quaternion as rotation instead of matrix, it does make life easier

I've been considering it; I'll have to learn quaternions a little better before I can reliably put them into the engine.


Sturm wrote:
In FreeCamera.cs

maxZoomLevel should be a QSConstant


It is only a constant of that specific class, not any other camera.


Sturm wrote:
In QSConstants.cs

Rename DefaultTerrainElevStr to TerrainElevationStrength
Rename DefaultQuadTreeWidth to QuadTreeWidth


Then it would seem less like an engine default value, and more like a developer's constant value.


Sturm wrote:
Even if sceneTerrain and sceneLight are temporary, do honor Coding Guideline, sometimes temporary bad practices become permanent bad practices


It's a patch specifically noted not to be applied. The entire light class is going away as soon as Shaw posts his light code. Basically the way I see it is, if the comments are lacking too much, the patch gets denied. Once everything is kosher, patch gets applied. The more time I spend commenting code I know I will be re-writing or throwing away, the less I get done.


Sturm wrote:
In QSModel.cs


QSModel? How did that get in this project? It is from the template. I should add that it is not in my solution anywhere. Not sure how you got it.


Sturm wrote:
param xml comment is badly formatted


Whoops, fixed.


Sturm wrote:
In Light.cs

I think you should create a separate folder for this as there will be many different types of light
Remove unneeded using statements
Having set only properties does not make sense, this usually indicates misuse of properties


  • You'll have to let Shaw know, he's creating the lighting setup. In fact, I'm hoping to get his lighting code before submitting my patch for final review.
  • Removed unneeded using statements, good eye Sturm.
  • Set-only properties allow setting to do specific things, but require that callers 'get' directly from the public variable, for performance reasons. As stated earlier, there are stats showing the importance of not using accessors on any Vector or Matrix, unless it is something that will only happen occasionally, not each frame.


Sturm wrote:
Make the type static as it only contains static members

Do remember to have brackets even on single line if/for statements


Another whoops, meant to do those. Thanks.


Sturm wrote:
Do not prefix properties with class name, it's implicit


Not sure what you mean.


Sturm wrote:
Use == false instead of ! as this is more intuitive to read and improves code review


Agreed.


Thanks for the lengthy review Sturm. Everything I didn't mention is being considered as well.
Coordinator
Dec 15, 2007 at 9:53 PM


shawmishrak wrote:
Would it help if I did a real mock-up of both methods in the current framework and posted it as a patch for all to see? The mock-up I did last night was purely a traversal speed test and did no actual rendering. I can actually create both systems (or at least primitive versions of each) if that would help people. That way, you can take a look at how each system works in code instead of just reading about it. At the end of the day, it'll be a majority rule. I'm not going to sit here and dictate how to this should be done, that's not my place.


That sounds like a good idea, if you think it's worthwhile, and as long as it doesn't take you too long just to set up a demo.
Coordinator
Dec 15, 2007 at 10:04 PM
Looking through your content processor was definitely a bit enlightening. I guess my one big question is this: I'm setting up a vertex buffer for terrain, bounding boxes for quad-tree sections, and then each quad-tree section has patches with different index buffers. I'm not sure how to do all of that in a single processor, or at least what kind of output to produce so that they all interact together. Terrain, QuadTree, and TerrainPatch are all dependent upon one another.
Dec 15, 2007 at 10:56 PM
Is there one class that kind of manages the rest of them? If so, let the content processor produce that type and just store the others as children. You really just need to come up with a way to take a set of instances of those three classes that make up a terrain and serialize/deserialize it. For instance, if your setup is something like:

public class Terrain
{
    private QuadTree quadTree;
    private List<TerrainPatch> patches;
}
 
public class QuadTree
{
    private List<QuadTree> children;
    private int terrainPatchIndex;
}
 
public class TerrainPatch
{
    private VertexBufferContent vertexBuffer;
}

then you could write:

Terrain:
public void Write(ContentWriter writer)
{
    quadTree.Write(writer);
    writer.WriteInt32(patches.Count);
    foreach(TerrainPatch patch in patches)
    {
        patch.Write(writer);
    }
}
 
QuadTree:
public void Write(ContentWriter writer)
{
    // Write relevant QuadTree node data
    writer.WriteInt32(children.Count);
    foreach(QuadTree node in children)
    {
        node.Write(writer);
    }
}
 
TerrainPatch:
public void Write(ContentWriter writer)
{
    writer.WriteObject<VertexBufferContent>(vertexBuffer);
    // Any other needed data
}

Obviously it would be a bit more complex. You just need to decide what data you need to store for each component, and serialize it in such a way that you can properly deserialize it at run-time. This means storing counts of the number of specific objects found in the binary file, and making sure you know what data type is coming next at all times when deserializing.
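For completeness, the matching read side might look like this. This is a hedged sketch only: the ContentTypeReader plumbing is real XNA content-pipeline API, but the Terrain/QuadTree/TerrainPatch members (QuadTree, Patches, Children, VertexBuffer) and constructors are assumptions mirroring the Write methods above, not actual engine code:

```csharp
// Illustrative sketch of deserialization mirroring the Write() methods.
// Member names and run-time class shapes are assumed for the example.
public class TerrainReader : ContentTypeReader<Terrain>
{
    protected override Terrain Read(ContentReader reader, Terrain existingInstance)
    {
        Terrain terrain = new Terrain();
        terrain.QuadTree = ReadQuadTree(reader);   // mirrors QuadTree.Write()
        int patchCount = reader.ReadInt32();       // the count written by Terrain.Write()
        for (int i = 0; i < patchCount; i++)
        {
            terrain.Patches.Add(ReadPatch(reader));
        }
        return terrain;
    }

    private QuadTree ReadQuadTree(ContentReader reader)
    {
        QuadTree node = new QuadTree();
        // ... read whatever node data QuadTree.Write() stored ...
        int childCount = reader.ReadInt32();       // recursion bottoms out at 0 children
        for (int i = 0; i < childCount; i++)
        {
            node.Children.Add(ReadQuadTree(reader));
        }
        return node;
    }

    private TerrainPatch ReadPatch(ContentReader reader)
    {
        TerrainPatch patch = new TerrainPatch();
        patch.VertexBuffer = reader.ReadObject<VertexBuffer>();
        return patch;
    }
}
```

The key invariant is that every Read call consumes exactly the bytes the corresponding Write produced, in the same order, which is why the explicit counts matter.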

Also, if you didn't notice in the existing processors, you can create an ExternalReference<> to other assets and store those in your binary file with ContentWriter.WriteExternalReference<>(). This will force the content pipeline to also build the other asset and load it when you load your type at run-time. You can just call ContentReader.ReadExternalReference<>() at run-time to get back a real instance of the asset. This is useful for binding to textures and shaders. For example:

Content Processor (taken from MaterialProcessor):
 
// Tell MSBuild to build the given effect file (*.fx) and give us an ExternalReference.  We explicitly tell it to use the "EffectProcessor" processor to build.
// If the effect file was already supposed to be built for this project, nothing additional is done, we just get a reference to the binary content (which may not be built yet!)
// If the effect file is not set to be built for the current project, it is added to the MSBuild queue and built along with all of the other content as if it had been included in the project.
ExternalReference<CompiledEffect> compiledEffect = context.BuildAsset<EffectContent, CompiledEffect>(new ExternalReference<EffectContent>(string.Format("{0}{1}.fx", Path.GetFullPath("Effects/"), input.effect)), "EffectProcessor");
 
 
Content Type Writer:
writer.WriteExternalReference<CompiledEffect>(compiledEffect);
 
 
Content Type Reader:
// Notice we have a real Effect instance now.
Effect effect = reader.ReadExternalReference<Effect>();

To see the power of the ExternalReference system, try excluding one of the effect files used in one of the materials from the project. It will be built anyway since the MaterialProcessor requests it!
Coordinator
Dec 16, 2007 at 1:35 AM
So the question is, which should I do first? Work on vegetation, geomipmapping, and random terrains? Or the content processor?
Dec 16, 2007 at 1:56 AM
On the Windows side of things, the content pipeline isn't a huge deal. We'll need full terrain loading outside of it anyway for the editor, since end users w/o Game Studio installed will not be able to use the content pipeline. (The XNA guys really shot themselves in the foot with that one!)
Coordinator
Dec 16, 2007 at 2:51 AM
Is there a way to specify files through the content pipeline? For example, terrain is generated from a heightmap image, but the terrain 'system' needs to know 3 textures for texturing, 3 normal maps for those textures, and another texture for terrain splatting. I could certainly generate the terrain vertex buffer without that information, but I would still be passing all the texture information in through code, rather than the pipeline, unless I can put those 7 textures into the pipeline somehow.
Dec 16, 2007 at 3:13 AM
Do you mean how to determine the file name for those textures, or how to process those into XNB files? For the first, you could use an XML-based text file to specify the heightmap texture and all supporting textures, then write a content importer/processor for that file. Or, you could specify texture names in the content processor options. For how to reference the textures, use ExternalReference<>'s, like I mentioned above.
Coordinator
Dec 16, 2007 at 3:17 AM
If I use an XML-based file, users could simply input the texture names in it by hand then? I mean, I understand how XML works, but that is what you're implying, as an alternative to putting it in the code? This way it could be done once in the processor, not every single time you compiled.
Dec 16, 2007 at 3:36 AM
It would be processed every time the XML file was changed (and the program was re-compiled). Ideally, the editor would spit out the XML file.
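As a rough illustration, such a terrain definition file might look like the following. Every element and attribute name here is invented for the example, not an agreed format, and the asset paths are placeholders:

```xml
<!-- Hypothetical terrain definition file; names and paths are illustrative only -->
<Terrain>
  <HeightMap>Terrain/heightmap01.png</HeightMap>
  <Layers>
    <Layer texture="Terrain/grass.png" normalMap="Terrain/grass_normal.png" />
    <Layer texture="Terrain/rock.png"  normalMap="Terrain/rock_normal.png" />
    <Layer texture="Terrain/sand.png"  normalMap="Terrain/sand_normal.png" />
  </Layers>
  <SplatMap>Terrain/splat.png</SplatMap>
</Terrain>
```

A custom importer/processor for this file would build the heightmap into terrain geometry and turn each texture path into an ExternalReference<>, as described above.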
Coordinator
Dec 16, 2007 at 4:16 AM
Currently each terrain patch has at least one index buffer, but may have multiple. Let's say we had 2 LODs per terrain patch, and on a 1024x1024 map we have 64 quad-tree leaves. This is 128 index buffers. Normally I'd let the 64 terrain patches each have 2 index buffers.

Ok, here I guess is where I'm confused. You import one model to the processor, you get one model output. I'm trying to input one terrain, and output one terrain object, about 100 quad-tree objects, 64 terrain patches, and 128 index buffers, all stored within different classes. How can this be done from one call?

Basically I see this way for models:
StaticModel blah = Content.Load<StaticModel>(string);

and I don't see how this would work for terrain:
200-300 objects = Content.Load<SomeNewType>(string? or XML? or....?)
Dec 16, 2007 at 4:48 AM
Isn't all of that contained within Terrain? That's the type you should be writing/reading.
Coordinator
Dec 16, 2007 at 4:53 AM
Well I guess you have a point. While there are 300 objects, they're all contained within terrain. :op

Wow, maybe I'll start by creating a simple content processor, and when I feel I have a deeper understanding of it I can integrate it with terrain.
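The simplest possible processor is only a few lines, which makes it a good place to start. A hedged sketch follows: the ContentProcessor<TInput, TOutput> base class and [ContentProcessor] attribute are real XNA content-pipeline API, but the pass-through behavior and class name are purely illustrative:

```csharp
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;

// Minimal do-nothing processor: takes imported texture content and returns
// it unchanged. A real terrain processor would instead take heightmap
// texture content in and produce a Terrain object out.
[ContentProcessor]
public class PassThroughProcessor : ContentProcessor<TextureContent, TextureContent>
{
    public override TextureContent Process(TextureContent input, ContentProcessorContext context)
    {
        return input;
    }
}
```

Once that builds and runs, swapping the output type to Terrain and adding a matching ContentTypeWriter/ContentTypeReader pair is an incremental step rather than a leap.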
Dec 16, 2007 at 12:34 PM

LordIkon wrote:
Shaw has already run tests showing performance hits when using accessors on Vectors or Matrices.

Yes, but this was regarding the physics engine, since it updates everything very often. The suggested solution to this is to create an Update method which will do it; this way the physics engine will not touch the properties.


LordIkon wrote:
Are we really going to use 'this' in hundreds of places? I'd probably rather go with member notation (mVariable). If they want to know whether it is a member or a local, won't IntelliSense show that?

Yes, we are. Any prefixing mechanism just makes things look ugly; have a look at the MS, Phillips, and IDesign guidelines, which encourage you not to do that. Also, using Reflector and looking at the MS assemblies, you will see that it's not common practice (it's not fully gone, neither is the _ notation, but it's getting less common).

LordIkon wrote:
I've been considering it, I'll have to learn quaternions a little better before I can reliably throw it into the engine.

I've found that they are much simpler to work with.

LordIkon wrote:
It is only a constant of that specific class, not any other camera.

That's because we only have one camera able to zoom; I really doubt that will be the case moving forward.

LordIkon wrote:
Then it would seem less like an engine default value, and more like a developer's constant value.

Well, they are developer constants :p. But there is no reason for the word 'Default' as they could be used in any context. I'm not bound to it, though. I'm also against using acronyms; it's so much nicer having the full word, and it will be easier for outside developers to find it in a lookup.

LordIkon wrote:
It's a patch specifically noted not to be applied. The entire light class is going away as soon as Shaw posts his light code. Basically the way I see it is, if the comments are lacking too much, the patch gets denied. Once everything is kosher, patch gets applied. The more time I spend commenting code I know I will be re-writing or throwing away, the less I get done.

Unless you write ridiculously slowly, like me, placing comments usually doesn't take up any time. Also, I usually find that if I can't come up with a proper comment, what I'm doing is often wrong.

LordIkon wrote:
QSModel? How did that get in this project? It is from the template. I should add that is not in my solution anywhere. Not sure how you got it.

It's because there is an extra space/tab in the patch file; create a new checkout and apply your patch and you should see it.

LordIkon wrote:
  • Set-only properties allow setting to do specific things, but require that callers 'get' directly from the public variable, for performance reasons. As stated earlier, there are stats showing the importance of not using accessors on any Vector or Matrix, unless it is something that will only happen occasionally, not each frame.

Bad design. If there is a property, the general expectation is that I can get the value, and it's generally acceptable that I cannot set it. If you need faster access, then create methods and pass by ref.
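The pattern being suggested might look something like this (a hedged sketch in the style of XNA's own by-reference math overloads, e.g. Matrix.Multiply(ref a, ref b, out result); the class and member names are illustrative):

```csharp
using Microsoft.Xna.Framework;

// Illustrative sketch: a readable property for occasional access, plus an
// "out"-parameter method for hot paths that avoids returning a struct copy.
public class EntitySketch
{
    private Matrix rotation = Matrix.Identity;

    // Hot path (e.g. per-frame physics): the caller's Matrix is written
    // directly, with no extra 64-byte struct copy on return.
    public void GetRotation(out Matrix result)
    {
        result = rotation;
    }

    // Convenience get-only property for code that reads occasionally.
    public Matrix Rotation
    {
        get { return rotation; }
    }
}
```

This keeps the public surface conventional (a gettable property) while still giving performance-critical callers a by-reference path.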

LordIkon wrote:
Not sure what you mean.

If I have a class named MyClass, then I should not have properties called MyClassMyProperty; having MyProperty is adequate since you are already accessing MyClass.