Graphics System

Nov 7, 2007 at 3:12 PM
While Sturm has apparently taken over the Common assembly and the messaging back-end, I've been working with the graphics system to give us the functionality we need for efficient, high-quality rendering. However, before I proceed too far, I want to get some feedback from the team on the design.

Materials

What I'm planning on doing is implementing a material system that more-or-less combines effects and textures into a single entity. There will be an XML-based material format. Let me give an example:

<material effect="test_effect">
	<sampler semantic="DIFFUSE_MAP">my_texture</sampler>
	<matrix semantic="WORLDVIEWPROJECTION" matrixID="WorldViewProjection"/>
	<float1_constant semantic="DiffuseCoeff">0.6f</float1_constant>
	<float4_constant semantic="SunColor">1.0f, 1.0f, 1.0f, 1.0f</float4_constant>
</material>

In this material, the assigned effect is test_effect (loaded as test_effect.xnb, compiled from test_effect.fx). The material also uses the texture map "my_texture", which is bound to the "DIFFUSE_MAP" effect semantic. Instead of a texture map file, a renderer-defined render target can also be specified, prepended with "$". This allows materials to work off reflection maps, refraction maps, dynamic environment maps, shadow maps, etc., which are handled by the renderer and created dynamically each frame. There are also two shader constants defined: one is a single float and the other is a 4D vector. Both are also assigned to specific effect semantics.

The matrix binding is the interesting one. The matrix node does not take a constant value; instead it has a "matrix ID", which is used per-frame to reference into the renderer and retrieve the appropriate matrix. The renderer will maintain internal matrices based on camera position/orientation and other factors, and the material can retrieve these matrices at render time. Matrix constants and renderer-defined float variables will also be implemented eventually.
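To make the matrix-ID idea concrete, here is a minimal sketch of how the per-frame binding could work. All of the type names here (MatrixID, MatrixBinding, IMatrixSource, MaterialMatrixBinder) are placeholders for illustration, not the real engine API; only the XNA Effect/EffectParameter calls are real.

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// All type names below are illustrative assumptions, not the engine's final API.
public enum MatrixID
{
    World,
    View,
    Projection,
    WorldViewProjection
}

// One <matrix .../> node from the material file: an effect semantic plus a renderer matrix ID.
public struct MatrixBinding
{
    public string Semantic;
    public MatrixID ID;
}

// Stand-in for however the renderer exposes its internally maintained matrices.
public interface IMatrixSource
{
    Matrix GetMatrix(MatrixID id);
}

public static class MaterialMatrixBinder
{
    // Called by the renderer each frame, just before drawing with the material's effect.
    public static void Apply(Effect effect, IList<MatrixBinding> bindings, IMatrixSource renderer)
    {
        foreach (MatrixBinding binding in bindings)
        {
            EffectParameter parameter = effect.Parameters.GetParameterBySemantic(binding.Semantic);
            if (parameter != null)
            {
                parameter.SetValue(renderer.GetMatrix(binding.ID));
            }
        }
    }
}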

The point of going with this format instead of a purely HLSL semantic-based approach is that we can re-use effects across materials. Also, different materials will be able to share textures and effects but may define different constants.

Eventually, we can integrate a material editor into the engine's editor that will emit the XML, and possibly HLSL code if we get that advanced.

Note that the specific XML format is just an example and is subject to change; it just serves to show how the system will work.

Models

Going forward, it's going to be advantageous to create new geometry containers instead of using the default Model class. At this point, I'm not yet considering key-framed animation, but I've been outlining a static model container. Basically, it will use the default XNA model importer, then use a different model processor to create a custom format with associated TypeReader and TypeWriter. Eventually, the static model format will be expanded to calculate collision mesh data for the physics processor, but right now I'm just concentrating on the graphics aspects.
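As a rough sketch of what that pipeline extension might look like (StaticModelContent, StaticModelProcessor, StaticModelWriter, and the runtime reader name are placeholder names, not the final ones):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Compiler;

// Placeholder design-time container for the custom static model format.
public class StaticModelContent
{
    // Vertex/index data, bounding info, material reference, etc. would live here.
}

// Takes the NodeContent produced by the stock X/FBX importer and converts it
// into our own static model container.
[ContentProcessor]
public class StaticModelProcessor : ContentProcessor<NodeContent, StaticModelContent>
{
    public override StaticModelContent Process(NodeContent input, ContentProcessorContext context)
    {
        StaticModelContent output = new StaticModelContent();
        // Walk the node hierarchy, bake transforms, and copy vertex/index data
        // into the custom container (details omitted in this sketch).
        return output;
    }
}

// Serializes the design-time container into the .xnb file.
[ContentTypeWriter]
public class StaticModelWriter : ContentTypeWriter<StaticModelContent>
{
    protected override void Write(ContentWriter output, StaticModelContent value)
    {
        // Write vertex buffers, index buffers, and the material name here.
    }

    public override string GetRuntimeReader(TargetPlatform targetPlatform)
    {
        // Fully qualified name of the run-time ContentTypeReader (assumed name).
        return "QuickStart.Graphics.StaticModelReader, QuickStart";
    }
}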


Render Queue

I have a basic render queue set up, but I'm still working out the details of the information it contains and how it's used. LordIkon, we're going to have to discuss in-depth how to integrate the terrain rendering into this, as the render queue is built around atomic renderable units, like models (i.e. entities in the world). Partial-visibility determination is not needed for these entities, so we need to figure out how to handle terrain in this framework, since terrain does depend on partial visibility.



Let me know what you think of this system so far. Opinions, critiques, etc.?
Nov 8, 2007 at 10:54 AM
Looks good. Flexible and easy to use.

Are you planning on having the engine create a material from the material settings contained in the .x/.fbx files by default, and allowing the user to override this with a material loaded from the XML files?

Also remember that the renderer needs to be able to handle multiple viewports, which may make culling more complicated (is that going to be done by the scene graph?).
Coordinator
Nov 8, 2007 at 1:51 PM
The scene manager, which will have the quad-tree, will need to be the one to (at the very least) determine which terrain chunks to render. I'll need to know more about your render queue to figure out what to do from there.
Nov 8, 2007 at 2:17 PM
I'm still working out the details of the model <-> material binding, and exactly how geometry is bound to render queue entries.

For materials, the content pipeline does not really provide a way to "override" anything with user options. If this changes in 2.0, then great; otherwise it may be necessary to either (a) load the material and model separately and bind them at run-time, or (b) create a custom intermediate format that specifies both the model and the material, which the content pipeline would then process, grabbing references to both at build-time.
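For option (b), the intermediate file could be something as simple as this (element names are purely hypothetical, just to illustrate the idea):

<model_instance>
	<geometry>my_model</geometry>
	<material>my_material</material>
</model_instance>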

Right now, the render queue entries hold a reference to an IRenderable interface. This interface contains one member, Render(), which makes the proper GraphicsDevice calls to render the geometry. How you want to handle terrain is up to you, but I was thinking of having each chunk be a separate IRenderable entity, so the render queue will contain all visible chunks and go from there.
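For reference, here is a minimal sketch of the shape I have in mind; the names and members aren't final:

using System.Collections.Generic;

// Sketch only; the actual interface and queue will carry more information.
public interface IRenderable
{
    // Makes the GraphicsDevice calls needed to draw this piece of geometry
    // (a model, or a single terrain chunk).
    void Render();
}

public class RenderQueue
{
    private List<IRenderable> entries = new List<IRenderable>();

    // The scene manager/quad-tree adds whatever it determines is visible this frame.
    public void Enqueue(IRenderable renderable)
    {
        entries.Add(renderable);
    }

    // The renderer drains the queue once per frame, in whatever order it decides
    // is most efficient.
    public void Flush()
    {
        foreach (IRenderable renderable in entries)
        {
            renderable.Render();
        }
        entries.Clear();
    }
}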

Nov 8, 2007 at 2:34 PM
You do mean IDrawable?
Nov 8, 2007 at 2:44 PM
No....?
Nov 8, 2007 at 3:02 PM
IDrawable is defined in XNA and provides Draw(GameTime), which I think we should use instead of a custom one, which I guess IRenderable is?
Coordinator
Nov 8, 2007 at 3:20 PM
The quad-tree component would have to go through its usual process of checking the frustum against each terrain chunk so it could determine which chunks will be rendered. This is currently done during draw, so I guess I'd have the quad-tree communicate to the render queue which chunks were going to be rendered, or it would simply place them in the queue.

I've never worked with a render queue before, so bear with me on this one.
Nov 8, 2007 at 3:54 PM
In other projects, the job of the render queue was to optimize the rendering and the render data sent to the graphics card, not to filter which entities to render.
Coordinator
Nov 8, 2007 at 4:57 PM
If that is the case then I would think the quad-tree component would determine which patches were going to need to be drawn and then the render queue would do what it needed with those patches.
Nov 8, 2007 at 5:13 PM
IRenderable is a custom interface, and I'm sticking with a custom interface because I'm going to add things to it. Otherwise, the inheritance hierarchy would become even more complicated: IDrawable -> IQuickStartDrawable -> <client classes>.

The quad-tree/scene-manager still needs to do visibility calculations for everything, terrain and entities. The render queue simply takes what needs to be drawn and renders it in the most efficient order possible, taking into account alpha-blending, z-passes, etc.
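To illustrate the kind of ordering I mean, here's a rough comparer sketch; the sort keys (AlphaBlended, MaterialID, DepthFromCamera) are just assumptions for illustration:

using System.Collections.Generic;

// Hypothetical sort keys carried by each queue entry; field names are illustrative only.
public struct RenderSortKey
{
    public bool AlphaBlended;
    public int MaterialID;
    public float DepthFromCamera;
}

public class RenderSortComparer : IComparer<RenderSortKey>
{
    public int Compare(RenderSortKey a, RenderSortKey b)
    {
        // Opaque geometry is drawn first, alpha-blended geometry last.
        if (a.AlphaBlended != b.AlphaBlended)
        {
            return a.AlphaBlended ? 1 : -1;
        }

        if (!a.AlphaBlended)
        {
            // Opaque: batch by material to minimize effect/texture state changes.
            return a.MaterialID.CompareTo(b.MaterialID);
        }

        // Alpha-blended: draw back-to-front so blending composites correctly.
        return b.DepthFromCamera.CompareTo(a.DepthFromCamera);
    }
}
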
Nov 8, 2007 at 5:22 PM
You could just let IRenderable inherit from IDrawable; this would introduce a known interface for devs already familiar with XNA.
Nov 8, 2007 at 5:45 PM
The interface is going to be different enough that I don't think it will matter. The only similarity would be the Draw() method, and even then it would be passed a zero GameTime, which could be confusing.
Nov 8, 2007 at 6:45 PM
On a side note, I'm also considering moving all of the graphics stuff to the Common assembly. It's becoming increasingly difficult to clearly separate Common and Graphics, and it's kind of hard to imagine a case where a client is going to create their own renderer based on the interfaces we define.
Nov 8, 2007 at 7:08 PM
Woohoo! Though they are different, keeping them separated is no longer meaningful :) I fully support that decision :)
Nov 8, 2007 at 7:12 PM
I think it would add much more value having a Draw rather than a Render method. Devs are already used to the XNA model, so I think it would be better from a consistency point of view to have Draw. Also, I don't think we will just pass zero to Draw; there might be situations where the renderer might decide to use a less complex draw call if rendering has been taking too long. And since the shaders are only updated in Draw, there might be time considerations for effects?
Nov 8, 2007 at 10:26 PM
Well, the renderer will handle timing for shader variables.

I'll change it to Draw(GameTime), though I think the renderer itself should handle the quality downgrades, not the individual renderable children.
Nov 9, 2007 at 7:35 AM
I agree that the renderer should do as much as possible. Though there might be rendering issues that the renderer cannot solve by itself, and which have to be handled by the entity.
Nov 9, 2007 at 10:39 PM
Okay, here's what I have so far:

Materials are defined as XML files, and essentially specify an effect file and how to bind every shader parameter, be it textures from disk, textures from the renderer (render targets), constants, or renderer-controlled variables (transform matrices, view position, etc.). Eventually, the material XML schema will be refined to allow users to specify which techniques to use for the different vertex/pixel shader versions supported by the hardware.

The materials can either be loaded at run-time directly from XML, or built at compile-time by the content pipeline.
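The run-time path is just System.Xml; roughly something like this (the Material class and its members here are stand-ins, not the final API):

using System.Xml;

// Minimal stand-in for the engine's material class; the real class will hold more.
public class Material
{
    public string EffectName;

    public Material(string effectName) { EffectName = effectName; }

    public void AddTextureBinding(string semantic, string textureName) { /* store binding */ }
    public void AddVariableMatrixBinding(string semantic, string varID) { /* store binding */ }
}

public static class MaterialLoader
{
    // Loads a material definition directly from its XML file at run-time.
    public static Material FromXml(string path)
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(path);

        XmlElement root = doc.DocumentElement;    // <material effect="...">
        Material material = new Material(root.GetAttribute("effect"));

        foreach (XmlNode node in root.ChildNodes)
        {
            XmlElement element = node as XmlElement;
            if (element == null)
            {
                continue;
            }

            switch (element.Name)
            {
                case "sampler":
                    material.AddTextureBinding(element.GetAttribute("semantic"), element.InnerText);
                    break;
                case "variable_matrix":
                    material.AddVariableMatrixBinding(element.GetAttribute("semantic"), element.GetAttribute("varID"));
                    break;
                // float1_constant, float4_constant, etc. would be handled the same way.
            }
        }

        return material;
    }
}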

The big open question now is how to bind materials to models. Currently, you load materials and models separately, and then bind them together at run-time. This works, but it seems a little "hack"-ish. However, I cannot find a way to bind the material at compile-time. If I knew which material to bind at compile-time, it would be fine, but the content pipeline does not let you specify options/parameters (so you cannot say, compile mymodel.fbx with material "brick"). So, in order to bind the material at compile-time, the model file (.fbx/.x) would need to be able to specify the material. The only way I can see this working is if in the modeling program you name your materials the same as the material file names in the engine. Then, I could bind at compile-time and hope the modeler doesn't screw up the name. If it can't find the material, then it could attempt to create a material based on the model's material properties, but I see this causing a lot of pain.

Any ideas on what artists would find easy to work with?


Teaser: Renderer Teaser Pic :)

The materials shown here are all data-driven, nothing is hard-coded, except for the model positions.
Coordinator
Nov 9, 2007 at 11:14 PM
Edited Nov 9, 2007 at 11:15 PM
I actually brought this topic up on the forums once, because I wanted to attach normal-mapped textures to models without having to hardcode it. Shawn said it would have to be defined within the modeler, or you'd have to find a way to attach it as a parameter within a custom content pipeline processor.

By the way, that picture looks sweet. I see you've implemented point lighting. I had created a point light class but never got around to working on it.
Nov 9, 2007 at 11:47 PM
As for exporting, that should be no problem; you can write custom exporters for most modelling applications, so it's simply a question of adding the needed information.

Of course, this would mean that we need custom importers in the engine as well :)
Nov 10, 2007 at 12:51 AM
I'm passingly familiar with the Maya, Max, and XSI import/export APIs, but I really want to avoid that if at all possible. Going that route, users are no longer free to use whatever modeler they want; they are stuck with whatever modeler we write plugins for.

LordIkon, the problem isn't so much attaching the textures in the content pipeline; I can do that easily. The problem is knowing which textures to attach. That's why I'm thinking that for now, in the modeler, users will create a material with some name, let's say "grass." Then, in the content processor, that material name will be read in, and the processor will look for a QuickStart material named "grass." If it finds it, it will bind the material to the model. If not, it can attempt to read the material information from the model and create a default material, along with a warning message. The worry for me is that artists won't be using the same material names in the modeler as they will in-game.
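In content processor terms, the name-matching step could look roughly like this (MeshContent, GeometryContent, and MaterialContent are the real pipeline types; QuickStartMaterialExists and the fallback name are placeholders):

using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;

public static class MaterialNameResolver
{
    // Reads the material name the artist assigned in the modeler and checks it
    // against the engine's QuickStart materials.
    public static string Resolve(MeshContent mesh, ContentProcessorContext context)
    {
        foreach (GeometryContent geometry in mesh.Geometry)
        {
            MaterialContent modelerMaterial = geometry.Material;
            if (modelerMaterial == null || string.IsNullOrEmpty(modelerMaterial.Name))
            {
                continue;
            }

            // e.g. the artist named the material "grass" in Maya/Max.
            if (QuickStartMaterialExists(modelerMaterial.Name))
            {
                return modelerMaterial.Name;
            }

            context.Logger.LogWarning(null, mesh.Identity,
                "No QuickStart material named '{0}'; a default material will be generated.",
                modelerMaterial.Name);
        }

        return "Default";
    }

    private static bool QuickStartMaterialExists(string name)
    {
        // Placeholder: would check the project's material XML files on disk.
        return false;
    }
}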

The XML format is working out nicely. The grass material in the images is just defined as follows:
<material effect="NormalMap">
	<sampler semantic="DIFFUSE_MAP">grass</sampler>
	<sampler semantic="NORMAL_MAP">grassNormal</sampler>
	<variable_matrix semantic="WORLDVIEWPROJECTION" varID="WorldViewProjection"/>
	<variable_matrix semantic="WORLD" varID="World"/>
</material>

which binds to the following shader inputs:

float4x4 worldViewProjection	: WORLDVIEWPROJECTION;
float4x4 world					: WORLD;
 
texture diffuseMap				: DIFFUSE_MAP;
texture normalMap				: NORMAL_MAP;

Code-wise, the client doesn't have to do anything but load the material. The two matrices are automatically bound each frame with the proper matrices derived from the camera, and the two textures are automatically loaded from disk and bound to the shader.
Coordinator
Nov 10, 2007 at 3:29 AM
If we support an XNA-compatible format, like .X or .FBX, then we could have the model contain either the texture or at least the name of the texture. If we don't have the artists map the textures themselves in the model, how would they know how a texture would map to their model just by supplying the name? If the normal map were simply a match for a regular texture then it would be easy, but sometimes a normal map is just there to add detail to a model. Supporting a format like .X or .FBX wouldn't hurt the user, because there are free exporters out there that XNA already recommends, like MilkShape. And users of higher-end modelers like 3dsMax would be accommodated as well.

I do like the XML setup as well. Could we auto-generate that XML in any way?
Nov 10, 2007 at 3:54 AM
Of course we're going to be supporting .X and .FBX. The problem is how to represent the material in the modeler and get that information into the content pipeline processor. Imagine you're modeling in Maya. You can apply textures and set colors, but you can't apply an HLSL shader to the model within Maya. (At least not before Maya 2008; I haven't had a chance to look at that yet.) So, when the model is imported and sent to the content processor, what happens? If the FBX exporter did its job, then you'll at least have access to the texture file name, as well as the color constants (ambient, diffuse, etc.) of the model, but how does this help you bind one of our engine material definitions to it? Actually, I'm not sure whether the XNA content pipeline exposes all of that in the processor stage, but I can't imagine it wouldn't.

Even if we wrote custom Maya exporters, it still wouldn't help, as there's no way to select from the QuickStart materials within Maya and give the artist the ability to see them in real time before exporting.

For now, I can't think of any good ways of doing this besides just looking at the material name in the processor and trying to find a material with a matching name. Going forward, we can eventually put this into the editor. The user can export geometry and load it into the editor, then use a material browser to select a material and apply it to the model. Then, some sort of intermediate format could be saved out, like another XML file that specifies a model and a material, which the content processor will use to produce the final model loadable by the engine at run-time.

Sure, you can auto-generate the XML any way you like... what did you have in mind? I'd like to support a material editor in the engine editor.
Coordinator
Nov 10, 2007 at 4:09 AM
Well, when I asked Shawn, it sounded like they applied the normal maps to the model and the custom content processor set that as an effect parameter. This was when I asked him how they attached the normal maps to the model in the Normal Mapping Sample, with the model of the lizard on the rock.

So if the artists are applying their own normal maps, it seems like our custom processor will be able to detect this file and we could pass that to the XML generator.
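For example, something like this inside the processor might be enough to pull out the artist-applied textures (the slot names such as "Texture" or "NormalMap" depend on the importer/exporter, so treat them as assumptions):

using System.Collections.Generic;
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;

public static class TextureDetection
{
    // Lists every texture the artist applied to a piece of geometry in the modeler;
    // these filenames could then be fed to an XML generator for a default material.
    public static void ListTextures(GeometryContent geometry, ContentProcessorContext context)
    {
        MaterialContent material = geometry.Material;
        if (material == null)
        {
            return;
        }

        foreach (KeyValuePair<string, ExternalReference<TextureContent>> entry in material.Textures)
        {
            context.Logger.LogImportantMessage("Texture slot '{0}' -> {1}", entry.Key, entry.Value.Filename);
        }
    }
}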

I haven't thought about auto-generating the XML in detail; I just believe it should be either completely automated or very simple for the user.