Graphics "Teaser" Patch #559

Dec 11, 2007 at 10:42 PM
Edited Dec 11, 2007 at 10:43 PM
I just uploaded a patch for the graphics system to show off some of the new features I'm working on, just to let you all know I'm still alive. :) If you're interested, just apply it against the latest change set and you should be good to go.

The next step is to finish the material system refit and create a true shader environment so we can start creating pluggable shaders instead of having to rewrite entire shaders for every material type. Once that's done, I'll upload the complete change set and we should have a good graphics base. I may upload another patch or two before then as checkpoints.

Please note that the patch is not intended to be committed to the source tree as a change set. The code is still very much a work in progress and is just a demo of the new, upcoming features. I'm also interested in feedback on the performance of this code (in Release mode).

New Features:

Also, one point of interest for others doing graphics: I found out you need to disable MSAA for PIX to be able to debug shaders. Just FYI.

Feel free to play around with the spot light type, though currently only one is supported at a time. I'd be especially interested in seeing how everyone's graphics cards handle 1024x1024 and 2048x2048 shadow maps with blur enabled. The relevant settings are in QuickStartSampleGame.cs; just do a search for SpotLight (a rough sketch of what to look for is below). If you do not have SM 3.0 hardware, you can safely change the techniques in VarianceShadowMap.fx to vs20 and ps20, but you will need to run in Release mode. Debug mode overflows the instruction slots in SM 2.0.
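For reference, the settings end up looking something like this (the type and property names here are illustrative guesses, not necessarily what's in the patch; search the file for the real ones):

    // Illustrative only -- the real names in QuickStartSampleGame.cs may differ.
    SpotLight spotLight = new SpotLight();
    spotLight.ShadowMapSize = 2048;      // try 1024 or 512 if performance suffers
    spotLight.ShadowBlurEnabled = true;  // toggles the blur pass over the shadow map
    sceneManager.AddLight(spotLight);    // only one spot light is supported right now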


My baseline is around 100 FPS with a shadow map size of 2048x2048 and blur enabled, in Release mode with the Release DirectX runtime.
Coordinator
Dec 12, 2007 at 12:10 AM
Dude, that sounds awesome. I can't wait to get the code in front of me and get this tested. I'll post my framerate stats. I haven't even seen it yet but can confidently say good job.
Coordinator
Dec 12, 2007 at 4:27 AM
My lowest FPS is around 75, and it flickers between that and 130 fps. Looks very good, good job.
Coordinator
Dec 12, 2007 at 4:37 AM
Edited Dec 12, 2007 at 4:37 AM
Not sure if you're aware of this, but if you enable the camera movement around the objects in the scene, the ship actually rotates (or appears to), and does so at a different rate than the half-box. The problem is that neither of them is actually rotating; we're simply moving the camera, which means only the view and projection matrices are changing (or should be). If you look in the scene manager there are a few commented lines for camera movement; if you uncomment moveupdown and strafe you can see what I mean.
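In other words (a rough XNA sketch, not the actual scene manager code), strafing should only rebuild the camera matrices each frame:

    // Assumes using Microsoft.Xna.Framework. Only the camera changes here;
    // entity world matrices are never touched, so nothing should appear to
    // rotate unless something else (e.g. Update) is rotating it.
    Matrix BuildViewAfterStrafe(Vector3 cameraPosition, Vector3 cameraForward,
                                float strafeAmount, float aspectRatio, out Matrix projection)
    {
        cameraPosition += Vector3.Cross(cameraForward, Vector3.Up) * strafeAmount;
        projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, aspectRatio, 0.5f, 1000.0f);
        return Matrix.CreateLookAt(cameraPosition, cameraPosition + cameraForward, Vector3.Up);
    }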
Dec 12, 2007 at 4:40 AM
I'll take a look at it, but if you merged all of the code, the ship should be rotating on the Y axis. I added the rotation code in QuickStartSampleGame.Update since I didn't want to mess up your code in the scene manager. Is this what you're referring to, or are you referring to a different issue?
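The rotation is just a simple spin in Update, roughly like this (from memory, with placeholder names, not a verbatim copy of the patch):

    // shipEntity/shipYaw/shipPosition are placeholders, not the real names.
    protected override void Update(GameTime gameTime)
    {
        float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        shipYaw += MathHelper.ToRadians(30.0f) * elapsed;              // constant spin about Y
        shipEntity.World = Matrix.CreateRotationY(shipYaw)
                         * Matrix.CreateTranslation(shipPosition);     // rotate, then place
        base.Update(gameTime);
    }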
Coordinator
Dec 12, 2007 at 5:05 AM
Never mind, I'm smoking crack. Seriously, I'm just being stupid or something, not really smoking crack though. I forgot the ship was rotating as well. All is good.
Coordinator
Dec 12, 2007 at 5:09 AM
I know you told me this once before, but I forgot since I was only using VS Express. How do you run in DirectX Debug while running your program?
Dec 12, 2007 at 5:22 AM
I was hoping the Visual Studio Pro integration would allow you to retrieve the debug messages in the Output Window like you can with native code debugging, but this is apparently not the case. I'm not sure if the managed code debugger just doesn't support it, or if the XNA team explicitly disabled it. I really miss the native world. :(

To get the debug output, go to the DirectX Control Panel (Start > Programs > Microsoft DirectX SDK > DirectX Utilities > DirectX Control Panel) and select "Use Debug Version of Direct3D 9." Put the debug output level slider somewhere in the middle, and you're all set. You'll need a program like DebugView (http://www.microsoft.com/technet/sysinternals/utilities/debugview.mspx) to read the output. Open DebugView, then run the program.
Dec 12, 2007 at 2:52 PM
Sounds awesome. I'll post my framerates later, when I'm at my dev machine.
Dec 12, 2007 at 3:22 PM
I get 7 fps at 1024x1024 with blur enabled on my laptop. It goes up to 30 fps with multisampling disabled, and 19 fps when fullscreen.
1 fps with a 2048x2048 shadow map.

Intel Core 2 Duo T7250 2GHz
2 GB DDR2 (can't remember the speed)
nVidia 8400 128MB
120GB 5200rpm HDD
Vista Home Premium

I didn't expect great performance, but it does run most new games with med-high graphics quality perfectly well. Interestingly, my performance was exactly the same when I turned up the screen resolution.
Dec 12, 2007 at 4:30 PM
Edited Dec 12, 2007 at 4:46 PM

Aphid wrote:
I get 7 fps at 1024x1024 with blur enabled on my laptop. It goes up to 30 fps with multisampling disabled, and 19 fps when fullscreen.
1 fps with a 2048x2048 shadow map.

Intel Core 2 Duo T7250 2GHz
2 GB DDR2 (can't remember the speed)
nVidia 8400 128MB
120GB 5200rpm HDD
Vista Home Premium

I didn't expect great performance, but it does run most new games with med-high graphics quality perfectly well. Interestingly, my performance was exactly the same when I turned up the screen resolution.


Sounds to me like the video memory is the limit then. If it's the same at a higher resolution, the card must have ample performance. Do you know the memory bus width?

I get between 26 and 59 fps on mine with default settings. It's meant to all be grass textured, right? I have no idea why it's so variable, and there's no pattern; it's either 26 or 59, nothing in between.
Dec 12, 2007 at 5:16 PM
Thanks for the input, everyone. I haven't profiled it yet and I know I can squeeze more performance out of it, but it looks like I'll still need to come up with something else for mid-to-low end cards.

About the frame rate fluctuation, is that with LordIkon's frame rate measurement code or a tool like Fraps? My frame rate doesn't vary more than 1 or 2 frames per second with Fraps, but LordIkon's code seems to fluctuate like that when it's not perfectly synced to the monitor's refresh rate, averaging around 103 and jumping to over 1000 a couple of times a second. Is this a known issue with it, LordIkon? I can take a look at it.

Aphid, I'd be interested in seeing what the frame rate is at 512x512 (shadow map size), with and without blur enabled. I would really expect an 8400 to do a bit better than that. It doesn't seem like you're fill-rate limited, and there's nowhere near enough geometry in the scene for you to be vertex-processing limited.
Dec 12, 2007 at 5:42 PM
Well, I am running:
AMD Athlon XP 1500+ 1300 MHz
756 MB Memory
nVidia GeForce 6200 128 MB
80GB HDD
Windows XP Home Edition

I know it is pretty low end but I'm poor. : )

Anyway, I got between 10 and 30 fps without changing anything. When I took off the shadow map it rose to over 30, but it was very jumpy.
Coordinator
Dec 12, 2007 at 5:46 PM
The framerate issue is strange, I agree, but without a fixed timestep I've seen this fluctuation in many XNA programs. A fixed timestep, however, seems to work nicely (the settings I mean are in the snippet below).

Honestly I don't know, we should look into it.
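For reference, the fixed-timestep setting is just the standard XNA Game/GraphicsDeviceManager properties, usually set in the game's constructor ("graphics" is the template's GraphicsDeviceManager field):

    this.IsFixedTimeStep = true;                     // Update runs at a fixed rate (60 Hz by default)
    graphics.SynchronizeWithVerticalRetrace = true;  // vsync; set both to false to measure raw frame rate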
Dec 12, 2007 at 6:29 PM
I think it's the way you calculate it. You're trying to do it every frame by taking the elapsed time for each frame and deriving a frame rate from that. At higher frequencies, that's prone to error. Take a look at the frame-rate code I put into the prototype. It was stable well into the thousands of frames per second.
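The basic idea is to count frames over a whole second instead of inverting each frame's elapsed time. A rough sketch (not the exact prototype code):

    int frameCount;
    float elapsedSeconds;
    int framesPerSecond;

    // Call once per drawn frame (e.g. from Draw).
    void UpdateFps(GameTime gameTime)
    {
        frameCount++;
        elapsedSeconds += (float)gameTime.ElapsedGameTime.TotalSeconds;
        if (elapsedSeconds >= 1.0f)
        {
            framesPerSecond = frameCount;  // stable even at thousands of fps
            frameCount = 0;
            elapsedSeconds -= 1.0f;
        }
    }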
Dec 12, 2007 at 7:27 PM
Is there a convenient constant for shadow map, resolution, and blur?

The 8400, is that nVidia's new low range? I was surprised my 7600 performed better, as I thought the unified processors would give it a nice boost, and being next-gen I expected it to equal, or come close to, the 7600. How much better is the 8500? In case you can't tell, I'm shopping for a new graphics card :)
Dec 12, 2007 at 8:18 PM
What do you mean by convenient constant for shadow map, resolution, and blur? A QSConstants entry? I still believe the graphics settings should stay out of QSConstants.

The 8400 does seem to be nVidia's new low-end card. Aphid's card must be a special laptop version; looking on NewEgg I don't see any 8400 consumer cards with less than 256 MB of VRAM. Compared to the 8800 series, they seem to be clocked significantly lower in both core and memory speeds, and they only have a 64-bit memory interface. Thinking about it some more, I bet it's the MSAA that's killing him, while Taliesen's 6200 can keep up since it just doesn't support the higher multi-sampling settings.
Coordinator
Dec 12, 2007 at 9:13 PM
Edited Dec 12, 2007 at 9:16 PM
If you're shopping for a new card, and don't want to pay out the a**, then I'd get the 8800 GTS with 512 MB of RAM; it is much faster than the older 8800 GTS cards with 320/640 MB of RAM. The card just came out a few days ago, I think.

Here is a list of their desktop cards:
http://www.nvidia.com/object/graphics_cards_buy_now.html
Dec 13, 2007 at 1:33 AM
If you don't mind waiting a bit longer, I would just wait for the 9-series DirectX 10.1 cards. Just buy a mid-range 9-series card a month or two after they come out and you'll have a good DirectX 10.1 card. That is, if you care about trying out 10.1 (outside of XNA/C# of course).
Coordinator
Dec 13, 2007 at 1:42 AM
I have heard rumors that some or all DX10 cards will be able to do DX10.1 with a firmware upgrade, but I am skeptical; I'd say it depends on the differences between 10 and 10.1. If it's just some bug fixes then it's a possibility, but if there are actual new features, possibly not.
Dec 13, 2007 at 1:50 AM
The problem is that if there is even one new feature in 10.1 that a 10.0 card doesn't support, then that card can't be 10.1 compatible. There are no "capability queries" like in 9.
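In the 9 world you can interrogate the device at runtime and fall back accordingly, something like this XNA-era sketch (the exact property names are from memory, so treat them as an assumption); under 10/10.1 a card either supports the whole feature set for that version or it doesn't:

    // D3D9-style capability query through XNA (double-check names against the docs).
    GraphicsDeviceCapabilities caps =
        GraphicsAdapter.DefaultAdapter.GetCapabilities(DeviceType.Hardware);
    bool useSm3Path = caps.PixelShaderVersion.Major >= 3;  // else fall back to the SM 2.0 techniques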
Dec 13, 2007 at 6:08 AM

shawmishrak wrote:
I was hoping the Visual Studio Pro integration would allow you to retrieve the debug messages in the Output Window like you can with native code debugging, but this is apparently not the case.

What do you mean? The message system does exactly that, though that's not visible when running in Release mode; you have to go in and add the DEBUG symbol to the Game_Windows project when set to Release (that's under the Build tab in the properties of the project). Then you get debug output as well. The Debug.Write... calls are all under conditional compilation, so they won't be compiled in unless that flag is set (look at ConditionalAttribute in the help).
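In other words, the debug writes compile away unless DEBUG is defined, along these lines (simplified; not the engine's actual message class):

    using System.Diagnostics;

    public static class EngineLog
    {
        // Calls to this method are removed entirely when DEBUG isn't defined.
        [Conditional("DEBUG")]
        public static void Write(string message)
        {
            Debug.WriteLine(message);
        }
    }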
Dec 13, 2007 at 6:30 AM
It's not picking up whatever stream the DirectX debug runtime uses to output messages. It's just like in the Express versions, where the Output Window picks up Debug.Write calls from the program itself but not from external sources like the DirectX runtime. I'm not sure if it's a limitation of the managed debugger or not, but it's not picking up debug messages from native libraries.
Dec 13, 2007 at 7:30 AM
Ah, OK. Try opening Tools -> Options -> Debugging; there you will find a Native option, try enabling that (not sure it is available in Express). I haven't tried it with DX though, so it might not help.
Coordinator
Dec 13, 2007 at 5:24 PM


shawmishrak wrote:
I think it's the way you calculate it. You're trying to do it every frame by taking the elapsed time for each frame and deriving a frame rate from that. At higher frequencies, that's prone to error. Take a look at the frame-rate code I put into the prototype. It was stable well into the thousands of frames per second.


I'll try to include the FPS counter fix with my terrain changes 'patch'.
Coordinator
Dec 15, 2007 at 8:21 AM
Edited Dec 15, 2007 at 8:23 AM
If we set up an LOD system for entities and import them at different LODs (which might be difficult), I'm guessing we could get better performance with shadow mapping, as it requires an entire extra pass. Lower-LOD models would be great if we were able to sort them by distance during drawing. Just throwing that thought out there. I know of engines that support it, but I'm not sure how feasible it is for one of us to write a content processor for LOD of a model.

Optionally, I guess we could give the developers/artists the ability to simply make several models for one entity, where each model is a different LOD. However, that requires the art team to set up their own consistent LOD settings when lowering the poly count of their models, and it takes more time than a processor doing it.

Finally, if that were the case, the draw-order system would need to say "at a medium distance, draw the medium LOD (if there is one), otherwise draw the next highest/lowest" — something like the sketch below.
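A rough sketch of the fallback idea (the Entity fields, method name, and distances are made up purely for illustration):

    // Pick the LOD that matches the camera distance, falling back to whatever
    // levels the artist actually provided.
    Model SelectLod(Entity entity, Vector3 cameraPosition)
    {
        float distance = Vector3.Distance(entity.Position, cameraPosition);

        if (distance < 50f)
            return entity.HighLod ?? entity.MediumLod ?? entity.LowLod;
        if (distance < 200f)
            return entity.MediumLod ?? entity.HighLod ?? entity.LowLod;
        return entity.LowLod ?? entity.MediumLod ?? entity.HighLod;
    }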
Dec 15, 2007 at 10:21 AM
You will get the best rendering experience if the modellers create the models at different LODs instead of writing some logic to do this at import. This does mean more work for the modellers, but I've worked with modellers who had no problem with that, and they could produce good results, far better than any automated process could.
Coordinator
Dec 15, 2007 at 11:27 PM
I agree. If we had to make our own, our definition of a medium LOD might be different from what developers would want; letting the artists do it gives them 100% control.
Dec 16, 2007 at 7:00 PM
I don't really get graphics, but why not both? If the model has multiple levels, use them; otherwise the engine makes a best guess? Surely the performance gain warrants the extra code?
Coordinator
Dec 16, 2007 at 7:04 PM
Yeah, miscommunication there. We would be using all the LODs, depending on distance from the camera.