Performance issues (framerate=2)

Oct 30, 2007 at 11:09 AM

Just downloaded this great-sounding engine. I compiled it with the release configuration in Visual Studio 2005 Express Edition.
Alas, the performance is poor for some reason; I only get a framerate of 2 (!!) running version 0.177 right out of the package.
I'm sure it's my setup. My platform is Windows Vista on a four-year-old Dell Latitude with an NVidia GeForce FX graphics adapter.
Does anyone else have performance issues with this engine on a similar platform? Fixes?

Keep up the good work.

Oct 30, 2007 at 1:56 PM
It does seem a little odd. Could you give us a little more detailed information if possible?
  • Operating System:
  • Motherboard:
  • CPU:
  • Graphics Card:
  • Memory:
That would probably help us reproduce the problem.
Oct 30, 2007 at 2:43 PM
Edited Oct 30, 2007 at 2:46 PM
A 4-year-old laptop is going to have trouble with many 3D engines. Latitudes especially are known for having very basic video setups. As a comparison, my 3-year-old computer is getting 30-45fps.

Like sturm above said, could we get your specs?

You could try following the tutorial. The tutorial explains how to disable all the components. Then you could see what the framerate is as you add components. Also, try lowering the LOD of the terrain to LOD.Low if you haven't already.

The components that are hardest on performance, in order: Water (GPU), Terrain (GPU), Weather (CPU/GPU); everything else is negligible.

Could you give us your framerate on common 3D games that you might own, like Quake 3 or 4, Doom 3, Unreal Tournament, Elder Scrolls 4, F.E.A.R., Half-Life 2, Team Fortress 2, etc.?
Oct 30, 2007 at 3:28 PM

Thanks for the replies.

Here's my equipment:
* Operating System: Vista 32 bit
* Motherboard: Don't know (the machine is Dell Latitude D800 laptop)
* CPU: Intel Pentium M 2.1 GHz
* Graphics Card: NVidia Geforce FX Go5650
* Gfx Memory: 524 MB
* Memory: 2 GB

I'm afraid this is a work laptop and thus I don't have games on it. So, I can't tell you reference game framerates.

I tried disabling the components in the sample app. With everything disabled, the framerate is 30.

Adding LoadSkies() back drops the framerate to 6.

Adding the rest drops it to 2. So it is the skies that have the most effect. Maybe my NVidia Vista drivers could use an upgrade... I'll try to find newer drivers and let you know what happens.

Oct 30, 2007 at 3:40 PM
Actually, the NVidia Vista driver package for my gfx card doesn't want to install anything. It's about the same date as the drivers that came with Vista anyhow.
Also, Dell doesn't have drivers for the D800 on Vista.

I'm downloading 3DMark06, so I should be able to get some kind of 3D performance reference with that... I'll keep you informed.
Oct 30, 2007 at 4:09 PM
Were skies the only 3D component you initialized before the drop from 30 to 6?

I'm surprised that running only a single graphical component (the HUD component) still resulted in only 30fps. Although I will say that a 5-series GeForce is (I believe) one of the oldest cards supported by XNA, and it won't do too well even in ideal conditions.

I might add that I've noticed a jumpy-framerate issue on my older computer; this may be affecting you as well. I'm not sure if your card will be compatible with 3DMark06, so you may want to try 3DMark05 just in case. I know that my old laptop couldn't run some of the newer 3DMarks.
Oct 30, 2007 at 5:41 PM
3DMark06 shows clearly that this machine is quite weak: it gave a framerate of ZERO in all tests.

So, nothing wrong with the XNA QuickStart Engine! =)

Thank you LordIkon and Sturm for being very cooperative.

I'll start learning about the engine now and perhaps could do something of value to the project.
I'm a .net dev from Finland.

Oct 30, 2007 at 6:11 PM
Wow, well that is good news for the engine, bad news for anyone wanting to develop with a 5-series GeForce card. For development I would recommend any card with a minimum of 128mb of RAM; 256mb or more is preferable. Shader Model 3.0 is also preferable because it is supported by XNA and is required for special shader effects like parallax occlusion mapping. You may not use all of those things during development, but it's nice to know you have the power if you need it.
Nov 8, 2007 at 5:03 AM
Ok guys, here's the situation. Performance is actually up quite a bit, which is nice. I believe it is partially due to having only 1 vertex buffer for the terrain now.

On my Radeon 9800se (se was about 25% slower than the standard 9800), I am getting 60-80fps when ONLY the terrain is running. The fluctuation on that computer between 15-60fps lessens as components are disabled. The water and weather components are the hardest on performance.

I recommend we have detail settings that can be set by a simple change in CommonVars.h, in which case the engine will automatically use more performance-friendly settings. For instance, a different shader could be used which would skip normal mapping (an expensive pixel operation that can cause fillrate issues on older cards), and it could limit the terrain LOD to medium instead of high, etc. This will keep our engine user-friendly on older cards. Talking with some guys at work, they've said the 5000-series NVidias and the 9200-and-earlier Radeons do not handle texture lookups from texture samplers well, which means texture blending/splatting would have to be re-designed.
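To make the idea concrete, the presets could boil down to a small table the engine reads at startup. This is just a sketch of the proposed shape; the preset names and setting keys below are invented, and nothing like this exists in CommonVars.h yet:

```python
# Hypothetical detail presets for older cards (names and keys are
# made up for illustration; not actual engine settings).
GRAPHICS_PRESETS = {
    "Low":  {"normal_mapping": False, "terrain_lod": "Low",  "water": False},
    "Med":  {"normal_mapping": False, "terrain_lod": "Med",  "water": True},
    "High": {"normal_mapping": True,  "terrain_lod": "High", "water": True},
}

def settings_for(level):
    """Return the settings bundle for a detail level set in one place."""
    return GRAPHICS_PRESETS[level]

# e.g. a 5-series GeForce would run with normal mapping skipped:
low = settings_for("Low")
```

The point is that a user on old hardware changes one value, and every performance-sensitive feature follows from it.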

I was hoping this engine would run smoothly on all XNA compatible computers but I'm not sure if that will be possible. I believe XNA set the minimum requirement within that range because of its 2D aspects.

I propose we set a minimum specification computer that our engine needs to be able to run on. Unfortunately this means we have to have an older computer to test on.

My machine is a 1.8ghz AMD Athlon XP 2500+ with a Radeon 9800se w/128mb VRAM and 512mb of RAM. However, I can underclock my CPU quite a bit (down to 1Ghz), and I can also underclock my GPU and Video card memory both by 30mhz, which is 9% and 12% respectively. lol, I remember back in the day when I was on forums overclocking the computer and excited about how top-of-the-line it was, now I'm using it for minimum specification tests while I have my laptop that will leave it in the dust. :oP

Also, on large terrains (1024x1024) we are using 220mb right now. That isn't all terrain, but terrain is about 60-70% of our RAM right now. We really should find a way to drop that. Unfortunately, the only things we can pull out of the terrain vertex are multitexturing, which takes up 3/11ths of that 220mb, and normals (which we'd have to calculate in the shader), which also take up 3/11ths. If we calculate normals in the shader we will take a performance hit, and we could no longer perform collisions with the terrain (only height checks).
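For reference, here is the back-of-the-envelope math behind those fractions. The 11-float vertex layout below is an assumption inferred from the "3/11ths" figures, not the engine's actual VertexTerrain struct:

```python
# Rough terrain vertex-memory estimate (a sketch; the 11-float layout
# is inferred from the 3/11ths fractions, not the real struct).
FLOAT_BYTES = 4

# Hypothetical layout: position (3) + normal (3) + texcoord (2)
# + multitexture weights (3) = 11 floats per vertex.
floats_per_vertex = 3 + 3 + 2 + 3
vertex_bytes = floats_per_vertex * FLOAT_BYTES   # 44 bytes

side = 1024                                      # 1024x1024 heightmap
vertex_count = side * side
buffer_bytes = vertex_count * vertex_bytes

print(f"{vertex_bytes} bytes/vertex")
print(f"{buffer_bytes / 2**20:.0f} MB for the raw vertex grid")

# Dropping a 3-float attribute (normals, or the multitexture weights)
# saves 3/11ths of the buffer, matching the fractions quoted above.
savings = buffer_bytes * 3 // 11
```

Note the raw grid itself is well under 220mb; transient copies made while a `List<>` grows (it reallocates and copies as it doubles capacity) can easily account for much of the larger "total allocation" numbers a profiler reports.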

Let me know what you guys think about all of this.
Nov 8, 2007 at 5:55 AM
First off, how would you go about calculating normals in the shader? There's no mathematical way of doing this per-vertex, unless you restrict yourself to a plane in a known orientation, like the water plane. This won't work for arbitrary terrain geometry. If you just assume a +Z normal similar to how you did the binormal/tangent vectors, any lighting you do is going to come out weird.
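For anyone following along, the standard CPU-side approach works exactly as described: per-triangle, not per-vertex. You compute each face normal with a cross product, accumulate it onto the triangle's three corners, and normalize the sums. A minimal sketch (function and variable names are made up for illustration):

```python
# Classic CPU-side vertex-normal generation: face normals via cross
# product, accumulated per vertex, then normalized.
import math

def compute_vertex_normals(vertices, triangles):
    """vertices: list of (x, y, z); triangles: list of (i0, i1, i2)."""
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in triangles:
        a, b, c = vertices[i0], vertices[i1], vertices[i2]
        u = [b[k] - a[k] for k in range(3)]      # edge a->b
        v = [c[k] - a[k] for k in range(3)]      # edge a->c
        face = [u[1]*v[2] - u[2]*v[1],           # u x v
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0]]
        for i in (i0, i1, i2):                   # accumulate on corners
            for k in range(3):
                normals[i][k] += face[k]
    for n in normals:                            # normalize the sums
        length = math.sqrt(sum(c*c for c in n)) or 1.0
        for k in range(3):
            n[k] /= length
    return normals

# A single flat triangle in the XY plane should get +Z normals.
tri_normals = compute_vertex_normals(
    [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Since it needs triangle connectivity, this runs where the index data lives (CPU, at load time), which is why it can't be done in a vertex shader that sees one vertex at a time.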

How are you getting the 220 meg figure? If you're measuring it based on what task manager gives you as RAM usage, it could be significantly different from the VRAM you actually use.

Are you using any hardware texture compression? What about mip-maps?
Nov 8, 2007 at 6:25 AM
I guess I am measuring RAM, not VRAM. VRAM doesn't seem to be a problem; the 128mb card is handling it fine, and not many cards below 128mb are XNA compatible or in our minimum specs. So the 220mb was from task manager, but according to the CLR Profiler most of the memory allocation is in the terrain vertex and index buffers. This size is determined by the size of the terrain and the size of the terrain vertex.

I don't believe I am using hardware texture compression. Shawn said even using DXT wouldn't help: we'd have to compress our stuff after it went through the content pipeline, because even a DXT texture would come through the pipeline in the standard format XNA creates. However, I could've understood that wrong.
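For scale, here is what block compression would buy if it did work end to end. The block sizes are standard DXT facts (DXT1 packs a 4x4 pixel block into 8 bytes, DXT5 into 16); whether the engine's pipeline actually delivers them is the open question above:

```python
# Texture-memory comparison for a 1024x1024 texture using standard
# DXT block sizes (whether the XNA pipeline preserves them is TBD).
side = 1024
pixels = side * side

rgba8 = pixels * 4            # 4 bytes/pixel, uncompressed
dxt1 = (pixels // 16) * 8     # 8 bytes per 4x4 block -> 8:1 ratio
dxt5 = (pixels // 16) * 16    # 16 bytes per 4x4 block -> 4:1 ratio

print(f"RGBA8: {rgba8 / 2**20:.1f} MB")
print(f"DXT1:  {dxt1 / 2**20:.2f} MB")
print(f"DXT5:  {dxt5 / 2**20:.2f} MB")
```

That's 4 MB down to 0.5 MB per splatting texture with DXT1, which is why the pipeline limitation is worth chasing.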

I heard normals could be created in a vertex buffer, but if not it doesn't concern me too much, it was only a suggestion, I'd like to have them because they're needed for physics with terrain.

Isn't mip-mapping set up through the content pipeline? I believe the engine is already using mip-mapping. For instance, find all textures in the pipeline, right-click them and change them to sprite textures, then run the engine: the framerate will drop and the terrain will look "too" detailed at distance (I'm making assumptions here, I haven't actually tried this since another project I was working on). I'd do it right now but I'm tired and lazy.
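On the memory side, mip chains are cheap: each level is a quarter of the previous one, so a full chain adds roughly one third on top of the base texture (1/4 + 1/16 + ... converges to 1/3). A quick check:

```python
# A full mip chain costs ~1/3 extra over the base level, since each
# level has a quarter of the previous level's pixels.
def mip_chain_pixels(side):
    total = 0
    while side >= 1:
        total += side * side
        side //= 2
    return total

base = 1024 * 1024
chain = mip_chain_pixels(1024)
overhead = (chain - base) / base
print(f"overhead: {overhead:.4f}")  # ~0.3333
```

So keeping mips on is almost free in RAM terms, while turning them off costs both framerate (texture cache thrashing) and the shimmering "too detailed" look at distance.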

Good feedback Shaw, keeping me on my toes and learning. I've only been working with 3D programming for about 8-9 months now, so you'll have to pardon my newbness.
Nov 8, 2007 at 3:44 PM
That's disheartening about the DXT compression. I guess we'll have to eventually write our own texture processors.

To calculate normals, you need to work on triangles, not individual vertices. So you could do it with data in a vertex buffer, yes, but not in a vertex shader.

Mip-mapping is set up through the content pipeline, you just have to use the right processor. It's entirely possible it's already being used.

Out of curiosity, why do you want to reduce it below 220 meg? The geometry has to go somewhere, so to reduce it you need to reduce the terrain size or the vertex size, but I don't think 220 will be a problem.
Nov 8, 2007 at 4:06 PM
It could also be worth considering a streaming terrain instead of a fixed size. This way you would reduce the needed memory footprint, though you would sacrifice performance for it (streaming from disk can be expensive).
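The usual shape of that idea is a chunk cache: split the terrain into tiles, keep only the tiles near the camera resident, and evict the least recently used ones when the budget is hit. A minimal sketch of the bookkeeping (hypothetical design; the engine has no such system, and the names here are invented):

```python
# Minimal LRU chunk cache sketch for a hypothetical streaming terrain.
from collections import OrderedDict

class ChunkCache:
    def __init__(self, capacity, load_chunk):
        self.capacity = capacity
        self.load_chunk = load_chunk        # e.g. reads a tile from disk
        self.chunks = OrderedDict()         # insertion order = LRU order

    def get(self, coord):
        if coord in self.chunks:
            self.chunks.move_to_end(coord)  # mark as recently used
        else:
            if len(self.chunks) >= self.capacity:
                self.chunks.popitem(last=False)  # evict LRU chunk
            self.chunks[coord] = self.load_chunk(coord)
        return self.chunks[coord]

# Touch a 4x4 grid of chunks with room for only 9 resident at once.
cache = ChunkCache(capacity=9, load_chunk=lambda c: f"terrain{c}")
for x in range(4):
    for y in range(4):
        cache.get((x, y))
```

The memory ceiling becomes `capacity * chunk_size` regardless of total terrain size; the cost moves to disk reads on cache misses, which is the performance trade-off mentioned above.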
Nov 8, 2007 at 4:24 PM
220mb just seems high considering we have only 1 model type loading right now (a simple sphere), and no animations, GUI, or anything else like that in memory. I believe we should set the minimum specification for the engine at 512mb, and if that is the case, we're consuming half of the computer's memory when running, on top of the fact that any developers/users of our engine are going to be running VS/VS Express, which takes 150mb or more.
Nov 8, 2007 at 6:40 PM
Your simple sphere isn't exactly simple. When I was working on a custom content importer, I found it had around 600 indices. :)

When I profiled your code, I only had 108 megs in the heap, 88 megs of which is VertexTerrain[] data from the List<>. There's 200-some megs of "total" VertexTerrain[] allocation, but only 88 megs in the heap after startup.
Nov 8, 2007 at 7:26 PM
Wow, 600, I even remade a "low poly" version of that sphere because the other was about 4 times as detailed. I may make an even lower poly version I guess.

88megs isn't so bad. Although if we add binormals and tangents that could jump by about 30%. Oh well...
Nov 8, 2007 at 9:03 PM
Shaw are you using .Net Profiler/dotTrace/NProf for profiling?
Nov 8, 2007 at 9:08 PM
CLR Profiler

Can NProf give you memory statistics? I thought it only had time statistics.
Nov 8, 2007 at 9:34 PM
I've actually never used NProf, so I can't tell. I've used dotTrace and found it quite good :)
Nov 26, 2007 at 4:59 PM
To the original poster, I should add that you may get 15-20fps now if you go to CommonVars.h and lower the QSDetail level to GraphicsLevel.Low.

Of course you'll need v0.18 or newer, and anything newer than v0.18 will require XNA 2.0 Beta.