Coding Guidelines?

Oct 27, 2007 at 1:05 AM
When was that coding guidelines document created?

Just curious, as it basically says I need to rewrite the entirety of the prototype...
Coordinator
Oct 27, 2007 at 1:43 AM
Well, it's up for debate if you'd like; it depends on your position on the matter. For instance, if you agree it is a good coding standard, then I would say finish the prototype and we'll go over it until it meets the standard. However, if you disagree with the standard, then you should list your disagreements and reasons, and we can all have a say in it. For the most part this project is a democracy, so I say majority rules.

But seriously, if it comes down to the standard requiring lots of prototype changes, then I would just finish getting it up and we can all pitch in to re-code it to the standard, as opposed to you re-coding the whole thing alone now while we wait for the prototype.

Let me know your thoughts on the code standards.
Coordinator
Oct 27, 2007 at 6:19 PM
Part of the reason those guidelines (.NET Pascal casing) were chosen is that it's what XNA users will be used to, since that's what the XNA GS uses.
Oct 27, 2007 at 6:55 PM
Alright, fair enough.
Oct 29, 2007 at 2:46 AM
Edited Oct 29, 2007 at 2:50 AM
How should public class members be named? If I have a mass variable, should it be

public float Mass;

or

public float mass;


And is there a specific reason for not using m_ notation for member variables? It's a good way to distinguish between member and non-member variables, especially when members and parameters have essentially the same name.
Coordinator
Oct 29, 2007 at 3:26 AM
I believe for member variables it is capitalized (Mass), and any accessors would be lowercase (mass).

In my experience with C++, having the m_ prefix was a good thing because IntelliSense was terrible, but I feel it is a relic from before IntelliSense worked well. With C# I usually find it easy enough to hover my mouse over a variable to see whether it says (member) or (parameter). I guess I have never found a need for m_. But honestly, it's up to everyone.
Oct 29, 2007 at 3:31 AM
Edited Oct 29, 2007 at 3:33 AM
I'm running into situations in the physics simulator design where this looks weird. For instance, a rigid body has a mass member. For efficiency, variables like this are public instead of private with a public property accessor. The mass is also passed into the constructor as a parameter. So:

public float Mass;
 
public RigidBody(float mass, ...)
{
    Mass = mass;
}

It just feels so much more natural to do:

public float m_mass;
 
public RigidBody(float mass, ...)
{
    m_mass = mass;
}

Of course you could always prefix with "this.", but I don't see that being any cleaner.
Oct 29, 2007 at 3:48 AM
Edited Oct 29, 2007 at 3:51 AM
First off, do not use public fields. If you need a public access it has to be a property.

With regards to the naming I did the following in the existing QuickStart.Common:
        protected List<SceneNode> m_children;

and converted that into:
        private readonly List<SceneNode> children = new List<SceneNode>();
        /// <summary>
        /// Gets the list of children
        /// </summary>
        public List<SceneNode> Children
        {
            get { return this.children; }
        }

Using the this prefix makes it clearer, as it explicitly shows where the identifier belongs. Throughout the source, I always refer to the property instead of the field. This has the advantage that the logic in the property is always invoked. There might be special occasions where referring to the property isn't good, but these should be very few (like construction).

The constructor will then look something like this:
        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="manager">The <see cref="ISceneManager"/> for the scene</param>
        /// <param name="parent">The parent of the node</param>
        public SceneNode(ISceneManager manager, SceneNode parent)
        {
            this.sceneManager = manager;
            this.parent = parent;
        }
(It sets the sceneManager and parent fields.)
Oct 29, 2007 at 4:06 AM
Edited Oct 29, 2007 at 4:07 AM
No, the member variables need to be public. Like I said, it's a matter of efficiency/performance. I need to be able to pass them by ref/out outside of the class, which you cannot do with properties. Also, these variables (which include larger data types like vectors/matrices) need to be read/written within tight inner loops, and the overhead of property function calls is prohibitive (not to mention you have to use local copies for ref/out passing). Even if the local copies were manageable, the Xbox CLR does not optimize away the function call overhead from property access very well.

Believe me, I've written code like this before and I ended up having to refactor the properties into public variables to get it to work efficiently enough.
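To make the ref/out point concrete, here is a minimal sketch (the Solver and Scale names are made up for illustration, not taken from the engine) of why by-reference access forces public fields in C#:

```csharp
public class RigidBody
{
    public float Mass;                       // field: can be passed by ref/out
    public float MassProperty { get; set; }  // property: cannot
}

public static class Solver
{
    // Mutates the mass in place; no local copy, no property-call overhead.
    public static void Scale(ref float mass, float factor)
    {
        mass *= factor;
    }

    public static void Demo(RigidBody body)
    {
        Scale(ref body.Mass, 2.0f);            // fine: a field is a variable
        // Scale(ref body.MassProperty, 2.0f);
        // error CS0206: a property may not be passed as a ref or out parameter
    }
}
```

With a property, you would have to copy into a local, pass the local by ref, then assign it back, which is exactly the copying overhead being discussed.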
Oct 29, 2007 at 4:13 AM
Can you give some example of where it's needed to pass them (i.e. Position) by ref? Shouldn't it be the entity that is passed?
Oct 29, 2007 at 4:33 AM
Imagine an iterative constraint solver, i.e. resolving a penetration constraint. For each colliding pair, you apply the iterative solver, which can take 10-20 iterations to reach a stable solution. Within the pair solver, you have references to the two bodies. For a sequential impulse solver, within each iteration you have to build an impulse vector based on the physical quantities of the bodies. For each pair and 20 iterations per pair, you only need 5 property accesses per iteration (read and write are two different property accesses) to get 100 function calls per colliding pair. Note that you would need significantly more than 5 property accesses per iteration in practice, so you end up with a few hundred function calls per colliding body. In my experience, the Xbox CLR has a difficult time optimizing out these property references, so it can easily add up to a big performance hit. I've experienced this before.
Oct 29, 2007 at 4:36 AM
Looks like I should probably use capitalization on the fields to match the naming of properties to maintain some sort of consistency.
Coordinator
Oct 29, 2007 at 5:21 AM
I'm only a fan of accessors where required. I only really use them in the cases I need an outside class to access information in a read-only fashion. Such as

private float MyNum;
public float myNum
{
get { return MyNum; }
}

This way within the class anything can use it in its private/protected (capitalized) form, but you know (through the coding standards) that if you call from any other non-derived class that you'll need to use the lowercase form, which will be an accessor, and that you may only have 'get' access to it.

Using accessors for everything would take up an awful lot of space, and from my experience, more code means a higher learning curve. If we truly want the engine to succeed it needs to be powerful, flexible, and most of all, accessible (free, and easy to use).
Oct 29, 2007 at 7:34 AM
Shaw, wouldn't that be solved by having a local copy of the properties? I know that's not as efficient as having a ref to the real field, since you really only want to assign the result. Also, I expect this is happening inside the physics engine, right? In that case you won't, or shouldn't, have access to the entities anyway, as that would be way too inefficient.

LordIkon, you are right that it seems like there is a steep learning curve, but in reality this actually helps in learning the model. The developer using the framework will not need to think about having fields or properties, as only properties are exposed. Having a property-only interface also helps prevent external devs from accidentally setting values they aren't supposed to. During the rework of Shaw's code I found a few fields which were accessible from outside the type but actually weren't meant to be settable. Changing this later can have a huge impact; also, it might be hard to trace a bug related to setting a field which shouldn't be set.

Further, something that isn't implemented in the current code, and which I miss, is OnPropertyChanged. Many types will depend on properties of other types, such as filtering on the text in a textbox. Though using the MessageSystem this might not be needed.
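For reference, a minimal sketch of what an OnPropertyChanged mechanism could look like, using the standard System.ComponentModel.INotifyPropertyChanged interface (the Filter class and Text property are made-up examples, not engine code):

```csharp
using System.ComponentModel;

public class Filter : INotifyPropertyChanged
{
    private string text = string.Empty;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Text
    {
        get { return this.text; }
        set
        {
            if (this.text == value)
            {
                return;
            }
            this.text = value;
            this.OnPropertyChanged("Text");
        }
    }

    protected virtual void OnPropertyChanged(string propertyName)
    {
        // Copy to a local so a subscriber removing itself can't race us.
        PropertyChangedEventHandler handler = this.PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```

Other types can then subscribe to PropertyChanged instead of polling, which is the dependency scenario described above.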
Oct 29, 2007 at 1:44 PM
With these value-type fields, you end up with way too much copying going on. What do you mean by saying you shouldn't have access to entities? Do you mean the rigid bodies inside of the dynamics solver? How are the solvers supposed to work without access to the bodies that are being simulated?
Oct 29, 2007 at 2:05 PM
As I see it, the physics system is executed on its own thread (preferably on its own CPU), updating as fast as possible in order to give the most correct physical behavior possible. This means a very high update rate (on the order of 100 to 100,000 Hz). There is no reason to update entities this often; they aren't rendered more than 60 times per second.

The solution here is to create physics entities which only contain the subset of functionality needed by the core engine, like position, mass, and velocity. Then at specific points in time the values from the physics engine are moved to the corresponding core Entity and it's updated. Also, at specific intervals the physics engine is updated with changes from the core engine.

So we end up in something like this:
    // On before update/render/whatever at initialization
    foreach (Entity entity in this.Game.Entities)
    {
        this.Game.PhysicsEngine.PushPhysicsEntity(entity);
    }
 
    // During game execution at a specific time
    List<PhysicsData> physicsData = this.Game.PhysicsEngine.Poll();
    foreach (Entity entity in this.Game.Entities)
    {
        if (physicsData.Contains(entity))
        {
            physicsData.Update(entity);
        }
    }

This is of course a very simplified sketch of what's actually going to happen, but hopefully it shows that the entities themselves do not belong on the physics thread.
Oct 29, 2007 at 4:22 PM
Okay, I see the confusion. You're only considering what clients of the physics simulator see during the per-frame update, not the internal representations. In the client case, I still wouldn't worry about properties, as your "PhysicsData" object is really just a structure of data, namely position and orientation. It probably wouldn't even have any methods.

Imagine a rigid body entity as follows:

public class RigidBody
{
    public float mass;
    public float invMass;
    public CollisionBody collisionBody;   // The collision geometry descriptor
    ...
}

This type of class is used internally by the physics simulator. Now, I could create interfaces for these and only allow clients to access these variables through properties (the interfaces would not contain these variables), but the fact still remains that internally, the simulator needs the public variable access. As far as I know, our "coding guidelines" apply to internal code as well as client-visible code. That's the root of my question.

Also, at some point, the clients of the physics simulator will need to create/modify the RigidBody instances directly.

Oct 29, 2007 at 5:16 PM
Edited Oct 29, 2007 at 5:18 PM
Well, for private elements you can use whatever you want. Though I would prefer to keep as close to the guideline as possible, in order not to confuse other developers. So in the example above I would use:
public class RigidBody
{
    public float Mass;
    public float InvMass;
    public CollisionBody CollisionBody;   // The collision geometry descriptor
    ...
}
If possible I would keep the class as an inner class of PhysicsEngine like:
public class PhysicsEngine
{
    private class RigidBody
    {
        public float Mass;
        public float InvMass;
        public CollisionBody CollisionBody;   // The collision geometry descriptor
        ...
    }
}
But I can see that this isn't a practical solution in many cases; it should still be considered, though.

Also, I would place a remark on the field explaining why it needs to be non-private, i.e.
    /// <remarks>Due to performance the field has to be exposed directly, "Possible link to site explaining the implication of this"</remarks>
    public float Mass;
This would help other developers understand why, and a possible link will also help with the learning curve for others.
Oct 29, 2007 at 5:40 PM
I don't see any problem with using a public field instead of a property, if performance was an issue and the property was only going to get and set a private field.

If, however, the property was hiding either get or set, then it may be better to keep it a property.
Oct 29, 2007 at 6:43 PM
Well, it can't be an inner class because clients need to be able to instantiate RigidBody objects to feed to the physics simulator.

I'll try to draft a rough overall design to help explain what I mean. I know what has to happen as far as implementation, but at the same time I want to have at least some level of consistency with what you're trying to do.
Oct 29, 2007 at 7:27 PM
It shouldn't be necessary for clients to create a RigidBody, I would guess. RigidBody is part of the physics engine and should be instantiated by the physics engine when PushPhysicsEntity (or however it's added to the physics engine) is invoked. There is no reason to have that information on the core thread.
Oct 29, 2007 at 8:01 PM
Even if you used some sort of descriptor for the client to provide information to the physics simulator on how to create the rigid bodies, you would still need some way to link the RigidBody to the scene manager entity. The way I'm envisioning it, at the end of each physics frame, the position/orientation in each RigidBody instance are copied to another location in the RigidBody class, under a synchronization lock. The scene manager can then go to the RigidBody instance for each scene entity and pull out these two (copied) fields, again under a synchronization lock. Otherwise, you would need to maintain a third class which acts as a link between physics entity and scene manager entity. Who controls such a class?
Oct 29, 2007 at 10:48 PM
There has to be a synchronization scheme; an example could be:
    QuickStartGame
    public void Update(...)
    {
        // this polls the physics data from the physics engine so it can continue updating
        PhysicsDataCollection physicsData = this.PhysicsEngine.Poll(); 
        foreach (Entity entity in this.Entities)
        {
            if (physicsData.Contains(entity) == true)
            {
                physicsData.ApplyTo(entity);
            }
        }
 
        // Invoke game logic
 
        this.PhysicsEngine.Push(this.Entities);
    }
The big issue here is how to merge changes if both systems make a change to the same entity, or whether the Push should simply override any changes on the physics layer. There are a lot of things that need to be considered here, but to begin with I think that just overriding and accepting the artifacts will do.

I don't think that copying the data every frame is a good idea. The physics engine will run hundreds of passes while the main engine thread will only make a few, which would lead to a huge performance impact. Doing it when polled and pushed is much more efficient.
Oct 30, 2007 at 12:51 AM
The issue I'm having with your scheme is the semantics of Poll(). What happens when Poll() is called half-way through a physics frame's dynamics cycle? Reading the physics data should only happen between physics frames. Not to mention that it's entirely possible the physics simulator itself will be multi-threaded, which makes the synchronization mechanisms even more complex.

Bottom line, I really don't think it's a good idea for the client to be able to tell the physics simulator when to stop and copy data.

The more I think about it, the more I'm liking a somewhat synchronous approach. Kind of like this:

    public void Update()
    {
        physics.WaitForFrameEnd();
        // Copy positional data from physics simulator
        physics.BeginFrame();
 
        // Other processing
    }

WaitForFrameEnd() will pause the calling thread until the current physics frame completes, with an optional time-out parameter so if the physics simulator is running slowly, you'll only get updates every few game frames. BeginFrame() will start processing the next physics frame while the rest of the update loop continues and then the Render loop is called.

The beauty of this is that you don't have to worry about synchronization between client/physics simulator. You're only accessing data between physics frames, so you are guaranteed consistency. Internally, the physics simulator can run 1, 2, 4, 8, however many physics frames it wants/can process between BeginFrame() and WaitForFrameEnd(). This can even be client customizable. This is the approach taken by AGEIA PhysX and it seems to work pretty well. Everything is still as parallelized as possible, but you still have a central point of synchronization in the main game loop. Chances are, the latency you'll see in the WaitForFrameEnd() call won't be much greater, if any, than waiting for a lock on the positional data on a completely asynchronous physics simulator.

I think you're overestimating the physics processing frequency a little. It's only going to run at 60-120 Hz, certainly not 50 or 60 times per game frame.
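As a rough illustration of how WaitForFrameEnd/BeginFrame might be wired up with standard .NET synchronization primitives (only those two method names come from the discussion; everything else here is an assumption, not a proposed implementation):

```csharp
using System.Threading;

public class PhysicsEngine
{
    // Signaled whenever no physics frame is in flight.
    private readonly ManualResetEvent frameDone = new ManualResetEvent(true);

    // Blocks the game thread until the current physics frame completes,
    // or until the timeout elapses if the simulator is running slowly.
    public bool WaitForFrameEnd(int timeoutMilliseconds)
    {
        return this.frameDone.WaitOne(timeoutMilliseconds, false);
    }

    // Kicks off the next physics frame on a worker thread and returns
    // immediately so the rest of Update/Render can run in parallel.
    public void BeginFrame()
    {
        this.frameDone.Reset();
        ThreadPool.QueueUserWorkItem(delegate
        {
            this.StepSimulation();  // run however many internal steps fit
            this.frameDone.Set();   // positional data is now safe to read
        });
    }

    private void StepSimulation()
    {
        // Integrate bodies, resolve constraints, etc.
    }
}
```

The key property is the one described above: the game thread only touches physics data between WaitForFrameEnd() returning and BeginFrame() being called, so no per-field locking is needed.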
Oct 30, 2007 at 6:26 AM
Edited Oct 30, 2007 at 6:27 AM
I actually didn't care about the inner workings of the physics engine. When Poll is invoked, it would require the physics engine to do one of two things:
  • Wait for the physics engine to finish the current loop; potentially dangerous if it takes too long (but it could use a timeout, as you mentioned)
  • Copy the data from the physics engine's temporary storage (which might not be updated every cycle, but only every 2nd or 3rd physics cycle)
I think it goes without saying that it wouldn't copy any currently processing data, as this would give a potentially jaggy experience. I was hoping for a much higher update rate, as the physics engine is running on a dedicated CPU and only doing calculations on "simple" data types.
Oct 30, 2007 at 7:04 AM
Haha, yeah, this topic has gone way off topic from the original "how should I name the public variables to be consistent with everyone else's work."

What do you mean by "simple data types?" A 60-120 Hz update rate is pretty standard on modern hardware.
Oct 30, 2007 at 8:05 AM
Yup, so I created the Physics Implementation thread for that discussion :)
Oct 31, 2007 at 11:53 PM
Could the documentation be saved in a different format? I don't have Word on most machines. :(
Nov 1, 2007 at 12:18 AM
What do you have? I was going to do latex/PDF, but it seems the consensus was to stick with Word/DOC.
Coordinator
Nov 1, 2007 at 1:50 AM
Isn't OpenOffice free and supports Word documents?

Microsoft Word is pretty much industry standard, either that or PDF. Is the PDF editor free?
Nov 1, 2007 at 2:15 AM
What PDF editor are you referring to? Adobe Acrobat and most decent PDF exporters aren't free, but the latex compilers are.
Coordinator
Nov 1, 2007 at 7:03 AM
I assumed PDF was Adobe proprietary and could only be edited with Adobe tools. :op I believe we could have most documentation in both formats; either way we go, I believe there are free editors. I would say we could save everything as a WordPad document, but there are some nice formatting options that it doesn't support. We could post all Word documents in a WordPad format as a fallback, as long as people who downloaded the WordPad version accepted that they might miss out on the frills.
Nov 1, 2007 at 10:36 AM
I personally wouldn't be able to use PDF either; I was thinking more along the lines of an online editor, Google Docs for example. CodePlex is crying out for an online version of Word to be plugged in.
If not, then a WordPad version would be fine. I'm not fussed about the fancy Word formatting etc.; this is the exact reason I don't use it, so a WordPad version with all the same content but slightly different formatting would be just fine by me, although it would certainly miss screenshots. I personally believe being online would be the best option, but whatever you decide, anything other than a Word or PDF document would be just fine.
Nov 1, 2007 at 12:59 PM
Using Word (at least 2007) you can easily export any document as PDF. Using WordPad isn't really an option, as there aren't any good formatting options. Using an online editor has the limitation that you need internet access to get to the docs, which is not something everyone has everywhere.

Having the documentation as Word documents should be sufficient for most; you can download a Word viewer if you want to see the docs but don't have Word installed. As an option, we can take the time to periodically generate the PDF docs based on the Word docs.
Nov 1, 2007 at 2:50 PM
Online documentation should definitely be a priority, though offline documentation is still important. Many "serious" developers would want a paper reference manual.

What's nice about Doxygen is that you can simultaneously create HTML and LaTeX sources: HTML for the online version and LaTeX for the PDF/printed version.
Nov 1, 2007 at 2:59 PM
On a side note, CodePlex doesn't have provisions for uploading custom HTML pages, does it? That would be a nice feature.
Nov 1, 2007 at 3:32 PM
Doxygen is for source code documentation and doesn't really support the XML comments used in C#, at least the last time I looked at it. There is also still the lack of design and feature documentation.
Nov 1, 2007 at 3:47 PM
I was pretty sure I saw support for C# XML comments.

Yes, we need design documentation, but an API reference is a very valuable document.
Nov 1, 2007 at 3:53 PM
Yeah, I agree an API reference will be valuable, but this is easily done: just put comments above the methods and properties, then run Sandcastle to generate the .chm file.
It just takes discipline from the programmers to make the comments above a method worth reading.
Nov 1, 2007 at 4:13 PM
Since we now have guidelines in place (Code Guidelines) which expect devs to do exactly that, I'm confident that we can use Sandcastle.
Coordinator
Nov 1, 2007 at 5:06 PM
There should be guidelines regarding specific programming practices, I believe. For instance, in testing v0.178 before release last night I found an infinite loop condition inside a while loop. It is not the first time I've had this type of issue with a while loop. While loops will literally freeze the game if their conditions are always met, so they should be handled with caution.

Things like that could be addressed in a separate section of the coding guidelines. There are many things like this to think about putting into the documentation. For instance, when using messaging systems or multi-threading there is a greater chance of race conditions occurring, so it would be good practice to have a check for a race condition occurrence: for instance, if a message is not processed within a certain amount of time, or if a thread is "starving" or causing others to "starve". These are just some things to consider.
Nov 1, 2007 at 6:47 PM
I took a look at Sandcastle, looks interesting... but can it produce HTML documentation? I see it can generate HTML-based help files, but what about uploadable HTML?
Nov 1, 2007 at 6:54 PM
Yeah, Sandcastle can do HTML with a bit of jiggery. We use it at our company, and every night it builds docs based on the latest source code and emails every developer the new API docs :)
HTML link along with the .chm.
Sandcastle has my vote :)
Nov 1, 2007 at 7:02 PM
Can you write up a short little how-to on generating the HTML? Does it generate the index.html pages, or do you just have to manually open the HTML files you want?
Nov 1, 2007 at 7:11 PM
Yeah sure, I'll knock up a quick sample :)
Nov 1, 2007 at 7:32 PM
I've just found this and it's awesome: http://www.codeplex.com/SHFB/Release/ProjectReleases.aspx?ReleaseId=7543
Nov 1, 2007 at 8:14 PM
Nice, it did a decent job documenting the physics components I have written thus far. I'd be interested to see the HTML version, and the complete documentation for the whole project once we finish the XML comments.
Nov 1, 2007 at 8:35 PM
Yeah, in my experience I have found it best to have a basic explanation in the code and reference an external doc file which has the extended content, with example code etc. This makes it much easier to do separately from the code. We never had a good experience adding the documentation at the time of conception; we got much better results when writing the documentation later on, and even more so when someone other than the coder wrote it. So we could maybe do a stint on each other's bits before every release and generate at that point.
Dec 6, 2007 at 12:21 PM
I would like to help out with documentation, everything from an online wiki API to a user's desktop manual.

I could use Doxygen, optimised for C# output, with basic diagrams (For the online, and downloadable API), and use PackageThis* for creating a .chm desktop reference.

*or is it Sandcastle, or am I missing something completely?
Dec 6, 2007 at 3:11 PM
I just integrated SandCastle into the source tree because it's a MS product and I didn't see any reason not to use it. If you have any reasons to use doxygen by all means do so. I will finalize the integration once you decide what we should use.
Coordinator
Dec 7, 2007 at 7:31 AM
I'd like to add a suggestion or two for the coding guidelines.

- All float values must end in f. A float value of 1000 should be passed as 1000.0f as a requirement. The compiler will enforce this for fractional values; I'm simply concerned with floats that are whole numbers.

- All float values between -1 and 1 should begin with 0. For example, -.3f would have a required format of -0.3f.

- We might also add that floats should be preferred over doubles wherever possible.
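If adopted, the three rules above would look like this in code (the constant names are made up purely for illustration):

```csharp
public static class FloatStyleExamples
{
    public const float WholeNumber = 1000.0f;  // not "1000": whole floats still end in .0f
    public const float SmallValue = -0.3f;     // not "-.3f": leading 0 for magnitudes below 1
    public const float Preferred = 0.5f;       // prefer float literals over double wherever possible
}
```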
Dec 7, 2007 at 8:51 AM
Yeah, that's a good idea with the floats.

Another consideration is namespace/folder matching. I've always been fond of matching namespaces to folders. For instance, in our project, if we had the folder Graphics/TypeLoaders/ in the common project, then all classes defined in files in that folder would be in the namespace QuickStart.Graphics.TypeLoaders. I've been coding like that, but I've noticed a lot of the template code isn't like that. There are several folder hierarchies all in the same namespace. Thoughts?

Dec 7, 2007 at 10:22 AM
Having folders match namespaces (or vice versa if you want) is a bad idea. You won't be able to structure dependent classes, and you end up with folders containing a lot of classes.

I created a folder for Messaging as it's logically grouped, and looks something like:
Messaging
    | Message.cs
    | IMessageData.cs
    | MessageType.cs 

Though the namespace for them is simply QuickStart. The reason for not having them in QuickStart.Messaging is that they belong to the engine, and that's the only context you will ever use them in. Also, if you are ever going to use the QuickStart namespace, you are most likely going to use Messages. Another consideration is that the type is directly exposed by the engine, so it probably doesn't make sense to have it in another namespace. Furthermore, if I did put it in a separate namespace, the number of classes would be limited and it would just create more confusion.
Dec 7, 2007 at 12:24 PM
Maybe having every folder belong to a corresponding namespace would be a bad idea, but we should have a folder for every namespace.
If you can justify a file belonging to a separate namespace, it should belong in a different folder.
Dec 7, 2007 at 2:52 PM
Yes, having a namespace implies that there is a corresponding folder structure, though there would be more folders than namespaces.
Dec 7, 2007 at 5:50 PM
I agree, as long as all files in a particular folder are a part of the same namespace, and all child folders fall into that namespace (either directly or in a child namespace).
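To illustrate the convention being agreed on here (the file paths and class names are hypothetical): a file under a namespace-bearing folder declares the matching namespace, while an organizational folder like Messaging may still share its parent's namespace:

```csharp
// File: Graphics/TypeLoaders/ModelLoader.cs (hypothetical)
namespace QuickStart.Graphics.TypeLoaders
{
    public class ModelLoader
    {
    }
}
```

```csharp
// File: Messaging/Message.cs: grouped in a folder for organization,
// but intentionally kept in the engine's root namespace, per the
// reasoning above.
namespace QuickStart
{
    public class Message
    {
    }
}
```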
Jan 18, 2008 at 7:32 AM
About fields and properties...

Why did you guys choose to put them together? Wouldn't it be easier to look at and search for them if we had them in separate places?

Also, why not use regions? I find this feature very helpful for locating code.
Jan 18, 2008 at 12:19 PM

xnasorcerer wrote:
Why did you guys choose to put them together? Wouldn't it be easier to look at and search for them if we had them in separate places?


There are various reasons, but the main ones are:
  • Ease of code review
  • Good structure and understandability for others; someone is going to take over your code or extend it.

Also, searching for fields isn't always a good option; it might work well with small files, but working with files >50k in size this gets very tedious.


xnasorcerer wrote:
Also, why not use regions? I find this feature very helpful for locating code.


Regions do tend to obfuscate the code; most of the time you end up expanding them anyway. Also, having code grouped by visibility might not be the best structure in a file, as you might have to scroll a lot in order to get an overview.

We aren't saying don't use regions, just consider the value they add. There are certainly scenarios where a section of code can be initially hidden, but keep it limited.
Jan 25, 2008 at 12:09 PM
Look at this:
        /// <summary>
        /// Holds a list of all <see cref="Camera"/>s attached to this entity.
        /// </summary>
        public List<Camera> Cameras
        {
            get { return this.cameras; }
        }
 
        private readonly List<Camera> cameras;
 
        /// <summary>
        /// Gets the <see cref="QSGame"/> for the entity
        /// </summary>
        public QSGame Game
        {
            get { return game; }
        }
        private readonly QSGame game;
 
        /// <summary>
        /// Position in the scene
        /// </summary>        
        public Vector3 Position
        {
            get { return position; }
            set
            {
                position = value;
                this.OnPropertyChanged(PropertyName.Position);
            }
        }
        private Vector3 position = Vector3.Zero;
 
        /// <summary>
        /// Rotation matrix, which stores its right, up, and forward vectors.
        /// </summary>
        /// <remarks>Currently public for performance reasons</remarks>
        public Matrix Rotation
        {
            get { return rotation; }
            set
            {
                rotation = value;
                this.OnPropertyChanged(PropertyName.Rotation);
            }
        }
        public Matrix rotation = Matrix.Identity;
 
        /// <summary>
        /// Entity's scale
        /// </summary>
        public float Scale
        {
            get { return scale; }
            set
            {
                scale = value;
                this.OnPropertyChanged(PropertyName.Scale);
            }
        }
        private float scale = 1.0f;

Now look at this:
        #region Fields
 
        private readonly List<Camera> cameras;
        private readonly QSGame game;
        private Vector3 position = Vector3.Zero;
        public Matrix rotation = Matrix.Identity;
        private float scale = 1.0f;
 
        #endregion
 
 
        #region Properties ...
 

With the first scenario, my eyes tend to get lost before I can find anything. If we want to find a property, we have to look through all that code. And why do we have to put the fields after the properties, when even VS puts the field before the property?

Now, with the second scenario, we can collapse the regions and look only at the regions we want to look at. The Fields region also acts as a kind of index, helping us find a specific property faster. If you look at the way the XNA or Microsoft guys release their code, you can see that this is also the way they make their code.

Jan 25, 2008 at 1:45 PM
I agree with grouping properties, fields, methods, etc. but I don't like regions. I don't like code collapsing in general. It's too easy to accidentally collapse something and then miss details on a code scan.

The XNA team stated their main use of regions was to highlight code that was important for a given sample. They would put regions around the bookkeeping code that was practically the same for all samples.
Jan 25, 2008 at 2:52 PM
" I don't like code collapsing in general. It's too easy to accidentally collapse something and then miss details on a code scan."

That is why I use regions like this:

#region Fields
#region Properties
#region Constructor
#region Initializing
#region Loading / Unloading
#region Update
#region Draw
#region Events

Then I put, for example, everything that does updating inside the Update region. This way I know where I can find everything easily. There is no way I can miss the scope of code like this. ;)


"I agree with grouping properties, fields, methods, etc., but I don't like regions."

Perhaps we should then vote for a new Coding Guideline.
Jan 25, 2008 at 3:33 PM
I'm certainly ok with grouping, having fields first, but using regions is, for me, out of the question. With the above it's even more confusing: if an event is only raised during Draw, why does it not belong there? If a method is invoked during Load, Update, and Draw, where should it be located?

I agree with shaw that regions just obfuscate the code, which can be dangerous. And if everyone/most have to go in and use "Expand All", what is the value then of having these regions?
Coordinator
Jan 25, 2008 at 3:36 PM
Sorceror, it probably isn't worth your time to argue this point, I argued it with both of them a couple of months ago, for about a solid week I would say, and we even took it to the XNA forums where I was mostly overruled as well. I'm on your side on this one, but we seem to be the minority, and the majority is what we're concerned with in a game engine.
Jan 25, 2008 at 4:21 PM
There are just too many fringe cases, like Sturm mentioned. I have yet to see a regioning scheme that works well for all classes.

To be honest, anytime I see regions, I just hit "Expand All." I find it so much easier to read that way.
Jan 25, 2008 at 5:15 PM


There are just too many fringe cases, like Sturm mentioned. I have yet to see a regioning scheme that works well for all classes.
To be honest, anytime I see regions, I just hit "Expand All." I find it so much easier to read that way.



And if everyone/most have to go in and use "Expand All", what is the value then of having these regions?


:) Well, the answer would be: there are two groups of users! One likes to use regions and see collapsed code, and the other doesn't! So that is the great thing about regions: there is "Expand All". Those who like to look at completely expanded code can just expand it all and be happy. ;) Not everyone will be using "Expand All".


If a method is invoked during Load, Update, and Draw, where should it be located?


Well, a simple solution would be: "#region Method called by Load/Update and Draw".

Why am I arguing about this? Because I really do think that this is a great tool! Almost all the time, we can go right to where we want to make a change or look at some code without having to search the entire code file.
If we really think about it, we could think of regions like separate class files. We could put all our classes inside just one big file, but it is better to separate things because it makes them easier to read and to find, isn't it?


Sorceror, it probably isn't worth your time to argue this point,


Well, I think that it is! But only because it helps a lot with coding! I think that they are just not accustomed to using this feature.
Also, I think that from time to time we should be discussing things, discussing the paths that were taken. This is good for every project.
Jan 25, 2008 at 6:51 PM

xnasorcerer wrote:
Well, I think that it is! But only because it helps a lot with coding! I think that they are just not accustomed to using this feature.
...
Well, a simple solution would be: "#region Method called by Load/Update and Draw".


Well, I've been using the language since somewhere around 2000, and regions since Everett (or were they not in before Whidbey? I can't really remember). My initial use of regions did something similar to your suggestion, but that quickly proved to be way too expensive: methods moved around, creating new regions and deleting existing ones. Others who looked at the code just saw methods being deleted and new ones created, though the only thing really happening was a change in the method interdependencies.

I ended up using regions which group members by access, but after a few years I came to realize that most of the time I do "Expand All", so there was no value in using them.

There are too many pitfalls when you start using regions to group types of members together, and the code can get messy fast.


xnasorcerer wrote:
:) Well, the answer would be: there are two groups of users! One likes to use regions and see collapsed code, and the other doesn't! So that is the great thing about regions: there is "Expand All". Those who like to look at completely expanded code can just expand it all and be happy. ;) Not everyone will be using "Expand All".


I like regions and find they have their place. They are good in tutorials and demos, for hiding generated code (i.e. winform.designer.cs), and for obscuring ugly unmanaged/unsafe calls you were too lazy to factor out.


xnasorcerer wrote:
If we really think about it, we could think of regions like separate class files. We could put all our classes inside just one big file, but it is better to separate things because it makes them easier to read and to find, isn't it?


Well no, I would never split a class up into different files; that's actually one thing I don't miss from C++. I like having everything nicely together.
Jan 25, 2008 at 8:42 PM


shawmishrak wrote:
There are just too many fringe cases, like Sturm mentioned. I have yet to see a regioning scheme that works well for all classes.


Shaw, we don't need to have a regioning scheme.

Ok. So far we do not agree on the regions yet. But do we all agree about grouping the fields first and the properties later?
Coordinator
Jan 25, 2008 at 8:45 PM
I believe we all agreed on that at one point, but never put it into the coding guidelines.
Jan 25, 2008 at 8:58 PM
Without a regioning scheme, everyone is free to create whatever regions they want. While this may work in some cases, in general it will just lead to inconsistencies. While I do believe being too strict is not a good idea with coding guidelines (<hint> --i </hint>), having some files with regioning scheme A, some with regioning scheme B, and some with no regioning at all will just confuse people and lower their opinion of our code.

Do you put regions in every class, even if there are only 1 or 2 methods and 1 or 2 private fields? What about structs?

I'm not against regions completely. There are interesting cases where they are helpful, like enclosing an implementation of IEnumerable. What I don't like seeing are the Fields/Properties/Draw Methods/Update Methods schemes. By all means keep an order to the file, but is it necessary to have the ability to collapse the properties in the file?
Jan 25, 2008 at 9:13 PM
I don't apply regions to all classes, only after they start getting too big. And properties are one thing that increases the code a lot.
Well, just an idea for a basic regioning scheme would be:

Fields
Properties
Methods

This wouldn't annoy you guys too much, but it would also help to separate things a little bit. This would give a structure to the code. What do you think?
Jan 25, 2008 at 11:41 PM
brawdf

Most of the time when working with methods you need to access properties or fields, which most likely means that you need to expand those as well, and most of the time when working with properties you need to access the fields.

Also this scheme isn't compatible with VS.
Jan 26, 2008 at 3:21 PM
Edited Jan 26, 2008 at 3:44 PM


Sturm wrote:
brawdf

Most of the time when working with methods you need to access properties or fields, which most likely means that you need to expand those as well, and most of the time when working with properties you need to access the fields.

Also this scheme isn't compatible with VS.


Yes, and what is the problem with expanding a region when you need to access it? Of course we will need to expand a region if we want to access it.

And why isn't that scheme compatible with VS? I just create a ClassWithRegionsTemplate, and whenever I want to add a new class I can choose this template and it will give me a starting empty class with the regions already in place. Like this:

using System;
using System.Collections.Generic;
 
namespace TestProject
{
    public class test
    {
        #region Fields
 
        #endregion
 
        #region Properties
 
        #endregion
 
        #region Constructors
 
        public test()
        {
 
        }
 
        #endregion
 
        #region Methods
 
        #endregion
 
    }
}

You guys do know how to create a class template, don't you?

Also, for those that like to see everything expanded: go to Tools > Options > Text Editor > C# > Advanced and uncheck "Enter outlining mode when files open", and whenever you open a file, everything will be expanded.
Jan 26, 2008 at 6:01 PM
One example of this is the prop snippet in VS; it's not compatible with this. And there are many external tools which aren't compatible with whatever region schema you are using.

If most in the team have to do this, I still do not see the reason for implementing it.
Jan 26, 2008 at 6:18 PM


Sturm wrote:
One example of this is the prop snippet in VS; it's not compatible with this. And there are many external tools which aren't compatible with whatever region schema you are using.

If most in the team have to do this, I still do not see the reason for implementing it.


What? Yes, it is! But you have to move the field to the Fields region, or you can do what I did: I made my own prop snippet that only writes a property; I removed the field declaration from the snippet. This way, I write a field first and later add a property using this new snippet.
Jan 26, 2008 at 6:49 PM

xnasorcerer wrote:
What? Yes, it is! But you have to move the field to the Fields region, or you can do what I did: I made my own prop snippet that only writes a property; I removed the field declaration from the snippet. This way, I write a field first and later add a property using this new snippet.


I can't see the logic in every team member having to change VS and their tools in order to comply with a coding standard; in my mind that's just plain wrong. We certainly do not want to impose this on everyone who is ever going to write some code for the engine.
Jan 26, 2008 at 7:25 PM


Sturm wrote:

xnasorcerer wrote:
What? Yes, it is! But you have to move the field to the Fields region, or you can do what I did: I made my own prop snippet that only writes a property; I removed the field declaration from the snippet. This way, I write a field first and later add a property using this new snippet.


I can't see the logic in every team member having to change VS and their tools in order to comply with a coding standard; in my mind that's just plain wrong. We certainly do not want to impose this on everyone who is ever going to write some code for the engine.


First, no one has to change their VS. You only need to add two files:
One inside here: "\Microsoft Visual Studio 8\Common7\IDE\VCSExpress\ItemTemplates\1033"
And the other here: "\Microsoft Visual Studio 8\VC#\Snippets\1033\Visual C#"

Both files together are only 10 KB, and they don't change anything in VS; they only add two features.
Templates and snippets are meant to be used!
Also, no one is imposing anything! It is just one more option. If you don't want a property snippet that only creates a property without a field, well, just don't use it.
You can even make a snippet for adding comments like this:
        /// <summary>
        /// <see cref="MyProperty"/>
        /// </summary>
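
For what it's worth, here is a sketch of what such a field-less property snippet file could look like. It follows the standard VS 2005 code snippet XML schema; the snippet name (propnf) and the literal names are made up for illustration:

```xml
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>propnf</Title>
      <Shortcut>propnf</Shortcut>
      <Description>Property only, without the backing field declaration</Description>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>type</ID>
          <Default>int</Default>
        </Literal>
        <Literal>
          <ID>property</ID>
          <Default>MyProperty</Default>
        </Literal>
        <Literal>
          <ID>field</ID>
          <Default>myField</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp"><![CDATA[public $type$ $property$
{
    get { return this.$field$; }
    set { this.$field$ = value; }
}$end$]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```

Drop a file like this into the Snippets folder mentioned above and the shortcut shows up in IntelliSense just like the built-in prop snippet, only without declaring a field.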
Jan 26, 2008 at 8:16 PM

xnasorcerer wrote:
First, no one has to change their VS. You only need to add two files:
One inside here: "\Microsoft Visual Studio 8\Common7\IDE\VCSExpress\ItemTemplates\1033"
And the other here: "\Microsoft Visual Studio 8\VC#\Snippets\1033\Visual C#"
Both files together are only 10 KB, and they don't change anything in VS; they only add two features.


That's the same thing; it's not an option to change existing VS files.


xnasorcerer wrote:
Templates and snippets are meant to be used!


I'm not implying otherwise. I guess that you do not use the BE; otherwise you'd see that there is an added snippet (called unittest) which adds a new unit test function to a file.


xnasorcerer wrote:
Also, no one is imposing anything! It is just one more option. If you don't want a property snippet that only creates a property without a field, well, just don't use it.
You can even make a snippet for adding comments like this:


I'm using snippets in this and a slightly more elaborate manner daily. I'm aware that you can add snippets for just doing the property or attribute part, but the whole power of the prop snippet is that it's all together in one nice, easy-to-edit package.
Jan 28, 2008 at 6:51 AM

xnasorcerer wrote:
Also, for those that like to see everything expanded: go to Tools > Options > Text Editor > C# > Advanced and uncheck "Enter outlining mode when files open", and whenever you open a file, everything will be expanded.


Just because I like everything expanded doesn't mean I do not use outlining. I use it heavily, as I often need to collapse some methods in order to get an overview, either during refactoring or during coding. But no, here regions wouldn't help, as the code is too internally connected to make any sensible regions, unless you make Fields/Properties/Methods regions, in which case I would have to expand every one anyway.
Jan 30, 2008 at 11:52 AM
I guess that a few people here aren't looking forward to C# 3.0 :P

C# 3.0 new language features

...
Automatic properties: The compiler will automatically generate a private instance variable and the appropriate getter and setter given code such as: public string Name { get; private set; }


Another reason that we should really enforce properties over non-private fields; we don't want any nasty surprises when going to 3.0.
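
For anyone who hasn't seen the feature yet, here is a small sketch of what the quote describes (the Entity/EntityExpanded names are made up): in C# 3.0 the automatic property compiles to roughly the same thing as the hand-written C# 2.0 version below it.

```csharp
using System;

// C# 3.0 automatic property: the compiler generates a hidden backing field
// plus get_/set_ accessor methods, just like a hand-written property.
public class Entity
{
    public Entity(string name) { this.Name = name; }

    public string Name { get; private set; }
}

// What the compiler emits is roughly equivalent to this C# 2.0 version:
public class EntityExpanded
{
    private string name;

    public EntityExpanded(string name) { this.name = name; }

    public string Name
    {
        get { return this.name; }
        private set { this.name = value; }
    }
}
```

So any public field we leave in the engine now is one that can't silently become an auto-property later without a recompile of everything that touches it.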
Jan 30, 2008 at 3:44 PM
To the contrary, new language features are usually good. I find LINQ especially interesting, despite its shortcomings.

My issue with properties isn't so much with the language syntax as it is with run-time code generation. The .NET motto is: do everything at run-time, don't let the compiler do shit. For obvious compatibility reasons, you can't do many interface-changing optimizations, but there are IL-level optimizations that can be done with knowledge of the run-time environment.

In a perfect world, this isn't a bad idea. If you can maintain enough information about the original program structure in the form of metadata, a run-time optimizer has a better chance than a compile-time optimizer. Add in run-time profile analysis and dynamic recompilation and you have an amazing optimization potential. However, the current CLR has proven to me time and time again that it sucks at code optimization. Everyone knows the Xbox code generator is borderline horrible, but even the Desktop CLR has many, many issues. .NET can be so much better than it currently is in the high performance category if they just spent a chunk of time developing a good CLR that caters to high-performance code, not just business code.

The point I'm trying to make is: I'm not against language syntax. I find a lot of C# to be very elegant. However, I cringe when I see value-type property accesses in tight loops. Not because it's ugly syntax, but because I know the generated x86 assembly is going to involve lots of unnecessary calls and memory copies.

Speaking of this, the feature I would want in C# above all others is ref properties and const ref properties. Something like:

    public Matrix World
    {
        const ref get
        {
            return this.world;
        }
        set
        {
            this.world = value;
        }
    }

The semantics would work as follows:
  • const ref get: When the getter is called, return a reference to the local class's data member, not a copy. The const keyword will tell the compiler that any uses of this reference cannot involve modifications to the instance.
  • ref get: When the getter is called, return a reference to the local class's data member, not a copy. The difference here is that the reference can be used to change the instance data.

There are issues with these semantics when considering them as a whole, but something along these lines would be good. Of course, a good code analysis could identify candidates here and generate optimized code appropriately without language syntax.
Coordinator
Jan 30, 2008 at 3:45 PM
Properties are fine if they're not causing a performance overhead, like copying of vectors and/or matrices thousands of times per frame.

It's the same argument that happened when C++ came out and people were arguing it would never perform well enough. But for that same reason some people still need to use C in their code, just as we may not be able to propertize(?) everything.
Jan 30, 2008 at 6:45 PM
Well, I'm still a strong proponent of PIMPL and can't see any performance issues there. The real problem is that you aren't using the right patterns; you shouldn't have to update the properties thousands of times per second. We should look into what patterns are in play and how we can optimize this.
Jan 30, 2008 at 8:47 PM
That's the difference between business-class apps and game/high-performance apps, especially when you consider the automatic properties feature of C# that you mentioned. When even internal class methods go through properties to get at data, you're going to rack up significant overhead in tight processing loops. I haven't played around with C# 3.0 too much yet, but I can see potential issues there.

As a classic example for me, take a look at a physics simulator. You have n bodies, each with a handful of dynamic data, including position, velocity, orientation, angular velocity, accumulated force, accumulated torque, and the list goes on. For each simulation iteration, you may read and write each of these pieces of data at least once for each body. So let's say that's a conservative 10n accesses, 5 reads and 5 writes per body. Now, with a lot of modern, real-time solvers, you take an iterative approach. So let's say you do 20 iterations of simulation for each time step. We now have 200n accesses per simulation step. At 60 Hz, that's 12000n accesses.

So let's take a closer look at data access patterns. Clearly it makes sense to store the data within the physics body itself. So who accesses the data? Does the simulation core handle all of this and incur 12000n property accesses, or do we push the work off to the bodies and incur 20 * 60 * n = 1200n method calls? This isn't a trivial decision; neither is that great an option. Doing 12000n indirections for direct member access is going to be significantly faster than 1200n method calls, especially when each method call will be doing the memory references anyway. So in reality you have 1200n method calls + 12000n memory accesses instead of just 12000n memory accesses. In the case of having a central solver that visits the bodies to get data, you have 12000n property accesses + 12000n memory copies on potentially large data types + 12000n memory references. Not to mention all of the calls to accessors and all of the .NET checks will thrash smaller caches in tight loops.
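
The counting above can be sketched as a quick back-of-the-envelope calculation (the constants are taken straight from the example; everything is per body, so multiply by n):

```csharp
using System;

public static class AccessCount
{
    public static void Main()
    {
        // Figures from the physics example above, all per body of n bodies.
        const int accessesPerIteration = 10;  // 5 reads + 5 writes
        const int solverIterations = 20;      // iterative solver passes per step
        const int stepsPerSecond = 60;        // 60 Hz simulation

        int perStep = accessesPerIteration * solverIterations;  // 200 per body per step
        int perSecond = perStep * stepsPerSecond;               // 12000 per body per second

        Console.WriteLine("accesses per step:   " + perStep + "n");
        Console.WriteLine("accesses per second: " + perSecond + "n");
    }
}
```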

Of course, most of this is dependent on the CLR run-time code generator, not the C# language itself. In a perfect world, the CLR would take care of all of this and transform property accesses to direct accesses when it can and use references/stack instances when it needs to. However, this does not happen currently and is why we need to worry about such issues.
Coordinator
Jan 30, 2008 at 9:07 PM
^^^ What he said
Jan 31, 2008 at 4:47 AM
If the only objection to the pattern is performance, then let's have a look at the performance issues. As you say, there are 5 reads and 5 writes; the reads are not something we can really do anything about, as long as they are on different properties. The writes, on the other hand, can certainly be handled differently: instead of doing 5 individual writes at different points, you could do a single write setting everything at once. But we need to look into the communication between the physics and the scene in order to do any perf optimization here.

But as you say, the only reason we are having this discussion is that the CLR on the Xbox isn't optimized for any of this, and the only known mitigation is to use ref parameters. I'm just saying that while we of course should do what we can to optimize for performance, I also think the CLR will handle this better in future releases, and I would really not like to go back and rework everything when that happens.
Coordinator
Jan 31, 2008 at 4:59 AM
Part of the issue is priorities, and everyone's are different.

For example, I place performance above all, especially because we're using XNA and the CLR. After performance I prefer readability, organization, and consistency of code.

I'd rather not go back and change the code later either, so I'd say we don't go back and change it all. We've already said most people won't need to bother with the engine core anyway, so as long as we're consistent with the format now, it doesn't even sound like an issue. I'm ok with coding for performance as long as we do it right.
Jan 31, 2008 at 1:45 PM
I wasn't referring to the physics <-> scene interaction; that was an example of the kind of data flow that could occur internally in a physics simulator. It's not just the Xbox CLR, either. The Desktop CLR isn't exactly good at optimizing this stuff. I'm just not comfortable with saying "The CLR does X poorly now, but we shouldn't change anything because it might do it a little better in a couple of versions."

All I'm saying is you need to balance the language features you want to use. I'm all for making the code as clean as possible, but it has to be done within reason.
Jan 31, 2008 at 2:58 PM
I totally agree, but I'm just opposing the notion that we make fields public because there might be a suitable performance improvement there. I would say we do that once we have proven it, and proven that using update methods with ref params won't improve things enough.
Coordinator
Jan 31, 2008 at 3:08 PM
I thought Shaw did tests a couple of months ago. I remember seeing that using properties/accessors for matrices and vectors caused copying and quite a bit of slowdown.

Either way, maybe we need a more comprehensive test, or at least a re-posting of those results.
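
A minimal sketch of what such a test could look like. This is not the original benchmark; Mat44 is a made-up 64-byte struct standing in for XNA's Matrix so the sketch compiles without the XNA assemblies. It times the copy-returning property getter against an out-parameter accessor:

```csharp
using System;
using System.Diagnostics;

// Hypothetical 64-byte value type standing in for XNA's Matrix.
public struct Mat44
{
    public float M11, M12, M13, M14,
                 M21, M22, M23, M24,
                 M31, M32, M33, M34,
                 M41, M42, M43, M44;
}

public class Body
{
    private Mat44 world;

    public Body() { this.world.M11 = 1f; }

    // Property getter: returns a copy of the whole struct on every call.
    public Mat44 World
    {
        get { return this.world; }
    }

    // Out-parameter accessor: copies straight into caller-provided storage.
    public void GetWorld(out Mat44 m)
    {
        m = this.world;
    }
}

public static class AccessorBenchmark
{
    public static void Main()
    {
        Body body = new Body();
        Mat44 m = new Mat44();
        const int N = 10000000;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) { m = body.World; }
        sw.Stop();
        Console.WriteLine("property getter: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) { body.GetWorld(out m); }
        sw.Stop();
        // Printing m.M11 keeps the loops from being optimized away entirely.
        Console.WriteLine("out parameter:   " + sw.ElapsedMilliseconds + " ms (" + m.M11 + ")");
    }
}
```

The absolute numbers will vary by CLR version and platform; the point is only to make the copy overhead measurable rather than argued from memory.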
Jan 31, 2008 at 4:20 PM
Property accesses on value types cause one, potentially two, copies per call. No ifs, ands, or buts. If you don't believe me, go look at x86 assembly dumps with cordbg. The humorous irony is that property access overhead was actually slightly less on Xbox due to its super-fast RAM.

Note that I'm not advocating global public variable access. I'm just saying we need to write coding guidelines that allow for this eventuality when it arises. I'm also saying that I am unconvinced that these kind of optimizations can be taken care of with different data access patterns.

I'll state my main point again just to be clear: It is not the C# language and its syntax that bothers me. The current state of the CLR (both Desktop and Xbox) requires C#-level optimization. We cannot just naively write "clean" code and leave all optimization to the CLR. Nor can we put all of our faith on what comes with the .NET 4.0 CLR or the XNA 3.0 Xbox CLR.
Coordinator
Jan 31, 2008 at 6:22 PM
In cases where we would have property accesses on value types, I propose we simply use standard OO practices. C++ users have no problems without accessors, and the .NET standard for C# that uses properties for everything possible certainly does not take into account high-performance applications, so I'm not sure why it is being considered seriously.
Jan 31, 2008 at 6:52 PM
But accessors suffer from the same problem, if you're not careful. What do you think C# properties really are? They're nothing more than syntactic sugar for accessors. When you write a property with a getter and a setter, the compiler internally creates get_ and set_ methods. For instance:

    public Matrix MyMatrix
    {
        get { return this.myMatrix; }
        set { this.myMatrix = value; }
    }

translates to

    public Matrix get_MyMatrix()
    {
        return this.myMatrix;
    }
    public void set_MyMatrix(Matrix mat)
    {
        this.myMatrix = mat;
    }

So why don't you see C++ programmers complain about accessor speed? Two reasons:
  • Inlining
  • Pass-by-ref semantics

In C++, you would write accessors like:

    inline Mat44& GetMyMatrix()
    {
        return myMatrix;
    }
    inline void SetMyMatrix(const Mat44& mat)
    {
        myMatrix = mat;
    }

To get the same thing in C# requires a bit of creativity with ref/out semantics. And you have to hope and pray that the CLR inlines these methods and doesn't do something stupid like keeping around a temporary copy.

    public void GetMyMatrix(out Matrix mat)
    {
        mat = this.myMatrix;
    }
    public void SetMyMatrix(ref Matrix mat)
    {
        this.myMatrix = mat;
    }

Clearly this isn't very elegant, and you still can't get quite the same power out of the syntax, especially with the getter. You could potentially still incur the copying costs if you're not careful.
Coordinator
Jan 31, 2008 at 7:05 PM
We need to find a way that is clean, looks good, is easily understandable, is flexible, but most importantly, is high-performance.