Screen modes come in several flavours, based on how many bits are used to store the color of each pixel on the screen. Naturally, the more bits you use per pixel, the more colors you can display at once; but there is also more data to move into graphics memory to update the screen.
These modes are typically available in the following resolutions: 320x200, 320x240, 640x480, 800x600, 1024x768 and 1280x1024, with 640x480 probably being the most common mode for running games in at the moment.
Monitors generally have a width that is 4/3 times their height (called the aspect ratio); so in modes where the number of pixels along the width is 4/3 times the number of pixels along the height, the pixels have an aspect ratio of 1, and are thus physically square. That is to say, 100 pixels in one direction should be the same physical length as 100 pixels in a perpendicular direction. Note that 320x200 does not have this property; in 320x200, pixels are actually stretched to be taller than they are wide.
There are a number of different ways that colors can be represented, known as "color models". The most common one is probably RGB (Red, Green, Blue). Nearly all visible colors can be produced by combining, in various proportions, the three primary colors red, green and blue. These are commonly stored as three bytes - each byte represents the relative intensity of one primary color as a value from 0 to 255 inclusive. Pure bright red, for example, would be RGB(255,0,0). Magenta would be RGB(255,0,255), grey would be RGB(150,150,150), and so on.
Here is an example of some C code that you might use for representing RGB colors.
typedef struct { unsigned char r, g, b; } SColor;

SColor make_rgb( int r, int g, int b )
{
    SColor ret;
    ret.r = r; ret.g = g; ret.b = b;
    return ret;
}
Alternatively, you may want to store an RGB color in a single unsigned 32-bit integer: bits 0 to 7 store the blue value, bits 8 to 15 the green, bits 16 to 23 the red (the top byte is often left unused, or used for alpha).
typedef unsigned int rgb_color;

#define MAKE_RGB(r,g,b) ( ((r) << 16) | ((g) << 8) | (b) )
Anyway, I'm rambling now.
There are other color models, such as HSV (Hue, Saturation, Value), but I won't be going into them here. The book "Computer Graphics: Principles and Practice" by Foley & van Dam (often referred to as The Computer Graphics Bible) explains color models in some detail, including how to convert between them.
In high-color and true-color modes, the pixels on the screen are stored in video memory as their corresponding RGB make-up values. For example, if the top left pixel on the screen was green, then (in true-color mode) the first three bytes in video memory would be 0, 255 and 0.
In high-color modes the RGB values are specified using (if I remember correctly) 5, 6 and 5 bits for red, green and blue respectively, so in the above example the first two bytes in video memory would be, in binary: 00000111 11100000.
Indexed color modes use the notion of a color "look up table" (LUT). The most common of these modes is 8-bit, better known as 256 color mode. Each pixel on the screen is represented by a single byte, which means that up to 2^8 = 256 different colors can be displayed on the screen at once. The colors assigned to each of these 256 indexes are stored as 3-byte RGB values in the LUT, and these colors are used by the graphics hardware to determine what color to display on the screen.
Creating an application using indexed modes can be a pain, especially for the graphics artist, but there are sometimes advantages to using indexed modes: each pixel takes only one byte, so less memory is used and less data has to be moved to update the screen; and because changing a LUT entry instantly changes every pixel drawn with that index, you get cheap effects such as fades, flashes and color cycling simply by manipulating the palette.
ModeX is a special type of VGA 256 color mode in which the contents of graphics memory (i.e. what appears on the screen) are stored in a somewhat complex planar format. The resolution of ModeX modes isn't very high. DirectDraw knows how to write to ModeX surfaces, but the Windows GDI doesn't, so be careful when trying to mix GDI and DirectDraw with ModeX surfaces. When setting the DirectDraw fullscreen mode, you can choose whether or not DirectDraw is allowed to create ModeX surfaces. These days you probably want to avoid ModeX.
Even though the screen resolution might be, say, 640x480 at 32 bits per pixel, this does not necessarily mean that each row of pixels will take up 640*4 bytes in memory. For speed reasons, graphics cards often store surfaces wider than their logical width (a trade-off of memory for speed). For example, a graphics card that supports a maximum of 1024x768 might store all modes from 320x200 up to 1024x768 as 1024x768 internally. This leaves a "margin" on the right side of a surface. The actual allocated width of a surface is known as the pitch or stride of the surface. It is important to know the pitch of any surface whose memory you are going to write into, whether it is a 2D DirectDraw surface or a texture map. The pitch of a surface can be queried from DirectDraw (it is returned in the lPitch member of the DDSURFACEDESC structure when you Lock() the surface).
Text diagram illustrating pitch:
Display memory:

+--------------------------+-------------+
|                          |             |
| <---- screen width ----> |             |
|                          |             |
| <----------- pitch/stride -----------> |
|                          |             |
|                          |             |
+--------------------------+-------------+
A bitmap is an image that is stored on the computer as a rectangular array of pixel values. A sprite is the same thing as a bitmap, except that it normally refers to a bitmap with transparent areas (exact definitions of "sprite" vary from programmer to programmer). Sprites are an extremely important component of games; they have a million and one uses. For example, your mouse cursor qualifies as a sprite. The monsters in DOOM are also sprites: flat images with transparent areas that are programmed to always face you. Note that it is the sprite that always faces you - this doesn't mean the monster is facing you. Anyway, enough said about bitmaps and sprites, I think.
If your game did all its drawing straight to the current display, the user would notice horribly flickery artefacts as the elements of the game got drawn onto the screen. The solution to this is to have two graphics buffers, a "front buffer" and a "back buffer". The front buffer is visible to the user, the back buffer is not. You do all your drawing to the back buffer, and then when you have finished drawing everything on the screen, you copy (or flip) the contents of the back buffer into the front buffer. This is known as double buffering, and some sort of double buffering scheme is used in virtually every game.
There are generally two ways to perform the transfer of the back buffer to the front buffer: copying, where the contents of the back buffer are blitted into the front buffer and the two buffers keep their roles; and page-flipping, where the display hardware is simply pointed at the back buffer, so the two buffers swap roles. Page-flipping is faster, since no pixel data actually moves, but it generally requires both buffers to be in video memory.
A problem that can arise from this technique is "tearing". Your monitor redraws the image on the screen fairly frequently, normally at around 70 times per second (70 Hertz), and it normally draws from top to bottom. Now, it can happen that the monitor has only drawn half of its image when you instruct it to start displaying something else, using either of the two techniques described above. When this happens, the bottom half of the screen is drawn using the new image while the top half still shows the old image. The visual effect this produces is called tearing, or shearing. A solution exists, however: it is possible to time your page flipping to coincide with the end of a screen refresh (the vertical retrace). I'll stop here though, having let you know that it is possible. (As it happens, DirectDraw synchronizes full-screen flips with the vertical retrace, so it largely handles this for you.)
Clipping is the name given to the technique of preventing drawing routines from drawing off the edge of the screen or another rectangular bounding area, such as a window. If clipping is not performed, the general result could best be described as a mess. In DirectDraw, for example, when using windowed mode, Windows basically gives DirectDraw the right to draw anywhere on the screen that it wants to. However, a well-behaved DirectDraw application would normally only draw into its own window. DirectDraw has an object called a "clipper" that can be attached to a surface to prevent it from drawing outside of the window.
DirectDraw uses "surfaces" to represent any section of memory, either video memory or system memory, that is used to store bitmaps, texture maps, sprites, or the current contents of the screen or a window.
DirectDraw also provides support for "overlays": a special type of sprite. An overlay is normally a surface containing a bitmap with transparent sections that is "overlaid" on the entire screen. For example, a racing car game might use an overlay for the image of the cockpit controls and window frame.
The memory a DirectDraw surface uses can be lost in some circumstances, because DirectDraw has to share resources with the GDI. It is necessary for your application to check regularly that this hasn't happened (IDirectDrawSurface::IsLost), and to restore the surfaces (IDirectDrawSurface::Restore) if it has.
All DirectX functions return an HRESULT as an error code. Since DirectX objects are based on the COM architecture, the correct way to check whether a DirectX function has failed is to use the macros SUCCEEDED() and FAILED(), with the HRESULT as the parameter. It is not sufficient merely to check whether, for example, your DirectDraw HRESULT is equal to DD_OK, since COM methods can have multiple distinct success codes. Your code will probably still work, but technically it is the wrong thing to do.
Something to be on the lookout for is that some DirectX functions return failure codes when they succeed. For example, IDirectPlay::GetPlayerData will "fail" with DPERR_BUFFERTOOSMALL when you are merely asking for the data size. This behaviour isn't documented either, which is incredibly frustrating. There aren't many of these cases, but be on the lookout.
When you install the DirectX SDK you get a choice of whether to install the retail version of the libraries or the debug version. The debug version will actually write diagnostic OutputDebugString messages to your debugger, which can be very useful. However, it slows things down a LOT - if you have anything less than a Pentium 166, choose the retail libraries instead. Also, if you mainly want to play DirectX games, install the retail version. If you mainly want to do DirectX development, and your computer is quite fast, install the debug version. If you want to do both, then you should probably use the retail libraries, unless you have a very fast computer that can handle the debug version. I normally install the retail version, but the debug version can be quite useful for people starting out.