
By Sam Parker
Published: 2/6/2002

It was just less than a year ago that the GeForce3 was announced. That generation started us talking about concepts like vertex and pixel shaders, which may still seem a little abstract to some of us. The shader effects that the GeForce3's DirectX compliance makes possible still haven't found their way into more than a few games. But in some ways, it's a short jump from the GeForce3 to the Nvidia graphics chip in the Xbox, and there have already been some impressive Xbox games that have been fully optimized for the new effects. There's no doubt that PC games are showing increasing support for the power of this new generation of cards.

The GeForce4 is the next step for Nvidia, and it's a big one. For the sake of convenience (and slick marketing), Nvidia is actually grouping two rather different chip and card designs with the GeForce4 name. Time will tell if this grouping is more confusing than convenient for end users. Listen up, because we'll lay out all the differences for you. And we've run two GeForce4 boards through our lab tests, so there are a number of performance graphs to check out too.

[Screenshot: The GeForce4 Ti 4600 card has a cool heatsink.]

The successor to the GeForce3 is the GeForce4 Ti. And just to be clear, this new chip--at least in its fastest configurations--does have more raw power than the chip in the Xbox. (Isn't it good to have bragging rights back on the PC?) The fastest GeForce4 Ti runs fast and hot--Nvidia has managed to get clock speeds up to 300MHz for the core and 325MHz for the memory. That's not as high as early Web rumors suggested, but it's certainly enough to consume a lot more power and to require a fancy new heat sink. The card's higher power requirements explain the forest of capacitors at the end of a longer (8.5-inch) board. Having a whopping 128MB of DDR memory onboard also means that there are memory chips on both sides of the board, much like the recently announced 128MB Radeon 8500. The GeForce4 Ti line will come in three versions, ranging in price from $200 to $400.

[Screenshot: The smaller GeForce4 MX 460 card.]

Not everyone needs or can afford a GeForce4 Ti. Sales numbers show that the vast majority of the add-in graphics cards that are sold cost less than $200, with about an even split between the sub-$100 and the $100-to-$200 range. So it should be no surprise that Nvidia will update its successful GeForce2 products to pack more performance into low-cost cards. The result is the GeForce4 MX, a series of cards that combines a simpler dual-pipeline core with fast DDR memory and high clock speeds. The $179 GeForce4 MX 460, the fastest in the new MX family, is clocked at 300MHz for the chip and 275MHz for the DDR memory. These speeds earn the card the right to carry the same spiffy heat sink as its Ti cousins. But despite a name that makes these cards automatically seem superior to the GeForce3, be sure to note that they completely lack the hardware for advanced vertex and pixel shader effects.

There's plenty that's new in the GeForce4 chips, but first let's look at how the basic specs stack up for all the new variants.


Raw speed never hurts, and the most obvious difference between the new cards lies with core and memory speeds. In the GeForce4 Ti, Nvidia has managed much higher speeds on a chip that's about 10 percent larger without shrinking the .15 micron production process introduced last year. Nvidia decided to add the GeForce4 Ti 4200 to the lineup in response to ATI's announcement of the $199 Radeon 8500LE, and this goes to show that we should continue to see sub-$200 high-performance cards. But the recent addition of this model means that it won't launch at the same time as the other GeForce4 Ti cards. The GeForce4 MX cards will be the first to market and are set to arrive in stores in the next week or two.

                        GF4 Ti 4200   GF4 Ti 4400   GF4 Ti 4600   GF4 MX 460   GF4 MX 440
Price (list)            $199          $299          $399          $179         $149
Core clock (MHz)        225           275           300           300          270
Memory clock (MHz)      500           550           650           550          400
DX8 vertex shaders      2             2             2             none         none
DX8 pixel shader std.   1.3           1.3           1.3           HW T&L       HW T&L
Pipelines               4             4             4             2            2
Texture units           2             2             2             2            2
Transistor count        63M           63M           63M           --           --
Manufacturing process   0.15 micron   0.15          0.15          0.15         0.15
Availability            April         March         March         February     February

                        GF4 MX 420    GF3 Ti 200    GF3 Ti 500    Radeon 8500LE 128MB   Radeon 8500 128MB
Price (list)            $99           $199          $349          $199                  $299
Core clock (MHz)        250           175           240           250                   275
Memory clock (MHz)      166 SDR       400           500           500                   550
DX8 vertex shaders      none          1             1             2                     2
DX8 pixel shader std.   HW T&L        1.3           1.3           1.4                   1.4
Pipelines               2             4             4             4                     4
Texture units           2             2             2             2                     2
Transistor count        --            57M           57M           60M                   60M
Manufacturing process   0.15 micron   0.15          0.15          0.15                  0.15
Availability            February      Released      Released      March                 March

But Nvidia hasn't added just raw power to these cards. The GeForce3 improved rendering efficiency with several key features in what Nvidia calls the Lightspeed Memory Architecture, and the GeForce4 cards include these and further efficiency improvements. The GeForce4 Ti also greatly increases the performance of vertex and pixel shaders, in what's now dubbed nFiniteFX II. As for video features, the TwinView dual-monitor support of the GeForce2 MX has been added directly to the GeForce4 chips and renamed "nView." The nView software makes it easier than ever to use the extra desktop space that multiple monitors provide. Nvidia has also finally added iDCT support to the GeForce4 MX to help out with DVD decoding. Since higher-end systems are less likely to need help decoding DVD video, Nvidia hasn't included this feature in the GeForce4 Ti.

New Features

What helps clearly separate the performance of the GeForce4 Ti from the GeForce3 is the beefed-up nFiniteFX II processing for shader effects. The new chip has two programmable vertex pipelines, compared with the GeForce3's single pipeline. In this way, it catches up with the Xbox graphics chip, which also has two. The extra pipeline helps the GeForce4 Ti pump out three times the geometry power of the GeForce3. As for pixel performance, the chip can claim a 50 percent increase with pixel effects, like per-pixel bump mapping, a direct result of higher clock speeds. One of the new tech demos shows how the GeForce4 Ti can actually calculate collisions with bump maps, so that small waves lapping against a bumpy shore won't strangely ignore the height of the rocks' bumps. On the whole, more hardware performance for vertex and pixel shaders makes it practical for game developers to add more, and more complex, effects.
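To make the idea of a programmable vertex pipeline concrete, here's a minimal Python sketch (purely illustrative, not Nvidia's hardware or API): a vertex shader is conceptually a small program run once per vertex, and a second pipeline simply lets the chip run that program on two vertices at once.

```python
def vertex_shader(position, mvp):
    """Transform one 3D vertex by a 4x4 model-view-projection matrix."""
    x, y, z = position
    v = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(mvp[row][i] * v[i] for i in range(4)) for row in range(4))

# With an identity matrix, each vertex passes through unchanged.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
transformed = [vertex_shader(p, identity) for p in [(1.0, 2.0, 3.0), (0.0, -1.0, 4.0)]]
```

Real shaders of this era also computed per-vertex lighting and texture coordinates in the same small program, but the transform above is the core of the job.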

Improvements in efficiency help raw power translate into practical performance. CPUs have long had small memory buffers to keep things running smoothly, and the GeForce4 Ti does something very similar. Four different parts of the chip have dedicated caches that hold commonly used data that would otherwise have to be fetched from the card's DDR memory. The GeForce4 Ti's QuadCache feature stores primitive, vertex, texture, and pixel data in a way that isn't present in the GeForce3. Another efficiency savings comes from making the chip more aggressive about rejecting pixels that would never appear in the final frame because they're obscured by an object. Occlusion culling, the process of removing hidden surfaces, was an approach made popular by PowerVR's graphics hardware, though the approach in today's chips from Nvidia and ATI is rather different. In any case, the GeForce4 is about 25 percent better at throwing unseen pixels out than the GeForce3 is.
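The pixel-rejection idea can be sketched in a few lines of Python (a toy software rasterizer with hypothetical names, not how the chip actually works): a depth buffer lets occluded pixels be discarded before any shading work is spent on them.

```python
def rasterize(spans, width):
    """spans: (start_x, end_x, depth) opaque horizontal spans, drawn front to
    back. Returns (shaded_count, rejected_count)."""
    INF = float("inf")
    zbuffer = [INF] * width
    shaded = rejected = 0
    for start, end, depth in spans:
        for x in range(start, end):
            if depth < zbuffer[x]:   # visible: shade it and record its depth
                zbuffer[x] = depth
                shaded += 1
            else:                    # occluded: reject before any shading work
                rejected += 1
    return shaded, rejected

# A near span (depth 1.0) hides part of a far span (depth 5.0) drawn later.
print(rasterize([(0, 4, 1.0), (2, 8, 5.0)], 8))  # (8, 2)
```

The more pixels a chip can reject this cheaply, the more of its fill rate and memory bandwidth is left for pixels that actually reach the screen.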

Nvidia has renamed its antialiasing feature "Accuview AA," but the major difference is just one new quality mode. In the control panel, there's now a "4XS" mode that uses more texture samples to create a higher-quality image. There are more chip resources devoted to Nvidia's multisample Quincunx antialiasing method, so it should now be very close to the performance levels we see when running the lower-quality 2X mode. Antialiasing continues to be a good option when you have performance to burn, as is often the case with today's best cards. But antialiasing does have a significant, though reasonable, cost in performance, and particularly demanding games coming in the future--like Doom III--aren't likely to leave you much extra performance for such niceties.
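For readers new to antialiasing, here's a minimal Python sketch of the supersampling idea behind modes like 4XS (illustrative only; the hardware's sample patterns are more sophisticated): render at a higher resolution, then average blocks of samples down to display pixels so edges blend instead of stair-stepping.

```python
def downsample_2x(img):
    """img: 2D list of grayscale values rendered at 2x resolution.
    Averages each 2x2 block into one display pixel."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

# A jagged edge (one black sample among white) becomes a soft gray pixel.
print(downsample_2x([[0, 255], [255, 255]]))  # [[191.25]]
```

The cost is obvious from the sketch: four samples per displayed pixel means roughly four times the fill and bandwidth work, which is why antialiasing demands performance to burn.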

Tech Demos

Nvidia is great at making in-house tech demos. From the stunning demos at the first Xbox announcement to the "experience" demos that accompany each major PC chip release, the company's internal development staff has a knack for showing off the cool features and performance of new hardware. Tech demos often pare down such game essentials as physics, AI, or even nice-looking background environments, all just to show off one particularly impressive element. This group of demos is no different, and while some will be blown away by the sheer graphics power represented here, others will simply wonder when such graphics will make it into actual PC games.


The Wolfman model has 100,000 polygons, 61 bones for skeletal animation, and eight layers of fur. This impressive demo is a good example of how fur can now be simulated with shader effects. Concentric shells of polygons add layers of textured hair fins to the skin, which are alpha-blended into the background. There's per-pixel anisotropic lighting on the hair textures to capture the way strands of hair pick up light. It's also easy to see how the skeletal animation system bends and smooths the model's skin as it runs.


This demo of a robotic dancer shows how particle physics can be used to simulate flowing clothing. The physics of the dress and the chain on the head are both simulated in real time, as you can see when the dancer stops, but their momentum keeps the particle-based objects moving smoothly.
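The particle-based approach described above is commonly implemented with Verlet integration; here's a minimal Python sketch (all constants hypothetical) of a hanging chain of particles whose stored previous positions carry momentum, just as the dancer's dress keeps swinging after she stops.

```python
def step(pos, prev, gravity=-0.1):
    """One Verlet step: new = 2*current - previous + acceleration."""
    return [(2 * x - px, 2 * y - py + gravity)
            for (x, y), (px, py) in zip(pos, prev)]

def constrain(pos, rest=1.0):
    """Pull each particle back to its link's rest length, from the anchor out."""
    pos = list(pos)
    for i in range(1, len(pos)):
        ax, ay = pos[i - 1]
        dx, dy = pos[i][0] - ax, pos[i][1] - ay
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        pos[i] = (ax + dx * rest / dist, ay + dy * rest / dist)
    return pos

# Drop a three-particle chain that starts out horizontal.
pos = prev = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
for _ in range(10):
    new = step(pos, prev)
    new[0] = (0.0, 0.0)  # the first particle is pinned in place, like the dancer's head
    pos, prev = constrain(new), pos
```

Because velocity is implicit in the gap between current and previous positions, the free particles keep moving smoothly even after the pinned one stops, which is exactly the effect the demo shows.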


CodeCreatures is a demo by German developer CodeCult, and it shows off the company's extremely high polygon count engine. As we first saw at last year's ECTS, this engine demo has about 200,000 to 300,000 polygons per frame, but the whole environment is about 3 million polygons, and the engine is said to be capable of 10 million polygons per second.

Since the GeForce4 MX doesn't have pixel shader support, it won't run most of the GeForce4 Ti demos (the dancer demo runs but is missing the background and surrounding effects). But Nvidia has come up with some additional demos specifically for these cards.


This demo shows a scene that's invaded by an army of bugs. This is a test of the chip's ability to handle complex scenes by culling out unseen pixels.


This demo of particle effects shows a twister gobbling up trailers and tearing them apart. Part of the point of the demo is that the graphics card handles the particle load well enough for the CPU to do the physics modeling of the destruction of this poor trailer park.


The Chameleon demo first used to show off the GeForce3's shader features has been made to run on a GeForce4 MX with a few changes--there's no refraction effect or pixel shaders to show wrinkling skin.


No matter how good the eye candy looks, it's not practical to turn on if there isn't enough performance in real games. We've had the chance to test reference cards for both the $399 GeForce4 Ti 4600 and the $179 GeForce4 MX 460, and the results were quite impressive. To be sure that the cards' performance could stand out without being tied down by a slow system, we installed the cards in a 1.6GHz Athlon XP 1900+ system with a KT266A motherboard, 512MB of PC2100 memory, and Windows XP. Do keep in mind that the fastest cards are best paired with fast CPUs and won't give the same results on a slower PC.

MadOnion's 3DMark 2001 benchmark was one of the very first to test DirectX 8 shader effects, and it includes some very pretty demo scenes. The engine used in this test is the same as the one used in Max Payne, and it gives a good first look at card performance under Direct3D.

3DMark 2001

The Radeon 8500 does very well in this test, besting the GeForce3 Ti 500, but the GeForce4 Ti 4600 blows past any previous results to hit a score of almost 10,000 points at a resolution of 1024x768. It's not surprising that the GeForce4 MX 460 doesn't fare very well in this test, because it lacks support for pixel shaders. However, this is still much better than previous generations of Nvidia cards in this price range. There's a big jump from a GeForce2 MX 400 (2,890 points) or a GeForce2 GTS (4,271).

When we compare the results to the higher-resolution and antialiasing tests, the GeForce4 Ti 4600 clearly benefits from its high 10.4GB per second of memory bandwidth and its increased efficiency. Since higher resolutions and antialiasing require moving much more data to and from the card's memory, insufficient memory bandwidth (due to slower memory speeds) strongly limits performance at higher settings.
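The 10.4GB-per-second figure follows directly from the Ti 4600's memory specs, as this quick Python check shows: a 325MHz DDR clock transfers data twice per cycle (650MHz effective) across a 128-bit bus.

```python
# The Ti 4600's memory bandwidth, from the specs quoted in this review.
effective_mhz = 650                 # 325MHz DDR clock x 2 transfers per cycle
bus_bytes = 128 // 8                # a 128-bit bus moves 16 bytes per transfer
bandwidth_gbs = effective_mhz * 1e6 * bus_bytes / 1e9
print(f"{bandwidth_gbs:.1f}GB/s")   # 10.4GB/s
```

By the same arithmetic, the MX 440's 400MHz effective rate delivers well under two-thirds of that, which is where the budget cards give ground at high resolutions.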

3DMark 2001 Theoretical Tests
High poly count - 1 light    50M triangles/sec    29.9M triangles/sec   26.3M triangles/sec   36M triangles/sec
High poly count - 8 lights   12.6M triangles/sec  7.3M triangles/sec    6.2M triangles/sec    9.7M triangles/sec
Vertex shader                97fps                47.4fps               62.8fps               87.3fps
Pixel shader                 122.5fps             not supported         89.6fps               101.4fps
Point sprites                30.4M sprites/sec    10.4M sprites/sec     17.7M sprites/sec     28.1M sprites/sec

We generally don't look this closely at the theoretical tests in 3DMark 2001, but it's one of the few specific measures of performance in vertex and pixel shaders. It's interesting to see that the GeForce4 Ti 4600 churns out twice as many triangles per second as the GeForce3 Ti 500 in the single-light geometry test. But the card's power still isn't a match for the test with eight dynamic lights, as the scene still just stutters along.

OpenGL Performance

There are still plenty of game developers working with id's Quake III engine, so this venerable benchmark is still plenty relevant. We've moved up to using the latest version of the game, 1.31, and the "four.dm_67" benchmark demo that id included zipped up in the game's directory (in the "pak6.pk3" file).

Quake III 1.31 - Four demo

Without any extra filtering or antialiasing turned on, we're now seeing very high frame rates in Quake III. Still, it's good to see that the GeForce4 MX 460 has plenty of raw rendering power. Its 150 frames per second (fps) average at 1024x768 compares with the 51fps we've seen from a GeForce2 MX or the 85.1fps of a GeForce2 GTS. Clearly, it's a long way from the previous generation of cards. With the high-end cards, it's most interesting to look at tests run with the premium visual quality options turned on. The GeForce4 Ti 4600 loses 30 percent of its 1024x768 score when 8X anisotropic filtering is turned on to make the floor and wall textures extra sharp, and it drops to half the original result when 4X antialiasing is also on. It's notable that the Radeon 8500 loses a smaller share of its performance when anisotropic filtering is turned on.

Return to Castle Wolfenstein is a very popular game based on the Quake III: Team Arena engine, and it's a good reminder of how game developers can add a lot more detail as gamers get their hands on faster hardware. We get better-looking games, but the frame rate drops back into the double-digit averages we generally expect for current games. Here we've used the "checkpoint" benchmark demo, which is available for download from various Wolfenstein community sites.

Return to Castle Wolfenstein 1.1 - Checkpoint demo

What's most impressive in these results is how little the GeForce4 Ti 4600's score drops when we move to higher resolutions. The card has plenty of performance to allow for both high resolutions and Quincunx antialiasing. In contrast, the fastest GeForce4 MX can run this game at 1024x768 with Quincunx on, but it doesn't quite have the power to keep things uniformly smooth.

How we tested: AMD Athlon 1900+, 256MB PC2100 DDR SDRAM, IBM Deskstar ATA-100, Nvidia drivers version 27.30, ATI drivers version 7.66, Creative Labs Sound Blaster Live!, Windows XP.

Quake III (v1.31) and Return to Castle (v.1.1) were set to maximum geometric and texture detail levels, with trilinear filtering. OpenGL and D3D Vsync settings off. The Windows desktop was set to 1024x768 at 75Hz.

Final Words

While those who follow Nvidia's regular product cycles have been awaiting the GeForce4's public announcement for some time, there's more to this product launch than we could have expected. Now that the test results are in, we can see that the performance jump between the GeForce3 cards and the new performance cards is quite big--by 50 percent in some very demanding tests. Generally, the smaller generational steps bring much smaller speed increases. The GeForce4 MX is also a much bigger jump over the GeForce2 cards it replaces. The GeForce2 MX will now take the very low-end place that the TNT2 has held on to, so the TNT2 should finally disappear from shipping PCs completely.

The announcement of the $199 GeForce4 Ti 4200 was a decent surprise at the Nvidia press conference for the product launch. It won't be out at the same time as the other cards, and Nvidia's board partners seemed unprepared for the sudden change of plans, but the card should be worth the wait. The $200 price point is where the battle between ATI and Nvidia will shape up over the upcoming months, since both will now have DirectX 8 cards for the mainstream gamer. The Ti 4200 is actually a good reason for serious gamers to steer clear of the GeForce4 MX 460, at least at the $179 list price. The lack of support for the shader effects that are coming in future games means that the GeForce4 MX card shouldn't be considered as a long-term upgrade.

We've scratched only the surface of the GeForce4's performance potential. Since the GeForce4 MX cards will be shipping later this month and the Ti cards are coming in March, there's a little time to consider upgrading. We'll have reviews of final GeForce4 cards in the coming weeks.
