The smaller GeForce4 MX 460 card.
Not everyone needs or can afford a
GeForce4 Ti. Sales numbers show that the vast majority of the add-in graphics
cards that are sold cost less than $200, with about an even split between the
sub-$100 and $100 to $200 range. So it should be no surprise that Nvidia will
update its successful GeForce2 products to pack more performance into low-cost
cards. The result is the GeForce4 MX, a series of cards that combines a simpler
dual-pipeline core with fast DDR memory and high clock speeds. The $179
GeForce4 MX 460, the fastest in the new MX family, is clocked at 300MHz for the
chip and 275MHz for the DDR memory. These speeds earn the card the right to
carry the same spiffy heat sink as its Ti cousins. But despite having a name
that makes these cards automatically seem superior to the GeForce3, be sure to
note that they completely lack the hardware for advanced vertex and pixel shader effects.
There's plenty that's new in the GeForce4 chips, but first let's look at how
the basic specs stack up for all the new variants.
Raw speed never hurts, and the most obvious difference between the new cards
lies with core and memory speeds. In the GeForce4 Ti, Nvidia has managed much
higher speeds on a chip that's about 10 percent larger without shrinking the .15
micron production process introduced last year. Nvidia decided to add the
GeForce4 Ti 4200 to the lineup in response to ATI's announcement of the $199
Radeon 8500LE, and this goes to show that we should continue to see sub-$200
high-performance cards. But the recent addition of this model means that it
won't launch at the same time as the other GeForce4 Ti cards. The GeForce4 MX
cards will be the first to market and are set to arrive in stores in the next
week or two.
[Spec comparison table: GeForce4 Ti 4200/4400/4600, GeForce4 MX 460/440/420, GeForce3 Ti 200/500, Radeon 8500LE 128MB, and Radeon 8500 128MB, with rows for DX8 vertex shader and DX8 pixel shader support.]
But Nvidia hasn't added just raw power to these cards. The GeForce3
improved rendering efficiency with several key features in what Nvidia calls the
Lightspeed Memory Architecture, and the GeForce4 cards include these and further
efficiency improvements. The GeForce4 Ti also moves in a more conventional
direction by greatly increasing the performance of vertex and pixel shaders, in
what's now dubbed nFiniteFX II. As for video features, the TwinView
dual-monitor support of the GeForce2 MX has been added directly to the GeForce4
chips and renamed to "nView." The nView software makes it easier than ever to
use the extra desktop space that multiple monitors provide. Nvidia has also
finally added iDCT support to the GeForce4 MX to help out with DVD decoding.
Since higher-end systems are less likely to need help decoding DVD video, Nvidia
hasn't included this feature in the GeForce4 Ti.
What helps clearly separate the performance of the GeForce4 Ti from the
GeForce3 is beefed-up nFiniteFX II processing for shader effects. The new chip
has two programmable vertex pipelines, compared with GeForce3's single pipeline.
In this way, it catches up with the Xbox graphics chip, which also has two.
Having the extra pipeline helps the GeForce4 Ti pump out three times the
geometry power of the GeForce3. As for pixel performance, the chip can claim a
50 percent increase with pixel effects, like per-pixel bump mapping. This is a
direct result of higher clock speeds. One of the new tech demos shows how the
GeForce4 Ti can actually calculate collision with bump maps so that small waves
lapping against a bumpy shore won't strangely ignore the height of the rocks'
bumps. On the whole, more hardware performance for vertex and pixel shaders
makes it practical for game developers to add more and more complex effects.
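Per-pixel bump-mapped lighting of this sort comes down to simple vector math evaluated once per pixel. As a rough illustration only (not Nvidia's actual shader code), the common "dot3" diffuse bump-lighting term can be sketched in Python:

```python
def dot3_bump_light(normal, light_dir):
    """Per-pixel diffuse lighting from a bump-map normal: the dot product
    of the perturbed surface normal and the light direction, clamped at
    zero. This is the kind of math a pixel shader evaluates per pixel."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

# A pixel whose bump-map normal faces the light is fully lit...
print(dot3_bump_light((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
# ...while one tilted away from it receives less light.
print(dot3_bump_light((0.6, 0.0, 0.8), (0.0, 0.0, 1.0)))  # 0.8
```

Doing this math in dedicated shader hardware, rather than on the CPU, is what makes such effects cheap enough to use across an entire scene.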
Improvements in efficiency help raw power translate into practical
performance. CPUs have long had small memory buffers to keep things running
smoothly, and the GeForce4 Ti does something very similar. Four different parts
of the chip have specific caches that buffer commonly used data that would
otherwise have to be fetched from the card's DDR memory. The GeForce4 Ti's
QuadCache feature stores primitive, vertex, texture, and pixel data in a way that isn't
present in the GeForce3. Another efficiency gain comes from more aggressively
discarding unseen pixels, which would otherwise be processed and then thrown
out of the final frame because they're obscured by another
object. Occlusion culling, the process of removing hidden surfaces, was an
approach made popular by PowerVR's graphics hardware, but the approach in
today's chips from Nvidia and ATI is rather different. In any case, the GeForce4
is about 25 percent better at throwing unseen pixels out than the GeForce3 is.
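The chips' exact culling methods are proprietary, but the basic idea of rejecting occluded pixels before they're shaded can be shown with a simple depth-test loop. This is a software sketch of the concept, not the GeForce4's actual mechanism:

```python
from collections import defaultdict

def draw_with_z_test(framebuffer, zbuffer, fragments):
    """Shade a fragment only if it's nearer than what the depth buffer
    already holds; occluded fragments are skipped before any pixel work.
    (A software sketch of occlusion culling, not the real hardware.)"""
    drawn = skipped = 0
    for x, y, depth, color in fragments:
        if depth < zbuffer[(x, y)]:      # nearer than the stored surface
            zbuffer[(x, y)] = depth
            framebuffer[(x, y)] = color
            drawn += 1
        else:
            skipped += 1                 # occluded: no shading cost paid
    return drawn, skipped

# A red fragment at depth 0.5 hides a later blue one at 0.9.
fb, zb = {}, defaultdict(lambda: float("inf"))
print(draw_with_z_test(fb, zb, [(0, 0, 0.5, "red"), (0, 0, 0.9, "blue")]))
# (1, 1)
```

Every skipped fragment saves texture fetches and blending work, which is where the claimed 25 percent improvement over the GeForce3 would pay off.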
Nvidia has renamed its antialiasing feature "Accuview AA," but the major
difference is just one new quality mode. In the control panel, there's now a
"4XS" mode that uses more texture samples to create a higher-quality image.
There are more chip resources devoted to Nvidia's multisample Quincunx
antialiasing method, so it should now be very close to the performance levels we
see when running the lower-quality 2X mode. Antialiasing continues to be a good
option when you have performance to burn, as is often the case with today's best
cards. But antialiasing does have a significant, though reasonable, cost in
performance, and particularly demanding games coming in the future--like Doom III--aren't likely to leave you much extra performance
for such niceties.
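The reason antialiasing costs so much is that modes like 4X effectively render several samples per pixel and then average them down, multiplying the fill-rate and memory-bandwidth load. A minimal box-filter downsample illustrates the averaging step (illustrative only; Quincunx in particular uses a different five-sample pattern):

```python
def downsample_2x2(samples):
    """Average each 2x2 block of a supersampled grayscale buffer into one
    output pixel -- the box-filter step behind simple 4X antialiasing.
    (An illustration of the principle, not Nvidia's Accuview method.)"""
    out = []
    for y in range(0, len(samples), 2):
        row = []
        for x in range(0, len(samples[0]), 2):
            block = (samples[y][x] + samples[y][x + 1] +
                     samples[y + 1][x] + samples[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out

# A hard black/white edge becomes a smoother gray step after filtering.
edge = [[0, 0, 255, 255],
        [0, 0, 255, 255],
        [0, 255, 255, 255],
        [0, 255, 255, 255]]
print(downsample_2x2(edge))  # [[0.0, 255.0], [127.5, 255.0]]
```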
Nvidia is great at making in-house tech demos. From the stunning demos at the
first Xbox announcement to the "experience" demos that accompany each major PC
chip release, the company's internal development staff has a knack for showing
off the cool features and performance of new hardware. Tech demos often pare
down such game essentials as physics, AI, or even nice-looking background
environments, all just to show off one particularly impressive element. This
group of demos is no different, and while some will be blown away by the sheer
graphics power represented here, others will simply wonder when such graphics
will make it into actual PC games.
The Chameleon demo, first
used to show off the GeForce3's shader features, has been adapted to run on a
GeForce4 MX with a few changes--there's no refraction effect, and no pixel
shaders to show wrinkling skin.
No matter how good the eye candy looks, it's not practical to turn on if
there isn't enough performance in real games. We've had the chance to test
reference cards for both the $399 GeForce4 Ti 4600 and the $179 GeForce4 MX 460,
and the results were quite impressive. To be sure that the cards' performance
could stand out without being tied down by a slow system, we installed the cards
in a 1.6GHz Athlon XP 1900+ system with a KT266A motherboard, 512MB of PC2100
memory, and Windows XP. Do keep in mind that the fastest cards are best paired
with fast CPUs and won't give the same results on a slower PC.
MadOnion's 3DMark 2001 benchmark was one of the very first to test DirectX 8
shader effects, and it includes some very pretty demo scenes. The engine used in
this test is the same as the one used in Max Payne, and it gives a good first
look at card performance under Direct3D.
The Radeon 8500 does very well in this test, besting the GeForce3 Ti 500, but
the GeForce4 Ti 4600 blows past any previous results to hit a score of almost
10,000 points at a resolution of 1024x768. It's not surprising that the GeForce4
MX 460 doesn't fare very well in this test, because it lacks support for pixel
shaders. However, this is still much better than previous generations of Nvidia
cards in this price range. There's a big jump from a GeForce2 MX 400 (2,890
points) or a GeForce2 GTS (4,271).
When we compare the results to the higher-resolution and antialiasing tests,
the GeForce4 Ti 4600 clearly benefits from its high 10.4GB per second of memory
bandwidth and increased efficiency. Since higher resolutions and antialiasing
require sending much more data over the memory bus to the card's memory,
insufficient memory bandwidth (due to slower memory speeds) strongly limits
performance at higher settings.
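The 10.4GB-per-second figure follows directly from the memory clock and bus width, since DDR memory transfers data twice per clock. A quick calculation, assuming the Ti 4600's widely reported 325MHz memory clock and 128-bit bus:

```python
def peak_bandwidth_gb_s(mem_clock_mhz, bus_width_bits, transfers_per_clock=2):
    """Peak memory bandwidth in GB/s. DDR memory moves data twice per
    clock, so transfers_per_clock defaults to 2."""
    bytes_per_transfer = bus_width_bits / 8
    return mem_clock_mhz * 1e6 * transfers_per_clock * bytes_per_transfer / 1e9

# GeForce4 Ti 4600: 325MHz DDR on a 128-bit bus (assumed figures)
print(peak_bandwidth_gb_s(325, 128))  # 10.4
# GeForce4 MX 460: 275MHz DDR on the same bus width
print(peak_bandwidth_gb_s(275, 128))  # 8.8
```

The same arithmetic shows why slower memory on cheaper cards becomes the bottleneck first at high resolutions.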
3DMark 2001 theoretical tests - High poly count, 1 light and 8 lights
We generally don't look this closely at the theoretical tests in 3DMark 2001,
but it's one of the few specific measures of performance in vertex and pixel
shaders. It's interesting to see that the GeForce4 Ti 4600 churns out twice as
many triangles per second as the GeForce3 Ti 500 in the single-light geometry
test. But the card's power still isn't a match for the test with eight dynamic
lights, as the scene still just stutters along.
There are still plenty of game developers working with id's Quake III engine,
so this venerable benchmark is still plenty relevant. We've moved up to using
the latest version of the game, 1.31, and the "four.dm_67" benchmark demo that
id included zipped up in the game's directory (in the "pak6.pk3" file).
Quake III 1.31 - Four
Without any extra filtering or antialiasing turned on, we're now seeing very
high frame rates in Quake III. Still, it's good to see that the GeForce4 MX 460
has plenty of raw rendering power. Its 150 frames per second (fps) average score
at 1024x768 compares with the 51fps we've seen from a GeForce2 MX or the 85.1fps
of a GeForce2 GTS. Clearly, it's a long way from the previous generation of
cards. With the high-end cards, it's most interesting to look at tests run with
the premium visual quality options turned on. The GeForce4 Ti 4600 loses 30
percent of its 1024x768 score when 8X anisotropic filtering is turned on to make
the floor and wall textures extra sharp, and that drops to half the original
result when 4X antialiasing is also on. It's notable that the Radeon 8500 still
loses a smaller part of its performance when anisotropic filtering is turned on.
Return to Castle Wolfenstein is a very popular game based on the Quake III:
Team Arena engine, and it's a good reminder of how game developers can add a lot
more detail as gamers get their hands on faster hardware. We get better-looking
games, but the frame rate drops back into the double-digit averages we generally
expect for current games. Here we've used the "checkpoint.dm" benchmark demo,
which is available for download from various Wolfenstein community sites.
Return to Castle Wolfenstein
1.1 - Checkpoint demo
What's most impressive in these results is how little the GeForce4 Ti 4600's
score drops when we move to higher resolutions. The card has plenty of
performance to allow for both high resolutions and Quincunx antialiasing. In
contrast, the fastest GeForce4 MX can run this game at 1024x768 with Quincunx
on, but it doesn't quite have the power to keep things uniformly smooth.
How we tested: AMD Athlon 1900+, 256MB PC2100 DDR
SDRAM, IBM Deskstar ATA-100, Nvidia drivers version 27.30, ATI drivers version
7.66, Creative Labs Sound Blaster Live!, Windows XP.
Quake III (v1.31)
and Return to Castle Wolfenstein (v1.1) were set to maximum geometric and texture detail
levels, with trilinear filtering. OpenGL and D3D Vsync settings off. The Windows
desktop was set to 1024x768 at 75Hz.
While those who follow Nvidia's regular product cycles have been awaiting the
GeForce4's public announcement for some time, there's more to this product
launch than we could have expected. Now that the test results are in, we can see
that the performance jump between the GeForce3 cards and the new performance
cards is quite big--by 50 percent in some very demanding tests. Generally, the
smaller generational steps bring much smaller speed increases. The GeForce4 MX
is also a much bigger jump over the GeForce2 cards it replaces. The GeForce2 MX
will now take the very low-end place that the TNT2 has held on to, so the TNT2
should finally disappear from shipping PCs completely.
The announcement of the $199 GeForce4 Ti 4200 was a decent surprise at the
Nvidia press conference for the product launch. It won't be out at the same time
as the other cards, and Nvidia's board partners seemed unprepared for the sudden
change of plans, but the card should be worth the wait. The $200 cost is where
the battle between ATI and Nvidia will be shaping up for the upcoming months,
since both will now have DirectX 8 cards for the mainstream gamer. The Ti 4200
actually is a good reason for serious gamers to steer clear of the GeForce4 MX
460, at least at the $179 list price. The lack of support for the shader effects
that are coming in future games means that the GeForce4 MX card shouldn't be
considered as a long-term upgrade.
We've scratched only the surface of the GeForce4's performance potential.
Since the GeForce4 MX cards will be streaming out later this month and the Ti
cards are coming in March, there's a little time to consider upgrading. We'll
have reviews of final GeForce4 cards in the coming weeks.