Micha
Senior Member
Got milk?
Posts: 317
Part #1

Thanks for your great definition, amp_man, now I'm fully advised! Sorry, but it doesn't look like you know enough about the internal architecture of today's graphics processors to understand why nVidia "cheats" and why ATI supposedly does not. For your better understanding - and everybody else who's interested is invited to read along - here's a short summary of the problem you're all worried about:
ATI and nVidia are Microsoft's main development partners when it comes to Pixel & Vertex Shading. Microsoft's task is to fold their work into the current DirectX version, which right now is DirectX 9.0b. Due to decisions made by ATI and nVidia years ago, the gap between the two firms' ways of realizing new technologies has grown. With DirectX 9 one topic became especially important, and that is, as you should know, internal arithmetic precision.

The Radeon 8500-9200 (R2XX) works with a single format (16bit precision), and that is - simply put - not enough for complicated material shaders like the ones in DirectX 9. So ATI gave the R300 and following models 24bit internal precision. Meanwhile nVidia went another way: even the NV2X (GeForce 3/4) used two different formats - texture coordinates were calculated with 32bit precision, while colour data got a mixture of 9bit and 10bit precision. To handle material shaders and all the other new features, nVidia extended the colour precision to 12bit and even introduced a new 16bit floating point format.

(Remember, it also matters whether a GPU uses integer or floating point formats. Integers cover a fixed range of numbers with constant precision. Floating point formats cover a much wider range of numbers with the same quantity of bits, but pay for that with a mostly lower, variable precision: the higher the number, the lower the precision.)

Anyway, Microsoft had to decide which format Pixel Shaders 2.0 should require, and they picked 24bit precision. You see, if they had set a higher standard, ATI wouldn't even have been able to deliver DirectX 9 hardware! But Microsoft also allowed a 16bit floating point partial precision mode. See what that means? nVidia can't use its 12bit colour format together with Pixel Shaders 2.0 at all!

Here's why that hits the GeForceFX series so hard: the NV30 has two arithmetic units. The first handles pixel instructions with 32bit precision. That high precision costs performance - most operations can only be done 4 times per clock. That's a bit too poor, so nVidia created a second unit. The second unit handles frequently used instructions very fast, but with 12bit precision, and because of that (-> 12bit!) it can't be used by Pixel Shaders 2.0. Got it? That's why the GeForceFX series (precisely the GeForceFX 5200, 5600 and 5800) is so slow when we talk about DirectX 9 performance! Only the 5700 & 5900 got a 32bit second arithmetic unit, which is why they were able to shorten the performance gap towards ATI.

Moreover, HLSL (High Level Shading Language) was developed by Microsoft to give game programmers a relatively easy language for Shaders 2.0. In its first version, the compiler shipping with the DirectX SDK produced code in the instruction order preferred by the Radeon's arithmetic units. So the GeForceFX series took another performance hit, as it prefers a different order. There's a new compiler for the SDK now which can also produce GeForceFX-friendly code. I don't want to go into ATI's R3XX architecture here, let's just say it was more compatible with the way DirectX 9 turned out than nVidia's approach. Please post here if you want to know more about it, though.

It is now nVidia's job to work more closely with game developers, Microsoft & their customers. They already optimize their drivers for specific games (e.g. Halo) & for performance - e.g. in some Detonator versions only the main texture gets anisotropic filtering when it's enabled. As for ATI, they did the same! And they still do! But nobody cares.
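To make that range-versus-precision trade-off concrete, here's a minimal sketch (in Python with numpy, purely for illustration - it has nothing to do with any actual driver or GPU code). It shows how the step between neighbouring representable values in a 16bit floating point format grows with the magnitude of the number, while a fixed-point format keeps a constant step over its much smaller range. The [-2, 2) range for the 12bit fixed-point example is an assumption based on how nVidia's FX12 colour format is commonly described:

[code]
import numpy as np

# Gap between a float16 value and the next representable float16 value.
# The gap grows with the magnitude: the higher the number, the lower the precision.
for value in [1.0, 16.0, 256.0, 2048.0]:
    gap = np.spacing(np.float16(value))
    print(f"float16 near {value:6.0f}: step to next value = {float(gap)}")

# A signed 12bit fixed-point colour format (assumed range [-2, 2), as FX12 is
# commonly described) has the same step size everywhere in its range:
fx12_step = 4.0 / 2**12
print(f"12bit fixed point over [-2, 2): constant step = {fx12_step}")
[/code]

Around 1.0 the float16 step is about 0.001, around 2048 it's already 2.0, while the 12bit fixed-point step stays at roughly 0.001 everywhere - but that format simply can't represent anything outside its small range. That's exactly the trade-off Microsoft had to weigh when it set 24bit as the Pixel Shader 2.0 minimum and 16bit floating point as the partial precision fallback.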
Something about "cheating": I was surprised you call optimizing hardware and drivers for specific games foul play..! That's a stupid opinion, man. I want my hardware fast, and I don't care if only 3 or 4 of 6 or 8 textures (in multitexturing) are filtered anisotropically, because nobody would see any difference! Listen, I still prefer ATI because the GeForceFX series has more bugs than the R3XX, and ATI's mid-range and budget cards are a lot faster than nVidia's. Not to forget the customer support, which is more personal (3dfx-like) than nVidia's. And, of course, I hate nVidia for buying 3dfx, like everyone here. But I also know that the GeForceFX series has great potential when addressed in the right way, and that's exactly what you didn't get!
« Last Edit: 24.02.04 at 16:11:40 by Micha »
AMD Athlon XP Thorton 2400+/2GHz (256KB L2, FSB DDR266MHz) @ Barton 2800+/2.083GHz (512KB L2, FSB DDR333MHz), HIS Radeon 9800Pro, Kingston 768MB PC2700 DDR-RAM (CL 2-3-3-7), Asus A7V8X-X, Creative Soundblaster Audigy 2 ZS, Seagate 160GB 7200rpm ATA100 HDD, be quiet! 400Watt PSU, Windows XP Pro MCE05