27 Comments
Thats Me - Tuesday, March 6, 2007 - link
I currently have an Intel D945GNT motherboard that has proven to be a loser in various ways. I'm using an Intel dual-core 3.2 GHz processor with 2x512 MB of dual-channel RAM. I am considering changing to the Asus P5N-E motherboard, so I need advice: will my existing CPU work OK in the Asus? HELP!
jdrom17 - Tuesday, January 23, 2007 - link
Just wondering if you are going to update the review, as ASUS released a new BIOS version yesterday (Jan 22) which it says fixes memory compatibility. It may solve the issues you ran into, and I'd like to know if it does.
MikeeeE18 - Tuesday, January 9, 2007 - link
I read some of the reviews over here and they were a big help in overclocking my E6600 on the P5N-E. Currently I'm running it at 3.21GHz, but my memory timings are out of whack. I'm kinda new to this, so any help would be appreciated. I have the system set to 1425 FSB (QDR) x 9 (multiplier) using 2GB of PQI PC5300 memory and an eVGA 7950GT KO. I tried setting the FSB to 1608 like it says in one of the reviews, but it overloaded the system. Hoping to get some results out of this so I can make this thing a bit faster. Thanks.

Operandi - Tuesday, January 2, 2007 - link
Nice review, bonus points for the fan control information.

Lord Evermore - Monday, December 25, 2006 - link
quote: a bit more testing and validation in the future before launch might be a better solution than BIOS patches after the fact

Testing? But that would delay getting the product out to market before the competition, and possibly stuff up their overly enthusiastic deadlines and announcements. Not to mention costing money that they could save by letting the customers beta test it. The people buying these things are tweakers anyway.
Hey, the software industry gets away with releasing shoddy, half-finished products all the time, and in fact gets the same people to keep buying them. Not to mention releasing essentially the same product with a slightly different name (nF5/nF6).
PoorBoy - Saturday, December 23, 2006 - link
I would like to know where you are setting this FSB to 402x9 (exactly what are you setting to 402?) or other FSB settings. I just received two of these boards, and compared to the Gigabyte DQ6 and ASUS P5W DH boards that I also have, I'm at a complete loss with this board. So far, nowhere in the BIOS do I see where I can make this change. I've been in all the sections and sub-sections of the BIOS but have yet to find where to change the FSB ... ???

Gary Key - Monday, December 25, 2006 - link
Go into the BIOS and enter the Advanced section.
Change AI Tuning to Manual.
Go to FSB & Memory Config.
Change Linked mode to Unlinked, then feel free to change the FSB (QDR) rates. In this BIOS, a 402 FSB will be set as 1608 (QDR) in this field.
I believe section 2.24 of the manual has further details, if my memory serves me. I just arrived at the airport and will be offline for a week in a few moments. ;-)
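To make the FSB (QDR) numbers less mysterious: the front-side bus is quad-pumped, so the QDR value shown in the BIOS is four times the actual bus clock, and the CPU core clock is the bus clock times the multiplier. A quick illustrative sketch of that arithmetic (the helper function is hypothetical, not part of any BIOS tool):

```python
def core_clock_mhz(fsb_qdr: float, multiplier: float) -> float:
    """CPU core clock implied by a BIOS 'FSB (QDR)' setting.

    The front-side bus is quad-pumped (QDR), so the figure shown
    in the BIOS is 4x the actual bus clock.
    """
    bus_mhz = fsb_qdr / 4
    return bus_mhz * multiplier

print(core_clock_mhz(1066, 9))  # E6600 stock: 266.5 MHz x 9 = ~2400 MHz
print(core_clock_mhz(1608, 9))  # the 402x9 setting above: ~3618 MHz
print(core_clock_mhz(1425, 9))  # the earlier 1425 QDR x 9 setup: ~3206 MHz (3.21 GHz)
```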
PoorBoy - Monday, December 25, 2006 - link
Thanks for the tip Gary, that's what I figured I had to do. The only problem is the FSB (QDR) field only allows me to set the FSB between 533 and 3000. That's not going to work for me; even @ 533 with a 9 multiplier, that's way too high a CPU clock speed for the system to run.

I tried backing off the multiplier to 6 and going with 533, which should be about 3.2GHz and about where I want to run the PC. The PC booted up but was only showing 1.59GHz for the CPU ... ??? I'm starting to dislike this MB immensely; sometimes more is not better, IMO. All the different options, Linked, Unlinked, Auto, Manual, I guess are something for the die-hard overclockers, but for somebody like me who just wants to go into the BIOS and set the FSB and voltage without all the head-scratching over what the different options do, this isn't a good board.

I would return the boards, but the policy where I got them is replacement only for defective boards, so I may just have to eat them and get something else that I'm familiar with. I do have four E6600s running on different boards @ 3.5-3.6GHz with no problems, and an X6800EE running @ 3.8GHz, also with no problems. Live and learn, I guess ... Thanks again ... Steve
Marlowe - Saturday, December 23, 2006 - link
It would be very interesting if you could test the 8800GTX SLI setup at high resolution in several games that are known for actually benefiting from SLI, so we can see what the performance difference is between the 2x16 on the 680i and the 2x8 on the 650i :-) Maybe having 2x16 PCIe is more "placebo" than really important for performance? ;-)

I also think it's interesting that there is no s775 motherboard chipset with 2x16 PCIe lanes. Both the 975X and RD600 offer "just" 2x8 PCIe, if I am correct. Only the RD580 chipset for s939 and AM2 has the 2x16 PCIe feature. I wonder how the upcoming R600 cards will perform on these different platforms, and how they will perform in Crossfire on the two different "speed grades" of motherboards :-) I wonder if ATI/AMD will come out with an s775 chipset with true 2x16 PCIe for the release of R600 :-)
semo - Sunday, December 24, 2006 - link
quote: So we can see how the performance difference is between the 2x16x on the 680i and the 2x8x on the 650i

Yeah, me too. I remember there were discussions around the PCIe transition, because apparently the AGP interface was quite sufficient for the traffic graphics cards generated back then. I think it was also because the AGP interface was not so reliable when approaching its limits, but I'm really not too sure about that.

Anyway, it's interesting to know whether today's graphics cards benefit from the higher available bandwidth.
JarredWalton - Monday, December 25, 2006 - link
The big problem with AGP is that it only allowed for one high-speed port. PCIe allows for many more (depending on the chipset), plus you get high bandwidth both up and down, whereas AGP had fast writes (CPU to card) but slow reads (card to CPU). x8 PCIe is still at least as fast as 8X AGP in terms of bandwidth, and in most instances we aren't stressing that level of bandwidth anyway.

Lord Evermore - Monday, December 25, 2006 - link
Depending on the traffic pattern, x8 PCIe can actually trail AGP8X slightly. Each PCIe lane carries 250MBps in each direction, so x8 provides 2GBps each way, while AGP8X has 2.13GBps to spend however it likes. So if most of the data were being streamed in one direction, AGP8X would theoretically edge out x8 PCIe. If the data were split between directions, x8 PCIe would pull well ahead, since AGP's 2.13GBps is half-duplex and shared between directions (and real performance would likely be lower still with AGP because of its non-independent half-duplex nature). By the same math, x4 PCIe at 1GBps per direction is roughly equivalent to AGP4X's 1.066GBps for one-way traffic.

But since AGP4X is probably still capable of handling the majority of applications, it doesn't really matter much.

Too bad we can't manually control the number of lanes in use on a particular slot. It would be very interesting to compare performance using the same graphics card on the same mainboard at x1 (which, depending on the pattern, could be about equal to a simple PCI card or AGP1X), then x2, x4, x8, and x16. That would help to definitively say whether all the increased bandwidth is actually making a difference, or whether other factors are involved.
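To put rough numbers on this comparison, here is a small sketch of the theoretical peaks, assuming the commonly cited 250MB/s per PCIe 1.x lane per direction (after 8b/10b encoding); the helper function is purely illustrative:

```python
# Theoretical peak bandwidth in MB/s. PCIe 1.x is full duplex
# (each lane moves ~250 MB/s in each direction at the same time);
# AGP is half duplex, one shared pool for both directions.
PCIE_MBPS_PER_LANE = 250
AGP_MBPS = {"AGP1X": 266, "AGP4X": 1066, "AGP8X": 2133}

def pcie_one_way_mbps(lanes: int) -> int:
    """Peak one-way throughput of an xN PCIe 1.x link."""
    return lanes * PCIE_MBPS_PER_LANE

for lanes in (1, 4, 8, 16):
    print(f"x{lanes} PCIe: {pcie_one_way_mbps(lanes)} MB/s each way")
for name, mbps in AGP_MBPS.items():
    print(f"{name}: {mbps} MB/s total, shared between directions")
# x4 (~1000 MB/s) lands near AGP4X for one-way streaming, and
# x8 (~2000 MB/s) lands near AGP8X, pulling ahead once traffic
# flows in both directions at once.
```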
Lord Evermore - Monday, December 25, 2006 - link
AGP 3.0 supports multiple slots, depending on what the chipset is designed to support. According to Wikipedia, the HP AlphaServer GS1280 has up to 16 AGP slots, which basically all connect to a single interface on the chipset. Since it's part of the AGP3 spec, every chipset probably could have supported multiple ports, but mainstream mainboard makers never used it. There were probably reasons it wouldn't have worked well for an SLI-type feature, possibly the read/write bandwidth issue.

Any chipset designer could also have just put in multiple AGP interfaces, I'd bet, even if each only supported one card at a time. I don't know what effect that would have had on bandwidth or contention for access to the CPU. The cards probably also would not have been able to work in any sort of SLI configuration where the data had to go over the chipset bus.
PrinceGaz - Friday, December 22, 2006 - link
Your article starts with questions about this, and to my knowledge they remained unresolved at least up through the nForce4 chipsets (I have one). Of course I'm not stupid enough to risk using nVidia's hardware firewall and its associated drivers, but even their IDE drivers can cause a normal installation of Windows XP to have trouble starting. That means I cannot safely enable NCQ (I have a dual-core processor) or benefit from whatever acceleration the nForce4 chipset might provide, because the nVidia drivers are unstable.

I once trusted nVidia, especially with drivers back in the early GeForce days, but the latest official GeForce drivers have been bug-ridden, what with incorrect monitor refresh-rate detection (even after using the .inf file) and stupidity like doubling the reported memory clock speed of the card when it had always previously been correct.

Their good graphics-card drivers were why I bought an nForce4-based board, along with this site's recommendation, and I must admit I'm only so-so about it. It works and does everything it says it should on the box, but the computer doesn't feel as responsive as it should, and I suspect that is partly because I had to revert to the default Microsoft disk drivers.

All reviews of nVidia chipset motherboards should mention their driver issues (bugs) until they are fixed. Just because you test a mobo for one day and it seems to work and overclock to a given level does not mean it can be trusted day in, day out. If you cannot install the IDE drivers, then NCQ and other hard-drive features are negated. If the hardware firewall drivers are so bad that no one with any sense goes near them, then that hardware in the chipset is worthless and could best be described as a liability.

I like this site, but it would be nice if you sometimes looked back on products you reviewed earlier in the year and reported on whether they actually lived up to expectations. Assuming you get to keep any of your stuff. If you don't, then the opinions of the writers become almost meaningless, because anything looks good for a day or two.
Tanclearas - Saturday, December 23, 2006 - link
Gary Key should be sensitive to this issue more than anyone. Gary tried to facilitate contact between me and Nvidia to try to nail down the cause of the hardware firewall corruption issues. He contacted Nvidia several times for me, and I was contacted by an Nvidia rep twice. I provided the Nvidia rep with detailed steps that I had used to install Windows and the drivers. I conducted tests without any software installed, and continually experienced issues. I provided screen shots of errors to the rep as well. I offered to install Windows and drivers of any version they requested, using whatever steps they wanted.

After providing them with all of the details and making that offer, Nvidia never contacted me again. Gary followed up with me, and contacted Nvidia again on my behalf to try to get them to get in touch with me. Ultimately, they just removed official support for the firewall. I am honestly surprised a class action suit never came of it. Nvidia used the hardware firewall as a selling feature, then made no attempt to solve the issues that were being experienced by many users, and finally just pulled the plug on it.
Anyway, I too have little faith in Nvidia actually taking the issues seriously and finding a solution. I'm not going to say that I'll never buy a board with an Nvidia chipset again, but I can guarantee I won't be buying 680/650 when there are already known issues, and any future board based on an Nvidia chipset will have to go through months of retail availability and positive user feedback before I'd be willing to try again.
LoneWolf15 - Tuesday, December 26, 2006 - link
Insightful post. I'm still using an nForce4 Ultra chipset board (MSI 7125 K8N Neo4 Platinum), and it's been good to me, but I've never used their firewall software after hearing reports from others.

The current 680i issues have led me to the same conclusion as you: I have no interest in buying an nVidia chipset mainboard next time around (so far, Intel's i975X seems to be the only one I'd be interested in). It seems nVidia has a history of sweeping troubles under the rug when they cannot resolve them through software fixes (e.g., this issue, the first-generation PureVideo fiasco with the NV40/45 graphics chips that I'm surprised never caused a class action, the nForce3 250Gb firewall that didn't provide the acceleration they first claimed it did), and hoping nobody raises enough of a ruckus, a method that seems to have worked well for them.

I've just bought a new GeForce graphics card, but experiencing the PureVideo issues alone caused me to skip to ATI for two generations. It also taught me to read forums for other users' experiences with a product for the first month after release before I purchase. It seems review sites often miss driver issues/bugs in first-rev hardware, due to limited review timeframes or not being able to test with as wide a variety of hardware as the community (admittedly, not their fault). I'm not willing to pay the early-adopter/rev-0.9 price any more.
KeypoX - Saturday, December 23, 2006 - link
Anyone notice how low-quality these articles have become? A couple of years ago this site was a decent place to get some info, but now ... Please go back to the old good quality, because right now you guys are not good at all ... I feel pretty sad every time I visit the site.
Xcom1Cheetah - Friday, December 22, 2006 - link
I was just wondering: aren't the idle and full-load power numbers a little too high for the stability of the system? I'm not sure, but I feel the higher power draw is going to reduce the stability of the overclock in the long run.

Performance- and feature-wise it looks pretty ideal to me ... if only its power numbers had been in line with the P965's.

Any chance of these power numbers coming down with a BIOS fix/update?
JarredWalton - Friday, December 22, 2006 - link
I doubt the power requirements will drop much at all over time. However, higher power draw doesn't necessarily mean less stable. It does mean you usually need more cooling, but a lot of it is simply a factor of the chipset design. I'm pretty sure 650i is a 90nm process technology, but for whatever reason NVIDIA has always made chips that run hot. The Pentium 4 wasn't less stable because it used more power, though, and neither is the nForce series.

Perhaps part of the cause of the high power is that NVIDIA uses HyperTransport as well as the Intel FSB architecture, and then there are two chips that run hot.... Added circuitry to go from one to the other? I don't know. Still, the ~40W power difference is pretty amazing (in a bad way).
Avalon - Friday, December 22, 2006 - link
For $130, that's a pretty good-looking board. I was expecting the 650i SLI chipset based boards to be more around $150-$175. Now this makes me curious as to how 650i Ultra will pan out.

yyrkoon - Friday, December 22, 2006 - link
Yeah, feature-wise it's not too bad; too bad Asus has long since ruined their reputation with me over the years. It would be just my luck: if I bought this, it would make my 7th (in a row) Asus board that was bad out of the box . . .

tayhimself - Saturday, December 23, 2006 - link
That suggests that you are the problem, not Asus.

yyrkoon - Sunday, December 24, 2006 - link
Might it also suggest that I've been building systems since the '80s and still don't know what I'm doing? You and I can both make random assumptions about each other all day long, but it won't change the fact that each board WAS dead. Period.

LoneWolf15 - Tuesday, December 26, 2006 - link
Personally, I think you have just offended the Great Spirits of Technology in some way. ;)

cryptonomicon - Friday, December 22, 2006 - link
I like how this board has two FireWire ports, yet its pricing is still close to the 965-based boards, which don't have them.

rallyhard - Friday, December 22, 2006 - link
Thanks for the great article! (One thing I noticed on page 5, the CoH SLI test: shouldn't the P5N-E with the 8800GTX SLI be on top in the bar graph?)
JarredWalton - Friday, December 22, 2006 - link
Corrected, thanks, although hopefully it was clear that was SLI. :)