21 Comments
taylormills - Monday, February 4, 2008 - link
Hi all, just a newbie question.

Does this indicate that sound cards will be moving to PCI Express? Just curious because I have an older board and am going to want to upgrade, and I find it hard to fit a sound card around my twin 8800 boards, since they take up the available slots.

Any info?
karthikrg - Saturday, February 2, 2008 - link
How many people are using even CrossFire 2x, let alone thinking about CrossFire 4x? Four PCIe slots is overkill, IMHO. I hope AMD at least delivers CrossFire X drivers in time; otherwise it'll all be an utter waste.

ninjit - Friday, February 1, 2008 - link
At the beginning of the article you mention that this is a DDR3 board, yet in the specifications chart you have lines for:

[quote]DDR2 Memory Dividers[/quote]

&

[quote]Regular Unbuffered, non-ECC DDR2 Memory to 8GB total[/quote]
nubie - Friday, February 1, 2008 - link
"many x16 devices are only capable of down-training to speeds of x4 or x8 and without this bridge chip the last x1 lane would be otherwise useless." This does interest me, I have had 3 nvidia cards (2x6600GT and 6200) running on a plain jane MSI neo4 OEM (Fujitsu Seimens bios), simply by cutting the ($25) 6200 down to a x1 connector and cutting the back out of one of the motherboard x1 slots to allow the 6600GT to fit physically.I thought that part of the PCIe standard was auto-negotiation, wouldn't any device NOT compatible with x1 be breaking the standard?
http://picasaweb.google.com/nubie07/PCIEX1/photo#5...">http://picasaweb.google.com/nubie07/PCIEX1/photo#5...
I am very curious about this, as PCIe technology doesn't seem to be getting as much use as it could (i.e., it is MUCH more flexible than it is given credit for). The PCIe scaling analysis at Tom's Hardware showed that an 8800GTS was still quite capable at x8, so on PCIe 2.0 an x4 slot could be used for gaming at acceptable resolutions! (I am fully aware that only the first 2 slots are PCIe 2.0.)
The new Radeon "X2" card with 4 outputs could fit in this motherboard 3 times over; that's 12 displays on 1 PC with off-the-shelf technology! With a quad-core and 12 displays, 2 PCs at $1,000-$3,000 apiece could service a whole classroom of kids using learning software, typing-tutor programs, or browsing the web. Even with regular old 2-output video cards you could get 8 displays on a much cheaper rig with sub-$50 video cards. So I wouldn't say "the performance potential of such a setup is marginal", unless I was measuring performance in such meaningless terms as how many $xxx video cards I can jam in a PC to get an xx% increase.
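As a quick aside on the auto-negotiation question raised above: link training effectively settles on the widest link width both the slot and the device are willing to support, and a card that declines x1 simply fails to train in an x1 slot. A minimal sketch of that idea (the supported-width sets here are illustrative, not taken from any datasheet):

```python
def negotiate_link_width(slot_width, device_widths):
    """Pick the widest link width both ends support.

    slot_width: lanes physically wired to the slot (e.g. 1, 4, 8, 16).
    device_widths: set of widths the card is willing to train to.
    Returns the negotiated width, or None if training fails.
    """
    candidates = [w for w in device_widths if w <= slot_width]
    return max(candidates) if candidates else None

# A hypothetical card that trains to x16/x8/x4 but not x1:
gpu = {16, 8, 4}
print(negotiate_link_width(16, gpu))  # 16
print(negotiate_link_width(4, gpu))   # 4
print(negotiate_link_width(1, gpu))   # None -- the x1 slot goes unused
```

This is why the modified 6200 (cut down to a physical x1 connector) worked: it was willing to train at x1, while many x16 cards are not.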
kjboughton - Friday, February 1, 2008 - link
You are correct when you say that PCIe devices are capable of auto-negotiating their link speeds; however, not all devices will allow for negotiated speeds of only x1. This includes most video cards, which will allow themselves to train to x16, x8, and x4 speeds but not x1. They are flexible to the extent possible, but nowhere does the PCIe specification require that all devices support all speeds. After all, cards that use an x8 mechanical interface are obviously incapable of x16 speeds, too.

smeister - Friday, February 1, 2008 - link
What's with the memory reference voltage? On the specification page (pg. 2):
Memory Reference Voltage Auto, 0.90V ~ 1.25V
It should be half the DDR3 memory voltage:
1.5V x 0.5 = 0.75V, so it should be: Auto, 0.75V ~ 1.25V
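For reference, the JEDEC DDR3 reference voltage is nominally half of VDDQ, which makes the arithmetic easy to check:

```python
VDDQ = 1.5         # nominal DDR3 supply voltage, in volts
vref = VDDQ * 0.5  # JEDEC nominal reference voltage: half of VDDQ
print(vref)        # 0.75 -- below the board's 0.90V manual floor
```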
kjboughton - Friday, February 1, 2008 - link
If you want half of 1.50V, then leave it on 'Auto'. Regardless, the lowest manually selectable value is 0.90V.

DBissett - Thursday, January 31, 2008 - link
I can't find it now, but a couple of days ago I found this X48 board listed on MSI's website along with an X48C, which would take either DDR3 or DDR2. It would be great to be able to use it now with DDR2 and upgrade to DDR3 when the prices get sane and it becomes clear why DDR3 is better.

Dave
feraltoad - Thursday, January 31, 2008 - link
[quote]almost always recommend replacing the thermal interface material (TIM)[/quote]

You state to replace the TIM for the PWM and chipset heatpipe coolers. I have a question regarding that. I have an IP35 Pro, and I bought a new case. I thought now might be a good time to replace my pushpins with bolts, but I am hesitant about removing the thermal pad. I know that direct contact with the heatpipe cooling system will result in better heat transfer, but I am afraid of shorting something out. Is it safe to have the cooler sitting directly on the PWM? Does the pad also function as an insulator? I can live with a bit higher temps, but I can't live with killing my mobo. Comments from anyone with experience on this would be greatly appreciated.
ButterFlyEffect78 - Thursday, January 31, 2008 - link
I get 9631MB/s on my NVIDIA EVGA 680i chipset at only 750MHz DDR2 with 4-4-3-5 1T.

And my brother, who owns an Intel P35 Foxconn Mars board, gets 9132MB/s at 950MHz DDR2 with 5-5-5-18 2T.

So what is the point of moving to DDR3 when it offers no performance gains in memory bandwidth even at a whopping 1600MHz? Is it just me who thinks CAS 7 is way too high to even consider pushing DDR3 to the market right now?

I believe this is just what Intel wants, so it can make AMD look old, like how they forced AMD a few years ago to make AM2 boards that only supported DDR2.
HotBBQ - Thursday, January 31, 2008 - link
You cannot directly compare CAS latency across DDR revisions.

"Consider the latency ratings of the three most recent memory formats: Upper-midrange DDR-333 was rated at CAS 2; similar-market DDR2-667 was rated at CAS 4 and today's middle DDR3-1333 is often rated at CAS 8. Most people would be shocked to learn that these vastly different rated timings result in the same actual response time, which is specifically 12 nanoseconds." - Tom's Hardware
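The arithmetic behind that quote is easy to verify: absolute latency is the CAS cycle count times the I/O clock period, and the I/O clock runs at half the DDR data rate:

```python
def cas_latency_ns(ddr_rate_mt_s, cas_cycles):
    """Absolute CAS latency in nanoseconds.

    ddr_rate_mt_s: DDR data rate in MT/s (e.g. 333 for DDR-333).
    The I/O clock is half the data rate, so one clock cycle
    lasts 2000 / ddr_rate_mt_s nanoseconds.
    """
    cycle_ns = 2000.0 / ddr_rate_mt_s
    return cas_cycles * cycle_ns

for rate, cas in [(333, 2), (667, 4), (1333, 8)]:
    print(f"DDR-{rate} CL{cas}: {cas_latency_ns(rate, cas):.1f} ns")
```

All three combinations work out to roughly 12 ns, exactly as the quote claims.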
Mondoman - Friday, February 1, 2008 - link
Actually, you can compare latency pretty directly across DDR technologies, as shown in your example: 2 clocks at DDR-333 = 4 clocks (twice as fast) at DDR2-667 = 8 clocks (four times as fast) at DDR3-1333.

tayhimself - Thursday, January 31, 2008 - link
Please include stability testing. Who cares if you can get 1-5% more performance via exotic tweaks? Let's make sure the board doesn't lock up when overclocked and laden with RAM by doing some stress testing, and make the stress testing transparent. These reviews are not as useful as TR's reviews for this reason.

ATWindsor - Friday, February 1, 2008 - link
And also test whether the product supports things other than graphics cards in the PCIe slots; a board like this begs for it.
We will be including this type of information, and much, much more, in our upcoming X38/X48 motherboard round-up. As we mentioned in the review, this article is meant to provide you with an early look at the layout, features, specifications, and interesting BIOS options, plus a quick preview of any overclocking results. Stay tuned; we're confident we will address the concerns you brought up today in much more detail in just a short time.

Vikendios - Thursday, January 31, 2008 - link
And please let us know how NVIDIA cards work in SLI under Intel chipsets, not only under NVIDIA's chipsets.

I am particularly interested in twinned 8800 GTs, since AnandTech called them "The only cards that matter".
OzoZoz - Thursday, January 31, 2008 - link
We all know that nVidia does not "certify" this Intel chipset to run SLI, but does that mean it won't work? I agree with Vikendios: I would like to see how SLI performs on these Intel-based motherboards.

JarredWalton - Thursday, January 31, 2008 - link
I know of someone at a hardware site who was threatened with a lawsuit if they showed SLI performance on a non-NVIDIA system. (I don't know if those threats are still being sent around, but it wouldn't surprise me.) At present, the only way to make SLI work on a non-NVIDIA chipset requires a hack.

Hacked drivers are one option, but the latest drivers use some sort of encryption, I believe, so cracking them breaks the DMCA. I don't even know if anyone can break the encryption, and the last hacked drivers I heard about are quite old (XP only, GeForce 7xxx or earlier) and probably won't work with many modern games.
The other approach that might work would be to hack your BIOS so that it identifies itself as an nForce chipset. I don't know exactly what would be required for the ID string, or if it would work properly afterwards.
Note that SLI works on platforms like Skulltrail and PM945 (e.g. http://www.anandtech.com/mobile/showdoc.aspx?i=307...">in my Alienware m9750 review) because there's an nForce 100 bridge chip in use. The nForce 100 is the precursor to the nForce 200 that's used to give the 780i dual PCIe 2.0 slots.
SoBizarre - Thursday, January 31, 2008 - link
It seems that the author of this article is very much "into" memory stuff. I have a little suggestion: why don't you consider writing a kind of "Everything about motherboards & RAM" guide? You could cover some practical aspects which are NEVER addressed by reviewers. For example: on a motherboard supporting up to 8GB of RAM (like the one reviewed today), what is the limiting factor for the RAM amount? Is it the electrical(?) design of the PCB, or is it an address-space limitation of the chipset (BIOS)? Because if the BIOS cannot address more than 8GB of memory, memory remapping will not help and you just can't have 8GB of RAM available to your (64-bit) OS. Is that the case? Personally I don't run virtual machines, nor do I have other reasons for installing 8GB of RAM, but other people do. Besides, it would be nice to just KNOW.
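On the 8GB addressing question, one piece of it is simple arithmetic: how much physical memory a given number of address bits can decode. A hedged sketch (the 36-bit figure is typical of Intel desktop memory controllers of this era, not a documented spec of this particular board):

```python
def addressable_gib(address_bits):
    """Physical memory, in GiB, that a given number of address bits can decode."""
    return 2 ** address_bits / 2 ** 30

print(addressable_gib(32))  # 4.0  -- why a 32-bit memory map needs remapping
print(addressable_gib(36))  # 64.0 -- far more than 8GB
```

If the controller decodes 36 bits, addressing is not the bottleneck; the 8GB cap would more likely come from slot count and supported DIMM densities.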
Orthogonal - Thursday, January 31, 2008 - link
[quote]Adding options for tRD (MCH Read Delay) and a couple other key memory timings will go a long way improving the already good memory latency time.[/quote]

I would hope that everyone in the industry read your article on the Asus X48 board with adjustable tRD, to realize how important this will be to the enthusiast community. If you keep pushing, I imagine most of them will capitulate.