<p class="description"><font size="2">Nvidia GeForce 9800 GTX got pretty ambiguous reviews because of slight priority over Nvidia GeForce 8800 GTS 512MB and new name. However, as we have expected, pre-overclocked modifications of this graphics accelerator started appearing in the market. One of these cards was tested in our lab.<br /> <br /> <a href="http://www.xbitlabs.com/articles/video/display/zotac-gf9800gtx-amp.html">Read more...</a></font></p>

We doubt anyone will argue that ASUSTeK Computer products can be easily distinguished from the competitors' solutions. Today we will talk about an ASUS graphics card based on an Nvidia GPU and find out whether anything makes it special.

Read more...


In the previous article devoted to the new ATI architecture we revealed its significant potential. Now it is time for us to move from theory to practice and check how well it performs in contemporary games.

Read more...


We have been waiting for a new graphics architecture from ATI for a long time, especially since the previous ATI Radeon HD generation couldn't compete successfully against high-performance Nvidia solutions. Will the ATI Radeon HD 4800 live up to everyone's expectations? Let's try to answer this question from the theoretical perspective and also check out the first graphics accelerator based on the new ATI RV770 chip: the PowerColor Radeon HD 4850 512MB.

Read more...

<font size="2">The past few weeks have been incredibly tumultuous, sleepless, and beyond interesting. It is as if AMD and NVIDIA just started pulling out hardware and throwing it at eachother while we stood in the middle getting pegged with graphics cards. And we weren't just hit with new architectures and unexpected die shrinks, but new drivers left and right. </font> <p itxtvisited="1"><font size="2">First up was GT200, which appeared in the form of the GeForce GTX 280 and GeForce GTX 260. Of course, both of those can be paired or tri-ed (if you will), but with two cards requiring at least a 1200W PSU we're a bit worried of trying three. Then came the randomness that was the accidental launch of the Radeon HD 4850 (albeit with no architectural information) and only a couple hours later we first heard about the 9800 GTX+ which is a die shrunk higher clocked 9800 GTX that is now publicly announced and will be available in July.</font></p> <p itxtvisited="1"><font size="2">And now we have the other thing we've been working on since we finished GT200: RV770 in all it's glory. This includes the 4850 whose performance we have already seen and the Radeon HD 4870: the teraflop card that falls further short of hitting its theoretical performance than NVIDIA did with GT200. But theoretical performance isn't reality, and nothing can be done if every instruction is a multiply-add or combination of a multiply-add and a multiply, so while marketing loves to trot out big numbers we quite prefer real-world testing with games people will actually play on this hardware.</font></p> <p itxtvisited="1"><font size="2">But before we get to performance, and as usual, we will want to take as deep a look into this architecture as possible. We won't be able to go as deep with RV770 as we could with GT200, as we had access to a lot of information both from NVIDIA and from outside NVIDIA that allowed us to learn more about their architecture. At the same time, we still know barely anything about the real design of either NVIDIA or AMD's hardware as they prefer to hold their cards very close.</font></p> <p itxtvisited="1"><font size="2">This won't work long term, however. As we push toward moving compute intensive applications to the GPU, developers will not just want -- they will need low level architectural information. It is impossible to properly optimize code for an architecture when you don't know exact details about timing, latency, cache sizes, register files, resource sharing, and the like. While, this generation, we have decidedly more information from NVIDIA on how to properly program their architecture, we still need more from both AMD and NVIDIA.<br /> <br /> Priced at $299 the Radeon HD 4870 is clocked 20% higher and has 81% more memory bandwidth than the Radeon HD 4850. The GPU clock speed improvement is simply due to better cooling as the 4870 ships with a two-slot cooler. The memory bandwidth improvement is due to the Radeon HD 4870 using GDDR5 memory instead of GDDR3 used on the 4850 and 3870; the result is a data rate equal to 4x the memory clock speed or 3.6Gbps. The Radeon HD 4870 and 4850 both use a 256-bit memory bus like the 3870 before it (as well as NVIDIA's competing GeForce 9800 GTX), but total memory bandwidth on the 4870 ends up being 115.2GB/s thanks to the use of GDDR5. 
The use of GDDR5 enabled AMD to deliver GeForce GTX 260 class memory bandwidth without the pin count and expense of a 448-bit memory interface. GDDR5 actually implements a number of Rambus-like routing and signaling technologies while still remaining a parallel memory technology; the result is something that appears to deliver tremendous bandwidth per pin in a reliable, high-volume solution.

AMD most likely took a risk in bringing GDDR5 to market this early, and we do expect NVIDIA to follow suit; for now AMD is simply enjoying the benefits of jumping on the GDDR5 bandwagon early and, it seems, getting it right. It wouldn't be too far-fetched to imagine a 55nm GT200 die shrink with a 256-bit GDDR5 memory interface; that should allow NVIDIA to drop the price down to the $300 level (at least for the GTX 260).

As we mentioned in our Radeon HD 4850 Preview, both the Radeon HD 4870 and 4850 now support 8-channel LPCM audio output over HDMI. AMD just sent over 8-channel LPCM drivers for the Radeon HD 4870, so we'll be testing this functionality shortly. As we mentioned in our 4850 preview:

"All of AMD's Radeon HD graphics cards have shipped with their own audio codec, but the Radeon HD 4800 series of cards finally adds support for 8-channel LPCM output over HDMI. This is a huge deal for HTPC enthusiasts because now you can output 8-channel audio over HDMI in a motherboard-agnostic solution. We still don't have support for bitstreaming TrueHD/DTS-HD MA and most likely won't anytime this year from a GPU alone, but there are some other solutions in the works for 2008."

The Radeon HD 4870 is scheduled for widespread availability in early July, although AMD tells us that some cards are already in the channel. Given that the 4870 relies on a new memory technology, we aren't sure how confident we can be that it will be as widely available as the Radeon HD 4850 has been thus far. Keep an eye out, but so far the 4850 has been shipping without any issues at $199 or below, so as long as AMD can get cards into retailers' hands we expect the 4870 to hit its $299 price point.

Read more... (http://www.anandtech.com/video/showdoc.aspx?i=3341)

<span class="content" itxtvisited="1"><font size="2">&nbsp;A very smart man at Intel once told me that when designing a microprocessor you can either build a new architecture, or move to a smaller manufacturing process, but you don't do both at the same time. The reason you don't do both is because it significantly complicates the design, validation and manufacturing processes - you want to instead limit the number of variables you're changing in order to guarantee a quick ramp up and good yields of your silicon.</font> <p itxtvisited="1"><font size="2">NVIDIA followed this rule of thumb with the GT200, building its &quot;brand new&quot; (or at least significantly evolved) architecture on a tried-and-true 65nm process instead of starting at 55nm. Despite AMD building both RV670 and the new RV770 GPU on TSMC's 55nm process, NVIDIA hadn't built anything on a smaller than 65nm process, including the 1.4 billion transistor GT200.</font></p> <p itxtvisited="1"><font size="2">Shortly after the GT200 launched, AMD &quot;responded&quot; with its Radeon HD 4850, a cheap card by comparison, but a far more interesting one from a practical performance standpoint. Priced at $199 and selling for as little as $170, the Radeon HD 4850 managed to invalidate most of NVIDIA's product line. In response, NVIDIA dropped the price of its GeForce 9800 GTX to $199 as well and introduced one more card: a $229 GeForce 9800 GTX+.</font></p> <p itxtvisited="1"><font size="2">Originally we thought the GTX+ was a silly last minute afterthought as it looked like nothing more than an overclocked 9800 GTX. While its clock speeds are higher, it also happens to be the very first 55nm NVIDIA GPU.&nbsp;<br /> <br /> The core clock went up 9.3%, shader clock went up 8.6% and memory clock stayed the same. The clock speed bumps are marginal and by far the more interesting aspect of the chip is how much less power it consumes thanks to its 55nm process, which thanks to AMD should be quite mature by now.<br /> <br /> <a href="http://www.anandtech.com/video/showdoc.aspx?i=3340">Read more...</a></font></p> </span>

<font size="2"><span class="content"> <p>Tucked away in our NVIDIA GT200 review was a bit of gold. Elemental Technologies has been developing, in CUDA, a GPU-accelerated H.264 video transcoder.</p> <p>If you've ever tried ripping a Blu-ray movie you'll know that just a raw rip of just one audio and one video stream can easily be over 20 - 30GB. I've been doing a lot of this lately for my HTPC and even without 8-channel audio tracks, my ripped movies are still huge (Casino Royal was around 27GB for the 1080p video track and 5.1-channel english audio track). On a massive screen, you'll want to preserve every last bit of information, but on most displays you could actually stand to compress the video quite a bit.</p> <p>Using the H.264 codec (or the open-source x264 version), it's very easy to preserve video quality but reduce file size down to the 8 - 15GB range - the problem is that it requires a great deal of processing power to do so. Transcoding from a H.264 encoded Blu-ray to a lower bitrate H.264/x264 can often take several hours, if not over a day for a very high quality re-encode on a fast dual or quad-core system. </p> <p>Right now transcoding Blu-ray movies isn't exactly at the top of everyone's list, but using H.264/x264 you can significantly reduce file sizes on any video. x264 is the new DivX and its usefulness extends far beyond just ripping HD movies. Needless to say, its use isn't going to increase unless encoding using the codec gets faster.</p> <p>Elemental Technologies has been working on a technology they called RapiHD, which is a GPU-accelerated H.264 video encoder and the consumer implementation of RapiHD is a software application called BadaBOOM (yes, that's what it's actually called, there's even a video).</p> <p>RapiHD and thus BadaBOOM are both CUDA applications, meaning they are written in C and compiled to run on NVIDIA's GPUs. They won't work without a CUDA-enabled GPU (GeForce 8xxx, 9xxx or GTX 280/260) and they won't work on AMD/ATI hardware. </p> <p>Elemental allowed NVIDIA to use a very early beta of BadaBOOM in its GT200 launch, which meant we got access to the beta. We could only transcode up to 2 minutes of video and we weren't given access to any options, we could only choose a vague output format and run the encode. <br /> </p> <p><a href="http://www.anandtech.com/video/showdoc.aspx?i=3339">Read more...</a></p> </span></font>

<font size="2"><span class="content"> <p>It's been one of those long nights, the type where you don't really sleep but rather nap here and there. Normally such nights are brought on by things like Nehalem, or NVIDIA's GT200 launch, but last night was its own unique flavor of all-nighter.</p> <p>On Monday, AMD had a big press event to talk about its next-generation graphics architecture. We knew that a launch was impending but we had no hardware nor did we have an embargo date when reviews would lift, we were at AMD's mercy. </p> <p>You may already know about one of AMD's new cards: the Radeon HD 4850. It briefly appeared for sale on Amazon, complete with specs, before eventually getting pulled off the site. It turns out that other retailers in Europe not only listed the card early but started selling them early. In an effort to make its performance embargoes meaningful, AMD moved some dates around.</p> <p>Here's the deal: AMD is launching its new RV770 GPU next week, and just as the RV670 that came before it, it will be available in two versions. The first version we can talk about today: that's the Radeon HD 4850. The second version, well, just forget that I even mentioned that - you'll have to wait until the embargo lifts for more information there.<br /> <br /> But we can't really talk about the Radeon HD 4850, we can only tell you how it performs and we can only tell you things you would know from actually having the card. The RV770 architectural details remain under NDA until next week as well. What we can tell you is how fast AMD's new $199 part is, but we can't tell you why it performs the way it does. </p> <p>We've got no complaints as we'd much rather stay up all night benchmarking then try to put together another GT200 piece in a handful of hours. It simply wouldn't be possible and we wouldn't be able to do AMD's new chips justice. </p> <p>What we've got here is the polar opposite of what NVIDIA just launched on Monday. While the GT200 is a 1.4 billion transistor chip found in $400 and $650 graphics cards, AMD's Radeon HD 4850 is...oh wait, I can't tell you the transistor count quite yet. Let's just say it's high, but not as high as GT200 :) <br /> </p> </span></font> <p><font size="2">gain, we're not allowed to go into the architectural details of the RV770, the basis for the Radeon HD 4800 series including today's 4850, but we are allowed to share whatever data one could obtain from having access to the card itself, so let's get started. </font></p> <p><font size="2">Running GPU-Z we see that the Radeon HD 4850 shows up as having 800 stream processors, up from 320 in the Radeon HD 3800 series. Remember that the Radeon HD 3800 was built on TSMC's 55nm process and there simply isn't a smaller process available for AMD to use, so the 4800 most likely uses the same manufacturing process. With 2.5x the stream processor count, the RV770 isn't going to be a small chip, while we can't reveal transistor count quite yet you can make a reasonable guess.</font></p> <p><font size="2"><span class="content">That's a 625MHz core clock and 993MHz GDDR3 memory clock (1986MHz data rate). We've got more stream processors than the Radeon HD 3870, but they are clocked a bit lower to make up for the fact that there are 2.5x as many on the same manufacturing process.<br /> </span></font><br /> The rest of the specs are pretty straightforward, it's got 512MB of GDDR3 connected to a 256-bit bus and the whole card will set you back $199. 
The Radeon HD 4850 will be available next week, and given that we've already received cards from three different manufacturers, we'd say this thing is going to be available on time.

Read more... (http://www.anandtech.com/video/showdoc.aspx?i=3338)


One-Point-Four-Billion. That's transistors, folks.

The chip is codenamed GT200 and it's the successor to NVIDIA's G80 and G92 families. Why the change in naming? The GT stands for "Graphics Tesla" and this is the second generation Graphics Tesla architecture, the first being the G80. The GT200 is launching today in two flavors.

Let's put aside all the important considerations for a moment and bask in the glow of sheer geekdom. Intel's Montecito processor (their dual core Itanium 2) weighs in at over 1.7 billion transistors, but the vast majority of this is L3 cache (over 1.5 billion transistors for 24MB of on die memory). In contrast, the vast majority of the transistors on NVIDIA's GT200 chip are used for compute power. Whether or not NVIDIA has used these transistors well is certainly the most important consideration for consumers, but there's no reason we can't take a second to be in awe of the sheer magnitude of the hardware. This chip is packed full of logic and it is huge.

If the number of transistors wasn't enough to turn this thing into a dinner-plate-sized bit of hardware, the fact that it's fabbed on a 65nm process definitely puts it over the top. Current CPUs are at 45nm, and NVIDIA's major competitor in the GPU market, AMD, has been building 55nm graphics chips for over seven months now. With so many transistors, choosing not to shrink the manufacturing process doesn't seem to make much sense to us. Smaller fab processes offer not only the potential for faster, cooler chips, but also significantly reduce the cost of the GPU itself. Because manufacturing costs are (after ramping production) assessed on a per-wafer basis, the more dies that can be packed onto a single wafer, the less each die costs, as the rough estimate below illustrates. It is likely that NVIDIA didn't want to risk any possible delays arising from manufacturing process changes on this cycle, but that seems like a risk that would have been worth taking in this case.
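As a toy illustration of that per-wafer economics, here is a quick Python sketch. The die areas, wafer cost and the absence of a yield model are hypothetical simplifications (576mm² is merely in the ballpark of what is commonly cited for a GT200-class die), not figures from this article.

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Classic approximation: usable wafer area divided by die area,
    # minus a correction for partial dies lost around the wafer edge.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

wafer_cost_usd = 5000          # hypothetical cost of one 300mm wafer
for area in (576, 256):        # a GT200-class die vs. a much smaller GPU die
    n = dies_per_wafer(area)
    print(f"{area} mm^2: ~{n} dies per wafer, ~${wafer_cost_usd / n:.0f} per die (before yield)")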

Instead, GT200 is the largest die TSMC has ever fabbed for production. Quite a dubious honor, and I wouldn't expect NVIDIA to really see this as something of which to be proud. Of course, that doesn't mean we can't be impressed with the sheer massiveness of the beast.

And what do we get from all these transistors? Moving up from the 690M transistors of the original G80 and 754M transistors of G92 to the 1.4B transistors of GT200 is not a small tweak. One of the major new features is the ability to process double-precision floating point data in hardware (there are 30 64-bit FP units in GT200). The size of the register file for each SP array has been doubled. The promised ability of an SP to process a MAD and a MUL at the same time has been enhanced to work in more cases (G80 was supposedly able to do this, but the number of cases where it worked as advertised was extremely limited). And the number of SPs has increased from 128 on G80 to 240 with GT200. To better understand what all this means, we'll take a closer look at the differences between G80 and GT200, but first, the cards.
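To see what that MAD+MUL co-issue is worth on paper, here is a hedged peak-throughput sketch. The shader clocks are the commonly cited reference values for the GeForce GTX 280 and 8800 GTX (not stated in this article), and the three-flops-per-SP-per-clock figure only applies when the extra MUL can actually be scheduled.

def peak_gflops(sp_count, shader_clock_mhz, flops_per_sp_per_clock=3):
    # 3 flops per clock assumes a MAD (2 flops) plus a co-issued MUL (1 flop).
    return sp_count * flops_per_sp_per_clock * shader_clock_mhz / 1000

print(f"GT200 (GeForce GTX 280): ~{peak_gflops(240, 1296):.0f} GFLOPS")  # ~933
print(f"G80 (GeForce 8800 GTX):  ~{peak_gflops(128, 1350):.0f} GFLOPS")  # ~518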

Read more...

The time has come for us to witness another round between the professional graphics cards currently offered by AMD and Nvidia. We tested contemporary professional solutions from the ATI FireGL and Nvidia Quadro FX series in CAD/CAM applications. It was also interesting to see what a popular gaming graphics card can do in the same type of applications. Read our review for details.

Read more...
