Recent posts



There are a lot of mainboards out there with a variety of onboard controllers and extensive feature lists. Of course, there are simpler and even more affordable mainboards, but a board as polished as the Gigabyte GA-Z77-D3H is a rare find among them.

Read more...

0 Comments

Let’s take a closer look at the most efficient single-array tower cooler out there designed for six-core Intel processors.

Read more...

0 Comments
As Intel got into the chipset business it quickly found itself faced with an interesting problem. As the number of supported IO interfaces increased (back then we were talking about things like AGP and the FSB), the size of the North Bridge die had to increase in order to accommodate all of the external facing IO. Eventually Intel ended up in a situation where IO dictated a minimum die area for the chipset, but the actual controllers driving that IO didn't need all of that die area. Intel effectively had some free space on its North Bridge die to do whatever it wanted with. In the late 90s Micron saw this problem and contemplated throwing some L3 cache onto its North Bridges. Intel's solution was to give graphics away for free.

The budget for Intel graphics was always whatever free space remained once all other necessary controllers in the North Bridge were accounted for. As a result, Intel's integrated graphics was never particularly good. Intel didn't care about graphics, it just had some free space on a necessary piece of silicon and decided to do something with it. High performance GPUs need lots of transistors, something Intel would never give its graphics architects - they only got the bare minimum. It also didn't make sense to focus on things like driver optimizations and image quality. Investing in people and infrastructure to support something you're giving away for free never made a lot of sense.

Intel hired some very passionate graphics engineers, who always petitioned Intel management to give them more die area to work with, but the answer always came back no. Intel was a pure blooded CPU company, and the GPU industry wasn't interesting enough at the time. Intel's GPU leadership needed another approach.

A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel's chipsets were always built on an n-1 or n-2 process. If Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect allowed Intel to get more mileage out of its older fabs, which made the accountants at Intel quite happy, as those $2 - $3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets. Each Intel CPU sold needed at least one other Intel chip built on a previous generation node. Interface widths as well as the number of IOs required on chipsets continued to increase, driving chipset die areas up once again. This time however, the problem wasn't as easy to deal with as giving the graphics guys more die area to work with. Looking at demand for Intel chipsets and the increasing die area, it became clear that one of two things had to happen: Intel would either have to build more fabs on older process nodes to keep up with demand, or Intel would have to integrate parts of the chipset into the CPU.

Not wanting to invest in older fab technology, Intel management green-lit the second option: to move the Graphics and Memory Controller Hub onto the CPU die. All that would remain off-die would be a lightweight IO controller for things like SATA and USB. PCIe, the memory controller, and graphics would all move onto the CPU package, and then eventually share the same die with the CPU cores.

Pure economics and an unwillingness to invest in older fabs made the GPU a first class citizen in Intel silicon terms, but Intel management still didn't have the motivation to dedicate more die area to the GPU. That encouragement would come externally, from Apple.

Looking at the past few years of Apple products, you'll recognize one common thread: Apple as a company values GPU performance. As a small customer of Intel's, Apple's GPU desires didn't really matter, but as Apple grew, so did its influence within Intel. With every microprocessor generation, Intel talks to its major customers and uses their input to help shape the designs. There's no sense in building silicon that no one wants to buy, so Intel engages its customers and rolls their feedback into silicon. Apple eventually got to the point where it was buying enough high-margin Intel silicon to influence Intel's roadmap. That's how we got Intel's HD 3000. And that's how we got here.

Read more... (http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested)

0 Comments
The Water 3.0 revision of Thermaltake's closed loop cooling line is, in an interesting turn of events, an opportunity for us to essentially test the stock, traditional versions of Asetek's closed loop cooler products. Companies like Corsair, NZXT, and Thermaltake (among others) will often take the existing radiator, pump, and waterblock loop and give it their own spin, either by including special software, adding fan headers, or just using higher quality fans to differentiate their products. We've been able to test the CoolIT versions of the 120mm and 240mm radiator loops thanks to Corsair, but the Asetek ones are very popular as well (and in my opinion preferable), and thankfully that's what Thermaltake opted to go with for their third series of closed loop coolers.

Read more... (http://www.anandtech.com/show/6984/thermaltake-water-30-closed-loop-cooler-roundup)

0 Comments

As spring gets ready to roll over to summer, last week we saw the first phase of NVIDIA’s annual desktop product line refresh, with the launch of the GeForce GTX 780. Based on a cut-down GK110 GPU, the GTX 780 was by most metrics a Titan Mini, offering a significant performance boost for a mid-generation part, albeit a part that forwent the usual $500 price tier in the process. With the launch of GTX 780 the stage has been set for the rest of the GeForce 700 series refresh, and NVIDIA is wasting no time on getting to the next part in their lineup. So what’s up next? GeForce GTX 770, of course.

In our closing thoughts on the GTX 780, we ended on the subject of what NVIDIA would do for a GTX 770. Without a new mid/high-end GPU on the horizon, NVIDIA has instead gone to incremental adjustments for their 2013 refreshes, GTX 780 being a prime example through its use of a cut-down GK110, something that has always been the most logical choice for the company. But any potential GTX 770 is far more nebulous, as both a 3rd tier GK110 part and a top-tier GK104 part could conceivably fill the role just as well. With the launch of the GTX 770 now upon us we finally have the answer to that question, and the answer is that NVIDIA has taken the GK104 option.

What is GTX 770 then? GTX 770 is essentially GTX 680 on steroids. Higher core and memory clockspeeds give it performance exceeding GTX 680, while higher voltages and a higher TDP allow it to reach and sustain those clocks. As a result GTX 770 is still very much a product cut from the same cloth as GTX 680, but as the fastest GK104 card yet it is a potent successor to the outgoing GTX 670.

Read more...

0 Comments
There are two non-negotiables in building a PC these days: the cost of Intel silicon and the cost of the Windows license. You can play with everything else but Intel and Microsoft are going to get their share. Those two relatively fixed costs in the PC bill of materials can do one of two things: encourage OEMs to skimp on component cost elsewhere, or drive the entire ecosystem to supply higher quality components at lower prices. If you've been following the PC industry for the past decade, I think we've seen more of the former and less of the latter.

Apple occupying the high-end of the notebook PC space has forced many OEMs to reconsider their approach, but that's a more recent change. What AMD seems to offer is an easier path. AMD will take less of the BoM, allowing OEMs to invest those savings elsewhere - a move Intel will never make. Given how much pressure the PC OEMs have been under for the past few years, AMD's bargain is more appealing now than it has ever been.

With Llano and Trinity, AMD's story was about giving up CPU performance for GPU performance. With Kabini, the deal is more palatable. You only give up CPU performance compared to higher priced parts (you gain performance compared to Atom), and you get much lower power silicon that can run in thinner/lighter notebooks. Typically at the price points Kabini is targeting (sub-$400 notebooks), you don't get pretty form factors with amazing battery life. AMD hopes to change that.

While AMD hasn't disclosed OEM pricing on Kabini (similarly, Intel doesn't list OEM pricing on its mobile Pentium SKUs), it's safe to assume that AMD will sell Kabini for less than Intel will sell its competing SKUs. If Kabini's die size is indeed around 107mm^2 (http://www.anandtech.com/show/6977/a-closer-look-at-the-kabini-die), that puts it in the same range as a dual-core Ivy Bridge. AMD can likely undercut Intel a bit and live off of lower margins, but there's one more component to think about: Ivy Bridge needs its PCH (Platform Controller Hub), Kabini does not. As a more fully integrated SoC, Kabini's IO duties are handled by an on-die Fusion Controller Hub. Intel typically charges low double digits for its entry level chipsets, which is money AMD either rolls into the cost of Kabini or uses as a way of delivering a lower total cost to OEMs.

Traditionally, OEMs would take these cost savings and pass them along to the end user. I get the impression that AMD's hope with Kabini is for OEMs to instead take the cost savings and redeploy them elsewhere in the system. Perhaps putting it towards a small amount of NAND on-board for a better user experience, or maybe towards a better LCD.

As we found in yesterday's article, Kabini does a great job against Atom and Brazos. However, even with double digit increases in performance, Kabini is still a little core and no match for the bigger Ivy Bridge parts. Much to our disappointment, we pretty much never get sent low end hardware for review - so to make yesterday's NDA we had to stick with 17W Ivy Bridge and extrapolate performance from there. In the past day I grabbed an ASUS X501A system, a 15-inch entry-level machine priced in the low $300s. More importantly, it features a 35W Ivy Bridge based Pentium CPU: the dual-core 2020M.

Read more... (http://www.anandtech.com/show/6981/the-kabini-deal-can-amd-improve-the-quality-of-mainstream-pcs-with-its-latest-apu)

0 Comments

This mainboard is a perfect fit for a contemporary system: it doesn't boast anything extraordinary, but it also has no functional limitations of any kind. There are more complex and more expensive products out there, as well as simpler and more affordable ones, while the Asus P8Z77-V LK is a regular mainboard, the most balanced choice, the so-called "golden mean".

Read more...

0 Comments

As the two year GPU cycle continues in earnest, we’ve reached the point where NVIDIA is gearing up for their annual desktop product line refresh. With the GeForce 600 series proper having launched over a year ago, all the way back in March of 2012, most GeForce 600 series products are at or are approaching a year old, putting us roughly halfway through Kepler’s expected 2 year lifecycle. With their business strongly rooted in annual upgrades, this means NVIDIA’s GPU lineup is due for a refresh.

How NVIDIA goes about their refreshes has differed throughout the years. Unlike the CPU industry (specifically Intel), the GPU industry doesn't currently live on any kind of tick-tock progression method. New architectures are launched on new process nodes, which in turn ties everything to the launch of those new process nodes by TSMC. Last decade saw TSMC doing yearly half-node steps, allowing incremental fab-driven improvements every year. But with TSMC no longer doing half-node steps as of 40nm, fab-driven improvements now come only every two years.

In lieu of new process nodes and new architectures, NVIDIA has opted to refresh based on incremental improvements within their product lineups. With the Fermi generation, NVIDIA initially shipped most GeForce 400 Fermi GPUs with one or more disabled functional units. This helped to boost yields on a highly temperamental 40nm process, but it also left NVIDIA an obvious route of progression for the GeForce 500 series. With the GeForce 600 series on the other hand, 28nm is relatively well behaved and NVIDIA has launched fully-enabled products at almost every tier, leaving them without an obvious route of progression for the Kepler refresh.

So where does NVIDIA go from here? As it turns out, NVIDIA's solution for their annual refresh is essentially the same: add more functional units. NVIDIA of course doesn't have more functional units to turn on within their existing GPUs, so instead they're doing the next best thing, acquiring more functional units by climbing up the GPU ladder itself. With that in mind, we come to today's launch, the GeForce GTX 780.

Read more...

0 Comments

Anand is covering AMD’s latest Kabini/Temash architecture in a separate article, but here we get to tackle the more practical question: how does Kabini perform compared to existing hardware? Armed (sorry, bad pun) with a prototype laptop sporting AMD’s latest APU, we put it through an extensive suite of benchmarks and see what’s changed since Brazos, how Kabini stacks up against Intel’s current ULV offerings, and where it falls relative to ARM offerings and Clover Trail. But first, let’s talk about what’s launching today.

Read more...

0 Comments

Microprocessor architectures these days are largely limited, and thus defined, by power consumption. When it comes to designing an architecture around a power envelope, the rule of thumb is that any given microprocessor architecture can scale to target an order of magnitude of TDPs. For example, Intel's Core architectures (Sandy/Ivy Bridge) effectively target the 13W - 130W range. They can surely be used in parts that consume less or more power, but at those extremes it's more efficient to build another microarchitecture to target those TDPs instead.

Both AMD and Intel feel similarly about this order of magnitude rule, and thus both have two independent microprocessor architectures that they leverage to build chips for the computing continuum. From Intel we have Atom for low power, and Core for high performance. In 2010 AMD gave us Bobcat for its low power roadmap, and Bulldozer for high performance.

Both the Bobcat and Bulldozer lines would see annual updates. In 2011 we saw Bobcat used in Ontario and Zacate SoCs, as a part of the Brazos platform. Last year AMD announced Brazos 2.0, using slightly updated versions of those very same Bobcat based SoCs. Today AMD officially launches Kabini and Temash, APUs based on the first major architectural update to Bobcat: the Jaguar core.

Read more...

0 Comments