Recent messages


Pages: 1 2 3 [4] 5 6 ... 123
Last week we took a look at NVIDIA’s newest consumer flagship video card, the GeForce GTX 980. Based on the company’s new GM204 GPU, GTX 980 further cemented NVIDIA’s ownership of the performance crown with a combination of performance improvements, new features, and power consumption reductions. Combined with a lower price than the now-dethroned GTX 780 Ti, GTX 980 is an impressive flagship with a mix of attributes that NVIDIA hopes will entice existing 600 and 500 series owners to upgrade.

Of course, even though GTX 980 was cheaper than the outgoing GTX 780 Ti, it is still a flagship card and at $549 is priced accordingly. But as in every GeForce product lineup there is a GeForce x70 right behind it, and for GTX 980 its lower-tier, lower-priced counterpart is the GeForce GTX 970. Based on the same GM204 but configured with fewer active SMMs, a slightly lower clock speed, and a lower TDP, GTX 970 fills the gap by providing a lower-performance but much lower-priced alternative to the flagship GTX 980. In fact, at $329 it’s some 40% cheaper than GTX 980, one of the largest discounts for a second-tier GeForce card in recent memory.

For this reason GTX 970 is an interesting card in its own right, if not more interesting overall than its bigger sibling. The performance decrease from the reduced clock speeds and fewer SMMs is going to be tangible, but then so is the $220 savings to the pocketbook. With GTX 980 already topping our charts, if GTX 970 can stay relatively close then it would be a very tantalizing value proposition for enthusiast gamers who want to buy into GM204 at a lower price.
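The value comparison above is simple arithmetic; a quick sketch using only the prices quoted in this article makes the relationship explicit:

```python
# MSRPs quoted in this review (USD)
gtx_980_price = 549
gtx_970_price = 329

savings = gtx_980_price - gtx_970_price        # dollars kept in the pocketbook
discount = savings / gtx_980_price * 100       # relative discount vs. GTX 980

print(f"GTX 970: ${savings} cheaper ({discount:.0f}% below GTX 980)")
```

Which is where the "$220 savings" and "some 40% cheaper" figures come from.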

The launch of Haswell-E ushered in a triumvirate of new technology – a new CPU line, a new motherboard chipset and DDR4 memory. Today we focus on the new consumer motherboard chipset, X99, with motherboards from all four major manufacturers: the ASUS X99-Deluxe, the GIGABYTE X99-UD7 WiFi, the ASRock X99 WS and the MSI X99S SLI Plus. X99 represents the upgrade over the previous extreme chipset generation, X79, in several key areas in order to align itself better with the mainstream Z97 and Z87 platforms.

Read more: http://anandtech.com/show/8557/x99-motherboard-roundup-asus-x99-deluxe-gigabyte-x99-ud7-ud5-asrock-x99-ws-msi-x99s-sli-plus-intel-haswell-e

It has been twenty years since Corsair's first retail products hit the shelves, and the company has undoubtedly come a very long way since then. What started as a small memory manufacturer is now a major global supplier of advanced computer components and peripherals. Today is the dawn of a new era for Corsair, as the company announced the establishment of their own gaming brand. The new division has been christened "Corsair Gaming", and with the name comes a new department and logo. The focus will be on the development of high performance gaming peripherals.

Alongside the announcement of their new department, Corsair is also releasing several new products, with the much-anticipated RGB keyboards being among them. The company dropped the "Vengeance" series name and the new keyboards are just called by the brand name and model. That means we're now looking at the Corsair Gaming K70 RGB (and not the keyboard formerly known as Vengeance K70 RGB or some variation on that theme).

This keyboard has probably had more hype between its announcement and release date than any other keyboard in the history of humankind. Ever since the first demos of the keyboard found their way into pictures and videos back in January, there have been myriad rumors about the capabilities of the keyboard and the new Corsair Utility Engine (CUE) software. Some people even suggested that this is "just a Vengeance K70 with RGB LEDs", which could not be further from the truth. The truth is that the new Corsair Gaming K70 RGB introduces many new functions and far greater customizability than any previous Corsair mechanical keyboard.

Today we finally have a chance to go hands-on with the shipping hardware. Join us as we examine the keyboard, its capabilities, and the new CUE software.

Read more: http://anandtech.com/show/8556/corsair-gaming-k70-rgb-mechanical-keyboard-review

At the risk of sounding like a broken record, the biggest story in the GPU industry over the last year has been about what isn’t happening as opposed to what is. What isn’t happening is this: after nearly 3 years of TSMC’s 28nm process being the leading edge manufacturing node for GPUs, it isn’t being replaced any time soon. As of this fall TSMC has 20nm up and running, but only for SoC-class devices such as Qualcomm Snapdragons and Apple’s A8. Consequently, if you’re making something big and powerful like a GPU, all signs point to an unprecedented 4th year of 28nm being the leading node.

We start off with this tidbit because it’s important to understand the manufacturing situation in order to frame everything that follows. In years past TSMC would produce a new node every 2 years, and farther back yet there would even be half-nodes in between. This meant that every 1-2 years GPU manufacturers could take advantage of Moore’s Law and pack more hardware into a chip of the same size, rapidly increasing their performance. Given the embarrassingly parallel nature of graphics rendering, it’s this cadence in manufacturing improvements that has driven so much of the advancement of GPUs for so long.

With 28nm, however, that 2 year cadence has stalled, and this has driven GPU manufacturers into an interesting and really unprecedented corner. They can’t merely rest on their laurels for the 4 years between 28nm and the next node – their continued existence means having new products every cycle – so they instead must find new ways to develop new products. They must iterate on their designs and technology so that now more than ever it’s their designs driving progress and not improvements in manufacturing technology.

What this means is that for consumers and technology enthusiasts alike we are venturing into something of an uncharted territory. With no real precedent to draw from, we can only guess what AMD and NVIDIA will do to maintain the pace of innovation in the face of manufacturing stagnation. This makes for a frustrating time – who doesn’t miss GPUs doubling in performance every 2 years? – but also an interesting one. How will AMD and NVIDIA solve the problem they face and bring newer, better products to the market? We don’t know, and not knowing the answer leaves us open to be surprised.

Out of NVIDIA, the answer to that has come in two parts this year. NVIDIA’s Kepler architecture, first introduced in 2012, has just about reached its retirement age. NVIDIA continues to develop new architectures on roughly a 2 year cycle, so new manufacturing process or not, they have something ready to go. And that something is Maxwell.

At the start of this year we saw the first half of the Maxwell architecture in the form of the GeForce GTX 750 and GTX 750 Ti. Based on the first generation Maxwell GM107 GPU, NVIDIA did something we still can hardly believe and managed to pull off a trifecta of improvements over Kepler. GTX 750 Ti was significantly faster than its predecessor, it was denser than its predecessor (though larger overall), and perhaps most importantly it consumed less power than its predecessor. In GM107 NVIDIA was able to significantly improve their performance and reduce their power consumption at the same time, all on the same 28nm manufacturing node we’ve come to know since 2012. For NVIDIA this was a major accomplishment, and to this day competitor AMD doesn’t have a real answer to GM107’s energy efficiency.

However, GM107 was only the start of the story. Deviating from their typical strategy of launching a high-end GPU first – either a 100/110 or 104 GPU – NVIDIA told us up front that while they were launching at the low end first because that made the most sense for them, they would be following up on GM107 later this year with what at the time was being called “second generation Maxwell”. Now, 7 months later and true to their word, NVIDIA is back in the spotlight with the first of the second generation Maxwell GPUs, GM204.

GM204 itself follows up on GM107 with everything we loved about the first Maxwell GPUs and then some. “Second generation” in this case is not just a description of the second wave of Maxwell GPUs, but in fact is a technically accurate description of the Maxwell 2 architecture. As we’ll see in our deep dive into the architecture, Maxwell 2 has learned some new tricks compared to Maxwell 1 that make it an even more potent processor, and it further extends the functionality of the family.

Read more: http://anandtech.com/show/8526/nvidia-geforce-gtx-980-review

Rosewill is a known brand name in the North American markets. Although they started as a small company mainly focused on marketing budget-friendly products, today they have a large selection of technology-related products, including products that have been designed with advanced users in mind. One such example is their mechanical keyboard series, which stands out from the many non-mechanical keyboards that they also offer. In this capsule review, we will look at the Apollo RK-9100 and the RGB80, two of their most recent mechanical keyboards. Are they worthy successors of the famed RK-9000? We are about to find out.

Read more: http://www.anandtech.com/show/8500/rosewill-apollo-rk9100-rgb80-mechanical-keyboards-capsule-review

A decade ago, the 80 Plus program was introduced with the aim of promoting the development of more efficient and environmentally friendly computer power supply units (PSUs). When it was officially included in the Energy Star 4.0 specification requirements in 2007, the program really took off, with every manufacturer who had not already certified their units sprinting to do so.

In 2008, it was already easy and fairly cheap to produce 80 Plus certified PSUs, making the original 80 Plus program somewhat obsolete, but the original standard was revised to include differing tiers of efficiency, starting with the 80 Plus Bronze, Silver, and Gold certifications. Two more levels, Platinum and Titanium, were introduced later. These "badges of honor" drove the manufacturers to funnel money into research in order to create better and more efficient units, and they significantly helped their marketing departments as well. Today, users can easily find 80 Plus Gold certified units at very reasonable prices for their home computers, and 80 Plus Titanium certified units for servers have already been available for a couple of years.
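To make these tiers concrete, here is a rough sketch of what an efficiency rating means for wall draw and waste heat. The 80% and 94% figures are, respectively, the base 80 Plus requirement and the approximate 80 Plus Titanium requirement at 50% load (115V internal units); they are used here as illustrative assumptions, not quotes from this review:

```python
def wall_draw(dc_load_w, efficiency):
    """AC power pulled from the wall to deliver a given DC load."""
    return dc_load_w / efficiency

# 50% load on a 1500 W unit; efficiency figures approximate the base
# 80 Plus and 80 Plus Titanium requirements at this load point.
load = 750.0
for label, eff in [("80 Plus", 0.80), ("Titanium", 0.94)]:
    draw = wall_draw(load, eff)
    print(f"{label}: {draw:.0f} W from the wall, {draw - load:.0f} W lost as heat")
```

Roughly 140 W less heat at the same load is the kind of difference that justifies the research money the tiers drove manufacturers to spend.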

The race for more powerful and more efficient PSUs continues to this day, as every manufacturer is trying to get ahead of the competition by either developing more efficient units, or by producing cheaper units with the same level of efficiency. Today we will look at Corsair's attempt to show us who's the true king of the hill, as we are going to review the AX1500i, a fully digital 1500 Watt PSU with an 80 Plus Titanium certification and an impressive list of features.

The AX1500i is one of the first 80 Plus Titanium certified consumer PSUs, as well as one of the most powerful units currently available. These facts help explain the rather insane retail price of $450, but there's no question that this is a very niche product. With such a price tag and power output, the AX1500i is intended only for very advanced users and hardcore gamers who are willing to pay as much as a small home/office PC costs just to get the best PSU possible. However, this segment of the market is very demanding as well – does the Corsair AX1500i have what it takes to please such users? We will find out in this review.

Last month AMD held their 30 years of graphics celebration, during which they announced their next Radeon video card, the Radeon R9 285. Designed to be AMD’s new $249 midrange enthusiast card, the R9 285 would launch on September 2nd. In the process the R9 285 would be a partial refresh of their R9 280 series lineup, supplying it with a new part to replace their nearly 3-year-old Tahiti GPU.

The R9 285 is something of a lateral move for AMD, a move we very rarely see in this industry. The R9 285’s immediate predecessor, the vanilla R9 280, has been on the market with an MSRP of $249 for nearly 4 months now. Meanwhile the R9 285 is not designed to be meaningfully faster than the R9 280 – in fact, if you looked at the raw specifications, you’d rightfully guess it would be slower. Instead the R9 285 is intended to serve as a sort of second-generation feature update to the R9 280, replacing it with a card at the same price and roughly the same performance level, but with 3 years’ worth of amassed feature updates and optimizations.

To accomplish this AMD has minted a new GPU, Tonga. We’ll go into more detail on Tonga in a bit, but at its core Tonga is in many ways an optimized version of Tahiti. More importantly, Tonga is also the first GPU in AMD’s next Graphics Core Next architecture revision, which we will come to know as GCN 1.2. As a result, this launch won’t come with a significant shift in AMD’s price-to-performance, but for buyers it offers an improved feature set for those apprehensive about buying into Tahiti 3 years later, and for enthusiasts it offers a look at what the next iteration of AMD’s GPUs will look like.

Simply put, the new Intel Xeon "Haswell EP" chips are multi-core behemoths: they support up to eighteen cores (with Hyper-Threading yielding 36 logical cores). Core counts have been increasing for years now, so it is easy to dismiss the new Xeon E5-2600 v3 as "business as usual", but it is definitely not. Piling up cores inside a CPU package is one thing; getting them to do useful work is a long chain of engineering efforts that starts with hardware intelligence and ends with making good use of the best software libraries available.

While some sites previously reported that an "unknown source" told them Intel was cooking up a 14-core Haswell EP Xeon chip, and that the next generation 14 nm Xeon E5 "Broadwell" would be an 18-core design, the reality is that Intel has an 18-core Haswell EP design, and we have it for testing. This is yet another example of truth beating fiction.


Continuing our coverage of Intel’s 14nm technology, another series of press events held by Intel filled out some of the missing details behind the strategy of their Core M platform. Core M is the moniker for what will be the Broadwell-Y series of processors, following on from Haswell-Y, and it will be the first release of Intel’s 14nm technology. The drive toward smaller, low-powered fanless devices that still deliver a full x86 platform, with performance beyond that of a smartphone or tablet, is starting to become a reality. Part of Intel’s solution is shrinking the CPU package in all dimensions to allow for smaller devices, including reducing the z-height from 1.5 mm to 1.05 mm, giving a total die area 37% smaller than Haswell-Y.

Read more...

A dual processor system sounds awesome to the home user, but in reality it is almost entirely a professional market. The prosumer has to use Xeons at JEDEC memory speeds and then ensure that the software is NUMA aware, especially if it ends up searching for data in the other processor's L3 cache. However, GIGABYTE Server is now selling to the prosumer via Newegg, and they sent us the $640 GA-7PESH3 for review.

For most users, a dual processor system poses several issues aside from the cost. Performance with a 2P system is very dependent on the software in use. With a single processor system, each core can ‘snoop’ into the other cores’ caches in order to see how data is updated. In a 2P system, the latency of communication between the two CPUs is at least an order of magnitude higher. This means that memory accesses can be delayed, causing branched and locked code to run slowly. To get around this, the software has to be NUMA aware, and the majority of regular applications are not.

There is no overclocking with a 2P system, and single-thread speeds can be lower. As a result, gaming often sees a hit in performance, as do basic tasks. The optimal use case for most software that is not aware of the dual processor architecture is any workload that needs as few memory accesses as possible and is ‘embarrassingly parallel’, such as ray tracing, video editing, virtualization or particular types of scientific compute. 2P motherboards, particularly those built by server teams, also often come with system management tools not seen in the consumer space, allowing users to access their system as it processes data and monitor progress as well.
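A minimal sketch of the ‘embarrassingly parallel’ pattern described above, using Python’s standard multiprocessing pool; the `render_tile` function is a hypothetical stand-in for a real workload such as one tile of a ray-traced frame. Each task reads only its own inputs, so there is no shared state and no cross-socket cache traffic to slow a 2P system down:

```python
from multiprocessing import Pool

def render_tile(tile_id):
    # Hypothetical stand-in for one independent unit of work, e.g. one
    # tile of a ray-traced frame; it shares no state with other tasks.
    return sum(i * i for i in range(tile_id * 1000))

if __name__ == "__main__":
    # Each worker process handles whole tiles on its own, which is why
    # this pattern scales well even when software is not NUMA aware.
    with Pool(processes=4) as pool:
        results = pool.map(render_tile, range(8))
    print(f"rendered {len(results)} tiles")
```

Workloads that instead chase pointers through memory owned by the other socket are exactly the ones that suffer the latency penalty described above.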
