Xeon Phi: Intel's Larrabee-Derived Card In TACC's Supercomputer
After eight years of development, Intel is finally ready to announce its Xeon Phi Coprocessor, which is derived from the company's work with Larrabee. Although the architecture came up short as a 3D graphics card, it shows more promise in the HPC space.
The Value Proposition Of Xeon Phi: Optimization
Intel came at us with a dual-pronged message during its presentation at the TACC:
- With Xeon Phi, Intel is enabling a 2x or more performance-per-watt improvement in HPC apps compared to the Xeon E5 family.
- Creating Xeon Phi-ready code happens through a similar approach developers use to exploit the Xeon's multiple cores.
The value in the first statement is both simple and powerful. Getting a greater-than-2x performance boost in optimized applications for somewhere between 225 and 300 W is not bad, particularly when you consider that a pair of Xeon E5s typically has a 190 to 300 W thermal ceiling, too. And because Xeon Phi is a PCI Express-based card, you aren't limited to one per pair of Xeon E5s. So long as the scaling is there, you can drop two, three, or even four into a single Xeon-based server. The resulting combination can give you a lot more performance in a given amount of rack space, more performance under a defined power budget, or comparable performance using far less space, power, and cooling.
Then again, almost any step forward in manufacturing or architecture should yield gains in those same metrics. Xeon Phi's real advantage is tied to the development effort needed to harness the hardware's potential. Intel is betting that CUDA and OpenCL programming is still in its infancy, and that the prospect of using familiar languages and tools is compelling enough to win support from the software community. If Intel can maintain competitive performance, its position is strong: a unified programming model lets developers exploit host processors and co-processors alike via x86, minimizing the time it takes to generate optimized code.
One of the questions we put to Intel during the Xeon Phi Q&A was, "If you sell these accelerators for $2000 or more, how will the next generation of college students learn to write code able to exploit this hardware?" We received two responses. First, Xeon Phi will not be available at retail, so Intel is working to arm universities with hardware resources. The second answer was more intriguing. If a budding programmer has a Core i3 or better CPU, they can already learn the programming model. Remember, most of the gains you get from a many-core architecture come simply from optimizing code for parallelism.
Intel doesn't care if you program for a Core i3, a Xeon E5, or a Xeon Phi. The company simply wants code written for multi-core x86 architectures. Back in the 1990s, when I was studying computer science, we didn't have multi-core CPUs in our desktops. Today, multi-core is the norm, and it's the model that will proliferate going forward.
I would also encourage Intel to get the Xeon Phi chips with manufacturing defects (and consequently lower usable core counts) into the hands of students. The name of the game is big data analytics, and that field is open to innovation from today's generation. Some at the University of Texas at Austin can already look forward to access to TACC and its Xeon Phi-equipped Stampede supercomputer.
mocchan Articles like these are what make me more and more interested in servers and supercomputers... Time to read up and learn more!
wannabepro Highly interesting.
Great article.
I do hope they get these into the hands of students like myself, though.
ddpruitt Intriguing idea...
These x86 cores have the oomph to run something a little more complex than what a GPGPU can. But is it worth it, and what kind of effort does it require? I'd have to disagree with Intel's assertion that you can get used to it by programming for an "i3". Anyone with a relatively modern graphics card can learn to program OpenCL or CUDA on their own system. But learning how to program 60 cores (or more) efficiently from an 8-core chip (optimistically) doesn't seem reasonable. And how much is one of these cards going to run? You might get more by stringing a few GPUs together for the same cost.
I wonder if this is going to turn into the same type of niche product that Intel's old math coprocessors did.
CaedenV man, I love these articles! Just the sheer amount of stuff that goes into them... measuring RAM in hundreds of TBs... HDD space in PBs... it is hard to wrap one's brain around!
I wonder what AMD is going to do... on the CPU side they have the cheaper (much cheaper) compute power for servers, but it is not slowing Intel's sales down any. Then on the compute side Intel is making a big name for itself with its new (but pricey) cards, and Nvidia already has a handle on the 'budget' compute cards, while AMD does not have a product out yet to compete with Phi or Tesla.
On the processor side AMD really needs to look out for Nvidia and its ARM chip prowess, which, if focused on, could very well eat into AMD's server chip market at the 'affordable' end of this professional market... It just seems like all the cards are stacked against AMD... rough times.
And then there is IBM, the company with so much data center IP that it could stay comfortably afloat without having to make a single product. But the fact is that they have their own compelling products for this market, and when they get a client that needs Intel or Nvidia parts, they do not hesitate to build it for them. In some ways it amazes me that they are still around, because you never hear about them... but they really are still the 'big boy' of the server world.
A Bad Day esrevermeh
*Looks at the current selection of desktops, laptops and tablets, including custom built PCs*
*Looks at the major data server or massively complex physics tasks that need to be accomplished*
*Runs such tasks on baby computers, including ones with an i7 clocked to 6 GHz and quad SLI/CF, then watches them crash or lock up*
ENTIRE SELECTION IS BABIES!
tacoslave I wonder if they can mod this to run games...
A four-core game that mainly relies on one or two cores, running on a thousand-core server. What are you thinking?
ThatsMyNameDude Holy shit. Someone tell me if this will work. Maybe, if we pair this thing up with enough Xeons and enough Quadros and Teslas, we can connect it to a gaming system and use the Xeons to render low-load games like CoD: MW3 and TF2, and feed the output to the gaming system.
mayankleoboy1 Main advantage of LRB over Tesla and AMD FirePro S10000:
A simple recompile is all that's needed to use Phi. Tesla/AMD needs a complete code rewrite, which is very, very expensive.
I see LRB being highly successful.
PudgyChicken It'd be pretty neat to use a supercomputer like this to play a game like Metro 2033 at 4K, fully ray-traced.
I'm having nerdgasms just thinking about it.