Project Larrabee: How Intel’s First Attempt at GPUs Failed

Intel logo at CES 2023 (Hannah Stryker / How-To Geek)

Intel’s Arc lineup of graphics cards, the company’s first commercial dedicated GPUs, is now on sale. But did you know this isn’t actually the first time Intel toyed with GPUs? Here’s what happened with Project Larrabee, Intel’s first attempt at making a GPU.

How Did Project Larrabee Work?

Although Intel’s goal was to produce a GPU, the approach it took was very different from that of a regular GPU. The company announced the ins and outs of Project Larrabee at SIGGRAPH 2008, and it intended to do things differently than NVIDIA and AMD/ATI. How? It all came down to the architecture.

Larrabee was more CPU-like than most graphics cards. You can think of it as a sort of hybrid between a GPU and a multi-core CPU. It used the same x86 instruction set as desktop CPUs, with certain 512-bit vector extensions (LRBni) exclusive to the platform. It had the cache hierarchy and the architecture of a CPU, plus the texture sampling hardware and rendering capabilities of a GPU. Intel touted features such as a fully programmable rendering pipeline as selling points for Larrabee, whereas regular GPUs of the time only had partially programmable pipelines.
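
To make that “the pipeline is just software” idea concrete, here’s a minimal sketch in C. It’s our illustration, not Intel’s code: the blend_row function name is hypothetical, and it uses AVX-512 intrinsics (a modern descendant of Larrabee’s LRBni vector extensions) since LRBni itself never shipped to consumers. The point is that a stage a GPU would do in fixed-function hardware is just an ordinary vectorized x86 loop here:

```c
// Sketch: alpha-blending one row of pixels entirely in software,
// 16 floats per instruction. On Larrabee, stages like this ran as
// plain x86 code on wide vector units instead of dedicated hardware.
#include <immintrin.h>

void blend_row(const float *src, float *dst, float alpha, int n) {
    __m512 va   = _mm512_set1_ps(alpha);        // broadcast alpha to 16 lanes
    __m512 vinv = _mm512_set1_ps(1.0f - alpha); // broadcast (1 - alpha)
    int i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 s = _mm512_loadu_ps(src + i);    // 16 source pixels
        __m512 d = _mm512_loadu_ps(dst + i);    // 16 destination pixels
        // dst = alpha*src + (1-alpha)*dst, one fused multiply-add per lane
        __m512 out = _mm512_fmadd_ps(va, s, _mm512_mul_ps(vinv, d));
        _mm512_storeu_ps(dst + i, out);
    }
    for (; i < n; i++)                          // scalar tail for leftovers
        dst[i] = alpha * src[i] + (1.0f - alpha) * dst[i];
}
```

Because the whole pipeline is software, a stage like this could be rewritten, reordered, or replaced by the developer, which is exactly the flexibility Intel was pitching.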

A prototype Larrabee card that was sold on eBay (leodanmarjod)

The result was a strange general-purpose processor (or co-processor) built on the x86 architecture Intel pioneered: it could be used as a graphics card, yet it could also handle general-purpose computation where a CPU would normally be a better fit. The best of both worlds, basically. That hybrid setup also let it do things way ahead of the curve, such as real-time ray tracing, which wasn’t really seen on consumer desktop GPUs until the launch of NVIDIA’s RTX cards in 2018. Notably, Larrabee wasn’t developed by Intel’s integrated graphics team, but by a separate team at Intel.

That description makes it sound like anything but a graphics card, but Intel did, at one point, intend to release a consumer GPU built on this technology. In 2009, Intel claimed that Larrabee prototypes were on par with NVIDIA’s GeForce GTX 285, which many saw as a red flag given the lofty promises the company had made up to that point. Sadly, by the end of 2009, Intel called it quits on its plans to release it as a GPU. So what went wrong?

Why Intel’s Project Larrabee Failed

We don’t really know exactly what made Intel shelve the project. However, many people blame the failure on development delays: Intel intended to release a consumer GPU by 2009-2010, and as 2009 went by, it became increasingly clear that wasn’t going to happen, at least not on time.

It’s said that disappointing performance figures also pushed Intel to never release it as a GPU. Because of its CPU-like design, Larrabee lacked fixed-function hardware for tasks such as buffering and clipping, all of which had to be done in software. Basically, Intel’s CPU-GPU hybrid simply couldn’t match purpose-built GPUs in graphics workloads.

The real reasons this product was killed are, ultimately, known only to Intel; publicly, the company blamed development delays. It’s likely that, at some point in the process, Intel saw how things were shaping up and decided that releasing this, at least as a GPU, maybe wasn’t the best idea. Project Larrabee wasn’t completely killed, though: only Intel’s plans to turn it into a consumer GPU were.

Ultimately, the technology, and what Intel learned while making it, were repurposed into something else.

What It Turned Into: Xeon Phi

Intel re-applied its newly gained knowledge and, while it never released a GPU, it made something else: the Xeon Phi range of processors and co-processors. Since the cards happened to be really good at running highly parallel x86 software, Intel decided to lean into that and cut the graphics processing part entirely. Thus, Xeon Phi was born.

Initially, Intel released these as co-processors: PCI Express cards that were separate from the regular CPU. Later, it also released them as standalone CPUs rather than just co-processors. The last chips to carry the Xeon Phi branding had up to 72 cores, and unlike regular CPUs, where Hyper-Threading gives you two threads per core, these chips came with four-way multithreading, giving you a whopping four threads per core. A 72-core part therefore exposed 288 threads.
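
For a rough sense of how those 288 threads were meant to be used, here’s a minimal OpenMP sketch in C. It’s a hedged illustration rather than Xeon Phi-specific code: it uses only standard OpenMP calls, and on a top-end 72-core part the runtime would typically report 288 hardware threads (72 cores × 4 threads per core) and spread the loop across all of them.

```c
// Sketch: the kind of embarrassingly parallel x86 loop Xeon Phi targeted.
// No GPU-style kernel launch is needed; this is ordinary C code, and the
// OpenMP runtime fans it out across every available hardware thread.
#include <stdio.h>
#include <omp.h>

int main(void) {
    enum { N = 1 << 20 };
    static double a[N], b[N], c[N]; // zero-initialized static arrays

    // On a 72-core, 4-way multithreaded Xeon Phi this would report 288.
    printf("hardware threads available: %d\n", omp_get_max_threads());

    // Each hardware thread grabs a slice of the iteration space.
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = 2.0 * a[i] + b[i];

    printf("done: c[0] = %f\n", c[0]);
    return 0;
}
```

This is also why Xeon Phi could skip the graphics hardware entirely: code written for regular x86 servers could, in principle, recompile and scale onto it.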

Xeon Phi saw use in specialized applications, as well as supercomputers such as the Tianhe-2. But they were very specialized parts for very specific computing needs — not the same thing as a regular Xeon server CPU by a long shot.

How Intel Reignited Its GPU Efforts Years Later

Intel’s initial effort to release a dedicated GPU with this technology came to an end, but it wasn’t the end of Intel’s ambitions in the GPU space. The company eventually concluded that the old-school approach was better and decided to focus on making really good graphics cards the tried-and-true way. This led Intel to announce, in 2018, its intention to put a discrete GPU on the market by 2020.

Eventually, it fulfilled that promise: Intel released the Xe DG1 graphics card in 2020 and followed it up with the Intel Arc range of gaming-capable cards in 2022. So you could say things played out well for Intel in the end.
