I remember fondly the AMD K6/2 architecture. It was the CPU of an ultra-budget-priced Compaq Presario laptop that got me through graduate school back in the day.
Some years later, back in my home country (Paraguay), I met a lady who had a side business as a VAR building desktop PCs. In my country, due to a lot of constraints, there was (and is) quite a money crunch, and people tried to spend as little as possible when purchasing computers. This gave rise to a lot of unscrupulous VAR resellers who built ultra-low-quality, underpowered PCs with almost unusable specs at an attractive price while turning a tidy profit. You could still get much better deals in both price and specs, but you had to have an idea of where to look.
Well, back to this lady. She said that during the early 2000s she was in the same line of business, selling beige-box desktop PCs at the lowest possible prices. But she loved the AMD K6 and K6/2 architectures because they provided considerable bang for the buck: the cost was affordable, and yet performance was good. Add reasonable amounts of RAM and storage and you could have a well-performing PC at a good price. The downside, she said, was that the processors tended to generate lots of heat, so the fans had to be good. This was especially important in a very hot country like Paraguay. But the bottom line was that the AMD K6 line enabled her to offer customers a good deal.
This made me appreciate what AMD did with K6. They really helped to bring good computers to the masses.
Those sellers never disappeared; although I'm not from Paraguay, the situation is familiar. These days they're selling desktops built on 10+ year old Xeons which you can buy for dirt cheap on AliExpress, installed on Frankenstein motherboards from no-name Chinese manufacturers which are desktop-oriented but take server processors. The graphics card is something old like an RX 480, run into the ground by years of crypto"currency" mining and then resoldered onto a new board, often also designed by Chinese manufacturers you've never heard of.
Graphics cards especially are very unreliable and frequently die within a few months of purchase. But when you can buy a whole PC for the price of one modern video card, many don't have a choice.

https://aliexpress.com/w/wholesale-intel-xeon-processors.htm...

https://aliexpress.com/w/wholesale-motherboards-xeon.html

https://aliexpress.us/w/wholesale-amd-radeon-rx-580.html
The idea that GPU chips can be "run into the ground" by years of crypto mining or AI workloads has been debunked pretty thoroughly by now. The hardware is quite resilient; it doesn't really fail at a higher rate.
I remember those Cyrix chips well. We had a little shop where we would assemble boxes to spec. And hey, a 486 is a 486, we reasoned. They were cheap, ran cool, and just about as fast as the others.
The Pentium III does sound like a good chip choice for a retro cyberpunk story. Like they said in “The Matrix”, 1999 was the peak of human civilization.
(586 became Pentium, so 686 would be the Pentium Pro/II microarchitecture.)
Ah, the Pentium, aka "5-ium" due to the penta- prefix. It is actually a nod from 4 to 5, but Intel wanted some cool name, and they decided penta + premium would sound cool, hence pentium.
But still, internally we call it i586, because that's the way it is. The same goes for the Pentium MMX (still i586); i686, I reckon, starts with the Pentium Pro.
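For anyone curious, that naming still shows up in compiler targets today. A minimal sketch, assuming GCC targeting 32-bit x86, where the -march values and the predefined macros carry the old numbering:

    /* pentium_names.c - a small illustrative check, assuming GCC.
       Build with e.g.: gcc -m32 -march=i586 pentium_names.c
                 or:    gcc -m32 -march=i686 pentium_names.c */
    #include <stdio.h>

    int main(void) {
    #if defined(__i686__)
        puts("built for i686 (Pentium Pro class and later)");
    #elif defined(__i586__)
        puts("built for i586 (Pentium class)");
    #elif defined(__i486__)
        puts("built for i486");
    #else
        puts("built for some other target");
    #endif
        return 0;
    }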
Yes, and Intel had actually lost a legal attempt to stop people using the numbers (I can't remember if this was earlier in the 486 era or if it was something they tried first with 586).
> The name invoked the number five, but was completely trademarkable, unlike the number 586.

But marketing was a large part of the reason that they started caring so much at that particular time. The Pentium line was the first time Intel had marketed directly to the end users¹ ², in part as a response to alt-486 manufacturers (AMD, Cyrix) doing the same with their products, like clock-doubled units compatible with standard 486/487 sockets (which were cheaper and, depending on workload, faster than the Intel upgrade options).
--------
[1] this was the era that “Intel Inside (di dum di dum)” initially came from
[2] that was also why the FDIV bug was such a big thing despite processor bugs³ being, within the industry, an accepted part of the complex design and manufacturing process
[3] for a few earlier examples: a 486 FPU bug that resulted in what should have been errors (such as overflow) being silently ignored; another (more serious) one in the trig functions that resulted in a partial recall, with the rest of that line marked down as 486SX units (a pin removed to effectively disable the FPU); an entire stepping of 386 chips that ended up sold as "for 16-bit code only"; and, going further back into the 8-bit days, some versions of the 6502 had a bug or two in handling things (jump instructions, loads via relative references) that straddled page boundaries (mitigated in code by careful code/data alignment; no recalls, just errata published)
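On footnote 2's FDIV bug, for flavor: the famous quick test, as a minimal C sketch (the constants are the widely reported test values from Nicely's discovery; on any correct FPU this prints 0):

    /* fdiv_check.c - x - (x/y)*y comes out exactly 0 with correct
       IEEE double arithmetic; the flawed Pentium divider returned
       a quotient wrong from about the fifth significant digit, so
       the result reportedly came out around 256 instead of 0. */
    #include <stdio.h>

    int main(void) {
        double x = 4195835.0, y = 3145727.0;
        printf("%g\n", x - (x / y) * y);
        return 0;
    }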
I remember when IBM was upset that various companies were calling their 80286 computers "<Brandname> AT" like the IBM AT ("advanced technology"). But you can't trademark a preposition!
If I am correct, the Pentium Pro was the first "out of order" design. It specialized in 32-bit code, and did not handle 16-bit code very well.
The original Pentium, I believe, introduced a second pipeline that the compiler had to optimize for to achieve maximum performance.
AMD actually made successful CPUs based on Berkeley RISC, similar to SPARC (they used register windows). The AMD K5 had this RISC CPU at its core. AMD then bought NexGen and improved their RISC design for the K6, and later the Athlon.
Because of the branding change, history will remember the Pentium (P5). It was really the Pentium Pro (P6) that put Intel leaps ahead on x86 microarchitecture, a lead they’d hold with only a few minor stumbles for two decades.
Bob Colwell (mentioned elsewhere ITT) wrote a fascinating technical history of the P6: The Pentium Chronicles.
The major stumble being having to cross-license the x64 opcode design from AMD, thus ensuring at least two players in the field (and, the way it's going, only two).
They also started to slip behind AMD in the Pentium 4/NetBurst era, but got their footing back with Core (a more direct descendant of the P6 than the Pentium 4!)
Around the same time, but I’d classify as separate stumbles.

https://en.wikipedia.org/wiki/Tomasulo's_algorithm

Took a while until transistor budgets allowed it to be implemented in consumer microprocessors.

https://news.ycombinator.com/item?id=38459128
> The original Pentium, I believe, introduced a second pipeline that the compiler had to optimize for to achieve maximum performance.
It wasn't a full pipeline, but large parts of the integer ALU and related circuitry were duplicated so that complex (time-consuming) instructions like multiply could directly follow each other without causing a pipeline bubble. Things were still executed essentially in-order, but the second MUL (or similar) could start before the first was complete, if it didn't depend upon the result of the first, and the Pentium line had a deeper pipeline than previous Intel chips to take the most advantage of this.

The compiler optimisations, and similar manual code changes where the compiler wasn't bright enough, were there to reduce the occurrence of instructions depending on the results of the instructions immediately before them, which would bring the pipeline bubble back, as the subsequent instructions couldn't start until the current one was complete. This was also the time when branch prediction became a major concern, and further compiler optimisations (and manual coding tricks) were used to help here too, because aborting a deep pipeline because of a branch (or just stalling the pipeline at the conditional branch point until the decision is made) carries quite a performance cost.
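To make the dependency point concrete, here's a minimal sketch in modern C (illustrative only, not era-accurate Pentium assembly): the first loop is one serial multiply chain, while the second splits the work into two independent chains that a second pipe (or any pipelined unit) can overlap:

    /* ilp_demo.c - the same product computed two ways; unsigned
       wraparound keeps both results well-defined and identical. */
    #include <stdio.h>

    #define N (1 << 20)
    static unsigned a[N];

    int main(void) {
        for (int i = 0; i < N; i++)
            a[i] = (unsigned)i | 1u;   /* arbitrary odd values */

        /* One serial chain: every multiply needs the previous
           result, so a second execution pipe sits idle. */
        unsigned p = 1;
        for (int i = 0; i < N; i++)
            p *= a[i];

        /* Two independent chains: consecutive multiplies don't
           depend on each other, so the hardware can overlap them. */
        unsigned q0 = 1, q1 = 1;
        for (int i = 0; i < N; i += 2) {
            q0 *= a[i];
            q1 *= a[i + 1];
        }

        printf("%u %u\n", p, q0 * q1);  /* prints the same value twice */
        return 0;
    }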
Why not both? If "-ium" makes nerds think of an element name, and others of a premium product, all the better. I'd bet both of these interpretations were listed in the original internal marketing presentation of the name...
> but Intel wanted some cool name, and they decided penta + premium would sound cool, hence pentium
Some say they tried to add 100 to 486 and the result had some digits after the decimal point; that's why they named it Pentium (yes, I know about the FDIV bug).
I was a poor kid building computers in the mid-to-late '90s. I tried everything I could NOT to use a true Pentium. My first build (coming from an upgraded Compaq 386DX) was an AMD 486 "DX4". I had a Diamond Stealth PCI VGA card and 16MB of DRAM. After that I tried a 233MHz Cyrix 6x86. That chip was garbage. I had to run some software Pentium emulation to get Cubase to run. I went with a 300MHz Celeron after that. That was my first time trying the new SDRAM! After that I FINALLY got a legit 400MHz Pentium III! I could go on and on, as this is a lovely walk down memory lane and there have been some fun dips back into AMD Athlon/Ryzen/etc.
> I tried a 233MHz Cyrix 6x86. That chip was garbage.
Those chips were excellent value for mostly integer work, but had incredibly poor floating-point performance, which was a problem for gamers as the 3D era was really getting going around that time. I had one; it did me good service for a few years.
Yeah, I was all about recording music and running the first iteration of software synths. I was a Graphic Design major at that time, so Photoshop/Illustrator/QuarkXPress were my jam. Those surprisingly didn't run that badly - in real graphic design no one used Eye Candy (the reason everything on the web in 1998 had drop shadows, outer glows, and lens flares), so rendering "3D" rarely came into play.
I remember not believing my friend when he said that the OS and the games were inside the computer and didn't need to be loaded up via a floppy disk. That was my first time seeing a hard drive.
Another interesting episode "after the 486" was the switch from 32-bit to 64-bit, where Intel wanted to bury the ghost of the 8086 once and for all and switched to a completely new architecture (https://en.wikipedia.org/wiki/IA-64), while AMD opted to extend the x86 architecture (https://en.wikipedia.org/wiki/X86-64). This was probably the first time that customers voted with their feet against Intel in a major way. The Itanium CPUs with the new architecture were quickly rechristened "Itanic", and Intel grudgingly had to switch to AMD's instruction set - that's the reason the instruction set still used by all current "x86" CPUs is often referred to as AMD64.
What I find interesting is that Intel engineers had actually designed their own 64-bit extension, along much the same lines as AMD64.
Intel's marketing department threw a fit: they didn't want the Pentium 4 competing with their flagship Itanium. Bob Colwell was directly ordered to remove the 64-bit functionality.
Which he kind of did, kind of didn't. The functionality was still there, but fused off when NetBurst shipped.
If it wasn't for AMD beating them to market with AMD64, Intel would probably have eventually allowed their engineers to enable the 64-bit extension. And when it did come time to add AMD64 support to the Pentium 4 (the later Prescott and Cedar Mill models), the existing 64-bit support probably made for a good starting point.

Bob Colwell talks about this (and some of the x86 team vs Itanium team drama) in his Quora answer and follow-up comments: https://www.quora.com/How-was-AMD-able-to-beat-Intel-in-deli...
Around the time the K8 was released, I remember reading official Intel roadmaps aimed at ordinary consumers; they essentially planned, for at least a few more years if not longer, to segment the market into increasingly consumer-only 32-bit parts and IA-64 at the higher end.
They were trying to compete with Sun and IBM in the server space (SPARC and Power) and thought that they needed a totally pro architecture (which Itanium was). The baggage of 32-bit x86 would have just slowed it all down. However having an x86-64 would have confused customers in the middle.
But this market segmentation idea just seems absolutely insane to me in a way I’ve never had anyone satisfactorily explain.

It requires Intel to voluntarily destroy the commodity economics that put their CPUs on a rocket ship to domination.

It’s as if they actually bought into the RISC FUD from the 1990’s that x86 was unscalable, exactly when it was taking its biggest leaps.

Think back then it was all about massive databases - that was where the big money was, and x86 wasn't really set up for the top-end load patterns of databases (or OLAP data lakes).
> It’s as if they actually bought into the RISC FUD from the 1990’s that x86 was unscalable, exactly when it was taking its biggest leaps.
That's exactly what was happening.
Though it helps to realise that this argument was taking place inside Intel around 1997. The Pentium II was only just hitting the market; it wasn't exactly obvious that x86 was right in the middle of making its biggest leaps.

RISC was absolutely dominating the server/workstation space; this was slightly before the rise of the cheap x86 server. Intel management was desperate to break into that space, and they knew they needed a high-end RISC CPU. It was kind of general knowledge in the computer space at the time that RISC was the future.
Well, TBH it wasn't all FUD - hanging on to x86 eventually (much later) came back to bite them when x86 CPUs weren't competitive for tablets and smartphones, leading to Apple developing their own ARM-based RISC CPUs (which run circles around the previous x86 CPUs) and dumping Intel altogether.
It is interesting how so much of the speculation in those days was about how x86 was a dead end because it couldn’t scale up, but the real issue ended up being that it didn’t scale down.
Whether this is true or not I don't know, but I worked on a project with an HP employee and we were talking about the Itanium. At some point the HP guy goes, "You know we more or less designed that thing, right?"
I would tend to believe that the Itanium is an HP product, given that they've always seemed more invested in the platform than Intel.
Pentium marketing was next level. You could buy plushies of Intel workers in bunny suits. The first IMAX movie I went to was called "The Journey Inside", and it was basically a big ad for the Pentium.
I always wondered if some of that was to offset the negative publicity from the FDIV bug in the early Pentiums.

It was annoying, as it seemed every computer ad needed to play it, not just Intel ads.
The years when the Pentium came out were a bit of a shitshow. As the article said, there were seven companies producing 486 processors, but after that the market was mostly Intel, AMD, and little Cyrix. Then came socket-A vs. slot-A, etc. Looking back now, it seems like there were a lot of changes in a short period of time.
Things started progressing so fast in the mid-nineties that a brand-new top-of-the-line computer was being matched in performance by low-end offerings two years later. That lasted up to the late 2000s.
December 1998 $85 Celeron 300A handily beating June 97 $594 Pentium 233 MMX, not to mention overclocked one matching 1998 $621-824 Pentium 2s.

January 2002 $120 Duron 1300/Celeron 1300 beating 2000 $1000 Athlon 1000/Pentium 3 1000-1133.

June 2007 $40 Celeron 420, overclockable out of the box from stock 1.6 to 3.2GHz, beating the best $1000 CPUs of 2005 (FX-57, P4 EE).

Same goes for graphics chips starting around 1998/9.
> December 1998 $85 Celeron 300A handily beating June 97 $594 Pentium 233 MMX, not to mention overclocked one matching 1998 $621-824 Pentium 2s.
Ah, I remember the days of Intel's fabs doing “too good” a job and many more chips passing tests for faster use being produced than expected. To fulfil orders for the slower chips some of these better batches were marked down and sold as slower units, so if you were lucky you could really push the overclocking and get yourself a performance bargain. You also needed a good motherboard and quality RAM to pull it off reliably, of course.
Sillyrons is what we used to call the massively overclocked Celerons. At Uni a friend of mine made a good bit of pocket money selling an optimisation service for people who didn't feel confident playing with such settings themselves.
Fun thing is, with a tiny bit of manipulation you could run a P3 Tualatin at 1.33GHz via a slot adapter, some pin disablement, and some voltage mods (or, with the right adapter, a jumper) in a motherboard that shipped with a low-tier P2 or even earlier. So you could keep an Asus P2B from very early 1998 going well into the mid-'00s with astonishing performance gains; that motherboard had a massive lifespan in the right hands. Mine is still running to this day, with a new voltage regulator.
On the other hand, not being hopelessly outdated within a relatively short time does have perks. It's cheaper not to have to upgrade constantly while still getting decent performance.

But the time since 2020 feels much faster again. It's scary! But it's exciting.
Fun fact: Bonnell Atoms (D510, etc.) were not affected by the Meltdown vulnerability that plagued every Pentium processor since the 1995 Pentiums. These Atoms use purely in-order execution engines, which kinda makes them supercharged 486s.
The Pentium was the first superscalar x86 from Intel, but it was still in-order. The Pentium Pro (a completely different microarchitecture) was the first OoO Intel x86 microarchitecture.