As the title suggests, this particular incoherent rambling will focus on computers based on two different types of instruction set architecture: the x86(-64) architecture (Intel/AMD) and the ARM architecture (Qualcomm, Apple, etc). It will focus mainly on the basic design of processors on both ISAs, as well as their uses today, with a little speculation about the future.
I got interested in this topic when writing my previous one on AMD Fusion; the x86 and ARM topic is indirectly linked to it, as AMD's APUs could be how x86 based processors compete with ARM in low power devices.
Anyways, I will try to be slightly more coherent with this rambling. This will be less speculation and more informational (is that a real word?). Educational is a better one. Yeah, let's stick with educational. More coherent already.
You previously said that ARM and x86 are both instruction set architectures. Now what does this actually mean?
An instruction set, or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external I/O. An ISA includes a specification of the set of opcodes (machine language), and the native commands implemented by a particular processor.
Instruction set architecture is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Computers with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs.
Wikipedia's definition on an Instruction Set Architecture (will be called ISA from now on).
Read the full article here.
To put it very simply, the ISA functions as a bridge between software and hardware. Software being an application, hardware being the CPU. In a way, it's almost like a language or a framework, in that both processor and software are able to communicate and carry out intended tasks through a set of functions that both understand.
I won't pretend to be very well versed in the intricacies of ISAs or computer architecture and how it relates to carrying out instructions. I once had a group assignment to design a very simple & basic MIPS based processor.
I can tell you wholeheartedly that I never, ever, will consider a career in CPU architecture design. Ever. I felt like a moron half the time whereas the other half of the time, I merely felt dumb. Suffice it to say, it was the hard work of others that got me over the line.
So... you're not very smart? Umm... great, but now what on earth is x86(-64) and ARM?
The most snarky, and least helpful, answer to give would be that they are the designated names of ISAs upon which certain processors are built.
Yes. That will satisfy all curiosity.
Seriously though, it is that in a nutshell. In conversation, x86 (x86-64 is the 64-bit extension of x86, and it is backwards compatible) and ARM refer either to the ISA itself, or broadly to processors that are built on said ISAs.
ARM is short for Advanced RISC Machine. RISC itself is a design philosophy (reduced instruction set computing) for building processors that essentially boils down to "simpler is better". There are specific elements of RISC based ISAs and processors that make them distinct from others, but the main idea is to emphasise simplicity over complexity. This is in contrast to CISC (complex instruction set computing), the design philosophy under which Intel built the 8086, the first x86 processor.
x86 as previously mentioned, was built under the CISC design.
Where the designs differ is in how they are built to take instructions. x86 processors (CISC) are built under the assumption that hardware will always be more powerful than software, so the aim is to increase the complexity of the ISA in order to maximise functionality and reduce the total number of instructions needed. This gives developers the opportunity to create more complex (and supposedly, better) applications and software.
ARM is built on the opposing philosophy: simplicity is better than complexity. Because RISC based processor designs are simpler, they are also faster clock for clock, as each instruction takes less time to execute. The downside to this design is that they require many more instructions to carry out the same task. This can also impact functionality, as the less complex design limits what developers are able to create on RISC based ISAs.
To use an analogy, think of two calculators (ironically, calculators are typically built using RISC based processors).
One calculator can add and subtract. This is the ARM processor based on the RISC design.
One calculator can add, subtract, multiply and divide. This is the x86 processor based on the CISC design.
A developer writes a program to find the answer of 3x3.
The x86 based processor goes: 3 ... x ... 3 ... = ... 9.
The ARM based processor goes: (1+1+1)+(1+1+1)+(1+1+1)=3+3+3=9
A simple example that doesn't fully convey the difference between RISC and CISC designs, and consequently how x86 and ARM based processors work, but it illustrates the point well enough.
The x86 adds complexity and functionality to reduce the number of instructions, the ARM emphasises simplicity to increase the speed at which the instructions are carried out.
So why the big deal now?
As I have said before, ARM processors are used mainly in low end, low power devices. This includes most smartphones and basically all tablet PCs.
The big deal is that despite ARM processors and architectures being available for a number of years, only very recently has the functionality of ARM processors and ARM based products caught up to the functionality provided by x86 processors, at least for the average consumer. The rise of the smartphone and the PDA, and in particular the iPhone and iPad, has given the average consumer the power to do various tasks: recording and sending pictures and videos of decent quality, watching videos on the internet, playing games, and running complex applications. These are things that previously had typically required the power of an x86 based system, such as a desktop or laptop computer.
A die photo of Apple's A4 ARM processor core used for certain iPad and iPhone models.
A die photo of Intel's Sandy Bridge 2600k processor.
As you can see in the photos, Intel's processor is not only bigger, but is of a higher density and is more complex in its design.
Recently, when announcing the iPad 2, Apple claimed that it was a post-PC world. Essentially, Apple, who utilise ARM processors, claimed victory over the x86 manufacturers and developers (Intel, Microsoft, AMD). While the claims may be overstated (and some might say, downright imaginary), it does raise an interesting point: for the past few decades, it has been hardware innovation that has driven software innovation. At what point will improvements in hardware no longer bring improvements in software, and at what point will hardware be developed to cater to software instead of the other way around?
Unfortunately, that's my next topic to ramble about, so I won't discuss that in depth here.
So, what's next in the immediate future for ARM and x86(-64)?
Well, while Apple has essentially declared the PC obsolete, the technological advances in both ARM and x86 processor development seem to be coming from unexpected sources.
Whereas Apple and Intel (along with Microsoft) have been the standard bearers of their respective mediums, the innovation in both ARM and x86 design comes from AMD (who, despite implementing the best working 64-bit incarnation of x86, play second fiddle to Intel) and Nvidia (a discrete graphics card company branching into high performance computing).
As I went through in my prior rambling, AMD have decided to go with a design philosophy that embraces integration and system-on-a-chip designs, much like Apple's, except with an x86 based processor, focusing on low power solutions that provide excellent performance-per-watt ratios and long battery life.
Essentially, AMD wants to beat larger companies like Intel by branching out and diversifying their products, and to beat Apple at their own game with AMD Fusion. They are apparently getting some much needed help beating Intel and Apple from an unexpected source: Apple themselves.
AMD CFO Emilio Ghilardi took to the stage and gave attendees a brief look at the range of products that partners would be bringing to the market. Tucked in among the likes of Sony and Lenovo was a slide proudly displaying a pair of iMacs alongside a Mac Pro desktop. Ghilardi didn't take the time to give any details, but did preface the presentation by saying that we should expect "products that you have never seen before from AMD".
Nvidia, on the other hand, have decided to continue down the route of high performance computing. For a while now, Nvidia have given special focus to the HPC market, much more so than AMD, and have a superior line of products when it comes to using the raw processing power of hardware to accelerate applications. Note that this doesn't really mean graphics and games in the typical sense, but Nvidia's non-GeForce products have been put to good use.
Nvidia Corp. this week announced that its Tesla 2000-series compute accelerators now power three of top five supercomputers on the planet. The announcement shows that customers in the high-performance computing market are rapidly embracing heterogeneous computing model and take advantage of graphics processing units that deliver high horsepower amid moderate power consumption.
3 of the top 5 supercomputers, each with thousands of Nvidia Teslas? Not bad. Especially since a single Tesla costs over 1000 smackaroos. Read the full article here.
So, to that end, Nvidia have announced Project Denver (which I also mentioned in my AMD Fusion rambling). Nvidia is set to design and manufacture an ARM based processor to compete in the high performance computing field. Zero details have been released on the actual design, specs or anything of note. All we know so far is that Nvidia is building an ARM based processor for the HPC market and they're calling it Project Denver.
So what is the significance of this?
It could potentially mean that ARM processor design and x86 processor design will each be able to successfully branch into sectors of the market where they have typically been weak commercially. x86 will compete in the low power, mobile market (smartphones, tablets, etc) if AMD Fusion and Intel Atom prove to be a success. Likewise, with Microsoft recently announcing that Windows 8 will include ARM compatibility, there could be implications for x86 based companies, particularly if Nvidia's ARM based Project Denver turns out to be a success. If other companies like Qualcomm and Apple also want to get into the arena, we could easily see a world where x86 and ARM are able to coexist in numerous markets, or a world where one of them gets completely crushed by the other.
Now, I have little else to say on this matter, and it's a shame I'm not more illuminating on this issue, or the ones surrounding it. Speculation on this area from me is likely to be wild, uninformed and completely incorrect.
However, I'd like to leave a small stream of good articles that I did read if you're interested in the x86/ARM dilemma, particularly if you're interested in what might happen in the future. Now, I don't necessarily agree with the opinions presented in the articles, and some of the speculation included in the links has already been proven wrong (or correct), but they are still good reads nonetheless if you wish to learn a little more on the issue.
Re: The incoherent ramblings of Mr. Crusty: x86 & ARM
Originally Posted by mrcrusty
Lol, Quantum computing is still a far way off. But I still have my hopes for asynchronous CPUs in the coming decades.
I read in another article as well that they were hoping to have large corporations using them in a couple of years. Of course, it will take an additional 4-6 years to get them to any point of efficiency and stability, and then an additional 5 years or so before super expensive ones go mainstream, so I doubt people like you and me will have them in the next 15-20 years.