Author Topic: For those who want to understand ASM, I found this YOUTUBE link.  (Read 2610 times)

jamie

  • Hero Member
  • *****
  • Posts: 5145
For those who want to understand ASM, I found this YOUTUBE link.
« on: December 03, 2021, 04:43:40 pm »

https://www.youtube.com/watch?v=L1ung0wil9Y


I have a pretty good background in ASM, but I still found this video very educational. It starts at the C/Clang level and works its way up to the current AVX instructions, covering how it all works and how to use ASM line by line.

It's a long video, so sit back...
The only true wisdom is knowing you know nothing

munair

  • Hero Member
  • *****
  • Posts: 781
  • compiler developer @SharpBASIC
    • SharpBASIC
Re: For those who want to understand ASM, I found this YOUTUBE link.
« Reply #1 on: December 03, 2021, 11:10:38 pm »
Thanks for the tip.
keep it simple

SymbolicFrank

  • Hero Member
  • *****
  • Posts: 804
Re: For those who want to understand ASM, I found this YOUTUBE link.
« Reply #2 on: December 04, 2021, 01:58:59 am »
If you really want to know how it works, build your own 1- or 2-bit ALU on a breadboard :)

The first assembly opcodes were just activation pulses for AND gates that enabled specific actions: enable a buffer (copy a byte from the bus into the ALU), or add two bits and store the result in another bit (adding 1 and 1 gives 0 with the Carry bit set). All bits are flip-flops, and every action is carried out by a sequence of AND, OR, XOR and NOT gates.
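The bit-level addition described above is a full adder, and it can be sketched directly with those same gates (here as Python's bitwise operators; the function name is just for illustration):

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built from XOR, AND and OR gates."""
    # Sum bit: XOR of all three inputs
    s = a ^ b ^ carry_in
    # Carry out: set whenever at least two of the inputs are 1
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return s, carry_out

# 1 + 1 = 0 with Carry set, exactly as described above
print(full_adder(1, 1, 0))  # (0, 1)
```

Chain the carry_out of one such adder into the carry_in of the next and you have a multi-bit ALU — which is essentially what the breadboard exercise builds.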

The first improvement was a translation table (a multiplexer): you could list all the useful combinations of gate activations (a long row of levers) and discard the useless ones. Some architectures went about this very methodically, making sure every possible opcode was used and useful; others just kept adding bytes, sparsely distributed, to keep decoding easy and ensure backward compatibility.
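That translation table amounts to a sparse lookup from opcode to control signals. A minimal sketch, with a made-up 4-bit opcode map for a toy CPU (all names and encodings here are hypothetical):

```python
# Hypothetical opcode table: each opcode selects one useful combination
# of internal control signals (one setting of the "levers").
DECODE = {
    0x0: "NOP",
    0x1: "LOAD",   # enable buffer: bus -> ALU input
    0x2: "ADD",    # pulse the adder, latch the result
    0x3: "STORE",  # ALU output -> bus
}

def decode(opcode):
    # Sparse distribution: unassigned opcodes fall through as illegal
    return DECODE.get(opcode, "ILLEGAL")

print(decode(0x2))  # ADD
print(decode(0xF))  # ILLEGAL
```

A "methodical" architecture would fill every slot of the table; a sparse one leaves most entries illegal so the decoder stays simple and old encodings keep working.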

The next phase was to add everything and the kitchen sink. That made things enormously complex and not very efficient, because the number of bytes that had to be read for a single instruction kept growing.

So the phase after that was consolidation: split everything up into distinct, simplified execution units (RISC core, FPU, MMU, DMA) and run them in sequence as well as in parallel (pipelining). But that didn't decrease the number of bytes needed. Around this point we also saw the very long instruction word (VLIW) architectures, which went some way back toward every bit being a lever. RAM was comparatively fast at this point, but it couldn't keep up, so caching was needed.
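The payoff of pipelining can be shown with simple cycle counting. Assuming an idealized pipeline (one cycle per stage, no hazards or stalls — a textbook simplification, not any specific CPU):

```python
# Idealized 3-stage pipeline: fetch, decode, execute.
# Without pipelining, each instruction occupies all stages in turn;
# with pipelining, stages overlap across consecutive instructions.

def sequential_cycles(n_instructions, n_stages=3):
    # One instruction must fully finish before the next starts
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages=3):
    # Fill the pipeline once, then retire one instruction per cycle
    return n_instructions + n_stages - 1

print(sequential_cycles(10))  # 30
print(pipelined_cycles(10))   # 12
```

Real pipelines lose some of this to hazards and mispredictions, but the overlap is why splitting work into simple stages paid off.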

The next steps were out-of-order execution (multiple pipelines), giving all those distinct pipelines their own internal instruction formats, and ultimately virtualizing everything. In short: nowadays, what you see isn't even close to what you get, or to what happens behind the scenes. A CPU is an amalgam of very many, quite different execution units that most of the time... are idle, to limit heat production.
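The core idea of out-of-order execution is that an instruction may issue as soon as its operands are ready, regardless of program order. A minimal scheduling sketch (the instruction format and register names are invented for illustration; real hardware uses reservation stations and register renaming):

```python
def schedule(instructions):
    """instructions: list of (name, dest_reg, src_regs), in program order.
    Returns the order in which they actually issue."""
    ready = {"r0"}                    # registers holding valid values
    issued, pending = [], list(instructions)
    while pending:
        for ins in pending:
            name, dest, srcs = ins
            if all(s in ready for s in srcs):   # operands available?
                issued.append(name)
                ready.add(dest)
                pending.remove(ins)
                break
        else:
            raise RuntimeError("deadlock: unsatisfiable dependency")
    return issued

# 'mul' comes first in program order but needs r1, which 'add' produces,
# so 'add' issues before 'mul'.
order = schedule([
    ("mul",  "r2", ["r1"]),
    ("add",  "r1", ["r0"]),
    ("load", "r3", ["r0"]),
])
print(order)  # ['add', 'mul', 'load']
```

This is also why "what you see isn't what you get": the program order you wrote is only a contract about results, not about execution order.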

Because if all those execution units ran at full power at the same time, the chip would dissipate more power than your frying pan, and your computer would quickly turn to slag — and probably burn your house down, if your power supply were up to the task. No central heating needed anymore :)
