No, it isn't. It's a combination of several code points. Let's go back in time: when there was only ASCII (and EBCDIC, but let's forget that), to display "á" you had to send "a" + #08 (backspace) + "'" (an apostrophe overstruck as a makeshift accent). That's obviously one "character" or "glyph", but it takes three code points. See the difference?
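For illustration, here's a minimal Python sketch (Python and its standard unicodedata module are my choice here, nothing from the thread) showing that modern Unicode still has both spellings of "á": one precomposed code point, or a base letter plus a combining accent:

    import unicodedata

    precomposed = "\u00e1"   # U+00E1 LATIN SMALL LETTER A WITH ACUTE, one code point
    decomposed = "a\u0301"   # "a" + U+0301 COMBINING ACUTE ACCENT, two code points

    print(len(precomposed))  # 1 (Python's len counts code points)
    print(len(decomposed))   # 2
    print(precomposed == decomposed)                               # False: different sequences
    print(unicodedata.normalize("NFC", decomposed) == precomposed) # True after normalization

Both strings display as the same single glyph, which is exactly the character-versus-code-point gap I'm talking about.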
You're mixing up different things: the keys you press on the keyboard, the key events, the code pages, and the final resulting character.
Also note that in some languages a "single character" may itself be composed of several glyphs; canonical example:
ᄀᄀᄀ각ᆨᆨ
That's what Unicode calls a "grapheme cluster": very roughly, a single user-perceived character built from multiple glyphs. Or vice versa. Or whatever ... the terminology starts to fail once you get this far.
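To make the Hangul example concrete, here's a small Python sketch (standard unicodedata only, my own illustration) where three conjoining jamo code points collapse into one precomposed syllable under NFC, even though both forms display as a single grapheme cluster:

    import unicodedata

    jamo = "\u1100\u1161\u11a8"  # choseong KIYEOK + jungseong A + jongseong KIYEOK
    syllable = unicodedata.normalize("NFC", jamo)  # composes to U+AC01 HANGUL SYLLABLE GAK

    print(len(jamo))      # 3 code points
    print(len(syllable))  # 1 code point
    print(syllable)       # 각 - one glyph, one grapheme cluster, either way

And counting grapheme clusters properly needs the Unicode segmentation rules (UAX #29), which the standard library doesn't implement for you. That's part of the mess.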
Yes, it's a mess.
By the way, diacritics are glyphs as well. You can have standalone combining diacritics without any base character at all. You are aware of that, right?
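If that sounds odd, a quick Python check (again just unicodedata, for illustration) shows that a combining accent is a code point in its own right, base letter or not:

    import unicodedata

    acute = "\u0301"  # a bare combining diacritic, no base character
    print(unicodedata.name(acute))      # COMBINING ACUTE ACCENT
    print(unicodedata.category(acute))  # Mn (Mark, nonspacing)

    print("e" + acute)  # é - the accent attaches to the preceding base
    print(acute)        # alone, renderers typically draw it over a dotted circle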
Words fail me. Literally.
Everything that makes up a single glyph (chars, code points, other glyphs) goes together to form a single WHATEVER-YOU-WANT-TO-CALL-IT. As long as we agree that those bytes make up a single symbol with a unique interpretation, and that if you remove part of it, the meaning of the sequence it belongs to changes.
Again, it is a big mess, designed by committees. No engineers were involved.
And the number of applications that handle Unicode correctly is 0 (zero). Simply because the standard is vast, ambiguous, and a moving target. And the maintainers don't know what they're doing.
I'm done.