Are you trying just to undermine whatever I am saying?
Not at all. I was just pointing out that software treats some things as easy (sequential evaluation with conditional jumps, for example) with which "mathematics" is unhappy. This is particularly in evidence with APL - invented by a mathematician - which is very much given to "let's take the Sin() of this and multiply it by..." in cases where most "real" programming languages would say "skip this if even".
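To make the contrast concrete, here's a small sketch (in Python, purely for illustration; the values and the odd/even rule are my own invented example): the first loop is the "skip this if even" control-flow style, the second computes the same thing in the APL spirit by multiplying by a 0/1 mask instead of branching.

```python
import math

xs = [1, 2, 3, 4, 5]

# Control-flow style: "skip this if even"
branching = []
for x in xs:
    if x % 2 != 0:
        branching.append(math.sin(x) * 2.0)
    else:
        branching.append(0.0)

# Array style in the APL spirit: take Sin() of everything and
# multiply by a mask, letting the zeros do the "skipping"
mask = [x % 2 for x in xs]          # 1 for odd, 0 for even
masked = [math.sin(x) * 2.0 * m for x, m in zip(xs, mask)]

assert branching == masked
```

Both produce the same result; the difference is whether the selection lives in the control flow or in the arithmetic.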
So while it might be strictly correct to describe a set as a sorted array without duplication (and I'm sure that Russell & Whitehead devote a chapter to that), it's not necessarily helpful in a programming context. Which, I'm afraid, implies that notation which might make sense in a strict mathematical context might not be helpful in real-world programming.
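For what it's worth, the "sorted array without duplication" description does translate directly into code - a rough sketch (function names are mine, not anyone's library):

```python
import bisect

def as_set(xs):
    # "A sorted array without duplication" as a set representation
    return sorted(set(xs))

def member(s, x):
    # Sortedness buys O(log n) membership via binary search
    i = bisect.bisect_left(s, x)
    return i < len(s) and s[i] == x

s = as_set([3, 1, 2, 3, 1])
assert s == [1, 2, 3]
assert member(s, 2) and not member(s, 4)
```

Which is fine as far as it goes, but it says nothing about what a working programmer actually needs from a set type: insertion cost, iteration order, and so on.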
MarkMLl