Base of string and dynamic array types

PascalDragon:

--- Quote from: Thaddy on July 05, 2024, 02:15:49 pm ---
--- Quote from: PascalDragon on July 04, 2024, 09:59:07 pm ---To be fair, in C, a char is a signed type.

--- End quote ---
That depends on the compiler. ANSI C has three distinct char types:
1. char (whether it is signed or unsigned depends on the compiler; the language specification leaves it implementation-defined)
2. signed char, explicit
3. unsigned char, explicit.
You can find out the undecorated char type's signedness in your compiler by including limits.h and looking at CHAR_MIN: it is 0 when char is unsigned and negative when it is signed.
To be fair... ::)
On Intel it is usually signed, on ARM it is usually unsigned.
--- End quote ---

Okay, granted, I confused that with int, which is always equivalent to signed int (and on MSVC, which is the main compiler I use at work, it indeed defaults to signed char independent of the platform) :-[
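
For reference, a minimal C sketch of the limits.h check described above (an illustrative example, not code from the original thread):

--- Code: ---#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is 0 when plain char is unsigned,
       and equals SCHAR_MIN when it is signed. */
#if CHAR_MIN < 0
    printf("plain char is signed   (CHAR_MIN = %d)\n", CHAR_MIN);
#else
    printf("plain char is unsigned (CHAR_MIN = %d)\n", CHAR_MIN);
#endif
    return 0;
}
--- End code ---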


--- Quote from: BrunoK on July 05, 2024, 04:17:00 pm ---Some bugs linger in the reference counting of const aVariable: String parameters in method calls, but we have to live with it. At least that's how some of our eminent Lazarus developers have decided to do things to get going.

--- End quote ---

What do you mean here? Also, the Lazarus developers have nothing to do with such low-level stuff.

VisualLab:

--- Quote from: Laur on July 04, 2024, 09:35:43 pm ---in C a char is a byte by definition, but in Pascal a char is a byte - something undefined, like an atom in quantum physics,
 where electrons are some mystical particles - probabilistic waves around protons...

the same with a photon: it's a point particle, yet it waves somewhat, thus it has non-null length.

--- End quote ---

You're talking nonsense. Programming languages and quantum physics are completely different things. You confused the size of a variable with how its content should be understood. A variable can be 1 byte in size but store: an alphanumeric character (Char), an unsigned integer (Byte), a signed integer (ShortInt) or a Boolean value (Boolean). In Pascal, the problematic name is "Byte"; that's why aliases like "Int8" and "UInt8" are much clearer than "Byte" and "ShortInt". In C, however, there is a mess: the size and the kind of content are conflated (char). Why the hell does a variable storing alphanumeric characters need a sign? For nothing. It's just a bad design that wasn't corrected early on, and so it stayed that way.
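
To make the size-versus-interpretation point concrete, here is a small C sketch (an illustration, not from the thread): the same one-byte bit pattern read through three different 1-byte types.

--- Code: ---#include <stdio.h>

int main(void)
{
    unsigned char raw = 0xC1;   /* one byte: the bit pattern 1100 0001 */

    /* The same eight bits, read through three different 1-byte types. */
    printf("as unsigned char: %u\n", (unsigned)raw);          /* 193 */
    printf("as signed char:   %d\n", (int)(signed char)raw);  /* -63 on two's-complement targets */
    printf("as plain char:    %c\n", (char)raw);              /* 'Á' in Latin-1 encodings */
    return 0;
}
--- End code ---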


--- Quote from: Laur on July 04, 2024, 09:35:43 pm ---This is just the key to a fantastic world, because any word is pretty fantastic to the ignorant... in a world - any world. :)

--- End quote ---

Words have their original meanings, but people sometimes try to use them in new contexts. Sometimes they do it thoughtlessly and senselessly, and a mess ensues. Unfortunately, in C that mess is much larger than in Pascal.

VisualLab:

--- Quote from: Laur on July 04, 2024, 10:19:13 pm ---in math and computers there exist only numbers... nothing more.

a symbol is always some number: A = 65, etc... in some context of course - the human context.

and a double is a floating-point number... 52 mantissa bits (53 bits of precision with the implicit bit) + 11 exponent bits + 1 sign bit = 64 bits.

what is 1.25 in bits, bytes or chars? :)

--- End quote ---

This is a truism. But not entirely true either.

There are many more kinds of objects in mathematics (matrices, vectors, functions, etc.). Numbers are just one of many.

In IT, yes, numbers are important. But what matters more is what those numbers represent. Bare numbers were relevant in computing only for a short time; for a long time now people have been treating those numbers (and, even more often, their "packages": arrays, structures, objects) as content. If only the numbers themselves mattered, there would be no modern software.

Martin_fr:

--- Quote from: Laur on July 04, 2024, 10:19:13 pm ---in math and computers there exist only numbers... nothing more.

a symbol is always some number: A = 65, etc... in some context of course - the human context.

and a double is a floating-point number... 52 mantissa bits (53 bits of precision with the implicit bit) + 11 exponent bits + 1 sign bit = 64 bits.

what is 1.25 in bits, bytes or chars? :)

--- End quote ---

If we go down that route, then the above is wrong too.

In today's mainstream computers, there exist (only) "bits": a concept with two states, represented by electronic circuits.
Those bits are organized in groups, and patterns of those bits can ("have the option of") be interpreted as numbers.

There are also circuits that act on those bits and translate certain patterns into other patterns by performing logic operations. Some of those circuits do this in such a way that the result matches the result of a mathematical operation on the numerical interpretation of those bit patterns.
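
As a concrete illustration of that interpretation step, and an answer to Laur's question, here is a small C sketch (assuming IEEE 754 binary64 doubles, which mainstream targets use) showing what 1.25 looks like in bits:

--- Code: ---#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double   d = 1.25;   /* 1.01 in binary, i.e. 1.25 * 2^0 */
    uint64_t bits;

    /* Reinterpret the same 8 bytes as a plain integer
       (assumes IEEE 754 binary64 doubles). */
    memcpy(&bits, &d, sizeof bits);

    printf("1.25 as bits: 0x%016llX\n", (unsigned long long)bits);
    /* Prints 0x3FF4000000000000:
       sign 0, exponent 0x3FF (bias 1023, so 2^0),
       mantissa 0x4000000000000 (the ".01" after the implicit 1). */
    return 0;
}
--- End code ---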

Feel free to take the above apart. It is, in all probability, riddled with inaccuracies and even mistakes.

SCNR
;) ;)

MarkMLl:
Right, this has gone on /far/ too long. Locked.

MarkMLl
