MarkMLl and Warfley and 440bx thank you very much for the enlightening information.
A little off topic:
I have a lot of questions in my mind; one of them is:
Does the running speed of threads change according to the language they are written in?
So is it possible that a thread written in one language needs sleep, while a thread written in another language does not?
Hypothetically: if a program were written in a reincarnation of BASIC which guaranteed that it behaved the same as some particular model of 8-bit home computer, then it would be reasonable for this to be implemented by intercalating a sleep() after every few emulated opcodes.
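As a minimal sketch of that idea (the helper ExecuteNextOpcode and the emulatorRunning flag are hypothetical, not from any real emulator; Sleep() is from SysUtils):

procedure RunEmulator;
const
  opcodesPerSlice= 100;                  // Hypothetical: emulated opcodes per slice
  sliceDelay= 1;                         // Hypothetical: mSec pause after each slice
var
  i: integer;
begin
  while emulatorRunning do begin
    for i := 1 to opcodesPerSlice do
      ExecuteNextOpcode;                 // Run one emulated 8-bit opcode
    Sleep(sliceDelay)                    // Throttle to roughly the original machine's speed
  end
end { RunEmulator } ;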
Apart from that... I'd be surprised if the language per se made any difference. I'd be surprised if the OS or the CPU said "Aha, I can see this is a non-standard calling convention so I'm going to insert execution delays"... and then I'd go looking for an alternative platform, since that would almost certainly indicate something dodgy going on. And even in that extreme case, while the OS might insert sleep(t) I'd not expect the CPU to, since in the general case the CPU is thread-agnostic (special cases: Sun "Niagara", Intel "Hyperthreads", and similar features now implemented by others... but I'm not sure whether the CPU can act as a scheduler there other than as dictated by the cache being filled).
Again hypothetically: there have been attempts at various dataflow languages and there was also Mystic Pascal, many if not all of which had some measure of multithreading implicit to their implementation and which /might/ have needed explicit delays to allow background activities to complete. However since that would not have been implemented by the OS (they generally predate the concept of OS-supported threads significantly) it would be strictly incorrect to say that sleep() was being used. Ditto for coroutines in Modula-2 etc.
Finally, as a counterexample, there have been cases where development environments recognised that they were being asked to compile a benchmark such as (notoriously) the Sieve of Eratosthenes, and messed about with the code generation to make the result look good. Also there are languages such as APL which can do some quite surprising things to improve apparent efficacy... note that I'm not saying efficiency here, since it is a regrettable fact that the people who promoted them were oblivious to real-world constraints like finite memory.
Hope that isn't too vague. I suggest that further discussion might be better continued in the "Other" topic.
Later, rereading:
"So is it possible that a thread written in a different language needs sleep, while a thread written in another language does not need sleep?"
Strictly speaking, there is no reason why code written in one language should be faster or slower than equivalent code written in a different language. However different /implementations/ may behave very differently, e.g. if one is highly-optimised native code while another is purely interpretive.
And if the question really is "is an *explicit* sleep() ever needed to get threads to synchronise", the unavoidable answer is "if that is the case you're doing something badly wrong".
However, "doing it right" might involve various synchronisation or IPC (Inter-Process Communication) facilities provided by the runtime library or the OS, and they will almost certainly have *implicit* invocations of sleep() internally.
Alternatively, if the question is about e.g. implementing a communications protocol where a fragment in Pascal might look like this:
const
  turnaround= 50;
begin
  SerSetRTS(serHandle, true);
  Sleep(turnaround); // Artificial pacing
  try
    ...
    while not (ignoreCts or SerGetCts(serHandle)) do
      Sleep(10);
    ...
  finally
    SerDrain(serHandle);
    Sleep(turnaround); // Artificial pacing
    SerSetRTS(serHandle, false);
    Sleep(turnaround) // Artificial pacing
  end
end { TSerialThread.sendMessage } ;
I think it would be incorrect to re-implement it in e.g. interpreted BASIC without doing /something/ to ensure that the turnaround time mandated by the protocol was observed... even if the current BASIC implementation and platform weren't fast enough to make it an issue (remember the number of programs with timing that went wildly wrong when the IBM AT came out?).
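One way of "doing something" along those lines, as a minimal sketch (not taken from the fragment above, and assuming GetTickCount64() from SysUtils), is to pace against elapsed wall-clock time rather than relying on the implementation happening to be slow:

procedure WaitTurnaround(startTicks, turnaroundMs: qword);
begin
  while GetTickCount64 - startTicks < turnaroundMs do
    Sleep(1)                             // Yield until the mandated turnaround has elapsed
end { WaitTurnaround } ;

// Usage, paralleling the fragment above:
//   startTicks := GetTickCount64;
//   SerSetRTS(serHandle, true);
//   WaitTurnaround(startTicks, turnaround);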
MarkMLl