@scribly
I can't see what you mean. I've attached an image with the output. Left is no sleep, right is with sleep.
As you can see, the left one reads out the old value many times while the right one reads more evenly.
I cannot comment on the screenshot you posted because you didn't post the code that produced it. The modified example I posted shows very clearly that, in that case, sleep has little to no effect on thread switching.
Also, I have to mention writeln, which uses locks to make sure the output buffer isn't accessed by other threads at the same time. When the buffer eventually fills up, it will cause a full sleep until the buffer is processed (while holding the lock), so I'm not sure you want to use that to show off threading, or ever use it in anything that's CPU intensive.
There will not be a "full sleep" until the buffer is processed. When both threads send text to conhost.exe (which is a separate process), one thread will be blocked until the other thread's write request completes, but that doesn't imply a thread switch to the other thread in the same process.
I agree that the writeln has an effect on how the results are presented, but the writeln(s) are not what causes the scheduler to decide which thread gets to run next. As long as a thread has not used up its time slice, it will, generally speaking (since there are other threads running in the system), keep running until it uses it up.
Post the program that produced that output.
@marcov
Then it shouldn't either not be running (in case it only polled),
That's what "sleep" is for: to "not run", to let the scheduler know that the thread doesn't need any clock cycles.
or it should block to the next time it should actually do something.
When the thread is awakened, it will do something. In the meantime, it doesn't tie up system resources and scheduler clock cycles the way synchronization objects do.
Sleep(n) is not just saying that you don't need cycles now, but at the same time a request to get them in "n".
No, that is not correct. Sleep is not a request to get clock cycles after n milliseconds have elapsed; the thread may, or may not, get clock cycles once that time has passed. This is the reason sleep is considered by many to be "inaccurate": the mistaken belief that the scheduler will run the thread as soon as the time has elapsed.
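This is easy to demonstrate empirically. Here is a small sketch in Python, using time.sleep as a portable stand-in for the Windows Sleep call (the exact numbers depend on the OS and timer resolution, but the shape is the same): the OS only guarantees the thread is suspended for *at least* the requested time, never exactly.

```python
import time

# Request a 1 ms sleep repeatedly and measure how long the OS actually
# suspends the thread. The guarantee is "at least 1 ms", not "exactly 1 ms".
requested_ms = 1.0
samples = []
for _ in range(50):
    start = time.perf_counter()
    time.sleep(requested_ms / 1000.0)
    samples.append((time.perf_counter() - start) * 1000.0)

print(f"requested: {requested_ms:.3f} ms")
print(f"actual min/avg/max: {min(samples):.3f} / "
      f"{sum(samples) / len(samples):.3f} / {max(samples):.3f} ms")
```

On a typical desktop OS the minimum will already exceed the requested time, and the maximum can be considerably larger when the system is busy.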
My reasoning is that THAT should be avoided if possible.
What should be avoided is causing the scheduler to waste clock cycles on a thread that doesn't need them, and that's exactly what sleep(n) avoids.
Objects can be reused. For the variation in overhead I'd like a reference. Afaik in both cases it is a scheduler lock on a condition.
Yes, synchronization objects can definitely be reused. The overhead is simple to explain: a mutex causes a ring transition from ring 3 to ring 0 and requires scheduler clock cycles to determine if the mutex is signaled, plus additional clock cycles once it has been signaled to find which thread(s) was/were waiting for the object.
If you want to measure the best-case overhead, simply code a loop that waits on and releases a mutex about 1,000,000 times. That will give you an idea of a mutex's cost. Do the same thing for critical sections and Sleep(1) (shorten the loop for that last one :)) then compare the results (and pay attention to the CPU consumption in all cases.)
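Here is a rough sketch of that measurement in Python. A caveat: Python's threading.Lock is a lightweight user-space lock, not a Windows kernel mutex, so the absolute numbers will understate a kernel mutex's cost, but the comparison against a Sleep(1)-style loop still makes the point.

```python
import threading
import time

# Best-case lock overhead: acquire and release an uncontended lock in a loop.
ITERATIONS = 1_000_000
lock = threading.Lock()

start = time.perf_counter()
for _ in range(ITERATIONS):
    lock.acquire()
    lock.release()
lock_seconds = time.perf_counter() - start

# Sleep(1)-style loop: far fewer iterations, as suggested above,
# because each iteration costs at least a full timer tick.
SLEEP_ITERATIONS = 100
start = time.perf_counter()
for _ in range(SLEEP_ITERATIONS):
    time.sleep(0.001)
sleep_seconds = time.perf_counter() - start

print(f"lock acquire/release: {lock_seconds / ITERATIONS * 1e9:.0f} ns per iteration")
print(f"sleep(1 ms):          {sleep_seconds / SLEEP_ITERATIONS * 1e6:.0f} us per iteration")
```

The per-iteration cost of the sleep loop is several orders of magnitude higher, which is exactly why the loop count has to be shortened for it.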
There are always exceptions, and that was never denied. We are talking about the general case here. (and even then there is a lot possible with waitmultiplemessage variants)
Marco, the point was that using Sleep is a good thing, not something to be avoided. A thread should let the scheduler know when it doesn't need attention. The thread that definitely should NOT be calling "sleep" is the thread that pumps messages.
As far as the general case, I'm inclined to believe it is more a matter of synchronization between threads within the same process than between different processes. For threads within the same process, polling using sleep produces code that is simpler, isn't subject to deadlocks, and uses less CPU. That said, it can be a bit slower than using some sort of synchronization; in those cases critical sections are usually a good option. A combination of TryEnterCriticalSection and Sleep often produces excellent results.
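The try-lock-then-sleep pattern looks roughly like this. A hedged sketch in Python, with lock.acquire(blocking=False) standing in for TryEnterCriticalSection and time.sleep for the Windows Sleep call; the worker and poller names are invented for illustration.

```python
import threading
import time

shared = {"value": 0}
lock = threading.Lock()
done = threading.Event()

def worker():
    # Updates the shared state under the lock.
    for _ in range(1000):
        with lock:
            shared["value"] += 1
    done.set()

def poller(results):
    # TryEnterCriticalSection + Sleep analogue: if the lock is busy,
    # don't block in the kernel; give up the time slice and try again.
    while not done.is_set():
        if lock.acquire(blocking=False):
            try:
                results.append(shared["value"])
            finally:
                lock.release()
        time.sleep(0.001)

results = []
t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=poller, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print("final value:", shared["value"])
```

The poller never deadlocks (it never waits while holding anything) and spends almost all of its time asleep rather than spinning, which is where the "simpler and uses less CPU" claim comes from.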
Afaik mutexes that are cross-application (global) in nature are fairly slow. Non-named ones used within one process afaik aren't.
I haven't tested the performance of unnamed mutexes but, if this is correct: https://stackoverflow.com/questions/1666653/are-mutexes-really-slower then unnamed mutexes are still kernel objects, with all the associated overhead of a named mutex.