For monolithic applications you are right, but I'm talking about distributed ones. It doesn't consist of just the one process.
It doesn't? So nobody starts the distributed client on the external computers? How many clients are running?
Yes, you do need a server on each computer.
If you call a method, does it run in the context of the calling thread, or in the one of the instance? What does "Self" refer to?
As far as I understand, a method is a procedure with a hidden parameter, and that parameter is "self". The value of "self" is the instance itself (if I understand you correctly), and the code is always executed in the context of the calling thread.
Yes. And that's why you shouldn't access them without locking them up front. Because everyone can do that, all at the same time.
I think that depends on your definition of "simpler to handle". Less code and complexity, certainly.
I'm talking about the code here yes.
Does it differentiate between blocked, idle and processing threads? Because those make a large difference. And you imply the use of a thread manager.
It tracks idle threads only; it assumes that if a thread is not idle, it is busy. And I never had any blocked threads, as far as I know that is.
Blocked threads are the best ones: they're waiting on events or other I/O. They take little CPU time, code or maintenance.
Then again, it is really hard to make a functional thread that isn't just waiting more than 50% of the time on slow resources (mostly memory).
Well, you can avoid it by not using queues but sockets to communicate.
No, you did not avoid it, you simply changed the guarding mechanism: if the thread is too busy to check for incoming socket communication, it will not receive any.
True. But "efficient" in threading is often equal to "fully decoupled". Any interaction slows things down quite a bit. Sockets are completely decoupled; lock-free queues are nice, but 95% of the code is in preventing shared access from fucking things up.
Yes, I agree. That's my main problem with servlets, or threads crashing the main process, and so the whole distributed application. Which can be huge, and doing thousands of things at the same time.
If it is only the one thread/process, there is no recovery possible.
Well, guarding against all exceptions is as simple as adding a try..except block that encompasses the complete code of your execute method and does not re-raise any exception for whatever reason. Is that too much?
Well, for starters: that does work in free pascal most of the time, but not in C++.
But exceptions are the same as "on error goto handler". They make it hard to figure out the program flow. Especially if they can happen in child threads.
Mostly, because it is hard to enforce a wrapper that prevents crashing in all cases. Because, as you said, you have to put the try..except religiously around everything.
Every means of communication that is not enclosed by mutexes or through sockets increases the risk of crashing the application when shit happens.
Yes and no. There are atomic operations that do not need guards, and there are operations specifically designed to be atomic, like the InterlockedXXXXXX procedures. Those work both under a guard and without one, without problems.
Yes. I would love it if there was an atomic variant for each operation. But there are only a few really atomic operations.
Most "atomic" operations are like this:
Lock; // this is a global action over all processors and cores that often requires flushing pipelines and caches
DoSomething(Avalue);
Unlock; // Well, continue with what you were doing when you have repaired the damage
There are a few really atomic ones, but they are hard to use.
On a regular desktop computer, there are thousands of threads active at any one time. Of which most (more than 90%) are blocked and waiting on an event (I/O, mostly), and a few hundred that are executing code ("running").
But, most of them are waiting the majority of their time as well. On slow memory (hundreds of cycles), on a mutex lock or cache invalidation, or even an extremely slow (millions of cycles) disk access.
It is really hard to make a thread run at full speed for an extended time.
So, you actually want to count the total CPU time those running threads spend. And only the OS knows that. Your application doesn't.
You are overthinking it. Even the CPU has to wait for the bus to become available; it's out of your hands, it's out of the OS's hands too. Take a step back and see the forest.
If you have many threads that return results by locking, incrementing and unlocking a single variable, you just serialized your application.
Most multi-threading applications I see lock EVERYTHING. That makes them excessively slow.
If you use sockets for accessing the services, you also don't need to write all of them in the same language. Or run them all on the same kind of CPU and OS.
True, then again you add a level of indirection and an order of complexity, and it has nothing to do with the processing itself.
I don't agree: message queues are not servers. There is no memory barrier or exception prevention when accessing them. They use owned pointers.
It seems that "message queues" hit some kind of wall; try message brokers instead.
I know how they work.
Making things more complex won't solve the underlying problems, it will only hide them from people who don't understand.
Yes, on Linux it is probably faster as long as the binary containing the code is already loaded into main memory.
Same as threads on Windows.
Absolutely not, they are vastly different.
You can send a message to either a window or a thread, both of which have to be active, have a valid handle and message queue. And they should be on the "active" list of the scheduler (ie. scheduled to get CPU time). Otherwise, you won't get the message.
Isn't that a global requirement? Your client has to be active, non-blocking and have the socket/door combination open to accept communication as well. Why is this a problem? Those are the minimum requirements for everything, so why is it suddenly a problem with queues?
Blocking is completely acceptable, terminated and removed isn't.
It's all about the context where the code executes.
Well, they're not the same.
I guess I do not see the difference, sorry.
Yes. I'm not sure I can explain it.
I'll probably just have to build and use it, no matter what anyone else thinks.
Then again, that is the problem: people don't like change, so why would they use it?