Stupid me, just 28 works! Still no received data from "server".
Your code fails here for a few reasons. The port number 28 returns Permission denied. I used 1234 with ludp and it worked.
Also, the example uses the OnReceive callback to handle listening events. I don't work with classes and methods... I don't know how to use them.
repeat
  begin
    Sock.getmessage(tmp);
    writeln(Sock.getmessage(tmp));
    sleep(100);
  end;
until false;
You can't use ports below 1024 (or maybe below or equal, I can't remember) on a unix-like OS, including Linux, without root privileges. I'm not sure whether there's a corresponding restriction on Windows etc.
Why are you calling getmessage() twice per loop iteration? Get rid of the 1st one.
Or, use the OnReceive event, like Derek suggested.
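A corrected version of that loop might look like the sketch below, assuming Sock is an lnet TLUdp instance whose GetMessage stores the received text in its out parameter and returns the byte count (so the original writeln was printing a number, not the message):

```pascal
// Call GetMessage once per iteration and print what it stored in tmp.
repeat
  if Sock.GetMessage(tmp) > 0 then   // returns the number of bytes received
    WriteLn(tmp);                    // print the message itself, not the count
  Sleep(100);
until False;
```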
Quote: "I don't work with classes and methods... I don't know how to use them."
Can someone help me with this? I need a straightforward way (if possible) to receive UDP data.
If you don't want to learn Object Pascal, then why are you asking for help?
Quote: "If you don't want to learn Object Pascal, then why are you asking for help?"
Is it wrong to use plain Pascal instead of Object Pascal?
Considering that you're using lnet you are using Object Pascal. So if you want to use the components correctly you must use the mechanisms and concepts provided by Object Pascal (which includes event handlers).
Quote: "Considering that you're using lnet you are using Object Pascal. So if you want to use the components correctly you must use the mechanisms and concepts provided by Object Pascal (which includes event handlers)."
Can these handlers be used in my example without creating classes?
Quote: "Before switching to lnet, I was testing with BlckSock (synapse?)."
So why did you want to switch to LNet?
Quote: "Can these handlers be used in my example without creating classes?"
You can try to use a simple class with a class function that handles your data. Then you wouldn't need to create an instance but could use the @TDummy.MyReceive() function directly. Not sure if that would work. I prefer Synapse.
Quote: "So why did you want to switch to LNet?"
I read that lnet was asynchronous, that's why.
If you want asynchronous reading of sockets you will always end up with classes and/or threads. That would also be the case for Synapse, because by itself it is 'blocking' (the class is even called TBlockSocket in Synapse). You can make it asynchronous by using it in threads.
Use select on the socket to check if there is any data. But true asynchronous execution requires your application to be built around that. It's not something you can just "toggle on" using a library. Either you have threads that fire events, or you use something like AsyncNet (https://github.com/Warfley/AsyncNet/) which builds on STAX (https://github.com/Warfley/STAX/tree/master/src).
After reading the comments on this thread, it seems lnet is no-go for me. Shall I start a new thread about sockets?
The easiest way to receive data via UDP is simply using the base socket unit which is provided by the fpc without any additional packages:
Quote: "The easiest way to receive data via UDP is simply using the base socket unit which is provided by the fpc without any additional packages"
Because I think using sockets for small things (like a small UDP client or server) is way too complicated right now. But it definitely needs completion, as it just contains the basic socket functions.
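A minimal sketch of such a receiver, using only the Sockets unit shipped with FPC. The port number 1234 and buffer size are assumptions for illustration; error handling is kept to the bind call:

```pascal
program udprecv;
{$mode objfpc}{$H+}
uses Sockets;
var
  Sock: LongInt;
  Addr: TInetSockAddr;
  Buf: array[0..1023] of Char;
  Len: LongInt;
  S: string;
begin
  Sock := fpSocket(AF_INET, SOCK_DGRAM, 0);
  Addr.sin_family := AF_INET;
  Addr.sin_port := htons(1234);          // port to listen on (assumption)
  Addr.sin_addr.s_addr := INADDR_ANY;    // accept on any local interface
  if fpBind(Sock, @Addr, SizeOf(Addr)) <> 0 then
    WriteLn('bind failed, SocketError = ', SocketError)
  else
  begin
    Len := fpRecv(Sock, @Buf, SizeOf(Buf), 0);  // blocks until a datagram arrives
    if Len > 0 then
    begin
      SetString(S, @Buf[0], Len);   // copy exactly Len received bytes into a string
      WriteLn('Received: ', S);
    end;
  end;
  CloseSocket(Sock);
end.
```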
SocketError will give you more information when your fpbind() fails.
An invalid argument was supplied.
This error is returned if the socket s is already bound to an address.
The rather hackish way I'd do it would be something like
Quote: "Did you also try my Sender code I gave you?"
Quote: "SocketError will give you more information when your fpbind() fails."
This is the problem! It returns error 10022. The code is running ON THE "SERVER" and the IP of that "server" is 1.1.1.2. %) Also, no firewall.
Is the code you posted previously the exact code that results in this error?
Did you also try my same Sender code I gave you?
Quote: "The error is not the bind but the recvfrom, and the error here is because, thanks to some weird quirk of WinSock, recvfrom does not like it if the address is located on the stack (or unaligned memory in general)."
The code I gave didn't use fprecvfrom but fprecv.
The error is not the bind but the recvfrom, and the error here is because thanks to some weird quirk of WinSock, recvfrom does not like it if the address is located on the stack (or unaligned memory in general).
The quick solution to this is to put the address and length on the heap:
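A sketch of that workaround as a fragment, assuming Sock and Buf are set up as in the earlier receiver; the variable names are made up here, and the point is simply that the from-address and its length are allocated with New rather than living on the stack:

```pascal
var
  FromAddr: PInetSockAddr;   // heap-allocated, so WinSock is happy
  FromLen: PLongInt;
  Count: LongInt;
begin
  New(FromAddr);
  New(FromLen);
  FromLen^ := SizeOf(TInetSockAddr);
  Count := fpRecvFrom(Sock, @Buf, SizeOf(Buf), 0,
                      PSockAddr(FromAddr), PSockLen(FromLen));
  if Count > 0 then
    WriteLn('got ', Count, ' bytes from ', NetAddrToStr(FromAddr^.sin_addr));
  Dispose(FromLen);
  Dispose(FromAddr);
end;
```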
Quote: "What would be the point if he could get this running using recv when at some point he will probably need recvfrom anyway?"
Because the from-address was already known. It's always the server.
a client loop that continuously sends a message (to an ip, port),
and a server loop that listens (receives) for any messages
Ps, I am noob at networking.
Side note: I am using 2 windows 7 vms, ips are 1.1.1.1 and 1.1.1.2, no firewall.
You can just use fprecv without the fromaddr like I showed in my example.
Did you try my exact code yet?
Quote: "Because the from-address was already known. It's always the server."
At least one of us has a misunderstanding here, because I understood it as if he is currently writing the server, and the server probably needs to use recvfrom to identify the client.
Quote: "Besides that... If my code also generates a 10022 error there is something else wrong and it has nothing to do with recvfrom. If it works we can expand it with recvfrom if needed."
I could reproduce this error on my machine, and as I already stated, the problem with stack pointers for the memory is something that has been known in the winapi for quite some time (and took me many hours to find out when I first encountered it).
Received: 11:46:49 192.168.2.11
Received: 11:46:49 192.168.2.11
Received: 11:46:49 192.168.2.11
Received: 11:46:49 192.168.2.11
Received: 11:46:50 192.168.2.11
Received: 11:46:50 192.168.2.11
Received: 11:46:50 192.168.2.11
Received: 11:46:50 192.168.2.11
Received: 11:46:50 192.168.2.11
Received: 11:46:50 192.168.2.11
Received: 11:46:50 192.168.2.11
Received: 11:46:51 192.168.2.11
Received: 11:46:51 192.168.2.11
Received: 11:46:51 192.168.2.11
Received: 11:46:51 192.168.2.11
Received: 11:46:51 192.168.2.11
Received: 11:46:51 192.168.2.11
Received: 11:46:51 192.168.2.11
Received: 11:46:51 192.168.2.11
err: 0
11:56:52
Quote: "The problem 1 is that the server keeps receiving even after I stop the client."
Quote: "The client sends with sleep(10) and the server receives (YES, it's working!!!) with sleep(100)."
1) UDP also has a buffer, so it's usual you would end up with additional messages if your sender is faster than the receiver.
The problem 1 is that the server keeps receiving even after I stop the client, and 2, the received messages stop at 10 (it displays only 1, and for 11+ displays nothing). The +5 was just a check, no other reason.
You need to do a SetLength(buff, UDPPackLen); before the fprecvfrom to make room for the string AND a SetLength(buff, MessageSize); to cut the buffer/string to the correct received size after the fprecvfrom.
Quote: "What do you mean?"
The client sends fast and the server receives more slowly (it seems slower because of sleep(100)); as rvk said, it's a buffer.
2) You have a SetLength(buff, MessageSize); on line 45. But MessageSize is the result of the previous fprecvfrom !!!
So the MessageSize could be 0. You need to do a SetLength(buff, UDPPackLen); before the fprecvfrom to make room for the string AND a SetLength(buff, MessageSize); to cut the buffer/string to the correct received size after the fprecvfrom.
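As a fragment, assuming buff is an AnsiString, UDPPackLen is the maximum datagram size you expect, and the socket/address variables are set up as before, the fixed sequence would look roughly like:

```pascal
SetLength(buff, UDPPackLen);   // reserve room BEFORE receiving
MessageSize := fpRecvFrom(Sock, @buff[1], UDPPackLen, 0,
                          PSockAddr(FromAddr), PSockLen(FromLen));
if MessageSize > 0 then
  SetLength(buff, MessageSize) // trim to what was actually received
else
  buff := '';                  // nothing received (or an error)
```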
Quote: "The thing with the buffer, does it have anything to do with synchronous/asynchronous? I thought that with udp packets, if you don't receive them in time, you lose them."
No. Buffering is normal. I'm not sure how large the buffer for a udp socket is on Windows, but there are also some other buffers at play. The buffer in the network adapter, for example.
Quote: "The problem with the buffer is this: I wanted to make the client send message 1 to the server, then, when the server received message 1, send message 2 to the client, and again, when the client received message 2, send a new message 3 to the server, and so on. A ping-pong as fast as the connection (hops, latency) can be (2 vms using their dedicated virtual adapter should be quite fast)."
Then why are you using UDP and not a TCP connection?
Quote: "The problem 1 is that the server keeps receiving even after I stop the client, and 2, the received messages stop at 10 (it displays only 1, and for 11+ displays nothing). The +5 was just a check, no other reason."
That is the nature of UDP. UDP is connectionless, meaning you just receive datagrams, no matter from which client and what the client state is; it is also unreliable, meaning that data can get lost.
Quote: "Then why are you using UDP and not a TCP connection?"
I am going to try the ping-pong. I want to use udp instead of tcp because I want as low a lag as possible. What I do if a packet is lost is up to me: make a custom ack/resend or not.
Quote: "You can simulate it by sending a UDP message back and only sending the next message when you get the ACK (acknowledgement) back, but that will slow things down."
Does udp return an ack by itself?
Quote: "BTW. You increase the I after sending but before printing to screen, so you actually didn't send that last printed number."
AGAIN, correct!!!
If you want to know when the client disconnects, and you require reliable transmission, you shouldn't use UDP.
Quote: "Does udp return an ack by itself?"
No, that's the whole point of UDP.
Quote: "No, that's the whole point of UDP."
That's what I thought; I misinterpreted what you wrote before.
No return message and no guarantee someone receives the message.
Quote: "The upside is that you can also send a UDP message to the whole network (broadcast) without knowing any of their IPs. Receivers can answer that they received it (via a return UDP message or a TCP connection request)."
I am aware of that.
Quote: "Back to the buffer thing: let's say I want the packets to be dropped if the server does not receive them in time. How to do that?"
Send a timestamp with the UDP message from the sender and ignore it on the receiving end if it's too old. That would mean that both computers need to have exact time settings.
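A fragment sketching that idea; the NowMs helper, the '|' separator and the 200 ms threshold are all made up for illustration, and, as said above, this only works if both machines have synchronized clocks:

```pascal
uses SysUtils;

function NowMs: Int64;
begin
  // Wall-clock milliseconds; only comparable across machines with synced clocks.
  Result := Trunc(Now * MSecsPerDay);
end;

// Sender: prepend the timestamp to the payload.
Msg := IntToStr(NowMs) + '|' + Payload;

// Receiver: drop datagrams older than a chosen threshold (200 ms here).
Sep := Pos('|', Msg);
if NowMs - StrToInt64(Copy(Msg, 1, Sep - 1)) > 200 then
  Exit; // stale datagram, ignore it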
Quote: "A timestamp will add a small latency to the program. Receive them in time = receive when the datagram has arrived at the destination and the program reads it. I might have wrong assumptions on how things work. I thought that a udp datagram arrives, and if you read it fast, good; else the packet is gone/replaced by the next one. I set the server to sleep(1000), and I still received all the datagrams."
If you require timing guarantees so tight that the delay of checking a timestamp is crucial, then forget about working with the socket api altogether. You either need to use user-space networking, or write your code as a kernel module.
The upside is that you can also send an UDP message to the whole network (broadcast) without knowing any of their IPs. Receivers can answer that they received it (via return UDP message or TCP connection request).
There is more than just UDP and TCP, there are things in between
TCP is a reliable, ordered stream of data. UDP is for transmitting unreliable, unordered packets. There is also RDM (Reliably Delivered Messages), which sends acks and therefore has delivery guarantees but does not guarantee order, and SEQPACKET (Sequenced Packet), which also ensures ordering and, like TCP, is connection based, but unlike TCP it does not provide a stream of data, only single packets, and is therefore more lightweight. Depending on how many guarantees you need, and how much overhead they imply, these can be useful in-between options.
Quote: "If you require timing guarantees so tight that the delay of checking a timestamp is crucial, then forget about working with the socket api altogether. You either need to use user-space networking, or write your code as a kernel module."
This goes deep into the rabbit hole...
Also, you will probably run into scheduling problems, as you are not using a real-time operating system.
Quote: "The upside is that you can also send a UDP message to the whole network (broadcast) without knowing any of their IPs. Receivers can answer that they received it (via a return UDP message or a TCP connection request)."
With "whole network" I of course meant the internal network. For example, an internal messaging system.
Careful there. Some of this might be OS-specific, but generally speaking UDP broadcasts don't go through routers unless there's a special proxy or similar.
Quote: "A timestamp will add a small latency to the program."
This could be as little as 4 bytes, so it's not really a problem considering the size a UDP package has anyway.
Quote: "I thought that a udp datagram arrives, and if you read it fast, good; else the packet is gone/replaced by the next one. I set the server to sleep(1000), and I still received all the datagrams."
That's just how it works. A UDP package can even arrive seconds later depending on network circumstances, buffering and OS priority.
Quote: "I am going to try the ping-pong. I want to use udp instead of tcp because I want as low a lag as possible. What I do if a packet is lost is up to me: make a custom ack/resend or not."
True, but in that case you do need to handle receiving messages faster than the sender is sending them. You are not doing that.
That's just how it works. An UDP package can even arive seconds later depending on network circumstances, buffering and OS priority.
If that's unacceptable you can send a few bytes extra (it really doesn't take any more space in a package) or leave the concept of UDP behind.
True but in that case you do need to handle receiving messages faster than the sender is sending them. You are not doing that.
You can just flush the buffer after receiving a message. In that case you (might) lose messages, but you are more certain the messages you do get are recent.
Quote: "Of course the messages are very small, but unless there is something wrong with the code examples, it seems that udp is not behaving as udp %)"
No, it's not behaving like your notion of UDP.
Quote: "For the sake of the original post regarding lnet and the handlers: if someone can make an example without using classes, it would be welcome, for I know that you can write things in many different ways."
Like it was already said... you can't use lnet without classes.
Quote: "Asynchronous programming is a form of parallel programming that allows a unit of work to run separately from the primary application thread. When the work is complete, it notifies the main thread (as well as whether the work was completed or failed). There are numerous benefits to using it, such as improved application performance and enhanced responsiveness."
(https://stackify.com/when-to-use-asynchronous-programming)
But I'm curious. You say you want asynchronous communication... Do you know what that means?
Now you have a program with just one flow. Line by Line. You can't achieve asynchrony that way.
Quote: "(Note that this isn't real asynchronous communication though. You're just misusing the library to force synchronous communications.)"
I hope you mean "...misusing the library to force asynchronous communications..."
Quote: "Did you look at the example ludp.pp in \config\onlinepackagemanager\packages\lnet\examples\console\ludp ??"
Yes, but I find it "difficult" because of the classes. Your latest example seems quite a bit easier to understand and use (and more compact).
It has a complete server/client udp example with a TLUDPTest class.
Quote: "Yes, but I find it "difficult" because of the classes. Your latest example seems quite a bit easier to understand and use (and more compact)."
As your programs get more advanced you will eventually need to move on to procedures and classes.
Quote: "as this works (even if it might have problems that I don't see at this stage of the testing):"
Your first example works but is very dangerous. You have a pointer to some variable that doesn't exist anymore and can get overwritten at any time.
Quote: "Also, I tried various ways to get the host but had no success (it can be implemented inside the messages)"
Isn't the ip address of the sender in aSocket.PeerAddress in the OnReceive event?
Quote: "Isn't the ip address of the sender in aSocket.PeerAddress in the OnReceive event?"
I was trying to add it to the first of your examples.
Quote: "In the pointer test this *seems* to not be the case; even adding sleep it gives output."
I can guarantee that this will go wrong at some point.
Quote: "The solution is obvious, as you said, but less compact/neat (that's what I was trying to avoid). Is there any "control" over lifetime?"
No. If you define a local variable... that's just what it is... local. You shouldn't access it outside of that procedure.
Quote: "I just try to avoid the main "var" hell that I get every time I start a project."
That's why you can work with procedures and functions.
You are (again) correct. The local variable lives for one repeat loop or for loop; even if you copy-paste the lines multiple times, it then zeroes in my pointer test. %) I just try to avoid the main "var" hell that I get every time I start a project.
Quote: "That's why you can work with procedures and functions."
Very neat example! 8)
Look at my last example (added it in an edit) passing sock as function result.
You can put everything in the main loop in a separate procedure main and call it right after the main-begin. Then you have zero global variables.
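A minimal sketch of that pattern (the program and variable names here are made up):

```pascal
program NoGlobals;
{$mode objfpc}{$H+}

procedure Main;
var
  i: Integer;        // everything that would have been global lives here
begin
  for i := 1 to 3 do
    WriteLn('iteration ', i);
end;

begin
  Main;              // the program block itself declares no variables
end.
```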
Quote: "I think you need to explain what you mean by 'main "var" hell' in this context"
Many vars that are common to procedures/functions.
Quote: "I think it's worth throwing in here that Pascal's idea of local variables is shared by most languages, so please don't feel that the Pascal language (or the FPC implementation) is being obstructive."
Absolutely not!
There might possibly be a way to make a local variable externally visible by making it static and returning a pointer, but that would be excruciatingly bad practice so please don't ask me to explain further :-)
Global variables have an undeservedly poor reputation, but that is mainly because novice programmers tend to overuse them. In practical terms, even the Delphi and Lazarus IDEs use them heavily despite there being preferable workarounds.
Quote: "what could possibly go wrong?? :D"
That example should work fine.
Quote: "So sock := TLUDPL.Create will create a pointer to a TLUDPL instance."
Interesting! I never thought of it as a pointer.
I hope I explained this clearly enough.
Quote: "psock:=sock; does not work because sock is acting as a normal variable. Even casting it as a pointer, psock:=pointer(sock); is not working. Do I miss something here?"
What type is psock?
@rvk: yes, it's the same as in the example I posted here.
Before trying your other examples, is it possible to get the host (sender) address in the last example I posted?
Quote: "it's not working."
Ok. I'm not sure if it works with just the getmessage.
Quote: "With sleep(1) the program acts like it has sleep(500), for example"
How fast is the sender sending?
Quote: "In that case you need to show both sender and receiver code."
The program is both sender and receiver.
Quote: "Sleep(n) does not mean "sleep n milliseconds", it means "sleep at least n milliseconds". It basically tells the scheduler: take me off the CPU and don't reschedule me for at least this amount of time."
Correct.
Quote: "The thing is, the scheduler is slow, really slow, and takes at least 30 millis to reschedule."
It's not about the scheduler being slow (it isn't), it's about what else is happening in the system.
Quote: "So if you write Sleep(1) there is practically no difference to writing Sleep(30) (more like Sleep(50) or even more)."
That is not the case at all. There is a _big_ difference between Sleep(1), Sleep(30) and Sleep(50). This is really easy to test.
Quote: "So if your program has a loop that takes only a couple of millis to execute, and then there is a Sleep(1), the amount of time it goes to sleep will be orders of magnitude greater than your execution time, resulting in extremely low CPU usage."
It will be greater, usually about 30%, but definitely _not_ "orders of magnitude greater".
Quote: "The special case here is Sleep(0), because this means "sleep at least 0 milliseconds", which is interpreted by the OS as a yield, i.e. "if there are other processes waiting to get CPU time, give it to them, otherwise let me continue"."
That's what the documentation says, but the reality is very different. The easiest way for a process to hog the CPU is to be in a loop of Sleep(0) when suddenly no other processes in the system have any work to do. Sleep(0) is something that should _never_ be used. The documentation has been wrong since the days of NT4 and it's quite likely still wrong for Win11 (it's factually testable to be wrong up to and including Win7 SP1).
Quote: "So what happens here is that if there are no other processes waiting, you won't get off the CPU and therefore stay at 100% usage,"
And that happens _even_ if there are other processes waiting. The easiest way to mess Windows up is to have as many threads as there are cores running high-priority Sleep(0) loops. Don't take my word for it, _try it_!!
Quote: "It will be greater, usually about 30%, but definitely _not_ "orders of magnitude greater"."
Wrong!
Quote: "That's what the documentation says, but the reality is very different. The easiest way for a process to hog the CPU is to be in a loop of Sleep(0) when suddenly no other processes in the system have any work to do. Sleep(0) is something that should _never_ be used. The documentation has been wrong since the days of NT4 and it's quite likely still wrong for Win11 (it's factually testable to be wrong up to and including Win7 SP1)."
Wrong!
Quote: "The easiest way to mess Windows up is to have as many threads as there are cores running high-priority Sleep(0) loops."
No one was talking about high-priority processes, but just for fun, I set the priority of one of the sleep processes to high, and guess what: still 0% cpu utilization. You are just plainly wrong.
Quote: "Don't take my word for it, _try it_!!"
I don't take your word for it; I know that you are wrong. I did try it, and it shows exactly what I said.
Quote: "startTime := GetTickCount64;"
That's how Aristotle concluded that the heavier an object is, the faster it falls: by dropping a rock and a feather.
Quote: "you really think that the tick count gets updated every CPU clock tick? (just in case, the answer is _no_)"
No, but it is accurate enough in the range of several milliseconds. But sure, let's use a more accurate clock:
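A sketch of such a measurement using QueryPerformanceCounter (Windows-only; the 100-iteration count is arbitrary, and the actual number printed depends on timer resolution and system load, which is the whole point of the argument above):

```pascal
program SleepTiming;
{$mode objfpc}
uses Windows;
var
  Freq, T0, T1: Int64;
  i: Integer;
begin
  QueryPerformanceFrequency(Freq);   // ticks per second of the high-res clock
  QueryPerformanceCounter(T0);
  for i := 1 to 100 do
    Sleep(1);                        // nominally 100 ms in total
  QueryPerformanceCounter(T1);
  WriteLn('100 x Sleep(1) took ', (T1 - T0) * 1000 div Freq, ' ms');
end.
```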
Quote: "Here is a piece of advice for anyone who's reading"
Written by the person who was provably wrong about everything so far...
Quote: "18 milliseconds, a factor of 9, still far away from 30% and around an order of magnitude."
I see you like to measure things using a rubber band; got to admit, it's a good method to get whatever number you want.
Quote: "So I can't reproduce your results."
Use the little program I posted and you'll be able to reproduce the results. It's that simple.
Quote: "Use the little program I posted and you'll be able to reproduce the results. It's that simple."
Using your exact code, just wrapping some QueryPerformanceCounter around it to measure time:
Quote: "just wrapping some QueryPerformanceCounter around it to measure time:"
Obviously, you're messing things up somewhere, because you should get the same results I get.
Obviously, you're messing things up somewhere because you should get the same results I get.
Quote: "With sleep(0) or no sleep at all, it is very very very fast. Insert even the lowest possible, sleep(1), and the program runs very very very slow. %)"
I didn't really find it very very slow :)
I didn't really find it very very slow :)
Can you give an estimate in seconds how long 1000 or 10.000 ping pongs take with my last example?
Quote: "Using my last example (I currently work on this "iteration" as I get the grip of it), without sleep I get 10000 ping-pongs every ~4 seconds. Using epiktimer systemsleep(1) I get 10000 every ~210 seconds."
Ok, I see what you mean.
ip: 172.18.144.1
1 = send or 2 = listen? 2
ping from 192.168.2.11 0
1000 ping from 192.168.2.11 78
2000 ping from 192.168.2.11 78
3000 ping from 192.168.2.11 78
4000 ping from 192.168.2.11 62
5000 ping from 192.168.2.11 63
6000 ping from 192.168.2.11 62
7000 ping from 192.168.2.11 79
8000 ping from 192.168.2.11 93
9000 ping from 192.168.2.11 94
10000 ping from 192.168.2.11 78
11000 ping from 192.168.2.11 78
press enter to quit
ip: 172.18.144.1
1 = send or 2 = listen? 2
ping from 192.168.2.11 0
1000 ping from 192.168.2.11 78
2000 ping from 192.168.2.11 62
3000 ping from 192.168.2.11 63
4000 ping from 192.168.2.11 62
5000 ping from 192.168.2.11 78
6000 ping from 192.168.2.11 63
7000 ping from 192.168.2.11 62
8000 ping from 192.168.2.11 78
9000 ping from 192.168.2.11 63
10000 ping from 192.168.2.11 78
press enter to quit
ip: 172.18.144.1
1 = send or 2 = listen? 2
ping from 192.168.2.11 0
1000 ping from 192.168.2.11 29157
2000 ping from 192.168.2.11 29375
3000 ping from 192.168.2.11 29890
4000 ping from 192.168.2.11 30094
5000 ping from 192.168.2.11 30844
6000 ping from 192.168.2.11 30750
7000 ping from 192.168.2.11 31000
8000 ping from 192.168.2.11 30984
9000 ping from 192.168.2.11 30797
10000 ping from 192.168.2.11 30734
11000 ping from 192.168.2.11 30813
press enter to quit
Quote: "I'm sorry if reality doesn't conform to your expectations; maybe you are working on some ancient windows version, but these results are consistent with my experience for at least the past 10 years, as well as with my theoretical knowledge of the topic."
I don't think Win7 SP1 qualifies as "ancient", and it's not about my expectations, it's about how things work. Sleep(n) is obviously not exact, but it is much more precise than what you're getting. Neither my computer nor my version of Windows is "special", and it works quite nicely over here. Sleep(n) is within 30% of n.
Quote: "Is it possible to do something like a custom sleep that waits x number of cpu cycles,"
In user mode, Sleep(n) is the closest thing to that.
Quote: "or do something like the idle process of windows,"
The system idle process is not a real process.
Quote: "or wait for something without polling (something else consuming cpu cycles)??"
There sure is: WaitForSingleObject/WaitForMultipleObjects allows a process to wait until an object is signaled, and no cpu is consumed during the wait.
Quote: "(Forgive the vagueness..)"
No problem, you're forgiven. Your penance is two hello-worlds and 3 binary searches.
Quote from: prodingus on Today at 01:13:30 am: "sleep (tries to) measure time, regardless of cpu frequency, unless I am wrong. I am not asking to pause for x amount of time, but x amount of cpu cycles (don't know if it's possible though)."
Is it possible to do something like a custom sleep that waits x number of cpu cycles,
In user mode, Sleep(n) is the closest thing to that.
Quote from: prodingus on Today at 01:13:30 am: "I am aware that it's not a real process."
or do something like the idle proccess of windows,
The system idle process is not a real process.
Quote from: prodingus on Today at 01:13:30 am: "Any (easy to understand) reference on this? Is this, as Warfley said, similar to blocking sockets?"
or wait for something without polling (something else consuming cpu cycles)??
There sure is: WaitForSingleObject/WaitForMultipleObjects allows a process to wait until an object is signaled, and no cpu is consumed during the wait.
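A minimal sketch of that on Windows; the event name and the 1-second timeout are arbitrary choices, and in a real program some other thread would call SetEvent when there is work to do:

```pascal
program WaitDemo;
{$mode objfpc}
uses Windows;
var
  Ev: THandle;
begin
  Ev := CreateEvent(nil, False, False, nil);  // auto-reset event, initially unsignaled
  // ... another thread would call SetEvent(Ev) when data is ready ...
  case WaitForSingleObject(Ev, 1000) of       // blocks up to 1 s; no CPU burned while waiting
    WAIT_OBJECT_0: WriteLn('event signaled');
    WAIT_TIMEOUT:  WriteLn('timed out');
  end;
  CloseHandle(Ev);
end.
```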
Quote: "Sock.OnReceive := @TMyDummy.OnReceive; gives Error: Incompatible types: got "<class method type of procedure(TLSocket) of object;Register>" expected "<procedure variable type of procedure(TLSocket) of object;Register>""
If you use DELPHI mode you don't need to put @ before an event name to assign it to an event variable. In OBJFPC mode you DO need to put @ in front of the event procedure during assignment.
Quote: "sleep (tries to) measure time, regardless of cpu frequency, unless I am wrong. I am not asking to pause for x amount of time, but x amount of cpu cycles (don't know if it's possible though)."
You're not wrong. There is no way to pause/wait for a specifiable "x" number of CPU cycles.
Quote: "Any (easy to understand) reference on this?"
The MSDN documentation on it is reasonably easy to understand and there is even an example (in C, of course).
If you use DELPHI mode you don't need to put @ before an event name to assign it to an event variable. In OBJFPC mode you DO need to put @ in front of the event procedure during assignment.
You first need to examine... do you want or need DELPHI mode?
(You did put it in one of your examples, so I used it in mine and didn't put @ in front)
If you still have mode DELPHI at the top, remove it or remove the @.
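Side by side, the difference looks roughly like this (MyObj and OnReceive are placeholder names, assuming OnReceive is declared as an ordinary instance method matching the event's signature):

```pascal
// {$mode objfpc}: the @ operator is required when assigning a handler
Sock.OnReceive := @MyObj.OnReceive;

// {$mode delphi}: the bare method name is enough; here @ would instead
// take the address of the method pointer and cause a type error
Sock.OnReceive := MyObj.OnReceive;
```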
Quote: "it runs about the same as mine :o"
I tried it from a VM to the host on the same machine.
Quote: "I test them VM to VM in their own vnet. Have you compiled my code and run it?"
I'll try tomorrow (sleeping now ::) )
Quote: "Neither my computer nor my version of Windows is "special", and it works quite nicely over here. Sleep(n) is within 30% of n."
I don't know what's wrong with your windows 7 version. I tested it on my Windows 10 pc, and on a windows xp as well as a windows 2000 vm (I don't have Win 7 lying around so I can't test that), so unless there was something very special in Win7, this seems to be the default behavior of the NT kernel since its inception. Maybe there is something special about your version.
Quote: "or wait for something without polling (something else consuming cpu cycles)??"
This is what I mentioned earlier: use blocking calls. A blocking networking call will pause the process/thread until data arrives. I don't know much about lnet, so I don't know what exactly callaction does, but when you just call recv or recvfrom on a socket, the OS will put the process to sleep until some message has arrived, and then wake it up and return.
Quote: "I test them VM to VM in their own vnet. Have you compiled my code and run it?"
Your code, from same machine/host to same machine/host (running side by side):
ip: 172.18.144.1
1 = send or 2 = listen? 2
9999 ping from 192.168.2.11 *7297
19999 ping from 192.168.2.11 *735
29999 ping from 192.168.2.11 *765
39999 ping from 192.168.2.11 *735
49999 ping from 192.168.2.11 *734
59999 ping from 192.168.2.11 *719
69999 ping from 192.168.2.11 *703
79999 ping from 192.168.2.11 *734
press enter to quit
ip: 172.18.144.1
1 = send or 2 = listen? 2
9999 ping from 192.168.2.80 *8766
19999 ping from 192.168.2.80 *2156
29999 ping from 192.168.2.80 *2094
39999 ping from 192.168.2.80 *1906
49999 ping from 192.168.2.80 *2141
59999 ping from 192.168.2.80 *2156
69999 ping from 192.168.2.80 *2375
79999 ping from 192.168.2.80 *2203
press enter to quit
Quote: "It will depend heavily on the CPU load of the overall system, the tick quantum, the number of cores, the number of CPUs, the way the cache is disposed between CPUs and cores, and any hints the OS can glean from the chipset relating to the technology used to access physical memory (i.e. DIMM type and so on). All of those will be taken into account when the OS decides how to schedule ready and almost-ready threads."
The scheduler is usually something quite stable; for example, if you look at the linux kernel git, you can see that the main functionality of the scheduler wasn't really touched in 5-15 years. And modern-day schedulers all work quite similarly: they divide time into time slices of fixed length, and only the selection of the process differs. If you call sleep(n) with n>0, you forfeit the rest of your time slice, meaning that if you sleep at completely random positions in your program, your sleep will on average be at least half a time slice. In the examples above the sleep was not at random points: the program was basically nothing but sleeps, so as soon as it got rescheduled it went back to sleep after just a few operations, which is why the minimum sleeping time is around the size of one time slice.
In addition, it is not only unsafe to assume that the scheduler will behave consistently across Windows versions, but it's also unsafe to assume that two copies of Windows of the same apparent version but with a different installation history (e.g. one started off with a slightly older DVD but was then updated) will behave the same.
I test them VM to VM in their own vnet. Have you compiled my code and run it?As a recommendation: if you want to test how networking applications behave on a real network, I can recommend Containernet for Linux with Docker. There you can set up a number of Docker containers (which will then run your application) and define networking properties between them, e.g. what bandwidth, delay, jitter and packet loss should occur between them. Sadly it's not available for testing Windows software.
I don't know what's wrong with your Windows 7 version; I tested it on my Windows 10 PC, and on a Windows XP as well as a Windows 2000 VM (I don't have Win 7 lying around so I can't test that), so unless there was something very special in Win 7, this seems to be the default behavior of the NT kernel since its inception. Maybe there is something special in your version.There is nothing wrong with my Windows version but, since you insisted so much, I re-ran the tests (multiple times); here are my results:
If you are not getting the results I am getting then, the conclusion is, there is something not quite right in your Windows configuration. I assure you, there is absolutely nothing unusual in my Win7 SP1 installation. It's as plain vanilla as it gets.Just tested it on my laptop, which is the most freshly installed windows I have (as I usually only use linux on it), I never changed any os level configuration on this. Same results. I also reran the vm tests multiple times after restarting the vms
So I've now conducted tests on 4 machines, two virtual, two physical, across 4 different Windows versions, and all report the same results. It cannot be my configuration that is wrong; I always get the same consistent results across all configurations, and they are also consistent with theory.I wouldn't be so sure of that. Sleep's behavior is something I've been testing from Windows version to Windows version since, at least, Win2000 and it's been very consistent. The only times I get inconsistencies is when Windows is running in a VM and, after rebooting, Sleep works as expected (and as it should, I might add.)
I test them VM to VM in their own vnet. Have you compiled my code and run it?Your code from same machine/host to same machine/host (running side by side).Quoteip: 172.18.144.1
1 = send or 2 = listen? 2
9999 ping from 192.168.2.11 *7297
19999 ping from 192.168.2.11 *735
29999 ping from 192.168.2.11 *765
39999 ping from 192.168.2.11 *735
49999 ping from 192.168.2.11 *734
59999 ping from 192.168.2.11 *719
69999 ping from 192.168.2.11 *703
79999 ping from 192.168.2.11 *734
press enter to quit
From VM to VMQuoteip: 172.18.144.1
1 = send or 2 = listen? 2
9999 ping from 192.168.2.80 *8766
19999 ping from 192.168.2.80 *2156
29999 ping from 192.168.2.80 *2094
39999 ping from 192.168.2.80 *1906
49999 ping from 192.168.2.80 *2141
59999 ping from 192.168.2.80 *2156
69999 ping from 192.168.2.80 *2375
79999 ping from 192.168.2.80 *2203
press enter to quit
So, infrastructure also has something to do with it.
But did you notice that the CPU doesn't go beyond 25% without any delay or sleep in this code?
(Anyway... the sleep-thing is being discussed between these posts ;) )
So, infrastructure also has something to do with it.Don't assume that connections on the same machine work the same as connections across machines (or, to be more precise, on the same network interface vs. different network interfaces). Basically, when you send data from one network interface to itself, there is no reason for the OS to do all the networking; all the wrapping in IP packets and sending across the NIC is unnecessary. So it can easily be that the OS simply puts the messages directly into the buffer of the target application, circumventing the whole network interface.
OS is also part of the infrastructure.So, infrastructure also has something to do with it.Don't assume that connections on the same machine work the same as connections across machines (or, to be more precise, on the same network interface vs. different network interfaces).
Don't assume that connections on the same machine work the same as connections across machines (or, to be more precise, on the same network interface vs. different network interfaces). Basically, when you send data from one network interface to itself, there is no reason for the OS to do all the networking; all the wrapping in IP packets and sending across the NIC is unnecessary. So it can easily be that the OS simply puts the messages directly into the buffer of the target application, circumventing the whole network interface.
On my previous message (perhaps I was faster writing than thinking): is the full CPU/thread utilization caused by the repeating of pSock.CallAction?It could. I'm not sure how that function looks at the moment, but if it just checks for communication and doesn't yield processing power, then your program just takes all the CPU (it would effectively be a busy-wait loop).
I was under the assumption that VMs with their own vnet were going to have the lowest latency.How a VM works is basically: it provides a virtual network interface in the VM; when the VM writes to that virtual interface, the packets are read out by the virtualisation software and, in this configuration, transmitted to the virtual network interface that the virtualisation software creates on the host. From there the messages are handled by the OS, which sees the target as another virtual network interface, the one of the second VM, and sends them to it, which triggers the virtualization software to take these packets and put them into the virtual network interface of the second virtual machine, where you can read them out.
And about the CPU utilization with sleep(1): I know the sleep discussion went a bit off the rails, but the core message that I wanted to get across is that sleep(1) sleeps for several ms, so if your workload between the sleeps is very short (and handling a message is pretty much no effort at all), then the ratio between workload and sleep time, which determines the CPU load, will be extremely low. Rather than sleeping every iteration, try sleeping only after, say, 1000 iterations. This way you can balance the sleeping against the work time.
The question is, if you slow down the sender, does the CPU load go down?Running the "sender" on the first VM with sleep(1) and the "receiver" on the other VM, the receiver is still @ 100% CPU.
About the CPU utilization issues: I edited my last post with some more stuff about it, but you were so much faster in posting that you might not have seen it:
Quote
And about the CPU utilization with sleep(1): I know the sleep discussion went a bit off the rails, but the core message that I wanted to get across is that sleep(1) sleeps for several ms, so if your workload between the sleeps is very short (and handling a message is pretty much no effort at all), then the ratio between workload and sleep time, which determines the CPU load, will be extremely low. Rather than sleeping every iteration, try sleeping only after, say, 1000 iterations. This way you can balance the sleeping against the work time.
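The batching idea from the quote can be sketched as follows; PollOnce is a hypothetical stand-in for one cheap check of the socket, and BatchSize = 1000 is just the suggested starting point:

```pascal
program BatchedSleep;
{$mode objfpc}
uses
  SysUtils;

const
  BatchSize = 1000;  // tune: larger batches mean less sleeping, more CPU

var
  Iterations: Integer = 0;

// Hypothetical stand-in for "check the socket once"; a real loop
// would call something like CallAction or a non-blocking recv here.
procedure PollOnce;
begin
  Inc(Iterations);
end;

begin
  repeat
    PollOnce;
    // Sleep only once per BatchSize polls, so the multi-millisecond
    // cost of a single Sleep(1) is amortized over many cheap checks.
    if Iterations mod BatchSize = 0 then
      Sleep(1);
  until Iterations >= 5000;  // a real loop would run until told to quit
  WriteLn('polled ', Iterations, ' times');
end.
```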
I don't know much about lnet, but I guess that internally it will use blocking calls. So the reason why the CPU load is at 100% is that every time it is called there is data in the buffer.It seems that this is not the case.
I reran mine and your version on a laptop too. Same VM setups, exact copies (only one thread/CPU). That 25% of yours: are you on a quad core/thread? Your results vs mine are the same speed both on the laptop and the PC (PC Ryzen, laptop 2nd-gen i5).I'm on Windows 10 Pro 64 bit, an old Dell XPS8500 with Intel Core i7 3770, 3.4 GHz, 4 cores, 8 threads.
Also, on the blocking sockets thing, I get the feeling that this is a "wrong" way to reduce CPU usage; it's just me of course, and my personal taste when writing a program.It is the only correct way if you don't want to be at 100% CPU load. There are two ways to use sockets: blocking or non-blocking. Blocking sockets will reduce CPU usage as much as possible, as every time there is no data, the call will sleep until data arrives. When you build a sleep loop where you check if data is there and go to sleep if not, you are building the same thing, just with extra steps.
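For comparison, this is what the blocking approach looks like with FPC's plain Sockets unit, with no library and no sleep loop. To stay self-contained the sketch sends itself a datagram over the loopback interface so the blocking receive has something to return; the port number 1234 is taken from earlier in the thread and is otherwise arbitrary:

```pascal
program BlockingUdp;
{$mode objfpc}
uses
  Sockets;
var
  Sock: LongInt;
  Addr, From: TInetSockAddr;
  FromLen: TSockLen;
  Msg: string = 'hello';
  Buf: array[0..511] of Char;
  Got: Int64;
begin
  Sock := fpSocket(AF_INET, SOCK_DGRAM, 0);
  FillChar(Addr, SizeOf(Addr), 0);
  Addr.sin_family := AF_INET;
  Addr.sin_port := htons(1234);              // unprivileged port (>1024)
  Addr.sin_addr := StrToNetAddr('127.0.0.1');
  if fpBind(Sock, psockaddr(@Addr), SizeOf(Addr)) <> 0 then
    Halt(1);
  // Send ourselves a datagram so the blocking receive below returns.
  fpSendTo(Sock, @Msg[1], Length(Msg), 0, psockaddr(@Addr), SizeOf(Addr));
  FromLen := SizeOf(From);
  // fpRecvFrom blocks, using ~0% CPU, until a datagram arrives:
  // no polling loop and no Sleep() required.
  Got := fpRecvFrom(Sock, @Buf, SizeOf(Buf), 0, psockaddr(@From), @FromLen);
  WriteLn('received ', Got, ' bytes');
  CloseSocket(Sock);
end.
```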
PS: as a little rant on the side, I must say LNet is absolutely terrible: there is no documentation, the method names give absolutely no indication of what they are doing (e.g. CallAction; this doesn't call any actions, it handles events, so why isn't it named HandleEventQueue or something meaningful), and from reading the code I had absolutely no idea that it is either non-blocking or threaded. You look at this code and have no idea what it does.
Also, the same code you wrote using LNet could be written in probably half that code using the raw socket API. So you are using a library that has thousands of lines of code so that you can write more code... I thought the L stands for lightweight...
This is something I have observed with many networking libraries that try to make low-level protocols (i.e. TCP and UDP) easier; in the end they are just way more complicated than the raw socket API. This sort of design works well for high-level protocols like SMTP or HTTP, but by trying to make low-level protocols easier, they just made using them harder.
/rant over
It is random when and if the program would "hang"; actually I think the "hang" is an accumulation of sleep(1) from both sides.
QuoteIt is random when and if the program would "hang"; actually I think the "hang" is an accumulation of sleep(1) from both sides.
Scratch that, I wrote again before giving a second thought: If sleep accumulation was happening, then the cpu usage would drop, but it doesn't.
And about the CPU utlization with sleep(1), I know the sleep discussion went a bit of the rails, but the core message what I wanted to get across is, sleep(1) sleeps several MS,Not in a properly running Windows installation. That statement is simply incorrect.
In a Windows installation that is functioning as it should, the Sleep(n) measurements should be very similar to the ones I posted.Remind me again, on how many different machines and Windows versions have you tested this? Because as far as I remember, I tested 5 different configurations on different hardware, some VMs, some not, and all showed the opposite of what you were claiming...
There is nothing wrong with working with non-blocking sockets as long as you do it properly. Most of the servers do it. Yes, that requires deeper knowledge, but what else doesn't?Usually non-blocking sockets are used to serve many sockets on one thread, to have a more scalable server. The problem is that non-blocking polling still requires going over every socket to see if it has any data, i.e. O(n). For this, at least on Linux, you should rather use epoll, which is event based and therefore functions in O(1), and then use blocking calls afterwards. That said, if you really need every bit of performance, you use epoll with a timeout of 0 (i.e. non-blocking) in a loop to avoid context switches, and also set the sockets to non-blocking, because this optimises the usage of epoll on large buffers; but this is only necessary for extremely utilized servers. For everything else, blocking should be enough.
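As a hedged sketch of the epoll approach just described, here is a Linux-only FPC example using the Linux unit; the port 17654 is arbitrary and, to stay self-contained, the socket sends itself a datagram so the wait has an event to report:

```pascal
program EpollUdp;
{$mode objfpc}
uses
  ctypes, BaseUnix, Sockets, Linux;  // the Linux unit provides the epoll API
var
  Sock, EpFd, N: cint;
  Addr: TInetSockAddr;
  Ev: TEpoll_Event;
  Msg: string = 'ping';
  Buf: array[0..63] of Char;
begin
  Sock := fpSocket(AF_INET, SOCK_DGRAM, 0);
  FillChar(Addr, SizeOf(Addr), 0);
  Addr.sin_family := AF_INET;
  Addr.sin_port := htons(17654);             // arbitrary unprivileged port
  Addr.sin_addr := StrToNetAddr('127.0.0.1');
  fpBind(Sock, psockaddr(@Addr), SizeOf(Addr));

  // Register the socket once; readiness notification is then O(1),
  // instead of scanning every socket on each poll.
  EpFd := epoll_create(1);
  Ev.Events := EPOLLIN;
  Ev.Data.fd := Sock;
  epoll_ctl(EpFd, EPOLL_CTL_ADD, Sock, @Ev);

  // Send ourselves a datagram so the wait below has something to report.
  fpSendTo(Sock, @Msg[1], Length(Msg), 0, psockaddr(@Addr), SizeOf(Addr));

  // Block (up to 1000 ms here) until a registered fd becomes readable.
  N := epoll_wait(EpFd, @Ev, 1, 1000);
  if (N = 1) and (Ev.Data.fd = Sock) then
    WriteLn('readable, got ', fpRecv(Sock, @Buf, SizeOf(Buf), 0), ' bytes');
  fpClose(EpFd);
  CloseSocket(Sock);
end.
```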
And if you continue talking stuff without any evidence behind it, I will start calling you what you are, a liar. At some point my good faith is overI posted the numbers I get. That's beyond evidence, it's _fact_.
Like how are you, as someone who has never used LNet, supposed to know any of that? Even if I Ctrl+click on the identifier to look into CallAction, all I see is:That's because you didn't look any further.Where FEventer.CallAction is an empty virtual method.
procedure TLUdp.CallAction;
begin
  if Assigned(FEventer) then
    FEventer.CallAction;
end;
ILComponent.CallAction - Method to eventize the component
This method is used to "eventize" the activity in given component.
It ensures that all network events are noticed and acted upon.
Indy is not perfect, but it's at least easy to understand, with no ambiguous "CallAction" or "Disconnect" or other things whose names make zero sense in this context.The need for calling CallAction is because TS insisted on implementing this in a program without TApplication. I'm sure if you wanted to do this with Indy components you would have the same problem and would need to call some kind of event handler to make sure it could do its thing.
Also note that you need to only call CallAction on one of the protocols (or directly on the eventer) once per iteration (if you’re on non-visual lnet of course), otherwise you’re just calling the same thing multiple times (Protocol.CallAction calls Eventer.CallAction)https://lnet.wordpress.com/usage/handles-eventers-and-events/
I posted the numbers I get. That's beyond evidence, it's _fact_.I don't doubt that you are getting these results, I doubt that this is the default behavior. Because from what you have written, you only got these results consistently on one machine, got similar results, but not consistently, in a VM on that machine, and did not get these results in another (Win10) VM on that machine.
Yes, I know. I also wrote an app or two.QuoteThere is nothing wrong with working with non-blocking sockets as long as you do it properly. Most of the servers do it. Yes, that requires deeper knowledge, but what else doesn't?Usually non-blocking sockets are used to serve many sockets on one thread, to have a more scalable server.
*snip*
That's because you didn't look any further.I'm not saying it's impossible to find out, but if you have to dive deep into the source code of a library, it's a bad library.
I also took a look and saw FEventer.CallAction pointed to an abstract method. But that's because TLUDP doesn't implement that abstract method but another one (not behind my computer now so I can't check the name). If you follow the correct one you end up in an event which uses fpSelect (like y.ivanov already mentioned). Synapse and Lazarus sockets etc. work the same way (with the option of implementing several different methods of FEventer.CallAction).
I didn't look very hard for documentation but shooting down code just by looking at the code and not understanding it is wrong. In that case you can say the same thing about Lazarus and FPC on a low level (with their multiple use of inc files, which makes stepping through the source code that much harder than in Delphi). There is also very little documentation, and the way it is, it is fine there.Yes, I also find that the FPC and Lazarus documentation is quite lacking; in fact I think this is the biggest problem with it. Every time you have to look into the source is annoying and a failure of documentation, and every time it is more than a Ctrl+click away (because of virtual methods), it basically doesn't exist for some users. The gold standard of course is when the code speaks for itself (which of course is not always possible), and if it doesn't, there should be enough documentation to make up for it.
BTW The description in the docs for CallAction which explains the name.Then why isn't it named "HandleNetworkEvents" or "ReactToNetworkEvents", or anything that conveys this message? Sure, handling events involves calling some actions to handle those events, but this would be like saying "moving my eyes" instead of "reading". This reminds me of some very old projects of mine, where I was really lazy and called functions just "DoTheThing". It was good for a joke, and I knew the code base inside out so it wasn't a problem then, but coming back after a few years, I had no idea what was happening in the code.QuoteILComponent.CallAction - Method to eventize the component
This method is used to "eventize" the activity in given component.
It ensures that all network events are noticed and acted upon.
The need for calling CallAction is because TS insisted on implementing this in a program without TApplication. I'm sure if you wanted to do this with Indy components you would have the same problem and would need to call some kind of event-handler to make sure it could do its thing.In the general flow Indy is not directly comparable (I just brought it up because of the naming and structure, not necessarily because it does the same thing in the same workflow), because Indy handles non-blocking sockets differently (basically it does not try to be smart like LNet but behaves exactly as the sockets API would, i.e. the actual calls are non-blocking, and there is no magic CallAction function). If you use the event-based server component, it will execute the listening in a thread and then, if you want to handle the incoming events on the main thread (default behavior), you must check the main thread's event queue with CheckSynchronize. This is similar in workflow to CallAction, but there are two main differences: first, the event handling is completely done with the general RTL functionality, which is likely to be known already, rather than introducing something specific to that library. But more importantly, it's not named CallAction (I can't stress enough how bad this name is).
Edit:And the more you dive into the source, the more you do something that should not be required to use a library. Having to consult the manual/documentation is bad, and usually a sign of bad structure; having to consult the source is worse. I rant a lot about Indy, but just by trial and error it is usually quite easy to find out how to do things. It is not perfect (for example, to find out that the TUDPServer uses threads you also must look into the source), but at least the naming and structures make sense (e.g. UDP is not a connection).
And the more you dive into the source, the more you understand.
@prodingusI'm not TS but yes, it does work a lot better.
Did my suggestion solve your issue? BTW you should set the timeout for both sides, i.e. for the listener too, otherwise it will bump the CPU to 100%
That's why Indy would be considered a "bad library" too, according to the same standards (although you are correct in that it does things differently).That's because you didn't look any further.I'm not saying it's impossible to find out, but if you have to dive deep into the source code of a library, it's a bad library.
I also took a look and saw FEventer.CallAction pointed to an abstract method. But that's because TLUDP doesn't implement that abstract method but another one (not behind my computer now so I can't check the name). If you follow the correct one you end up in an event which uses fpSelect (like y.ivanov already mentioned). Synapse and Lazarus sockets etc. work the same way (with the option of implementing several different methods of FEventer.CallAction).
The CallAction method of all “Connections” is a method which “eventizes” the whole connection. It is valid only in non-visual lNet. Whenever you call this method, all sockets of given connection are checked for status updates (eg. if something can be received, if I can send…) and appropriate callbacks are fired. You need to call this method periodicly to assure functionality. You need to CallAction periodicly if you want to know if you can receive. (get)https://lnet.wordpress.com/
I've run into an issue with this recently: using such a server component on Linux works pretty well, but when I tried to port that daemon to Windows as a TServiceApplication I found that CheckSynchronize doesn't work as expected.
*snip*
If you use the event based Server component, it will execute the listening in a thread and then, if you want to handle the incomming events on the mainthread (default behavior) you must check the main threads eventQueue with CheckSynchronize. This is in workflow similar to CallAction, but there are two main differences, first the event handling is completely done with the general RTL functionality, that is likely to be known already, rather than introducing something specific to that library.
*snip*
I don't doubt that you are getting these results, I doubt that this is the default behavior. Because from what you have written tell you only got these results consistently on one machine, and got similar results, but not consistently on a vm on this machine, and did not get these results on another (win10) vm on this machine.You should measure your language a little more carefully. It also seems that you should learn a little about Windows and how an O/S scheduler works.
I tested 3 VMs as well as native on 2 different machines, and I always got the exact same results everywhere. So it is quite clear that your behavior cannot be the normal windows behavior, as you only get the results consistently on one specific setup, while I get the other results consistently on all setups.
And this is why I call you a liar: calling your observed behavior the default behavior, when you know that the other behavior can be observed consistently across many machines, both virtual and physical, means that you are saying something that you know cannot be true, which is by definition a lie.
As I expected.@prodingusI'm not TS but yes, it does work a lot better.
Did my suggestion solve your issue? BTW you should set the timeout for both sides, i.e. for the listener too, otherwise it will bump the CPU to 100%
Putting in Sock.Timeout := 10000; makes it go down to 50% cpu in my VM (1CPU) without any sleep-lines.
As far as VMs, as I hope you've been able to figure out, they _emulate_ hardware and no matter how good the emulation is, it's not perfect. I don't expect VMs to operate exactly the same as real hardware but, apparently, you do.Please stop talking; it is embarrassing how little you know about the subjects you talk about. Virtualization is not emulation: https://www.dell.com/en-us/blog/emulation-or-virtualization-what-s-the-difference/
[...]
That you don't seem to understand and that your non-VM machines are not working properly, is _your_ problem and, your problems don't make anyone a liar.
Please stop talking; it is embarrassing how little you know about the subjects you talk about. Virtualization is not emulation: https://www.dell.com/en-us/blog/emulation-or-virtualization-what-s-the-difference/
A virtual machine runs on your actual hardware, the processes run on your real CPU and use your real memory, it does not emulate the hardware as you claim. The fact that you get this simple difference wrong shows that you have absolutely no idea about computing at all. I'm sorry but everything you said in this thread so far is factually and provably wrong. You try to lecture me and don't even know the very basics of computing.
That's why Indy would be considered a "bad library" too, according to the same standards (although you are correct in that it does things differently).Yes and no. Indy gets away with a lot more because it is much better structured. With Indy I usually got most things I wanted by just creating a component, pressing Ctrl+Space to get code completion, and looking at the names of the functions and properties. For example, I had never created a UDP server with Indy before, and it took me like 10 minutes to learn it from scratch just by looking at the code completion, with no need to look at the documentation or the source.
Same goes for everything in Lazarus and FPC.
And here is the same description for FPC:
Executes a method call within the main thread.
Synchronize causes the call specified by AMethod to be executed using the main thread, thereby avoiding multithread conflicts. The AThread parameter associates the caller thread.
For static methods, you can associate AMethod with any thread using the AThread parameter. Also, you can use nil/NULL as AThread parameter if you do not need to know the information for the caller thread in the main thread.
In the current implementation, the Synchronize method can use associated thread information to wake-up the main thread on Windows platforms.
If you are unsure whether a method call is thread-safe, call it from within the Synchronize method to ensure that it executes in the main thread.
Execution of the current thread is suspended while the method executes in the main thread.
Warning: Do not call Synchronize from within the main thread. This can cause an infinite loop.
Note: You can also protect unsafe methods using critical sections or the multiread exclusive-write synchronizer.
An example of when you would want to use Synchronize is when you want to interact with either a VCL or a FireMonkey component. Use an in-place anonymous method to solve the problem of passing variables to the method you want to synchronize:
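The quoted documentation stops short of its example. As an illustration of the same mechanism in the situation discussed in this thread (a plain program without TApplication, where the main thread has to pump the queue with CheckSynchronize itself), here is a hedged FPC sketch; TWorker and the printed message are made up for illustration:

```pascal
program SyncDemo;
{$mode objfpc}
uses
  {$ifdef unix}cthreads,{$endif}  // threading support must come first on Unix
  Classes, SysUtils;

type
  TWorker = class(TThread)
  protected
    procedure Execute; override;
    procedure Report;  // will run in the main thread via Synchronize
  end;

var
  Handled: Boolean = False;

procedure TWorker.Report;
begin
  // Executed in the main thread's context, so no locking is needed here.
  Handled := True;
  WriteLn('event handled on main thread');
end;

procedure TWorker.Execute;
begin
  // Queues Report for the main thread and blocks until it has run.
  Synchronize(@Report);
end;

var
  W: TWorker;
begin
  W := TWorker.Create(False);
  // Without a TApplication event loop, the main thread must pump the
  // synchronization queue itself, comparable to calling CallAction in LNet.
  while not Handled do
    CheckSynchronize(10);  // wait up to 10 ms for queued calls
  W.WaitFor;
  W.Free;
end.
```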
That specific use of "virtual machine" is comparatively recent. From the time that IBM introduced VM up until some of the later 68Ks etc., it was common to refer to "virtualised hardware", and on the '386 there was "Virtual 8086 mode". At the same time, "virtual machine" was still synonymous with "interpreter", possibly augmented with JIT translation, as used by the P-System, the PARC implementation of Smalltalk and so on."Recently" in this context means just 30-40 years old. Of the 70 years of computing history, that's about half of the period. The current understanding of VM is closer to the introduction of the Personal Computer than it is to today.
It's only recently that people who are convinced that computing begins and ends with the PC have redefined "Virtual Machine" to be the same as "virtualised hardware", and even there, there are grey areas depending on precisely what happens when e.g. direct access to a network device is attempted.
... why is a UDP socket a connection and has methods like disconnect?I was under the impression Indy also has Connect and Disconnect for TIdUDPClient :D
Arguing something like this is a very cheap cop-out, and I won't let something like this count. I don't want to be lectured about how modern computers work by someone who 1. either doesn't know anything about computers, or 2. whose knowledge of computers is stuck 30 years in the past.
Maybe you are right and I shouldn't call him a liar, but at the very least he is trying to lecture me on things he clearly has no idea about
Maybe you are right and I shouldn't call him a liar,You had an epiphany, it might be worth marking the date on the calendar.
trying to lecture me on things he clearly has no idea aboutTrying to educate you obviously failed and, lecturing you apparently did too.
Trying to educate you obviously failed and, lecturing you apparently did too.Sorry, I don't need education from someone who clearly doesn't know what a VM is. You know, the truly sad thing is, before writing a post I usually try to double-check whether what I am writing is correct: I get my old university scripts and some books I have around, google for documentation or other information, and always try things out in code. Meanwhile you blast out things like "a VM is an emulator", which is shown wrong after just two seconds of googling.
(Clears throat) Please don't make things worse. I think this subthread could usefully wind down.You're right about that. I'm not making things better but, he really asked for it and, continues asking for it.
MarkMLl
Sorry, I don't need education from someone who clearly doesn't know what a VM is.Neither you nor anyone else for that matter.
You know the truely sad thing is, before writing a post, I usually try to double check if what I am writing is correct,In that case, I suggest you start triple checking because double checking isn't doing the trick.
Meanwhile you blast out things like a VM is an emulator, which is wrong after just two seconds of googling.There is a little problem with that belief of yours. The problem is this: have you tried running a VM inside a VM ?... just in case you haven't, current VM software can't do it. Any ideas why by any chance ? Can you explain why most VM software can't do that ?
I have a question. You were wrong with all of your statements in this thread, but let's focus on the last one about VMs and emulation, because there is no way you can argue that you weren't completely wrong there. Doesn't it bug you that you are so easily wrong on such things? Why can't you accept that someone else might know more about a topic than you?
Something like the claim that a VM is an emulator should never have happened if you had at least some rigorous approach to verifying your claims before making them.A VM emulates a machine; some of that emulation is done with hardware assistance from the CPU and some of it is done in software.
I'm just curious if you don't have any drive to at least try to be right, at least onceI've been right more than once and the screen shot I posted, which you requested, proves it. Look it up again if you need to.
There is a little problem with that belief of yours. The problem is this: have you tried running a VM inside a VM ?... just in case you haven't, current VM software can't do it. Any ideas why by any chance ? Can you explain why most VM software can't do that ?You must be kidding, right? This is possible; VMware's ESXi virtualizer has been able to do this for ages, VirtualBox is also easily capable of it, and even Microsoft's Hyper-V has been capable of it since 2016: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization
A VM emulates a machine, some of that emulation is done with hardware assistance from the CPU and some of it is done in software.No, it executes code on the real hardware in an emulated environment. And as the program measuring sleep time does not interface with the environment (i.e. network, external devices, etc.), it doesn't matter that those parts are emulated. It is the real NT kernel running on a real CPU using real memory.
I've been right more than once and the screen shot I posted, which you requested, proves it. Look it up again if you need to.Well said, in a post where you were again completely and utterly wrong, and could have found that out with just one second of googling.
No it executes code on the real hardware in an emulated environment.And what environment is it that it is emulating ?... it isn't emulating Windows because that has to be installed.
And what environment is it that it is emulating ?... it isn't emulating Windows because that has to be installed.Peripheral hardware such as the network adapter, disk, display, etc. You know, all the stuff that isn't part of the core computer. Why would I be talking about Windows? You must truly know nothing about computers if you consider this to be a virtual environment.
It would be good too because this thread isn't about your religious beliefs about the scheduler and/or VMs. That would be convenient, no more litter.All I'm doing is pointing out how wrong you are. I don't care that you know so little about computing, but the worst thing that could happen is that someone would read your posts and assume you are anything but a fool who doesn't know anything about what you are talking about. So every time you write something stupid, like the claim that you can't create a VM inside a VM, which is factually wrong, I need to point out how wrong you are.
Also it's funny that you are completely wrong with your gotcha questions so you just silently drop these points. Why can't you just accept that you were wrong, like you were with everything else in this thread?Of course... and the Democrats should admit Biden stole the U.S election and Ukraine attacked Russia.
Quote
So you say you weren't wrong with nested VMs? I'm curious how you deny reality for this case.
Well... a little googling brought this up:
Quote
Nested virtualization is not automatically offered as a feature, and this is also true for various third-party virtualizers. For example, while the VirtualBox virtualizer has existed for years, the ability to run VirtualBox inside VirtualBox using Intel CPUs was only offered as a feature in v6.1, released in 2020. [2] This demonstrates that extra code is required for this functionality, which also implies a greater attack surface.
Apparently, you are right on that one: as of 2020 (fairly recent) it is possible to run a VM in another VM (quite likely because CPU manufacturers have been adding circuitry to enable that). I seriously doubt that ability is available on CPUs five years or older.
Quote
Afaik VMware could already do this for longer, but the nested VM would not be accelerated. There are several virtualization-related extensions, and through the years more and more have become nestable.
My copy of VMware is from 2015 and it does not allow nesting, but it may also be because I'm running an i860.
Quote
Apparently, you are right on that one: as of 2020 (fairly recent) it is possible to run a VM in another VM (quite likely because CPU manufacturers have been adding circuitry to enable that). I seriously doubt that ability is available on CPUs five years or older.
As I have written, Windows has supported this feature in Hyper-V since 2016, so for six years, which is more than five years. So you are wrong... again.
Quote
You got one right... good for you, but that doesn't mean the VM isn't emulating functionality in software. Far from it.
So you say that because some things are emulated, a VM is an emulator. Guess what: on your normal Windows system you also have emulated environments, e.g. virtual network interfaces like the ones created by your VM host software. Or consider what a virtual filesystem like a network drive is. By that same logic, a native Windows installation is also just an emulator.
Quote
As far as VMs, as I hope you've been able to figure out, they _emulate_ hardware, and no matter how good the emulation is, it's not perfect. I don't expect VMs to operate exactly the same as real hardware but, apparently, you do.
In the context of a program that is only running on the CPU and does not interact with any peripheral hardware, nothing is emulated, simple as that. Therefore you are just dead wrong here.
Quote
My copy of VMware is from 2015 and it does not allow nesting, but it may also be because I'm running an i860.
I think you need ESXi virtual machines; I don't know if it supports Hyper-V nesting.
Quote
As I have written, Windows has supported this feature in Hyper-V since 2016, so for six years, which is more than five years. So you are wrong... again.
For someone who pretends to know so much, you sure make statements that are rather flimsy. It's _not_ just a matter of the OS, Windows or other; it's much more a matter of having virtualization supported in hardware, that is, by the CPU itself.
Quote
For someone who pretends to know so much, you sure make statements that are rather flimsy. It's _not_ just a matter of the OS, Windows or other; it's much more a matter of having virtualization supported in hardware, that is, by the CPU itself.
Have you read the link that I posted above? There are no hardware requirements. Any Intel CPU that is VT-x capable can do it. So you could even use older Intel CPUs, even the Intel Atom from 14 years ago. That's more than five.
Quote
The Sock.CallAction will call fpSelect(), and as long as there is a Sock.Timeout > 0 it will behave as expected.
Correct.
The default is Sock.Timeout = 0, and because of that the CPU utilization will be 100% unless a sleep(x) is added (but then everything will slow down).
Did my suggestion solve your issue? BTW, you should set the timeout for both sides, i.e. for the listener too, otherwise it will bump the CPU to 100%.
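To make the advice concrete, here is a minimal sketch of a plain (non-GUI) UDP listener with a non-zero timeout. It assumes lNet's TLUdp API as discussed in the thread; exact property and method names may differ between lNet versions, and port 1234 is just the example value used earlier.

```pascal
program udplisten;
{$mode objfpc}{$H+}
uses
  SysUtils, lNet; // from the lnet package

var
  Udp: TLUdp;
  msg: string;
begin
  Udp := TLUdp.Create(nil);
  try
    Udp.Timeout := 100;       // ms; with the default of 0, fpSelect polls and the CPU hits 100%
    if Udp.Listen(1234) then  // ports below 1024 need root on unix-like systems
      repeat
        Udp.CallAction;       // blocks in fpSelect for at most Timeout ms
        if Udp.GetMessage(msg) > 0 then
          WriteLn(msg);       // call GetMessage once per iteration, not twice
      until False
    else
      WriteLn('Listen failed');
  finally
    Udp.Free;
  end;
end.
```

With Timeout set, CallAction sleeps inside fpSelect instead of spinning, so no explicit sleep(x) is needed in the loop.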
Quote
Yes, I also find that the FPC and Lazarus documentation are quite lacking; in fact I think this is the biggest problem with it. Every time you have to look into the source is annoying and a failure of documentation, and every time it is more than a ctrl-click away (because of virtual methods), it basically doesn't exist for some users. The gold standard, of course, is if the code speaks for itself (which of course is not always possible), and if it doesn't, there should be enough documentation to resolve this.
AMEN to that too!
Quote
There are no hardware requirements. Any Intel CPU that is VT-x capable can do it.
No hardware requirements, except VT-x. Nice!
I for one would prefer that you didn't use insults like that in open discussion. It's not necessary, it lowers the tone of the forum, and it definitely doesn't help resolve any of OP's issues.
@y.ivanov
It doesn't work that way. It is a timeout. After that time the OS (fpSelect) will just give up waiting and control will be transferred back to the calling (your) process again. That is, CallAction will return with no data processed. In case data appears earlier, control is transferred back without further waiting.
Timeout did the trick! Now the CPU is at ~15-20%.
Quote
The Sock.CallAction will call fpSelect(), and as long as there is a Sock.Timeout > 0 it will behave as expected. The default is Sock.Timeout = 0, and because of that the CPU utilization will be 100% unless a sleep(x) is added (but then everything will slow down).
Quote
Did my suggestion solve your issue? BTW, you should set the timeout for both sides, i.e. for the listener too, otherwise it will bump the CPU to 100%.
Yes, but as posted here too, https://forum.lazarus.freepascal.org/index.php/topic,13053.0.html, every other value >1 should add more delay, but it doesn't! It's like I'm always getting the minimum delay.
*snip*
Quote
I would also like just a little bit lower tone, but agreed-to-disagree disagreements are useful for pointing things out. ;)
Agree to disagree gives undue credit to the other opinion, as if it were something that is merely disagreeable. So far 440bx has made one factually incorrect statement after another. This is no point of contention; this is something that needs to be pointed out. And when he tries to lecture me, even though he is grossly incorrect (and I have given counterexamples or links with additional information for every single claim he made), then I am going to ridicule him for that. Again, if anyone reads this, they shouldn't come away thinking "oh, this is a contentious issue", because it isn't. One is right; the other is making a fool of himself, bringing up point after point that is just completely and factually wrong.
Communications are not as simple as one might expect, because they inherently involve multiprocessing issues: there are at least two participants, and that is in fact multiprocessing, even when from one side it seems like it isn't.
Quote
I for one would prefer that you didn't use insults like that in open discussion. It's not necessary, it lowers the tone of the forum, and it definitely doesn't help resolve any of OP's issues.
I would also like just a little bit lower tone, but agreed-to-disagree disagreements are useful for pointing things out. ;)
As for the network thing, I will try to use something else, like Warfley's wrapper or raw sockets, but my time is limited. There are many things to learn; I wasn't expecting networking to be so demanding, and to my surprise sleep/delay seems to be only good for delaying, well, I don't know, message outputs (?), but surely not the main execution loop.
Quote
The only thing I give him is that the results he gets for the sleep duration on his machine are indeed interesting, and if he weren't so stubborn, demanding that this is the only correct behavior and that all my other machines are just broken if I don't get the same results, maybe one could find out why this is the case and why it behaves so differently than on other machines. Personally I think the attitude "my opinion is correct and everything that doesn't conform to it must be wrong" is just not something that brings anyone forward.
You couldn't have said it better!! ;D
Quote
It doesn't work that way. It is a timeout. After that time the OS (fpSelect) will just give up waiting and control will be transferred back to the calling (your) process again. That is, CallAction will return with no data processed. In case data appears earlier, control is transferred back without further waiting.
Then why does the amount have to be specified in the timeout()?
Quote
Then why does the amount have to be specified in the timeout()?
Because normally you would have more in your program flow than only the reading of communication: screen handling, a countdown label, other buttons and other tasks (sending messages, for example), etc.
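The fpSelect behavior being described can be sketched directly with FPC's BaseUnix/Sockets units. This is a standalone sketch of the mechanism, not lNet's actual implementation:

```pascal
uses
  BaseUnix, Sockets;

// Returns True as soon as data is readable on sock, or False if
// timeoutMs elapses first. Either way, control comes back to the caller.
function WaitForData(sock: cint; timeoutMs: Integer): Boolean;
var
  fds: TFDSet;
  tv: TTimeVal;
begin
  fpFD_ZERO(fds);
  fpFD_SET(sock, fds);
  tv.tv_sec := timeoutMs div 1000;
  tv.tv_usec := (timeoutMs mod 1000) * 1000;
  // fpSelect result: 0 = timeout expired with no data,
  // >0 = data is ready (a following recv will not block), -1 = error.
  Result := fpSelect(sock + 1, @fds, nil, nil, @tv) > 0;
end;
```

So the timeout is only an upper bound on the wait; arriving data wakes the process immediately, which is why a generous timeout does not slow down a busy connection.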
Ah, sorry, I forgot how ignorant you are. Of course I meant "no *additional* hardware requirements".
But, of course...
You're getting really pathetic.
Quote
ENOUGH, both of you.
I agree with you... more than enough.
Quote
But (!!) if I run at least one VM on the computer (checked on a Windows 10 notebook), the results are as @440bx wrote.
This is actually really interesting. I can't reproduce this locally with my VMs (I'm using VMware Player without Hyper-V). If you are using Microsoft's Hyper-V, it could be that Windows uses a different scheduling strategy when Hyper-V is enabled, while when I am using VMware with the vSphere hypervisor, Windows doesn't know about that (directly) and proceeds as usual. If I have some time I will try it with Hyper-V (I usually don't use Hyper-V because some VM software I use does not support it and won't work if Hyper-V is enabled), but as it requires multiple reboots to turn on and off, I can't do it right away.
After shutting down the VMs, the results go back to the values given by @Warfley.
Results in attachments.
Have a nice day.
C:\Users\Rik>Clockres.exe
Clockres v2.1 - Clock resolution display utility
Copyright (C) 2016 Mark Russinovich
Sysinternals
Maximum timer interval: 15.625 ms
Minimum timer interval: 0.500 ms
Current timer interval: 1.000 ms
ms : 1 repeat count 1000
Total time: 1.465s
ms : 8 repeat count 1000
Total time: 8.356s
ms : 16 repeat count 1000
Total time: 16.315s
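For reference, numbers in the "ms : N repeat count 1000 / Total time" format above come from a loop of this shape (a sketch; the actual test program in the thread may differ in details):

```pascal
program sleeptest;
{$mode objfpc}
uses
  SysUtils;

// Sleep for ms milliseconds, count times, and report the wall-clock total.
// At the 15.625 ms default timer resolution, 1000 x Sleep(1) takes ~15 s
// instead of ~1 s, which is the effect being measured here.
procedure MeasureSleep(ms, count: Integer);
var
  t0: QWord;
  i: Integer;
begin
  t0 := GetTickCount64;
  for i := 1 to count do
    Sleep(ms);
  WriteLn('ms : ', ms, ' repeat count ', count);
  WriteLn('Total time: ', (GetTickCount64 - t0) / 1000 : 0 : 3, 's');
end;

begin
  MeasureSleep(1, 1000);
  MeasureSleep(8, 1000);
  MeasureSleep(16, 1000);
end.
```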
Quote
I gather that Warfley has results ranging in 15, 15 and 30 seconds for 1000x 1, 8 and 16 ms sleep, and 440bx has much faster results even with the 1 and 8 ms tests. Is that correct?
That sounds right.
Quote
Is it possible, somehow, that the timer resolution which Sleep uses is switched to 15 ms (so every sleep is a minimum of 15 ms)?
As you have found out, the timer resolution can be changed, but I don't know under what circumstances Windows decides to change it.
Quote
C:\Users\Rik>Clockres.exe
Those results look as they should. That said, it looks like there are common situations where Windows uses a different resolution, resulting in values that vary significantly from those.
(MUCH better than the 15, 15 and 30 seconds you were getting)
Quote
ms : 1 repeat count 1000
Total time: 1.465s
ms : 8 repeat count 1000
Total time: 8.356s
ms : 16 repeat count 1000
Total time: 16.315s
Quote
So Windows changes the resolution for Hyper-V. I would guess that it is also dependent on the CPU model. I tested on my Threadripper and my i7 laptop (well, Threadripper is not really common, but the underlying architecture is a Ryzen 1st gen, which is rather common), so I get the "common" result of 16 ms.
Did you try my NtSetTimerResolution suggestion at the place where you got the long delays (outside of a VM)?
Maybe 440bx has some "exotic" CPU. This would also explain why the VMs get the same result (both on his side and on mine): as the VMs are running on the same CPU, it would be expected that the OS would choose the same timer interval there.
ms : 1 repeat count 1000
Total time: 1.890s
ms : 8 repeat count 1000
Total time: 8.581s
ms : 16 repeat count 1000
Total time: 16.752s
Quote
Prior to Windows 10, version 2004, this function affects a global Windows setting. For all processes, Windows uses the lowest value (that is, highest resolution) requested by any process. Starting with Windows 10, version 2004, this function no longer affects global timer resolution. For processes which call this function, Windows uses the lowest value (that is, highest resolution) requested by any process. For processes which have not called this function, Windows does not guarantee a higher resolution than the default system resolution.
So maybe 440bx is running a version prior to Win10 2004 and some other program has already set TimeBeginPeriod to 1. And because it used to be a global setting, the FPC test program benefits from that :D
Quote
Putting this in the example also works to make it faster.
Yes. Actually, the NtXXX functions should not be called at all; these are the internal kernel calls in Windows and are neither documented on MSDN nor guaranteed to be API-stable. Either ExSetTimerResolution (https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-exsettimerresolution) or TimeBeginPeriod (unit MMSystem, if one is searching for it) should be used, which then do the actual kernel call in the background.
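A sketch of the documented route via timeBeginPeriod, assuming FPC's MMSystem bindings match the Windows timeapi declarations. The key point is that every timeBeginPeriod must be balanced by a matching timeEndPeriod:

```pascal
program hiresleep;
{$mode objfpc}
uses
  SysUtils, MMSystem; // Windows-only; MMSystem wraps winmm.dll

begin
  // Request 1 ms timer resolution; TIMERR_NOERROR (0) means success.
  if timeBeginPeriod(1) = TIMERR_NOERROR then
  try
    Sleep(1); // now sleeps close to 1 ms instead of a full ~15.625 ms tick
  finally
    timeEndPeriod(1); // restore; requests must be balanced with the same value
  end;
end.
```

On Windows 10 2004 and later this affects mainly the calling process, per the documentation quoted above, so other programs no longer inherit the raised resolution.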
TimeBeginPeriod(1);
Quote
ms : 1 repeat count 1000
Total time: 1.890s
ms : 8 repeat count 1000
Total time: 8.581s
ms : 16 repeat count 1000
Total time: 16.752s
Now to find the permanent setting for this :P
Oh, and I was already wondering why this setting does not affect other processes :)
Quote
The system clock "ticks" at a constant rate. If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time.
Whenever I've paid attention to the interval, so far I have yet to see an instance where the thread would sleep for _less_ than the specified time. As a result, I take that claim from MS with a grain of salt (and just about everything else it says about Sleep).
Quote
If you are using Hyper-V VMs, then this could be the issue, because this is where I have the exact same behavior when starting a Hyper-V VM.
In my Hyper-V Win10 (checking version... 21H2, so later than 2004) I still have a long sleep(1) of 15 ms.
Maybe with ExSetTimerResolution (a thin wrapper around the NT call): https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-exsettimerresolution. If SetResolution is false, it should return the current resolution.
Quote
Ah, you are right, I confused them with the FunctionNameEx functions.
TimeGetDevCaps only shows the min and max value, not the current value.
The API for this is https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timegetdevcaps
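Querying the supported range might look like this. A sketch against FPC's MMSystem unit (Windows-only); as noted above, timeGetDevCaps reports only the min/max period, not the currently active one:

```pascal
program timercaps;
{$mode objfpc}
uses
  MMSystem; // wraps winmm.dll; Windows-only

var
  caps: TTimeCaps;
begin
  // Fills caps with the minimum and maximum supported timer periods in ms.
  if timeGetDevCaps(@caps, SizeOf(caps)) = TIMERR_NOERROR then
    WriteLn('Min ', caps.wPeriodMin, ' / Max ', caps.wPeriodMax);
end.
```

This matches the "Min 1 / Max 1000000" output posted below.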
Min 1 / Max 1000000