In the line demo the debugger steps over the PrepareGraphPoints call immediately. The execution time cannot be extended by adding more data because the line demo is set up so that a click on the "Add" button adds more series, but every series has the same number of points (50,000).
Well, yeah, the total time measured is not "1x 600kpts" but "12x 50kpts", but as everything is nicely linear that doesn't matter too much.
To see the real effect of PrepareGraphPoints I am adding a dedicated demo which adds 500,000 data points to the same single series with each button click. I clicked it 4 times to have 2 million values in the series and set a breakpoint on the PrepareGraphPoints call in TLineSeries.Draw. Stepping over the breakpoint occurs instantly, but on the DrawSingleLineInStack line there is a noticeable delay - this is clear, because drawing requires many more operations than just copying data into a buffer.
Wait, am I getting this right? Are you saying it does not behave like this for you?
[YouTube screengrab]
If I completely comment out the DrawSingleLineInStack call, processing 2Mpts still takes 980ms, compared to 1990ms with actual pixel ops (or 6.2s with heaptrc).
Modify the line demo and comment out the lines in btnAddSeriesClick which set the AxisIndexX and AxisIndexY of the new series s, and you will get faster execution because GetGraphPoint is no longer called.
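For reference, a sketch of what that change looks like - the exact surrounding code in the demo's btnAddSeriesClick differs, and the index values here are made up:

procedure TForm1.btnAddSeriesClick(Sender: TObject);
var
  s: TLineSeries;
begin
  s := TLineSeries.Create(Chart1);
  // Leaving both axis indices at their default (-1) lets
  // PrepareGraphPoints take the fast path that bypasses GetGraphPoint:
  // s.AxisIndexX := 0;
  // s.AxisIndexY := 1;
  Chart1.AddSeries(s);
end;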
Yep, that gets the time split down from 80:20 in the first post to roughly 50:50 like here. But since I need multiple vertical axes with different (but aligned) transforms, that doesn't really help either.
Of course, the greatest impact on drawing speed is the fact that you want to draw hundreds of thousands of data points in the same chart. In time-critical applications, it is more appropriate to draw only the most recent values and let the others scroll out of the viewport.
Hm, switching out the component at runtime is not really an option, but I could/will have to scroll the extent manually during acquisition.
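Scrolling the extent manually could look roughly like this - a sketch using TChart.LogicalExtent; the 30-second window width and the FLatestTimestamp field are made-up examples:

// Called after each block of samples has been added.
procedure TForm1.ScrollViewport;
var
  ext: TDoubleRect;
begin
  ext := Chart1.LogicalExtent;
  ext.b.X := FLatestTimestamp;   // right edge follows the newest sample
  ext.a.X := ext.b.X - 30.0;     // keep a fixed 30-second window
  Chart1.LogicalExtent := ext;
end;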
The thing is, I'm not getting all that many values: 1kS/s for about 10 minutes (that's where the 600kpts come from), and they already come in blocks of about 500 from the hardware. But adding them takes almost no time; processing 500k points only to add 500 new ones does. So while I technically have around 500ms per refresh, most of the time is spent copying the same values over and over and over, until it gets to the point where it can't keep up.
Also, I was already wrapping all point adding in DisableRedrawing/BeginUpdate even before I realized how bad the performance gets for longer runs, simply because that's the usual pattern for lists that update something.
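For context, the wrapping pattern I mean, roughly - assuming a TListChartSource and blocks arriving from the hardware (AddBlock and its parameters are made up):

procedure TForm1.AddBlock(const AValues: array of Double; AStartTime, ADt: Double);
var
  i: Integer;
begin
  Chart1.DisableRedrawing;        // suppress repaints while adding
  ListChartSource1.BeginUpdate;   // suppress per-point notifications
  try
    for i := 0 to High(AValues) do
      ListChartSource1.Add(AStartTime + i * ADt, AValues[i]);
  finally
    ListChartSource1.EndUpdate;
    Chart1.EnableRedrawing;
    Chart1.Invalidate;            // one repaint for the whole block
  end;
end;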
Other advice for high-speed line series: do not show data point symbols, and use the default pen - pen styles other than psSolid, widths other than 1, or Cosmetic other than true are a guarantee for slow drawing, in particular on Windows; only changing the pen color is safe.
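In code, the safe settings amount to sticking with the defaults and touching only the color - a sketch:

with LineSeries1 do begin
  ShowPoints := false;        // no data point symbols
  LinePen.Style := psSolid;   // other styles are slow, esp. on Windows
  LinePen.Width := 1;
  LinePen.Cosmetic := true;
  LinePen.Color := clRed;     // changing only the color is safe
end;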
Thanks for pointing that out - I had already found that in the "Fast Line Series" section of the wiki. It doesn't do much here, as drawing isn't really the issue.
Interesting point about Windows, at least: GDI actually optimizes pixel fills pretty well if line segments end up not moving the pointer at all (i.e. neighboring points map to the same pixel). This can be tested with your demo here by replacing the random points with a constant value: 2Mpts then take 1.4s instead of 1.9s for the normal "every point matters" case. So for data that comes in with sufficiently little noise, naively emitting draw calls without any downsampling is not all that problematic.
And now I've had this post open so long that I tried something. What if PrepareGraphPoints only updated the points it knows need updating? (NB: I haven't implemented invalidation at all, and I'm not sure how to get notifications for a single changed point from the source.) Not perfect and probably buggy as hell, but it does give a 3x speedup of the expensive part.
The graph is from the test project, modified to add 50k points per step from a timer. t_paint is the first paint, t_repaint the second, and t_overhead their difference, i.e. how much time is spent on preparing the new points.
Do you think that sort of idea is worth following?
procedure TBasicPointSeries.PrepareGraphPoints(
  const AExtent: TDoubleRect; AFilterByExtent: Boolean);

  procedure UpdateRange(ALo, AUp: Integer);
  var
    i: Integer;
  begin
    if (AxisIndexX < 0) and (AxisIndexY < 0) then begin
      // Optimization: bypass transformations in the default case.
      if Source.XCount > 0 then
        for i := ALo to AUp do
          with Source[i]^ do
            FGraphPoints[i - FLoBound] := DoublePoint(X, Y)
      else
        for i := ALo to AUp do
          with Source[i]^ do
            FGraphPoints[i - FLoBound] := DoublePoint(i, Y);
    end else
      for i := ALo to AUp do
        FGraphPoints[i - FLoBound] := GetGraphPoint(i);
  end;

var
  newCount, oldCount, shift: Integer;
begin
  FindExtentInterval(AExtent, AFilterByExtent);
  newCount := Max(FUpBound - FLoBound + 1, 0);
  // Grow now but shrink only at the end, so the salvaged elements
  // are still valid while they are being moved.
  if newCount > Length(FGraphPoints) then
    SetLength(FGraphPoints, newCount);
  // Salvage as many points as possible.
  if (newCount > 0) and (FGPLoBound >= 0) and (FGPUpBound >= FGPLoBound) then begin
    if (FLoBound > FGPUpBound) or (FUpBound < FGPLoBound) then
      // No overlap at all.
      UpdateRange(FLoBound, FUpBound)
    else begin
      oldCount := FGPUpBound - FGPLoBound + 1;
      if FLoBound > FGPLoBound then begin
        // Window moved right:
        // old: [0 1 2 3 4 5 6 7]8 9
        // new: 0 1[2 3 4 5 6 7 8]9
        shift := FLoBound - FGPLoBound;
        Move(FGraphPoints[shift], FGraphPoints[0],
          SizeOf(TDoublePoint) * Min(oldCount - shift, newCount));
        UpdateRange(FGPUpBound + 1, FUpBound);
      end else
      if FLoBound < FGPLoBound then begin
        // Window moved left:
        // old: 0 1[2 3 4 5 6 7]8 9
        // new: [0 1 2 3 4 5 6 7]8 9
        shift := FGPLoBound - FLoBound;
        // The count must not run past the new end of the array.
        Move(FGraphPoints[0], FGraphPoints[shift],
          SizeOf(TDoublePoint) * Min(oldCount, newCount - shift));
        UpdateRange(FLoBound, FGPLoBound - 1);
        UpdateRange(FGPUpBound + 1, FUpBound);
      end else begin
        // Same lower bound, window only grew or shrank at the top:
        // old: 0[1 2 3 4 5 6]7 8 9
        // new: 0[1 2 3 4 5 6 7 8]9
        UpdateRange(FGPUpBound + 1, FUpBound);
      end;
    end;
  end else
    UpdateRange(FLoBound, FUpBound);
  SetLength(FGraphPoints, newCount);
  FGPLoBound := FLoBound;
  FGPUpBound := FUpBound;
end;