@kupferstecher
I have been looking through what was presented, and I am still not sure if I can use it.
I have data with an indeterminate number of columns and rows, from multiple sources.
Often these are hardware logs, software logs, or measurements taken over time.
Sometimes I have a time sync, sometimes I need to derive one.
Documentation from manufacturers often has holes, inaccuracies, mistakes, glitches, and what honestly feels like the occasional outright lie.
Currently, for small, known, or mostly static tables, I use a horribly limited FV app in 80 columns that recognizes types based on contents, then builds the desktop and components on the fly.
It is every bit as messy and delicate as it sounds.
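To make the "recognizes types based on contents" part concrete: a minimal sketch of that kind of heuristic might look like the following. This is purely illustrative, not the actual FV tool's logic, and the function name and rules are my own guesses:

```python
def infer_type(values: list[str]) -> str:
    """Classify a column as int, float, or text from its cell contents.
    Illustrative heuristic only; a real tool would also handle dates,
    empty cells, locale-specific separators, etc."""
    def is_int(s: str) -> bool:
        try:
            int(s)
            return True
        except ValueError:
            return False

    def is_float(s: str) -> bool:
        try:
            float(s)
            return True
        except ValueError:
            return False

    # Try the strictest type first, then fall through to text.
    if values and all(is_int(v) for v in values):
        return "int"
    if values and all(is_float(v) for v in values):
        return "float"
    return "text"
```

A widget builder could then map "int"/"float" columns to numeric fields and "text" columns to plain edit fields when constructing the desktop.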
My other system opens a network connection to a MySQL front end I wrote and injects data into tables as needed.
The front end is written in PHP and doubles as a WebSocket server.
When the data is updated, it notifies the connected web browser(s), and they react accordingly.
The browser(s) use the WebSocket to request table info, then build a page of scrolling lists based on those tables.
This lets me perform several analyses on the data in parallel.
Sometimes I leave old data up as a guide while I recompile the extraction tools, then run again with the updates.
It, too, is every bit as messy and delicate as it sounds.
It is also a memory and storage hog.
My hope was to combine the two by finally creating a dynamic GUI system without having to write it from scratch in GLUT or some other graphics toolkit.
If I need to write it from scratch, I am not worried about it; that is life.
I have more than one use for this however, and had hoped to combine some paid work research with a hobby.
Janus.