Background: I'm building an IP address application using VirtualStringTree / VirtualStringView. The tree can have nodes up to 24 levels deep, and I'm limiting the user to no more than 250K nodes (although I would like to expand this to 512K or 1M nodes if possible).
Inserts to the node list can range from one node to 250K nodes at a time. It is very likely that a subset of the inserted data will duplicate existing data nodes, especially as the data set grows.
Current process: I add the new nodes at level 0, flatten the entire tree so every node is at level 0, sort it, de-duplicate it, then rebuild the tree so all nodes have the proper parent-child relationships. For a large tree or a large set of insertions, this takes more than 10 minutes to complete.
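To make the question concrete, here is a language-agnostic sketch of that batch process (the real app is Delphi / Virtual TreeView; the `key` field and the `level_of` callback are assumptions for illustration, not my actual code):

```python
# Hypothetical sketch of the current flatten -> sort -> dedup -> rebuild cycle.
# 'key' is the unique ordering key; 'level_of' stands in for whatever logic
# derives a node's level from the rest of the data set.

def rebuild(existing, inserted, level_of):
    # 1. Flatten: treat every node, old and new, as a level-0 record.
    flat = existing + inserted
    # 2. Sort by the unique key that determines order (but not level).
    flat.sort(key=lambda node: node["key"])
    # 3. De-duplicate adjacent records with equal keys.
    deduped = []
    for node in flat:
        if not deduped or deduped[-1]["key"] != node["key"]:
            deduped.append(node)
    # 4. Rebuild: level depends on what other nodes exist, so it is
    #    recomputed for every node after the merge.
    for node in deduped:
        node["level"] = level_of(node, deduped)
    return deduped
```

Every insert pays the full O(n log n) sort plus a full rebuild, even when only a handful of nodes changed, which is where the 10+ minutes goes.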
I have a unique key that determines order, but it does not determine level. Level is determined by what other data exists and the level of a node may change as other nodes are added or deleted.
Is there a better, faster process I should use?
Should I process one node at a time instead: search for a duplicate; if one is found, handle it; if the node is unique, determine whether a portion of the tree needs to be rebuilt based on the new parent-child relationships?
Thoughts? Any experience along this line?