Charting Limits?
Posted: Tue Oct 17, 2017 8:25 pm
Hello Arction Support,
I have implemented the WinForm Charting component and seem to have hit some limits with my data.
It seems that the visible point count is the limiting factor in the product, is that correct?
For instance, when I take the zoom bar chart example and change the point count to 50,000,000, I get out of memory errors.
When I lower it to 30,000,000, the main chart does eventually display, but the top chart takes a very long time (perhaps because it uses an AreaSeries instead of a PointLineSeries?). Once it does display, there is a huge lag when trying to navigate a large number of visible points. So, is the visible point count the effective limiting factor in the charts?
Most of my charts have dozens of Y axes and series over several segments, with very good performance up to about 500,000 total visible points. Above that, the interface gets quite sluggish when panning and zooming.
Even with large point counts in the series, navigation performance is very good until the visible point count gets very high.
Above certain thresholds, out-of-memory conditions occur.
Presumably you tested many point-count scenarios during development, so it would be great to know just where the limits actually are.
I have large datasets to visualize: 22,000,000,000 points and growing. A few charts now have 30,000,000 points per chart (distributed over 20+ series) and are getting unusably slow to navigate as-is. (The charts with fewer than 3,000,000 visible points work fine.)
So, for these higher-point-count charts, is the solution to summarize or bin the points into a lower-density series when displaying large X ranges? Or is there another technique that is easier or better?
My points update once a second, and the customer wants to see a year's worth at a time (roughly 31.5 million points per series).
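For reference, the kind of binning I have in mind is a simple min/max decimation: keep only the minimum and maximum Y value of each bucket, so the rendered shape of the trace is preserved with at most two points per bucket. A rough sketch of the idea follows (in Python purely to illustrate the algorithm; the function and names are mine, not part of the charting API):

```python
def minmax_bin(xs, ys, bucket_count):
    """Downsample (xs, ys) to at most 2 * bucket_count points,
    keeping the min-Y and max-Y point of each bucket so that
    peaks and troughs survive the decimation."""
    n = len(xs)
    if n <= 2 * bucket_count:
        # Already sparse enough; return copies unchanged.
        return list(xs), list(ys)
    out_x, out_y = [], []
    step = n / bucket_count
    for b in range(bucket_count):
        lo = int(b * step)
        hi = min(int((b + 1) * step), n)
        if lo >= hi:
            continue
        seg = range(lo, hi)
        i_min = min(seg, key=lambda i: ys[i])
        i_max = max(seg, key=lambda i: ys[i])
        # Emit the extreme points in X order (they may coincide).
        for i in sorted({i_min, i_max}):
            out_x.append(xs[i])
            out_y.append(ys[i])
    return out_x, out_y
```

With a bucket count on the order of the chart's pixel width (say ~2,000), a 30,000,000-point series would reduce to only a few thousand rendered points per series at wide zoom levels; is that the approach you would recommend, or does the product offer something built in?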
Thanks,
Heather