[ixpmanager] SFLOW Under Reporting?
Ian Chilton
ian at lonap.net
Mon Jun 26 09:50:38 IST 2023
Hi Nick,
TPS is 0 for a while and then spikes up, as high as ~40,000 - obviously
when it's doing the flush.
I do have graphs of IOPS, which I think is the same thing. It's odd,
because nothing looks saturated or like it's struggling.
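For anyone wanting to reproduce the sampling, something like this
captures a per-second tps series that's easy to graph (a sketch rather
than our exact setup - it assumes GNU awk for strftime, and "vdb" is
our device name):

  # capture timestamped 1s tps samples for the sflow disk
  iostat -y -d vdb 1 | gawk '/^vdb/ { print strftime("%H:%M:%S"), $2; fflush() }' > tps.log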
It does seem to be hitting some kind of limit within the VM though: I
have since tested it on bare metal and it's noticeably better, and the
graph is smoother now that I compare the two, which would suggest the
bare-metal box isn't struggling. I tried some I/O-related tweaks on the
VM, but they didn't seem to help.
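(For reference, the sort of guest-side tweaks I mean, sketched here
rather than an exact record of what I ran:

  cat /sys/block/vdb/queue/scheduler           # check the current I/O scheduler
  echo none > /sys/block/vdb/queue/scheduler   # skip guest-side scheduling (blk-mq kernels)
  echo 1024 > /sys/block/vdb/queue/nr_requests # deepen the request queue

but as I say, nothing along these lines seemed to help.)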
Flush is taking ~15s on bare metal, down from ~30s on the VM.
Right now we're seeing 667G of exchange traffic with MRTG, 280G with
the original sflow setup, 317G with the new VM (plus the subinterfaces
fix) and 468G on bare metal. So it's still a long way under the actual
interface traffic, but a lot better.
The filesystem (XFS) is mounted with noatime,nodiratime.
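(As a sketch, the relevant fstab line looks roughly like this - device
and mountpoint are placeholders, and logbsize is just a commonly
suggested XFS tweak for write-heavy workloads rather than something
we've benchmarked:

  /dev/vdb1  /var/lib/rrds  xfs  noatime,nodiratime,logbsize=256k  0 2

Strictly speaking noatime already implies nodiratime on Linux, so the
latter is belt-and-braces.)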
Do you run in a VM or on bare metal, and with what specs?
Have you played with the flush and threads options on rrdcached, or do
you just run with the defaults?
Interestingly, if I increase the number of threads from 4 to 8, the
flush time goes up to ~60s. I guess that makes sense if the VM is I/O
bound, but I would have thought more threads would help on bare metal,
letting the script hand updates off quicker so they get written to disk
in the background. Changing the flush timeout and jitter options seems
to make no notable difference.
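For reference, the knobs I'm talking about map onto rrdcached flags
roughly like this (illustrative values and paths, not our production
config):

  # -t write threads, -w flush values older than this many seconds,
  # -z random 0..z second jitter to spread flushes, -f full-tree scan interval
  rrdcached -t 4 -w 300 -z 60 -f 3600 \
      -j /var/lib/rrdcached/journal \
      -l unix:/var/run/rrdcached/rrdcached.sock \
      -b /var/lib/rrds -B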
Thanks,
Ian
On 2023-06-25 20:50, Nick Hilliard (INEX) wrote:
> Ian Chilton wrote on 24/06/2023 16:27:
>
>> vdb 5866 14013 16371 0 1654383830 1932780751 0
>
> 5866 looks fairly high for the average tps since boot, but you need
> more granular output than this. You can get a time series sample of
> the 1s tps using e.g.
>
> # iostat --dec=0 -y vdb 1 | grep --line-buffered vdb
>
> I'd check out several hours of this - maybe throw it into a graph and
> see what's going on.
>
> Presumably the partition is mounted with performance tuning options,
> e.g. "noatime,delalloc"?
>
> We use FreeBSD for our sflow collector - the I/O performance was
> significantly better when we benchmarked it several years ago.
>
> Nick