Merge branch 'master' into next
Signed-off-by: David Ahern <dsahern@gmail.com>
commit 3d9608b923
@@ -1,95 +0,0 @@
I. About the distribution tables

The table used for "synthesizing" the distribution is essentially a
scaled, translated, inverse to the cumulative distribution function.

Here's how to think about it: Let F() be the cumulative distribution
function for a probability distribution X.  We'll assume we've scaled
things so that X has mean 0 and standard deviation 1, though that's not
so important here.  Then:

	F(x) = P(X <= x) = \int_{-inf}^x f

where f is the probability density function.

F is monotonically increasing, so has an inverse function G, with range
0 to 1.  Here, G(t) = the x such that P(X <= x) = t.  (In general, G may
have singularities if X has point masses, i.e., points x such that
P(X = x) > 0.)

Now we create a tabular representation of G as follows: Choose some table
size N, and for the ith entry, put in G(i/N).  Let's call this table T.

The claim now is, I can create a (discrete) random variable Y whose
distribution has the same approximate "shape" as X, simply by letting
Y = T(U), where U is a discrete uniform random variable with range 1 to N.
To see this, it's enough to show that Y's cumulative distribution function
(let's call it H) is a discrete approximation to F.  But

	H(x) = P(Y <= x)
	     = (# of entries in T <= x) / N   -- as Y is chosen uniformly from T
	     = i/N, where i is the largest integer such that G(i/N) <= x
	     = i/N, where i is the largest integer such that i/N <= F(x)
	           -- since G and F are inverse functions (and F is increasing)
	     = floor(N*F(x))/N

as desired.
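The construction above can be sketched in a few lines.  This is a minimal
illustration (the variable names are my own, not from NIST Net), using the
standard normal distribution for F:

```python
import bisect
import random
from statistics import NormalDist

N = 4096
norm = NormalDist(mu=0.0, sigma=1.0)

# Table T: the ith entry holds G(i/N), where G is the inverse CDF of X.
# The arguments are clamped slightly inside (0, 1) to avoid +/-infinity
# at the endpoints.
T = [norm.inv_cdf(min(max(i / N, 1e-9), 1 - 1e-9)) for i in range(N + 1)]

def sample():
    """Y = T(U): pick a table entry uniformly at random."""
    return T[random.randrange(len(T))]

def H(x):
    """Empirical CDF of Y: fraction of table entries <= x.
    T is already sorted, since the inverse CDF is increasing."""
    return bisect.bisect_right(T, x) / len(T)
```

By the derivation above, H(x) should track floor(N*F(x))/N, i.e. agree with
the normal CDF to within about 1/N.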

II. How to create distribution tables (in theory)

How can we create this table in practice?  In some cases, F may have a
simple expression which allows evaluating its inverse directly.  The
Pareto distribution is one example of this.  In other cases, and
especially for matching an experimentally observed distribution, it's
easiest simply to create a table for F and "invert" it.  Here, we give
a concrete example, namely how the new "experimental" distribution was
created.
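As an aside on the direct-inversion case: for a Pareto distribution with
scale xm and shape alpha, F(x) = 1 - (xm/x)^alpha, which inverts in closed
form.  A sketch (my own code, not NIST Net's):

```python
import random

def pareto_inverse_cdf(t, xm=1.0, alpha=2.0):
    """G(t) for the Pareto distribution: solve t = 1 - (xm/x)**alpha for x."""
    return xm / (1.0 - t) ** (1.0 / alpha)

def pareto_sample(xm=1.0, alpha=2.0):
    # Inverse-transform sampling: feed a uniform variate through G.
    return pareto_inverse_cdf(random.random(), xm, alpha)
```

No table is needed here; G can be evaluated on the fly.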

1. Collect enough data points to characterize the distribution.  Here, I
collected 25,000 "ping" roundtrip times to a "distant" point
(time.nist.gov).  That's far more data than is really necessary, but it
was fairly painless to collect, so...

2. Normalize the data so that it has mean 0 and standard deviation 1.

3. Determine the cumulative distribution.  The code I wrote creates a
table covering the range -10 to +10, with granularity .00005.  Obviously,
this is absurdly over-precise, but since it's a one-time-only computation,
I figured it hardly mattered.

4. Invert the table: for each table entry F(x) = y, make the y*TABLESIZE
(here, 4096) entry be x*TABLEFACTOR (here, 8192).  This creates a table
for the ("normalized") inverse of size TABLESIZE, covering its domain 0
to 1 with granularity 1/TABLESIZE.  Note that even with the granularity
used in creating the table for F, it's possible that not all the entries
in the table for G will be filled in.  So, make a pass through the
inverse's table, filling in any missing entries by linear interpolation.
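Steps 2-4 can be sketched roughly as follows.  This is a simplified
illustration under my own assumptions (function names, and building the
inverse directly from sorted samples rather than from an intermediate
table for F); maketable's actual code differs:

```python
def normalize(data):
    """Step 2: shift and scale the samples to mean 0, standard deviation 1."""
    n = len(data)
    mu = sum(data) / n
    sigma = (sum((x - mu) ** 2 for x in data) / n) ** 0.5
    return [(x - mu) / sigma for x in data]

def invert_cdf(data, tablesize=4096, tablefactor=8192):
    """Steps 3-4: build a table for G = F^-1 from the samples.

    For each sample x with empirical F(x) = y, set entry y*tablesize to
    x*tablefactor, then fill any gaps by linear interpolation."""
    data = sorted(data)
    n = len(data)
    table = [None] * (tablesize + 1)
    for rank, x in enumerate(data):
        y = (rank + 1) / n                      # empirical F(x)
        table[int(y * tablesize)] = int(x * tablefactor)
    table[0] = int(data[0] * tablefactor)
    # Fill unfilled entries by linear interpolation between neighbours.
    i = 0
    while i <= tablesize:
        if table[i] is None:
            j = i
            while table[j] is None:
                j += 1
            lo, hi = table[i - 1], table[j]
            for k in range(i, j):
                table[k] = lo + (hi - lo) * (k - i + 1) // (j - i + 1)
            i = j
        i += 1
    return table
```

The interpolation pass is why the resulting table is always fully
populated and nondecreasing, however sparse the input data.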

III. How to create distribution tables (in practice)

If you want to do all this yourself, I've provided several tools to help:

1. maketable performs steps 2-4 above, and then generates the appropriate
header file.  So if you have your own time distribution, you can generate
the header simply by:

	maketable < time.values > header.h

2. As explained in the other README file, the somewhat sleazy way I have
of generating correlated values needs correction.  You can generate your
own correction tables by compiling makesigtable and makemutable with
your header file.  Check the Makefile to see how this is done.

3. Warning: maketable, makesigtable and especially makemutable do
enormous amounts of floating point arithmetic.  Don't try running
these on an old 486.  (NIST Net itself will run fine on such a
system, since in operation it just needs to do a few simple integral
calculations.  But getting there takes some work.)

4. The tables produced are all normalized for mean 0 and standard
deviation 1.  How do you know what values to use for real?  Here, I've
provided a simple "stats" utility.  Give it a series of floating point
values, and it will return their mean (mu), standard deviation (sigma),
and correlation coefficient (rho).  You can then plug these values
directly into NIST Net.
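The computation the stats utility performs can be sketched like this.
This is my own illustration, assuming rho means the lag-1 autocorrelation
of the series; the actual utility may differ in detail:

```python
def stats(values):
    """Return (mu, sigma, rho) for a sequence of floats.

    rho here is the lag-1 autocorrelation: how strongly each value is
    correlated with the one immediately before it."""
    n = len(values)
    mu = sum(values) / n
    sigma = (sum((x - mu) ** 2 for x in values) / n) ** 0.5
    num = sum((values[i] - mu) * (values[i - 1] - mu) for i in range(1, n))
    den = sum((x - mu) ** 2 for x in values)
    return mu, sigma, num / den
```

For a strictly increasing series the correlation is positive; for
independent noise it hovers near zero.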
@@ -1,80 +0,0 @@
lnstat - linux networking statistics
(C) 2004 Harald Welte <laforge@gnumonks.org>
======================================================================

This tool is a generalized and more feature-complete replacement for the
old 'rtstat' program.

In addition to routing cache statistics, it supports any kind of
statistics the linux kernel exports via a file in /proc/net/stat.  In a
stock 2.6.9 kernel, this is:

	per-protocol neighbour cache statistics (ipv4, ipv6, atm)
	routing cache statistics (ipv4)
	connection tracking statistics (ipv4)

Please note that lnstat will adapt to any additional statistics that
might be added to the kernel at some later point.
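This generality works because the /proc/net/stat files share one layout:
a header line of field names, followed by one line of hexadecimal
counters per CPU.  A minimal parsing sketch (my own code, with a made-up
sample resembling /proc/net/stat/arp_cache):

```python
def parse_proc_net_stat(text):
    """Parse a /proc/net/stat-style file: a header line of field names,
    then one line of hexadecimal counters per CPU."""
    lines = text.strip().splitlines()
    fields = lines[0].split()
    per_cpu = [[int(v, 16) for v in line.split()] for line in lines[1:]]
    return fields, per_cpu

# Made-up sample data, two CPUs:
sample = (
    "entries allocs destroys\n"
    "00000006 0000002a 00000000\n"
    "00000006 00000019 00000001\n"
)
fields, per_cpu = parse_proc_net_stat(sample)
```

lnstat itself does this in C, of course; the point is only that no
per-file knowledge is needed beyond the header line.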

I personally always like examples more than any reference documentation,
so I list the following examples.  If somebody wants to do a manpage,
feel free to send me a patch :)

EXAMPLES:

In order to get a list of supported statistics files, you can run

	lnstat -d

It will display something like

	/proc/net/stat/arp_cache:
		1: entries
		2: allocs
		3: destroys
	[...]
	/proc/net/stat/rt_cache:
		1: entries
		2: in_hit
		3: in_slow_tot

You can now select the files/keys you are interested in by something like

	lnstat -k arp_cache:entries,rt_cache:in_hit,arp_cache:destroys

	arp_cach|rt_cache|arp_cach|
	 entries|  in_hit|destroys|
	       6|       6|       0|
	       6|       0|       0|
	       6|       2|       0|

You can specify the interval (e.g. 10 seconds) by:

	lnstat -i 10

You can specify to use only one particular statistics file:

	lnstat -f ip_conntrack

You can specify individual field widths:

	lnstat -k arp_cache:entries,rt_cache:entries -w 20,8

You can specify not to print a header at all:

	lnstat -s 0

You can specify to print a header only at the start of the program:

	lnstat -s 1

You can specify to print a header at the start and every 20 lines:

	lnstat -s 20

You can specify the number of samples you want to take (e.g. 5):

	lnstat -c 5
@@ -1 +1 @@
-static const char SNAPSHOT[] = "190924";
+static const char SNAPSHOT[] = "191125";