The last community-wide document of which we are aware was Stockman's RFC 1404, ``A model for common operational statistics''. Since that time the Internet environment has changed considerably, as have the underlying technologies for many service providers such as the NAPs. As a result these specific metrics are not wholly applicable to every service provider, but they serve as a valuable starting point. We emphasize that the exact metrics used are not a critical decision at this point, since refinements are inevitable as we benefit from experience with engineering the technologies; what is essential is that we start with some baseline and create a community facility for its access and development in the future.
From Stockman:
The metrics used in evaluating network traffic could be classified
into (at least) four major categories:
Some of these objects are part of standard SNMP MIBs; others are part of private MIBs; still others are not possible to retrieve at all due to technical limitations, e.g., measuring a short-term problematic network situation may only exacerbate it, or may take longer to perform than the problem persists. For example, counts of packets and bytes, for non-unicast and unicast traffic, for both input and output, are fairly standard SNMP variables. Less standard, but still often supported in private MIBs, are counts of packet discards, congestion events, interface resets, and other errors. Variables that are technically difficult to collect, due to the high-resolution polling required, include queue lengths and route changes. Although such variables would be useful for many research topics in Internet traffic characterization, operationally collected statistics will likely not be able to support them. For example, one characteristic of network workload is `burstiness', which reflects variance in traffic rate. Network behavioral patterns of burstiness are important for defining, evaluating, and verifying service specifications, but there is not yet agreement in the Internet community on the best metrics to define burstiness. Several researchers [21,22,6] have explored the failure of Poisson models to adequately characterize the burstiness of both local and wide area Internet traffic. This task relies critically on accurate packet arrival timestamps, and thus on tools adequate for tracing packet arrivals at high rates with accurate (microsecond) time granularity. Vendors may still find incentive in providing products that can perform such statistics collection, for customers that need fine-grained examination of workloads.
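As a concrete illustration of the routine case, the packet and byte counters mentioned above (e.g., ifInOctets in the Internet Standard MIB) are 32-bit counters that wrap at 2^32, so deriving a traffic rate from periodic polls must allow for wraparound between samples. The following sketch is illustrative only; the function and variable names are our own, not those of any real collector:

```python
# Sketch: deriving average traffic rates from periodic polls of
# SNMP-style 32-bit counters (e.g., ifInOctets), which wrap at 2**32.
# Names here are hypothetical, for illustration only.

COUNTER32_MAX = 2**32

def counter_delta(prev: int, curr: int, modulus: int = COUNTER32_MAX) -> int:
    """Difference between two counter readings, allowing one wraparound."""
    if curr >= prev:
        return curr - prev
    return modulus - prev + curr  # counter wrapped between polls

def rate_per_second(prev: int, curr: int, interval_s: float) -> float:
    """Average rate over the polling interval (packets/s or bytes/s)."""
    return counter_delta(prev, curr) / interval_s

# Example: 60-second polls of a unicast packet counter that wrapped:
# 2000 packets arrived in 60 s, despite curr < prev.
r = rate_per_second(prev=2**32 - 500, curr=1500, interval_s=60.0)
```

Note that a single wrap is the most that can be detected; if the counter wraps more than once within a polling interval, the computed delta silently undercounts, which is one reason short polling intervals matter on fast interfaces.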
The minimal set of metrics recommended for IP providers in Stockman were: packets and bytes in and out (unicast and non-unicast) of each interface, discards in and out of each interface, interface status, IP forwards per node, IP input discards per node, and system uptime. All of the recommended metrics were available in the Internet Standard MIB. The suggested polling frequency was 60 seconds for unicast packet and byte counters, and an unspecified multiple of 60 seconds for the others. Stockman also suggested aggregation periods for presenting the data by interval: over 24-hour, 1-month, and 1-year periods, aggregate by 15 minutes, 1 hour, and 1 day, respectively. Aggregation includes calculating and storing the average and maximum values for each period.
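The aggregation scheme above can be sketched directly: given 60-second samples spanning 24 hours, each 15-minute bucket (15 consecutive samples) is reduced to its average and maximum. This is a minimal illustration under those assumptions, with our own illustrative names:

```python
# Sketch of Stockman-style aggregation: reduce consecutive samples to
# (average, maximum) per bucket. For a 24-hour presentation interval
# with 60-second samples, a 15-minute bucket holds 15 samples.
# Function and variable names are illustrative.

def aggregate(samples, samples_per_bucket):
    """Return a list of (average, maximum) tuples, one per bucket."""
    out = []
    for i in range(0, len(samples), samples_per_bucket):
        bucket = samples[i:i + samples_per_bucket]
        out.append((sum(bucket) / len(bucket), max(bucket)))
    return out

# 24 hours of 60-second samples, aggregated by 15 minutes:
day = [float(i % 20) for i in range(24 * 60)]   # toy sample data
summary = aggregate(day, 15)                     # 96 buckets for the day
```

The same routine serves the longer presentation intervals by changing the bucket size: 60 samples per bucket for hourly aggregation over a month, 1440 per bucket for daily aggregation over a year.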