As already stated above, network management systems are usually used to monitor and log the behaviour of the network components, with the goal of detecting problems before the user does. The main objective of current network management is therefore to enable the network operator to react as quickly as possible to problems in the network.
Considering the network technologies used so far, this is quite a reasonable role for network management. If a high collision rate is detected on an Ethernet segment, the required action of the network operator is to split the problematic segment; there is no way for the NMS to act instead of the operator. This view of network management systems can be called a 'device-oriented view', as the focus lies on monitoring the correct and optimal operation of the device.
With ATM, a new question arises for network operators: how to balance different resource requests in order to find the best compromise between safe use of the network (e.g. by allocating peak rates) and efficient use of the bandwidth (e.g. by assuming that not all users will use their full bandwidth at once)?
The solution to this problem comes down to non-linear optimization (see ). Thus, simulations are performed to determine which traffic mixes and load situations the equipment can cope with while still guaranteeing the requested QoS (see , chapters 3, 5 and 7). But firstly, it is not possible to include all possible traffic classes, and secondly, in the real world these traffic classes usually do not behave exactly according to their statistical models. Implemented Connection Admission Control (CAC) schemes can therefore only approximate the optimal solution of the non-linear problem, and parameters are exposed to the network operator so that she can adjust the scheme to her real-world requirements.
In , one of these parameters is the 'robustness' of the system, which allows the operator to select the multiplexing scheme in a 'continuous range from a highly conservative to a very daring' way. It is obvious, however, that no optimal solution can be implemented, so an operator is needed to adjust the scheme. This adjustment of the parameters will probably not be static: at certain times a very safe resource allocation scheme may be necessary, while at other times a more generous strategy is possible.
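To make the role of such a parameter concrete, the following is a minimal sketch of a CAC admission test whose robustness parameter linearly interpolates each connection's reserved bandwidth between its mean rate (very daring) and its peak rate (highly conservative). The function names, the linear interpolation, and the example numbers are illustrative assumptions, not the scheme of the cited work.

```python
# Hypothetical CAC sketch: 'robustness' r in [0, 1] selects a point in the
# continuous range between mean-rate allocation (r = 0, very daring) and
# peak-rate allocation (r = 1, highly conservative). All names and values
# are illustrative assumptions.

def effective_rate(peak: float, mean: float, robustness: float) -> float:
    """Bandwidth reserved for one connection (cells/s)."""
    return mean + robustness * (peak - mean)

def admit(requests, link_capacity: float, robustness: float) -> bool:
    """Accept the set of (peak, mean) requests iff the summed
    effective rates fit onto the link."""
    needed = sum(effective_rate(p, m, robustness) for p, m in requests)
    return needed <= link_capacity

# 20 bursty sources: peak 10,000 cells/s, mean 2,000 cells/s.
requests = [(10_000, 2_000)] * 20
print(admit(requests, 100_000, robustness=1.0))  # peak allocation: rejected
print(admit(requests, 100_000, robustness=0.2))  # daring allocation: accepted
```

The same request set is rejected under conservative settings and accepted under daring ones, which is exactly the decision the operator tunes by hand.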
This is where self-regulating network management should come into play and relieve the operator of this ongoing adjustment of resource allocation parameters.
This cannot be achieved with the 'device view' of management described above. In that view, certain values of management parameters (e.g. link load, discarded cells on a connection) are considered critical, and the management station (usually the network operator) receives an alarm when these values are exceeded, which would lead her to correct the selected resource allocation scheme. But which values are critical? How should the presented parameters be changed? When is it possible to be 'generous', and when is it necessary to be conservative? In any case, it is probable that by the time we receive the alarms we already have problems and would tend towards a more conservative strategy.
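The device view described above amounts to little more than the following sketch: fixed thresholds on management parameters, with an alarm once a value is exceeded. The parameter names and threshold values are illustrative assumptions.

```python
# Minimal sketch of the 'device view' of management: static thresholds on
# polled management parameters; an alarm is raised only after a value is
# already exceeded. Names and limits are illustrative assumptions.

THRESHOLDS = {"link_load": 0.85, "discarded_cells": 100}

def check(sample: dict) -> list:
    """Return the list of alarms raised for one polling sample."""
    return [name for name, limit in THRESHOLDS.items()
            if sample.get(name, 0) > limit]

print(check({"link_load": 0.91, "discarded_cells": 40}))  # ['link_load']
```

Note that the alarm fires only once the link load has already exceeded its limit; nothing in this scheme tells the operator how the thresholds relate to optimal resource allocation, or how to adjust the CAC parameters in response.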
So, in attempting to relieve the operator of these decisions, these are the key questions that have to be answered.
These questions cannot be answered from the 'device view': the equipment itself does not tell us which values are critical with respect to optimal resource allocation.
For this reason, we now define a more abstract task for network management: to support the quality of an ATM connection for as long as it exists. We now ask -- and try to answer -- which circumstances disturb an ATM connection and what helps to prevent this. We therefore use a 'connection-oriented' view for the definition of management goals.
Before going into a more detailed analysis, certain assumptions regarding the possibilities for an NMS to influence resource allocation have to be discussed.