# Events

## NDSS Symposium 2003

### The 10th Annual Network and Distributed System Security Symposium

Catamaran Resort Hotel, San Diego, California

6-7 February 2003: Symposium
5 February 2003: Pre-Conference Tutorials

Patron Sponsor: National Security Agency

#### List of Accepted Papers


Fighting Spam by Encapsulating Policy in Email Addresses

Author:
John Ioannidis
AT&T Labs -- Research

Abstract:

Moderately Hard, Memory-bound Functions

Authors:
Computer Science Dept, University of California

Mike Burrows, Mark Manasse, and Ted Wobber
Microsoft Research

Abstract:

A resource may be abused if its users incur little or no cost. For example, e-mail abuse is rampant because sending an e-mail has negligible cost for the sender. It has been suggested that such abuse may be discouraged by introducing an artificial cost in the form of a moderately expensive computation. Thus, the sender of an e-mail might be required to pay by computing for a few seconds before the e-mail is accepted. Unfortunately, because of sharp disparities across computer systems, this approach may be ineffective against malicious users with high-end systems, prohibitively slow for legitimate users with low-end systems, or both. Starting from this observation, we research moderately hard functions that most recent systems will evaluate at about the same speed. For this purpose, we rely on memory-bound computations. We describe and analyze a family of moderately hard, memory-bound functions, and we explain how to use them for protecting against abuses.
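
As a rough illustration of the memory-bound idea (a toy sketch, not the authors' construction), the cost of a data-dependent walk through a large table is dominated by memory latency rather than CPU speed, so fast and slow machines pay roughly the same price:

```python
import hashlib

def make_table(seed: bytes, size: int) -> list[int]:
    """Public pseudo-random table; a real scheme sizes it to exceed CPU caches."""
    table, h = [], seed
    for i in range(size):
        h = hashlib.sha256(h + i.to_bytes(4, "big")).digest()
        table.append(int.from_bytes(h[:4], "big") % size)
    return table

def walk(table: list[int], challenge: int, steps: int) -> int:
    """Data-dependent walk: each lookup depends on the previous one, so the
    chain of cache-missing memory accesses dominates the cost."""
    pos, acc = challenge % len(table), 0
    for _ in range(steps):
        pos = table[pos]                      # the memory-bound step
        acc = (acc * 31 + pos) & 0xFFFFFFFF
    return acc

# The sender pays by computing the walk; the receiver verifies by recomputing.
table = make_table(b"public-seed", 1 << 12)   # tiny for illustration
proof = walk(table, challenge=12345, steps=50_000)
```

In a real deployment the table would be tens of megabytes, large enough that even a high-end CPU cannot keep it in cache.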

Secure IP Telephony using Multi-Layered Protection

Authors:
Brennen Reynolds
Department of Electrical and Computer Engineering University of California, Davis

Dipak Ghosal
Department of Computer Science University of California, Davis

Abstract:

This paper presents the design and analysis of a multi-layer protection scheme against denial-of-service (DoS) attacks in IP-telephony-enabled enterprise networks. While there are many types of DoS attacks, we focus on flood-based attacks using application layer and transport layer signaling messages in IP telephony. We design sensors to detect and control these types of attacks and consider different locations for these sensors in the enterprise network. The algorithm for detecting these attacks is based on the well-established non-parametric cumulative sum method. The response to the attack uses standard protocol features of IP telephony to control the number of incoming application and transport layer setup requests. We consider different recovery algorithms and compare their performance using our emulation toolkit. Our results show that the detection algorithm can quickly detect both transport and application layer attacks and is robust against various types of attacks. We also show that with proper choice of sensor parameters, the detection algorithm is effective over a wide range of call volumes.
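
The detection step can be sketched with a standard non-parametric CUSUM statistic (parameter values here are illustrative, not the paper's):

```python
def cusum_detect(samples, baseline, drift, threshold):
    """Non-parametric CUSUM: accumulate positive deviations of the observed
    rate from its baseline; a sustained flood drives the statistic up."""
    s = 0.0
    for n, x in enumerate(samples):
        s = max(0.0, s + (x - baseline - drift))
        if s > threshold:
            return n              # sample index at which the alarm fires
    return -1                     # no attack detected

# Normal call-setup rates per interval, followed by a signaling flood.
traffic = [10, 12, 9, 11, 10] + [60] * 5
alarm_at = cusum_detect(traffic, baseline=11, drift=2, threshold=100)
```

The drift term absorbs normal fluctuation, so the statistic stays near zero until the flood begins; the threshold trades detection delay against false alarms.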

Efficient Security Mechanisms for Routing Protocols

Authors:
Carnegie Mellon University

David B. Johnson
Rice University

Abstract:

As our economy and critical infrastructure increasingly rely on the Internet, securing routing protocols becomes of critical importance. In this paper, we present four new mechanisms as tools for securing distance vector and path vector routing protocols. For securing distance vector protocols, our hash tree chain mechanism forces a router to increase the distance (metric) when forwarding a routing table entry. To provide authentication of a received routing update in bounded time, we present a new mechanism, similar to hash chains, that we call tree-authenticated one-way chains. For cases in which the maximum metric is large, we present skiplists, which provide lower initial computation cost and more efficient element verification; this mechanism is based on a new cryptographic mechanism, called MW-chains, which we also present. For securing path vector protocols, our cumulative authentication mechanism authenticates the list of routers on the path in a routing update, preventing removal or reordering of the router addresses in the list; the mechanism uses only a single authenticator in the routing update rather than one per router address. We also present a simple mechanism to securely switch one-way chains, by authenticating the next one-way chain using the previous one. These mechanisms are all based on efficient symmetric cryptographic techniques and can be used as building blocks for securing routing protocols.
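
The hash-chain flavor of these mechanisms can be sketched as follows (a simplified one-way chain per destination; the paper's hash tree chains and MW-chains add structure this toy omits):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(value: bytes, n: int) -> bytes:
    """Apply the one-way function n times."""
    for _ in range(n):
        value = h(value)
    return value

# The origin publishes chain(seed, MAX_METRIC) as a public anchor.
MAX_METRIC = 16
seed = b"per-destination secret"
anchor = chain(seed, MAX_METRIC)

# A router advertising metric m releases chain(seed, m); anyone hashes it
# forward (MAX_METRIC - m) times and checks it against the anchor.  Hashing
# is one-way, so a malicious router can only move forward, i.e. increase m.
def verify(value: bytes, m: int) -> bool:
    return chain(value, MAX_METRIC - m) == anchor

adv = chain(seed, 5)
assert verify(adv, 5)
assert verify(h(adv), 6)   # a next hop can only claim a larger metric
```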

Working Around BGP: An Incremental Approach to Improving Security and Accuracy of Interdomain Routing

Authors:

Geoffrey Goodell
Harvard University

William Aiello, Timothy Griffin, John Ioannidis, Patrick McDaniel, Aviel Rubin
AT&T Labs -- Research

Abstract:

BGP is essential to the operation of the Internet, but is vulnerable to both accidental failures and malicious attacks. We propose a new protocol that works in concert with BGP, which Autonomous Systems will use to help detect and mitigate accidentally or maliciously introduced faulty routing information. The protocol differs from previous efforts at securing BGP in that it is receiver-driven, meaning that there is a mechanism for recipients of BGP *UPDATE* messages to corroborate the information they receive and to provide feedback. We argue that our new protocol can be adopted incrementally, and we show that there is incentive for network operators to do so. We also describe our prototype implementation.

Integrating Security, Mobility, and Multi-homing in a HIP Way

Authors:
Pekka Nikander, Jukka Ylitalo, and Jorma Wall

Abstract:

The current trend in mobile networking is towards mobile hosts that have multiple network interfaces, e.g., WLAN and GPRS. However, when the current Internet architecture was originally designed, neither mobility nor multi-homing was considered. In the current architecture, an IP address represents both a host's identity and the host's topological location. This overloading has led to several security problems, including the so-called address ownership problem, making IP mobility and multi-homing unnecessarily hard from a security point of view.

In this paper we show how the Host Identity Payload (HIP), being discussed at the IETF, can be used to simultaneously solve the security problems, and many of the practical problems, related to end-host multi-homing and end-host mobility. Basically, HIP introduces a new cryptographic name space and protocol layer between network and transport layers, breaking the fixed binding between identities and locations. The approach is especially suitable for large open networks, where no pre-existing trust relationships can be assumed. We also report our early implementation experiences.

Access Control based on Execution History

Authors:
University of California at Santa Cruz

Cedric Fournet
Microsoft Research

Abstract:

Security is a major, frequent concern in extensible software systems such as Java Virtual Machines and the Common Language Runtime. These systems aim to enable simple, classic applets and also, for example, distributed applications, Web services, and programmable networks, with appropriate security expectations. Accordingly, they feature elaborate constructs and mechanisms for associating rights with code, including a technique for determining the run-time rights of a piece of code as a function of the state of the execution stack. These mechanisms prevent many security holes, but they are inherently partial and they have proved difficult to use reliably.

We motivate and describe a new model for assigning rights to code: in short, the run-time rights of a piece of code are determined by examining the attributes of any pieces of code that have run (including their origins) and any explicit requests to augment rights. This history-based model addresses security concerns while avoiding pitfalls. We analyze the model in detail; in particular, we discuss its relation to the stack-based model and to the policies and mechanisms of underlying operating systems, and we consider implementation techniques. In support of the model, we also introduce and implement high-level constructs for security, which should be incorporated in libraries or (even better) in programming languages.
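
A minimal sketch of the history-based rule (a hypothetical API; the paper targets runtimes such as the CLR, not Python):

```python
class History:
    """Effective rights start at the static maximum and shrink to the
    intersection of the rights of all code that has executed."""
    def __init__(self, all_rights: set[str]):
        self.current = set(all_rights)

    def ran(self, code_rights: set[str]) -> None:
        self.current &= code_rights          # running code leaves its mark

    def grant(self, rights: set[str]) -> None:
        self.current |= rights               # explicit amplification

    def demand(self, right: str) -> None:
        if right not in self.current:
            raise PermissionError(right)

hist = History({"FileRead", "FileWrite", "Net"})
hist.ran({"FileRead", "Net"})    # an untrusted plug-in ran: FileWrite is lost
hist.demand("FileRead")          # still permitted
```

Unlike stack inspection, the lost right stays lost after the untrusted code returns, unless trusted code explicitly amplifies it.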

Testing C Programs for Buffer Overflow Vulnerabilities

Authors:
Eric Haugh, Matt Bishop
University of California, Davis

Abstract:

Buffer overflow vulnerabilities are a serious security problem for programs written in C. Traditional testing techniques, such as statement or branch coverage, are not good at uncovering such flaws. A testing method that augments traditional testing techniques can be used to uncover buffer overflow flaws in real programs. This method involves instrumenting the program under test with code that keeps track of memory buffers and checks the arguments to string functions from the C standard library. If this checking finds certain conditions, the program emits a warning. It does this when executed with "normal" test data as input, rather than input designed to trigger overflow. These warnings indicate the possibility of a buffer overflow flaw in the program under test. A tool that implements this testing method was evaluated by testing three widely used, open source software packages. This evaluation shows that the tool is useful for finding buffer overflow flaws, that it has a low false positive rate, and that it compares well with other techniques.
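
The instrumentation idea can be modelled in a few lines (a toy Python mirror of what the tool does to C string functions; all names here are hypothetical):

```python
# Record each buffer's size at declaration, and flag string-function calls
# whose source could exceed the destination -- even on benign test inputs.
buffers: dict[str, int] = {}
warnings: list[str] = []

def declare(name: str, size: int) -> None:
    """Analogue of noting a buffer's size where it is declared."""
    buffers[name] = size

def checked_strcpy(dst: str, src_len: int) -> None:
    """Analogue of a wrapped strcpy: warn if source (plus NUL) may not fit."""
    if src_len + 1 > buffers[dst]:
        warnings.append(f"possible overflow of {dst}")

declare("path", 64)
checked_strcpy("path", src_len=10)   # fits: no warning
checked_strcpy("path", src_len=80)   # warns even though nothing crashed
```

The point of the method is the second call: ordinary test data exposes the dangerous condition without needing an input crafted to trigger the overflow.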

SiRiUS: Securing Remote Untrusted Storage

Authors:
Eu-Jin Goh, Hovav Shacham, Nagendra Modadugu, and Dan Boneh
Stanford University

Abstract:

This paper presents SiRiUS, a secure file system designed to be layered over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase. SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic access control for file-level sharing. Key management and revocation are simple, with minimal out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash tree constructions. SiRiUS contains a novel method of performing file random access in a cryptographic file system without the use of a block server. Extensions to SiRiUS include large-scale group sharing using the NNL key revocation construction. Our implementation of SiRiUS performs well relative to the underlying file system despite using cryptographic operations.
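
The freshness mechanism can be sketched with a standard hash (Merkle) tree over file blocks; this toy omits SiRiUS's metadata layout and signatures:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Hash-tree root over the file's blocks."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate a lone last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)       # the owner keeps/signs this small value
blocks[2] = b"rolled back"       # an untrusted server serving stale data
assert merkle_root(blocks) != root
```

Because the client trusts only the small root, a server that silently rolls a file back to an older version fails the check.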

A Comparison of Publicly Available Tools for Dynamic Buffer Overflow Prevention

Authors:
John Wilander
Linköpings universitet and the National Computer Graduate School in Computer Science (CUGS)

Mariam Kamkar

Abstract:

The size and complexity of software systems are growing, increasing the number of bugs. Many of these bugs constitute security vulnerabilities, the most common being the buffer overflow vulnerability. In this paper we implement a testbed of 20 different buffer overflow attacks and use it to compare four publicly available tools for dynamic intrusion prevention that aim to stop buffer overflows. The tools are compared empirically and theoretically. The best tool is effective against only 50% of the attacks, and there are six attack forms that none of the tools can handle.

Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools

Author:
Tal Garfinkel
Computer Science Department, Stanford University

Abstract:

System call interposition is a powerful method for regulating and monitoring application behavior. In recent years, a wide variety of security tools have been developed that use this technique. This approach brings with it a host of pitfalls for the unwary implementer that, if overlooked, can allow a tool to be easily circumvented. To shed light on these problems, we present the lessons we learned in the course of several design and implementation cycles with our own system call interposition-based sandboxing tool. We first present some of the problems and pitfalls we encountered, including incorrectly replicating OS semantics, overlooking indirect paths to resources, race conditions, incorrectly subsetting a complex interface, and side effects of denying system calls. We then present some practical solutions to these problems, and provide general principles for avoiding the difficulties we encountered.

Detecting Service Violations and DoS Attacks

Authors:
Ahsan Habib, Mohamed M. Hefeeda, and Bharat K. Bhargava
Purdue University

Abstract:

Denial of service (DoS) attacks are a serious threat to the Internet. DoS attacks can consume memory, CPU, and network resources and damage or shut down the operation of the resource under attack (the victim). Quality of service (QoS) enabled networks, which offer different levels of service, are vulnerable to QoS attacks as well as DoS attacks. The aim of a QoS attack is to steal network resources, e.g., bandwidth, or to degrade the service perceived by users. We present a classification and a brief explanation of the approaches used to deal with DoS and QoS attacks. Furthermore, we propose network monitoring techniques to detect service violations and to infer DoS attacks. Finally, a quantitative comparison among all schemes is conducted, in which we highlight the merits of each scheme and estimate the overhead (both processing and communication) it introduces. The comparison provides guidelines for selecting the appropriate scheme, or a combination of schemes, based on the requirements and how much overhead can be tolerated.

A Virtual Machine Introspection Based Architecture for Intrusion Detection

Authors:
Tal Garfinkel and Mendel Rosenblum
Computer Science Department, Stanford University

Abstract:

Today's architectures for intrusion detection force the IDS designer to make a difficult choice. If the IDS resides on the host, it has an excellent view of what is happening in that host's software, but is highly susceptible to attack. On the other hand, if the IDS resides in the network, it is more resistant to attack, but has a poor view of what is happening inside the host, making it more susceptible to evasion. In this paper we present an architecture that retains the visibility of a host-based IDS, but pulls the IDS outside of the host for greater attack resistance. We achieve this through the use of a virtual machine monitor. Using this approach allows us to isolate the IDS from the monitored host but still retain excellent visibility into the host's state. The VMM also offers us the unique ability to completely mediate interactions between the host software and the underlying hardware. We present a detailed study of our architecture, including Livewire, a prototype implementation. We demonstrate Livewire by implementing a suite of simple intrusion detection policies and using them to detect real attacks.

Proxy Cryptography Revisited

Authors:
Anca Ivan, Yevgeniy Dodis
NYU

Abstract:

In this work we revisit and formally study the notion of proxy cryptography. Intuitively, various proxy functions allow two cooperating parties F (the "FBI") and P (the "proxy") to duplicate the functionality available to the third party U (the "user"), without being able to perform this functionality on their own (without cooperation). The concept is closely related to the notion of threshold cryptography, except we deal with only two parties P and F, and place very strict restrictions on the way the operations are performed (which is done for the sake of efficiency, usability and scalability). For example, for decryption (resp. signature) P (F) sends a single message to F (P), after which the latter can decrypt (sign) the message. Our formal modeling of proxy cryptography significantly generalizes, simplifies and simultaneously clarifies the model of "atomic proxy" suggested by Blaze and Strauss [4]. In particular, we define unidirectional and bidirectional variants of our model, and show extremely simple generic solutions for proxy signature and encryption in these models. We also give more efficient solutions for several specific schemes. We conclude that proxy cryptography is a relatively simple concept to satisfy when looked at from the correct and formal standpoint.
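
The "atomic proxy" idea of Blaze and Strauss that the abstract references can be sketched as ElGamal-style re-encryption over a toy group (parameters p=23, q=11, g=2 are far too small for real use):

```python
import secrets

p, q, g = 23, 11, 2                     # g has order q modulo p

a = 1 + secrets.randbelow(q - 1)        # user U's secret key
b = 1 + secrets.randbelow(q - 1)        # recipient's secret key
pi = (b * pow(a, -1, q)) % q            # proxy key b/a handed to P

def encrypt(m: int, key: int):
    k = 1 + secrets.randbelow(q - 1)
    return (m * pow(g, k, p)) % p, pow(g, key * k % q, p)

def reencrypt(ct):
    """P transforms a ciphertext under a into one under b,
    without ever being able to decrypt on its own."""
    c1, c2 = ct
    return c1, pow(c2, pi, p)           # (g^ak)^(b/a) = g^bk

def decrypt(ct, key: int) -> int:
    c1, c2 = ct
    gk = pow(c2, pow(key, -1, q), p)    # recover g^k
    return (c1 * pow(gk, -1, p)) % p

m = 9                                   # message as a nonzero element mod p
assert decrypt(reencrypt(encrypt(m, a)), b) == m
```

Note the scheme is bidirectional: whoever knows pi and b can also derive a, which is one of the modeling distinctions the paper formalizes.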

Proactive Two-Party Signatures for User Authentication

Authors:

Antonio Nicolosi, Maxwell Krohn, Yevgeniy Dodis, and David Mazières
NYU Department of Computer Science

Abstract:

We study proactive two-party signature schemes in the context of user authentication. A proactive two-party signature scheme (PSS) allows two parties---the client and the server---jointly to produce signatures and periodically to refresh their sharing of the secret key. The signature generation remains secure as long as both parties are not compromised between successive refreshes. We construct the first such proactive scheme based on the discrete log assumption by efficiently transforming Schnorr's popular signature scheme into a PSS. We also extend our technique to the signature scheme of Guillou and Quisquater (GQ), providing two practical and efficient PSSs that can be proven secure in the random oracle model under standard discrete log or RSA assumptions.

We demonstrate the usefulness of PSSs (as well as our specific constructions) with a new user authentication mechanism for the Self-certifying File System (SFS). Based on a new PSS we call 2Schnorr, the new SFS authentication mechanism lets users register the same public key in many different administrative realms, yet still recover easily if their passwords are compromised. Moreover, an audit trail kept by a secure authentication server tells users exactly what file servers an attacker may have accessed---including even accounts the user may have forgotten about.
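
A minimal sketch of two-party Schnorr signing with additive key shares (toy parameters p=23, q=11, g=2; the proactive refresh of the shares, central to the paper, is omitted):

```python
import hashlib, secrets

p, q, g = 23, 11, 2                     # g has order q modulo p

def H(R: int, m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(bytes([R]) + m).digest(), "big") % q

x1, x2 = secrets.randbelow(q), secrets.randbelow(q)   # client/server shares
y = pow(g, (x1 + x2) % q, p)                          # joint public key

def sign(m: bytes):
    k1, k2 = secrets.randbelow(q), secrets.randbelow(q)
    R = (pow(g, k1, p) * pow(g, k2, p)) % p           # combined nonce
    e = H(R, m)
    s1 = (k1 + x1 * e) % q                            # client's partial sig
    s2 = (k2 + x2 * e) % q                            # server's partial sig
    return R, (s1 + s2) % q

def verify(m: bytes, R: int, s: int) -> bool:
    return pow(g, s, p) == (R * pow(y, H(R, m), p)) % p

R, s = sign(b"login request")
assert verify(b"login request", R, s)
```

Because g^(s1+s2) = R * y^e, the combined signature verifies as an ordinary Schnorr signature, yet neither share alone suffices to sign.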

Efficient Multicast Packet Authentication

Authors:
Alain Pannetrat and Refik Molva
Institut Eurecom

Abstract:

Providing authentication mechanisms for IP-Multicast streams is paramount for the development of large-scale commercial multicast content delivery applications. This need is particularly strong for the delivery of real-time content, such as live video/audio news events or financial stock quote distribution. However, this turns out to be quite a challenging problem for many reasons. First, the authentication of the multicast data must be verifiable by a potentially very large number of untrusted recipients. Second, since multicast communication protocols are almost always best effort, the authentication mechanism needs to authenticate received content despite the potential loss of some packets. Finally, the authentication mechanism needs to be efficient enough to cope with real-time data and should have a small communication overhead. We propose a new multicast authentication scheme designed to authenticate real-time multicast packet streams with a potentially unlimited number of recipients. This scheme provides both integrity and non-repudiation of origin, and in a majority of situations, it performs with less overhead in bytes per packet than previously proposed practical real-time stream authentication schemes.

Efficient Distribution of Key Chain Commitments for Broadcast Authentication in Distributed Sensor Networks

Authors:
Donggang Liu and Peng Ning
Department of Computer Science North Carolina State University

Abstract:

Broadcast authentication is a fundamental security service in distributed sensor networks. A scheme named µTESLA has been proposed for efficient broadcast authentication in such networks. However, µTESLA requires initial distribution of certain information, based on unicast between the base station and each sensor node, before the actual authentication of broadcast messages. Due to the limited bandwidth of wireless sensor networks, this initial unicast-based distribution severely limits the application of µTESLA in large sensor networks. This paper presents a novel technique to replace the unicast-based initialization with a broadcast-based one. As a result, µTESLA can be used in a sensor network with a large number of sensors, as long as the message from the base station can reach these sensor nodes. This paper further explores several techniques that improve the performance, the robustness, and the security of the proposed method. The resulting protocol satisfies several nice properties, including low overhead, tolerance of message loss, scalability to large networks, and resistance to replay attacks as well as some known denial-of-service (DoS) attacks.
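
The one-way key chain underlying µTESLA can be sketched as follows (the paper's actual contribution, distributing the commitment by broadcast instead of unicast, is not shown here):

```python
import hashlib

def F(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()

# Build the chain backwards: K_i = F(K_{i+1}); only K_n is secret, and keys
# are disclosed in order K_1, K_2, ... as their time intervals pass.
n = 5
keys = [b"illustrative secret K_n"]
for _ in range(n):
    keys.append(F(keys[-1]))
keys.reverse()                  # keys[i] is K_i; keys[0] is the commitment
K0 = keys[0]

def authentic(disclosed: bytes, i: int) -> bool:
    """A key claimed to be K_i checks out if F^i(key) equals the commitment."""
    v = disclosed
    for _ in range(i):
        v = F(v)
    return v == K0

assert authentic(keys[3], 3)
assert not authentic(b"forged key", 3)
```

A node holding any authentic K_i can also verify later keys, which is why loss-tolerant, delayed key disclosure works; the whole scheme hinges on every node first obtaining an authentic commitment.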