Internet News: Interview with Fred Baker

March, 2004

Mario Chiari of the Italian magazine Internet News recently interviewed Fred Baker, Chairman of ISOC's Board of Trustees. Fred was asked about how technical and policy issues are affecting the development of the Internet. The following transcript of the interview is reproduced here courtesy of Mario Chiari and Internet News:

MC: Let me start with the usual question on ICANN and DNS governance: are we going to see more TLDs? In particular, IETF people have been quoted as claiming that technical reasons advise against increasing the number of TLDs too much. How much would be too much? Would five, 50, or 500 more TLDs be too much?

FB: At one time, there was a serious proposal within the ICANN discussion to eliminate TLDs as we know them and replace them with what is currently the second level domain name. In this proposal, every legal person, which is to say every human and every corporation, might legitimately have its own top level domain name.

In the early ARPANET, there was no name system as we have it now. Since there were only a few hundred hosts in the entire Internet, every time someone needed to add a host, they sent an email to SRI to add a name and an address to a file called the HOSTFILE. Once a week, SRI published the new HOSTFILE, and everyone would download it. This fairly quickly ran into operational problems - it was very painful to maintain. So in 1981 (as I recall) Paul Mockapetris designed a new system, which he called the Domain Name System, or DNS. In this system, SRI maintained a few computers that would translate one level of hierarchy, sending name lookups to national or international TLD servers, which would translate second level domain names and send the queries on to companies or other name servers, which would in turn interpret third and fourth level domain names into IP addresses.

There have been other proposals, one of which I tend to like, which uses a set of directories. The important thing is not the exact service used to implement names; it is the hierarchical characteristic that makes them manageable and scalable. What the IETF told ICANN (which I know because I conveyed the messages) is that discarding the hierarchical architecture and returning to the moral equivalent of the HOSTFILE is a bad idea. It should stay with an architecture that allows for hierarchy and scaling. There have been proposals to partially or fully replace the DNS with a directory-based system, which I think are viable; the DNS as it stands is not the only way to do this. But whatever system we choose must scale to the size of the Internet, and the HOSTFILE - and any other flat name space - doesn't.

When pressed for an exact number of gTLDs, the IETF said that there is no technical reason to choose one number over another. To illustrate, consider the case of a ccTLD: if someone decides to divide Liechtenstein into two countries, then regardless of the number of TLDs already in the root zone, the new country's ccTLD will be given a place there. Ergo, regardless of the number of TLDs that are already in the root zone, it is technically possible to add another. So there is no simple technical limit. Rather, the IETF told ICANN that this was essentially a business issue - how many TLDs are simultaneously viable as businesses? When pressed even further, the IETF said that ICANN might consider initially adding seven to ten new gTLDs and seeing how it went.

I don't know what ICANN plans to do with the number of gTLDs; if I were king, gTLDs would be removed from the system entirely, and people would get their names from national registrars. This question is not a technical question. I personally think that there is not room for an infinite number of gTLDs, and in fact from a business perspective the number is very finite, on the order of tens, perhaps hundreds, but not thousands.
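
[Editor's note: the sketch below, in Python, is only a toy illustration of the hierarchical delegation Fred describes. The miniature namespace, names, and addresses are invented, and a real resolver queries root, TLD, and authoritative servers over the network rather than walking an in-memory table.]

    # Toy sketch of hierarchical name resolution (contrast with a flat HOSTFILE).
    # Each level of the tree only knows its own children, so no single table
    # has to hold every name on the Internet. All names and addresses are invented.
    NAMESPACE = {
        "com": {
            "example": {"www": "192.0.2.10", "mail": "192.0.2.25"},
        },
        "it": {
            "esempio": {"www": "198.51.100.7"},
        },
    }

    def resolve(name):
        """Walk the hierarchy right to left: TLD, then second level, then host."""
        node = NAMESPACE
        for label in reversed(name.split(".")):   # "www.example.com" -> com, example, www
            if not isinstance(node, dict) or label not in node:
                return None                        # no delegation for this label
            node = node[label]
        return node if isinstance(node, str) else None

    print(resolve("www.example.com"))      # 192.0.2.10
    print(resolve("www.nosuchname.org"))   # None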

MC: Do you agree with ICANN's statement that VeriSign's Wildcard service could, among other issues, be harmful to the stability of the DNS?

FB: Many ISPs have facilities that look up the domain name or email address of the sender of an email message to determine whether the message is spam. The way these facilities work is that they ask the DNS to look up the name. If it returns an address, they might decide that the message is not bogus. If they are checking the email address and the DNS returns an IP address, they then open an SMTP connection to that IP address and tell it they want to send a message to the email address. If the SMTP server states that it will accept the email, they decide that the message is not bogus. When VeriSign deployed the Wildcard, suddenly all DNS lookups returned an address. Thus, spam filters based on that concept failed. The thing that made ICANN and many enterprise and service networks angry was not just that this happened, but that VeriSign, which is in a position of public trust with respect to the DNS, deployed it without discussion with them. In their opinion, an opinion I share, VeriSign abused its position of public trust for its own benefit. Yes, I support ICANN's position that deploying the Wildcard service, especially without discussion, was inappropriate.
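
[Editor's note: a rough sketch, using only the Python standard library, of the first kind of check described above. The addresses are placeholders, and a real filter would also consult MX records and attempt the SMTP callback Fred mentions; the point is simply that a wildcard which answers every query defeats the "does this name resolve?" test.]

    # Sketch of a DNS-based sender check. Before the wildcard, a lookup failure
    # for the sender's domain was a strong hint that the message was bogus.
    import socket

    def sender_domain_resolves(address):
        """Return True if the domain part of an email address resolves in DNS."""
        domain = address.rsplit("@", 1)[-1]
        try:
            socket.getaddrinfo(domain, None)   # ordinary DNS lookup
            return True
        except socket.gaierror:
            return False

    # Placeholder addresses, for illustration only.
    for sender in ("user@example.com", "user@no-such-domain.example"):
        status = "resolves" if sender_domain_resolves(sender) else "does not resolve"
        print(sender, "->", status)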

MC: In these and other issues, is ICANN a public policy maker in disguise? Is ICANN doing its best to separate technical issues from broader policy issues? If not, what would you like ICANN to do?

FB: Yes, in some ways it is a public policy maker. That is the reason it has a Governmental Advisory Committee to help it make wise public policy decisions. ICANN seeks significant technical and policy input from a variety of sources to make these decisions. It sees itself as the current guardian of the root zone of the DNS, and of certain aspects of Internet operation. It gets a lot of abuse for the decisions it makes, and at times that abuse may be deserved. But its charter is to set policy for the root zone from a position of public trust, which implies that it grants freedom to operate in ways that don't hurt the Internet, and withholds that freedom when there is a perception that withholding it is better for the Internet.

MC: The ITU is trying to play a bigger role in the Internet. Since the ITU is associated by many with the failed OSI way of (not) doing things, traditional telecom lobbying, and UN bureaucrats, most of the Internet community seems to be afraid of what could come out of WSIS, for example. On the other hand, the ITU and other UN-sponsored agencies and initiatives seem to some a possible way to counterbalance the large influence of a very few big American corporations. Your opinion?

FB: I'm curious what American corporations you have in mind. The ones that come quickly to my mind as having funded ICANN's operation have a history of supporting ICANN's role as a public trust. The ITU would make sense as a place for governments to talk with each other about the Internet, if governments were the key players in the Internet. Personally, I think the US Department of Commerce (which has been slow to completely hand over the reins to ICANN for various reasons, some of which might be better than others) plays too strong a role in the management of the root zone. But for the most part, governments are very secondary players; the key interests are corporate, and are largely trans-national. On the other hand, to be honest, I can't point to very many Internet-related projects or technologies in which the ITU has been very successful. From my perspective - and this is a very personal perspective - the important point is not whether one organization or another has historically had a role in some unrelated technology; what is important to me is whether there is demonstrated competence in handling Internet-related issues. At the moment, the success rate of the ITU doesn't convince me that it is well qualified to manage the Internet's root zone.

MC: What is the role of ISOC in this environment? I understand that ISOC is supportive of but wishes to stay separate from ICANN. How do you see this?

FB: ISOC tries very hard to educate people about the Internet and to provide fora for them to discuss issues they might have with it. ISOC also provides the corporate home for the IETF, which has for eighteen years been providing technical solutions that make the Internet work, such as BGP, PPP, SMTP and TCP extensions, and so on. ISOC supports ICANN's charter and efforts to manage the Internet root zone, and it participates in the WSIS discussions from a viewpoint of promoting educated discussion. That said, ISOC is not ICANN; ISOC reserves the right to disagree with ICANN when it would be constructive to do so.

MC: Some local ISOC chapters, including the Italian chapter, have been recognized as At-Large structures. As the chairman of the ISOC Board, how do you see such a development?

FB: By this, I believe that you mean that ICANN has essentially blessed an ISOC chapter as its spokesperson in a region on a subject, or asked to directly use the chapter's mailing list for a purpose. Here, I am personally concerned. ISOC or an ISOC chapter may very well decide to cooperate with ICANN on a project, or mail a message to its membership at ICANN's request. That doesn't make the mailing list or the ISOC Chapter an ICANN-related organization; it makes it a friend of ICANN willing to cooperate with it. I personally think that an ISOC chapter declaring itself to also be an ICANN mouthpiece has given up an important aspect - the ability to disagree with ICANN when it would be constructive to do so. So I would far rather see the chapter remain an ISOC chapter cooperating with ICANN under some specified rules of engagement, rather than trying to make itself simultaneously part of ISOC and part of ICANN.

MC: Let me move away from 'politics' and look at the IETF. What is hot? What should a young researcher look at?

FB: There is a long list of interesting technologies in the Internet, many of which are currently in the process of development or extension. Let me tell you a story. Once, I was speaking at a research conference, and a professor complained to me that the IETF generally, and I personally, write too many RFCs. "When you publish an RFC, we start our research on it. Then you publish a new RFC, and we have to rethink our research." I fairly sputtered... My answer to him was: "Then you're not doing research; you're doing quality assurance. If I am the guy pushing the envelope, I am doing the research. What I wish you would do is do the research first, figuring out all the ways to do something and clarifying which ones are good ways. Then I could write the RFC once, and it would use the best way that our industry had thought of."

Research is about trying out new ideas, often when we don't know much about how to make them work. As such, research comes up with a lot of stuff that is not directly of commercial value. Every once in a while, though, it produces something that is, and the compendium of learning developed in trying out the good, the bad, and the ugly helps us go on to deploy useful technologies. That is what research is for. I would encourage researchers to do exactly that - play with the toys, try new and innovative ideas, and make mistakes. That is how we learn, and every so often we get it right.

What is "hot" right now is the general topic of security on the Internet, along with a variety of new applications such as voice, video, gaming, and peer-to-peer file sharing. Service providers are looking at ways to deliver stable service on converged networks carrying a variety of applications that are not yet ubiquitous. Especially in a few years, when all of these applications are running encrypted, there are some interesting problems there. Also, using TCP in a highly predictable fashion has some important problems that need research consideration. And, oh by the way, applications we haven't thought of yet are begging to be developed.

MC: I hear something about MANET, the idea being, if I understand correctly, that of a network of mobile devices which form a peer-to-peer network for the exchange of voice and data, somewhat like an Internet version of a walkie-talkie radio connection.

FB: Mobile Ad Hoc Networks are networks that use IP routing technology suitable for wireless IP networks. This could be used by mobile telephones, but at this point that is not the plan. It has a lot of similarities to wireless LAN technology, but again it is not the same thing, because that runs at a lower layer. Mobile ad hoc networks are wireless *IP* networks that change their routing as computers within them move around or the routing otherwise changes. I'm not sure what your question is, but if it is "so what are they all about", there has been quite a bit of research on them over the past decade, and it looks to me like they might provide underlying network connectivity for new classes of applications, such as allowing cars on the road to talk with each other about things that are important to cars and to driving. They would very likely interact with the fixed Internet when such interaction is possible, but could also be used to quickly deploy basic IP connectivity in a region where there is no fixed Internet.
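
[Editor's note: the toy sketch below illustrates only the general idea that routes in a mobile ad hoc network are recomputed as nodes move and connectivity changes. The topology is invented, and real MANET protocols such as AODV or OLSR compute routes through distributed signalling rather than a centralized search.]

    # Each snapshot of radio connectivity is a graph; when a node moves,
    # the graph changes and routes are recomputed. Topology is invented.
    from collections import deque

    def shortest_path(links, src, dst):
        """Breadth-first search over the current connectivity graph."""
        frontier = deque([[src]])
        visited = {src}
        while frontier:
            path = frontier.popleft()
            if path[-1] == dst:
                return path
            for nxt in links.get(path[-1], ()):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None  # destination currently unreachable

    # Cars A..D in a line, each within radio range of its neighbours.
    links = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
    print(shortest_path(links, "A", "D"))   # ['A', 'B', 'C', 'D']

    # Car B drives away; A now hears C directly, and routing adapts.
    links = {"A": {"C"}, "C": {"A", "D"}, "D": {"C"}}
    print(shortest_path(links, "A", "D"))   # ['A', 'C', 'D']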