Phil Lapsley
Phil Lapsley is the co-author of the Network News Transfer Protocol, which is the standard method for Usenet transmission to this day.
Lapsley attended the University of California at Berkeley, where he earned a B.S. and an M.S. in electrical engineering and computer science in 1988 and 1991, respectively. In addition to developing NNTP, during his time at Berkeley he co-founded the Experimental Computing Facility and contributed to the Berkeley UNIX project.
After graduating from Berkeley, Lapsley went on to co-found two companies, Berkeley Design Technology, Inc., a digital signal processing technology advisory firm, and SmartTouch, a company specializing in biometric financial transaction processing.
In addition to his degrees from Berkeley, Lapsley holds an MBA from the MIT Sloan School of Management. He worked as a management consultant in the high-tech practice of McKinsey and Company’s Silicon Valley office. He is currently writing a book on the history of “phone phreaking” (the predecessor to computer hacking).
Interview (5/1/2007) with Phil Lapsley
May 1, 2007
Before I answer your questions, let’s hop in the wayback machine and take a quick ride to check out the computing environment at U.C. Berkeley in 1984, as well as set the stage for my involvement in all this:
- This was almost 10 years before the World Wide Web. While there was such a thing as “the Internet,” it was in the form of the ARPANET. A “high speed” connection on the ARPANET was 56 kbps, i.e., about what you get out of a good dial-up modem now. (Does anybody actually use dial-up modems anymore?)
- Ethernet was a new invention and had only recently been deployed around a few parts of the Berkeley campus. In fact, the main CS Department Ethernet was 3 megabits/second, and a new Ethernet had just been installed that was 10 megabits/second. The prior RS-232-based serial network, something called “Berknet,” developed by some Berkeley master’s student named Eric Schmidt (hmm… where have I heard that name recently? Maybe I should google him… hmm…) was being phased out, but a few machines still used it.
- The best computer you could reasonably hope to get access to was the Digital Equipment Corporation VAX. Berkeley had a dozen or so of these campus-wide, shared among students, professors, researchers, and staff. The big machines were VAX-11/780s, of which there were only a couple, and the small machines were VAX-11/750s. A typical machine had 4 megabytes (yes, megabytes) of memory and a few hundred megabytes of disk.
- The best disk drive at the time was the Fujitsu Eagle. This monster took up a good chunk of a 19″ rack and weighed 143 pounds. It shipped from the factory with lifting straps in the box to help you lift it into the rack. It took 30 seconds to spin up when you turned it on, and it gave you a whopping 470 megabytes (yes, megabytes) unformatted, which worked out to about 330 megabytes formatted. It cost $10,000 or so. If you were lucky, your VAX might have a couple of them.
- You interacted with a computer via a “smart” terminal, meaning text only, 24 lines of 80 characters. By 1986 this was starting to change a little bit, in that the first black-and-white Sun 3/50 workstations were becoming available, but they were rare and the majority of people interacted with computers via terminals – no graphics, no mouse, just text.
- If you dialed up a computer from home, or if you had two computers connected together (and you weren’t lucky enough to have ARPANET access), you used a dialup modem. The popular modem at the time was 1200 bps, with 2400 bps just starting to take off. Seems slow, but it was a big upgrade from the 300 bps modems of just a year or two earlier.
Ok, with that as background, let’s talk about USENET, or netnews as it was more commonly called.
Netnews in those days was distributed by dial-up modem using the UUCP protocol. That is, the computers dialed each other up and exchanged netnews and email files at 1200 or 2400 bps. A few lucky people had Telebit modems that ran at 9600 bps half-duplex (which was fine for UUCP). But there were several problems with it, at least at Berkeley.
First, most computers at Berkeley didn’t have modems or telephone lines, so most computers simply couldn’t get a dial-up UUCP connection to another computer. And even if your machine did have a modem, long-distance telephone calls were expensive. In fact, even local calls cost you a few cents per minute because the phone lines were business (vs. residential) lines.
The second problem was that netnews was viewed as frivolous – not something that had real academic priority. In fact, there was one professor, Richard Fateman, who several times a year posted a short message reminding people that “csmsgs” (the internal CS department netnews-like message board at Berkeley) was for official use only and not to be used for posting anything else like apartments for rent, because it wasted computing resources. You can imagine how he felt about netnews. (I wonder how he feels about the web today?)
But Prof. Fateman was right in one regard, which was the third problem: netnews was expensive in terms of disk and CPU resources. If I remember correctly, in 1985 or so a typical “full” news feed might consume 5 megabytes a week, so if you kept a month of netnews around, that was 20 megabytes of disk space. Again, remember that 300 megabytes might be all the disk your computer had back in those days, and a lot of that was used by the UNIX operating system, so you couldn’t really afford to devote roughly 10% of your total disk space (and much more of your available disk space) to some frivolous stuff.
As a result of all this, only two machines at Berkeley had netnews: ucbvax and ucbcad. Unfortunately, very few people had accounts on these machines. This meant that if you were just a typical student, or even a typical professor, you just didn’t have access to netnews.
And that’s where I came in: I was a pushy freshman in electrical engineering and computer science, I wanted netnews access, and I couldn’t get it, because I didn’t have a ucbvax account.
But I had a couple of things going for me. One was a fascination with networks and telecom. Before I was even a freshman at Berkeley, back in high school, I had written my own dial-up bulletin board system. It wasn’t as fancy as netnews, but it was my own step in that direction, and it gave me enough of a taste of netnews-style communications to be powerfully motivated to get access to it.
The second thing I had going for me was a willingness to learn stuff and write code. Again, even before being at Berkeley, I had managed to get ahold of the “ARPANET Resource Handbook” (essentially a collection of early RFCs) and was madly studying it, even though it didn’t make a great deal of sense to me at the time.
I had been hanging out at Berkeley since I was in high school (1983 or so), and I ended up helping out with the Berkeley UNIX project with the Computer Systems Research Group. I wanted to learn about how networking worked under BSD UNIX, so I talked to Mike Karels, one of the lead CSRG researchers, and asked him what I could do to help. We agreed that I would revise the "sockets" documentation (more formally known as the "4.2BSD Interprocess Communication Primer"). This led to my working on the "inetd" program, and in turn, by 1985, I was one of a handful of people at Berkeley who knew about client/server architecture and socket programming.
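For flavor, here is a minimal sketch of the server half of that sockets pattern, written in the spirit of the primer rather than taken from it: create a socket, bind it to a well-known port, listen, and accept connections in a loop. The port number and greeting are illustrative, not anything from the historical code.

```c
/* A minimal 4.2BSD-style TCP server sketch (illustrative only):
 * socket -> bind -> listen -> accept, one connection at a time. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s, c;
    struct sockaddr_in sin;

    s = socket(AF_INET, SOCK_STREAM, 0);        /* TCP stream socket */
    if (s < 0) { perror("socket"); exit(1); }

    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);    /* any local interface */
    sin.sin_port = htons(1119);                 /* illustrative port */

    if (bind(s, (struct sockaddr *)&sin, sizeof sin) < 0) {
        perror("bind"); exit(1);
    }
    listen(s, 5);                               /* queue pending connections */

    for (;;) {
        c = accept(s, NULL, NULL);              /* block until a client connects */
        if (c < 0) continue;
        write(c, "200 ready\r\n", 11);          /* greet the client ... */
        close(c);                               /* ... and hang up */
    }
}
```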
Somehow, from this background, I proposed to Mike and (the now late) Bob Henry that I develop a protocol for accessing netnews remotely from ucbvax over the Ethernet. This would benefit people in the computing community at Berkeley, as well as make life easier for Bob because he wouldn’t have people pestering him for accounts on ucbvax!
They thought this was an ok idea, so I started writing code, basing the whole thing on an SMTP-style client/server architecture, and pretty soon had something working. I modified a version of Larry Wall’s “rn” program to speak the protocol so there was a user interface for it. A few months into it, my friend Dave Pare at U.C. San Diego mentioned that his friend Brian Kantor was working on something similar. I had met Brian through Dave in 1983 or so on a visit to San Diego, so Brian and I already knew each other, at least casually. As I remember it, we traded a bunch of emails and some code and decided that it was silly for us both to develop things independently, and the collaboration was born.
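To give a sense of what "SMTP-style" means here: the client sends one-line text commands, and the server answers each with a three-digit status code plus some text, which is the shape NNTP kept all the way into RFC 977. Below is a rough sketch of a client doing such an exchange; it is not the historical code, and the host name and newsgroup are made up.

```c
/* A minimal sketch of the SMTP-style exchange NNTP uses: one-line text
 * commands from the client, three-digit status replies from the server
 * (per RFC 977).  Illustrative only; host and newsgroup are invented. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

static void command(FILE *in, FILE *out, const char *cmd)
{
    char reply[512];

    fprintf(out, "%s\r\n", cmd);            /* e.g. "GROUP net.unix-wizards" */
    fflush(out);
    if (fgets(reply, sizeof reply, in))     /* e.g. "211 42 3000 3042 ..."   */
        printf("%s -> %s", cmd, reply);
}

int main(void)
{
    struct hostent *hp = gethostbyname("news.example.edu"); /* hypothetical */
    struct sockaddr_in sin;
    char greeting[512];
    FILE *in, *out;
    int s;

    if (hp == NULL) { fprintf(stderr, "unknown host\n"); exit(1); }

    s = socket(AF_INET, SOCK_STREAM, 0);
    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    memcpy(&sin.sin_addr, hp->h_addr_list[0], hp->h_length);
    sin.sin_port = htons(119);              /* the NNTP port */
    if (connect(s, (struct sockaddr *)&sin, sizeof sin) < 0) {
        perror("connect"); exit(1);
    }

    in  = fdopen(s, "r");
    out = fdopen(dup(s), "w");

    fgets(greeting, sizeof greeting, in);       /* "200 ... server ready" */
    command(in, out, "GROUP net.unix-wizards"); /* select a newsgroup */
    command(in, out, "STAT 3000");              /* does article 3000 exist? */
    command(in, out, "QUIT");                   /* "205 goodbye" */
    return 0;
}
```

One nice property of this design, shared with SMTP, is that the protocol is plain text, so you can poke at a server by hand just by telnetting to its port and typing commands.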
Now notice that at this point all of my work was targeted at news reading, not news transfer. That is, the goal was for a news reader to be able to display articles to a user, not to transfer news between machines to replace UUCP. While we realized that it could be used for this, it wasn't my driving motivation. But Erik Fair, a friend of mine and a wheel in USENET circles, convinced us that it could and should be modified to handle both reading and transport.
So, on to your questions:
1. What benefits did Usenet provide for your professional or academic life?
I learned a lot: programming, how to do software distributions, how to work with other people, the concept of “finding a need and filling it.”
I got a fair amount of name recognition for it. I remember hiring a guy from some big company in the early 1990s and calling his former boss, whom he had listed as a reference. When I called him and identified myself as Phil Lapsley, he responded, “the Phil Lapsley?” That was nice.
Never did make any money off of it – not, mind you, that any of us back in those days would have even thought of doing that.
Hey, if Giganews goes public, you’re gonna cut me in on some friends and family shares, right? 🙂
2. How did the various NNTP developers collaborate?
Brian Kantor and I worked almost entirely through email. Brian was at U.C. San Diego and I was at Berkeley, so it almost had to be that way. In fact, I never worked with him in person, or even by telephone, until towards the very end of the project when we were working on a final version of the RFC. I flew down to San Diego to visit a friend and stopped by his office to talk about the RFC. I recall him saying words to the effect of, “Damn, we shouldn’t have actually gotten together in person. That way we could have told everybody it was done entirely remotely!”
Erik Fair and I were both physically in Berkeley, so we got together over Chinese food a lot, though we also interacted a lot through email. I also remember him reviewing a draft version of the spec when he was in Europe on a skiing vacation. He didn't have email access, so he wrote out his comments and mailed them to the CS department office at Berkeley. Unfortunately, I was just a lowly undergrad and didn't have a mailbox, so they dumped the package in the dead letter bin… this despite the fact that he wrote my email address on the envelope in several places, underlined and circled…
<sigh>
3. Technical challenges?
Things were actually relatively straightforward in that there were no “laws of physics” being violated. In retrospect, there were a whole bunch of things we could have done to make things better (e.g., make the protocol stateless), but what we were doing wasn’t rocket science (especially in retrospect!).
Probably the biggest concern was that the NNTP server, nntpd, needed to be efficient so that ucbvax wouldn't bog down. It would have defeated the purpose of the whole thing had that happened, because then Bob Henry would have been forced to shut it off, and the very act of trying to bring netnews to a wider audience would have been the thing that did it in. Fortunately, nntpd ended up being pretty efficient, so that turned out not to be a problem. The client program, rn, was a bit of a pig (hi, Larry! 🙂) and on some of the "instructional" computers at Berkeley rn was disallowed because it clobbered the machine too much. We ended up writing a lightweight reader called "rmsgs" or something like that, which was much less expensive in terms of CPU usage.
It was a bit of a challenge to maintain nntpd for the various versions of UNIX (from BSD to SysV to Xenix to god knows what all else), and to sync it up with new versions of rn from Larry Wall. But these weren't really technical challenges, more like operational challenges – they just took time, energy, and effort, all of which were in short supply for a full-time student during this period.
4. Client-server model?
The client/server model was pretty new. It sounds funny to say that now, since it's such an accepted part of the way things work, but at the time it was an exotic thing. Even then, though, it was the mainstay of Internet architecture: you had SMTP, telnet, ftp, rlogin, rsh, etc., all of which were based on client/server. And in fact, inetd was written to deal with the growing number of daemons that were being written and used. With inetd, you didn't have to have dozens and dozens of background servers sitting around clogging up your system while waiting for connections.
In fact, inetd both helped and hurt. On the one hand, when nobody was reading news, you didn't see any pesky nntpd processes floating around, which made things look good. On the other hand, it also meant that you got one nntpd process per remote newsreader, which had the potential to clog things up a bit. I know we toyed with the idea of a multi-threaded nntpd (one nntpd serving multiple clients) but I don't remember that we ever got around to implementing it.
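To make the per-connection model concrete: inetd listens on the service's port itself, and when a client connects it forks and execs the daemon with the socket wired up as stdin and stdout, so the daemon can be written as an ordinary filter. Here is a toy sketch of that style of daemon; the commands and replies are illustrative, not the real nntpd command set.

```c
/* Toy sketch of an inetd-launched daemon (illustrative only): inetd has
 * already accepted the TCP connection and exec'd this program with the
 * socket as stdin/stdout, so one process exists per connected client. */
#include <stdio.h>
#include <string.h>
#include <strings.h>    /* strncasecmp */

int main(void)
{
    char line[512];

    printf("200 server ready\r\n");     /* greeting goes out on the socket */
    fflush(stdout);

    while (fgets(line, sizeof line, stdin) != NULL) {
        if (strncasecmp(line, "QUIT", 4) == 0) {
            printf("205 goodbye\r\n");
            break;
        }
        printf("500 command not recognized\r\n");
        fflush(stdout);
    }
    return 0;   /* process exits; inetd reaps it */
}
```

The matching /etc/inetd.conf entry would look something like `nntp stream tcp nowait news /usr/etc/nntpd nntpd` (the path and user here are illustrative), with "nowait" telling inetd to spawn a fresh process for each connection.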
5. Specific roles?
Like I said up above, Brian and I both independently had developed our own news servers, and decided to collaborate and make a single server. There was never a conscious decision, but at some point Brian started handling more of the writing of the RFC while I was doing more of the code writing. This ended up being a pretty good system because we had not only a spec (RFC) but a reference implementation for the spec at the same time. Meanwhile, Erik Fair poked and prodded and cajoled us into doing various Right Things … plus he wrote some code of his own.
It's that line attributed to David Clark: "We reject kings, presidents, and voting. We believe in rough consensus and running code." I think NNTP very much embodied this approach.
6. Were you involved in any other USENET projects?
Not really. I maintained NNTP for a few years and then passed it over to Stan Barber at Baylor around 1988, who did a great job with it. By that time I had started grad school in digital signal processing and was trying to spend more time learning DSP algorithms and less time writing UNIX code, so I somewhat naturally got out of it.
7. Was the development part of your studies, or personal?
As you can probably see from above, personal. You gotta realize that at Berkeley in the early 1980s they didn’t even teach C programming, and the profs had no idea what TCP/IP was, so there’s no way this would have been for a class.