[Slash-interview-wiki 138] RobPike


hayam****@users*****
Mon, 25 Oct 2004 15:33:40 JST


+Pike:
+First, let me clear up a misstatement. I am not a co-creator of Unix. I suppose I am described that way because I am co-author (with Brian Kernighan) of a book about Unix, but neither Brian nor I would want to take credit for creating Unix. Ken Thompson and Dennis Ritchie created Unix and deserve all the credit, and more. I joined their group - the Computing Science Research Center of Bell Labs - after 7th Edition Unix had come out.
 
+1) Innovation and patents - by Zocalo
+With so many of your ideas being used with such ubiquity in modern operating systems, what is your stance on the issue of patenting of software and other "intellectual property" concepts? Assuming that business isn't going to let IP patents go away as they strive to build patent stockpiles reminiscent of the nuclear arms buildup during the cold war, how would you like to see the issue resolved?
+
+Pike:
+Comparing patents to nuclear weapons is a bit extreme.
+
+2) Systems research - by asyncster
+In your paper, "Systems Software Research is Irrelevant," you claim that there is little room for innovation in systems programming, and that all energy is devoted to supporting existing standards. Do you still feel this way now that you're working at Google?
+
+Pike:
+I was very careful to define my terms in that talk (it was never a paper). I was speaking primarily about operating systems and most of what I said then (early 2000) is still true.
+
+Here at Google the issues are quite different. The scale of the problem we're trying to solve is so vast there are always challenges. I find it interesting that the slide in that talk about 'Things to Build' is a close match to the stuff we're doing at Google, if you squint a bit. To summarize:
+
+GUI: Google put the cleanest, prettiest UI on the internet and work continues to find new ways to present data and make it easy to explore.
+
+Component architectures: We use a number of big (BIG!) piece parts like the Google File System (GFS) and MapReduce (see the paper by Jeff Dean and Sanjay Ghemawat in the upcoming OSDI http://labs.google.com/papers/mapreduce.html) to build massive engines for processing data. Using those pieces we can harness zillions of machines with a few keystrokes to attack a problem like indexing the entire internet. (OK, it's not quite that easy, but it's still amazing.) I have a daily test job I run to monitor the health of one of the systems I'm developing; it uses a week of CPU time but runs for only a few minutes of real time. (A toy sketch of the MapReduce model follows this list.)
+
+Languages for distributed computing: I'm part of a team working on something along those lines that we hope to write up soon.
+
+Bringing data to the user instead of the other way around: Those damn browsers are still in the way, but other ways of connecting to data are starting to appear, things like the Google API. However, the surface is barely scratched on this topic.
+
+System administration: Google's production people are phenomenal at keeping all those machines humming and ready for your queries. They demonstrated that there was real progress to be made in the field of system administration, and they continue to push forward.
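+
+Here's that toy, single-process Python sketch of the MapReduce model - only an illustration of the model's shape (the two user-supplied functions and the grouping step the framework runs between them), nothing like Google's actual implementation, and the sample documents are invented:
+
+    # Toy sketch of the MapReduce programming model (word count).
+    # The real system runs the map and reduce phases on thousands of
+    # machines over GFS; this only shows the shape of the two
+    # user-supplied functions and the shuffle between them.
+    from collections import defaultdict
+
+    def map_fn(doc_name, text):
+        # Map phase: emit an intermediate (word, 1) pair per word.
+        for word in text.split():
+            yield word.lower(), 1
+
+    def reduce_fn(word, counts):
+        # Reduce phase: combine all values emitted for one key.
+        return word, sum(counts)
+
+    def mapreduce(documents):
+        groups = defaultdict(list)          # the "shuffle" step
+        for name, text in documents.items():
+            for key, value in map_fn(name, text):
+                groups[key].append(value)
+        return dict(reduce_fn(k, v) for k, v in groups.items())
+
+    docs = {"a.txt": "the quick brown fox", "b.txt": "the lazy brown dog"}
+    print(mapreduce(docs))   # {'the': 2, 'brown': 2, 'quick': 1, ...}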
+
+3) Back in The Day - by Greyfox
+Were programmers treated as hot-pluggable resources as they are today? There seems to have been a mystique surrounding the programmer prior to about 1995.
+
+From reading the various netnews posts and recollections of older programmers, it seems like the programmer back then was viewed as something of a wizard without whom all the computers he was responsible for would immediately collapse. Has anything really changed or was it the same back then as it is now? I'm wondering how much of what I've read is simply nostalgia.
+
+Pike:
+Isn't it just that today there are a lot more computers, a lot more programmers, and most people are familiar with what computers and programmers do? I'm not sure I understand your reference to 1995, but twenty or thirty years ago, computers were big expensive temples of modernity and anyone who could control their power was almost by definition a wizard. Today, even musicians can use computers (hi gary).
+
+4) What are you doing... - by Mark Wilkinson
+Google employees are apparently allowed to work on their own projects 20% of the time. Given that you probably can't comment on what you're doing for Google, what are you doing to fill the other 20%?
+
+Pike:
+One of the most interesting projects out there, one I am peripherally (but only peripherally) involved with, is the Large Synoptic Survey Telescope http://www.lsst.org, which will scan the visible sky to very high angular precision, in multiple colors, many times a year. It's got an 8.4 meter aperture and 10 square degree field, taking an image every 20 seconds with its 3 gigapixel (sic) camera. The resulting data set will be many petabytes of image and catalog data, a data miner's dream. The software for the telescope is as big a challenge as the instrument itself; just the real-time pixel pipeline on the mountain will make today's compute clusters look wimpy.
+
+5) Database filesystems - by defile
+The buzz around filesystems research nowadays is making the UNIX filesystem more database-ish. The buzz around database research nowadays is making the relational database more OOP-ish.
+
+This research to me sounds like the original designers growing tired of the limitations of their "creations" now that they're commodities and going back to the drawing board to "do things right this time". I predict the reinvented versions will never catch on because they'll be too complex and inaccessible.
+
+Of course, this second system syndrome isn't just limited to systems. It happens to bands, directors, probably in every creative art.
+
+I think what we've got in the modern filesystem and RDBMS is about as good as it gets and we should move on. What do you think?
+
+Pike:
+This is not the first time databases and file systems have collided, merged, argued, and split up, and it won't be the last. The specifics of whether you have a file system or a database is a rather dull semantic dispute, a contest to see who's got the best technology, rigged in a way that neither side wins. Well, as with most technologies, the solution depends on the problem; there is no single right answer.
+
+What's really interesting is how you think about accessing your data. File systems and databases provide different ways of organizing data to help find structure and meaning in what you've stored, but they're not the only approaches possible. Moreover, the structure they provide is really for one purpose: to simplify accessing it. Once you realize it's the access, not the structure, that matters, the whole debate changes character.
+
+One of the big insights in the last few years, through work by the internet search engines but also tools like Udi Manber's glimpse, is that data with no meaningful structure can still be very powerful if the tools to help you search the data are good. In fact, structure can be bad if the structure you have doesn't fit the problem you're trying to solve today, regardless of how well it fit the problem you were solving yesterday. So I don't much care any more how my data is stored; what matters is how to retrieve the relevant pieces when I need them.
+
+Grep was the definitive Unix tool early on; now we have tools that could be characterized as `grep my machine' and `grep the Internet'. GMail, Google's mail product, takes that idea and applies it to mail: don't bother organizing your mail messages; just put them away for searching later. It's quite liberating if you can let go of your old file-and-folder-oriented mentality. Expect more liberation as searching replaces structure as the way to handle data.
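+
+A trivial way to see the point: the sketch below (Python; the directory layout and the script are invented for the example, and a real tool like glimpse or GMail builds an index rather than rescanning everything) just walks a pile of saved messages and prints the ones matching a pattern - no folders, no filing, no structure.
+
+    # "Don't file it, search for it later": scan a directory of saved
+    # messages and print the files whose contents match the pattern.
+    # Brute force, no index - a real search tool would build one.
+    import os, re, sys
+
+    def search(root, pattern):
+        rx = re.compile(pattern, re.IGNORECASE)
+        for dirpath, _, filenames in os.walk(root):
+            for name in filenames:
+                path = os.path.join(dirpath, name)
+                try:
+                    with open(path, errors="replace") as f:
+                        if rx.search(f.read()):
+                            print(path)
+                except OSError:
+                    pass        # unreadable file; skip it
+
+    if __name__ == "__main__":
+        # e.g.:  python search.py ~/mail 'plan 9'
+        search(os.path.expanduser(sys.argv[1]), sys.argv[2])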
+
+6) Thoughts on Bell Labs - by geeber
+Plan 9, Unix and so many other great things came out of Bell Labs. Since the crash of the internet bubble, telecom companies have suffered immensely. One of the results of this is that Lucent has systematically dismantled one of the world's greatest industrial research facilities. You spent a great part of your career at Bell Labs. What are your thoughts about the history and future (if any) of Bell Labs, and how did the culture of the Labs influence the growth of Unix?
+
+Pike:
+It's unfair to say `systematically dismantled', as though it was a deliberate process and there's nothing left. A more honest assessment might be that changes in the market and in government regulation made it harder to keep a freewheeling research lab thriving at the scale of the old Bell Labs. Bell Labs Research is much smaller these days, but there are still some very bright people working there and they're doing great stuff. I hope one day to see Bell Labs restored to its former glory, but the world has changed enough that that may never happen.
+
+I could go on for pages about the old Bell Labs culture, but I must be brief. When I arrived, in 1980, the Computing Science Research Center, also known as 127 (later 1127; therein lies a tale) had recently launched 7th Edition Unix and the Center, after a long period of essentially zero growth, was just entering a period of rapid expansion. That expansion brought in a lot of new people with new ideas. I was a graphics guy then, and I hooked up with Bart Locanthi, another graphics guy, and we brought graphics to Research Unix with the Blit. Other folks brought in new languages, novel hardware, networking; all kinds of stuff. That period in the early 80s generated a lot of ideas that influenced Unix both within the Labs and in the outside community. I believe the fact that the Center was growing was a big part of its success. The growth not only provided new ideas, it also generated a kind of enthusiasm that doesn't exist in the steady state or in a shrinking group. Universities harness a variant of that energy with the continuous flow of graduate students; in industrial research you need to create it in other ways.
+
+One odd detail that I think was vital to how the group functioned was a result of the first Unix being run on a clunky minicomputer with terminals in the machine room. People working on the system congregated in the room - to use the computer, you pretty much had to be there. (This idea didn't seem odd back then; it was a natural evolution of the old hour-at-a-time way of booking machines like the IBM 7090.) The folks liked working that way, so when the machine was moved to a different room from the terminals, even when it was possible to connect from your private office, there was still a `Unix room' with a bunch of terminals where people would congregate, code, design, and just hang out. (The coffee machine was there too.) The Unix room still exists, and it may be the greatest cultural reason for the success of Unix as a technology. More groups could profit from its lesson, but it's really hard to add a Unix-room-like space to an existing organization. You need the culture to encourage people not to hide in their offices, you need a way of using systems that makes a public machine a viable place to work - typically by storing the data somewhere other than the 'desktop' - and you need people like Ken and Dennis (and Brian Kernighan and Doug McIlroy and Mike Lesk and Stu Feldman and Greg Chesson and ...) hanging out in the room, but if you can make it work, it's magical.
+
+When I first started at the Labs, I spent most of my time in the Unix room. The buzz was palpable; the education unparalleled.
+
+(And speaking of Doug, he's the unsung hero of Unix. He was manager of the group that produced it and a huge creative force in the group, but he's almost unknown in the Unix community. He invented a couple of things you might have heard of: pipes and - get this - macros. Well, someone had to do it and that someone was Doug. As Ken once said when we were talking one day in the Unix room, "There's no one smarter than Doug.")
+
+7) Languages - by btlzu2
+
+Hello!
+
+Maybe this is an overly-asked question, but I still often ponder it. Does object-oriented design negate or diminish the future prospects of Unix's continuing popularity?
+
+I've developed in C (which I still love), but lately, I've been doing a lot of purely object-oriented development in Java. Using things like delegation and reusable classes has made life so much easier in many respects. Since the *nixes are so dependent upon C, I was wondering what future you see in C combined with Unix. Like I said, I love C and still enjoy developing in Unix, but there has to be a point where you build on your progress and the object-oriented languages, in my opinion, seem to be doing that.
+
+Thank you for all your contributions!!!
+
+Pike:
+The future does indeed seem to have an OO hue. It may have bearing on Unix, but I doubt it; Unix in all its variants has become so important as the operating system of the internet that whatever the Java applications and desktop dances may lead to, Unix will still be pushing the packets around for quite a while.
+
+On a related topic, let me say that I'm not much of a fan of object-oriented design. I've seen some beautiful stuff done with OO, and I've even done some OO stuff myself, but it's just one way to approach a problem. For some problems, it's an ideal way; for others, it's not such a good fit.
+
+Here's an analogy. If you want to make some physical artifact, you might decide to build it purely in wood because you like the way the grain of the wood adds to the beauty of the object. In fact many of the most beautiful things in the world are made of wood. But wood is not ideal for everything. No amount of beauty of the grain can make wood conduct electricity, or support a skyscraper, or absorb huge amounts of energy without breaking. Sometimes you need metal or plastic or synthetic materials; more often you need a wide range of materials to build something of lasting value. Don't let the fact that you love wood blind you to the problems wood has as a material, or to the possibilities offered by other materials.
+
+The promoters of object-oriented design sometimes sound like master woodworkers waiting for the beauty of the physical block of wood to reveal itself before they begin to work. "Oh, look; if I turn the wood this way, the grain flows along the angle of the seat at just the right angle, see?" Great, nice chair. But will you notice the grain when you're sitting on it? And what about next time? Sometimes the thing that needs to be made is not hiding in any block of wood.
+
+OO is great for problems where an interface applies naturally to a wide range of types, not so good for managing polymorphism (the machinations to get collections into OO languages are astounding to watch and can be hellish to work with), and remarkably ill-suited for network computing. That's why I reserve the right to match the language to the problem, and even - often - to coordinate software written in several languages towards solving a single problem.
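+
+For what it's worth, here is the kind of case where an interface does apply naturally to a wide range of types - a contrived Python sketch, with the shapes invented for the example. The summing code never needs to know which kind of shape it has; that's OO at its best. For a one-off text-munging job, a regular expression or a shell pipeline is usually the better material.
+
+    # A case where one interface fits many types: anything providing
+    # area() can be handled by the same code, including types that
+    # don't exist yet.  (Contrived example, not from any real system.)
+    import math
+
+    class Circle:
+        def __init__(self, r): self.r = r
+        def area(self): return math.pi * self.r ** 2
+
+    class Rectangle:
+        def __init__(self, w, h): self.w, self.h = w, h
+        def area(self): return self.w * self.h
+
+    def total_area(shapes):
+        # Works for any object that provides area().
+        return sum(s.area() for s in shapes)
+
+    print(total_area([Circle(1), Rectangle(2, 3)]))   # about 9.14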
+
+It's that last point - different languages for different subproblems - that sometimes seems lost to the OO crowd. In a typical working day I probably use a half dozen languages - C, C++, Java, Python, Awk, Shell - and many more little languages you don't usually even think of as languages - regular expressions, Makefiles, shell wildcards, arithmetic, logic, statistics, calculus - the list goes on.
+
+Does object-oriented design have much to say to Unix? Sure, but no more than functions or concurrency or databases or pattern matching or little languages or....
+
+Regardless of what I think, though, OO design is the way people are taught to think about computing these days. I guess that's OK - the work does seem to get done, after all - but I wish the view was a little broader.
+
+8) One tool for one job? - by sczimme
+Given the nature of current operating systems and applications, do you think the idea of "one tool doing one job well" has been abandoned? If so, do you think a return to this model would help bring some innovation back to software development?
+
+(It's easier to toss a small, single-purpose app and start over than it is to toss a large, feature-laden app and start over.)
+
+Pike:
+Those days are dead and gone and the eulogy was delivered by Perl.
+
+9) Emacs or Vi? - by Neil Blender
+
+Pike:
+
+Neither.
+
+When I was a lad, I hacked up the 6th Edition ed with Tom Duff, Hugh Redelmeier, and David Tilbrook to resuscitate qed, the editor Ken Thompson wrote for CTSS that was the inspiration for the much slimmer ed. (Children must learn these things for themselves.) Dennis Ritchie has a nice history of qed at http://cm.bell-labs.com/cm/cs/who/dmr/qed.html.
+
+I liked qed for one key reason: it was really good at editing a number of files simultaneously. Ed only handled one file at a time.
+
+Ed and qed were command-driven line editors designed for printing terminals, not full-screen displays. After I got to Bell Labs, I tried out vi but it could only handle one file at a time, which I found too limiting. Then I tried emacs, which handled multiple files but much more clumsily than qed. But the thing that bothered me most about vi and emacs was that they gave you a two-dimensional display of your file but you had only a one-dimensional input device to talk to them. It was like giving directions with a map on the table, but being forced to say "up a little, right, no back down, right there, yes turn there that's the spot" instead of just putting your finger on the map.
+
+(Today, emacs and vi support the mouse, but back in 1980 the versions I had access to had no support for mice. For that matter, there weren't really many mice yet.)
+
+So as soon as the Blit started to work, it was time to write an editor that used the mouse as an input device. I used qed (mostly) and emacs (a little) to write the first draft of jim, a full-screen editor that showed you text you could point to with a mouse. Jim handled multiple files very smoothly, and was really easy to use, but it was not terribly powerful. (Similar editors had been done at Xerox PARC and other research labs, but, well, children must learn these things for themselves.)
+
+A few years later I took the basic input idea of jim and put a new ed-like command language underneath it and called it sam, a locally popular editor that still has its adherents today. To me, the proof of sam's success was that it was the first full-screen editor Ken Thompson liked. (He's still using it.) Here's the SP&E paper about sam from 1987: http://plan9.bell-labs.com/sys/doc/sam/sam.pdf.
+
+A few years later, I decided the pop-up menu model for commanding an editor with a mouse was too restrictive, so I started over and built the much more radical Acme, which I'm using to write these answers. Here's the Acme paper: http://plan9.bell-labs.com/sys/doc/acme/acme.pdf
+
+I don't expect any Slashdot readers to switch editors after reading these papers (although the code is available for most major platforms), but I think it's worth reading about them to see that there are ways of editing - and working - that span a much larger gamut than is captured by the question, 'Emacs or vi?'
+
+10) Biggest problem with Unix - by akaina
+Recently on the Google Labs Aptitude Test there was a question: "What's broken with Unix? How would you fix it?"
+
+What would you have put?
+
+Pike:
+Ken Thompson and I started Plan 9 as an answer to that question. The major things we saw wrong with Unix when we started talking about what would become Plan 9, back around 1985, all stemmed from the appearance of a network. As a stand-alone system, Unix was pretty good. But when you networked Unix machines together, you got a network of stand-alone systems instead of a seamless, integrated networked system. Instead of one big file system, one user community, one secure setup uniting your network of machines, you had a hodgepodge of workarounds to Unix's fundamental design decision that each machine is self-sufficient.
+
+Nothing's really changed today. The workarounds have become smoother and some of the things we can do with networks of Unix machines are pretty impressive, but when ssh is the foundation of your security architecture, you know things aren't working as they should.
+
+Looking at things from a lower altitude:
+
+I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!
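+
+The usual workaround, of course, is what xargs does: split the work across several exec calls so no single argument list hits the kernel limit. A rough Python sketch of the idea - the limit and the helper names here are made up for the sketch rather than taken from any real system:
+
+    # xargs-style workaround for 'arg list too long': run the command
+    # over the arguments in batches, keeping each batch's total size
+    # under an assumed limit instead of passing everything at once.
+    import subprocess
+
+    ARG_MAX_GUESS = 100 * 1024    # assumed; stay well under the real limit
+
+    def run_chunked(command, args):
+        batch, size = [], 0
+        for a in args:
+            if batch and size + len(a) + 1 > ARG_MAX_GUESS:
+                subprocess.run(command + batch, check=True)
+                batch, size = [], 0
+            batch.append(a)
+            size += len(a) + 1
+        if batch:
+            subprocess.run(command + batch, check=True)
+
+    # e.g.:  run_chunked(["grep", "-l", "TODO"], many_thousands_of_files)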
+
+I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.
+
+11) Re: Plan9 - by Spyffe
+
+Rob,
+
+Right now, there are a large number of research kernels. Plan 9, Inferno, AtheOS, Syllable, K42, Mach, L4, etc. all have their own ideas about the future of the kernel. But they all end up implementing a POSIX interface because the UNIX userland is the default.
+
+The kernel space needs to be invigorated using a new userland that demands new and innovative functionality from the underlying system. Suppose you were to design a user environment for the next 30 years. What would the central abstractions be? What sort of applications would it support?
+
+Pike:
+At the risk of contradicting my last answer a little, let me ask you back: Does the kernel matter any more? I don't think it does. They're all the same at some level. I don't care nearly as much as I used to about what the kernel does; it's so easy to emulate your way back to a familiar state.
+
+Applications - web browsers, MP3 players, games, all that jazz - and networks are where the action is today, and aside from irritating little incompatibilities, the kernel has become a commodity. Almost all the programs I care about can run above Windows, Unix, Plan 9, and on PCs, Macs, palmtops and more. And that, of course, is why these all have a POSIX interface: so they can support those applications.
+
+And then there are the standard network protocols to glue things together. It's all a uniform sea of interoperability (and bugs).
+
+I think the future lies in new hardware as much as in new software. A generation from now machines will be so much more portable than they are now, so much more powerful, so much more interactive that we haven't begun to think about the changes they will bring. This may be the biggest threat to Microsoft: the PC, the desktop, the laptop, will all go the way of the slide rule. As one example, when flexible organic semiconductor displays roll out in a few years, the transformation in how and where people use computers and other devices will be amazing. It's going to be a wild ride.
+



