The fundamental limitation of our technologies will eventually be our ability to explain what we want them to do. After nanotechnologies begin to perfect the strength, electromagnetic properties, density, and other physical factors of the built environment, after the continuing Moore's Law growth of processing and storage leads to more raw processing power than a human brain (but notably not to logical induction), and after fusion electricity generation is ready for production - i.e. in about 50 years - we will spend most of our time specifying what we want and how to build it.
Probably I'm jaded by my own inability to communicate with people, and I project that difficulty onto the computers of the future. It's also possible that my view of which industries will be important is skewed by my own role in the design field. And certainly I'm viewing all of this through the lens of Hubert Dreyfus's book "What Computers Still Can't Do," as well as my limited experience programming. So I'm willing to admit that there have been huge advances in natural language processing, and that it's common for people to talk about computers that "think" - for example, a search engine that "thinks" I want to search for porn when I type in 'put it between your legs and squeeze' while I was actually trying to look up the ThighMaster.
But this Kurzweilian sense of general optimism about the future of computing and of technology in general - as justified as it all is - obscures a more fine-grained understanding of how and why computers won't fulfill all our dreams any more than television, flying machines, windmills, God, or any other advanced technology has. Yes, computers can do virtually anything, but they won't just get up and do it.
We'll have to tell them to do it. Yes, we'll have 90% of the human race working several hours a day doing just that, but even so there are certain fundamental limitations on what can be represented with mathematics, logic, and binary notation. Yes, we can asymptotically approach overcoming those limitations, as has been happening over the last 50 years of computer science at MIT and elsewhere, but that process is slow, and it's likely that our desires and goals will change (partly in response to unexpected, unasked-for functionality like drawing fractals) before we reach the goals we originally set. Yes, we will eventually be able to build a house by telling the pervasive, hyperconnected global-cloud computer, "Build me a house. [walk inside] I want a wall here [point and wave] and a sink here, with bright windows there that overlook a secluded beach in Hawaii, and windows over there that overlook 5th Avenue in New York City."
But long before then, while those voice recognition technologies are still in their infancy, our language will begin to co-evolve in response to them, and our thinking and our ability to describe things will become intertwined with what computer languages allow us to describe. Just look at how words (and the associated concepts) like system, network, database, and automatic have colored our conceptions: not just of the built environment (factories, bureaucracies, cities) but of ourselves (social networks, automatic behaviors).
If we want to understand what will be happening 100 years from now, we have to take a specific, in-depth look at how this feedback cycle could play out, and at least admit that tasks which seem awesome and useful today will be vastly unimportant by then - not just because they will be so easy as to be commonplace, but because new values, standards, and conceptions will define what everyone thinks about when they wake up and what they dream about when they go to sleep. I'm betting they'll be thinking about abstractions: the deeper mathematics necessary to second-guess what the computer will do when it is given complicated, interlocking instructions (the most basic of them left over from today's machine-level code underlying C++ and other languages), accessed only through vast generalizations that take in millions of objects in one simple sentence like "Bring me a beer" - so that we won't have to worry about whether it's going to prioritize that command over keeping the oxygen coming.
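To make that prioritization worry concrete, here's a rough sketch - every name, category, and ranking in it is hypothetical, just one way I could imagine a future command system being wired - of a rule that keeps convenience requests from ever outranking life support:

```python
# Toy sketch (all names hypothetical): convenience commands must never
# preempt life-critical ones, no matter what order they arrive in.
import heapq

# Lower number = higher priority. Life support is hard-coded above anything
# a user can request in ordinary conversation.
PRIORITY = {"life_support": 0, "safety": 1, "comfort": 2, "convenience": 3}

class CommandQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal-priority commands stay in order

    def submit(self, category, description):
        heapq.heappush(self._heap, (PRIORITY[category], self._counter, description))
        self._counter += 1

    def next_command(self):
        return heapq.heappop(self._heap)[2]

queue = CommandQueue()
queue.submit("convenience", "Bring me a beer")
queue.submit("life_support", "Keep the oxygen coming")
print(queue.next_command())  # -> "Keep the oxygen coming"
```

The point isn't the code itself but the hierarchy behind it: someone still has to decide, in advance and in the abstract, that oxygen beats beer.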
Of course, we could program the system to carry out ever more complex tasks. For example, it could respond to the command "Build a house." There's no reason we couldn't have a default size, architectural style, material selection, build time, location, and all the other details built in. But who would want such a house, other than the person who defined those defaults in the first place? Better to have a system that, when presented with such an assignment, draws up a menu of options - number of rooms, colors, layouts, etc. - and lets users select their preferences in increasing levels of detail until they get sick of it and choose the "use defaults for further details" option.
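A minimal sketch of what that defaults-plus-menu scheme might look like, with every field and default invented purely for illustration:

```python
# Hypothetical house-builder front end: the user specifies as much or as
# little as they like, and everything else falls back to someone's defaults.

HOUSE_DEFAULTS = {
    "rooms": 4,
    "style": "craftsman",
    "exterior_color": "white",
    "layout": "open plan",
    "window_view": "whatever is actually outside",
}

def build_house(preferences=None):
    """Merge whatever the user bothered to specify over the defaults."""
    spec = dict(HOUSE_DEFAULTS)
    spec.update(preferences or {})
    return spec

def interactive_spec():
    """Ask for details one at a time until the user opts for defaults."""
    prefs = {}
    for field, default in HOUSE_DEFAULTS.items():
        answer = input(f"{field} [{default}] (blank = use defaults from here on): ")
        if answer == "":
            break  # the "use defaults for further details" option
        prefs[field] = answer
    return build_house(prefs)

# e.g. build_house({"rooms": 6, "window_view": "a secluded beach in Hawaii"})
```

Anything the user specifies overrides the defaults; everything they skip falls back to whatever somebody else decided was normal.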
While this would make building a house simpler, faster, and more responsive to user needs than even the most skilled architect with a team of assistants can match today, it will nonetheless require more thought and consideration from your "average user" than they would spend choosing a house today. Anyone who chooses to engage will have access to the most awesomely customized built environment they can possibly imagine, but anybody who doesn't have the interest will probably live in conditions not too dissimilar from what we experience today. Even more worrisome, a certain population of people with more curiosity than common sense will use these new technologies to build unthinkably ugly, uncomfortable, and even unsafe environments and objects for themselves. How will we keep these amateurs from polluting our future with nanotechnology McMansions?