The age of the Renaissance man is long gone. No one thinks it is possible anymore for an individual to grasp, fully, all areas of science and technology. Popular software contains millions of lines of code. Mechanisms of the immune response for just one kind of lymphocyte take up thousands of pages of scholarly journals. An iPod's elegantly simple appearance masks underlying technology that is understood by only a tiny percentage of its users.
But, despite the vast incompleteness of our knowledge, recent research suggests that most people think that they know far more than they actually do. We freely admit to not knowing everything about how a helicopter flies or a printing press prints, but we are not nearly modest enough about our ignorance.
The easiest way to show this is to have people rate the completeness of their knowledge on a seven-point scale. For any question, a “7” denotes the equivalent of a perfectly detailed mental blueprint, and a “1” implies almost no sense of a particular mechanism at all, just a vague image. People happily, and reliably, assign numbers to their understanding of everything from complex machines to biological systems to natural phenomena such as the tides; but these ratings are usually far higher than their actual knowledge warrants.
We can measure the discrepancy between what we think we know and what we actually know by simply asking people, after they have given their initial ratings, to explain how some things work in as much detail as they can, and then to rate their knowledge again in light of that attempt.