Invention is the Mother of Necessity

Inventions arise when there is an unmet market need. Inventors who perceive an unmet need are motivated to fulfill it by the rewards of inventing, such as money or fame. Some inventions fit this path, like the cotton gin and the steam engine. Necessity, as they say, is the mother of invention. Or is it?

What if the opposite is also true?

When Nikolaus Otto built his first gas engine, in 1866, horses had been supplying people’s land transportation needs for nearly 6,000 years, supplemented increasingly by steam-powered railroads for several decades. There was no crisis in the availability of horses, no dissatisfaction with railroads.

What if “many or most inventions were developed by people driven by curiosity or by a love of tinkering, in the absence of any initial demand for the product they had in mind?” This question is explored in Jared Diamond’s book Guns, Germs, and Steel: The Fates of Human Societies.

When invention is the mother of necessity, the inventor finds an application for the invention after it is invented. And, “only after it had been in use for a considerable time did consumers come to feel that they ‘needed’ it.” Also, a device may be invented for one purpose but eventually be widely adopted for other, unanticipated purposes. Diamond says:

It may come as a surprise to learn that these inventions in search of a use include most of the major technological breakthroughs of modern times, ranging from the airplane and automobile, through the internal combustion engine and electric light bulb, to the phonograph and transistor. Thus, invention is often the mother of necessity, rather than vice versa.

In addition to the Otto engine example above, Diamond provides several other examples, one of which is Edison’s phonograph. When Edison created the phonograph in 1877, he published an article proposing ten uses for the invention, such as preserving the last words of dying people and recording books for blind people to hear. However, the invention was later adopted for playing music, which Edison objected to as a debasement from the serious uses he intended.

Early versions of inventions often are not ready for use. Otto’s first engine was “weak, heavy, and seven feet tall, [and] it did not recommend itself over horses.” As Diamond says, “Inventors often have to persist at their tinkering for a long time in the absence of public demand because early models perform too poorly to be useful.”

The view that invention is the mother of necessity aligns with the examples where significant inventions were developed by hobbyists and English clergy. And it fits with Chris Dixon’s assertion that “What the smartest people do on the weekends is what everyone else will do during the week in ten years.”

What Galileo’s Pendulum Clock Teaches About Inventing


Fifty-eight years in the making, his slow hunch about the pendulum’s “magical property” had finally begun to take shape. The idea lay at the intersection point of multiple disciplines and interests: …Physics, astronomy, maritime navigation, and the daydreams of a college student: all these different strains converged in Galileo’s mind.

“After experiencing a desire to invent a particular thing, I may go on for months or years with the idea in the back of my head,” said Nikola Tesla. Tesla called this the incubation period, which precedes direct effort on the invention. Science writer Steven Johnson calls it a slow hunch: an idea that comes into focus over a long time.

Johnson discusses several examples of how slow hunches develop in his excellent book, Where Good Ideas Come From, and again in his most recent book, How We Got to Now: Six Innovations That Made the Modern World. Johnson shows how innovation is most often a product of slow hunches, not eureka moments.

One of the six innovations that made the modern world is the clock for keeping accurate time. In Johnson’s discussion of time, he recounts the events and circumstances that led up to Galileo’s invention of the pendulum clock.

The story shows the invention of the pendulum clock was not a product of a eureka moment, but of Galileo’s experiences and cross-disciplinary studies over 58 years. Johnson starts with Galileo’s experience at university.

 Suspended from the ceiling is a collection of altar lamps. They are motionless now, but legend has it that in 1583, a nineteen-year-old student at the University of Pisa attended prayers at the cathedral and, while daydreaming in the pews, noticed one of the altar lamps swaying back and forth. While his companions dutifully recited the Nicene Creed around him, the student became almost hypnotized by the lamp’s regular motion. No matter how large the arc, the lamp appeared to take the same amount of time to swing back and forth. As the arc decreased in length, the speed of the lamp decreased as well. To confirm his observations, the student measured the lamp’s swing against the only reliable clock he could find: his own pulse.

Galileo’s daydreaming about time could have been influenced by the fact that his father was a music theorist and played the lute. Twenty years later, after becoming a professor of mathematics, Galileo decided to build a pendulum that would recreate what he had observed at Pisa.

He discovered that the time it takes a pendulum to swing is not dependent on the size of the arc or the mass of the object swinging, but only on the length of the string. “The marvelous property of the pendulum,” he wrote to fellow scientist Giovanni Battista Baliani, “is that it makes all its vibrations, large or small, in equal times.”
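Galileo’s observation corresponds to the standard small-angle period formula for a simple pendulum: the period depends only on the string length and gravity, with neither the bob’s mass nor the swing’s amplitude appearing anywhere. A minimal sketch (the function name is mine):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g).

    Neither the bob's mass nor the arc size appears in the formula --
    exactly the "marvelous property" Galileo described to Baliani.
    """
    return 2 * math.pi * math.sqrt(length_m / g)

# A one-meter pendulum swings with a period of about two seconds,
# regardless of what hangs on the end of the string.
print(round(pendulum_period(1.0), 2))  # ~2.01
```

This regularity, independent of amplitude, is what makes the pendulum usable as a timekeeper: even as friction shrinks the arc, the beat stays the same.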

The clocks that existed then did not keep accurate time. They could be off by twenty minutes a day and had to be reset using a sundial. But no one needed accurate clocks in the sixteenth century for keeping daily schedules. The need for accurate timekeeping arose from shipping navigation.

But sailors lacked any way to determine longitude at sea. Latitude you could gauge just by looking up at the sky. But before modern navigation technology, the only way to figure out a ship’s longitude involved two clocks. One clock was set to the exact time of your origin point (assuming you knew the longitude of that location). The other clock recorded the current time at your location at sea. The difference between the two times told your longitudinal position: every four minutes of difference translated to one degree of longitude, or sixty-eight miles at the equator.
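The arithmetic in the passage above follows from the earth turning 360 degrees in 24 hours, i.e. in 1,440 minutes, so every four minutes of clock difference corresponds to one degree of longitude. A small illustration (using Johnson’s figure of sixty-eight miles per equatorial degree):

```python
def longitude_from_clocks(origin_time_min, local_time_min):
    """Degrees of longitude between two points, given two clock readings
    in minutes: 4 minutes of difference = 1 degree, because the earth
    rotates 360 degrees in 24 * 60 = 1440 minutes."""
    return abs(origin_time_min - local_time_min) / 4.0

MILES_PER_DEGREE_AT_EQUATOR = 68  # Johnson's figure

# A ship whose local noon lags the home-port clock by one hour is
# 15 degrees of longitude away -- about 1,020 miles at the equator.
degrees = longitude_from_clocks(12 * 60, 13 * 60)
print(degrees, degrees * MILES_PER_DEGREE_AT_EQUATOR)  # 15.0 1020
```

The same arithmetic shows why a drift of twenty minutes a day was fatal: one day’s error already amounts to five degrees, hundreds of miles at sea.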

The problem with this system was the accuracy of the clock at the point of origin.

With timekeeping technology losing or gaining up to twenty minutes a day, it was practically useless on day two of the journey. All across Europe, bounties were offered for anyone who could solve the problem of determining longitude at sea.

So, after years of working across various disciplines, and influenced by the rising need for accurate timekeeping, Galileo, together with his son, drew up plans for the first pendulum clock.

Fifty-eight years in the making, his slow hunch about the pendulum’s “magical property” had finally begun to take shape. The idea lay at the intersection point of multiple disciplines and interests: Galileo’s memory of the altar lamp, his studies of motion and the moons of Jupiter, the rise of a global shipping industry, and its new demand for clocks that would be accurate to the second. Physics, astronomy, maritime navigation, and the daydreams of a college student: all these different strains converged in Galileo’s mind.

Owing to its improved accuracy, the pendulum clock was in wide use by the end of the next century.

Galileo’s invention of the pendulum clock is just one example of many where the invention resulted from a long series of events and cross-disciplinary influences rather than a momentary flash of genius.

Overcoming the Difficulty of Recognizing Good Ideas

Knowledge formation, even when theoretical, takes time, some boredom, and the freedom that comes from having another occupation, therefore allowing one to escape the journalistic-style pressure of modern publish-and-perish academia… –Nassim Taleb

“The future is already here — it’s just not evenly distributed” is a quote often attributed to William Gibson. Nassim Taleb, the author of The Black Swan and, more recently, Antifragile: Things That Gain from Disorder, asserts that in many cases you cannot predict the future. We have a hard time recognizing good ideas and implementing them. Having time and cultivating a capacity for boredom, as explained below, can contribute to one’s ability to recognize good ideas.

When a good idea succeeds, it can have a huge upside, a much greater upside than downside. Taleb says that anything that has more upside than downside from random events is antifragile. Further, antifragility describes “things that benefit from shocks; [they] thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty.” Inventing can be an antifragile activity.

The Difficulty of Recognizing Good Ideas

Taleb points out we have a difficult time recognizing opportunities that are staring us in the face. This is in the same vein as the Gibson quote above, which was repeated by Chris Anderson, editor of Wired magazine. Taleb says:

It struck me how lacking in imagination we are: we had been putting our suitcases on top of a cart with wheels, but nobody thought of putting tiny wheels directly under the suitcase…Can you imagine that it took close to six thousand years between the invention of the wheel (by, we assume, the Mesopotamians) and this brilliant implementation (by some luggage maker in a drab industrial suburb)? And billions of hours spent by travelers like myself schlepping luggage through corridors full of rude customs officers. Worse, this took place three decades or so after we put a man on the moon….Indeed, though [the wheeled suitcase was] extremely consequential, we are talking about something trivial: a very simple technology.

This tells us something about the way we map the future. We humans lack imagination, to the point of not even knowing what tomorrow’s important things look like.

Although not the case with the wheeled suitcase, sometimes the difficulty in recognizing good ideas is, as Peter Thiel notes, that they often look like bad ideas.

As Steven Johnson asserted in his book Where Good Ideas Come From: The Natural History of Innovation, we need to cultivate opportunities where ideas can collide unpredictably. Taleb too asserts that we need randomness to stumble upon good ideas:

We are managed by small (or large) accidental changes, more accidental than we admit. We talk big but hardly have any imagination, except for a few visionaries who seem to recognize the optionality of things. We need some randomness to help us out–with a double dose of antifragility.

Implementation Does Not Always Follow Quickly From Invention

Even when you do stumble upon a good idea and develop it into an invention, there is still the difficult road to implementation and commercial success. This is, in part, why there are so many uncommercialized inventions described in patents and patent applications that you cannot find on the market.

…Implementation does not necessarily proceed from invention. It too, requires luck and circumstances. The history of medicine is littered with the strange sequence of discovery of a cure followed, much later, by the implementation—as if the two were completely separate ventures, the second harder, much harder, than the first. Just taking something to market requires struggling against a collection of naysayers, administrators, empty suits, formalists, mountains of details that invite you to drown, and one’s own discouraged mood on occasion. In other words, to identify the option (again, there is this option blindness). This is where all you need is the wisdom to realize what you have on your hands.

For there is a category of things that we can call half-invented, and taking the half-invented into the invented is often the real breakthrough. Sometimes you need a visionary to figure out what to do with a discovery, a vision that he and only he can have. For instance, take the computer mouse, or what is called the graphical interface: it took Steve Jobs to put it on your desk, then laptop–only he had a vision of the dialectic between images and humans–later adding sound to a trilectic. The things, as they say, that are “staring at us.”

The difficulty of recognizing good ideas, and the uncertainty of proceeding with an idea, contribute to huge upsides for those who do.

The Need for Time to Allow Ideas to Percolate: The Clergy and Hobbyists

Chris Dixon said, “What the smartest people do on the weekends is what everyone else will do during the week in ten years.” Taleb makes a similar point. Many significant inventions were developed by hobbyists and the English clergy. They had ample time to let ideas percolate and collide; in other words, to invent.

Knowledge formation, even when theoretical, takes time, some boredom, and the freedom that comes from having another occupation, therefore allowing one to escape the journalistic-style pressure of modern publish-and-perish academia to produce cosmetic knowledge…

There were two main sources of technical knowledge and innovation in the nineteenth and early twentieth centuries: the hobbyist and the English rector, both of whom were generally in barbell situations.

An extraordinary proportion of work came out of the rector, the English parish priest with no worries, erudition, a large or at least comfortable house, domestic help, a reliable supply of tea and scones with clotted cream, and an abundance of free time. And, of course, optionality. The Reverends Thomas Bayes (as in Bayesian probability) and Thomas Malthus (Malthusian overpopulation) are the most famous. But there are many more surprises, cataloged in Bill Bryson’s Home, in which the author found ten times more vicars and clergymen leaving recorded traces for posterity than scientists, physicists, economists, and even inventors. In addition to the previous two giants, I randomly list contributions by country clergymen: Edmund Cartwright invented the power loom, contributing to the Industrial Revolution; Rev. Jack Russell bred the terrier; Rev. William Buckland was the first authority on dinosaurs; Rev. William Greenwell invented modern archaeology; Rev. Octavius Pickard-Cambridge was the foremost authority on spiders; Rev. George Garrett invented the submarine; Rev. Gilbert White was the most esteemed naturalist of his day; Rev. M. J. Berkeley was the top expert on fungi; Rev. John Michell helped discover Uranus; and many more.

The Industrial Revolution, for a refresher, came from “technologists building technology,” or what he [Terence Kealey] calls “hobby science.” Take again the steam engine, the one artifact that more than anything else embodies the Industrial Revolution. As we saw, we had a blueprint of how to build it from Hero of Alexandria. Yet the theory didn’t interest anyone for about two millennia. So practice and rediscovery had to be the cause of the interest in Hero’s blueprint, not the other way around.

Having free time and cultivating a capacity for boredom allows ideas to percolate, even subconsciously. This appears to enhance the ability to recognize and implement good ideas and to possibly profit from the antifragile nature of inventing.

Antifragile is a thought-provoking book in its entirety, with possibly wide-ranging applicability.

Is the Invention before Its Time? What iPods, Biology, and Computers Teach about Inventing in the Adjacent Possible

In 1979, Kane Kramer invented a portable digital music player. He sought patents in numerous countries, including the United States, where he was granted US Patent No. 4,667,088.

The Kramer portable digital music player used memory cards the size of a standard credit card, each capable of holding 3.5 minutes of music (i.e., one song). A record shop could store blank cards and load them on demand, at the time of sale, from a digital music data store in the shop.

A media outlet asserts that Kramer was the “inventor behind the iPod.” That statement probably goes too far in characterizing a reference that Apple made to Kramer’s patent and invention as prior art in a patent lawsuit.

Regardless, Kramer’s device was an early portable digital music player. The problem for Kramer was that his device did not become a commercial success. And later his patents lapsed because he was unable to pay the patent maintenance fees.

While we don’t know why Kramer’s device was not a commercial success, it might be that in 1979 the Internet did not yet exist to make electronic distribution of music easy, so customers still had to go to a physical music store. It might be that the memory card for the device could hold only one song. It might be that the elements helpful for commercial success did not exist in the 1980s, when Kramer attempted to commercialize the invention.

Maybe Kramer’s music player was not within “the adjacent possible.” Stated another way, maybe it was before its time.

Steven Johnson discusses “the adjacent possible” in his book, Where Good Ideas Come From. The adjacent possible provides an outer boundary to how advanced your invention can be from the current state of the art. It is one factor to consider when evaluating the possible commercial success of your invention.

The Adjacent Possible from Evolutionary Biology

Johnson notes that scientist Stuart Kauffman coined the term “the adjacent possible” to describe the set of first-order combinations of molecules that were possible given the composition of the earth’s environment before life emerged:

The lifeless earth was dominated by a handful of basic molecules: ammonia, methane, water, carbon dioxide, a smattering of amino acids and other simple organic compounds… Think of all those initial molecules, and then imagine all the potential new combinations that they could form spontaneously… trigger all those combinations [and] you would end up with most of the building blocks of life: the proteins that form the boundaries of cells; sugar molecules crucial to the nucleic acids of our DNA.

But you would not be able to trigger chemical reactions that would build a mosquito, or a sunflower, or a human brain… The atomic elements that make up a sunflower are the very same ones available on earth before the emergence of life, but you can’t spontaneously create a sunflower in that environment, because it relies on a whole series of subsequent innovations that wouldn’t evolve on earth for billions of years: chloroplasts to capture the sun’s energy, vascular tissues to circulate resources through the plant, DNA molecules to pass on sunflower building instructions to the next generation.

On the pre-life earth, formaldehyde was within the adjacent possible, but more complex organisms were not. The more complex organisms required intermediate building blocks that had not yet come into existence.
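Kauffman’s idea can be caricatured with a toy model: from a stock of building blocks, only first-order (pairwise) combinations are reachable in one step, and anything that needs an intermediate product lies outside the adjacent possible until that intermediate exists. This is purely an illustrative sketch; the “molecules” and recipes here are invented for the example:

```python
from itertools import combinations

def adjacent_possible(stock, recipes):
    """Everything a single pairwise combination of the current stock
    could produce, according to the given recipes."""
    return {recipes[pair]
            for pair in map(frozenset, combinations(stock, 2))
            if pair in recipes}

# Hypothetical recipes: which pair of inputs yields which product.
recipes = {
    frozenset({"A", "B"}): "AB",
    frozenset({"AB", "C"}): "ABC",  # requires the intermediate "AB"
}

stock = {"A", "B", "C"}
step1 = adjacent_possible(stock, recipes)
print(step1)  # {'AB'} -- "ABC" is not yet reachable

step2 = adjacent_possible(stock | step1, recipes)
print(step2)  # {'AB', 'ABC'} -- once "AB" exists, "ABC" enters the adjacent possible
```

The point of the sketch is the two-step structure: the “sunflower” (here, "ABC") is not in the first-order set, and only enters the adjacent possible after its intermediate building block comes into existence.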

The Difference Engine and The Analytical Engine

The adjacent possible applies not only within evolutionary biology but also to human-made inventions.

Johnson notes two inventions of Charles Babbage–the Difference Engine and the Analytical Engine–to show when an invention is within the adjacent possible and when it is not. Babbage was a nineteenth-century British inventor, now known as the father of modern computing.

The Difference Engine was an advanced mechanical calculator, described as a very complex “fifteen-ton contraption, with over 25,000 mechanical parts, designed to calculate polynomial functions that were essential to creating the trigonometric tables crucial to navigation.” While Babbage did not build the Difference Engine during his lifetime, it was within the adjacent possible of Victorian technology. Many improvements occurred within the field of mechanical calculation during that time based on Babbage’s architecture, according to Johnson.

On the other hand, Babbage’s Analytical Engine was not within the adjacent possible. On paper, the Analytical Engine was the world’s first programmable computer. But it was so complicated that most of it never got past the blueprint stage:

Babbage’s design for the engine anticipated the basic structure of all contemporary computers: “programs” were to be inputted via punch cards…; instructions and data were captured in a “store,” the equivalent of what we now call random access memory, or RAM; and calculations were executed via a system that Babbage called “the mill,” using industrial-era language to describe what we now call the central processing unit, or CPU.

Babbage had most of the system sketched out by 1837, but the first true computer to use this programmable architecture didn’t appear for more than a hundred years. While the Difference Engine engendered an immediate series of refinements and practical applications, the Analytical Engine effectively disappeared from the map. Many of the pioneering insights that Babbage had hit upon in the 1830s had to be independently rediscovered by the visionaries of World War II-era computer science.

Implementing the Analytical Engine with mechanical gears and switches would have been extremely complex and difficult to maintain, according to Johnson. On top of that, it would have been slow. For the Analytical Engine to work well, the logic needed to be implemented with electronics, not mechanical gears. Therefore, the Analytical Engine was not within the adjacent possible in 1837.

YouTube

Johnson leaves us with one more modern example: YouTube. Johnson notes that if YouTube had been created ten years earlier, in 1995, it would have failed. In 1995 most web users were on slow dial-up connections, and it could take an hour to download a standard YouTube clip. In 1995, YouTube’s innovation was not within the adjacent possible, but ten years later, with broadband Internet and Adobe’s Flash technology, it was.
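The bandwidth gap is easy to check with rough numbers. The clip size below is my assumption, not Johnson’s, but a ten-megabyte video over a 28.8 kbps dial-up modem does take close to an hour, while a mid-2000s 5 Mbps broadband line moves it in seconds:

```python
def download_minutes(size_megabytes, link_kbps):
    """Transfer time in minutes: bits to send divided by link speed.
    Ignores protocol overhead, so real-world times would be longer."""
    bits = size_megabytes * 8 * 1_000_000
    return bits / (link_kbps * 1000) / 60

# A hypothetical 10 MB clip:
print(round(download_minutes(10, 28.8), 1))   # ~46.3 minutes on 1995 dial-up
print(round(download_minutes(10, 5000), 2))   # ~0.27 minutes (about 16 s) on 5 Mbps broadband
```

A two-order-of-magnitude change in the underlying infrastructure is what moved streaming video into the adjacent possible.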

Invention Evaluation Factor: Is it Within the Adjacent Possible?

One question you should answer when evaluating your invention is whether the invention is within the adjacent possible.

It is not enough to consider whether it is technically possible to create your invention. The Kramer portable digital music player was technically possible to create at the time. But the underlying infrastructure–the lack of the Internet–and the memory storage capacity available at the time could have constrained its ability to be a commercial success.

At least for human-made inventions, practical application of the adjacent possible principle must consider not only whether it is technically possible to make the invention, but also whether the infrastructure and other elements helpful for commercial success exist at the time.

Invention and How to Predict the Future

The future is already here — it’s just not evenly distributed.

Trying to determine whether your product or service will be a success is the business of predicting the future. Predicting can be hard. The difficulty is compounded by the fact that often you will want to determine whether there is a market for the invention before spending money on the patent process, but patent law encourages you to file a patent application before you make your invention public. Below are ideas on predicting the future.

Larry Page, founder of Google, said in a conversation with Charlie Rose:

Invention is not enough. Tesla invented the electric power we use, but he struggled to get it out to people. You have to combine both things: invention and innovation focus, plus the company that can commercialize things and get them to people. . . .

Lots of companies don’t succeed over time. What do they fundamentally do wrong? They usually miss the future. I try to focus on that: What is the future really going to be? And how do we create it? And how do we power our organization to really focus on that and really drive it at a high rate? When I was working on Android, I felt guilty. It wasn’t what we were working on, it was a start-up, and I felt guilty. That was stupid! It was the future.

Chris Anderson, entrepreneur and former editor-in-chief of Wired magazine, says he doesn’t try to predict the future, but in practice he does, by observation. Anderson said:

I actually never, never make the mistake of trying to predict the future, ’cause I suck at it. And I fall back — and you’ll forgive my little semantic parlor trick here — but I fall back on William Gibson’s famous quote that ‘the future is already here — it’s just not evenly distributed.’ To predict the future, you just have to keep your eyes open, and there it is.

Anderson’s strategy of predicting the future by observation is the same concept discussed by entrepreneur and investor Chris Dixon. Dixon said:

…Business people vote with their dollars, and are mostly trying to create near-term financial returns. Engineers vote with their time, and are mostly trying to invent interesting new things. Hobbies are what the smartest people spend their time on when they aren’t constrained by near-term financial goals.

It’s a good bet these present-day hobbies will seed future industries. What the smartest people do on the weekends is what everyone else will do during the week in ten years.

(emphasis added).

Makers are a type of hobbyist. What does it mean to be a maker? Seth Godin spoke at the World Maker Faire and said this:

[7:58] What real makers understand is this: If it might not work, then you are doing some making. If you are doing something that might not work…then you are doing something important because it is risky. It’s risky because when you finish, you need to turn to someone and say here, here it is. I made this. And the other person can say I don’t like it … it doesn’t work right … I don’t want it. That is hard.

…[A]ll hacking is, all innovation is, all creating is, all science is: is doing things over and over and over and failing and failing and failing until it works. So if you are not willing to fail, then you cannot possibly innovate.

…if you are a maker, what have you made recently that was a complete and epic failure?

(emphasis added).

Turning back to predicting the future, a post from brainpickings.org about a PBS video from Joe Hanson provides highlights of a discussion on why some science fiction writers are good at predicting the future. The video notes:

One right prediction in any one body of work would be lucky, but this many right answers can’t be luck — clearly, something sets these people apart. Many of the greatest sci-fi writers also had serious scientific training: Isaac Asimov had a Ph.D. in biochemistry, and Arthur C. Clarke had degrees in math and physics; H.G. Wells had a degree in biology…

At its core, good science fiction must rest on good science.

(emphasis added).

Conclusion

So, it is important to focus on the future. The future is already here; you only need to look for it. One place to look is in cutting-edge science. Another is to watch what makers, hackers, and hobbyists are doing with their free time.

Photo credit to flickr user Steve Jurvetson under this creative commons license.