I neglected to include two points: support for a bold claim in the Summary video (that Paper Prototyping can yield 100× speedups), and support for a bold phrase in the talk’s title (“Reverse-Engineering the Mind to…Extend It”).
These statements are not hand-waving; I feel strongly about the value of both. Especially for an audience of other engineers, I did not want them to seem like hype. Here’s a brief outline of each argument. I’m happy to expand on them if there’s interest.
Paper Prototyping can yield 100× speedups
Paper Prototyping catches critical workflow bugs at an early stage: after the initial cognitive engineering efforts but before coding specs are written.
The 100× number is, I believe, an underestimate of the time saved on a particular (very common) kind of interface bug. I don’t have a name for the class, but examples include: a missing factor needed in a decision, a reference to the wrong kind of data, an inability to take action; or, more subtly, extra steps needed to get to decision factors, or distracting information that is not necessary for the exact task at hand.
Though the savings are hard to quantify, they’re real, and I rely on the experience of anyone who has done real-world software development and deployment to see that these numbers may well be lower bounds.
I believe it takes at least ten times longer to write a rigorous software spec than it does to scribble an extra field onto a piece of paper, and ten times longer again to code and test to that spec than it took to write it. These factors multiply: if a bug gets caught at the paper stage, the erroneous spec isn’t written (saving 10×), and no time is spent coding and testing based on that spec (saving another 10 × 10 = 100×).
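To make the arithmetic concrete, here’s a back-of-the-envelope sketch; the 10× factors are the rough estimates above, not measured data:

```python
# Back-of-the-envelope cost model for catching one workflow bug at each stage.
# The 10x factors are rough estimates, not measurements.

paper_fix = 1                  # 1 unit of time: scribble a field onto paper
spec_fix = 10 * paper_fix      # write the rigorous spec embodying the fix
code_fix = 10 * spec_fix       # code and test to that spec

# Catching the bug at the paper stage avoids both downstream costs:
saved = spec_fix + code_fix    # 10 + 100 = 110 units
speedup = saved / paper_fix    # ~110x, so "100x" is a lower bound
print(f"time saved: {saved} units; speedup: {speedup:.0f}x")
```

Run it and the combined saving comes out around 110×, which is why I treat 100× as a lower bound.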
Reverse-Engineering the Mind to…Extend It
In the talk I try to make a good argument that we can fit our tools to the mind’s actual perceptual and cognitive processes in a more natural way: integrating with our mental mechanisms the way real-world things do. We can take advantage of the long-term evolutionary processes that created the mind if we engage with them as the natural things that were part of that process did.
There’s a fascinating philosophical/scientific stance that cognitive processes may not just be “the software that runs on the brain,” but extend past the cranium. See Noë, Chalmers, and Chemero for more about this stance.
I like the simplicity of the engineering/bootstrapping conception of the mind as a recruiter: nothing magical, just a process/processor tailored by evolution to make sense of anything presented to it, and to adopt it if it responds to our control in a predictable and reliable way. Thus, soon after birth a human mind makes sense of the vast, noisy visual input it’s presented with, and that lets it integrate proprioceptive input to see how the visuals change when the head moves. Then it closes the loop: some of that visual input is the flailing arms and legs, not part of the brain, but flailing the brain can both change and recognize. Ultimately, a limb’s touch input teaches contact with the non-body world, and sometimes a block is moved, showing the mind it can control that, too, to some extent. This process continues to let us recruit pencils to help us think; recruit other people to extend our reach; recruit vast, impersonal macroeconomic forces to further our personal goals.
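Purely as an illustration, here’s how that adopt-it-if-it-responds criterion might look as a loop; every name and number is hypothetical, a cartoon rather than a cognitive model:

```python
import random

# Cartoon of the mind-as-recruiter loop: probe a thing, and adopt it if it
# responds to control predictably and reliably. Every name and number here
# is hypothetical; this is an illustration, not a cognitive model.

def responds(thing):
    # Stand-in for acting on the thing and observing the result:
    # `reliability` is the chance it does what we intended.
    return random.random() < thing["reliability"]

def try_to_recruit(thing, trials=50, threshold=0.9):
    hits = sum(responds(thing) for _ in range(trials))
    return hits / trials >= threshold

for thing in ({"name": "arm", "reliability": 0.98},
              {"name": "pencil", "reliability": 0.95},
              {"name": "dice", "reliability": 0.17}):
    verdict = "recruited" if try_to_recruit(thing) else "not recruited"
    print(thing["name"], verdict)
```

The point of the cartoon is only that the criterion is simple: probe, observe, and keep whatever responds reliably.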
Where one draws the line delimiting the end of the mind proper and the beginning of the outside world is an interesting academic debate (the forebrain? the cranium? the body? my set of second-nature-usable tools? only spatially, or do we include time, behavior, memories?). But it’s irrelevant to our concerns as cognitive engineers. It’s a gradient—and our tools are right in the middle of it.
My recruiting of that concept for the Cognitive Engineering Design Methodology makes two contributions: it makes us work harder, and it shows us how to do that work effectively.
It makes me work harder, anyway, because I’m not just writing software or designing tools; stated most strongly, I’m creating something that will become a part of someone’s mind, perhaps thousands of minds. Even if I don’t think people assimilate everything they touch, it makes me focus on the behavioral changes I’m inducing in my users, and it raises my responsibility to treat them well. It gives me a much higher bar.
It shows us how to make that happen by giving us a simple overarching goal for all tools: make sure they’re easily recruited. And, to be effective, make them facile and accurate, have them fail gracefully when they do fail, and have them directly serve our goals with minimal side effects.
The most effective tools aren’t just controlled; they provide rich and useful instantaneous feedback: I can feel the screw with my screwdriver, the wood with my axe, just as I can hear the physical properties of the thing making a sound, hearing “through” the sound itself. This happens because the perceptual/cognitive system serves up meaningful units to the mind: it pre-processes and fuses the sensory input before it ever reaches consciousness. (Unless we add a level of UX indirection that prevents that, which is the state of virtually all interfaces today.)
But evolution of any description requires negative feedback: a measure of how far the result fell from the optimal one.
If we close that loop well, we not only become a part of someone’s cognitive processes, we’re once again part of evolution. Not Darwinian evolution, but social evolution (as schools pass along the ideas of our forebears) and cultural evolution (as NYSE brokers shaped their paper execution pads); faster even than Lamarck’s passing along of an individual’s improvements to its offspring, this happens in real time.
And, just as 100× may understate the development speedup, I may have understated even that grand idea of extending the mind. One mind? I wonder if we’re extending collective superminds: a distributed cognitive collective, human/computer cyborgs?
I’d like to suggest that if we get past today’s UX-indirection roadblocks, we can richly and directly connect our minds with active tools, tools that can themselves react to us: machine learners, for example, that shape their offerings and capabilities to our needs as we use them. If we can do this in a natural (recruitable!) way, we close the feedback loop. We allow what may be a new kind of evolution: instantaneous, real-time advancement through a problem’s solution space; a directed search otherwise akin to genetic algorithms (a hybrid of Intelligent Design and actual evolution? ;)
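For flavor, here’s a toy sketch of such a directed search in a genetic-algorithm skeleton. The fitness function stands in for the real-time human feedback (the negative feedback above); all of its details are hypothetical:

```python
import random

# Toy directed search in a genetic-algorithm skeleton. `fitness` is a
# stand-in for real-time human feedback (the negative feedback above:
# distance from the optimal result); all details are hypothetical.

TARGET = 0.0                        # the optimum the search is steered toward

def fitness(x):
    return -abs(x - TARGET)         # negative feedback: distance from optimal

def evolve(population, generations=100, mutation=0.1):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]       # selection
        children = [x + random.gauss(0, mutation) for x in survivors]
        population = survivors + children                    # next generation
    return max(population, key=fitness)

population = [random.uniform(-10.0, 10.0) for _ in range(20)]
print(f"best candidate found: {evolve(population):.4f}")
```

In the closed loop I’m imagining, the user’s moment-to-moment reactions would play the role of fitness, steering the search in real time.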
And we start working with AIs to distribute our scarce resources to partially satisfy our unlimited needs—instead of against AIs that are spending our resources to intensify our needs (the wrong direction on both sides of the equation…)