Gio-Choi opened 2 months ago
Perhaps the biggest observation I can take from the discussions of augmented intelligence is that humanity, in every period of history, has been inseparable from the augmentations we have made to our condition and experience. The basal human form is hidden beneath so many layers of augmentation, whether artifactual, linguistic, or methodological, that it is essentially impossible to determine what it actually is; at any given level, it is arguable that some augmentation at an even deeper level produced it.
The conversation reminds me of a short story by science fiction writer Ted Chiang called “The Truth of Fact, the Truth of Feeling”. In the story, humanity is augmented with a technology called “Remem” which automatically records our entire lives from our eyes, allowing for an artificially perfect memory to be saved. Chiang explores how this technology impacts our “true” psychological memory, and the social and interpersonal implications of the use of this technology, establishing the augmented human in the story as a distant cyborg, only to conclude with the observation that Remem is essentially an analogy to the technology of writing, and thus, that we were cyborgs all along.
Douglas Engelbart’s H-LAM/T framework provides a foundation through which to analyze any level of human augmentation. Under this framework, we can view writing as a composite process of the man-artifact system, composed of the explicit-human processes of language and fine motor control and the explicit-artifact process of the trail-leaving property of pencils and pens (or the imprinting property of styli, the carving property of carving tools, etc., depending on the type of writing). From here we can observe that without the integration of these multiple processes, the particular form of augmentation that writing provides would not be possible, nor would the higher levels of augmentation it supports, such as particular kinds of external symbol manipulation, the composition of very long bodies of language, and the externalized, unchanging form of memory that Ted Chiang’s story highlights.
The relationship between artificial intelligence and augmented intelligence is an interesting one to investigate. On one hand, artificial intelligence can be conceived of as just another artifact with respect to the H-LAM/T system, albeit perhaps the most complex one we have encountered; thus, all of its internal mechanisms can be considered explicit-artifact processes, and our interactions with it can be considered composite processes (for example, prompting an LLM is a composite of typing and the LLM’s internal mechanisms). However, we can also conceive of artificial intelligence systems as a means of modulating human interaction; since many modern systems are essentially function approximators trained on vast quantities of human-produced data, we can also treat them as a form of communication between the humans in this process. The training on human-produced data suggests a parallel conceptual framework for augmentation of these models; here, interaction with humans (via their data) can be thought of as essentially augmenting the AI.
Discovery: Reading Ashby’s Introduction to Cybernetics was very interesting, and shed some light on how disparate areas of systems-related science came to be related via a shared formal framework. It’s striking how many fields and methodological approaches have been shaped by the cybernetic approach; much of the framework is reminiscent of automata theory as I have encountered it in other classes. Cybernetics seems a bit distinct from augmented intelligence at first glance, but the inductive way in which Engelbart’s H-LAM/T system is built up seems like something that could pair well with cybernetic analyses, especially since human augmentation fits squarely within the scope of what cybernetics aims to address: systems so complex that scientists have often neglected to study them rigorously. While social and behavioral scientists have developed a variety of methods in their own right for understanding human augmentation in its various forms, the cybernetic perspective promises insights that are ripe for analogy with already well-understood systems, allowing for a better understanding of the human augmentation systems themselves.
The primary difference between artificial intelligence and augmented intelligence lies in their respective goals. Augmented intelligence aims to create a “human-computer symbiosis” by aiding humans and extending their capabilities. Artificial intelligence, on the other hand, seeks to replicate human behavior in full, performing tasks independently rather than assisting humans in carrying them out.
That said, the boundary between artificial and augmented intelligence is often ambiguous. For example, a self-driving car might be considered artificial intelligence because it performs a task—driving—that a human would normally do. However, it can also be viewed as augmented intelligence, since it allows a human to reallocate their attention to other tasks during transportation. The classification depends largely on how the technology is used and what task is considered central.
This ambiguity applies even to simple tools such as calculators. A calculator completes tasks a human is capable of doing and could therefore be classified as artificial intelligence, at least within the narrow domain of arithmetic. Yet it also performs those tasks in service of broader goals—freeing humans from repetitive work so they can focus on higher-order mathematical reasoning or creative problem solving. In this broader context, the calculator functions as augmented intelligence.
This suggests that the distinction between artificial and augmented intelligence is relative to the problem in question. If the problem is defined as performing calculations, the calculator replaces human labor and acts as artificial intelligence. If the problem is broader—engaging in mathematics—then the calculator augments human ability by handling low-level tasks. Under a sufficiently expansive definition of human activity, nearly all modern AI, including systems like GPT, could be classified as augmented intelligence. Even if a system could carry out every cognitive task a human can, it would not eliminate the need for humans to continue performing biologically necessary tasks such as eating or breathing. In that sense, AI would be functioning alongside, rather than in place of, human life.
The research proposed by Douglas Engelbart emphasized the development of systems that could support collaborative problem-solving. Rather than focusing on automation, Engelbart envisioned computers as tools that could improve human communication and collective intellectual performance. Technologies such as video conferencing platforms and collaborative digital workspaces can be seen as direct realizations of this vision, as they enhance how humans work together rather than replacing human input altogether.
One ethical concern raised by this approach is privacy. Engelbart’s framework assumes that users are willing to share information across digital systems to improve collaboration. As augmentative tools become more embedded in everyday life, there is a growing risk that sensitive personal data may be misused, accessed without consent, or exploited by third parties.
An underexplored possibility is whether AI could take on basic biological tasks such as eating and breathing. If AI could handle these functions on our behalf, it might eliminate the need for humans to stop three times a day to consume food - a hassle when one has multiple assignments due before a midterm. In theory, if AI could be made to perform all the necessary biological processes, it could keep a body running even after biological death. This raises the question of whether AI could be used to produce humanoid zombies. In a cultural moment dominated by AI apocalypse narratives, it seems only fair that zombie apocalypse scenarios be given the same speculative consideration.
Discovery: This week’s readings and media made me reconsider the usefulness of drawing a firm line between artificial and augmented intelligence. Engelbart’s report emphasized augmentation, but what stood out most was how the same system could be viewed as either augmentation or automation depending on how narrowly or broadly we define the task. Brynjolfsson’s warning about the “Turing Trap” raised useful concerns about mimicking human behavior, but even that distinction seems less solid when considering technologies like calculators or self-driving cars. Watching Engelbart’s 1968 demo also made me realize how many so-called "augmentative" tools fundamentally change our workflows in ways that could just as easily be described as replacing human input. Overall, I’ve started to see the line between artificial and augmented intelligence not as a hard boundary but as a shifting frame, one that depends more on context and interpretation than on any inherent quality of the technology itself.
In the prior stages of computational development, basic progress followed from the successive recognition and abstraction of distinct features necessary for broader progress in the physical and social sciences. Notably, when the deeply novel concepts of generalized computational problems and statistical analysis first gained articulation, the argument introduced for them was not one of utility but of a deeper necessity, by analogy to the divine. Turing’s conceptual development of computer intelligence differed in its non-insistence on a relation to an acknowledged prior higher intelligence. Whereas prior developments relied on analogy to a supreme, divine intelligence from which the smaller human intelligence is derived, Turing’s abstraction of higher intelligence exists simply by its capacity to mimic the expression of human intelligence in general. Though Turing is unwilling (unable, by his own admission) to establish the end of this notion of intelligence in general, it can be argued that he posited a notion of general high intelligence as the arbitrary capacity to subsume and imitate, independent of the operation of an intervening human. I have gone on this diatribe because I think the best way to neatly categorize the distinction between “augmented/extended” and prior artificial intelligence is to consider what form supreme intelligence takes in these models. At lower levels of capacity, the operations of these systems seem at times entirely indistinguishable. Even at our baffling contemporary level of computational intelligence, the distinction is non-obvious. To what extent can ChatGPT’s function, or any LLM’s function, for that matter, be called relevantly independent when it is in the truest sense ideally suited as the aid to a long-form correspondence between the prompter and the AI system?
The form of intelligence posited by Turing’s model asserts, as the general limit of human intelligence, the capacity to take on the role of a non-fixed discursive participant without real regard for the actions of this discursive participant. Consequently, Turing at no point has to directly address the presence of human intelligence as necessarily directed toward understood ends. Whether or not this introduces some contradiction, given the prior imposition of imitation as a goal, is unclear. Augmented intelligence recognizes the articulated, purposive dimension of human intelligence quite elegantly. Where Turing’s model of intelligence at no point requires a transition from a system not claimed as intelligent to one claimed as intelligent, the view of augmented intelligence, in its attempt to actually formulate such a system, needs to make this transition. Seeing no rational model by which to do this from underlying parts without the intervention of a human actor, the augmented system simply works within the operation of the understood human intellect. It works within the understood “synergism” between the various necessary actions of the intellect (Bush, 18). Thus, at its limit, the highest intelligence would emerge as an object internal to human thought, capable of coordinating the recognized faculties of the intellect with arbitrarily great efficiency. To do this requires that we retain, though it needn’t be immediately evident to us, some notion of the connection between ideas and operations. This is very much in line with Bush’s notion of operating on trails of association. Bush’s highest intelligence would be capable of deploying these connections with arbitrary efficiency, with discrete ends in mind. Though this intelligence concept notably makes no metaphysical claims, it harkens back to some of the first abstractions necessary for the development of computation as a notion.
Notably, where production could first become acknowledged as an explicit goal and rationalized, the efficient coordination of distinguished, specialized resources was, via Smith, the first mode by which that rationalization was expressed. In this sense, Bush’s notion of intelligence can be viewed as the furthest expression thus far of intelligence as a means for efficient production. Explore: The Luhn articles offer an obvious case of the work needed to facilitate effective augmented intelligence. Identified early in Bush’s work is the problem of communication between disparate but ultimately, and serendipitously, unified disciplines. This raises, in turn, the question of an efficient means for identifying the general language potentially of use to those engaged in separate tasks within the same organization. This problem motivated Luhn to find efficient ways of channeling textual information. Though a certain loss of information is always necessary when translating information into more efficiently processed units (note the distinction between “2,3” and “a,b s.t. a+b = 5”), the basic form the hashing algorithm takes, the efficient elimination of obvious redundancy, provides key insight into the basic model for the role of computation in eliminating human work at any level of abstraction.
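That redundancy-eliminating idea can be made concrete with a minimal sketch in the spirit of Luhn’s later auto-abstracting work: discard common noise words, then rank what remains by frequency and treat the most frequent survivors as the significant terms. The function name, the stopword list, and the example text below are my own illustrations, not Luhn’s actual algorithm:

```python
# Hypothetical simplification of Luhn-style significance ranking:
# strip obvious redundancy (stopwords), then rank by raw frequency.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "for"}

def significant_words(text, top_n=3):
    """Return the top_n most frequent non-stopwords in `text`."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

doc = ("The system augments the human intellect; the human and the "
       "system together form an augmented system of intellect.")
print(significant_words(doc))  # e.g. ['system', 'human', 'intellect']
```

Even this toy version shows the trade Luhn accepts: the ranked word list is a far more efficiently processed unit than the text, at the cost of discarding everything the stopword filter and the frequency count cannot see.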
Across this week’s readings, a common thread emerges: the idea that the most meaningful role for computers is not to replicate human cognition, but to augment it. In other words, to form a partnership that strengthens our intellectual capacities rather than rendering them obsolete. From Vannevar Bush’s early conceptualization of the memex to Douglas Engelbart’s systematized H-LAM/T framework, and finally to Erik Brynjolfsson’s critique of today’s AI trajectory, the question is not just what machines can do, but how they can do it with us.
In As We May Think, Bush argued that while scientific knowledge was growing rapidly, our ability to organize, access, and apply that knowledge remained stuck in the past. He envisioned a new kind of tool, one that would allow individuals to build associative trails, echoing the way the human mind naturally links ideas. Rather than filing knowledge into rigid categories, users would follow connections that mirrored personal reasoning and discovery (Bush, p. 15). His memex wasn't just about efficiency; it was about designing tools that support the human experience of learning, remembering, and reflecting.
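Bush’s own example of a trail ties a study of the Turkish bow to an article on the Crusades. As a toy illustration of how a trail differs from a fixed filing scheme, one might model records and user-made associations like this (the class and method names are mine, purely illustrative, not Bush’s):

```python
# Toy model of Bush's associative trails: records are linked not by
# fixed categories but by associations the reader creates, and a
# trail is replayed by walking those links.

class Memex:
    def __init__(self):
        self.records = {}   # name -> stored text
        self.links = {}     # name -> list of associated record names

    def add(self, name, text):
        self.records[name] = text
        self.links.setdefault(name, [])

    def associate(self, a, b):
        """Tie record a to record b, extending a trail."""
        self.links[a].append(b)

    def trail(self, start, steps):
        """Follow the first association out of each record, like
        replaying a saved trail."""
        path, current = [start], start
        for _ in range(steps):
            nxt = self.links.get(current, [])
            if not nxt:
                break
            current = nxt[0]
            path.append(current)
        return path

m = Memex()
m.add("bow", "notes on the Turkish bow")
m.add("crusades", "article on the Crusades")
m.associate("bow", "crusades")
print(m.trail("bow", 2))  # ['bow', 'crusades']
```

The point of the sketch is that the links live alongside the records and belong to the reader: the same record can sit on many trails, mirroring personal reasoning rather than a rigid category tree.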
Building on this, Engelbart’s Augmenting Human Intellect report offers a more detailed and operational vision. His H-LAM/T system identifies multiple interdependent components - Human, Language, Artifacts, Methodology, and Training - that shape intellectual work (Engelbart, pp. 17-20). Crucially, he emphasizes that even small improvements at the base level (such as better text editing or interface design) can lead to substantial changes in higher-level capabilities like planning or collaboration (p. 31). The ultimate goal is not to replace human thinking, but to enhance it through carefully co-evolved systems of tools and methods.
By contrast, Brynjolfsson warns of a shift in direction: modern artificial intelligence, especially human-like AI (HLAI), tends to aim for substitution rather than support. In what he calls “the Turing Trap,” firms prioritize automation that mimics human capabilities, often displacing workers and reinforcing inequality. He argues that we should instead build systems that excel precisely where humans do not, and which help people become more productive and creative in the process (Brynjolfsson, p. 278).
Together, these three texts suggest that the most forward-looking path isn’t to build machines that think like us, but to design systems that make us better at thinking. This vision of partnership over replication calls for a more intentional design of our tools, where human judgment, learning, and values remain at the center.
Discovery: Revisiting Bush’s description of information compression, in which he imagines storing “a million books” in the space of a desk drawer (p. 7), I was struck by how quickly we achieved that reality, but how little we’ve solved the deeper problem of meaning-making. Storage and access have become trivial; what remains challenging is helping people navigate, connect, and synthesize what they find. This underscores a key point across all three readings: the central challenge of computation is not just more data but more thoughtful design, design that supports human insight in an increasingly complex world.
Vannevar Bush’s As We May Think (1945) describes tools like the "memex," a machine meant to enhance human memory and research capabilities. Similarly, Douglas Engelbart’s vision in Augmenting Human Intellect (1962) imagines computers that help humans tackle complicated problems while keeping the human always in control. Erik Brynjolfsson (2022) describes Artificial Intelligence’s push toward automation as a "Turing Trap." He argues that technology focused on Augmented Intelligence, which complements humans rather than replacing them, creates more widespread social benefits. Augmented systems let humans remain essential and empowered rather than replaced and marginalized.

I believe the key difference between augmentation and automation lies in control and responsibility. Augmentation means humans are still essential and oversee the task; automation means technology operates independently, without human oversight. A fully self-driving garbage truck, for example, would be automation because no human decision-making is required. But if technology assists human workers, helping with navigation or lifting, it would count as augmentation.

Engelbart proposed researching tools specifically designed to help people think better. His goal was to enhance human abilities to manage complex information, collaborate, and solve problems, and he envisioned highly interactive computing environments where technology works alongside humans. Honestly, I believe his research laid the foundation for the emergence of so many tech companies today, since he introduced graphical interfaces driven by mouse and keyboard, and online collaboration using video conferencing, which helped democratize computing. Under an augmentation-focused approach, technologies that enhance human decision-making, improve memory and knowledge management, and facilitate collaborative work would become common.
For instance, decision-support tools in medicine or business, intelligent interfaces for education, and interactive collaboration platforms would thrive. An augmentation-focused approach has clear ethical benefits, as it preserves human dignity, autonomy, and participation. It can help distribute economic benefits more evenly, as humans remain integral rather than replaced by machines. However, I believe that if all research focused only on the augmentation mindset, technological progress might actually slow down. This is because it's easier to leave the more difficult or complex tasks to the human, rather than pushing to solve them through technology. In that sense, the automation approach is usually the more challenging path, since it requires tackling problems that augmentation might simply hand off to people. On the other hand, much current AI development favors automation. If research emphasized augmentation, many valuable tools and solutions could emerge, significantly enhancing human capabilities across society. Under-explored areas include personal knowledge management tools, improved team collaboration systems, personalized learning assistants, and clearer ways for humans and machines to learn together.
Discovery:
After watching Douglas Engelbart’s The Mother of All Demos, I was amazed to realize how many of the everyday technologies we now take for granted were first introduced in that single presentation—things like the computer mouse, graphical user interfaces, hypertext links, real-time text editing, video conferencing, and collaborative editing. It made me realize that almost everything we associate with modern computer use, especially in personal computing, traces back to this work. Every personal computer today comes with a mouse and some form of graphical interface. When we navigate files or click through web pages, we’re using systems based on hypertext. Whether it’s Microsoft Word or Google Docs, we rely on real-time editing to work efficiently. And platforms like Zoom or FaceTime have become part of our daily lives, connecting us virtually across distances.
What stood out to me most is how these innovations weren’t just technical tricks—they came from a mindset focused on augmenting human capability. Engelbart wasn’t trying to replace people with machines; he wanted to build tools that help us think better, work together, and solve complex problems more effectively. I now see that so much of what makes computing accessible, collaborative, and empowering today came from that original vision of augmented intelligence. It’s incredible to think how far-reaching and impactful that mindset has been—not just for computer scientists, but for society as a whole.
Original Materials.
In the summer of 1945, Vannevar Bush envisioned a future in which “wholly new forms of encyclopaedia will appear, ready‐made with a mesh of associative trails” (Bush 1945). That vision set artificial intelligence on the path of replicating human thought, aiming to transfer judgment and reasoning to silicon. By contrast, Augmented Intelligence—as Douglas Engelbart defined in his 1962 report—is a partnership: “increasing the capability of a man to approach a complex problem situation…to gain comprehension…and to derive solutions” (Engelbart 1962). Where AI seeks autonomy, augmentation insists on human agency, treating machines as extensions of the mind rather than replacements.
The boundary between augmentation and automation often blurs at the point of human disengagement. A self‐driving garbage truck that navigates, collects, and disposes of refuse without oversight exemplifies pure automation, removing people from the process. Yet Engelbart’s NLS mouse‐driven editor—demoed in 1968—acted only when invoked by the user, preserving control (Engelbart 1968). Augmentation ends when algorithms decide unilaterally; it begins where machines defer to human intent.
Engelbart’s research was as much sociological as technological. His 1962 report reads like a manifesto: he assembled engineers, psychologists, and organizational theorists around a single goal—evolving tools through iterative prototypes and user studies. He insisted on integrated systems, training regimens, and even physical workspaces designed to foster collaborative problem‐solving (Engelbart 1962). Into Engelbart’s Menlo Park lab streamed video‐conferencing, multi‐window editing, and real‐time shared screens—all under human orchestration.
Under this paradigm, emerging technologies would emphasize transparency and partnership. We might see hypertext webs that adapt to individual thought patterns, eye‐tracking interfaces that anticipate intent only with explicit consent, or neuron‐link implants that augment memory without obscuring its origins (Bush 1945; Engelbart 1968). In contrast, opaque machine‐learning black boxes—algorithms whose logic users cannot inspect—would be sidelined, viewed as antithetical to the spirit of augmentation.
The ethical stakes of such a program are profound. If augmentation tools concentrate in elite institutions, they risk deepening social inequality. Engelbart warned that “tools which amplify intellect must be universally accessible” (Engelbart 1962). Privacy, consent, and cognitive autonomy must be safeguarded: systems should log and share only that data which users knowingly furnish, with safeguards against dependency that might erode critical thinking.
Yet vast territories remain underexplored. Collective idea‐mapping platforms could thread distributed teams into a living network of insights. Emotional augmentation—tools that reflect and regulate affective states—beckons from the horizon. Educational systems might evolve into adaptive environments that scaffold learners’ problem‐solving strategies rather than merely test outputs. In charting these frontiers, we honor the lineage from Bush’s Memex to Engelbart’s “Mother of All Demos,” carrying forward a program that amplifies intellect while preserving the human spark.
Discovery: I am reminded of a question that Peter Thiel often asks: will AI be a substitute or a complement to humans? Quoting him, "Will a machine replace you? Futurists can seem like they hope the answer is yes. Luddites are so worried about being replaced that they would rather we stop building new technology altogether. Neither side questions the premise that better computers will necessarily replace human workers. But that premise is wrong: computers are complements for humans, not substitutes. The most valuable businesses of coming decades will be built by entrepreneurs who seek to empower people rather than try to make them obsolete."
The fundamental difference between artificial and augmented intelligence is clued in by a different question: does it aim to automate or augment? That is, is it aimed at performing a task in a manner similar to humans in order to replace human labor and intelligence, or is its aim to improve the capabilities of humans performing a specific task? Often, certain technologies blur the line between the two. The calculator, for example, as we have discussed many times, can be used to replace the calculating labor of humans. It is also helpful in enabling humans to perform tasks that go beyond calculation yet involve calculations, letting humans perform the task with greater speed; this becomes augmentation. Self-driving seems to fall squarely into automation: the human is (hopefully) no longer required at all to drive the vehicle. A garbage cleaning service seems more akin to augmentation, as it seems reasonable that it would still require human operation or supervision rather than being completely independent of humans; it may simply assist the human in performing the cleaning task more efficiently. Engelbart’s research was heavily focused on augmentation rather than automation, and is human-centric in that respect. It seeks to assist humans by supporting collective intelligence and improving decision-making, giving humans access to a greater wealth of information through technology; that is, rather than attempting to replace humans, it seeks to augment human intelligence. What emerged under these standards were things like the mouse, graphical interfaces, and video conferencing, all of which enable better human collaboration and better information accessibility and storage. The subsequent step appears to be things like neural implants, in which information access moves from an external device to instant results produced simply by thinking.
Devices that take in and store information by themselves (automated note-takers) could emerge to reduce human labor on sometimes menial tasks. Note that these technologies still require “human thinking” at the end of them, and are not solutions to any specific societal problem. What we then lose is fully autonomous technologies, such as AI agents that perform tasks independently and delegate their own tasks; that is, agents that require little to no human interaction and are capable of sound decision-making, executing plans, and reaching conclusions. Of course, with any new and emerging technology, new ethical questions emerge. A constant one is the question of access: is this a tool (or perhaps a toy) only for the rich and privileged? Who does it most benefit, and can we give those people access? Should it be produced for profit or for all? Another is privacy. With today’s emerging technologies, as with previous ones, the questions of individual data, data collection, and use are all of great importance. Who allows corporations to collect data, and why? When they implant a neural chip, what data are they collecting then? These ethical considerations run through any technological innovation.
Discovery: What I found particularly interesting was the conversion of fairly abstract concepts in Engelbart’s paper into concrete and very familiar real-life objects in the demonstration. To go from writing about symbols, external and internal manipulation, and rough pictures to a real-world product that performs a whole host of operations is astonishing, particularly considering that only six years passed between the paper and the demo. The paper tackles very fundamental, almost philosophical questions about the basis of human intelligence and how it formed; the demonstration, on the other hand, shows a fruit of that human intelligence, and how it is capable of further improving it.
Augmented Intelligence is a complement to Artificial Intelligence. Their differences reflect two different expectations of machines: are they responsible for creativity, or do they facilitate human thinking? Mechanical machines help with repetitive tasks, effecting a division of labor between human and machine. Augmented Intelligence aims slightly differently: it facilitates humans’ creative activity by extending our capability to interact with the physical world and to extract, memorize, manage, and store information. Bush, who coordinated US wartime scientific research during WWII, envisioned a future in which scientists would work to produce such augmented intelligence, much of which has been realized in modern times. Later, in 1962, Engelbart proposed a general framework for augmented intelligence. A large part of both articles is devoted to technical details that hardly matter today; the theoretical characterizations are the more valuable thoughts. Engelbart made an abstraction called the H-LAM/T system, which describes Augmented Intelligence in its most general form: a human and a machine. Both exchange energy with the outside world, and they exchange information within the system through a special interface. He went on to characterize human behaviors by, in a sense, viewing them as machines, discussing the manipulation of concepts, language, and other external tools, as well as the impact of language on human thinking. Many of these discussions are fun: he considered the potential impact of the efficiency of our recording tools on intellectual work by running a thought experiment on how human society would have evolved if the pencil were as big as a brick.
The style is similar to Turing’s, and although little theoretical knowledge results from the abstraction, the thought experiments are easier to justify as demonstrations of the importance and potential impact of augmented intelligence.
Another noteworthy point is the qualification of such machines as "amplifying" human intelligence: the author clearly states that the machine does not amplify native human intelligence, but rather helps produce outcomes comparable to those of humans with higher intelligence. This is a remarkable reflection on the key principle: the machine only facilitates and has no intelligence of its own; humans' native intelligence is not increased. In this system, the peak native intelligence is strictly smaller than what the system's outcome suggests.
Discovery: Will machines facilitate us or replace us? This long-standing question regains popularity every time a new machine breakthrough emerges, particularly with LLMs, which have struck panic across a multitude of careers; some people, from traditional NLP researchers to Grammarly employees to customer-service workers, have already lost their jobs, while others shuffle in fear. However, the boundary between the two is not firm: whether a machine replaces or facilitates depends on whether it takes over our key creative work or just the technical and repetitive work. But the boundary between creative and repetitive is constantly changing. Back in the nineteenth century, a mathematician typically carried out a great deal of computational work in PDEs, and this was considered creative. Once computers surpassed most mathematicians at computation, people no longer regarded complicated computations of integrals as meaningful topics, and shifted their interests. The question, then, is whether AI will become even more capable of discovering new meaningful topics, which might be unlikely, as the realm of meaning is generally considered exclusively human.
What is the difference between the two AIs—between Artificial and Augmented Intelligence? Where does augmentation end and automation begin (e.g., is a self-driving garbage-collection service an augmentation or an automation)? What was the character of the research proposed (e.g., by Engelbart) to build machines designed to augment human capacity? What kinds of technologies might, and might not, emerge under such a standard? What are the ethical considerations of the augmentation program? What are un(der)explored possibilities?
Whereas artificial intelligence aims at imitating and automating the full range of human intelligence, augmented intelligence is geared toward extending, or "augmenting," the human mind. Or as Engelbart (1962) puts it, "augmenting human intellect" means increasing the "capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems". The specific human capabilities he considers include "artifacts," "language," "methodology," and "training". To cash this out in Heideggerian terms: whereas augmented intelligence resembles equipment ("readiness-to-hand"), artificial intelligence seeks to parallel the flexibility and sensibility of the human ("Dasein").
It may, however, be difficult to pinpoint where exactly augmentation ends and automation begins. Indeed, it may require us to elaborate the very conceptions of "humanity" and "identity", which are elusive to define. Presumably, one could emphasize the fact that "augmentation" implies some kind of quantitative reinforcement, as opposed to any ideal of an "increase of intelligence", which Engelbart would caution us against; on this view, self-driving technology or calculators, for example, are merely automating tasks that humans could have achieved without the support of an intelligent machine. But that on its own seems to underappreciate the novelty of technical objects, if not simply defer our problem. In the most extreme case, it is not as if humans could have soared through the sky prior to the invention of parachutes and airplanes, yet we certainly don't think of these machines as "artificially intelligent". And when a computer is brute-forcing problems that would have taken an inestimable amount of time to solve by sheer human power, the intuition may still be salvageable, but it stretches us too far from having a clear definition. (There is the further ethical/axiological question of the value of learning in an age of artificial intelligence.)
Perhaps, then, we could argue that it's more contingent upon the "unity" of self-consciousness? To the extent that I use augmented intelligence while remaining conscious of what it's doing, the action taken is solely attributable to myself as a causal agent. The same cannot be said for an artificial intelligence: otherwise we are dealing with the absurdity of fission. But, as Haraway has argued in A Cyborg Manifesto, the very ideology of human-machine dualism is an untenable one given the range of biological modifications and augmentations we have implemented that challenge the rigidity (and singularity) of personal identity. Also consider the less radical example of the subconscious, which we are still striving to understand. The plastic and adaptive nature of human intelligence makes it a lot trickier to settle on a hypostatic account of personal identity.
Discovery
I was watching Engelbart's "The Mother of All Demos", and it was indeed mind-boggling! It is really a groundbreaking demonstration that, as far as I've understood, set the conventions (especially the use of the mouse and other GUI design, although it does look more like a digital typewriter at this point) and standards for modern OSes (it's also interesting how this was framed at first as a project of "augmenting human intellect", while we now take PCs as tools more than as "augmented intelligence").
The first consideration I want to make is that of expanding our scope from "augmented intelligence" to "extended intelligence". All of these frameworks consider the amplification of intelligence and human processing capacity, but this approach, like Turing's limited notion of HLAI, may lead to blind spots in system design and evaluation. What I mean is that we should consider how these systems become external implants of our cognition, a view that I feel is well expressed in Clark's work in The Extended Mind and Natural-Born Cyborgs. Most particularly, I want to reflect on the constraining role that such mechanisms may exert upon our cognition and intelligence. When faced with a digital interface, we not only extend our capacity for symbol manipulation and information access but also subject ourselves to the influence that these terminals exert on our cognitive environment. Say, when using a computer, beyond just the interface I use to write this text, I also have almost instant access to a variety of entertainment options such as games, short-form media, social media, etc. There is a quote on system design that resonates with me: "We become what we behold. We shape our tools and then our tools shape us" (Culkin, 1967).
The line between augmentation and automation is tenuous in the current AI climate. I would say that a relatively powerful way to evaluate this is in terms of the architectural permissibility of external human input. This was, for example, explored with Chef Watson: making AI interfaces that are fully autonomous versus semi-autonomous, and the kinds of products each generates. The "As We May Think" article offers a powerful image of accessing and categorizing data in a way that reflects thought patterns more closely than indexing does. An adjacent topic is that we produce much more information than we can handle, and that the next big discoveries, like Mendel's genetic framework, can get "lost in the mass of the inconsequential". I believe that part of extending not intellect itself but the human capacity for recognition lies in using AI to comb through this data as its capacities improve: not only parsing the main insights but even analyzing incidental data. Some supplementary data in a biological study could silently encode revolutionary insights (RNA self-splicing?) that are obscured by the approach taken. Furthermore, aggregate patterns may emerge from data that would otherwise be impossible to derive even from the most extensive review or statistical analysis.
The example of AI-like automation of human capacities in ancient Greece was particularly striking to me. It is difficult for humans to take a step away from their frameworks into pure abstraction and consider the tools as such. The report on Augmenting Human Intellect, and Cybernetics as well, offered such an interesting take on this. What happens when we consider the capacities of current AI models abstractly, and how they may interact with humans, rather than just trying to replicate human output? Like Daedalus, by focusing on the only high intellect we are familiar with, we might miss adjacent symbiotic, high-resonance forms of intelligence that could revolutionize society.
Discovery: But how do we go about restructuring and improving so abstract a notion as intelligence itself without the human framework? One approach is not to try to escape the conceptual space of human intelligence but rather to see what could work symbiotically with this system. I never considered that the field of cybernetics could deal with such notions as information "engineering"; reading through Ashby's introduction was enlightening, even if some of the deeper technical implications of entropy and state regulation were lost on my first reading. From the MoAD demonstration, what worries me most is the realization that we could have pursued systems of intellectual augmentation like those it showcases, yet in my opinion the current design of systems, through ads and psychologically manipulative usage design, is propelled much more by the monetary gain of the systems' designers than by augmenting the user.
The development of intelligent machines has largely followed two distinct paths, often intertwined yet fundamentally different in their aims. One path, often associated with the classic Turing Test and what Erik Brynjolfsson terms Human-Like Artificial Intelligence (HLAI), focuses on replicating human capabilities. Automation is the creation of systems that can perform tasks instead of humans, effectively substituting machine labor for human labor. This pursuit, while yielding powerful tools, carries the risk of the "Turing Trap": as machines become better substitutes, human workers lose bargaining power, potentially leading to increased inequality and concentration of wealth.
Douglas Engelbart, however, notes the possibility of augmenting human intellect. Augmentation doesn't seek to replace the human but to enhance their innate capabilities. Engelbart envisioned improving the entire "H-LAM/T" system – the Human using Language, Artifacts, and Methodology, supported by Training. He hoped for a more effective human-machine partnership capable of tackling complexity previously beyond reach. This approach fosters complementarity, where technology enables humans to do new things or perform existing tasks far more effectively, potentially creating new forms of value and retaining the centrality of human skill and judgment.
Engelbart's proposed research methodology was itself a form of augmentation, an approach where the tools developed would be immediately applied to accelerate further research and development. It required a systemic view, understanding that artifacts like computers co-evolve with human language, methods, and skills. This led to foundational augmentation technologies like the mouse, hypertext, and collaborative computing environments, all designed for direct human interaction and control.
The line between automation and augmentation can be blurry. A technology might automate specific sub-tasks (like autopilot) while still augmenting the overall capability and responsibility of the human operator. The crucial distinction often lies in whether the technology primarily serves to displace human labor in existing tasks or to empower humans, enabling new capabilities and workflows.
While augmentation avoids some pitfalls of pure automation, it raises its own ethical considerations, including equitable access to enhancing tools, the potential for deskilling if humans become overly reliant, privacy concerns with increasingly integrated systems, and questions about human agency. However, consciously steering technological development towards augmentation, focusing on enhancing human potential rather than merely replicating it, offers a promising path. It aligns with the goal of creating tools that not only boost productivity but also empower individuals and potentially lead to more broadly shared prosperity, sidestepping the zero-sum dynamics of the Turing Trap.
Discovery: The Mother of All Demos clearly earned its name. It showed how interactive computing could truly augment human intellect through the mouse, hypertext, and the user interface. The tools, however, present a specific way of structuring information and interacting with it. I have recently become interested in Martin Heidegger and his critique of how science and technology frame the way we think. I was also reminded of Bruno Latour's Pandora's Hope, in which he expressed how science and technology can reframe the way people process information. The result of these changes is that people become more "computer-like". Are we certain that the augmentation of human intelligence can't go both ways? That people might be tempted to frame their worldview through the computer screen?
The argument on augmentation in "The Turing Trap" seems to address many of the assertions I was previously concerned about in Turing's paper: specifically, the goal of pursuing tasks that machines do easily, in the spirit of advancement, rather than forcing conformity with human ability. It is true that with full automation all human work would be reduced; it is also true that much technological progress would cease to proceed in the expected manner. Fields of math like linear algebra might not have been discovered if calculators had replaced all human mathematical computation with no pursuit of research in the field. However, I will point out that if HLAI can automate research and development tasks in all scientific fields, then perhaps human civilization will not stagnate even if everything is automated.
Regardless, Brynjolfsson makes a good point that most economic growth and wealth will be shunted to the wealthy and prepared minority through technological automation; companies seeking to replace human labor can do so to reduce costs and taxes, while attempting the opposite will only increase said taxes on businesses. Yet pursuing the opposite, augmentation, would preserve and promote growth throughout the working class. It seems that we are locked in a reality where the less effective option for growth is actively promoted.
Vannevar Bush seems to reinforce Brynjolfsson's argument, imagining a world in which human workers and researchers are aided by a "memex", similar to the computers of today, albeit slightly more interconnected with the rest of the world in terms of ease of access to information. Productivity and individual ability would soar, and the burden on human memory would be reduced relative to what it was in 1945. If his imagination is anything to go off of, this is but one example of how augmentation could push the fields of human civilization past what could be done with automation alone. Engelbart seems to agree, considering an H-LAM/T system in which educated humans, augmented with forms of technology and computers, solve the problems of today. Computers, in Engelbart's view, are a means for enhanced human communication and collaboration, not automators.
In my initial statement, I claimed that perhaps technology will be able to automate research and development as well during the automation process. This thought isn't far-fetched: large AI systems already push the boundaries of human knowledge today, and it isn't absurd to think that in a few years, AI could direct its focus and choose the fields it wants to push by itself. Indeed, a perfect artificial general intelligence would replace not only all human labor, but human advancement as well.
Discovery: While Brynjolfsson’s paper is fairly recent, Engelbart and Bush’s are not— this shows as Brynjolfsson’s paper pivots from the imagined capabilities of augmented humans to the very real effects of automation as opposed to augmentation, and I think Engelbart and Bush support Brynjolfsson nicely. Reading Bush’s paper on imagined technologies that already basically exist today was also an experience.
Response: While augmented and automated intelligence are two distinct categories of technology, it may be hard to draw a clear line between them. The ultimate goals of the two differ in how they view the relationship between humans and machines: the former wishes the two to become "complements" of each other, while the latter focuses on making machines "better substitutes for human labor and workers" (Brynjolfsson 273). However, machines' applications in real-life scenarios could benefit from a more fine-grained classification than this dual distinction. As long as human intention is involved in the creative process, the level of human participation in that process lies on a continuous scale, and the type of participation also varies by whether it leans toward thought or action.
In a prevalent type of scenario, machines help the user with their intended actions. For example, a machine could help with printing characters such that the user is only required to "pass over a line of the special printing" with their "reading stylus" to trigger the machine to print the intended character (Engelbart 13). This design automates a specific part of the writer's workflow: transforming a blank paper into one bearing letters. Yet the machine complements the human user by reducing the time to type letters and possibly enhancing their creativity, since one can "quickly and flexibly change [their] working record" (Engelbart 14). As the proportion of automation increases, the human's role would be reduced to instructing and supervising the manipulation of the materials. However, developments in automated technology may not necessarily lead us down this path; they may instead be catalysts for new forms of interaction that augment human capabilities.
Another type of scenario eases the friction of the user's cognitive operations, and could be more ambiguous in its categorization. In Bush's vision, there could be a "memex" that automates "selection by association" and "machine[s] which will manipulate premises in accordance with formal logic" (118, 121). They respectively replace the mental capabilities of memory retrieval and logical inference, which are regarded as essential components of our rationality. A contemporary parallel may be search engines and LLMs that shortcut our recalling and inferring processes. Since we plan our intentions and actions, which decide the overall design of the final product, through rationality, substituting these roles with such machines may seem more influential on the creative process than in the previous type of scenario. While this still signifies a complementary relationship between humans and machines, losing the user's subjective control of the creative process corresponds to automation.
Discovery
Artificial intelligence seems like its own agent in some sense: it can reason and do cognitive tasks autonomously. Augmented intelligence, on the other hand, seems more like a tool; The Mother of All Demos is a great example of this. While I'm skeptical that either of these concepts carves reality at its joints (especially in edge cases like the ones mentioned), I think my definition is largely useful. Therefore, I think self-driving cars, to the degree that no human has to be involved, are artificial intelligence, for instance. This does run into problems, however, such as whether the machine reliably does the task you want; another way these might not converge is when a machine entirely completes a subtask, which could count as merely augmented or as fully artificial. Maybe the best way to make these terms work is simply to define the important breaks in the spectrum of how autonomous the machine can be. Douglas Engelbart's 1962 framework set out to "augment human intellect," not supplant it. He proposed treating the human, language, artifacts, methods, and training as one co-evolving system, pursuing rapid interactive computing—keyboards, pointing devices, hyperlinked displays, real-time collaboration—as a bootstrapping lever to raise both individual and collective problem-solving power. Under such an "augmentation-first" standard we would expect technologies like personal workstations, explainable AI copilots, mixed-reality interfaces, and shared knowledge graphs to flourish, while black-box autonomy or batch pipelines that push humans to the margins would be de-emphasized. The programme's ethical stakes revolve around who gains access to these cognitive "power tools," how they shape autonomy, privacy, labour, and intellectual credit, and whether they cement or narrow existing inequalities.
Many possibilities remain under-explored: large-scale civic sense-making platforms with LLM-assisted provenance tracking, metacognitive wearables that surface our own biases, augmentation tuned for neuro-divergent or low-literacy users, and interfaces that let domain experts transparently probe and patch AI models in the flow of work. Re-centering Engelbart's vision today would steer us toward interactive, equitable, and interpretable technologies that enlarge—not eclipse—human agency. Discovery: Regarding the video, I was surprised by how much they were actually able to do as early as the '60s – wow!
It seems to me that the line between augmented and automated intelligence is inherently quite foggy. Although there are some general boundaries drawn, the distinction seems quite dependent on the scale of the task under consideration. Take, for example, the task of calculation. The invention of calculation devices could be considered either automation, from the perspective of De Prony's human calculators whose labor was replaced, or augmentation, considering the incredible improvement in speed and accuracy that computational devices offer and the ways this can be utilized by scientists requiring fast calculation. It feels as though, as AI and mechanical developments progress, maintaining many of the occupations that necessitate human labor would inhibit progress, as machines become more generally capable and thus better able to complete more and more of the tasks done by humans, even if that is not the explicit goal for which they are developed. Consider again the example of the human computers: improving the power of human calculation by giving the computers better organizational tools, while maintaining the human brain as the main substrate for calculation, would obviously limit total computational potential when there are devices that could, by totally replacing humans, complete the task of calculation exponentially better than a human brain ever could.
Thus, the Brynjolfsson’s analysis felt slightly short sighted? Although the concerns he voiced about incentives for technologists, buisnesspeople and policy makers pursuing automation, rather than augmentation, and the large scale political and economic disenfranchisement that would accompany this shift were absolutely important and relevant, I couldn’t help but feel slightly confused about his vision for augmentation and how it would hold up in the long term. Particularly, Figure 1, which depicted automatable human tasks, general human tasks, and new human tasks generated by machine collaboration, felt slightly unconvincing. I can easily imagine machine capacity far exceeding human capacity in essentially every realm (save for things whose value is generally defined by their connection to humanity, such as poetry or art). It seems almost inevitable that unless we arbitrarily choose domains to remain bound by human capacities, machines will grow to surpass general human capacity as they can be directed and evolved on an exponentially faster scale than humans. The Daedalus example is of course important to keep in mind, as simply automating sheep herding and the chanting for victims of disease does not allow for society to improve living standards and progress beyond the circumstances from which the automation arose (although it is quite likely that with more people freed from the necessary labor of herding sheep they would still have more attention to devote to creative technological endeavors). But, when computers themselves become better at identifying avenues for scientific and technological advancement than humans, to maintain humans as the main drivers of scientific progress would be crippling potential.
This does not mean that I think augmentation is futile to focus on. It is important for many reasons of general human well-being, as outlined by Brynjolfsson. However, I want to challenge the idea that, in the long term, we can feasibly "work on challenges that are easy for machines and hard for humans, rather than hard for machines and easy for humans" without inhibiting our current vision of scientific and technological progress. Perhaps we should do exactly that: see the possibility of automated advancement and reject it for the preservation of relevant human labor. I don't know what the right choice would be, but I suspect it is one that will have to be made.
Explore: I read the two papers on Luhn. I find it remarkable that a single idea (Luhn's hash) can be so influential on modern algorithms. So much has been built on this technique, and it has enabled the improved functioning of so many systems. It is like a little key which, introduced in the right place at the right time, has become totally integrated into modern procedures. Also, I appreciated Luhn's prolific inventing of the "foldable raincoat, a device for shaping women's stockings, a game table, and the 'Cocktail Oracle'" – very fun.
Discovery: The robot lab at Argonne was crazy to see in person. I suppose that basic lab work is something quite obviously automatable, but for some reason it was jarring to see it actually happening so smoothly. A lab consisting of dozens of people could in theory be boiled down to just a few primary researchers commanding a host of machines that not only perform the physical tasks required in a lab but actually decide which experiments are worth pursuing at all. I wonder what the future of lab work will be, and what roles will still be relevant for humans to hold.
To Engelbart, intelligence is augmented insofar as a human's "intellectual capabilities" are organized into "higher levels of synergistic structuring" [1]. Much of the augmentation comes from the conceptual and procedural restructuring that enhances intelligent behaviour, i.e., goal-directed behaviour. It relies on the premise that some concepts are much easier to think about once the correct mental representation is chosen; for example, Arabic numerals are much better for doing math than Roman numerals or Chinese characters. Hence, the focus of his agenda is a system of external symbol-manipulation augmentation that resembles our computers, the internet, Google Docs, and hyperlinks [2]. Indeed, it was incredible to see that as early as 1968 there were an interactive to-do list, real-time collaboration on a document, and a mouse. Engelbart's concept-focused framework does foreshadow today's LLMs, which turn natural language into a programming language, but not necessarily more hardware-based automation, e.g. exoskeletons, or even a machine that fully replaces human cognitive capacity. His vision allows humans to ascend the hierarchy of abstraction while systems automate the bottom level; the vision of human-like AI, however, usurps the human's role and becomes that very top-level abstraction. That is permanent, irreversible displacement of bargaining power. Somehow, we live in a society that incentivizes us to go in this direction, because humans are unreliable and expensive (you have to pay their taxes) and replacing humans is much easier than imagining new ways humans and robots can collaborate, as argued by Erik Brynjolfsson [3]. I think there's another force: we are incentivized to replace humans with processes to avoid accountability and increase mass-coordination efficiency, and machines embodying a process are much more efficient.
[1] Augmenting Human Intellect: A Conceptual Framework, p. 14.
[2] The Mother of All Demos, https://www.youtube.com/watch?v=yJDv-zdhzMY
[3] The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence
Discovery: Douglas Engelbart's demo was incredible not just in its modern-day relevance but also in its conceptual framework. The mouse, for example, isn't just a computer tool; it reduces the information transaction cost for humans between intention and action.
This week's readings made the difference between Artificial and Augmented Intelligence much clearer to me: not just as two technical approaches, but as fundamentally different visions for how humans and machines should relate. Artificial Intelligence, in its most recognizable form, tries to replicate human intellectual capacity. It aims for substitution, whether that's a chatbot replacing a customer service rep or a robot automating warehouse labor. Augmented Intelligence, I found, runs on a different philosophy: it tries to enhance human capabilities, not replace them. That was clear in Bush's idea of the "Memex," a device that would let people build, trace, and share chains of association between ideas, making thinking itself faster and more creative. Engelbart takes this even further. His entire 1962 paper and later "Mother of All Demos" weren't about what the computer could do on its own, but about what it could help a person do better. Engelbart framed the computer as part of what he called the "H-LAM/T system": a human using Language, Artifacts, Methodology, and Training. The tools were meant to boost the performance of this system, not replace it. The point wasn't to eliminate human labor; it was to make it more effective and collaborative.

That difference between automation and augmentation might sound theoretical, but it plays out in subtle ways in real systems. A self-driving garbage truck might sound like automation, but what if the human is still involved in coordinating routes, handling unexpected scenarios, or communicating with residents? Then it becomes augmentation. The line between the two isn't always clean. To me, the test is whether the human still holds a meaningful lever of control, whether the system allows human judgment to remain relevant. If yes, it's augmentation; if no, it's automation. That said, focusing on augmentation doesn't mean we escape ethical challenges. In fact, I think it raises its own.
If cognitive augmentation tools are expensive, patented, or require vast amounts of personal data, then we risk deepening inequality. The people who can afford better interfaces will literally think faster, learn better, and work more efficiently. There’s also the risk of dependency—when the system starts doing too much, human users might lose their base-level skills over time. And most importantly, there’s a question of agency. If the tools are so seamless and integrated that we no longer notice them—or if their default behaviors shape our decisions—are we still the ones in charge? Still, I think augmentation holds much more promise than pure automation. What we haven’t explored enough is how we can build interfaces that deliberately preserve human judgment and cognitive diversity, rather than just optimize for speed or efficiency. Regardless, these are all very possible directions under an augmentation framework, and they feel like they preserve something human at the core of the interaction.
Discovery: What surprised me most is how relevant the 1962 framework still feels. Even modern transformer-based AI models can be slotted into Engelbart’s loop logic, which shows that the future doesn’t have to be a contest between human and machine; it depends on how we choose to build it. The mouse and the other inventions that preceded today’s systems are perfect examples of this once we look at them through that lens.
We often find comfort in familiarity; the aspiration to create artificial intelligence in the likeness of humans allows us to exert control and to define metrics for success while moving toward what we consider progress. However, apart from the benefits and risks of augmented human intelligence exemplified by this week’s media, we must also consider the limitations of designing artificial intelligence within a human framework, one of the most notable being that our capacities and perspectives are limited in ways that machines’ might not be.
In “The Turing Trap: The Promise &amp; Peril of Human-Like Artificial Intelligence,” the discussion of risk centers primarily on increasing disparity: jobs that are more difficult to emulate stand to gain more benefit and wealth than those deemed more replaceable, and “superhuman performance” seems even more beneficial than machines that merely copy human styles of life and work. Though this concern rests primarily on the consequences of superhuman machines’ abilities, if we consider even the modes of “thinking” available to human-like machines, then in emulating ourselves we restrict future potentials to ideas that perpetuate the very systems we are a part of.
Especially with large amounts of human data being fed to machines, we also risk restricting modes of knowledge and learning, since the world is much more than visual and textual data. In some ways, augmenting intelligence feels like an attempt for humanity to look upon itself and understand what it means to live. However, in ignoring the practical realities and non-human possibilities of life, the risk of feeling subordinated by the “superhuman” prevents us from even noticing the ways that non-human entities contribute to our livelihoods right now.
Human intelligence, insofar as we define it as multilayered, multi-directional, and situated in context, is perhaps not so dissimilar from modes of intelligence that we deem foreign. In some ways, Engelbart’s treatment of human-machine relationships, taken to the point of augmentation, seems almost to consider the machine an extended part of us. The question then becomes: what level of influence blurs the lines such that the object of our use becomes part of our identity and mode of living?
Discovery:
I enjoyed seeing Aurora in person! It was very interesting to see the machine up close and then visit the data visualization lab, where much of its output comes to life. Learning about the bugs that had appeared in the water system was also a realistic reminder that if something can go wrong, it eventually will. That a machine in such a controlled space, developed over many years, still runs into such issues shows how engineering research can involve random surprises along the way to solving the larger, higher-priority problems.
As the Artificial Intelligence program to replicate human intellectual capacity grew, a less organized but arguably more socially and economically important, and more commercially successful, program emerged to amplify or augment human intelligence. And if robots were the embodiment of artificial intelligence, then interfaces (e.g., mice, screens, filesystems, hypertext, eye-trackers, and now neuro-implants) were the embodiment of augmented intelligence. This week, we first read a vision of augmentation from Vannevar Bush, MIT’s dean of engineering, science advisor to President Roosevelt (he proposed the National Science Foundation), and inventor of a massive analog computer. Perhaps no one was more influential in augmentation’s first implementation and expansion than Douglas Engelbart, through his 1962 report on augmenting intelligence and the “Mother of All Demos” (1968) that flowed out of it. What is the difference between the two AIs, Artificial and Augmented Intelligence? Where does augmentation end and automation begin (e.g., is a self-driving garbage-collection service an augmentation or an automation)? What was the character of the research proposed (e.g., by Engelbart) to build machines designed to augment human capacity? What kinds of technologies might emerge, or fail to emerge, under such a standard? What are the ethical considerations of the augmentation program? What are un(der)explored possibilities?
Post your response as a Comment in reply to this message.