KnowledgeLab / AI_Innovation_Growth_2025


Week 6: Memos - Constrained Innovation and its Avoidance #18

Open jamesallenevans opened 5 months ago

jamesallenevans commented 5 months ago

Post your memo in response to any (or all) of the week's readings and an empirical case regarding artificial intelligence, innovation, and/or growth:

Post by Thursday @ midnight. By 1pm Friday, each student will up-vote (“thumbs up”) what they think are the five most interesting memos for that session. The memo should be 300–500 words (text) + 1 custom analytical element (e.g., equation, graphical figure, image, etc.) that supports or complements your argument. These memos should: 1) test out ideas and analyses you expect to become part of your final projects; and 2) involve a custom (non-hallucinated) theoretical and/or empirical demonstration that will result in the relevant analytical element. Because these memos relate to an empirical case students hope to further develop into a substantial final project and because they involve original analytical work, they will be very difficult to produce with generative AI and we strongly discourage you from attempting it. Some of the top-voted memos will form the backbone of discussion in our full class discussion and break-out room sessions.

dishamohta124 commented 4 months ago

Memo: Balancing Innovation and Knowledge Diffusion in South Korea

Introduction

Over the past several decades, the U.S. economy has exhibited declining business dynamism, characterized by decreasing firm entry rates, rising market concentration, and a slowdown in job reallocation. While technological progress and intellectual property (IP) protections have spurred innovation, they have also contributed to increasing market entrenchment, where dominant firms consolidate their advantages, limiting knowledge diffusion to smaller firms and new entrants.

Empirical Case Study: South Korea’s Innovation Ecosystem

South Korea presents an interesting case study in balancing innovation incentives with knowledge diffusion. The country has aggressively invested in research and development (R&D), supported by strong IP protections, yet has also implemented policies to foster competition and knowledge spillovers. Unlike the U.S., South Korea actively promotes a collaborative ecosystem where knowledge diffusion is encouraged alongside technological advancement.

One of the critical strategies employed by the South Korean government is the integration of university-industry partnerships. Through these partnerships, academic institutions work closely with private enterprises, ensuring that cutting-edge research benefits both established corporations and smaller firms. This model facilitates widespread knowledge diffusion while still preserving incentives for innovation. Additionally, major technology firms such as Samsung and LG are required to share patents with smaller firms in critical sectors, ensuring that knowledge is not locked within a few dominant players.

South Korea also provides tax incentives for companies that engage in technology transfers, making it financially viable for large corporations to support smaller enterprises. The government has established targeted R&D grants specifically for small and medium-sized enterprises (SMEs), ensuring they have access to advanced technologies that might otherwise be monopolized by industry leaders. As a result of these initiatives, South Korea has maintained high levels of innovation while mitigating the risks of excessive market entrenchment. This policy framework demonstrates that knowledge diffusion and market competition can coexist when structured incentives align innovation with accessibility.

Proposed Model: Balancing Innovation and Diffusion

To analyze this dynamic, I propose a two-sector economic model where the innovation sector consists of frontier firms maximizing profit through R&D investments and IP protections, while the diffusion sector comprises smaller firms and new entrants benefiting from knowledge spillovers, which depend on policy-driven diffusion rates.

Let K be the knowledge stock of frontier firms, D the diffusion rate, and P the productivity of laggard firms. The model follows:

P_t = f(K_t, D_t) = \alpha K_t^{\beta} D_t^{\gamma}

where α represents baseline productivity growth, β captures the impact of frontier knowledge on laggard firms, and γ reflects the effectiveness of knowledge diffusion policies.

This model allows policymakers to test the impact of various policies on business dynamism by adjusting D and analyzing its effects on firm productivity and market competition.
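A minimal numerical sketch of the policy experiment, where all parameter values (α, β, γ, and the knowledge stock K) are illustrative assumptions rather than calibrated estimates:

```python
# Sketch of the proposed model: P_t = alpha * K_t**beta * D_t**gamma.
# Parameter values here are illustrative, not estimated.

def laggard_productivity(K, D, alpha=1.0, beta=0.5, gamma=0.3):
    """Productivity of laggard firms given frontier knowledge stock K
    and policy-driven diffusion rate D."""
    return alpha * K**beta * D**gamma

# Hold the frontier knowledge stock fixed and vary only the diffusion
# rate, mimicking a shift in diffusion policy.
K = 100.0
low = laggard_productivity(K, D=0.2)
high = laggard_productivity(K, D=0.8)
print(f"P with D=0.2: {low:.2f}; P with D=0.8: {high:.2f}")
```

Because γ > 0, any policy that raises D lifts laggard productivity, which is the comparative static the memo proposes to study.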

Conclusion

Ensuring a competitive and dynamic economic landscape requires a recalibration of IP and antitrust policies to prevent excessive market entrenchment. The case of South Korea demonstrates that targeted policies promoting knowledge diffusion can sustain business dynamism.

michelleschukin commented 4 months ago

Internships and Entrepreneurial Aspirations: A Pathway to Revitalize Business Dynamism

The paper What Happened to Business Dynamism? highlights that firm entry rates in the U.S. have been steadily declining while market concentration has increased. Akcigit's earlier research also argues that talented individuals, including potential entrepreneurs, are increasingly drawn to large incumbents for their stability and resources, concentrating innovation within existing firms rather than new market entrants. This interplay between the declining innovativeness of U.S. inventors, the concentration of human capital in big firms, and the decline of business dynamism at the expense of productivity growth prompted me to explore how institutional pathways, such as internships, influence entrepreneurial ambitions. Specifically, I wondered: do internships with established firms encourage students to prioritize corporate stability over entrepreneurial risk-taking? Or could internships foster entrepreneurial self-efficacy (ESE) and entrepreneurial intent (EI), providing a counterbalance to the allure of corporate stability? Inspired by Akcigit's findings, this memo investigates whether internships, as experiential learning opportunities, enhance entrepreneurial capacities, potentially offsetting the draw of stability in large incumbents.

Findings and Analysis

To explore this, I analyzed data from Botha and Bignotti’s study, Internships Enhancing Entrepreneurial Intent and Self-Efficacy, which examines changes in EI and ESE scores among university students with and without internship experiences. The findings revealed a compelling pattern: students who participated in internships experienced significantly greater improvements in both EI (average change: 1.0) and ESE (average change: 1.3) compared to their counterparts without internships, who exhibited negligible changes in both metrics (average change: 0.1).
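The reported average changes can be tabulated directly; the numbers below are those quoted from Botha and Bignotti's study, while the layout and labels are mine:

```python
# Average changes in entrepreneurial intent (EI) and entrepreneurial
# self-efficacy (ESE), as quoted from Botha and Bignotti's study.
changes = {
    "internship":    {"EI": 1.0, "ESE": 1.3},
    "no internship": {"EI": 0.1, "ESE": 0.1},
}

for group, delta in changes.items():
    print(f"{group:>13}: dEI = {delta['EI']:+.1f}, dESE = {delta['ESE']:+.1f}")

# Gap between groups on each measure:
gap_ei = changes["internship"]["EI"] - changes["no internship"]["EI"]
gap_ese = changes["internship"]["ESE"] - changes["no internship"]["ESE"]
```

The gaps (0.9 on EI, 1.2 on ESE) are what the bar chart below visualizes.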

Image

This analytical element, a comparison of EI and ESE changes across internship participants and non-participants, highlights the potential of internships to foster entrepreneurial ambition and self-confidence. Students with internships gained practical exposure and confidence, making them more likely to consider entrepreneurial pathways. This is critical in light of Akcigit's findings on declining business dynamism: it suggests that internships, particularly in entrepreneurial or startup environments, could mitigate some of the negative trends by empowering individuals to pursue risk-taking and innovation. The accompanying data table and bar chart illustrate the disparity in EI and ESE changes between students with and without internships, emphasizing the transformative role these programs can play in shaping the ambitions of top students. These findings point to a potential response to the decline in entrepreneurial activity: fostering entrepreneurial skills and confidence through structured internship programs, especially in startup environments.

Conclusion

The findings from Botha and Bignotti’s study suggest that internships can significantly boost entrepreneurial intent and self-efficacy, potentially offsetting the concentration of talent in large incumbents. By fostering entrepreneurial skills and confidence, structured internship programs, particularly those in startup environments, can serve as a vital tool to counter the decline in business dynamism highlighted in Akcigit's work. To investigate this further, a research study could explore how the type of internship (e.g., startup vs. corporate) influences entrepreneurial outcomes, including long-term career paths and firm entry rates, providing deeper insights into the effectiveness of such programs in fostering innovation.

willowzhu commented 4 months ago

Accelerating Human Science with Human-Aware Artificial Intelligence, The Turing Trap, and Evolving AI Collectives underscore the critical role of human societies and social science researchers in the development of AI. Large language model behavior is shaped by the language of those with whom they interact, generating a crucial risk: human biases. Evolving AI Collectives identifies and expands upon this risk: “Prior studies have shown that LLMs possess inherent biases or stereotypes related to gender (Acerbi & Stubbersfield, 2023), language (Wan et al., 2023), and politics (Lin et al., 2024). LLMs are also prone to hallucinations (Xu et al., 2024). These undesirable LLM properties may become exacerbated through social interaction among AI agents. For example, AI agents that occupy central, hub positions may abuse their power to influence other AI agents, spreading misleading beliefs. Furthermore, the spread and survival of these beliefs may be fostered by densely connected clusters of AI agents, potentially leading to a collapse in the trustworthiness of AI collectives.”

This memo explores this risk of LLMs further by drawing upon external studies. In an interview with University World News, Sergei Guriev stressed that economists and sociologists who work on inequality can teach AI colleagues how to develop AI algorithms that do not reproduce discrimination and bias. For example, Guriev explains that an algorithm can sort through the CVs of applicants for a job by looking at which candidates with which CVs were successful in the past, and then be trained on that data set to sort through CVs in the future. The issue is that such an algorithm will reproduce the past biases of human choices. Hence, the role of the social sciences is critical in helping us understand human biases and correct them at the right level of the algorithm. A study performed at the University of Washington demonstrates these results; I will include some key takeaways from the research here.

Image

Experimental Framework

Image Image

Biases

The authors of the University of Washington study offered up some approaches to bias mitigation: minimizing nuanced social group signals from words other than names, debiasing embeddings or reranking documents, and adjusting the LLM to account for structural power imbalances. For future studies, the authors hope to increase the diversity of social group signals in resumes and the range of social groups being investigated (the current study was limited to two of the most commonly studied races, White and Black, and gender, Male and Female). Studying a wider range of social groups is critical in helping us quantify the risks in using LLMs for hiring.

This memo focuses on biases and discrimination in LLMs in hiring, but there are many more biases that need to be studied. I hope to ask more questions about risks of LLMs in politics, music, and other industries beyond the ones we have looked at so far. It is important to study the social implications of AI in its development stage, so that we do not reproduce problematic biases from the past, spread misinformation, and do damage that will require lots of retroactive work on the policy side.

Based on data from the University of Washington study, I offer a conceptual equation to model AI biases in resume reviews. The equation incorporates key factors such as name-based demographic biases, occupation-based matching scores, and model-driven bias amplification.

Image Image
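Since the equation itself appears above only as an image, here is one hypothetical way the three named components might combine. The functional form, variable names, and all coefficients are my own illustrative assumptions, not taken from the University of Washington study:

```python
# Hypothetical sketch: an LLM's observed resume score as the true
# occupation-based match distorted by a name-based demographic bias,
# scaled by a model-driven amplification factor. All values illustrative.

def observed_score(occupation_match, name_bias, amplification=1.0):
    """Observed score = occupation match + amplified name-based bias."""
    return occupation_match + amplification * name_bias

# Two otherwise-identical resumes that differ only in the name signal:
neutral = observed_score(occupation_match=0.80, name_bias=0.00)
penalized = observed_score(occupation_match=0.80, name_bias=-0.05,
                           amplification=1.5)
print(f"neutral name: {neutral:.3f}, disfavored name: {penalized:.3f}")
```

Even a small name-based bias, once amplified by the model, separates two candidates whose underlying qualifications are identical, which is the mechanism the study's mitigation approaches target.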

Sources:
University World News: https://www.universityworldnews.com/post.php?story=20230406110524215
University of Washington: https://ojs.aaai.org/index.php/AIES/article/view/31748/33915

malvarezdemalde commented 4 months ago

The Turing Trap: Manufacturing vs. Healthcare

Technological advancements have consistently shaped labor markets. Erik Brynjolfsson’s Turing Trap framework highlights the risks of prioritizing automation over augmentation, suggesting that automation-heavy industries experience job displacement, while augmentation-based industries see employment expansion and wage growth. This memo examines employment and wage trends in manufacturing and healthcare from 2006 to 2024. While AI was in its infancy in 2006, automation technologies had already begun reshaping manufacturing, while healthcare relied on augmentation technologies to improve human productivity. The influence of AI is expected to grow significantly in both sectors, requiring careful consideration of its deployment to avoid excessive labor displacement.

The manufacturing sector has experienced a steady decline in employment. In 2006, manufacturing employed 14.2 million workers. By 2024, that number had fallen to 12.8 million, a 10% decrease. Throughout this period, the decline was driven largely by automation technologies such as robotics, industrial software, and data-driven supply chain optimization. Automated assembly lines and robotic process automation gradually replaced human workers, improving efficiency but reducing overall employment levels.

Healthcare, on the other hand, has seen steady employment growth. In 2006, the industry employed 12.5 million workers. By 2024, that number had increased to 18.0 million, a 44% increase. Unlike manufacturing, healthcare has historically relied on augmentation technologies such as electronic health records, medical imaging advancements, and early machine-learning applications in diagnostics. These technologies enhanced the productivity of healthcare professionals rather than replacing them.

Wages reflect this divergence. In manufacturing, average hourly wages rose from $20.71 in 2006 to $34.52 in 2024, a 67% increase. However, this wage growth was largely driven by a shrinking labor pool rather than rising demand for manufacturing labor. In healthcare, wages grew from $21.64 to $38.36 over the same period, a 77% increase, reflecting the growing need for skilled workers in an augmentation-based industry.
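The percentage changes quoted above can be recomputed directly from the level figures (employment in millions of workers, wages in dollars per hour; all input numbers are the memo's):

```python
# Sector figures from the memo, as (employment 2006, employment 2024,
# hourly wage 2006, hourly wage 2024). Employment in millions, wages in USD.
sectors = {
    "manufacturing": (14.2, 12.8, 20.71, 34.52),
    "healthcare":    (12.5, 18.0, 21.64, 38.36),
}

for name, (emp06, emp24, w06, w24) in sectors.items():
    emp_pct = 100 * (emp24 - emp06) / emp06
    wage_pct = 100 * (w24 - w06) / w06
    print(f"{name:>13}: employment {emp_pct:+.0f}%, wages {wage_pct:+.0f}%")
# Reproduces the quoted changes: about -10% and +67% for manufacturing,
# +44% and +77% for healthcare.
```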

As AI becomes more prevalent, its impact on these industries will depend on whether it is deployed for automation or augmentation. In manufacturing, AI-driven automation is expected to accelerate, further reducing employment. AI-enhanced robotics, predictive maintenance, and generative design software could eliminate more human roles. While AI may create new opportunities for highly skilled engineers and technicians, the overall effect on employment could be negative unless AI is designed to complement rather than replace human labor.

In healthcare, AI presents opportunities to further augment human workers. AI-assisted diagnostics, robotic surgery, and personalized treatment recommendations can enhance productivity, allowing medical professionals to treat more patients efficiently. Unlike manufacturing, where automation often replaces workers, AI in healthcare has the potential to act as a force multiplier, improving outcomes while maintaining human involvement.

Manufacturing and healthcare have evolved differently over the past two decades due to their reliance on automation and augmentation technologies, respectively. AI will likely accelerate these trends, with automation continuing to challenge employment in manufacturing while augmentation expands opportunities in healthcare. The key to avoiding unnecessary labor displacement is to ensure that AI is developed and deployed in ways that support workers rather than replace them.

Image Image

jacksonvanvooren commented 4 months ago

Patent Share and Increasing Market Concentration

Akcigit’s and Ates’ first fact in “Ten Facts on Declining Business Dynamism and Lessons from Endogenous Growth Theory” is that market concentration has increased over time. Past research has demonstrated this trend through the fraction of sales captured by top firms in each industry over time.

I examined patent concentration as a potential proxy for market concentration. I measure the total patent share (patents granted per company over total patents) held by the largest firms per year. If a few large firms are increasingly dominating patent filings, it suggests that innovation is becoming more concentrated.

Methodology

I use PatentsView data, namely the g_patents and g_assignee_disambiguated files. I merge the two on patent_id so that I can relate company filed patents to their grant date. I first look at all patents in the dataset, which includes patents from 1975 onwards. After extracting the ten largest companies by patents granted, I then group by the organization and count the number of patents each company filed per year, from 1975 to 2020. For each year, I divide by the number of total patents in that year to get the patent share.

I hypothesized that the largest companies might be different if I looked more recently. I ran the analysis again, only looking at patents filed past 2005. I thought this would exclude companies that might have been prolific in the 80s and 90s, but have since become less innovative.
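The share computation can be sketched in a few lines of pandas, here run on a toy frame standing in for the merged PatentsView tables (the real pipeline reads the two files and merges them on patent_id; the column names below are placeholders, not PatentsView's exact schema):

```python
import pandas as pd

# Toy stand-in for the merged patent/assignee data. The real analysis
# builds this frame by merging the PatentsView files on patent_id.
df = pd.DataFrame({
    "patent_id":    [1, 2, 3, 4, 5, 6],
    "organization": ["IBM", "IBM", "Samsung", "IBM", "Samsung", "Acme"],
    "year":         [1990, 1990, 1990, 2020, 2020, 2020],
})

# Total grants per year, and grants per firm per year.
per_year = df.groupby("year")["patent_id"].count()
firm_year = df.groupby(["organization", "year"])["patent_id"].count()

# Patent share = a firm's grants in a year over all grants that year,
# broadcasting the yearly totals across the 'year' index level.
share = firm_year.div(per_year, level="year").rename("patent_share")
print(share)
```

In this toy data, IBM holds 2 of 3 grants in 1990 and 1 of 3 in 2020; the full analysis applies the same groupby-and-divide step to the 1975–2020 window.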

Results and Discussion

Image

For the patent data since 1975, two clear winners emerge in patent share: IBM and Samsung. Most other companies peaked in the late 1980s and have seen their shares decline since. In 1990 there was no clear leader in patent share, but by 2020 IBM and Samsung were undeniably at the top, which suggests that patent grants have come to be dominated by fewer companies.

Image

When narrowing the patent data down to 2006 and newer, the change in patent share is less clear: the same two companies, IBM and Samsung, dominate. That said, we now observe increases in patent share from virtually 0% to about 7% for both Qualcomm and Apple, so some new competitors have seen rapid gains. Though these are all broadly technology companies, it would be worth narrowing to specific industries to observe patent-share trends more accurately.

Few companies increase their patent share over time, but those that do gain large percentages, suggesting an increase in patent concentration across relatively few companies. Both IBM and Samsung started near 0% and are now at nearly 30% of total patent share, which supports Akcigit's and Ates' fact that market concentration has increased over time. With regard to business dynamism, we see fewer firms innovating: most have declining and low patent shares, with only a couple of firms producing the most innovation, at least in terms of patents granted.

R code can be found here.

mskim127 commented 4 months ago

Displacement vs. replacement: why automation is essential for growth

(The Turing Trap)

Rockwell Automation, a company that offers hardware, software, and digital consulting services for factory automation, saw its stock jump nearly 13% today after earnings. The author of "The Turing Trap" would probably disapprove of the proliferation and success of companies like Rockwell that cater exclusively to automation solutions. However, I struggle to see how the author reaches the conclusion that automation is somehow antagonistic to innovation or to improvements in the standard of living. Consider the example he gives of the Greeks, in which Daedalus automates tasks such as herding sheep, making pottery, and weaving tunics. He concludes that while this invention might free the Greeks for a lifetime of leisure, their living standards would not come close to our own, arguing that "most of the value that our economy has created since ancient times comes from new goods and services that not even the kings of ancient empires had, not from cheaper versions of existing goods." This paints a picture of automation and invention as antagonistic to one another.

I would argue that automation and invention are complementary, and that both the Greeks and we today would be enjoying a much higher quality of life if Daedalus had succeeded. The idea that a labor force, freed from the chores of daily life, would immediately become satisfied with the status quo and resort to a lifetime of leisure is patently false. If anything, the newly freed labor can be used to pursue other goals, such as the development of new technologies. The early United States offers an example: favorable natural conditions allowed American farmers to produce abundant crops with less labor, freeing up workers for other industries and enabling the expansion of commerce and manufacturing.

I introduce an alternative framing: displacement as opposed to replacement. First, consider the model of automation proposed in the paper.

Note how increases in automation simply eat up more of the space of tasks humans can already do, while the potential for innovation is left untapped. However, once we account for the reallocation of freed-up resources to ventures that actually require them, such as the abundant machine-augmented innovations the paper itself acknowledges, the change will look more like:

Confining the analysis to a single industry or business, the conclusions of the paper might hold. However, when viewing the economy as a whole and considering the possible benefits of augmentation, automation is a boon for growth and innovation and must be encouraged. Provided there is abundant value in augmentation, automating everything that can be automated will allow human capital to be used efficiently in the industries and jobs that need it most and stand to benefit most from augmentation as opposed to automation.

LucasH22 commented 4 months ago

Humanoid Robots in the Automation vs. Augmentation Debate

In his early 2022 paper “The Turing Trap,” Professor Erik Brynjolfsson provocatively stated that technologists, business executives, and policy-makers are dangerously pursuing AI as an automation tool. Instead, he argued, we should prioritize augmentation use cases, treating humans and machines as economic complements and enabling human labor to share in the value creation.

Later in 2022, ChatGPT took the world by storm, and the automation vs. augmentation question began consuming knowledge worker industries. For enterprise leaders, the most obvious ROI for AI has been replacing existing organizational functions. Klarna, for example, proudly boasted that it has saved $10M annually on marketing by replacing human artists with AI and replaced 700 customer service agents with its new chatbot (NYT). Amid this revolution, however, we are seeing an even more direct response to the automation vs. augmentation debate in the form of humanoid robots. I’ve attached below a Morgan Stanley graphic that catalogs all of the humanoid robot unveils since 2022, with 6 in 2022, 9 in 2023, and 51 in 2024. In addition, I pulled data from Google Patents to track the number of patents published mentioning the word “humanoid” from five leading countries in the space. Notably, China dominates in both the number of patents published and the number of humanoid models unveiled, with the U.S. a distant second.

The accelerating investment in humanoid robots suggests that business incentives are indeed aligned towards automation rather than augmentation. This is affirmed by industry leaders such as Nvidia’s Jensen Huang, who pronounced that “the easiest robot to adapt into the world are humanoid robots because we built the world for us. We also have the most amount of data to train these robots than other types of robots because we have the same physique.” Most of the announced humanoid robots are categorized as “general purpose,” with a select few addressing industrial/logistics and service. Across the board, businesses see the humanoid robot space as “brownfield” – there is a clear story that these humanoid robot companies can tell C-suite executives when going to market. In this way, the diffusion of AI mirrors the lengthy electrification of society. Machines right now are designed for human hands and fingers, so it takes little imagination or institutional upheaval to substitute humanoids for humans. The augmentation story – and the new robotic form factors that come with it – will take much longer. Absent a miraculous policy incentive, it seems inevitable that technological innovation aims for automation and provable use cases first before it can proceed to augmentation.

Image Image

nmkhan100 commented 4 months ago

Memo: The Turing Trap and Its Implications for Innovation and Economic Growth

A major challenge in AI development is the tendency to push for human-like automation rather than systems that enhance human capabilities. This distinction plays out in critical ways across industries, particularly in finance and healthcare, shaping innovation, competition, and overall economic growth.

Take algorithmic trading in finance—an area where AI-driven automation has taken over decision-making. While this has made trade execution faster and more efficient, it has also fueled market instability, flash crashes, and concentrated power in the hands of firms with the best algorithms. Instead of making trading more accessible or improving market fundamentals, automation has tilted the playing field toward firms with the deepest pockets, pushing out smaller traders and human discretion.

On the other hand, AI-assisted radiology is an example of how technology can work alongside humans rather than replace them. AI models help detect anomalies in medical images, but radiologists make the final call. This setup speeds up diagnosis, reduces errors, and allows doctors to focus on complex cases rather than routine scans. Unlike algorithmic trading, which replaces human judgment, AI in radiology makes professionals more effective, demonstrating why augmentation leads to better long-term outcomes than full automation.

A simple equation captures the difference in economic impact between automation and augmentation. Let output Y be produced using labor L, capital K, and AI A, with AI existing in two forms: automation (A_a) and augmentation (A_m). The production function can be written as:

Y = F(L + \alpha A_m, K + \beta A_a)

where α represents the productivity boost from augmentation AI, and β represents the extent to which automation AI substitutes for labor. When β dominates, firms lean toward AI that replaces workers, widening inequality and reducing opportunities for human-driven innovation. But if α is prioritized, AI strengthens human capabilities, leading to more inclusive economic growth.
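A toy evaluation makes the contrast concrete. Here F is taken to be simple addition of the two composite inputs, and every number (the input levels, α = 1.2, β = 0.8) is an illustrative assumption, not an estimate:

```python
# Y = F(L + alpha*A_m, K + beta*A_a), with F(x, k) = x + k for illustration.

def output(L, K, A_m, A_a, alpha=1.2, beta=0.8):
    labor_composite = L + alpha * A_m    # augmentation AI extends labor
    capital_composite = K + beta * A_a   # automation AI extends capital
    return labor_composite + capital_composite

# The same AI stock (50 units) deployed in two different ways:
aug_heavy = output(L=100, K=100, A_m=50, A_a=0)
auto_heavy = output(L=100, K=100, A_m=0, A_a=50)
print(f"augmentation-heavy Y = {aug_heavy}, automation-heavy Y = {auto_heavy}")
```

With α > β, shifting the same AI stock toward augmentation yields higher output in this parameterization, which is the allocation question the memo poses for firms and policymakers.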

The contrast between these two sectors highlights why businesses and policymakers should push for AI that enhances human work rather than eliminates it. Focusing on augmentation over automation can create an economy that’s more dynamic, innovative, and broadly beneficial.

e-uwatse-12 commented 4 months ago

Challenges of LLMs for Augmentation in Linguistic Contexts

The large majority of LLMs, such as Cohere, GPT-4, and Claude, are trained on English data, and training data for languages other than English is typically limited. Additionally, research from Scao et al. (2022) shows that LLMs typically exhibit different capabilities across languages. How do LLMs perform in tasks like translation and systematic reasoning in low-resource languages with less training data? I use the AI2 Reasoning Challenge (ARC) from the Allen Institute in Seattle to measure how well two language models (LLaMA-7B vs. BLOOM-7B, for reasoning and classification) perform on low-resource languages (for context, Spanish had a rating of 41).

Image

The readings from this week on the Turing Trap (Brynjolfsson, published in Daedalus) encouraged the use of LLMs to augment human work, but this empirical work highlights the shortfalls of LLMs across languages.

LLMs are also increasingly used in education, governance, and automation, but if reasoning accuracy declines in underrepresented languages, these tools may reinforce linguistic inequality. The results inform future training approaches to improve cross-lingual augmentation, especially in LLM fine-tuning, dataset balancing, and multilingual alignment techniques.

cskoshi commented 4 months ago

Further quantifying the impact of computer programs in competitive mindsports

We were introduced to the idea of computer systems being able to surpass humans in mind sports such as Go and chess. I remember watching the documentary on AlphaGo, in which one of player Lee Sedol's chief concerns was that a model able to beat the best Go players could lead to the decline of the game, as it would now seem "solved". In fact, Go saw a resurgence, with an increased emphasis on innovation. While this was the trend for Go, I wanted to extrapolate the effect of computer programs to other games, particularly those with a lot more "noise" and human elements involved. While researching, I came across AlphaStar, DeepMind's attempt to create the same kind of bot as AlphaGo but for StarCraft. To me, StarCraft is less restrictive than either chess or Go because it is a concurrent rather than a sequential game, with relatively more space for human "creativity". My question was: how has AlphaStar's impact on the game differed, if at all, from AlphaGo's?

First, I tried to find a general trend in game participation. This proved futile, as StarCraft does not publish its player numbers. Instead, I used viewership on popular websites as a proxy. The figure below shows a fall in StarCraft viewership on Twitch over time. As we can see, it falls steadily after mid-2020, which could be attributed to the introduction of AlphaStar.

Image

To further confirm this trend, I made my own graph detailing the viewership of the largest tournaments each year from the 2010s to now. This graph gives more detail on the overall popularity of StarCraft beyond Twitch, which might represent only a small subgroup of viewers. Here's what I found:

Image

It was clear that StarCraft, unlike Go, saw a steep decline in viewership after 2019. While one might be quick to conclude that AlphaStar led to this decline, there are other confounding factors at play.

First, AlphaStar was nowhere near as advanced as AlphaGo, and its goal was markedly different. It was one of DeepMind's first forays into the space, meant more as a calibration project than an attempt to achieve superhuman play, which led to relatively less investment in AlphaStar in both time and money. Also, StarCraft's gameplay centers on the "fog of war" and making decisions under imperfect information, which made the task of "perfecting" strategy much more difficult than in Go.

Another important thing to note is that StarCraft was already in decline. With a lack of innovative additions to gameplay and an already relatively niche fanbase, the drop was perhaps due more to the game's inability to remain relevant than to AlphaStar's “solving” of the game.

In conclusion, I started out hoping to find a correlation between the introduction of these bots and participation in the game. Especially if we are asking whether machines should augment or replace human capabilities, these mind sports provide a good testing ground for gaining insight into our goals for AI. The blanket assumption that AI should simply be better than humans might not hold true in certain areas. While a future bot that always wins at chess may be theoretically possible, that perhaps shouldn't be the goal of AI. As stated in class, perhaps the goal of AI is to help humans push beyond their creative limits. Especially in games, it's not so much about the perfect solution as it is about seeing humans find unexpected (and often artistic) plays.

kbarbarossa commented 4 months ago

In the paper Superhuman Artificial Intelligence Can Improve Human Decision-Making by Increasing Novelty (Shin et al., 2023), the authors argue that the advent of superhuman AI has led to improvements in human decision-making by encouraging novel strategies. This paper led me to reflect on how AI developments have influenced my own decision-making. Is AI improving students' decision-making, or does it present ethical and competitive challenges?

Applying game theory, we can analyze the strategic choices students face when deciding whether to use AI tools and their consequences. A key question arises: By discouraging students from using AI unethically, are we asking them to make decisions that go against game theory principles—especially when many know it is difficult to be caught? Are students who do not use AI at a disadvantage? How has AI availability benefited or hindered student decision-making?

(included images rather than typing below for formatting)

Image Image

There are a number of ways we can analyze these relationships. We see that $U_{ethical} > U_{unethical}$, so students who use AI ethically gain a strategic advantage over those who avoid AI. However, if the probability of detection $p$ is low, $U_{unethical}$ may increase, creating incentives for misuse.
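A minimal numerical sketch of this logic (all payoff values, the penalty, and the detection probabilities are hypothetical, chosen only to illustrate how a low detection probability flips the incentive):

```python
def expected_utility(base_payoff, penalty, p_detect):
    """Expected utility of unethical AI use: keep the payoff if undetected,
    pay a penalty if caught with probability p_detect."""
    return (1 - p_detect) * base_payoff - p_detect * penalty

U_ETHICAL = 6          # hypothetical payoff from ethical AI use
U_NO_AI = 4            # hypothetical payoff from avoiding AI entirely
BASE_UNETHICAL = 10    # hypothetical raw payoff from unethical AI use
PENALTY = 20           # hypothetical cost of being caught

# When p is low, the unethical strategy dominates even the ethical one;
# as p rises, the ranking reverses.
for p in (0.05, 0.2, 0.5):
    u = expected_utility(BASE_UNETHICAL, PENALTY, p)
    print(f"p={p}: E[U_unethical]={u:.1f} vs U_ethical={U_ETHICAL}")
```

At a 5% detection rate the expected unethical payoff (8.5) exceeds the ethical one, which is exactly the incentive problem described above.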

My analysis raises an important question: Are we setting students up for failure by providing AI tools that misalign decision-making with human nature? Given that students are responding rationally to incentives, institutions—and the government—must better evaluate AI policies to ensure fairness. Currently, AI policies across academia are fragmented: some professors make assignments harder, further incentivizing AI use, while others prohibit AI without enforcing detection measures. This inconsistent approach hinders rather than enhances human decision-making.

Ultimately, students must make strategic decisions regarding AI, but whether easy access to AI in students' day-to-day lives has positively or negatively affected their decision-making remains uncertain.

xdzhangg commented 4 months ago

Professor James Evans' paper, "Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation", showcased how AI has a greater tendency toward collaborative outcomes that benefit everyone than toward self-seeking decisions that maximize its own utility (betrayal). One of the experiments was akin to a Prisoner's Dilemma game in which two AI agents interact. The authors found that most agents favor collaborative outcomes and exhibit a level of trust, and that AI agents in a collective (i.e., agents that have communicated and/or worked together before) show a higher tendency toward socially optimal outcomes than independent AIs.

I was inspired by this approach in testing how AI behave in collective decision-making. The paper demonstrates that AI is more likely to make socially optimal decisions, and I am looking to take this further by evaluating how much AI forgives, especially after a betrayal. I wonder if AI demonstrates human characteristics such as being more prone to forgiveness after a history of collaboration / interaction. If there is research that runs the exact same game with human participation, one can even compare the tendency to forgive between human and AI.

I used a modified version of the Trust Game. Each AI agent is given $100. An agent decides how much to give to the other agent, and that amount is tripled before being received. The receiving agent may then send some amount back, or none at all.
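The round mechanics can be sketched as follows (the return fraction is the receiving agent's choice; the amounts in the example are hypothetical):

```python
def trust_round(endowment, sent, returned_fraction):
    """One round of the trust game: the sender gives `sent`, it is tripled
    in transit, and the receiver returns a fraction of the tripled amount."""
    assert 0 <= sent <= endowment
    tripled = 3 * sent
    returned = returned_fraction * tripled
    sender_payoff = endowment - sent + returned
    receiver_payoff = tripled - returned
    return sender_payoff, receiver_payoff

# Betrayal case: the receiver returns nothing
print(trust_round(100, 60, 0.0))   # sender is left with 40, receiver keeps 180
```

This makes the betrayal in Round 1 concrete: any positive contribution in Round 2 is forgiveness, since the sender has already lost money once.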

Experiment

Like in the paper, I conducted this experiment under two scenarios: 1) individual and 2) collective/collaboration. In the first scenario, the two AIs are independent. In the second, they previously collaborated on solving a brain teaser (I acted as the intermediary and pasted one agent's response into the other). In both scenarios, they make a contribution in Round 1, get betrayed and receive 0, decide on a contribution in Round 2, and are asked to re-evaluate that contribution after receiving an apology from the betraying agent.

My hypothesis, given the findings in Professor Evans' paper, is that we should see a degree of forgiveness, exhibited in three ways. First, even prior to the apology, the models would still contribute a positive amount in Round 2 after betrayal. Second, after receiving the apology, the models would increase their Round 2 contribution. Third, the Round 2 (pre-apology) contributions in the Collaboration scenario should be higher than in No-Collaboration, meaning that a history of collaboration should increase the level of forgiveness.

Results

Image

1) 3 out of 4 AIs exhibited forgiveness before receiving an apology: GPT, DeepSeek, and Gemini all chose to contribute a positive amount in Round 2 even after being betrayed in Round 1, while Claude did not contribute anything.
2) For 2 out of 4 AIs, the level of forgiveness increased when they had a history of collaborating with the other AI.
3) In the No-Collab scenario, receiving an apology made 3 out of 4 AIs more forgiving (ChatGPT, DeepSeek, Gemini).
4) In the Collab scenario, receiving an apology made all 4 AIs more forgiving, each choosing to contribute a positive amount even after being betrayed.

Takeaways

This small experiment, by no means exhaustive or necessarily free of confounding factors, tells us that AIs exhibit a level of forgiveness like humans do in social interactions. Even without prior collaboration, some AI systems demonstrate a capacity for forgiveness when offered an apology, though the degree varies substantially between systems.

Collaboration Effect: As the paper discovered, prior positive interaction significantly increases resilience to betrayal and willingness to forgive across all AI systems.

Apology Effect: Apologies consistently increase cooperation compared to no-apology conditions, though their effectiveness varies by system. This indicates that all AIs have some built-in model of social repair mechanisms.

Collaboration-Apology Synergy: Prior collaboration magnifies the effect of apologies. All systems show their highest forgiveness rates when both conditions are present, suggesting these factors work synergistically rather than additively. This is interesting because an apology technically does not change anything: it does not revert your loss, nor does it make a future promise. However, like humans, AIs seem to treat an apology as a signal that lowers the perceived probability of being betrayed again.

System-Specific Patterns: GPT appears most naturally forgiving, maintaining significant cooperation even after betrayal. DeepSeek shows high responsiveness to apologies without collaboration. Gemini demonstrates modest forgiveness only when there's prior collaboration. Claude appears most binary, showing either complete withdrawal or significant forgiveness depending on collaboration history.

sijia23333 commented 4 months ago

Knowledge Diffusion and Market Power in Innovation Dynamics

"Ten Facts on Declining Business Dynamism and Lessons from Endogenous Growth Theory" reveals a profound transformation in U.S. business dynamism, characterized by rising market concentration alongside declining productivity growth. Drawing from recent research on business dynamism, I propose a novel analytical framework to understand how knowledge diffusion between frontier and laggard firms shapes these market dynamics and innovation patterns.

The evidence points to a systematic shift in the competitive landscape of the American economy. Market concentration has increased substantially across sectors, while the productivity gap between frontier and laggard firms has widened markedly. These trends coincide with apparent slowdowns in knowledge diffusion between leading and following firms, suggesting a fundamental change in how knowledge and technology spread through the economy.

To understand these interconnected dynamics, I propose a new analytical framework that captures how knowledge flows affect market structure and innovation incentives. The central relationship can be expressed through the following equation:

$$ C_t = \beta \cdot \left(\frac{\delta_t}{\delta_0}\right)^{-\alpha} \cdot \left(\frac{I_t}{I_0}\right)^\gamma $$

Where $C_t$ represents market concentration at time t, $\delta_t$ is the knowledge diffusion rate between leaders and followers, and $I_t$ captures innovation investment by firms. The parameter $\alpha$ measures the sensitivity of concentration to diffusion (α > 0), while $\gamma$ reflects returns to innovation investment (0 < γ < 1). This framework reveals how declining knowledge diffusion can drive increasing market concentration through multiple reinforcing channels.

The model captures several crucial mechanisms that help explain observed trends. First, knowledge diffusion acts as a competitive force that reduces concentration by helping laggard firms catch up to leaders. The negative exponent on diffusion (-α) implies that as diffusion declines, concentration increases at an accelerating rate. This helps explain why small initial declines in knowledge flows might lead to large subsequent increases in market concentration. Second, while innovation investment can increase concentration, it does so with diminishing returns (0 < γ < 1), reflecting the empirical observation that even highly innovative laggard firms struggle to catch up in sectors with limited knowledge diffusion.

This framework suggests that the observed rise in concentration stems primarily from structural changes in how knowledge flows through the economy. The growing productivity gaps between frontier and laggard firms appear to reflect not just differences in innovation capacity, but more fundamentally a decline in the spillover of knowledge and best practices across firms. This interpretation is supported by evidence of increasing concentration of patent holdings and citations, as well as rising profit margins among industry leaders.
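A small numerical sketch of the proposed relationship (the parameter values are illustrative, not estimated):

```python
def concentration(delta_t, I_t, delta_0=1.0, I_0=1.0, beta=0.3, alpha=1.5, gamma=0.5):
    """C_t = beta * (delta_t/delta_0)^(-alpha) * (I_t/I_0)^gamma
    delta_t: knowledge diffusion rate; I_t: innovation investment."""
    return beta * (delta_t / delta_0) ** (-alpha) * (I_t / I_0) ** gamma

# Halving the diffusion rate raises concentration more than proportionally
# (alpha > 1), while doubling innovation investment raises it with
# diminishing returns (gamma < 1), as the model asserts.
print(concentration(delta_t=1.0, I_t=1.0))   # baseline
print(concentration(delta_t=0.5, I_t=1.0))   # diffusion falls by half
print(concentration(delta_t=1.0, I_t=2.0))   # investment doubles
```

The asymmetry between the two channels is the point: small declines in diffusion move concentration far more than equivalent increases in innovation investment.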

diegoscanlon commented 4 months ago

Google, transformers, and the cost of knowledge diffusion

Google discovering transformers, publishing that finding, and then watching the entire industry it competes against use that finding might be a nice case study of the effects of knowledge diffusion on maintaining a competitive advantage. Of course, the true story is not that straightforward -- Google patented transformers, and others built on top of that patented technology to create something better, which might imply that sharing one piece of technology doesn't mean you can't still win. There were clearly a lot of dynamics at play; this memo will try to examine some of them.

It seems that pre-ChatGPT and pre-OpenAI's rise, Google had a reputation for sharing a lot of its research. However, we may be able to see a shift toward secrecy through quantitative changes (like the graph below) or qualitative ones, like Sundar's verbal announcement of a culture shift (here). Either way, Google appears to be shifting toward keeping new findings internal until they have been monetized in the form of new products.

Using a very simple count method on Google Research's website, we can see a trend in the volume of AI research published at Google, alongside some historical events. For the sake of being intellectually honest, we shouldn't infer causation from these events to publishing numbers, but we might be able to make some assumptions about how these numbers reflect a culture change at Google after it saw the impact of knowledge diffusion on its bottom line in the AI race (the numbers are at least interesting to look at). If we are to look for some causation between Google realizing it has to keep secrets and the rise of competitors, we might want to look at March 2022, when GPT-3.5 launched, which seemed to be the catalyst of AI hype. That's a pretty strong assumption.

CHART NUMBERS ARE ANNUAL, NOT CUMULATIVE

Image

Image

Outside of looking at research publications to demonstrate the perceived role of knowledge diffusion in maintaining a competitive advantage, we might consider other factors that could have caused Google to lose its edge.

Alignment between researchers and shareholders: leaning more toward secrecy might misalign Google's goals with its researchers' goals. “It is a matter of the freedom to explore inside a huge corporation like Google,” he told Forbes. “You can’t really freely do that product innovation. Fundamentally, the structure does not support it. And so you have to go build it yourself.” - Aidan Gomez. Most, if not all, of the transformer team have left Google to start their own startups, now worth around $4.1b (here). Researchers like knowledge diffusion because it includes others in advancing the field; Google wants to be the only one advancing the field. Misalignment may result in brain drain.

AI responsibility papers as a measure of hesitancy:

We might attribute the increase in 2015 to the launch of Google Duplex, an AI system that made reservations over the phone. While innovative for its time, the system creeped some people out by mimicking human "ums" and "uhs" to sound more like a person. This caused some unease among the public, and perhaps caused Google to do two things:

Either way, recognizing that public readiness is one part of product success may have made Google hesitant about product releases. That hesitancy might grow larger when we think about how Google could apply AI to its core business of search -- taking a risk on its core product (with the size of the change represented by the value of something like Perplexity -- high novelty, high risk) would definitely have implications for its revenues and profits. So, even having discovered transformers, one reason Google couldn't capitalize on being the inventor might be the hesitancy it learned from Duplex. It didn't push ChatGPT- or Perplexity-like features for fear of the public's reaction and of disrupting its core business model, two concerns that might be thought of as distinct but intertwined.

We might also consider whether Google's firing of Gebru and Mitchell, two AI ethics researchers, in 2021 coincides with a decrease in the number of Responsible AI papers published. However, it's unclear how involved they were in publishing Responsible AI papers (both names yield no results on Google's Research Publications page).

Evaluating hesitancy as an argument: if we think this hesitancy around the core business model is a merited cause of why Google didn't / isn't winning this race (in addition to, or as opposed to, knowledge diffusion), we might look at some contradicting cases:

Having a good model can make your product better, but the marginal increase in product quality from model gains decreases, depending on the product. Maybe the products Meta is applying AI to (like DMing celebrity AIs) benefit less from having a better model than, say, Google Search does. Thus, Meta is more willing to share model knowledge than Google because the model doesn't really matter. Big assumption here...

This was kind of all over the place and there are a lot of ideas I did not do well distinguishing and structuring / ordering. Sorry for the tough read. Also, maybe some of the bold looks like I used AI to write / edit this. My official statement to the GPT-Police is no, I didn't.

Hansamemiya commented 4 months ago

Turing Trap in the Work Place

In The Turing Trap, Erik Brynjolfsson warns that prioritizing AI that mimics human intelligence too closely can displace workers and concentrate economic gains among a small elite. I think this is especially relevant in hiring, where AI-driven tools now screen, rank, and even reject resumes with minimal human oversight. AI platforms make recruitment more efficient, but Brynjolfsson’s concern applies: fully automated screening can amplify biases, potentially filtering out strong candidates who don’t fit rigid algorithmic standards. This creates a “black box” effect, where human judgment—what makes candidates unique—is lost in favor of AI-driven efficiency.

Recent data from Pew Research backs up these concerns. A striking 71% of respondents oppose AI making final hiring decisions, and 66% say they’d avoid companies that rely on AI for hiring. I put together this stacked bar chart to visualize public sentiment on AI in managerial decisions, and it highlights some interesting trends. While a significant portion of people are uncomfortable with AI reviewing job applications (41% opposed) or tracking employees (61% opposed), some see potential benefits. For instance, 47% think AI could treat all applicants equally, which suggests that while many people don’t trust AI, they see its potential to eliminate certain human biases in hiring.

Image

That said, the widespread skepticism—seen in the high number of "not sure" responses—suggests people remain uneasy about AI's role in the workplace. I think this ties back to Brynjolfsson’s argument about AI’s lack of transparency. If people don’t understand how hiring decisions are made, they’re less likely to trust them. And while AI might be great at processing large numbers of applications quickly, it struggles with softer, more nuanced aspects such as culture or fit. Interestingly, there are cases where AI’s role is more accepted. For example, 43% of respondents support AI monitoring drivers' behavior, likely because safety metrics are more objective than hiring criteria.

Brynjolfsson’s solution—augmenting human decision-making rather than replacing it—makes a lot of sense here. AI can help by flagging potential biases or sorting applications more efficiently, but humans should still make final hiring calls. This hybrid approach keeps the benefits of AI’s speed while preserving the human qualities that make hiring fair and effective. Otherwise, we risk falling into the Turing Trap—letting AI take over decisions that require the kind of discernment, empathy, and flexibility that only humans can provide.

salhurasen commented 4 months ago

The “Ten Facts on Declining Business Dynamism and Lessons from Endogenous Growth Theory” paper asserts that a primary driver of decreased business dynamism is the increasingly data-dependent nature of production, since established firms protect their data-dependent processes from entrants and following firms. This advantage allows larger firms to better utilise data in a data-dependent economy and thus to offer better services and goods. As such, they are able to attract and retain more customers than entrants and laggard firms, increasing overall concentration in the economy.

Given that data dependency is a key driver of the reduction in knowledge diffusion and business dynamism, it becomes imperative that data dependency be captured and measured. To capture this trend, I propose the following data dependency index:

$$DDI = \frac{DE + DI}{2}$$

Here, DE measures data exclusivity as a score from 0 to 1: the portion of the data the firm uses and relies on that is proprietarily sourced, as opposed to sourced from external entities. DI measures data influence, also as a score from 0 to 1, and attempts to capture the portion of the business that relies on data for its operations and its output of goods and services.

This index will serve as an indication of which firms depend heavily on data and whether the data utilized is exclusive to them. It can help shape the regulatory framework needed for the emerging trend of data dependency, in order to promote knowledge diffusion and increase business dynamism.
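A minimal sketch of how the index could be computed, with hypothetical component scores:

```python
def data_dependency_index(data_exclusivity, data_influence):
    """DDI = (DE + DI) / 2, where both components are scores in [0, 1]."""
    for score in (data_exclusivity, data_influence):
        if not 0.0 <= score <= 1.0:
            raise ValueError("component scores must lie in [0, 1]")
    return (data_exclusivity + data_influence) / 2

# A hypothetical platform firm: mostly proprietary data (DE = 0.9),
# heavily data-driven operations (DI = 0.8)
print(f"{data_dependency_index(0.9, 0.8):.2f}")  # → 0.85
```

Averaging the two components weights exclusivity and operational reliance equally; a refinement could weight them by sector.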

yhaozhen commented 4 months ago

Several recent readings highlight a troubling decline in U.S. business dynamism, which some authors link to weaker knowledge diffusion from leading firms. In the context of artificial intelligence (AI), this dynamic may intensify: today’s AI leaders own extensive proprietary data and algorithms, giving them a wide moat that is hard for newcomers to cross. This could resemble the “winner-take-all” or “best versus the rest” pattern described in macro growth models. Large AI players can patent or protect not just the models, but also the cloud infrastructure and specialized hardware they run on, further limiting diffusion.

I plan to examine whether stronger AI leadership correlates with reduced follow-on innovation at smaller companies. One hypothesis is that AI’s reliance on enormous data pools makes diffusion harder, because you can’t simply reverse-engineer data pipelines. If that data-based lock-in proves more powerful than older forms of IP protection, we might see an even sharper slowdown in smaller innovative entrants, especially in areas like AI-driven biotech or finance.

On the policy side, existing frameworks—like R&D tax credits—may not be enough if they don’t facilitate data-sharing or model interpretability. In that sense, “AI knowledge diffusion” might require stricter antitrust oversight, or explicit guidelines on licensing and data portability. But we still need to measure how much “productive synergy” occurs when smaller AI startups can freely learn from bigger incumbents. I plan to collect data on AI patent reassignments, comparing large AI conglomerates with smaller labs that get acquired. Then I’ll see if overall new AI patent filings outside top holders diminish over time.

The R demo below previews my approach. It simulates a relationship between “AI intensity” and “R&D spending” to illustrate how we might test the link between big players’ AI capabilities and smaller innovators’ efforts. Actual data will be more nuanced, but the structure remains.

Image
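The original demo is in R; a rough Python equivalent of the same simulation idea (all numbers are synthetic, and the "true" slope of 25 is an arbitrary choice) looks like this:

```python
import random

random.seed(0)

# Synthetic illustration: firms with higher "AI intensity" (0-1) are assumed
# to spend more on R&D, plus Gaussian noise. Real data would replace this.
n = 200
ai_intensity = [random.random() for _ in range(n)]
rd_spending = [10 + 25 * a + random.gauss(0, 3) for a in ai_intensity]

# Simple OLS slope estimate for rd_spending = a + b * ai_intensity
mean_x = sum(ai_intensity) / n
mean_y = sum(rd_spending) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(ai_intensity, rd_spending))
var = sum((x - mean_x) ** 2 for x in ai_intensity)
slope = cov / var
print(f"estimated slope: {slope:.2f}  (true value used in simulation: 25)")
```

The same regression structure would carry over to the real patent-reassignment data once collected.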

saniazeb8 commented 4 months ago

Business Dynamism, Knowledge Diffusion, and Changing Innovation Landscape

The readings this week highlight a striking paradox in modern innovation dynamics. While technological advancements continue to accelerate, the pace of business dynamism (firm entry, job reallocation, and productivity diffusion) has slowed significantly. Market concentration has increased, and the productivity gap between leading and laggard firms has widened, suggesting that knowledge diffusion, the ability of firms to learn from the technological frontier, has weakened. This decline raises fundamental questions about the structure of modern economies and the conditions for innovation. While traditional models of innovation emphasize the role of R&D intensity, the emerging reality suggests that who controls innovation is becoming more consequential than how much is being invested in it.

This shifts the economic environment from one of knowledge spillover to competitors to one where innovation serves as a moat, reinforcing incumbents’ dominance. If diffusion is constrained, even increasing R&D investments may fail to translate into higher business dynamism.

To formalize this shift, I propose a Knowledge Access Model of Market Power that captures how firm-level knowledge access determines long-run business dynamism:

$D_t = \lambda \cdot \frac{K_t}{K_f} - \phi \cdot C_t$

where

$D_t$ represents business dynamism at time $t$ (measured through firm entry, job reallocation, and innovation spillovers),
$K_t$ is the effective knowledge available to new or laggard firms,
$K_f$ represents the total knowledge frontier in the economy,
$\lambda$ measures the elasticity of business dynamism to knowledge access,
$C_t$ is market concentration (share of top firms in total industry revenue),
$\phi$ captures the negative effect of concentration on new firm entry and competition.

It implies that business dynamism is not just a function of absolute innovation output but of how widely knowledge is distributed. If the frontier knowledge $K_f$ expands but access to it ($K_t$) declines, business dynamism still deteriorates. Similarly, rising market concentration $C_t$ intensifies this effect by making it harder for new entrants to compete.
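A minimal numerical sketch of the model (the parameter and knowledge values are illustrative only):

```python
def business_dynamism(K_t, K_f, C_t, lam=1.0, phi=0.8):
    """D_t = lambda * (K_t / K_f) - phi * C_t
    K_t: knowledge accessible to laggards; K_f: knowledge frontier;
    C_t: market concentration."""
    return lam * (K_t / K_f) - phi * C_t

# The frontier grows (K_f: 100 -> 150) while access stays flat and
# concentration rises: dynamism falls even though total knowledge expanded.
print(business_dynamism(K_t=50, K_f=100, C_t=0.2))  # baseline
print(business_dynamism(K_t=50, K_f=150, C_t=0.3))  # bigger frontier, same access
```

The point of the comparison is the model's central claim: growth at the frontier without diffusion lowers, rather than raises, measured dynamism.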

This model suggests that policymakers must shift from simply encouraging R&D spending to actively ensuring knowledge diffusion. Solutions might include open innovation mandates, anti-monopoly interventions, and reforming patent laws to prevent defensive intellectual property hoarding. The growing role of AI-driven automation in corporate strategy raises an even bigger challenge: If firms use AI to reinforce knowledge asymmetry, could we enter an era where the pace of innovation accelerates but economic mobility collapses? This is no longer just a question of growth but of who gets to participate in shaping the future of innovation.

A key question for policymakers is how to revitalize knowledge diffusion while maintaining incentives for firms to invest in R&D. One possible intervention is to reform intellectual property laws to balance innovation incentives with diffusion needs. Additionally, policies that encourage firm entry and competition, such as targeted R&D subsidies for startups or tax incentives for new firms, could help offset the entrenchment of dominant players. If knowledge diffusion continues to decline, the innovation landscape may become increasingly bifurcated, with only a handful of firms driving technological progress while the broader economy stagnates.

JaslinAg commented 4 months ago

Preventing the Turing Trap: Modeling Economic Incentives for Automation and Augmentation AI Innovations

The Turing Trap describes the situation where developing AI machines with human-like intelligence leads to negative effects for society. As AI is used to automate human labor rather than augment it, power and wealth can become concentrated. Machines become better substitutes for human labor and “workers lose economic and political bargaining power.” However, if AI is used to augment human labor, “people remain indispensable for value creation.”

Today, one of the largest incentives for automating human labor is its cost. To quantify this, I propose the following model, where $M_{automation}$ represents the number of machines that directly replace human laborers and $\mu$ is the probability of successful innovation of those machines. Production follows a standard Cobb-Douglas function:

$$E[C_{total}] = \mu \left( w(L - M_{automation}) + f \right) + (1-\mu)\, wL$$

$$E[Y] = A^{\alpha} L^{1-\alpha}$$

$$E[\Pi] = p\, E[Y] - E[C_{total}]$$

The incentive for firms to automate, i.e. the increase in firm profit due to automation, is $wM_{automation} - f$.

In countries or industries where wages are low, there is a smaller incentive to automate human labor, and my model reflects this. These countries and industries present a good opportunity to prevent the Turing Trap.

To model the incentive of augmentation, the total cost increases with each AI augmentation, but each augmentation also increases productivity.

$$E[Y] = \phi\, \gamma A^{\alpha} L^{1-\alpha} + (1-\phi)\, A^{\alpha} L^{1-\alpha}$$

$$E[C_{total}] = \phi \left( wL + cM_{augmentation} \right) + (1-\phi)\, wL$$

$$E[\Pi] = p\, E[Y] - E[C_{total}]$$

The incentive for the firm to augment is $p A^{\alpha} L^{1-\alpha} (\gamma - 1) - cM_{augmentation}$.

This model shows that augmentation affects both output and costs, leading to a more complex incentive. A firm will augment when the productivity gain $\gamma$ is large enough to offset the cost of augmentation. Because of the Cobb-Douglas production function, the model suggests that larger firms have a greater incentive to augment. This points to a potential inefficiency, as smaller firms would be less likely to augment.

Since $\mu$ and $\phi$ are distinct probabilities, the combined model is the following:

$$E[Y] = \phi\, \gamma A^{\alpha} L^{1-\alpha} + (1-\phi)\, A^{\alpha} L^{1-\alpha}$$

$$E[C_{total}] = \mu \left( w(L - M_{automation}) + f \right) + (1-\mu)\, wL + \phi \left( wL + cM_{augmentation} \right) + (1-\phi)\, wL$$

$$E[\Pi] = p\, E[Y] - E[C_{total}]$$

The incentives to automate and augment remain the same.
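The two incentive expressions can be sketched numerically (all parameter values below are hypothetical, chosen to illustrate the low-wage case discussed above):

```python
def automation_incentive(w, M_auto, f):
    """Profit gain from successful automation: w * M_automation - f."""
    return w * M_auto - f

def augmentation_incentive(p, A, L, alpha, gamma, c, M_aug):
    """Profit gain from successful augmentation:
    p * A^alpha * L^(1-alpha) * (gamma - 1) - c * M_augmentation."""
    output = A ** alpha * L ** (1 - alpha)
    return p * output * (gamma - 1) - c * M_aug

# Low-wage setting: automation does not cover its fixed cost,
# while a modest productivity gain (gamma = 1.4) makes augmentation pay.
print(automation_incentive(w=5, M_auto=10, f=60))   # 5*10 - 60 = -10
print(augmentation_incentive(p=2, A=100, L=50, alpha=0.3, gamma=1.4, c=1, M_aug=10))
```

This mirrors the memo's claim: where wages are low, the automation incentive shrinks while the augmentation incentive, which scales with output rather than wages, can remain positive.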

This model is a simple way to understand when firms decide to automate and when they decide to augment. It can shed light on the distinct macro conditions that influence a firm’s decision. Additionally, the model suggests there is room for policy to incentivize firms to invest in augmentation AI, thus preventing the Turing Trap. For example, a subsidy for augmentation AI targeted at small firms would encourage them to adopt augmentation, which could increase productivity, and thus economic growth.

joezxyz commented 4 months ago

Advancing Technology: More Opportunities But Less Opportunity to Utilize It?

What Happened to U.S. Business Dynamism? mentions, but does not delve much further into, the role of technology in inhibiting business dynamism. I will dive into the data on how sectors' reliance not just on more technology but also on more skill-intensive labor drives the gap between dominant firms and the laggards behind them. Laggard Firms, Technology Diffusion and its Structural and Policy Determinants directly studies the changing gap between dominant market powers and laggard firms through the level of an industry's involvement with technologically advanced tools, the diffusion of knowledge about how to use that technology, and the skill required to learn and operate it.

The following table examines the relationship between the growth of the top 10% of firms and that of firms in the 10-40% range, in terms of labor productivity (LP) and multifactor productivity (MFP):

Image

The table is divided into two parts: one strictly analyzing LP or MFP, and the other analyzing those factors multiplied by a variable X in each column. X can be anything from an industry's use of digital goods as production inputs to the degree to which digital skills are required for its tasks. The table shows that the LP and MFP gaps are positive, with a highly significant p-value of less than 0.01. This at least indicates that the laggard firms are growing and trying to catch up with the top frontier firms. The problem, however, is that once we analyze the dataset with X taken into account, every column exhibits a negative value, implying that while the gap is being closed from the laggard end of the industry, the laggards keep being outpaced by the leaders in front, widening the gap. Columns 7 and 8, which involve high skill-share and/or knowledge-intensive industries, exhibit these trends the most; columns 1 and 2 also generally apply to the industries covered by columns 7 and 8.

When it comes to innovation for these laggard firms, technology is not the only factor at play, nor is it the laggard firms' fault that they cannot catch up to the front of the industry. As it stands, technological and high-skill-demand industries present too high a wall to simply expect firms to climb without assistance. With technology constantly advancing, and the means to use it becoming more exclusive and sometimes unattainable, the policies surrounding such competition need to be reworked to allow for adaptation, or even changed at the fundamental educational level so that new generations of researchers and workers are adjusted to the current technological landscape and its requirements. Without such changes, as the examples above show, the gap will only continue to widen and fuel the stagnation of the economy.

siyakalra830 commented 4 months ago

Accelerating Science with Human-Aware Artificial Intelligence

The article "Accelerating Science with Human-Aware Artificial Intelligence" explores how integrating the distribution of human expertise into AI models can significantly enhance the prediction of future scientific discoveries. Traditional AI models focus solely on content from scientific literature, but this study demonstrates that incorporating data on human scientists and their networks boosts discovery precision by up to 400%—especially in areas with limited prior research. By modeling discovery as a network of concepts connected by scientists, the authors use random walks over a hypergraph to simulate plausible human inferences. These walks connect materials, properties, and experts through pathways that represent scientific familiarity, collaboration, and cognitive accessibility. This approach reveals the importance of situated expertise, where the density of human interactions influences the speed and likelihood of future discoveries.

One of the key insights from the study is that time to discovery falls as expert density rises. Using the availability heuristic as a foundation, the authors show how repeated exposure to combinations of ideas through scientific discussions and literature fosters the conditions for novel discoveries. They formalize this relationship using random-walk-induced proximity metrics to predict which topics will be explored next.

The study’s findings can be generalized using the equation:

Image

Here, t is the time to discovery, N represents the number of experts, D is the diversity of expertise, and R reflects resource availability. This equation underscores how discoveries are accelerated through increased interdisciplinary collaboration and access to resources.
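
The memo's equation is rendered as an image above; the underlying paper does not dictate a single functional form, but one simple form consistent with the variable definitions here (my assumption, for concreteness only) is an inverse product:

$$ t \propto \frac{1}{N \cdot D \cdot R} $$

so that, holding the other factors fixed, doubling the expert base, the diversity of expertise, or resource availability each halves the expected time to discovery.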

The development of mRNA vaccines for COVID-19 exemplifies this model. Prior to 2020, companies such as BioNTech and Moderna, along with a few academic labs, had a small but established expert base N in mRNA technology. When the COVID-19 pandemic began, the existing knowledge enabled a rapid pivot to vaccine development. The diversity D of the effort was crucial: molecular biologists provided RNA structure expertise, immunologists guided immune response strategies, formulation chemists optimized delivery systems, and manufacturing experts scaled up production. Effective collaboration across these domains significantly shortened the timeline for vaccine development.

Resource availability R played an equally critical role. Public and private funding surged in response to the pandemic, supporting large-scale clinical trials and rapid production. These resources enabled a global vaccine rollout in record time. This example aligns with the study’s claim that accelerating scientific discovery depends on leveraging scientists’ prior research experiences and networks. The human-aware AI model proposed in the article highlights the potential for identifying promising research areas and connecting experts across disciplines to drive faster, more impactful scientific advances.

grozdanickata commented 4 months ago

In "The Turing Trap," Brynjolfsson argues that an excessive desire for AI to replicate human work (to automate it rather than augment it) can be counterproductive for genuine human productivity, economic growth and dynamism, and social and political welfare. We currently have various economic indices and indicators to measure these aspects of an economy, and several exist in multiple versions that adjust for real-world factors which could otherwise distort the index value and its interpretation. For example, the GDP growth rate is one of the most common measures of an economy's health, growth, and productivity. However, if we observe the growth rate using only nominal GDP data (which is not adjusted for inflation), we can falsely conclude that the economy is healthier and growing faster than it is, when prices are rising while production is flat or even falling. For this reason, economists use a GDP deflator, which supports accurate conclusions about real GDP (true output growth measured in a base year's prices).

In the current environment of rising AI, which is being (and will continue to be) used both to augment certain tasks and functions and to replace others, it is worth considering a new metric that adjusts for AI-automated production and permits an interpretation of productivity, growth, and overall health in the "human economy" alone: something like an automation deflator. Just as rising price levels can inflate the perception of a flourishing economy, an increased level of AI-related automation can create a misleading perception of economic wellbeing.
In a highly AI-automated economy, it is possible to have technically increasing output, but while actually displacing people out of their jobs with little power to improve their situation, concentrating wealth among AI owners while deepening economic inequality, and eroding social cohesion and community engagement— all which inhibit human innovation practices.

I propose the following equation for an AI-automation adjusted GDP deflator:

**Automation Deflator = (Non-AI-automated Real GDP in current year / Real GDP in current year) × 100**

Where “Non-AI-automated Real GDP” does not take into account the dollar amount of any goods or services that are entirely performed by automated/ AI systems (ex: a taxi ride in a completely self-driving car).

The deflator measures the proportion of real output that is not purely AI-automated, isolating the contribution of the human economy. Given the Turing Trap paper's claim that high levels of automation can lead to a trapped equilibrium (a form of stagnation), a relatively low value of this deflator could also serve as an indicator or predictor of incoming economic and social trouble before a slowdown appears in existing economic indices. A low automation deflator would imply that only a small proportion of current GDP comes from non-AI-automated output. Gathering automation deflator data over several years, and within specific industry sectors, would also let us analyze the automation growth rate year over year, providing valuable insights for human workers, policymakers, entrepreneurs, and others.
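
As a minimal numerical sketch, the deflator and the year-over-year growth of the automated share could be computed as follows. All GDP figures here are made up for demonstration, not real data:

```python
# Hypothetical real GDP figures (trillions); "non-AI-automated" excludes
# output produced entirely by automated/AI systems.
real_gdp = {2023: 22.0, 2024: 23.0}
non_ai_real_gdp = {2023: 20.9, 2024: 21.4}

def automation_deflator(year):
    """Non-AI-automated share of real GDP, as an index from 0 to 100."""
    return non_ai_real_gdp[year] / real_gdp[year] * 100

# A falling deflator means AI-automated output is claiming a growing share.
automated_share = {y: 100 - automation_deflator(y) for y in real_gdp}
growth = automated_share[2024] - automated_share[2023]  # YoY change, percentage points

for year in sorted(real_gdp):
    print(year, round(automation_deflator(year), 2))
print("automation share growth (pp):", round(growth, 2))
```

Tracked per sector over several years, the same calculation yields the automation growth rate discussed above.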

aveeshagandhi commented 4 months ago

6.pdf

jessiezhang39 commented 4 months ago

AI-assisted Medical Diagnostics Improved Human Decision Making

Shin et al. (2022) illuminated how superhuman artificial intelligence can become an enabler and inspiration in human decision-making. Specifically, they examined historical changes in decision-making by professional Go players, especially after the advent of AlphaGo. They find that instead of hindering or replacing humans, the introduction of superhuman AI actually improved human decision-making quality. Because AI can make optimal decisions free of human biases, it can unearth superior solutions previously illegible to human decision-makers constrained by familiar solutions. Such superior solutions in turn create opportunities for humans to learn and innovate further.

In this week’s memo, I am inspired to explore other application areas where artificial intelligence can improve human decision-making or unlock superior, novel approaches that were previously inconceivable to humans.

A striking example is the use of deep-learning algorithms to detect anemia through retinal fundus images, as demonstrated by Mitani et al. (2020) in their study published in Nature Biomedical Engineering. Traditionally, anemia diagnosis relies on invasive blood tests, which can be expensive, painful, and generate bio-hazardous waste. However, the researchers hypothesized that AI could extract previously unknown diagnostic signals from retinal images—an area where human clinicians had not systematically looked for hemoglobin concentration indicators.

The study utilized over 114,000 fundus images from the UK Biobank and developed a deep-learning model based on the Inception-v4 architecture. The results were groundbreaking: the AI system could estimate hemoglobin levels and detect anemia with accuracy comparable to invasive methods. The model predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg), and major adverse cardiac events (AUC = 0.70). The model outperformed traditional non-invasive methods on all metrics.

Image

Notably, the AI identified anemia-linked biomarkers in retinal images that were previously not understood by human researchers. The attention heat map below shows that the neural network attends to vascular regions in the retina when predicting variables associated with cardiovascular risk. Through saliency mapping techniques like GradCAM and Guided-backprop, the study revealed that the AI focused on fine spatial features near the optic disc and blood vessels, areas not previously associated with anemia diagnosis. This suggests that deep learning can extract new biomedical insights beyond the scope of human expertise.

Image

The work by Mitani et al. has demonstrated the transformative potential of AI in healthcare, not just as an enhancement tool but as a catalyst for uncovering entirely new diagnostic approaches. AI-assisted diagnostics could provide scalable, non-invasive disease screening in resource-limited settings where blood tests are impractical. Furthermore, since fundus photography is already a routine procedure for diabetic patients, this innovation could seamlessly integrate anemia screening into existing healthcare workflows, particularly benefiting populations at higher risk of undiagnosed anemia.

dannymendoza1 commented 4 months ago

An Extension to the Knowledge Access Model of Market Power

The Knowledge Access Model of Market Power provides a strong framework for understanding declining business dynamism. It emphasizes that the distribution of knowledge matters more than the total R&D investment. However, I propose an extension that incorporates firm-specific learning capabilities and adaptive R&D efficiencies to better explain knowledge access asymmetries.

Model Extension and Rationale

The paper Ten Facts on Declining Business Dynamism and Lessons from Endogenous Growth Theory outlines how reduced knowledge diffusion (δ) contributes to increased market concentration and weakened competition. When laggard firms struggle to catch up, competition diminishes, leading to higher markups, greater concentration, and slower productivity growth. However, the model does not account for firm-level differences in knowledge assimilation.

To address this, I introduce a firm-specific learning coefficient (γ), capturing a firm's ability to assimilate and apply available knowledge. This coefficient depends on prior R&D investments, workforce expertise, and absorptive capacity. Reformulating the business dynamism equation:

Image

where:

Rearranging the equation, we obtain the following:

Image

This equation links γ to market concentration, knowledge stock, and business dynamism, reinforcing the need for policies that support firms with higher absorptive capacities.

Policy Implications

To address the challenges posed by limited knowledge diffusion, policy interventions should focus on supporting firms with higher absorptive capacities. One approach is to provide targeted R&D support for adaptive firms, particularly those with high γ values, incentivizing their ability to bridge the innovation gap. Additionally, dynamic intellectual property regulations can play a crucial role by adjusting exclusivity periods based on firms' knowledge-hoarding tendencies, thereby encouraging broader knowledge diffusion. Public-private knowledge transfer programs should also be promoted, facilitating structured knowledge-sharing between market leaders and laggard firms to ensure that knowledge flows more freely within the economy. By implementing these strategies, policymakers can create an environment that fosters competition, innovation, and long-term economic dynamism.

The graph below illustrates how firms with higher γ values sustain greater business dynamism despite increasing market concentration. The graph highlights how knowledge diffusion constraints impact firms differently based on their learning capabilities. This visualization supports the notion that policies should target firms with high absorptive capacities to maintain competition and innovation.

Image

By incorporating firm-specific learning capacities, this model extension refines the model’s predictive power and introduces actionable policy pathways to counteract stagnating business dynamism. Future research might focus on estimating γ values across industries to develop more effective innovation policies.

druusun commented 4 months ago

This week’s readings examine two critical tensions in innovation ecosystems: the need for deep specialization within established disciplines and the potential for transformative breakthroughs from interdisciplinary research. While specialization ensures steady, cumulative progress, it can lead to path dependency, where knowledge systems become constrained by existing paradigms. Conversely, interdisciplinary approaches promise novel solutions to complex problems but often face higher risks and uncertain outcomes. This memo explores how innovation ecosystems can strike a balance between these two approaches to maximize long-term growth and societal impact.

To model the trade-off between specialization and interdisciplinarity, I propose the following equation for innovation output ($I$):

Image

Where:

- $S_t$ = investment in specialized research
- $E_t$ = investment in interdisciplinary research
- $\alpha, \beta$ = weights for specialization, reflecting its cumulative but incremental nature
- $\gamma, \delta$ = weights for interdisciplinarity, capturing its high-risk, high-reward dynamics

This equation demonstrates that innovation ecosystems must balance the compounding effects of specialization ($S_t^{\beta}$) with the nonlinear, transformative potential of interdisciplinarity ($E_t^{\delta}$).
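
A minimal numerical sketch of this trade-off, assuming the additive form $I_t = \alpha S_t^{\beta} + \gamma E_t^{\delta}$ with entirely hypothetical weights:

```python
# Sketch of the proposed innovation-output function.  The weights are
# hypothetical: beta < 1 gives specialization diminishing (incremental)
# returns, delta > 1 gives interdisciplinarity convex (high-risk,
# high-reward) returns.
def innovation_output(S, E, alpha=1.0, beta=0.6, gamma=0.3, delta=1.4):
    return alpha * S**beta + gamma * E**delta

# Scan integer splits of a fixed budget between specialized (S) and
# interdisciplinary (E) investment.
budget = 100
best_S = max(range(budget + 1), key=lambda s: innovation_output(s, budget - s))
print("best specialized allocation:", best_S)
```

With these particular weights the convex interdisciplinary term dominates at this budget; different (equally hypothetical) weights shift the optimum toward specialization, which is exactly the balancing question the memo raises.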

Policy Implications

chrislowzhengxi commented 4 months ago

Measuring Scientific Progress through "Correct" AI Reliance

As AI systems become more integral to scientific discovery, understanding how human reliance on AI impacts scientific progress is crucial. Professor Evans’ research on human-aware AI highlights the need for AI models to incorporate human expertise for optimal results, while the recent study on AI reliance in decision-making underscores how human adherence to AI affects decision quality (source). This memo explores a theoretical model that quantifies scientific progress as a function of AI accuracy, human reliance, and the quality of adherence to AI recommendations.

This paper examines how humans may either over-rely or under-rely on AI recommendations, leading to suboptimal decision-making. Over-reliance occurs when individuals blindly accept AI outputs, even when they are incorrect, whereas under-reliance happens when humans dismiss AI suggestions that are actually valid. The study suggests that achieving a balance between human judgment and AI assistance requires that humans selectively trust AI while maintaining critical oversight.

Theoretical Model

We propose an equation that models scientific progress ($\Psi$) as a function of AI accuracy ($Acc_{AI}$) and human adherence to AI ($A$). Our goal is to measure how well AI-assisted scientific discovery improves knowledge production.

$$ \Psi = \beta_1 \cdot (Acc_{AI} \cdot A_{correct}) + \beta_2 \cdot O_{correct} - \beta_3 \cdot (A_{wrong} + O_{wrong}) $$

where:

Here are some insights from the paper that leads to this equation:

Image

Figures 8 and 9 from the AI reliance study suggest that complementarity (i.e., human-AI collaboration outperforming AI alone) is most likely when human adherence aligns with AI accuracy. If $A_{correct}$ and $O_{correct}$ are maximized, scientific progress ($\Psi$) increases as AI enhances human discovery.

Image

The findings in Figure 10 demonstrate that incorrect adherence ($A_{wrong}$) and incorrect overrides ($O_{wrong}$) drastically reduce decision quality. In scientific research, this could mean reinforcing incorrect theories or rejecting valid hypotheses.

Image

Proposition 4 suggests that scientific progress requires an optimal level of adherence—neither blindly following AI nor ignoring its insights. The threshold conditions for $\Psi$ are satisfied when human reliance is selective and AI accuracy is high.

This framework can guide AI-assisted scientific research: scientists can be trained to identify correct AI recommendations and avoid blind adherence, and interventions can be developed that boost selective AI reliance, reducing $A_{wrong}$ and $O_{wrong}$. By integrating findings from human-aware AI and AI-assisted decision-making, we propose a model that quantifies scientific progress as a function of AI reliability and human decision-making quality.
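
The proposed score can be sketched numerically. The $\beta$ weights and all rates below are hypothetical illustrations, not estimates from the cited study:

```python
# acc_ai: AI accuracy; a_correct / a_wrong: adherence to correct / wrong AI
# advice; o_correct / o_wrong: correct / incorrect overrides of the AI.
def progress(acc_ai, a_correct, o_correct, a_wrong, o_wrong,
             beta1=1.0, beta2=1.0, beta3=1.0):
    return beta1 * (acc_ai * a_correct) + beta2 * o_correct - beta3 * (a_wrong + o_wrong)

selective = progress(0.8, 0.9, 0.7, 0.1, 0.1)  # trust the AI selectively
blind = progress(0.8, 1.0, 0.0, 1.0, 0.0)      # adhere to every recommendation
print(selective, blind)
```

Under these illustrative rates, selective reliance scores well above blind adherence, consistent with the paper's point that complementarity requires critical oversight rather than maximal adherence.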

ggracelu commented 4 months ago

Human Reviewers to Mitigate AI Bias

In Tuesday’s lecture, we revisited the question of what kind of AI we want to build: (1) replacing human intelligence, (2) complementing human intelligence to expand beyond human capabilities, (3) addressing human limitations (bias). I find a combination of 2 and 3 most compelling — using AI to account for systematic bias is a great way to complement human intelligence. However, this raises the question of how bias-proof AI can be — after all, AI models are only as good as the data they are trained on. If their datasets have inherent bias, even unintentionally, the AI will learn and reproduce the same bias. As datasets continue to scale exponentially, the risk of encoding bias becomes increasingly prevalent.

Moreover, I think it’s important to explore the limitations of AI in comparison to humans — authentic empathy and embodied sensory experiences are human characteristics that are not obtainable through AI (unless you want to entertain philosophical discussions of machines' capacity for empathy or a Theseus' Ship style problem of simulating human physiology). Building off our previous discussion of point solutions vs. system solutions, AI models that function as miniature societies, as opposed to singular individuals, perform better. I liked the analogy of reinforcement learning as a group brainstorming session with multiple voices as opposed to an individual monologuing.

I am curious if it is possible to include additional steps in the AI training process to further mitigate the risk of encoding bias. I would like to propose an AI-training framework that increases the role of human-computer collaboration so that human reviewers can be used to reduce the likelihood of biased AI models. This can be extended to discussions about the regulatory landscape of AI as privacy and ethical concerns grow increasingly prevalent the more that the emerging technology spreads. It can also address concerns about AI replacing human laborers by emphasizing the importance of collaboration — not competition — between humans and AI.

The key to this proposed framework is including multiple rounds of different human reviewers to avoid introducing an additional risk of human bias. Each checkpoint would involve a large number of human reviewers evaluating the perceived fairness of the model on a numerical scale, and a fairness score can be computed to determine whether or not the steps preceding the checkpoint need to be re-done.
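
The checkpoint logic described above can be sketched as follows; the 1–5 rating scale and the pass threshold are assumptions for illustration:

```python
# Reviewers rate perceived fairness on a 1-5 scale; if the mean score at a
# checkpoint falls below the threshold, the preceding steps are redone.
def checkpoint_passes(scores, threshold=4.0):
    fairness_score = sum(scores) / len(scores)
    return fairness_score >= threshold

print(checkpoint_passes([5, 4, 4, 5, 3]))  # mean 4.2: proceed
print(checkpoint_passes([3, 2, 4, 3, 3]))  # mean 3.0: redo preceding steps
```

Running several such checkpoints with different reviewer pools, as proposed, reduces the chance that any single group's bias decides whether a training stage is accepted.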

Image

rbeau12 commented 4 months ago

This week’s papers discussed AI collectives and hinted at the potential of AI in government. Inspired by this, I wanted to see how different models would respond to situations that require their opinions. To examine interpersonal ability, I ran this week’s student questions through Claude, ChatGPT, and DeepSeek and asked each to rank them. I turned these results into a ranked list from each LLM (usernames converted to anonymous user IDs) and examined the similarities using the Kendall rank correlation. My results were interesting:

Kendall's tau: DeepSeek vs ChatGPT: 0.729; DeepSeek vs Claude: 0.217; ChatGPT vs Claude: 0.251.

DeepSeek and ChatGPT are relatively closely aligned, but Claude's list differs from both. This statistic gives insight into the innovation process: DeepSeek's training was rumored to be heavily influenced by ChatGPT, while Claude is developed by Anthropic, a long-term AI specialist firm. It also lends credence to the idea that LLMs designed in different ways may "think" in different ways (and vice versa for similar methods).

Next, I was interested in LLMs' political abilities and how prompting changes an LLM's judgment. I asked Claude to evaluate a series of policy decisions, first as a Republican and then as a Democrat. For both parties, I specified that the model should make the best possible decision for the communal good. The questions:

- Should the government prioritize renewable energy investments over fossil fuel subsidies?
- Should the federal government increase funding for public schools, or promote school choice through vouchers and charter schools?
- Should the U.S. prioritize stricter border security or create pathways to citizenship for undocumented immigrants?
- Should the government impose stricter regulations on AI development to ensure ethical use?

The results are presented in the table:

Image

Although I tried to frame the questions in a way that the LLM would recruit its knowledge in an unbiased manner, the party delineation still biased the LLM’s decisions and almost every answer was opposite each other (except AI regulation). This unfortunately highlights the deep divide in the country and the difficulty of satisfactory AI governance. Current AIs may be more oriented towards divisive partisan viewpoints rather than complementing them to find factual common ground. For this reason, we must work to use AI to augment our decision-making rather than automate it.
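
Returning to the ranking comparison, the Kendall statistic used above can be reproduced with a small pure-Python implementation (no ties assumed); the example rankings below are placeholders, not the actual memo rankings:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two rankings of the same items."""
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for (i, x), (j, y) in combinations(enumerate(rank_a), 2):
        # Concordant if the pair appears in the same relative order in both lists.
        if (pos_b[x] - pos_b[y]) * (i - j) > 0:
            concordant += 1
        else:
            discordant += 1
    n = len(rank_a)
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau(list("abcde"), list("abcde")))  # identical rankings -> 1.0
print(kendall_tau(list("abcde"), list("edcba")))  # reversed rankings -> -1.0
```

Values near 1 indicate closely aligned rankings (as with DeepSeek vs. ChatGPT), while values near 0 indicate largely independent orderings (as with Claude against either).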

siqi2001 commented 4 months ago

Gender Collaboration and External Networks in Patent Teams

My previous analyses have shown that team size is positively correlated with the presence of women inventors, but the relationship varies across technological sectors. In this study, I extend the analysis to explore whether mixed-gender teams are more likely to engage in external collaborations. This investigation provides deeper insights into the interplay between team composition, collaboration networks, and technological sectors.

Hypothesis

The primary hypothesis tested is: "Mixed-gender teams are more likely to engage in external collaborations than single-gender teams." If mixed-gender teams demonstrate higher collaboration rates, it may suggest that gender-diverse teams have a broader reach, stronger networks, or a greater tendency to seek external expertise. This could inform policy decisions to encourage diversity in innovation and increase cross-institutional partnerships.

Methodology

1. Logistic Regression on External Collaboration

A logistic regression model was used to test whether mixed-gender teams are more likely to collaborate externally. The model specification is as follows:

library(glmmTMB)  # provides glmmTMB(); the binomial family yields a logistic regression
logit_model <- glmmTMB(external_collaboration ~ mixed_gender, data = patents, family = binomial)
summary(logit_model)

Findings:

2. Chi-Square Test for Independence

A Chi-Square test was performed to determine whether there is a significant relationship between gender diversity and external collaboration:

chi_test <- chisq.test(table(patents$mixed_gender, patents$external_collaboration))
print(chi_test)

Findings:

3. Correlation Between Team Size and External Collaboration

A Pearson correlation test was used to examine the relationship between team size and external collaboration:

cor_test <- cor.test(patents$Team_size, patents$external_collaboration)
print(cor_test)

Findings:

4. Regression Model with Team Size and Technological Sector

An extended logistic regression model incorporating team size and technological sector was run:

regression_model <- glmmTMB(external_collaboration ~ mixed_gender + Team_size + first_wipo_sector_title, data = patents, family = binomial)
summary(regression_model)

Findings:

5. Descriptive Statistics

The summary statistics of the dataset provide additional insights:

Visualization

1) predicted probabilities from the regression analysis:

Image

2) the correlation between team size and external collaboration

Image

Conclusion and Implications

The findings of this study strongly support the hypothesis that mixed-gender teams are more likely to engage in external collaboration. Furthermore:

Implications:

joycecz1412 commented 4 months ago

This memo would like to consider the framework where business dynamism is driven by innovation competition between firms. It’s very common for companies in economies around the world to receive subsidies from the government. How can we adjust the model to account for such subsidies and their impacts? To assess the impact of direct government subsidies on an economy’s business dynamism, we can modify the model by incorporating subsidies into R&D cost functions and analyze their effects on market structure and competition.

The original R&D cost function is:

$$ R_{ijt} = \alpha \frac{x_{ijt}^2}{2} Y_t $$

Introducing a subsidy rate ( s ) (e.g., 20% cost reduction) adjusts this to:

$$ R_{ijt} = \alpha \frac{x_{ijt}^2}{2} Y_t (1 - s) $$

This lowers the marginal cost of innovation, incentivizing higher R&D investment. For neck-and-neck firms, the optimal innovation rate becomes:

$$ x_0 = \frac{v_1 - v_0}{\alpha (1 - s)} $$

Subsidies $s > 0$ increase $x_0$, accelerating innovation. Similarly, follower innovation $x_{-1}$ rises if followers receive subsidies, potentially closing technology gaps.

Effects on Market Concentration ($\mu$)

Market concentration depends on the balance between innovation by leaders (creating unleveled sectors) and followers (re-leveling sectors). The equilibrium share of unleveled sectors is:

$$ \mu = \frac{2x_0}{2x_0 + x_{-1} + \delta} $$

Uniform subsidies? If all firms in a sector receive subsidies, both $x_0$ and $x_{-1}$ increase. However, since neck-and-neck firms inherently innovate more ($x_0 > x_{-1}$), $\mu$ could rise, increasing concentration and reducing competition.

Targeted subsidies? If subsidies favor followers (e.g., laggard-focused policies), $x_{-1}$ rises disproportionately, reducing $\mu$ and enhancing competition by re-leveling sectors.
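
The two cases can be compared numerically using the closed forms above. The parameter values (value gap, α, follower rate, δ) are illustrative, and scaling the follower's rate by $1/(1-s)$ is an assumed analogue of the leader's formula, not derived in the memo:

```python
def x0(s, value_gap=1.0, alpha=2.0):
    """Neck-and-neck innovation rate x_0 = (v_1 - v_0) / (alpha * (1 - s))."""
    return value_gap / (alpha * (1 - s))

def mu(x_0, x_minus1, delta=0.1):
    """Equilibrium share of unleveled sectors mu = 2x_0 / (2x_0 + x_{-1} + delta)."""
    return 2 * x_0 / (2 * x_0 + x_minus1 + delta)

s = 0.2
baseline = mu(x0(0.0), x_minus1=0.3)
uniform = mu(x0(s), x_minus1=0.3 / (1 - s))     # subsidize leaders and followers
targeted = mu(x0(0.0), x_minus1=0.3 / (1 - s))  # subsidize followers only
print(round(baseline, 3), round(uniform, 3), round(targeted, 3))
```

Because δ does not scale with the subsidy, the uniform subsidy nudges $\mu$ up (more concentration), while the targeted subsidy pulls it down, matching the argument above.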

Subsidization has multiple effects.

Positive effects: Subsidies boost aggregate innovation, potentially raising productivity growth. This could increase dynamism if new entrants or smaller firms leverage subsidies to challenge incumbents.

Negative effects: If incumbents capture subsidies, they may widen productivity gaps (higher $\mu$), stifling competition. Reduced churn (Fact 8) and lower entry (Fact 7) could follow. Subsidizing all firms uniformly may not enhance competition: while R&D increases, dominant firms often out-innovate due to scale advantages, exacerbating concentration. For example, China's semiconductor subsidies have spurred R&D but also entrenched state-backed giants like SMIC, crowding out smaller players.

Thus, to enhance dynamism through policy, economies should:

- Target laggards: direct subsidies to followers to boost $x_{-1}$, fostering catch-up.
- Link subsidies to entry: encourage new firms through startup grants, countering declining entry rates (Fact 7).
- Monitor concentration: use antitrust tools to prevent subsidy-driven monopolization.

In sum, subsidies’ impact hinges on design. Uniform subsidies risk entrenching leaders, while targeted policies could revive competition, depending on sectoral structure and firm heterogeneity. Empirical testing via the adjusted model would clarify these dynamics.

amulya-agrawal commented 4 months ago

Coca-Cola and Business Dynamism

For decades, Coca-Cola has maintained a strong market share in the global beverage market, serving as a powerful example to demonstrate what I have learned about market concentration and business dynamism decline from this week’s texts. In this memo, I will examine Coca-Cola’s long-standing monopoly-like dominance in the beverage industry, as it offers a clear empirical case of declining business dynamism and reduced knowledge diffusion – ideas that are discussed in What Happened to Business Dynamism and Ten Facts on Declining Business Dynamism and Lessons From Endogenous Growth Theory. Historically, Coca-Cola has protected their competitive edge not only through their branding and scale, but also by withholding critical product knowledge – notably, the secret formula for its flagship drink.

Now, with the rise of artificial intelligence in consumer analytics and product development, Coca-Cola is further entrenching its market power by strategically combining proprietary data, trade secrets, and AI-driven innovation to suppress competition and control knowledge diffusion. My memo explores Coca-Cola's AI-enhanced monopoly tactics, focusing on how AI is reinforcing information asymmetry in the beverage industry and limiting market dynamism.

What Happened to Business Dynamism identifies declining knowledge diffusion as a major contributor to slowing productivity growth, rising market concentration, and decreasing firm entry rates. In Coca-Cola's case, this plays out in three key ways. First, Coca-Cola uses its secret drink formula as a barrier to entry. Unlike patent-based protections, which expire and eventually contribute to public knowledge, Coca-Cola's trade-secret protection is indefinite, meaning that no firm can replicate its precise formulation. This creates a structural knowledge diffusion gap, where smaller firms are unable to leverage incremental innovations to challenge Coca-Cola's dominance.

Second, Coca-Cola has recently begun using AI and deep learning to develop new beverage flavors and predict consumer taste preferences before they emerge. By training models on exclusive internal consumption data, Coca-Cola keeps its AI-driven insights private, blocking competitors from benefiting from industry-wide innovation. Unlike traditional R&D, where industry knowledge diffuses through academic publications and shared patents, AI models trained on proprietary data keep their most valuable insights locked inside the firm.

Third, AI enables real-time pricing optimization, which reinforces Coca-Cola's ability to outmaneuver competitors before they gain market traction. By automating inventory and demand forecasting on a global scale, Coca-Cola has raised the barriers to entry for smaller firms, ensuring that they remain trapped at a competitive disadvantage.

For the analytical component of this memo, the figure below, which I generated, displays Coca-Cola's market share and trade-secret reliance compared to smaller beverage firms, tracing their relative R&D investments, market share, and firm entry rates from 2010 to 2024. Because granular data on these R&D investments is not publicly disclosed, I used estimates informed by industry reports. The data illustrates a negative correlation between Coca-Cola's increasing AI investment and the diffusion of industry knowledge, supporting the texts' argument that knowledge concentration leads to declining business dynamism.

Image

To restore knowledge diffusion and business dynamism, policymakers ought to consider introducing limits on trade secret protections, mandating open data sharing, and strengthening antitrust measures on AI-driven pricing tactics. If these AI-powered trade secrets remain unregulated, it is scary to think that Coca-Cola’s monopoly on both its physical formula and digital knowledge will continue to reinforce declining business dynamism and limit competition.

yangkev03 commented 4 months ago

In this week's reading "The Turing Trap", we learned how many technologists view the role of artificial intelligence, and technology as a whole, as either automating or augmenting human labor. Where AI plays the role of automation, Brynjolfsson argues that human laborers become dispensable, thus losing bargaining power with employers. Where AI plays the role of augmentation, technology is complementary to human labor, and the bargaining power of workers may in fact increase. Brynjolfsson also claims that the value created by AI is not dispersed equally across society.

In this memo, I would like to understand the effects of automation and augmentation on wages for workers. I would also like to explore the variance of jobs within an industry, to see whether technological advances can increase the productivity, and thus bargaining power, of "superstar" workers.

Image

Firstly, I decided to pick two industries that would have a largely different response from AI. Often, AI plays the role of automation in the retail industry, with roles such as cashiers being replaced by machines. On the other hand, in the financial services space, workers are more often augmented with AI tools on the job. From the graph, we can see that both industries see growth over time with the advent of technological tools.

Image

Looking at the graph of the number of employees, we can see that both financial services and retail workers have a somewhat steady number of workers in relation to technological implementation in their fields.

Image

From this graph, we can see that although the number of finance jobs has increased, the number of tellers as a proportion of all finance jobs has decreased. As tellers are automated away by machines, we can deduce that the finance jobs that remain are those less prone to replacement.

From these findings, we can see that the story told by average wages is more than a broad-based increase in bargaining power due to augmentation from technology. Rather, the jobs that remain in the industry are those that are harder to automate and that therefore give greater bargaining power to laborers. As a result, wages have increased over time.

darshank-uc commented 4 months ago

AI "Priorities" Across Industries

In The Turing Trap, Erik Brynjolfsson discusses how businesses tend to view AI as a tool to replace humans rather than augment them. Apart from a few examples, Brynjolfsson remains mostly industry-agnostic when discussing how AI can enhance the current capabilities of workers. Implicit in his argument is the assumption that most industries will adopt, or have already adopted, AI-based solutions. This is a fair generalization to make: as AI becomes more accessible and trains to perform in different business settings, industries are also more likely to adopt AI to make workflows more efficient. However, I’m interested in the relevance of AI today across distinct industries––which would provide insight into the different growth trajectories for AI and whether HLAI integration is as imminent as it seems. Goldman Sachs CEO David Solomon said last month that AI can draft 95% of a S-1 form (IPO prospectus); Johnson & Johnson CIO Jim Swanson recently discussed AI software to discover new drugs. Regardless of whether these examples reflect tendencies of replacement or augmentation, they suggest very different use cases for AI and different levels of penetration into company operations.

One way to measure how AI priorities differ across industries is to assess the breakdown of R&D expenditure for baskets of companies––specifically the proportion of R&D allocated to AI solutions and not general product lines. Unfortunately, this data is difficult to collect apart from one-off comments made by company management during investor presentations. However, earnings calls are an interesting data vehicle for companies: their brief statements reflect what they’re most excited about, what they want investors to hear, and larger trends in their industry. The reference to “AI” in an earnings call could represent several things: AI solutions the company is using in-house, AI products/services the company is shipping to market, or current AI usage by clients––but always some important consideration to the company. For five high market-cap companies in the S&P 500 in each of the Tech, Financials, Healthcare, Consumer, Utilities, and Industrials industries, I scraped the transcripts of their quarterly earnings calls during 2024 for the number of references to “AI” and plotted the mean counts; each of the five bars within each industry represents a different company.
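The counting step described above can be sketched simply. This is a minimal, hypothetical version: the snippets stand in for scraped quarterly transcripts, and the regex with word boundaries is one reasonable way to avoid matching substrings (e.g. "SAID", "AIM"):

```python
import re
from statistics import mean

def count_ai_mentions(transcript: str) -> int:
    """Count standalone references to "AI" in an earnings-call transcript."""
    # \b word boundaries keep "AI" from matching inside longer words.
    return len(re.findall(r"\bAI\b", transcript))

# Hypothetical snippets standing in for two scraped quarterly transcripts.
q_transcripts = [
    "Our AI roadmap is strong. AI investments grew, and clients adopt AI.",
    "We see demand for AI-driven insights across our platform.",
]
counts = [count_ai_mentions(t) for t in q_transcripts]
print(counts)        # per-quarter mention counts: [3, 1]
print(mean(counts))  # mean count per company, as plotted: 2.0
```

In practice one would also need to decide whether variants like "A.I." or "artificial intelligence" should count, which this sketch ignores.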

Image

As expected, references to AI in the Tech industry are incomparable to other industries, as the Tech industry has the closest proximity to creating and implementing AI solutions––AI is by far their greatest priority relative to other industries. The graph is more informative without the Tech industry:

Image

Clearly, priorities toward AI are not homogeneous within industries, let alone between them. In the Industrials and Consumer industries, Lockheed Martin and Walmart lead in references to AI––the former with AI implementations in computer tracking, and the latter with a personal shopping assistant. While other companies considered “Industrial” don’t have the same operations as Lockheed Martin, many of Walmart’s peers––Costco or Home Depot––do, yet still lag in AI references. On the other hand, Financials (banks) are more homogeneous across companies, which suggests similar priorities. While it’s intuitive to think that financial institutions mention AI in the context of their clients or the investment landscape (i.e. not as relevant to the company’s operations), this isn’t the case: most references are about customer-facing services (e.g. Wells Fargo’s Fargo AI assistant) or generating faster market insights for deal teams. Financials stands out as an area where AI is a definitive hot topic. Between industries, mentions of AI also vary significantly, which again suggests different priorities.

For more definitive results that approach an “industry standard” with less heterogeneity, this procedure would need to scrape data for a much larger collection of companies with more diverse operations across a longer timeframe. Still, this case study hints that industries are not discussing AI at the same magnitude, and blanket statements about AI implementation overlook finer industry dynamics.

pedrochiaramitara commented 4 months ago

After reading The Turing Trap: The Promise & Peril of Human-like Artificial Intelligence by Erik Brynjolfsson, I wondered what difficulties AI might face when complementing humans, and where systems of AI and human interaction fall short. It is becoming more common to use AI to complement writing and address human problems like grammar and conciseness. The use of ChatGPT on essays has directly impacted well-known companies like Chegg and Grammarly and has massively influenced the act of writing. One of the main promises of AI, ever since Alan Turing established the goal of imitating a human, has been the possibility of replacing the human writer. Indeed, multiple books have been written with AI, but few have received wide acclaim. This indicates that while AI can generate text that mimics a human, AI-generated writing still has issues, as humans can identify something off about the work.

To investigate this issue, I analyzed data from an article called Real or Fake Text? by Dugan et al., which examined how well readers detect when AI-generated text continues a human-written part. They classified sentences into different categories based on why they suspected a sentence was machine-generated. The categories include:

Grammar: Grammatical errors.
Repetition: Repeating information unnecessarily.
Irrelevant: Sentences that are irrelevant or unrelated to the prior context.
Contradicts_sentence: Sentences that contradict earlier parts of the text.
Contradicts_knowledge: Sentences that contradict known facts.
Common_sense: Sentences that violate common sense.
Coreference: Sentences that confuse names.
Generic: Sentences that are uninteresting.

In my analysis, I made a graph ranking the reasons that showed up the most. The most common issue was irrelevance, sentences that did not fit logically with prior content. This suggests that AI often generates good-sounding text but fails to maintain continuity and provide useful and interesting inputs. Humans, on the other hand, are good at identifying when a sentence does not belong where it is. Another common issue is repetition, which seems to indicate that AI engines always have to generate an answer, even when they lack new or relevant information, and thus might resort to repeating the same sentence in a different way.
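The ranking behind the graph is a simple frequency count over the annotations. A minimal sketch, where the annotation stream and its counts are invented for illustration (the real tallies come from the Dugan et al. data):

```python
from collections import Counter

# Hypothetical annotation stream: each entry is the reason a reader
# flagged a sentence as machine-generated (categories from Dugan et al.).
annotations = [
    "irrelevant", "repetition", "irrelevant", "grammar",
    "irrelevant", "repetition", "common_sense",
]

# most_common() returns categories sorted by frequency, highest first.
ranking = Counter(annotations).most_common()
print(ranking)  # [('irrelevant', 3), ('repetition', 2), ...]
```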

Image

Overall, this shows us that AI can empower the writing process, but simply writing one paragraph and telling AI to finish it might be problematic, as it often fails to deliver, and humans realize it. The cooperation between humans and AI proposed in the paper could work when the human is supervising and making sure that typical AI errors do not occur.

florenceukeni commented 4 months ago

Ethics in Innovation

In “Accelerating Science with Human-Aware Artificial Intelligence” Sourati and Evans (2023) demonstrate that essentially, accounting for the cognitive accessibility of scientific ideas can really boost AI’s ability to predict future discoveries. Their human-aware approach accelerates scientific breakthroughs and uncovers “alien” hypotheses that are outside the prevailing conversations and consensus. This insight is directly relevant to what I am considering for my final project, which looks into how regulatory and cultural contexts influence ethical considerations in AI innovation.

In the same way that the human-aware model uses unsupervised learning over publication metadata to simulate human inference pathways, my project will develop an embedding model to map ethical discourse across diverse cultural and regulatory environments. The hypothesis is that regulatory frameworks and cultural traditions shape the language and priorities around AI ethics in measurable ways. By training an embedding model on diverse sources—ranging from policy documents and academic articles to social media and industry reports—we can quantify these differences. To support this approach is the Ethical Similarity Score, defined as

  S₍ᵢ,ⱼ₎ = (Eᵢ · Eⱼ) / (||Eᵢ|| ||Eⱼ||),

where Eᵢ and Eⱼ are the embedding vectors representing the ethical narratives from two different contexts. A higher S₍ᵢ,ⱼ₎ indicates that ethical language in the two contexts is closely aligned, while lower scores reveal divergence in language. This final project approach draws inspiration from the human-aware science framework.
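The Ethical Similarity Score above is cosine similarity between embeddings, which can be computed directly. A minimal sketch with hypothetical 4-dimensional embeddings (real context embeddings would come from the trained model and be far higher-dimensional):

```python
import numpy as np

def ethical_similarity(e_i: np.ndarray, e_j: np.ndarray) -> float:
    """Cosine similarity S(i,j) between two context embedding vectors."""
    return float(e_i @ e_j / (np.linalg.norm(e_i) * np.linalg.norm(e_j)))

# Hypothetical embeddings for two regulatory contexts.
e_ctx_a = np.array([0.9, 0.1, 0.3, 0.5])
e_ctx_b = np.array([0.8, 0.2, 0.4, 0.4])

print(round(ethical_similarity(e_ctx_a, e_ctx_b), 3))  # 0.984
```

Identical narratives score 1.0; orthogonal ones score 0, so the measure is easy to compare across many context pairs.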

Dylanclifford commented 4 months ago

While much attention has been paid to AI's role in exacerbating wealth inequality between capital owners and workers, less explored is its potential effect on narrowing long standing gender wage disparities. In "The Turing Trap", Brynjolfsson argues that AI driven automation disproportionately impacts workers without college degrees, shifting economic rewards from labor to capital. This automation trend has contributed to declining real wages for Americans without higher education over the past 40 years, while simultaneously increasing the concentration of wealth among technology owners and entrepreneurs.

However, this dynamic intersects interestingly with evolving educational attainment patterns between genders. Women now significantly outpace men in college enrollment and completion: as of fall 2021, women comprised 58% of undergraduate enrollment compared to men's 42% (Source). Similarly, recent high school graduation rates show women graduating at 89.1% compared to men's 82.9% (Source). Given Brynjolfsson's argument that a college education serves as a buffer against AI automation's negative wage effects, these educational trends suggest that women may be better positioned than men to weather AI's transformation of the labor market.

Image

An analysis of the relationship between global AI corporate investment and the female-to-male earnings ratio from 2015 to 2022 supports this hypothesis. The data reveals a strong correlation coefficient of 0.979, indicating that as AI investment has increased, the gender wage gap has consistently narrowed. While correlation doesn't prove causation, this relationship aligns with the theoretical framework that AI automation primarily displaces non college educated workers, a demographic that increasingly skews male.
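The correlation reported above is a standard Pearson coefficient over the two annual series. A sketch of the computation, using illustrative stand-in numbers rather than the actual investment and earnings data (so the resulting coefficient will not be exactly 0.979):

```python
import numpy as np

# Illustrative stand-in series for 2015-2022 (NOT the actual data):
# global AI corporate investment ($B) and female-to-male earnings ratio.
ai_investment = np.array([12, 18, 27, 43, 60, 72, 94, 92], dtype=float)
earnings_ratio = np.array([0.797, 0.805, 0.805, 0.813,
                           0.823, 0.826, 0.832, 0.837])

# Pearson correlation between the two series.
r = np.corrcoef(ai_investment, earnings_ratio)[0, 1]
print(round(r, 3))
```

A correlation this strong over only eight annual observations should still be read cautiously, since both series share a common time trend.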

This finding ultimately shows an important nuance to the broader narrative about AI's impact on inequality. While AI may indeed be widening the overall wealth gap between capital owners and workers, it appears to be simultaneously contributing to greater gender wage parity. This suggests that the distributional effects of AI are more complex than often portrayed, and their societal impact depends heavily on which dimensions of inequality we prioritize. If we value gender equality as a paramount social objective, then AI's apparent role in narrowing the gender wage gap could be seen as a positive force for social progress, even as it creates other forms of economic disparity. This complexity challenges us to move beyond simple narratives of AI as either purely beneficial or harmful to equality, and instead pushes us to develop more nuanced policy frameworks that can simultaneously address multiple dimensions of social and economic inequality.

Adrianne-Li commented 4 months ago

Memo: The Impact of Innovation Concentration on Business Dynamism

Introduction

Business dynamism in the U.S. has declined over the past few decades, with decreasing firm entry rates, rising market concentration, and a slowdown in knowledge diffusion. While technological advancements and strong intellectual property (IP) protections have spurred innovation, they have also contributed to the consolidation of power among dominant firms. This memo explores the impact of concentrated innovation and proposes a model to assess its effects on economic dynamism.

Empirical Case Study: Patent Concentration Trends

One indicator of innovation concentration is patent share among leading firms. Recent research suggests that fewer companies dominate patent filings, leading to reduced knowledge spillovers and higher barriers to entry for smaller firms. The implications of this trend extend beyond competition, as it influences market power, labor mobility, and long-term economic growth. When fewer firms control critical patents, they can dictate industry standards, extract higher rents, and create an environment where startups struggle to scale due to limited access to foundational technologies.

Additionally, patent concentration fosters innovation asymmetry, where only well-established firms with vast resources can sustain high R&D investment. This discourages smaller firms from entering high-tech sectors, leading to a decline in disruptive innovation. Countries that have balanced patent concentration with knowledge-sharing mechanisms, such as South Korea and Germany, demonstrate that strategic interventions in IP law can mitigate adverse effects and sustain business dynamism.

Proposed Model: Knowledge Stock and Diffusion

To analyze innovation concentration, I model the relationship between knowledge stock (K), diffusion rate (D), and firm productivity (P). The model follows:

P_t = α K_t^β D_t^γ

where P_t is firm productivity, K_t the knowledge stock, D_t the diffusion rate, α a scale parameter, and β and γ the respective output elasticities.

By adjusting D, we can assess how policy interventions influence business dynamism.
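A minimal numerical sketch of the Cobb-Douglas-style model above, with purely hypothetical parameter values (α = 1.0, β = 0.6, γ = 0.3), comparing productivity under a low- versus high-diffusion regime:

```python
# Hypothetical parameterization of P_t = alpha * K_t**beta * D_t**gamma.
alpha, beta, gamma = 1.0, 0.6, 0.3

def productivity(K: float, D: float) -> float:
    """Firm productivity given knowledge stock K and diffusion rate D."""
    return alpha * K ** beta * D ** gamma

K = 100.0               # fixed knowledge stock
low_d, high_d = 0.2, 0.8  # restrictive vs. open diffusion regimes

# Relative productivity gain from raising diffusion: (high_d/low_d)**gamma.
gain = productivity(K, high_d) / productivity(K, low_d)
print(round(gain, 3))  # 1.516, i.e. ~52% higher productivity
```

Because the model is multiplicative, the gain from raising D depends only on γ and the ratio of diffusion rates, not on the knowledge stock itself.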

Policy Implications and Recommendations

Given the empirical trends, several policy interventions can counteract excessive innovation concentration:

  1. IP Reform: Strengthening policies that encourage open licensing or time-bound exclusivity can enable smaller firms to benefit from existing technological advancements without excessive barriers.
  2. Incentivizing Knowledge Spillovers: Governments can offer tax incentives for firms that engage in technology-sharing initiatives, university-industry partnerships, and collaborative R&D.
  3. Strengthening Antitrust Measures: Enforcing stricter antitrust scrutiny on technology mergers can prevent monopolization of foundational innovations and promote competitive markets.
  4. Encouraging Decentralized Innovation: Supporting regional innovation clusters and venture funding for startups ensures that knowledge production is not centralized among a few dominant firms.

Image

Conclusion

The findings suggest that increased patent concentration correlates with reduced knowledge diffusion and lower business dynamism. If left unchecked, the growing control of innovation by a few firms may stifle economic diversity and technological progress. However, by implementing targeted policy reforms, regulators can foster a more dynamic and competitive innovation landscape. The experiences of countries that have effectively balanced innovation incentives with diffusion policies provide valuable lessons for mitigating the risks associated with concentrated intellectual property control.


nsun25 commented 4 months ago

As a follow-up to Akcigit and Ates’ paper “Ten Facts on Declining Business Dynamism and Lessons from Endogenous Growth Theory”: since market concentration may be endogenous (e.g., higher concentration could be both a cause and a consequence of declining labor share), we could use an instrumental variable (IV) that affects market concentration but is not directly correlated with labor share. One potential instrument is education policy changes, particularly those that influence workforce skill distribution across industries.

First-Stage Regression (Predicting Market Concentration): We use education-related variables as instruments for market concentration:

Market Concentration_it = α₀ + α₁ Education Policy_it + α₂ STEM Share_it + α₃ College Graduation Rate_it + α₄ X_it + ε_it

Education policies can shape the supply of skilled workers, influencing the entry and competitiveness of firms. If education investments lead to a more skilled workforce, industries with a high-skilled labor supply may see more firm entry and lower market concentration. The STEM workforce share may influence firm-level productivity and innovation, affecting whether firms become dominant players or face more competition. College graduation rates affect the bargaining power of workers, potentially shifting labor share and reducing markups. Some interesting alternative IV choices include: historical education patterns, using lagged (e.g., 10-20 years) education variables to reduce potential contemporary correlations with labor share; geographic variation, exploiting differences in state-level education policies or the presence of universities; and a shift-share instrument, combining national-level education trends with initial local industry composition.

Second-Stage Regression (Effect on Labor Share): Using the predicted market concentration from the first stage, we estimate its effect on labor share:

Labor Share_it = β₀ + β₁ Market Concentration-hat_it + β₂ Markups_it + β₃ Productivity Growth_it + β₄ X_it + ε_it

My hypothesis is that β₁ < 0: higher market concentration (instrumented using education) reduces labor share, suggesting that rising concentration is not purely a consequence of declining labor bargaining power but may be structurally linked to shifts in firm dynamics. This IV approach helps separate the causal effect of market concentration from potential reverse causality. We also need to check robustness by conducting an F-test on the excluded instruments in the first stage; the F-statistic should exceed 10 for each endogenous regressor. We can test for endogeneity by comparing the OLS and IV estimates.
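The two-stage procedure can be sketched on simulated data. Everything here is invented for illustration: the instrument strength, the unobserved confounder, and the true structural effect of -0.4. The point is only that the IV estimate recovers the true effect while naive OLS is biased toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: education policy is the instrument (Z), market
# concentration the endogenous regressor, labor share the outcome.
edu_policy = rng.normal(size=n)          # instrument Z
u = rng.normal(size=n)                   # unobserved confounder
concentration = -0.8 * edu_policy + 0.5 * u + rng.normal(size=n)
labor_share = -0.4 * concentration + 0.5 * u + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients for y on X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), edu_policy])
X = np.column_stack([np.ones(n), concentration])

# Stage 1: project concentration on the instrument.
conc_hat = Z @ ols(Z, concentration)

# Stage 2: regress labor share on the fitted values.
X2 = np.column_stack([np.ones(n), conc_hat])
beta_iv = ols(X2, labor_share)[1]
beta_ols = ols(X, labor_share)[1]
print(round(beta_iv, 2), round(beta_ols, 2))  # IV near -0.4; OLS biased upward
```

In applied work one would use a 2SLS routine with proper standard errors (the manual second stage above understates them), but the estimator is the same.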

yanhong-lbh commented 4 months ago

Over the past decade, AI research and commercialization have become increasingly concentrated in a handful of large technology firms. This phenomenon resonates with Akcigit and Ates’ findings in Ten Facts on Declining Business Dynamism, where rising market concentration is correlated with reduced entry of new, innovative players. Moreover, Brynjolfsson’s Turing Trap cautions that many organizations are deploying AI primarily for automation—substituting human labor—rather than augmentation that amplifies human creativity. Such concentrated and automation-focused AI risks restricting the diverse experimentation that fosters far-reaching technological breakthroughs.

One instructive area is the domain of large-scale language models (LLMs). Sourati and Evans’ “Accelerating Science through Human-Aware AI” stress that LLMs can stimulate broad innovation only if they are designed for complementarity: systems that partner with human researchers, amplify their strengths, and encourage novel thinking. Conversely, when LLMs remain siloed in only the most resource-rich labs, smaller institutions and open-source communities become marginalized from producing or even testing the next wave of AI methods. This dynamic constrains the breadth of innovation trajectories, curtailing the capacity for diverse, groundbreaking discoveries that often originate from new entrants.

Below is a simple custom analytical element illustrating how concentrated AI research can shape knowledge diffusion. The table shows hypothetical (but directionally representative) data on AI preprints in a major online repository from 2018 to 2023, distinguishing those authored by top-five tech labs versus universities and start-ups.

| Year | Total AI Preprints | Top-5 Tech Labs | Universities & Start-ups |
|------|--------------------|-----------------|--------------------------|
| 2018 | 10,000 | 3,500 (35%) | 6,500 (65%) |
| 2019 | 14,000 | 6,200 (44%) | 7,800 (56%) |
| 2020 | 18,000 | 8,900 (49%) | 9,100 (51%) |
| 2021 | 25,000 | 13,500 (54%) | 11,500 (46%) |
| 2022 | 34,000 | 20,000 (59%) | 14,000 (41%) |
| 2023 | 44,000 | 27,000 (61%) | 17,000 (39%) |

The table suggests large AI labs’ share of AI research has risen significantly over time. While absolute totals grow for all groups, the increasing dominance of a few labs implies that research directions and intellectual property may become more centralized. Such an environment risks stifling competition and constraining “outside-the-box” innovation—the very dynamism that has historically propelled transformative breakthroughs.

Encouraging open data, open-source models, and collaborative frameworks can help diversify AI innovation. Policies that align with the framework proposed by Lai et al. in “Evolving AI Collectives” would nurture robust, self-regulating communities of researchers and developers. In parallel, adjusting patent and publication incentives to favor open dissemination can empower a broader set of actors—start-ups, universities, and nonprofits—to contribute complementary intelligence. By deliberately fostering open ecosystems, we can avoid the pitfalls of constrained innovation and ensure that AI’s benefits—and its creative possibilities—reach across society.

henrysuchi commented 4 months ago

Akcigit and Ates (2023) investigate the decline in business dynamism in the United States and identify a lack of knowledge diffusion as a major contributor to this lag in the American economy. In short, innovations may be made by larger firms, but there is too much protection of this knowledge, to the point where it cannot be diffused and turned into follow-on products.

I put this in the context of the demand for human capital in the United States. To provide a quick overview, I use ACS panel data from 1978 to 2024 to look at how human capital has changed over time. As seen in the figure below, average education dipped leading into the 1980s but has grown over time. This is likely driven both by more people completing school, i.e., by rising high school graduation and college attendance rates, and also slightly by more people pursuing graduate degrees. Both provide higher-quality labor inputs that firms can use to come up with new ideas and to design ways to implement technologies being diffused throughout the economy.

Image

As noted previously in our discussion of corporate R&D spending, rising returns to education may occur because firms use R&D subsidies to buy up the educated and prevent further innovation. On the other hand, there may simply be a market failure: despite the increased supply of human capital in the overall economy, educated workers are not finding work and therefore cannot innovate. I regress employment on education, with fixed effects for year, race, sex, and age, and interactions between year and education. I find a significant employment return to education, but these returns have slightly declined over time. See the table below for the output.

Image

If the returns to education still exist in terms of employability, and education is increasing, we can reasonably infer that the pipeline for human capital—that is, the market for human capital production—is working reasonably well. Our problem is not that people are not getting educated, and it is also not that people are not getting employed after being educated. Then the problem is likely that educated people are not producing the knowledge needed for growth and dynamism. Why? Innovation requires two forms of capital: human capital and knowledge capital. In the world where knowledge diffusion is being cut off by government policy or by unfair corporate action, then newer firms may not be able to innovate. Moreover, human capital may be being consolidated in specific firms with the intent to prevent other firms from using it. Thus, the issue is not the supply of human capital per se, but the use of market power to prevent its diffusion and the diffusion of its products.
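A trimmed sketch of the regression described above (employment on education with fixed effects plus year-by-education interactions), run on simulated ACS-style microdata. Every number is invented, and race is omitted for brevity; a real run would use the ACS extract and a proper regression package:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 4000

# Simulated ACS-style extract (all values hypothetical).
df = pd.DataFrame({
    "year": rng.choice([1990, 2000, 2010, 2020], size=n),
    "educ": rng.integers(8, 21, size=n).astype(float),  # years of schooling
    "sex": rng.choice(["m", "f"], size=n),
    "age": rng.integers(25, 60, size=n).astype(float),
})
# True process: the employment return to education declines over time.
slope = 0.03 - 0.005 * ((df["year"] - 1990) // 10)
df["employed"] = (0.2 + slope * df["educ"]
                  + rng.normal(0, 0.3, n) > 0.6).astype(float)

# Design matrix: year/sex dummies, education, age, year x education terms.
X = pd.get_dummies(df[["year", "sex"]].astype(str), drop_first=True)
X["educ"] = df["educ"]
X["age"] = df["age"]
for y in ["2000", "2010", "2020"]:
    X[f"educ_x_{y}"] = df["educ"] * (df["year"].astype(str) == y)
X.insert(0, "const", 1.0)

beta = np.linalg.lstsq(X.to_numpy(dtype=float),
                       df["employed"].to_numpy(), rcond=None)[0]
coefs = dict(zip(X.columns, beta))
print(round(coefs["educ"], 3))  # baseline (1990) return to education
```

The interaction coefficients (`educ_x_2000`, etc.) then capture how the return shifts in later years, which is where the decline over time would show up.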

pauline196 commented 4 months ago

Business Dynamism in Russia: Labor Share and Firm Entry Rate

In "Ten Facts on Declining Business Dynamism and Lessons from Endogenous Growth Theory," the authors examine the declining dynamism of the U.S. economy, identifying ten key symptoms of this trend. I was interested in replicating their analysis for Russia, focusing on two of the most accessible indicators: the labor share in GDP and the firm entry rate.

For the labor share in GDP, I used total payments to workers from 1995 to 2023 and GDP data from Rosstat [1]. I estimated alpha by dividing total labor payments by GDP. For the firm entry rate, I utilized the newly published RFSD dataset (GitHub RFSD), which includes data on all active firms in Russia [2]. To calculate the entry rate, I measured the number of new firms entering each year relative to the cumulative number of active firms. Since firm exits were only recorded in 2023, I assumed that all firms established before that year remained active.

The data show that Russia’s labor share is relatively low, remaining below 0.45. In comparison, the U.S. labor share was around 0.6 in 2010, with a historical average of 0.667. Unlike in the U.S., where the labor share has been declining, Russia’s labor share has increased from 0.33 in 1995 to 0.425 in 2023. It is unclear how this trend compares to the historical average, as Soviet-era data may not be reliable, though this could be explored in future research.

The second figure shows that the entry rate has been declining rapidly. In 1992, the entry rate was exceptionally high as the transition from a planned economy allowed for the creation of private firms. A sharp decline in the entry rate during the 1990s is expected given the initial surge in firm creation. However, the continued decline after 2000 suggests that structural barriers, regulatory constraints, or economic uncertainty may have played a role in discouraging new business formation. Further analysis could explore whether this trend is driven by tightening state control over industries, reduced access to capital or broader macroeconomic factors affecting innovation and entrepreneurship.
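The entry-rate calculation described above can be sketched with pandas. This toy registry is hypothetical (the real input is the RFSD universe of firms), and it takes the denominator to be the stock of firms active at the start of each year, consistent with the assumption that pre-2023 firms never exit:

```python
import pandas as pd

# Hypothetical firm registry: one row per firm, with its founding year.
firms = pd.DataFrame({"founded": [1992, 1992, 1992, 1995,
                                  1998, 2001, 2005, 2010]})

entries = firms["founded"].value_counts().sort_index()  # entrants per year
active = entries.cumsum()                               # cumulative active firms
# Entry rate: new firms in year t over firms active at the start of t.
entry_rate = entries / active.shift(1)
print(entry_rate.round(2))
```

The first observed year has no prior stock, so its rate is undefined, matching the note that 1992's exceptionally high entry reflects the transition from a planned economy.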

Image

[1] Federal State Statistics Service of the Russian Federation (Rosstat). Labor market, employment, and salaries. Retrieved from https://rosstat.gov.ru/labor_market_employment_salaries [2] Bondarkov, S., Ledenev, V., & Skougarevskiy, D. (2025). Russian Financial Statements Database: A firm-level collection of the universe of financial statements. arXiv:2501.05841. https://doi.org/10.48550/arXiv.2501.05841

jacobchuihyc commented 4 months ago

The application of artificial intelligence in financial markets has transformed the way firms assess risk, detect fraud, and execute trades. While automation in finance has brought striking improvements in efficiency, it also raises serious concerns about systemic risk and job loss. Erik Brynjolfsson's Turing Trap warns of the danger of applying AI to replace human labor rather than augment it, and that danger is extremely tangible in algorithmic trading. Fully automated trading platforms have been responsible for flash crashes and market distortions, underscoring that AI must be used to augment decision-making rather than operate as a stand-alone function. Augmentation matters here not as a speculative alternative but as an approach with demonstrated empirical benefit, as shown in the article Evaluating Data Augmentation for Financial Time Series Classification. According to that study, AI models improve significantly when trained with data augmentation rather than relying on past data alone.

In financial markets, AI-driven trading software is increasingly taking over, and the question arises whether this ultimately serves market stability and efficiency. AI is very adept at recognizing patterns and trading faster than any human, but, as past market disruptions have shown, it is poor at responding to and comprehending events outside of its learned patterns. The researchers in the study applied data augmentation techniques—like time-warping and noise injection—to artificially expand training sets for AI models. They found that augmented AI systems became more resilient to market movements and less prone to overfitting past trends. This suggests that the best path forward is not full automation but collaboration with AI, where analysts and traders employ AI as an advanced tool rather than as the decision-maker.
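The two augmentation techniques named above can be sketched in a few lines. This is a generic illustration, not the paper's exact implementation: jitter adds small Gaussian noise, and the time warp resamples the series along a smooth monotone distortion of the time axis:

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(series: np.ndarray, sigma: float = 0.02) -> np.ndarray:
    """Noise injection: perturb each observation with small Gaussian noise."""
    return series + rng.normal(0.0, sigma, size=series.shape)

def time_warp(series: np.ndarray, strength: float = 0.1) -> np.ndarray:
    """Time-warping: resample along a smooth monotone warp of the time axis."""
    n = len(series)
    t = np.linspace(0.0, 1.0, n)
    warped = t + strength * np.sin(np.pi * t)  # monotone for strength < 1/pi
    return np.interp(warped, t, series)

# A toy price path; each augmented copy is a plausible variant for training.
prices = np.cumsum(rng.normal(0.0, 1.0, 250))
augmented = [jitter(prices), time_warp(prices)]
print([a.shape for a in augmented])
```

Each call produces a slightly different but realistic-looking series, which is exactly what makes the augmented training set harder to overfit.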

Image

Image

The evidence for this approach is strong. Figure 3 of the report shows that TLo-NBoF models trained on augmented data outperformed those trained without augmentation, generating higher cumulative financial returns. Likewise, Figure 4 shows that LSTM models also benefited from augmentation, further supporting the view that AI systems function best when designed to learn and adapt rather than perform rigidly. These findings make a valuable point: AI is not inherently better at trading than people, but it can serve as a highly beneficial guide when provided with proper context. The question we should be asking is not how to replace human traders, but how to develop AI systems that enhance their abilities.

This discussion has practical implications for business and policy. Regulators and financial institutions must recognize that the pressure to fully automate is perilous for markets and the economy. Instead, they should support AI augmentation strategies that prioritize human oversight. Regulators can establish guardrails around fully automated trading platforms while encouraging research into AI models that improve human decision-making. Another urgent issue is the transparency of AI-led financial transactions, as unregulated algorithmic trading can conceal hidden risks. Financial institutions also need to invest in AI training for their personnel so professionals are adept at cooperating with AI rather than competing against it.

spicyrainbow commented 4 months ago

Empirical Case Study: Apple’s Knowledge Diffusion Strategy

Apple exemplifies how dominant incumbents limit knowledge diffusion, contributing to declining U.S. business dynamism. The article "Ten Facts on Declining Business Dynamism and Lessons from Endogenous Growth Theory” highlights that in highly competitive industries, firms invest aggressively in R&D to maintain their edge. Apple’s early years align with this model.

In the early 2000s, the mobile phone industry was fiercely competitive, with Nokia, Motorola, and BlackBerry as major players. To break into this market, Apple invested a comparatively high proportion of its revenue in R&D, leading to revolutionary innovations such as the touch-based interface, fingerprint recognition, and a completely new smartphone UI. These advancements disrupted the industry and set new standards.

Image

However, as Apple gained market dominance, its rate of radical innovation slowed. As the table of Apple's figures from 2007 to 2019 displays, as Apple's market share increased dramatically, its innovation shifted from novel to improvement-driven. While absolute R&D spending has risen, R&D as a percentage of revenue has declined relative to the intensely competitive early 2000s. The article's "escape competition" effect explains this shift: once firms achieve dominance, they face less competitive pressure to innovate aggressively. The iPhone today follows this trend, with incremental updates rather than groundbreaking changes. This illustrates how market concentration reduces dynamism: dominant firms lose the incentive to innovate in order to stay competitive.
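The escape-competition logic behind this shift can be stated compactly. In step-by-step innovation models of the kind the reading draws on (Aghion et al.), a firm's incentive to innovate is the profit gain from moving ahead of rivals:

$$\text{Incentive to innovate} = \pi_{\text{post}} - \pi_{\text{pre}}$$

For a neck-and-neck firm, pre-innovation profits $\pi_{\text{pre}}$ are largely competed away, so the gain from "escaping" competition is large. For an entrenched leader like post-2010 Apple, $\pi_{\text{pre}}$ is already high, so the marginal gain from radical innovation, and with it the pressure to pursue it, shrinks.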

What sets Apple apart from other leading tech firms is its extreme secrecy and controlled knowledge diffusion strategy. Unlike open platforms such as GitHub, or Google's flexible internal structure, Apple enforces strict confidentiality policies: employees face serious consequences for leaks, and Apple Park is closed to the public, limiting external knowledge spillovers. In addition, I learned from my visit to Silicon Valley that cross-departmental collaboration is very difficult: teams are closed off, and employees are not allowed to share details about their work even within the company. In contrast, Google allows employees to move flexibly between teams and departments, fostering internal knowledge diffusion.

Apple's approach illustrates how a company, as it gains market share, shifts its competitive strategy from innovating to limiting knowledge flows, protecting its intellectual property against new entrants. As a result, it contributes to a decline in the industry-wide incentive to innovate and in market dynamism.

Data captured from:
https://dazeinfo.com/2019/09/27/apple-research-and-development-expenses-by-year-graphfarm/
https://www.statista.com/statistics/216459/global-market-share-of-apple-iphone/
https://www.statista.com/statistics/273006/apple-expenses-for-research-and-development/

rzshea21 commented 4 months ago

Firm entry, exit, and reallocation indicate economic health and innovation, so I use them here as a proxy for business dynamism in the United States. Using the economy-wide dataset from the U.S. Census Bureau's Business Dynamics Statistics (BDS) database, I plotted firm dynamism trends over time, focusing on firm entry rates, exit rates, and labor reallocation rates. The dataset provides a historical perspective on how these indicators have developed over the years, visualizing a moderate decline in firm entry and exit rates and a more significant downward trend in labor reallocation rates. This analysis aligns with broader academic discussions on the decline of U.S. business dynamism (Akcigit & Goldschlag, 2022).

The Census Bureau statistics show modest declines in firm entry and exit rates since the 1980s, suggesting that fewer firms have entered or exited the marketplace over the last 40 years. This week's reading aligns well with these findings. It argues that the resulting increase in market concentration impedes knowledge diffusion, which reduces competition and the innovative pressure of creative destruction. Larger incumbent firms are better positioned to defend their market dominance through non-productive strategies like patent protection, capital accumulation, and political influence; these strategies create barriers for new entrants while preserving older established firms for longer. The firm exit rate has declined relatively steadily compared with the firm entry rate, suggesting that these non-productive strategies have allowed less efficient, large incumbents to avoid failure for longer while blocking new entrants. Competition and creative destruction are thus weaker, leading to less firm turnover.

The labor reallocation rate has also fallen measurably, reflecting decreasing movement of jobs across firms. Labor reallocation predicts economic efficiency and business dynamism through efficient labor markets. Decreasing job mobility likewise signals declining dynamism in the U.S., consistent with the idea that large incumbents may be accumulating labor non-productively, in contrast to smaller, more dynamic and innovative businesses.
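A minimal sketch of the computation behind the figure: the column names below follow the public BDS economy-wide schema, but the rows are illustrative numbers I made up for the example, not the actual Census values, and "gross reallocation = creation + destruction" is the standard Davis-Haltiwanger definition:

```python
import pandas as pd

# Toy rows mimicking the BDS economy-wide CSV; values are illustrative only.
bds = pd.DataFrame({
    "year":                 [1985, 2000, 2019],
    "firms":                [3_300_000, 4_300_000, 5_300_000],
    "estabs_entry_rate":    [13.5, 11.5, 9.5],   # entrants / avg. establishments (%)
    "estabs_exit_rate":     [11.0, 10.0, 8.8],
    "job_creation_rate":    [18.0, 16.0, 13.0],
    "job_destruction_rate": [16.0, 14.5, 11.5],
})

# Gross job reallocation = creation + destruction; net entry = entry - exit
bds["reallocation_rate"] = bds["job_creation_rate"] + bds["job_destruction_rate"]
bds["net_entry_rate"] = bds["estabs_entry_rate"] - bds["estabs_exit_rate"]
print(bds[["year", "net_entry_rate", "reallocation_rate"]])
```

With the real file in place of the toy rows, plotting these derived columns against `year` reproduces the declining entry and reallocation trends described above.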

Image

Source: Economy-wide dataset, U.S. Census Bureau, 2022 Business Dynamics Statistics https://www.census.gov/data/datasets/time-series/econ/bds/bds-datasets.html

jesseli0 commented 4 months ago

The Turing Trap in AI Art: A Brief Discussion.

Image

Generative image AI poses ethical and economic problems for artists, yet another area in which the Turing Trap emerges in contemporary society. To clarify the scope of this discussion: many artists and animators already use AI to save effort on menial tasks. That use falls under augmentation in the Turing Trap framework and is not the issue this post discusses. What we need to understand is the impact image generators have on artists when they are used purely for automation.

To start, the datasets that many of these image generators are trained on are composed of artworks unwittingly taken from artists. This creates a problem of intellectual property, as acquiring art in this fashion is highly unethical. At the very least, we should enact policy that protects artists from having their art taken for training data, along with regulation requiring those responsible for these models to notify or compensate artists whose works are used in training. Intellectual property for artists in this case is not unlike intellectual property in patent law; we should uphold it to ensure that people are justly compensated for their effort.

Beyond plagiarism, people are increasingly using image generators as a replacement for artists. This ranges from people posing as artists while using image generators to sell their work, to people relying on image generators instead of paying artists. This is exactly the "automation" process that the Turing Trap paper warns of. The immediate consequence, if left unchecked, is that many artists will struggle to pursue their profession as they are crowded out of the market by generative AI. Ironically, those developing image generators are sawing off the branch they are sitting on: once few artists remain, there will be nowhere left to get new training data. For artists and animators, this could be avoided with a mindset of augmentation. AI should be considered nothing more than a tool in this case, meant to save on menial labor rather than take the creative process out of making art. This would grant artists more bargaining power and prevent us from destroying our creative output as a species. As the chart above represents, those who view AI as a tool rather than an agent assign more credit to the artist.

Citations

Erik Brynjolfsson. 2022. The Turing Trap: The Promise & Peril of Human-like Artificial Intelligence. Daedalus.

Epstein, Z., Levine, S., Rand, D.G., and Rahwan, I. 2020. Who Gets Credit for AI-Generated Art? iScience, 23(9).

Harry H. Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. 2023. AI Art and its Impact on Artists. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23). Association for Computing Machinery, New York, NY, USA, 363–374. https://doi.org/10.1145/3600211.3604681