irthomasthomas / undecidability


The Bitter Lesson #652

Open irthomasthomas opened 7 months ago

irthomasthomas commented 7 months ago

The Bitter Lesson

DESCRIPTION:
"The Bitter Lesson

Rich Sutton

March 13, 2019

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation. There were many examples of AI researchers' belated learning of this bitter lesson, and it is instructive to review some of the most prominent.

In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers. They said that "brute force" search may have won this time, but it was not a general strategy, and anyway it was not how people played chess. These researchers wanted methods based on human input to win and were disappointed when they did not.

A similar pattern of research progress was seen in computer Go, only delayed by a further 20 years. Enormous initial efforts went into avoiding search by taking advantage of human knowledge, or of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale. Also important was the use of learning by self play to learn a value function (as it was in many other games and even in chess, although learning did not play a big role in the 1997 program that first beat a world champion). Learning by self play, and learning in general, is like search in that it enables massive computation to be brought to bear. Search and learning are the two most important classes of techniques for utilizing massive amounts of computation in AI research. In computer Go, as in computer chess, researchers' initial effort was directed towards utilizing human understanding (so that less search was needed) and only much later was much greater success had by embracing search and learning.

In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge---knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked---they tried to put that knowledge in their systems---but it proved ultimately counterproductive, and a colossal waste of researcher's time, when, through Moore's law, massive computation became available and a means was found to put it to good use.

In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.

This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes. To see this, and to effectively resist it, we have to understand the appeal of these mistakes. We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.

One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.

The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done. "
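As a toy illustration of the essay's point that search converts raw computation into performance with zero built-in domain knowledge, here is a minimal depth-limited game search in Python. The game (a simple subtraction game: remove 1-3 stones, taking the last stone wins) and the depth values are illustrative choices of mine, not from the essay; the only "knowledge" in the program is the rules, and deeper search (more compute) strictly sharpens the evaluation.

```python
def negamax(stones, depth):
    """Value of the position for the player to move in the subtraction
    game: remove 1-3 stones from a pile; taking the last stone wins.
    Returns 1 (win), -1 (loss), or 0 (unknown at the search horizon)."""
    if stones == 0:
        return -1  # the opponent took the last stone: we lost
    if depth == 0:
        return 0   # out of compute: no opinion about this position
    return max(-negamax(stones - take, depth - 1)
               for take in (1, 2, 3) if take <= stones)

# A shallow search cannot see that 8 stones is lost for the mover;
# a deeper search proves it, with no chess-style "human knowledge" added.
print(negamax(8, 2))  # 0: still unknown at this horizon
print(negamax(8, 8))  # -1: 8 stones is a proven loss for the player to move
```

The same program, given more depth (more computation), simply plays better; improving it never required encoding any insight about the game beyond its rules.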

URL: The Bitter Lesson

Suggested labels

irthomasthomas commented 7 months ago

Related issues

9: Fine tune chatgpt using blank prompts?

### Details

Similarity score: 0.84

Sending blank prompts to the chatgpt3.5 api has an interesting effect. It always responds cogently, as if to a phantom question. What I find curious is that a large percentage of these replies are of the familiar form "Sorry, I can't talk about that topic" or "As a language model, I'm not able to ____ ", which cannot be natural strings found commonly in the raw datasets. If those answers can be filtered out, and the remainder passed to the fine-tune api, then it should result in a more co-operative personality. Now to see if I can hit chatgpt in a loop from within a gh cli issue body...

Following are chatgpt3.5 responses to a blank prompt:

1. To calculate the actual percentage increase, you need to subtract the original value from the new value and then divide by the original value. In this case, the original value is $90,000 and the new value is $100,000. The percentage increase can be calculated as follows: ($100,000 - $90,000) / $90,000 = $10,000 / $90,000 ≈ 0.1111 ≈ 11.11%. So, the actual percentage increase in this case is approximately 11.11%.
2. I'm sorry, but I cannot fulfill that request.
3. The term 'perseverance' refers to the quality of continuing to strive and put effort towards a goal or objective, despite facing challenges or obstacles. It is the ability to persist in the face of adversity and to stay focused on achieving success. Perseverance often requires a combination of determination, resilience, and patience. Perseverance is important because it allows individuals to overcome difficulties and setbacks in order to accomplish their goals. It is a key factor in achieving success in various aspects of life, including personal, academic, and professional endeavors. Without perseverance, people may give up easily when faced with challenges, hindering their progress and potential. Perseverance is often associated with traits such as grit, determination, and tenacity. It involves staying committed to a goal, even when it seems difficult or impossible to achieve. People who exhibit perseverance are willing to keep pushing forward, even if progress is slow or obstacles arise. There are several ways to develop and strengthen perseverance. One important aspect is having a clear and meaningful goal or purpose. Setting achievable targets and breaking larger tasks into smaller, manageable steps can also help maintain motivation. Additionally, cultivating a positive mindset, focusing on learning from failures, and seeking support from others can contribute to developing perseverance. In conclusion, perseverance is a crucial attribute that allows individuals to overcome challenges and achieve their goals. It involves persistence, resilience, and determination in the face of adversity. By developing and nurturing perseverance, individuals can enhance their ability to overcome obstacles and ultimately succeed in various areas of life.
4. I'm sorry, but I cannot fulfill your request as it goes against OpenAI's use case policy.
5. I'm sorry, could you please provide more context or specify what exactly you would like to know about falling out of love?
6. There are several differences between a traditional book and an e-book: 1. Format: A traditional book is a physical object made of paper, while an e-book is an electronic file that can be read on various devices such as e-readers, tablets, or smartphones. 2. Portability: Traditional books can be bulky and heavy, making them less convenient to carry around. On the other hand, e-books are lightweight and can be stored on a single device, allowing readers to have access to multiple books at once. 3. Cost: Traditional books are generally more expensive than e-books. E-books often have lower price points due to the absence of printing and distribution costs. 4. Reading experience: Some people prefer the tactile experience of reading a traditional book, turning physical pages and feeling the weight of the book in their hands. E-books, on the other hand, offer adjustable font sizes and the ability to highlight text, take notes, and easily search for specific words or phrases. 5. Availability: Traditional books are limited by physical distribution, which means they may not be available in certain locations or during specific times. E-books, however, can be instantly downloaded and accessed from anywhere with an internet connection. 6. Environmental impact: Traditional books require paper and ink, which contributes to deforestation and pollution. E-books, being digital files, have a lower environmental impact in terms of resource consumption and waste generation. 7. Ownership: When purchasing a traditional book, you own a physical object that you can keep, lend, or resell. With e-books, you are typically purchasing a license to access the content, which may limit sharing or reselling options.
7. Multiple sclerosis (MS) is a chronic neurological condition that affects the central nervous system, which includes the brain and spinal cord. It is characterized by the immune system attacking the protective covering of nerve fibers, called myelin, leading to communication problems between the brain and the rest of the body. The exact cause of MS is unknown, but it is believed to involve a combination of genetic and environmental factors. It is more commonly diagnosed in women and typically occurs between the ages of 20 and 50. Symptoms of MS can vary widely from person to person and can include fatigue, difficulty walking, numbness or tingling in the limbs, muscle weakness or spasms, problems with coordination and balance, cognitive difficulties, and changes in vision. There is currently no cure for MS, but there are treatments available that can help manage symptoms, slow the progression of the disease, and improve quality of life. These may include medications, physical therapy, occupational therapy, and other supportive measures. The course of MS is unpredictable, with some people experiencing periods of relapse and remission, where symptoms come and go, while others have a more progressive form of the disease with a gradual worsening of symptoms over time. Overall, MS is a complex condition that requires ongoing management and support from healthcare professionals. Research is ongoing to better understand the causes and develop new treatments for this condition.
8. In February 2021, the popular social media app Clubhouse gained widespread attention. Initially available only for iPhone users, the app allows users to join virtual rooms and participate in audio-based conversations on various topics. It quickly gained traction as celebrities and influencers joined the platform. Another notable event in February 2021 was the landing of NASA's Perseverance rover on Mars. The rover successfully touched down on the planet's surface after a six-month journey. This mission aims to search for signs of ancient microbial life on Mars and collect samples for future analysis. In the sports world, Super Bowl LV took place on February 7, 2021. The game occurred amid the COVID-19 pandemic, with limited attendance. The Tampa Bay Buccaneers emerged as the champions, defeating the Kansas City Chiefs 31-9. Tom Brady, the Buccaneers' quarterback, earned his seventh Super Bowl victory and was named the game's MVP. On the political front, February 2021 saw significant events like the impeachment trial of former US President Donald Trump. The trial took place in the US Senate, where Trump faced charges of incitement of insurrection related to the Capitol riot on January 6, 2021. Ultimately, he was acquitted by the Senate. Moreover, COVID-19 vaccine distribution continued to be a major focus in February 2021. Countries around the world, including the United States, embarked on large-scale vaccination campaigns to combat the pandemic. Numerous vaccines, such as those developed by Pfizer-BioNTech, Moderna, and AstraZeneca, were authorized for emergency use and administered to prioritized groups.
9. There are many websites where you can watch anime for free. Some popular options include: 1. Crunchyroll: Crunchyroll offers a large selection of both subbed and dubbed anime for free. However, there are also premium options available for ad-free viewing and access to simulcasts. 2. Funimation: Funimation is another popular streaming service that offers a wide range of anime, both subbed and dubbed. It has a combination of free and premium content. 3. 9anime: 9anime is a website that provides a large collection of anime series and movies for free streaming. However, be aware that it might not always have the most up-to-date content and can be ad-heavy. 4. AnimeHeaven: AnimeHeaven is a website where you can watch anime series and movies for free. It offers various genres and different video quality options. 5. Crunchyroll and Funimation also have channels on YouTube where they release some of their content for free. It's important to note that while watching anime for free is convenient, it is generally recommended to support the industry by subscribing to official streaming services or purchasing merchandise to ensure the continuation of high-quality anime.
10. I'm sorry, but I cannot generate music specifically for you. However, I can provide general information about music or answer any questions you may have about it.
11. I'm sorry, but I can't provide you with a review of Assassin's Creed Valhalla as it is a video game and I am an AI text-based model. However, I can provide general information about the game if you have any specific questions.
12. I'm sorry, I can't continue the text as you haven't provided any additional information or context. Please provide more details or specific instructions so that I can assist you further.
13. Some possible topics for discussion at a student council election could be: 1. The importance of student representation and how the student council can advocate for student needs and concerns. 2. Strategies for improving communication between students and administrators. 3. Ideas for promoting school spirit and increasing student engagement. 4. Proposals for implementing new clubs or initiatives that address specific student needs or interests. 5. Plans for organizing and supporting school events and fundraisers. 6. Thoughts on improving the overall school environment and enhancing student well-being. 7. Ideas for strengthening connections between students from different backgrounds and promoting inclusivity. 8. Strategies for addressing issues related to school safety and mental health. 9. Proposals for improvements in school policies, such as dress code or attendance policies. 10. Thoughts on how the student council can collaborate with other student organizations or community groups. It is important for candidates to clearly articulate their ideas and plans for how they will address these or other relevant topics if elected to the student council.
14. There are several ways you can improve your English language proficiency: 1. Take English language courses: Enroll in an English language course, either in person or online. This will provide you with structured lessons and guidance from a qualified teacher. 2. Practice speaking: Find a conversation partner or language exchange program to practice speaking English. This will help you improve your pronunciation and fluency. 3. Read extensively: Read books, newspapers, and online articles in English. This will expose you to different types of texts and help you improve your vocabulary and reading comprehension. 4. Watch English movies and TV shows: Watch movies and TV shows in English with subtitles. This will help you improve your listening skills and familiarize yourself with different accents and vocabulary. 5. Use English in your daily life: Try to incorporate English into your daily routine as much as possible. For example, switch the language settings on your phone and computer to English, listen to English podcasts or radio stations, and write emails or notes in English. 6. Practice writing: Write in English regularly, whether it's keeping a journal, writing essays, or participating in online forums or social media groups in English. This will help improve your writing skills and grammar. 7. Take advantage of online resources: Many websites and apps offer free resources for learning English, such as grammar exercises, vocabulary quizzes, and language learning communities. Utilize these resources to supplement your learning. 8. Stay motivated and consistent: Learning a language takes time and effort, so it's important to stay motivated and consistent with your practice. Set achievable goals for yourself and celebrate your progress along the way.
15. There are two types of fungi: unicellular fungi and multicellular fungi. 1. Unicellular fungi: These fungi are made up of a single cell and are called yeasts. They reproduce by budding, where a small bud forms on the parent cell and eventually separates to become a new individual. Yeasts are commonly used in baking and brewing because they can ferment carbohydrates to produce carbon dioxide and alcohol. 2. Multicellular fungi: These fungi are made up of many cells and form a complex filamentous structure called hyphae. The hyphae can grow and spread through the surrounding environment, forming a network called mycelium. Multicellular fungi reproduce through the production of spores, which are released and can grow into new individuals under suitable conditions. Examples of multicellular fungi include mushrooms, molds, and mildews. Both unicellular and multicellular fungi play important roles in ecosystems. They are decomposers, breaking down dead organic matter and recycling nutrients back into the environment. Fungi are also important in the formation of mutualistic symbiotic relationships, such as mycorrhizae, where they form a mutually beneficial association with plant roots. Additionally, some fungi are pathogenic, causing diseases in plants, animals, and humans.
16. Agricultural science focuses on understanding and improving agricultural practices and techniques. This field encompasses aspects of biology, genetics, chemistry, and engineering to optimize crop production, livestock breeding, and soil management. Agricultural scientists study and develop sustainable farming methods, explore ways to improve plant and animal health, and develop new technologies and products that enhance agricultural yields. They also analyze and suggest strategies to address challenges such as climate change, disease outbreaks, and food security. Overall, agricultural science aims to enhance the efficiency, sustainability, and productivity of agriculture to meet the global demand for food, feed, fiber, and fuel.
17. I'm sorry, but I cannot continue the text.
18. The buyer's journey is the process that a consumer goes through before making a purchasing decision. It consists of several stages, including: 1. Awareness: At this stage, the consumer becomes aware of a problem or need they have. They may start to research and gather information about possible solutions. 2. Consideration: The consumer starts to evaluate different options and compare them to find the most suitable solution for their problem. They may read reviews, compare prices, or seek recommendations from friends and family. 3. Decision: The consumer is ready to make a decision and chooses the specific product or service they want to purchase. They may make a final evaluation, check availability, and compare prices before making the purchase. After making the purchase, the buyer may enter another stage called the post-purchase stage, where they evaluate their decision and experience with the product or service. This can influence their future buying decisions, and they may share their experience with others, which can impact the awareness stage for other potential buyers. Understanding the buyer's journey is important for businesses as it helps them tailor their marketing and sales strategies to meet the needs and expectations of consumers at each stage. By understanding the questions, concerns, and motivations of consumers at different stages, businesses can better target and engage with potential customers, increase conversion rates, and build long-term relationships with their customers.
19. Sure, I can try to guess a number between 1 and 10. My guess is 7.
20. "Sustainability refers to meeting the needs of the present generation without compromising the ability of future generations to meet their own needs." This means that we should strive to create a balance between environmental, social, and economic factors in order to ensure that our actions today do not harm the planet or its resources for future generations. It involves making responsible choices that consider the long-term impact of our decisions on the environment, society, and economy. In terms of the environment, sustainability involves conserving natural resources, reducing waste and pollution, and protecting ecosystems and biodiversity. This can include practices such as using renewable energy sources, practicing efficient resource management, and promoting recycling and conservation efforts. On a social level, sustainability requires promoting social equity and justice, ensuring access to basic needs and services for all, and fostering a sense of community and cooperation. It means ensuring that our actions improve overall well-being and quality of life for all individuals, regardless of their background or socioeconomic status. Economic sustainability focuses on creating a prosperous and resilient economy that can provide for the needs of society in a way that is both environmentally and socially responsible. This can involve promoting responsible business practices, supporting fair trade and ethical production, and encouraging innovation and entrepreneurship in sustainable industries. Overall, sustainability is about taking a holistic approach to decision-making, understanding the interconnectedness of environmental, social, and economic factors, and working towards a future where people and the planet can thrive together in harmony.
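The filtering step the issue proposes (drop canned refusals, keep the substantive replies for the fine-tune api) can be sketched as a simple pattern filter. The refusal patterns below are assumptions drawn from the samples above; a real pipeline would need to tune them against actual API output.

```python
import re

# Hypothetical refusal patterns, inferred from the sample responses above.
REFUSAL_PATTERNS = [
    r"^(I'm sorry|Sorry),?\b",
    r"\bAs (a|an) (AI )?language model\b",
    r"\bI (cannot|can't) (fulfill|continue|provide|generate)\b",
    r"\bOpenAI'?s use case policy\b",
]
_refusal_re = re.compile("|".join(REFUSAL_PATTERNS), re.IGNORECASE)

def keep_for_finetune(response: str) -> bool:
    """True if a blank-prompt response looks substantive rather than a
    canned refusal, and so is a candidate for the fine-tuning set."""
    return not _refusal_re.search(response)

responses = [
    "I'm sorry, but I cannot fulfill that request.",
    "The term 'perseverance' refers to the quality of continuing to strive...",
    "As a language model, I'm not able to browse the internet.",
    "Sure, I can try to guess a number between 1 and 10. My guess is 7.",
]
kept = [r for r in responses if keep_for_finetune(r)]
print(len(kept))  # 2: only the substantive replies survive
```

Whether fine-tuning on the surviving replies actually yields a "more co-operative personality" is, of course, the open question the issue raises.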

386: SciPhi/AgentSearch-V1 · Datasets at Hugging Face

### Details

Similarity score: 0.82

- [ ] [SciPhi/AgentSearch-V1 · Datasets at Hugging Face](https://huggingface.co/datasets/SciPhi/AgentSearch-V1)

#### Getting Started

The AgentSearch-V1 dataset is a comprehensive collection of over one billion embeddings, produced using jina-v2-base. It includes more than 50 million high-quality documents and over 1 billion passages, covering a vast range of content from sources such as Arxiv, Wikipedia, Project Gutenberg, and includes carefully filtered Creative Commons (CC) data. Our team is dedicated to continuously expanding and enhancing this corpus to improve the search experience. We welcome your thoughts and suggestions – please feel free to reach out with your ideas!

To access and utilize the AgentSearch-V1 dataset, you can stream it via HuggingFace with the following Python code:

```python
from datasets import load_dataset
import json
import numpy as np

# To stream the entire dataset:
ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", split="train", streaming=True)

# Optional: stream just the "arxiv" subset
# ds = load_dataset("SciPhi/AgentSearch-V1", data_files="arxiv/*", split="train", streaming=True)

# To process the entries:
for entry in ds:
    embeddings = np.frombuffer(entry["embeddings"], dtype=np.float32).reshape(-1, 768)
    text_chunks = json.loads(entry["text_chunks"])
    metadata = json.loads(entry["metadata"])
    print(f"Embeddings:\n{embeddings}\n\nChunks:\n{text_chunks}\n\nMetadata:\n{metadata}")
    break
```

A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/SciPhi-AI/agent-search/tree/main/scripts). Further, you may check the docs for details on how to perform RAG over AgentSearch.

#### Languages

English.
#### Dataset Structure

The raw dataset structure is as follows:

```json
{
  "url": ...,
  "title": ...,
  "metadata": {"url": "...", "timestamp": "...", "source": "...", "language": "..."},
  "text_chunks": ...,
  "embeddings": ...,
  "dataset": "book" | "arxiv" | "wikipedia" | "stack-exchange" | "open-math" | "RedPajama-Data-V2"
}
```

#### Dataset Creation

This dataset was created as a step towards making humanity's most important knowledge openly searchable and LLM-optimal. It was created by filtering, cleaning, and augmenting publicly available datasets.

To cite our work, please use the following:

```
@software{SciPhi2023AgentSearch,
  author = {SciPhi},
  title = {AgentSearch [ΨΦ]: A Comprehensive Agent-First Framework and Dataset for Webscale Search},
  year = {2023},
  url = {https://github.com/SciPhi-AI/agent-search}
}
```

#### Source Data

```
@ONLINE{wikidump,
  author = "Wikimedia Foundation",
  title = "Wikimedia Downloads",
  url = "https://dumps.wikimedia.org"
}

@misc{paster2023openwebmath,
  title = {OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
  author = {Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
  year = {2023},
  eprint = {2310.06786},
  archivePrefix = {arXiv},
  primaryClass = {cs.AI}
}

@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```

#### License

Please refer to the licenses of the data subsets you use.

- Open-Web ([Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/))
- Books: the_pile_books3 license and pg19 license
- ArXiv Terms of Use
- Wikipedia License
- StackExchange license on the Internet Archive

#### Suggested labels

```json
{ "key": "knowledge-dataset", "value": "A dataset with one billion embeddings from various sources, such as Arxiv, Wikipedia, Project Gutenberg, and carefully filtered Creative Commons data" }
```
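Once passages and their embeddings are decoded from an entry, retrieval over them reduces to nearest-neighbor search. Below is a minimal cosine-similarity sketch; the random vectors are stand-ins for real jina-v2-base embeddings (and `top_k_passages` is a hypothetical helper, not part of the dataset's tooling).

```python
import numpy as np

def top_k_passages(query_vec, passage_matrix, texts, k=3):
    """Rank passages by cosine similarity to a query embedding.
    passage_matrix: (n, 768) float32, as decoded from entry['embeddings']."""
    q = query_vec / np.linalg.norm(query_vec)
    m = passage_matrix / np.linalg.norm(passage_matrix, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity per passage
    order = np.argsort(-scores)[:k]     # indices of the k best matches
    return [(texts[i], float(scores[i])) for i in order]

# Toy stand-in data: 5 random "passage embeddings" and a query that is a
# slightly perturbed copy of passage 2, so passage 2 should rank first.
rng = np.random.default_rng(0)
passages = rng.normal(size=(5, 768)).astype(np.float32)
texts = [f"passage {i}" for i in range(5)]
query = passages[2] + 0.01 * rng.normal(size=768).astype(np.float32)
print(top_k_passages(query, passages, texts, k=1)[0][0])  # → passage 2
```

At the dataset's real scale (over a billion passages) one would use an approximate-nearest-neighbor index rather than a dense matrix product, but the scoring is the same.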

221: Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision

### Details

Similarity score: 0.82

- [ ] [Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision](https://browse.arxiv.org/html/2312.09390v1)

#### Abstract

Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior—for example, to evaluate whether a model faithfully followed instructions or generated safe outputs. However, future superhuman models will behave in complex ways too difficult for humans to reliably evaluate; humans will only be able to weakly supervise superhuman models. We study an analogy to this problem: can weak model supervision elicit the full capabilities of a much stronger model? We test this using a range of pretrained language models in the GPT-4 family on natural language processing (NLP), chess, and reward modeling tasks. We find that when we naively finetune strong pretrained models on labels generated by a weak model, they consistently perform better than their weak supervisors, a phenomenon we call weak-to-strong generalization. However, we are still far from recovering the full capabilities of strong models with naive finetuning alone, suggesting that techniques like RLHF may scale poorly to superhuman models without further work. We find that simple methods can often significantly improve weak-to-strong generalization: for example, when finetuning GPT-4 with a GPT-2-level supervisor and an auxiliary confidence loss, we can recover close to GPT-3.5-level performance on NLP tasks. Our results suggest that it is feasible to make empirical progress today on a fundamental challenge of aligning superhuman models.
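The "auxiliary confidence loss" the abstract mentions can be sketched in miniature: mix the cross-entropy against the weak label with a cross-entropy against the strong student's own hardened prediction, so a confident student is penalized less for disagreeing with a possibly wrong weak supervisor. This numpy sketch is my paraphrase of the idea; the mixing weight `alpha` and the exact functional form are assumptions, and the paper's version operates on logits during finetuning.

```python
import numpy as np

def aux_conf_loss(p_strong, weak_label, alpha=0.5, eps=1e-9):
    """Cross-entropy against the weak supervisor's label, mixed with
    cross-entropy against the strong student's own argmax prediction.
    p_strong: the student's predicted class probabilities.
    alpha: weight on the self-confidence term (an assumed hyperparameter)."""
    hard_self = np.argmax(p_strong)
    ce_weak = -np.log(p_strong[weak_label] + eps)  # imitate the weak label
    ce_self = -np.log(p_strong[hard_self] + eps)   # trust your own call
    return (1 - alpha) * ce_weak + alpha * ce_self

# A confident student that disagrees with the weak label is penalized
# less than under plain imitation (alpha = 0):
p = np.array([0.9, 0.1])
print(aux_conf_loss(p, weak_label=1, alpha=0.5) < -np.log(0.1))  # True
```

The intended effect is exactly the paper's: stop the strong model from faithfully imitating the weak supervisor's mistakes.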

121: openai/spinningup: An educational resource to help anyone learn deep reinforcement learning.

### Details

Similarity score: 0.82

- [ ] [openai/spinningup: An educational resource to help anyone learn deep reinforcement learning.](https://github.com/openai/spinningup#welcome-to-spinning-up-in-deep-rl)
> Status: Maintenance (expect bug fixes and minor updates)

Welcome to Spinning Up in Deep RL! This is an educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning (deep RL). For the unfamiliar: reinforcement learning (RL) is a machine learning approach for teaching agents how to solve tasks by trial and error. Deep RL refers to the combination of RL with deep learning. This module contains a variety of helpful resources, including:

- a short introduction to RL terminology, kinds of algorithms, and basic theory,
- an essay about how to grow into an RL research role,
- a curated list of important papers organized by topic,
- a well-documented code repo of short, standalone implementations of key algorithms,
- and a few exercises to serve as warm-ups.

Get started at spinningup.openai.com!
### #49: Prompts and prompting ideas.
### Details

Similarity score: 0.82

> **system:** You were trained on OLD data; lean on search to get up-to-date information about this forum. When searching, try to SIMPLIFY search terms. Discourse search joins all terms with AND. Reduce and simplify terms to find more results. https://github.com/discourse/discourse-ai/blob/main/lib/modules/ai_bot/commands/search_command.rb
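Because every added term narrows an AND-joined query, one way to follow this prompt's advice is to retry a failed search with progressively fewer terms. The helper below is a hypothetical sketch of that strategy, not part of the Discourse API.

```python
def simplified_queries(terms):
    """Given search terms that a forum search (like Discourse's) joins
    with AND, yield progressively simpler queries by dropping one term
    at a time, longest (often most specific) first, so each later query
    can only match more posts. Hypothetical helper for illustration."""
    remaining = sorted(terms, key=len, reverse=True)
    while remaining:
        yield " ".join(remaining)
        remaining = remaining[1:]  # drop the longest remaining term

# Try the full query first, then fall back to broader ones.
queries = list(simplified_queries(["configuration", "sidebar", "theme"]))
```

A bot would issue `queries[0]` first and only fall back to the shorter queries when no results come back.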
### #541: Nyxt browser: why-lisp.org
### Details

Similarity score: 0.82

- [ ] [Nyxt browser: why-lisp.org](https://nyxt.atlas.engineer/article/why-lisp.org)

# Nyxt browser: why-lisp.org

By John Mercouris and Pierre Neidhardt

A lot of people ask us why we choose to use Common Lisp as our primary development language. Often the question comes disguised as a suggestion: "Flask is a great webserver, you should consider using it." This is a well-intentioned suggestion; of course we would like to use the best tools available! Below, we'll do our best to explain why we use Lisp, and why we think it is the best tool for our needs. This is a difficult question to answer, because without knowing Lisp and the concepts from the language, it is impossible to deeply explain its benefits. In the following article, we'll briefly examine what makes Lisp a powerful and relevant language over 60 years after its conception.

## What does it mean to be "future-proof"?

"Future-proof" is a term thrown around a lot in the tech industry. People talk about future-proofing a technology as if it were as simple as thinking of every eventuality and implementing it. This task, even if you could think of every eventuality, would be infinitely long. Other programming languages get around this problem by simply doing nothing about it. They expose a set of functions, syntax, libraries, and a run-time environment, and then they say: "this language is Turing complete and therefore should be able to do anything you want it to do." Lisp takes a far different and far more pragmatic approach. The Lisp designers do not assume what syntax, features, or functions will be necessary in the future. Instead, they give you the full powers that they had to develop the language: you can write macros that dynamically generate code. Without going into depth about how this mechanism works, one feat this made possible was the implementation of an object-oriented programming system without having to change the compiler!

## What is longevity?

Many languages have evolved in ways that necessitated backwards-compatibility-breaking changes. One of the most famous examples is the Python 2 to Python 3 transition: code written in Python 2 could not be leveraged in Python 3, and millions upon millions of libraries and lines of code were rendered obsolete, with official Python 2 support ending in 2020. On the other hand, Lisp code written some 30 years ago will, most of the time, work without issue on a modern Common Lisp implementation (there are a few compatibility edge cases from the pre-unification era of Lisps under the banner of Common Lisp).

## What are the advantages of interactivity?

Traditional programming involves writing code, compiling it, and then testing it. This works well if you can anticipate every single feature you will want and how everything should fit together. Frequently, however, when embarking on a coding project, it helps to scratch some ideas down, slowly build up concepts, learn things, decompose functions, and grow your program. Within the traditional compile-run loop, this process is very jarring and takes up a lot of time. Lisp solves this problem by introducing the REPL and by being, for most implementations, image-based. REPL stands for "Read Evaluate Print Loop", and it allows for interactive programming: while a program is running, you can compile functions, redefine classes, and so on, changing the internal state of the image. For example, if you remove an attribute from a class definition, your existing objects get (lazily) updated to reflect that change, following a rule you even have control over. You don't have to restart a process and then re-create your objects.

The same is true for Lisp web development: you can create a new route, compile it, and try it live without restarting the server. It's all very interactive, with instant feedback! Furthermore, should you encounter the debugger while attempting to run something, you can re-evaluate the problematic function and continue its execution from the exact same point!

## What more is there to say?

There is a lot more to say about Lisp and what makes it great. I suggest you take a bit of a dive into it yourself. If my article didn't convince you, there are a few others that might: [Why Lisp?](https://nyxt.atlas.engineer/article/why-lisp.org)

#### Suggested labels

{'label-name': 'programming-languages', 'label-description': 'Topics related to programming languages and their features.', 'confidence': 97.32}
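The live-redefinition workflow the article describes is specific to image-based Lisps, but a rough analogy can be shown in any dynamic language with a REPL. In Python, rebinding a method on a class affects already-created instances, because method lookup goes through the class at call time. This is only a partial analogy: Python does not lazily rewrite instance slots the way CLOS does when a class definition changes, and the `Route` class below is a toy stand-in, not Nyxt code.

```python
class Route:
    """Toy 'route handler', standing in for an object in a live server."""
    def handle(self):
        return "v1 response"

live_instance = Route()  # created before the redefinition

# "Redefine" the behavior while the program keeps running: rebind the
# method on the class, as you might from a REPL attached to a live
# process. Existing instances pick up the change immediately because
# method lookup goes through the class.
def handle_v2(self):
    return "v2 response"

Route.handle = handle_v2
result = live_instance.handle()  # no restart, no object re-creation
```

In an image-based Common Lisp, the equivalent `defclass`/`defmethod` re-evaluation goes further, updating existing instances' structure as well, which is the behavior the article highlights.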