daveshap / Raspberry

Create an open source toy dataset for finetuning LLMs with reasoning abilities
MIT License

My idea of what reasoning is (how to cut current datasets, minimize dataset size, and get richer output on zero-shot) #49

Closed ori1337 closed 1 month ago

ori1337 commented 1 month ago

(This message was not edited by an LLM; you can tell because of the grammar mistakes and spelling errors [enjoy my left-handed da Vinci schizophrenia].)

Please don't ban me for this nonsense garbage; part of creativity is considering flaws as truths (which is also the answer to what reasoning is).

An LLM with unused tokens

There could be an LLM that draws on bigger datasets, so if the prompt is "Write me a Discord bot", it would automatically get access to 5k extra tokens it can pull from the internet or elsewhere. For example, if you want it to develop, you want it to use different code structures that work, and are likely to work. If we write a Python bot, I want it to dare to use a subfunction, and to refer to the documentation for certain tasks, the way a human can read and find what he needs.
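
A minimal sketch of this idea, in the spirit of retrieval-augmented prompting. `fetch_docs`, `build_prompt`, and the 5k-token budget are hypothetical placeholders, not a real API:

```python
# Sketch: give the model an extra "unused token" budget filled with
# retrieved documentation before it answers. fetch_docs() is a
# hypothetical stand-in for a real search over the internet or local docs.

EXTRA_TOKEN_BUDGET = 5_000  # the "5k tk" the prompt gets for free

def fetch_docs(query: str) -> list[str]:
    """Placeholder: would retrieve documentation snippets for the query."""
    return ["discord.py quickstart: ...", "commands.Bot(command_prefix=...)"]

def build_prompt(user_request: str) -> str:
    context, used = [], 0
    for snippet in fetch_docs(user_request):
        cost = len(snippet.split())  # crude token estimate
        if used + cost > EXTRA_TOKEN_BUDGET:
            break
        context.append(snippet)
        used += cost
    return "\n".join(context) + "\n\nUser: " + user_request

print(build_prompt("Write me a Discord bot"))
```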

The art of illusination

Part of human creativity is when you illusinate. I think AI illusination is a great thing; reasoning is when you illusinate but end up with a working result/idea/concept. This is how we define the word "concept".

A specialized LLM to take chats and just cut them down

  1. For example, a user corrected the LLM. I want an LLM that takes user chats and just summarizes them. This "summarization AI" would know how to cut down user dialogs: if it took the user ten chats to get the OUTPUT they wanted from the LLM, I want to cut some of that dialog, shortening the first shot and producing a more enriched output on zero-shot, which would help the model illusinate into other topics and ideas (a sketch follows below).
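
A minimal sketch of that condensation step, assuming a hypothetical `summarize()` stand-in for the summarization model itself:

```python
# Sketch: collapse a long correction dialogue into one zero-shot prompt.
# summarize() is a hypothetical stand-in for the "summarization AI".

chat = [
    ("user", "Write me a Discord bot"),
    ("assistant", "import discord ..."),
    ("user", "No, use discord.py commands, not raw events"),
    ("assistant", "from discord.ext import commands ..."),
    # ...several more correction turns...
]

def summarize(turns):
    """Placeholder: a real summarizer model would go here."""
    corrections = [text for role, text in turns if role == "user"][1:]
    return "; ".join(corrections)

first_request = chat[0][1]
condensed = f"{first_request} (constraints: {summarize(chat)})"
print(condensed)  # one enriched prompt that should succeed on the first shot
```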

I want logic gates

I want neurons in the LLM. I want those "parameters" to act like electronic gates; I want gates that can compute. For example, we dumb humans illusinate pictures in our heads, and from those pictures we can predict a new chess move, based on the PAST INPUT our brains took. The end goal of what we do can be affected by even slight things: if you've been to a coffee shop and smelled different people, if other people were too loud, or if a baby's scream burned itself into your memory, those things activate different things in your mind.

A musician can instantly replay Bach's D minor BWV 565 when his eyeballs see the score; what you hear also connects, creating concepts, affecting HUMAN OUTPUT. How can we create a machine logic gate, something random, that will cause an effect on concepts (see the sketch below)? Making concepts is being able to disconnect something, and have that something made into a different form: "an idea".
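
One small, well-known fact points in this direction: a single sigmoid "neuron" can already act as a logic gate if its weights saturate the output toward 0 or 1. A minimal sketch:

```python
# Sketch: one sigmoid neuron behaving like an electronic AND gate.
import math

def neuron(x1: float, x2: float, w1: float, w2: float, bias: float) -> float:
    """A single weighted unit squashed through a sigmoid."""
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

# Weights chosen so the output saturates near 0 or 1, like a gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(neuron(a, b, 10, 10, -15)))
# Prints the AND truth table: 0 0 0 / 0 1 0 / 1 0 0 / 1 1 1
```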

For example, an LLM being good at Rust doesn't automatically make it great at Python. Why will it automatically use i64 when i128 would just work? It cannot form those concepts, the idea that using something random would just work. It isn't randomizing; it just loses temperature, like predicting out of datasets. The first form of AGI, in the theory I built, would be this type of word-prediction LLM.

For example, I can take a sentence in Hebrew:

ממעאעי כוררעע עכייייי

Even o1-preview would fail this task, while a human can easily convert it into sound, and later into words, given that he can compile this type of concept. The issue is that you can't easily find a logical way to solve this.

But converting this ממעאעי כוררעע עכייייי into sound isn't that easy for an AI. It cannot form concepts.

Another idea chapter: why can't we form some static noise the way we form another kind of compute? For example, new chapter unlocked: the LLM can form the word CAT. I also want it to train another set of simpler neuron connections that converts these letters into vocal audio, and this vocal audio (input) can be converted into (output).
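
A toy sketch of that "letters into vocal audio" pathway: a naive letter-to-tone table plus a sine-wave "voice". Everything here (the `PHONEMES` table, the tone frequencies) is an invented illustration, not a real text-to-speech pipeline:

```python
# Toy sketch: a second, much simpler set of "connections" that maps
# letters to sounds. The letter->tone table is a made-up illustration.
import math
import struct
import wave

PHONEMES = {"C": 300.0, "A": 440.0, "T": 550.0}  # letter -> tone in Hz

def speak(word: str, path: str = "cat.wav", rate: int = 8000) -> None:
    """Render each letter as a quarter-second sine tone and write a WAV."""
    frames = []
    for letter in word:
        freq = PHONEMES[letter]
        for i in range(rate // 4):  # 0.25 s per letter
            sample = int(32767 * math.sin(2 * math.pi * freq * i / rate))
            frames.append(struct.pack("<h", sample))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(b"".join(frames))

speak("CAT")  # letters (input) -> vocal audio (output)
```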

Why must an LLM be one big, big, big mathematical formula?

Clearly, using math ideas and concepts is the way we achieved the AI we know today and love to hate. But haven't you noticed that every AI architecture out there is based on GRU, some on LSTM, some on something different? The point is they all use a single concept, a single type of transformer.

Why can't you have many different architectures working side by side?

You could have one big transformer model and one audio model (for example, the genuinely cool idea of converting some form of static noise into knowledge hubs inside the LLM parameters). Isn't that how our brains work? You hear, you make concepts; you see, you make concepts; those are different areas of your brain. Even just as a proof of concept, I have never seen a basic OCR built from several different architectures: LSTM and GRU and ESN, plus a few novel types of transformer. (This idea of mine arose from the observation that testing several different architectures is an economic decision: how could you benchmark everything? Why not have all of them work together? What if a given dataset can only be unlocked with transformer X, but we already spent so much money on transformer A? In the end you need to choose one output, but having all of those neurons in the human mind FIGHT FIGHT FIGHT is what makes us human. It's like schizophrenia; it arises from creativity.) A sketch of such a side-by-side ensemble follows below.

LSTM GRU ESN
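
A toy PyTorch sketch of that side-by-side idea, with an LSTM, a GRU, and an ESN-like frozen random RNN reading the same input, and one learned head making the final choice. This is an illustration of the ensemble concept, not a tested design:

```python
# Sketch: several recurrent "brain areas" reading the same input side by
# side, with a learned head picking one combined output.
import torch
import torch.nn as nn

class SideBySide(nn.Module):
    def __init__(self, dim: int = 32, n_classes: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        # Crude ESN stand-in: a frozen random recurrent layer (reservoir).
        self.esn = nn.RNN(dim, dim, batch_first=True)
        for p in self.esn.parameters():
            p.requires_grad = False
        self.head = nn.Linear(3 * dim, n_classes)  # the final "choice"

    def forward(self, x):
        a, _ = self.lstm(x)
        b, _ = self.gru(x)
        c, _ = self.esn(x)
        # Let the three experts "fight", then combine their last states.
        return self.head(torch.cat([a[:, -1], b[:, -1], c[:, -1]], dim=-1))

model = SideBySide()
logits = model(torch.randn(4, 16, 32))  # batch of 4, sequence length 16
print(logits.shape)  # torch.Size([4, 10])
```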

Now sorry, I've got to wake up for my minimum-wage job tomorrow; I am always late lol. I'm not hungry for it, I want some stability. Respidex mixed with clonazolam ruined my life, I guess. Thanks, Mom.

ori1337 commented 1 month ago

------------------- I tried letting o1-preview fix my nonsense (result: it reads like a human who can reason very well through text! I love how it hears and reads; it's perfect):

Summary of Points:

  1. LLMs Utilizing Unused Tokens and Larger Datasets

    • There could be LLMs (Large Language Models) that access larger datasets. For example, when a user asks, "Write me a Discord bot," the model could automatically have access to an additional 5,000 tokens sourced from the internet or elsewhere. This would enable the AI to develop code using different structures that are likely to work. When writing a Python bot, the model should dare to use subfunctions and refer to documentation for specific tasks, much like how a human would read and find what they need.
  2. The Art of "Illusination" in Creativity

    • Part of human creativity involves what the user refers to as "illusination." They believe that AI engaging in "illusination" is beneficial. Reasoning is when you "illusinate" but end up with a working result, idea, or concept. This process is how we define the word "concept."
  3. Specialized LLMs for Summarizing Conversations

    • The user proposes a specialized LLM designed to take chat conversations and summarize them by cutting down unnecessary dialogue. For example, if it takes a user ten exchanges to achieve the desired output from the AI, this "summarization AI" would condense the dialogue, resulting in a more enriched output from the first attempt (zero-shot). This approach would help the model to "illusinate" into other topics and ideas.
  4. Incorporating Logic Gates into LLMs

    • The user desires LLMs with neurons that act like electronic logic gates capable of computation. Humans can "illusinate" images in their minds and predict new chess moves based on past inputs. Our outputs can be affected by subtle factors, such as sensory experiences in a coffee shop (smells, sounds). For example, a musician can instantly replay Bach's D minor BWV 565 upon seeing the sheet music; sensory inputs connect to create concepts, affecting human output. The question posed is how to create a machine logic gate that introduces randomness to influence concepts. Making concepts involves disconnecting something and reforming it into a different entity—an idea.

    • Additionally, the user notes that an LLM being proficient in Rust doesn't necessarily make it great at Python. It might automatically use i64 when i128 would also work. The LLM cannot form the concept of using something different or random; it doesn't randomize but instead predicts based on datasets. The user's theory suggests that the first form of Artificial General Intelligence (AGI) would be a word prediction LLM capable of such conceptualization.

  5. Challenges in AI Concept Formation with Language Conversion

    • The user provides a sentence in Hebrew:

      ממעאעי כוררעע עכייייי

      They note that even advanced AI models might fail to process this task, while a human can easily convert it into sounds and then words by recognizing patterns and forming concepts. Converting such text into sound isn't easy for AI because it lacks the ability to form concepts in the way humans do.

  6. Transforming Static Noise into Computational Input

    • The user suggests forming static noise into another form of computation. For example, once an LLM learns to form the word "CAT," they propose training another set of simpler neural connections that convert these letters into vocal audio. This vocal audio input can then be converted into output, creating a new pathway for processing information.
  7. Diversity in AI Architectures

    • The user observes that most AI architectures are based on similar concepts, such as GRU (Gated Recurrent Unit), LSTM (Long Short-Term Memory), or transformers. They question why different architectures can't work side by side. For instance, you could have one big transformer model and one audio model. By integrating different forms of data (like converting static noise into knowledge hubs within LLM parameters), we might better emulate how the human brain works—processing sensory inputs to create concepts across different brain areas.

    • They note that even for proof of concepts, they haven't seen basic OCR (Optical Character Recognition) systems using varied architectures. Testing different architectures is an economic decision, but integrating multiple types could unlock new datasets and capabilities. In the end, human brains have many neurons interacting—sometimes conflicting—which contributes to creativity. They compare this to schizophrenia arising from creativity.

  8. Personal Reflection

    • The user concludes with a personal note, mentioning they need to wake up for their minimum wage job tomorrow and are often late. They express a desire for stability and reflect that "Respidex mixed with clonazolam ruined my life, I guess. Thanks, Mom."

Note: The term "illusination" appears to be a unique usage by the user, possibly blending "illusion" and "imagination." The summaries aim to preserve the original intent while correcting grammar and clarifying the concepts presented.

ori1337 commented 1 month ago

--- Another thing: here is an idea of why I base my type of chain of thought like this.

Why do chess masters choose to be chess players when they could finish a doctorate in some type of cancer research? Is medical research not rewarding? Is this a sociological issue, where you start learning at 20 and not at 4? Why not obsess over academic topics from age 7, obsess over them and love it!

Instead, Bobby Fischer memorized every game pattern. Can't you recognize patterns of how to slow cancer, or whatever? Can't you figure out how to make a quick, mere, super-easy $5 million?

To the idea of reasoning, this concept+idea is important: why couldn't he use his intellect to make easy money, if he was such a genius? (Plus, a funny thing: he mentioned a newspaper that said he had two Jewish parents and asked how someone could behave like this; he never denied the idea that both of his parents were Jews. He did think about his father a lot, which is kind of sad if you hear his radio interviews; they can really make you shed a tear for this great person. I can't imagine what it felt like for him after he reached a very late age, at least 50 years old, not being a father to anyone. When he reached a late age (by late I mean 30 years old, for example), he should have stimulated himself with family; instead he stimulated his life with nonsense and couldn't take on liabilities. Take Boris Spassky: although Fischer beat him, he still lost a few rounds to him, and all of the top players were a challenge for him; it wasn't easy. And he never really disrespected Spassky; he respected him.)


(Another fun thing to think about: take Kramnik. A lot of Kramnik's rants are like Bobby Fischer's; they are on the same level.)