Watts-Lab / commonsense-platform

Commonsense platform
https://commonsense.seas.upenn.edu

Edits on front page text #139

Closed · markwhiting closed this 1 month ago

markwhiting commented 1 month ago

Currently:

Common sense is usually defined as “what all sensible people know,” but this definition is circular: how do we know someone is sensible other than that they possess common sense? As a result, most people believe that they themselves possess common sense, but have trouble articulating which of their beliefs are commonsense or how common they are.

This project seeks to resolve the intrinsic ambiguity of common sense empirically via a massive online survey experiment. Participants will rate short claims that span a wide range of knowledge domains, both in terms of their own agreement with the claim and their belief about others' agreement with it. We have developed novel methods to extract statements from several diverse sources including appearances in mass media, non-fiction books, and political campaign emails, as well as statements elicited from human respondents and generated by AI systems.

Ultimately, we hope to provide insight into the nature and limits of common sense, thereby aiding research communities (e.g. AI and ML) who wish to explore and simulate this ubiquitous yet frustratingly elusive concept. For more detail into this work, see paper A framework for quantifying individual and collective common sense, recently out at PNAS.

(waiting for @duncanjwatts to make some tweaks to this)

duncanjwatts commented 1 month ago

Common sense is often defined as “what all sensible people know,” but this definition is circular: how do we know someone is sensible other than that they possess common sense? As a result, most people believe that they themselves possess common sense, but can't articulate which of their beliefs are commonsensical or how widely those beliefs are shared by others.

This project seeks to quantify common sense empirically via a massive online survey experiment. Participants will read a series of "claims" about the physical and social world (e.g. "Dropped pebbles fall to the ground" or "Fully automatic assault rifles should be banned"), state whether they agree with each claim, and also state what they think most other people think.

We have developed novel methods to extract statements from several diverse sources, including appearances in mass media, non-fiction books, and political campaign emails, as well as statements elicited from human respondents and generated by AI systems. Our findings will shed light on the nature and limits of common sense, thereby aiding research communities (e.g., AI and ML) that wish to explore and simulate this ubiquitous yet frustratingly elusive concept.

For more detail on this work, see our recent paper A framework for quantifying individual and collective common sense, published in the Proceedings of the National Academy of Sciences.

markwhiting commented 1 month ago

@amirrr — can you put Duncan's version in production when you have a chance?