jdhodgkins opened this issue 2 years ago
This issue now has a funding of 0.15 ETH (491.44 USD @ $3276.27/ETH) attached to it as part of the https://github.com/foresight-org fund.
Work has been started.
These users each claimed they could complete the work by the stated deadline. Please review their action plans below:
1) okeaguugochukwu has started work.
It's a pleasure to work on the bounty; I will do the research and tell the story.
2) ify01 has started work.
I am very much interested in sharing a story that I strongly believe will come to pass in the future.
3) thiangm has started work.
4+ years of content creation; writing stories fascinates me.
4) joggyjagz7 has started work.
I will tell an inspiring story about ethical practices around the proceeds of AI.
5) digitea00 has started work.
I would love to write a response to this.
6) nwakakukaks has started work.
Will write extensively on AI; the subject remains an interesting one for me.
7) bagusbrajamusti has started work.
One of the first things many people do in the morning is open their phones. When unlocking a smartphone with biometrics, such as Face ID, we are making use of AI.
Learn more on the Gitcoin Issue Details page.
Work for 0.15 ETH (295.05 USD @ $1967.0/ETH) has been submitted by:
@jdhodgkins please take a look at the submitted work:
The funding of 0.15 ETH (269.52 USD @ $1775.88/ETH) attached to this issue has been approved & issued to @shayanpr.
Bounty concept
In the Existential Hope podcast, we invite scientists to speak about long-termism. Each month we release an episode in which we interview a visionary scientist about the science and technology that can accelerate humanity toward desirable outcomes. One of the questions we always ask is for the scientist to give an example of a potential eucatastrophe. The term "eucatastrophe" was originally coined by J.R.R. Tolkien to describe "the sudden happy turn in a story which pierces you with a joy that brings tears."
In a paper published by the Future of Humanity Institute, Owen Cotton-Barratt and Toby Ord use Tolkien's term to suggest that "an existential eucatastrophe is an event which causes there to be much more expected value after the event than before" — in other words, the opposite of a catastrophe.
Telling stories can make what seems abstract become real and clear in our minds, so we have created a bounty based on this prompt. Use this event as a story device to paint the picture of a day in the life in which it happens. How would it make us feel? How would we react? What hope can people draw from it?
An example of a eucatastrophe, by Anna Yelizarova (Future of Life Institute)
"An event where we collectively agree on more ethical practices around the proceeds of AI, so that all people can have their needs met by society." To me, a eucatastrophe is not just something very good happening out of the blue. It's like in storytelling: everything is on the brink of collapse, and things are about to end very badly for all the characters we care deeply about. Then suddenly there's a knight in shining armor with an army that comes and saves us. I think that's what Tolkien was referring to as a eucatastrophe.
Thinking about how this storytelling device would look in relation to AI: it would involve a lot of people who are fed up with their needs not being met by society, frictions building up, and perhaps even some agreement being breached. The eucatastrophe is then an event that addresses the needs of the people who are complaining.
The paper "The Windfall Clause" puts forward the idea of an agreement under which all companies building AGI commit beforehand that, if they build an AI that accrues more than a certain percentage of global GDP, that money will be reallocated into a trust, and how it is spent will be decided communally.
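As a rough illustration of that mechanism, here is a minimal Python sketch of the threshold-and-transfer logic described above. The GDP figure, the 1% trigger, and the 50% contribution rate are purely illustrative assumptions for this sketch, not terms from the paper.

```python
# Hypothetical sketch of a Windfall Clause-style obligation.
# All numbers below are illustrative assumptions, not the paper's actual terms.

GLOBAL_GDP = 100_000_000_000_000  # rough illustrative figure (~100 trillion USD)
WINDFALL_THRESHOLD = 0.01         # assumed trigger: profits above 1% of global GDP

def windfall_obligation(annual_profit: float,
                        global_gdp: float = GLOBAL_GDP,
                        threshold: float = WINDFALL_THRESHOLD,
                        rate: float = 0.5) -> float:
    """Return the amount owed to the communal trust.

    Profits below the threshold share of global GDP owe nothing; profits
    above it contribute a fixed fraction (here 50%, an assumption) of the
    excess to the trust.
    """
    trigger = threshold * global_gdp
    if annual_profit <= trigger:
        return 0.0
    return rate * (annual_profit - trigger)

# Example: a firm earning 2% of global GDP in one year
print(windfall_obligation(0.02 * GLOBAL_GDP))  # half of the excess above the trigger
```

How the reallocated funds are then spent would be decided communally, which is outside the scope of this sketch.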
So the eucatastrophe would be an event where we collectively agree that enough is enough; where we break out of this paradigm and become bigger people in a scenario where nobody would expect it; where we're pleasantly surprised, put a stop to the machinery, and agree on more ethical practices around the proceeds of AI.
Bounty prompt
Describe a day in the life when we have an event where humanity collectively agrees on more ethical practices around the proceeds of AI, so that all people can have their needs met by society.
Submit your bounty response below for a chance to be rewarded 0.15 ETH.