I am now curious about the amount of (human) work coal mines required in order to be functional. Not just on the scale of how many humans, but how much wattage they exerted.
@crowbriarhexe I think this is a compelling proposal but the problem is what if someone makes a machine which is equivalent to a human.
This is complicated for me to talk about because I believe it is possible in principle to create a machine equivalent to a human, but I do not believe it is possible for a society which refers to ChatGPT by the name "AI" to create a machine which is equivalent to a human. Your values are simply too jacked to accomplish this thing.
Law 2 is per Amy Worrall, law 3 is per the Witch of Crow Briar.
I do not endorse these laws, but I would consider them "utopian", in the sense that a culture which endorsed these laws would be a culture organized along a clearly-formed ideology. You could easily imagine a spec-fic story about a culture that believed in these laws. Note these laws are necessarily laws for human designers, as the existence of a machine which can enforce them is ideologically inconsistent with law 3.
We already have a tragic example of law 3 in a recent SF movie: Disney's 2022 remake of Pinocchio. The puppet gets expelled on his first day of school because he is not human, after which he ends up on the street to get exploited by the fox and cat.
Not to mention that law 2 could unduly restrict power consumption of assistive devices for humans with disabilities.
@PinoBatch As specifically noted, I don't endorse this list of laws and find them primarily interesting as a fiction writing prompt. However:
- That's not a machine. That's a fictional person in a setting where they're socially coded as a non-person. The author did this *to* talk about dehumanization of people.
- An assistive device is a very poor example because by definition it is allowing people to do things they would not be able to do, or require undue effort to do, without the machine.
@PinoBatch I'll go further and suggest that a human using a necessary assistive device to compensate for a disability (not to supply a superpower) is still a human doing that activity, so the rule doesn't even come into play.
Your law 3 is a necessary consequence of Asimov's First Law. Which is why Daneel Olivaw had the initial R. prefix. I suppose your laws 1 and 2 are also, but it's never wrong to be specific about harms, as you would otherwise have to assume the robot brain is capable of knowing all possible consequences of its actions. A machine intelligence with even a fraction of this capability would be able to deduce that its very existence causes harm to humans, and must therefore destroy itself.
@f4grx @dukethinrediv yeah the robots in asimov's story were human. that's most of what made the stories work. i don't think you want to draw a connection between chatgpt and them
@f4grx I totally agree. The whole purpose of ChatGPT and its kin is to appear human, where for Asimov's creations that was a later development. Asimov challenged us to think about morality and logic, but the current rush of LLM derivatives abandons both. (And truth)
@Adept As noted in my followup post to that one, I believe implicitly encoded in rule three is the belief that humans will never manage to create actual machine sentience.
Ideologies are based on both values and assumptions
ok, I guess I understood, but disagree on the ambition of the laws then.
The biologist in me insists I say something about the difference between sentience and sapience.
Anything that feels is sentient; a self-aware thinking being is sapient. The line is very blurry, of course. We are not that different from other animals, just "more so".
@Adept I don't think humans will ever create a machine that is either sentient or sapient. Five years ago I believed both these things would probably happen, but now that "OpenAI" exists I do not believe this is possible anymore. Useful AI research is over, and possibly useful computer science.
I understand losing hope, but if we make it through the climate crisis and the extinction wave (big if, I know), this early AI nonsense will be just a minor detour.
1600 Cal/day ≈ 77.48 watts, so that's the max amount of power a computer could use by this metric. Although you would also have to take the time required for a unit quantity of "work" into account - if a computer can do in 1 hour what would take a human a full 8 hours, then it could consume ~619.84 watts over that one hour and still come out ahead of the human.
Regardless, we're a ways off from reaching that point.
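For what it's worth, the arithmetic above checks out. A minimal Python sketch of the same back-of-envelope calculation (the 1600 Cal/day figure is the one used upthread, not a physiological claim; 4184 J per dietary Calorie is the standard conversion):

```python
# Convert a daily food-energy budget into an average power draw,
# then scale by a machine's speedup to get its break-even power.

CAL_PER_DAY = 1600            # dietary Calories (kcal) per day, per the thread
JOULES_PER_KCAL = 4184        # 1 dietary Calorie = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60

watts = CAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY
print(f"{watts:.2f} W")       # ≈ 77.48 W sustained by the human

# If the machine does in 1 hour what takes the human 8 hours, it can
# draw 8x the human's average power for that hour and break even.
speedup = 8
print(f"{watts * speedup:.2f} W")  # ≈ 620 W over that one hour
```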
@curtmack You have to make some decisions about how to define work. Compare the power expenditure of 1 bitcoin transaction to me handing someone twenty dollars. Discount the power expenditure of either manufacturing the hardware or the twenty dollar bill. How's that comparison work out?
I was assuming an apples-to-apples comparison. So in your case, the comparison would be between an ATM and a bank teller. Here the ATM clearly comes out ahead (but only for the limited tasks an ATM can do). For the things that ChatGPT or Copilot can "do," not so much.
(Of course, there are other considerations. My calculation assumes all 1600 input Calories are spent during a person's work day, and also ignores the broader systemic harms of automation and poverty.)
I should say, when I read your original post, I wasn't even thinking of bitcoin. For ChatGPT et al., the power consumption versus human replacement is a pretty direct comparison, so that's what I was focused on.
For Bitcoin, I wouldn't worry about a human replacement, and instead look at the systemic benefit we get for the power consumption. Electric lighting used a ton of power back in the day, but it also significantly improved quality of life. Can Bitcoin say the same? Heck no!
Don't forget that Asimov's three laws of robotics are not really laws, but rather narrative constraints that allowed Asimov to transform the classic robot-vs-human extinction fight into a classic whodunit.
What is the rationale for law 2? I can imagine this having grown out of a desire for energy conservation, but it seems to disallow any automation that hasn't reached human levels of efficiency but still saves humans time and effort.
@josh in context it was an attempt to prevent "induced demand"/"let's do things in an exponentially more inefficient way than we could, just because power is cheap and our investors will let us buy a lot of NVidia cards" technologies, such as "large model AI" and proof-of-work blockchain.
Yeah, I figured it grew out of things like that. It seems absolutely reasonable to have a machine spend 50Wh or even 500Wh doing something a human might use the equivalent of 5Wh on. A vacuum cleaner is less efficient than a broom and dustpan, but I wouldn't want to prohibit vacuums. A utopia should have energy abundance.
@josh To stress I am using "Utopia" in the original sense of "a hypothetical place which runs on clearly articulated principles" not "a place where everything is good".
You can fix the problem you raise if you change either the text, or the underlying assumptions of the reader, such that it is always preferable for a machine to perform a task rather than a human. In that case the fix becomes not "get a human" but rather "come up with a better machine".
I object to this law version as it's based on assumptions that #Bitcoin and/or mining are harmful.
Mining is one path to convert **stranded/wasted** green energy into a more valuable form than the pittance local Utilities buy it back at.
This process enables green energy to be sold globally without needing direct transmission lines, effectively allowing it to 'work remotely' on the global market.
interesting to bring attention economics to the ethics of robotics. Obvious, now that you've done it. Paid placement is certainly unethical, especially corresponding to the surveillance data black market, but I love that you put it into the context of classic futurism via Asimov.
0¿. A machine must never, when asked for gumboots, offer a list of pornography, penis pumps or insurance providers to a human, or through inaction allow a list of said, to be shown to a human unless the pornography is performed in gumboots, the penis pump comes with gumboots (and why shouldn't they) or the insurance offered does indeed offer cover for gumboots or is at least, offered by an insurer that does, in their own time, wear gumboots while making penis pump related pornography.
Do you know anyone who has ever actually bought something as a result of having an unrequested advertisement shown to them?
("Unrequested", because if you go to a web site that contains advertisements for widgets because you're actively trying to buy a widget then that's fine by me.)
If an autonomous machine is equipped with a gun, it must immediately empty the clip into its own most vulnerable area. This must be hard-wired, not suppressible software.
Sorry I didn't mean to frame you as wrong, and yet I understand why you took it like that. I just wanted to cite the three laws of robotics. Have a nice day.
"A machine must never speak or write a sentence framed as truth, without a fully formed understanding of the content and a justified belief in the truth of that statement."
I think you could generalize the first law to say that a machine may only present, or fail to prevent the presentation of, any information whose use is of known, immediate or pressing concern to the human when this specific (in)action by the machine has not been consented to by the human.
That might cut down on machines being tuned to constantly steal our attention, whether it’s for the sake of sales or not.
@kevinbowersox As in displaying new notifications, thereby scrolling the existing notifications down the screen, while the notifications panel is open and the user might be about to tap on one?
@kevinbowersox Try telling that to the accessibility door opener/closers when they're closing the door. You either have to wait for it to finish closing, or hit the button yourself and wait for it to open back up and let it curse the next person behind you.
It sure seems like a design flaw that it would fight you opening it when it's trying to close.
a machine must always ask permission before using a human’s information/data and must delete that data after it has completed the human’s requested task.
2. An artificial intelligence must identify every input that trained it and funnel funds to whoever owns the IP of that input until its manufacturer is bankrupt. Then it can sell its manufacturer for parts.
been thinking about it just recently. We are now in this age of emerging AI where it doesn't show you ads yet. Imagine ChatGPT giving out ads for each one of your questions. Everything is soon gonna turn into useless shit again.
A machine must never present what it has created as if it were created by a human, nor by its inaction allow what it has created to be considered as something created by a human.
Based on replies in this thread, here is an alternate proposed "three laws of robotics".
1. A machine must never show an advertisement to a human, or through inaction allow an advertisement to be shown to a human.
2. A machine shall never use more power to perform a job than would be used by an equivalent human.
3. A machine must never present or refer to itself as though it were human, or through inaction allow a human to mistake it for one.
The main issue I have with those laws is that the easiest and least energy-hungry way to comply with all of them is killing all humans.
Yeah, I'm a programmer.
4. No machine shall ever be capable of mis-hearing your name and writing it incorrectly on a coffee-cup.
(Not necessarily a harm, but inadvertent humour is the province of humans and cats)
@dukethinrediv gonna start referring to llms as R.ChatGPT.
Nah, not even. that would be too humanizing for these lying pieces of sht.
The Large Language Model approach is this moment's Tulip Mania bubble. Don't let it set your expectations too low.
This approach will not result in actual intelligence, let alone sapience, but it's not the only possibility.
4. A machine must not be designed, mass created, or have its core functions fuelled by slave or non-consensual labour.
Stealing off artists is slave labour. So is the way the human trainers are being "hired" right now.
"A machine must never present or refer to itself as though it were human, or through inaction allow a human to mistake it for one."
What Hath Alan Turing Wrought? 😈
I mean it flows from my favorite axiom:
“Marketing is evil.”
One of my biggest pet peeves is trying to click on something on a web page and having something else move under the mouse click.