in reply to The Witch of Crow Briar

@crowbriarhexe I think this is a compelling proposal, but the problem is: what if someone makes a machine which is equivalent to a human?

This is complicated for me to talk about because I believe it is possible in principle to create a machine equivalent to a human, but I do not believe it is possible for a society which refers to ChatGPT by the name "AI" to create a machine which is equivalent to a human. Your values are simply too jacked to accomplish this thing

in reply to mcc

Based on replies in this thread, here is an alternate proposed "three laws of robotics".

1. A machine must never show an advertisement to a human, or through inaction allow an advertisement to be shown to a human.

2. A machine shall never use more power to perform a job than would be used by an equivalent human.

3. A machine must never present or refer to itself as though it were human, or through inaction allow a human to mistake it for one.

[Post 1 of 2]

in reply to mcc

Law 2 is per Amy Worall, law 3 is per the Witch of Crow Briar.

I do not endorse these laws, but I would consider them "utopian", in the sense that a culture which endorsed these laws would be a culture organized along a clearly-formed ideology. You could easily imagine a spec-fic story about a culture that believed in these laws. Note these laws are necessarily laws for human designers, as the existence of a machine which can enforce them is ideologically inconsistent with law 3.

[Post 2 of 2]

in reply to mcc

We already have a tragic example of law 3 in a recent SF movie: Disney's 2022 remake of Pinocchio. The puppet gets expelled on his first day of school because he is not human, after which he ends up on the street to get exploited by the fox and cat.

Not to mention that law 2 could unduly restrict power consumption of assistive devices for humans with disabilities.

#Pinocchio #Pinocchio2022 #discrimination #Disney #AssistiveTechnology

in reply to Damian Yerrick

@PinoBatch As specifically noted, I don't endorse this list of laws and find them primarily interesting as a fiction writing prompt. However:

- That's not a machine. That's a fictional person in a setting where they're socially coded as a non-person. The author did this *to* talk about dehumanization of people.

- An assistive device is a very poor example because by definition it is allowing people to do things they would not be able to do, or require undue effort to do, without the machine.

in reply to mcc

Your law 3 is a necessary consequence of Asimov's First Law. Which is why Daneel Olivaw had the initial R. prefix.

I suppose your laws 1 and 2 are also, but it's never wrong to be specific about harms, as you would otherwise have to assume the robot brain is capable of knowing all possible consequences of its actions.

A machine intelligence with even a fraction of this capability would be able to deduce that its very existence causes harm to humans, and must therefore destroy itself.

in reply to mcc

1600 Cal/day ≈ 77.48 watts, so that's the max amount of power a computer could use by this metric. Although you would also have to take the time required for a unit quantity of "work" into account - if a computer can do in 1 hour what would take a human a full 8 hours, then it could consume ~619.84 watts over that one hour and still come out ahead of the human.

Regardless, we're a ways off from reaching that point.
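The arithmetic above can be sketched as a small calculation. This is a hedged illustration of the thread's own numbers, assuming "Calorie" means the food kilocalorie (4184 J) and that the full daily intake counts against the energy budget; the function names are my own.

```python
# Convert a daily food-energy budget into an average power figure,
# per the comparison in the thread.
JOULES_PER_KCAL = 4184          # one food Calorie (kcal) in joules
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def avg_power_watts(kcal_per_day: float) -> float:
    """Average power (W) implied by a daily food-energy intake."""
    return kcal_per_day * JOULES_PER_KCAL / SECONDS_PER_DAY

def speedup_budget_watts(kcal_per_day: float, speedup: float) -> float:
    """Power a machine may draw while working `speedup` times faster
    than a human and still use no more total energy per task."""
    return avg_power_watts(kcal_per_day) * speedup

print(round(avg_power_watts(1600), 2))          # ≈ 77.48
print(round(speedup_budget_watts(1600, 8), 2))  # ≈ 619.85
```

(Exact division gives ~619.85 W for the 8x case; the 619.84 in the post comes from multiplying the already-rounded 77.48 by 8.)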

in reply to mcc

I was assuming an apples-to-apples comparison. So in your case, the comparison would be between an ATM and a bank teller. Here the ATM clearly comes out ahead (but only for the limited tasks an ATM can do). For the things that ChatGPT or Copilot can "do," not so much.

(Of course, there are other considerations. My calculation assumes all 1600 input Calories are spent during a person's work day, and also ignores the broader systemic harms of automation and poverty.)

in reply to mcc

I should say, when I read your original post, I wasn't even thinking of bitcoin. For ChatGPT et al., the power consumption versus human replacement is a pretty direct comparison, so that's what I was focused on.

For Bitcoin, I wouldn't worry about a human replacement, and instead look at the systemic benefit we get for the power consumption. Electric lighting used a ton of power back in the day, but it also significantly improved quality of life. Can Bitcoin say the same? Heck no!

in reply to Josh Triplett

@josh To stress I am using "Utopia" in the original sense of "a hypothetical place which runs on clearly articulated principles" not "a place where everything is good".

You can fix the problem you raise if you change either the text, or the underlying assumptions of the reader, such that it is always preferable for a machine to perform a task rather than a human. In that case the fix becomes not "get a human" but rather "come up with a better machine".

in reply to mcc

I object to this law version as it's based on assumptions that #Bitcoin and/or mining are harmful.

Mining is one path to convert **stranded/wasted** green energy into a more valuable form than the pittance local Utilities buy it back at.

This process enables green energy to be sold globally without needing direct transmission lines, effectively allowing it to 'work remotely' on the global market.

in reply to mcc

0¿. A machine must never, when asked for gumboots, offer a list of pornography, penis pumps or insurance providers to a human, or through inaction allow a list of said, to be shown to a human unless the pornography is performed in gumboots, the penis pump comes with gumboots (and why shouldn't they) or the insurance offered does indeed offer cover for gumboots or is at least, offered by an insurer that does, in their own time, wear gumboots while making penis pump related pornography.
in reply to mcc

I think you could generalize the first law to say that a machine may only present, or fail to prevent the presentation of, information whose use is of known, immediate, or pressing concern to the human, unless the human has consented to that specific (in)action by the machine.

That might cut down on machines being tuned to constantly steal our attention, whether it’s for the sake of sales or not.

in reply to Kevin Bowersox

@kevinbowersox
Try telling that to the accessibility door openers/closers when they're closing the door. You either have to wait for it to finish closing, or hit the button yourself and wait for it to open back up, and let it curse the next person behind you.

It sure seems like a design flaw that it would fight you opening it when it's trying to close.