

How people think AI is going to kill them: terminator robots.

How AI is actually going to kill them: by destroying their habitat and drinking all their water.

in reply to Aral Balkan

It will be used to justify crimes against minorities as well; after all, the AI said it was best, based on some opaque utilitarian-inspired reasoning disguising racism, sexism, ...
in reply to Aral Balkan

you need to make another one where it's badly paid humans under the hood 😅 (for clarification: youtu.be/huu_9rAEiQU?si=ZAC0-V…)
in reply to Aral Balkan

*driven by millions of users who don't care about artists' rights.
in reply to Aral Balkan

the greatest trick AI ever pulled was creating the cultural perception that it would be the „slaving machines" that kill us, instead of the indifferent wills of the money-men who built the machines…

And of course, now they keep recycling the same fears with classism and racism.

And so many sci-fi writers have been blatantly uninformed and ignorant of these issues throughout.

in reply to Aral Balkan

And by polluting all the information they need to prevent it from happening.
in reply to Aral Balkan

And "assisting" internet searches, so we know we should eat:

- petrol in our spaghetti sauce
- glue in our pizza cheese
- rocks
- those mushrooms that melt down your liver

in reply to Aral Balkan

I’m prepared to bet people have already been killed by AI giving them really bad advice. But your story is so much worse.
in reply to Aral Balkan

Also terminator robots on top of that, if you live in the Middle East or Africa.
in reply to Aral Balkan

BS hyperbole.

Technology develops all the time, and AI resource demand is going down.

OR, is it like #Bitcoin, where the resource price is the bottleneck, throttled by the Bitcoin price? 🤔

@aral

in reply to Aral Balkan

Don't count out the killer robots. They are coming along nicely. Just because they will be deployed by humans and not by SkyNet and won't be self-replicating doesn't make them not a threat.

All those "human in the loop" systems are being developed with the knowledge that it would be more profitable to take the human out of the loop at some point. Palantir drools about it.

#AI

in reply to Aral Balkan

Why not both? Terminator robots / automated targeting systems that drink all the water?
in reply to Aral Balkan

I don't know. Mass suicide and artificial negligence might win this one long before anyone notices all the water is gone. Does anyone actually drink that boring stuff anyway?
in reply to Aral Balkan

@pezmico
Don't forget telling them to put glue on pizza and giving them deadly medical advice.
in reply to Aral Balkan

My current boss is using AI regularly; I'm currently concerned about the climate expense, not the LLM itself.
in reply to Aral Balkan

And here I thought AI was going to save us from the oncoming global cholera epidemic by boiling all our water for us.
in reply to Aral Balkan

and they still won't get *general* AI out of it. Just hallucinating piece-of-shit LLMs that can only churn out spam and incomprehensible text…

At the expense of what little climate stability remains

in reply to Aral Balkan

That’s the 20-40 year plan.

For the average person, they’re more likely to have their life threatened by an AI:
- Rejecting their medical needs - organ recipient, insurance (US)
- Deciding that cost cutting measures on the product factory floor are worth the risk to safety relative to likely blowback
- Rejecting their job application
- Devaluing the only work they are able to do, being disabled
- Stealing time, in productivity’s name, from others who might have seen their pain

in reply to Aral Balkan

There are also the people who'll die to it, not through anything bombastic, but because it decided to cut some bureaucratic thread that unravels their life entirely.

Making a massive misinformation generator means it'll misinform in small and big ways, some very noticeable but others will be subtle enough to cause some real damage.

in reply to Aral Balkan

Remember when we thought search engines would use up all the electricity and water?
Then it was social media.

It’s not that the technology is unproblematic — glorified predictive text of dubious origin being wildly and widely misused to support a fantasist tech bubble — but there is a pattern here.

in reply to Just Another Amy

@justanotheramy Almost as if there might be detrimental cumulative effects of the ever-increasing resource demands of Big Tech’s (capitalism’s) hype and bust cycles?
in reply to Aral Balkan

or moral panics are recycled when they’re proven distractions.

OpenAI’s Superalignment team was pushing AI as an existential threat when a distraction was needed from IP… complications…, labour abuses, and bias.
Now that grounding is creating new issues, like impersonation risk and exploit vulnerabilities, suddenly we’re supposed to be looking away, at the water?

It’s too convenient and too recurring.

in reply to Aral Balkan

and denying them life-saving health care (bureaucracy saying, "the AI did it. i was just following its orders")
in reply to Aral Balkan

A thousand years from now aliens visit Earth and sift through the remnants of humanity...

"Our archaeologists have discovered that apparently the downfall of their civilisation started with something they referred to as 'Clippy'."

in reply to Aral Balkan

The waste heat from AI could be used to drive desalination operations in coastal nations. It won't be.
in reply to Aral Balkan

there is a push toward using AI to "automate away" jobs involving critical thinking, like contract negotiation and decision making. The problem is that humans are much more driven by emotion than logic. Ever noticed that many people who claim to be "data driven" are actually balls of emotion?
in reply to Aral Balkan

Exactly! I had a discussion on AI with a colleague, and when I said I see an overall danger in AI, without being specific, he just threw in the argument: „yeah, killer robots are terrible, but we can regulate them; see, AI is just a tool like any other“…

It’s like you say. People have been effectively gaslit into believing that THAT is the real danger.

in reply to Aral Balkan

And destroying all capability for human interaction and information exchange that would be necessary to solve the problems it creates.
in reply to Aral Balkan

By absorbing all their energy. AI consumes copious amounts of energy, the equivalent of a small nation, I've heard it reported? Any reports to the contrary?