will be used to justify crimes against minorities as well; after all, the AI said it was best, based on some opaque utilitarian-inspired reasoning disguising racism, sexism, ...
I don't know. Mass suicide and artificial negligence might win this one long before anyone notices all the water is gone. Does anyone actually drink that boring stuff anyway?
There is a push toward using AI to "automate away" jobs involving critical thinking, like contract negotiation and decision making. The problem is that humans are much more driven by emotion than logic. Ever noticed that many people who claim to be "data driven" are actually balls of emotion?
By absorbing all their energy. AI consumes copious amounts of energy, the equivalent of a small nation's, I've heard it reported. Any reports to the contrary?
Sevoris
in reply to Aral Balkan
the greatest trick AI ever pulled was creating the cultural perception that it would be the "slaving machines" that kill us, instead of the indifferent wills of the money-men who built the machines…
And of course, now they keep recycling the same fears with classism and racism.
And so many sci-fi writers have been blatantly uninformed and ignorant of these issues throughout.
David Marshall
in reply to Aral Balkan
And "assisting" internet searches, so we know we should eat:
- petrol in our spaghetti sauce
- glue in our pizza cheese
- rocks
- those mushrooms that melt down your liver
HuK
in reply to Aral Balkan
BS hyperbole.
Technology develops all the time, and AI's resource demand is going down.
Or is it like #Bitcoin, where the resource price is the bottleneck, throttled by the Bitcoin price? 🤔
@aral
The Servitor
in reply to Aral Balkan
Don't count out the killer robots. They are coming along nicely. Just because they will be deployed by humans rather than by SkyNet, and won't be self-replicating, doesn't make them not a threat.
All those "human in the loop" systems are being developed with the knowledge that it would be more profitable to take the human out of the loop at some point. Palantir drools about it.
#AI
Walter Basil
in reply to Aral Balkan
@flargh I think it will be a dance off.
npr.org/2024/10/04/g-s1177-261…
Jeremy Mallin
in reply to Aral Balkan
Don't forget telling them to put glue on pizza and giving them deadly medical advice.
Kaladin 🧑🦽💨
in reply to Aral Balkan
and they still won't get *general* AI out of it. Just hallucinating piece-of-shit LLMs that can only churn out spam and incomprehensible text...
At the expense of what little climate stability remains
Jay
in reply to Aral Balkan
That's the 20-40 year plan.
The average person is more likely to have their life threatened by an AI:
- Rejecting their medical needs (organ recipients, insurance in the US)
- Deciding that cost-cutting measures on the factory floor are worth the risk to safety relative to the likely blowback
- Rejecting their job application
- Devaluing the only work they are able to do, being disabled
- Stealing time, in productivity's name, from others who might have seen your pain
Nini
in reply to Aral Balkan
There are also the people who'll die to it, not by anything bombastic, but because it decided to cut some bureaucratic thread that unravels their life entirely.
Making a massive misinformation generator means it'll misinform in small and big ways; some very noticeable, but others subtle enough to cause some real damage.
Just Another Amy
in reply to Aral Balkan
Remember when we thought search engines would use up all the electricity and water?
Then it was social media.
It’s not that the technology is unproblematic — glorified predictive text of dubious origin being wildly and widely misused to support a fantasist tech bubble — but there is a pattern here.
Aral Balkan
in reply to Just Another Amy
@justanotheramy Almost as if there might be detrimental cumulative effects of the ever-increasing resource demands of Big Tech's (capitalism's) hype and bust cycles?
Just Another Amy
in reply to Aral Balkan
Or moral panics are recycled when they're proven distractions.
OpenAI's Superalignment team was pushing AI as an existential threat when a distraction was needed from IP… complications…, labour abuses, and bias.
Now that grounding is creating new issues like impersonation risk and exploit vulnerabilities, suddenly we’re supposed to be looking away at the water?
It’s too convenient and too recurring.
Level 98
in reply to Aral Balkan
A thousand years from now aliens visit Earth and sift through the remnants of humanity...
"Our archaeologists have discovered that apparently the downfall of their civilisation started with something they referred to as 'Clippy'."
_noelamac_
in reply to Aral Balkan
Exactly! I had a discussion about AI with a colleague, and when I said I see an overall danger in AI, without being specific, he just threw in the argument: "yeah, killer robots are terrible, but we can regulate them; see, AI is just a tool like any other"…
It's like you say. People have been effectively gaslighted into believing that THAT is the real danger.
invertebrate roofer
in reply to Aral Balkan
Or by blocking all their attempts to get help, as it's already being used by departments of social services to process SNAP paperwork, Social Security paperwork, and new-patient paperwork for many medical clinics & some banks.
*If you can't use any money & can't get any medical care, then you're not going to survive very long in our society.