The question is not whether you can create software using LLMs - you can (most software is just boring CRUD shit).
But you do pay a hefty price: In lowering quality (security issues, less maintainable), in skill decay in the people "guiding" the stochastic parrots, etc.
It's not "can AIs create software?" but "are we willing to accept worse software running more and more of our lives?"
Benoît B. (in reply to tante):
+1
> "destroying the ecological, informational or social environment"
However good this generated code may be, it remains unacceptable because of that. And that should be the ultimate argument, since code quality may rise, but at the cost of more destruction.
★ Zagara ★ (in reply to tante):
A very mediocre dystopia indeed.
Fabian Transchel (in reply to ★ Zagara ★):
@Nausipoule I'd argue that instead (or, if you like, additionally) it is the terminal form of stochastic terrorism:
You will be randomly denied services, participation and dignity. Now isn't that quite a future.
mxey (in reply to mʌp ᯅ):
Link: "The Final Bottleneck", Armin Ronacher (Armin Ronacher's Thoughts and Writings)

Ray McCarthy (in reply to GK):
I've a bridge over the river Shannon you can buy.
"make wonderful, creative, unique software using AI."
No. An LLM can't create at all, and if the output actually works and meets the spec, it's likely copied.
Ray McCarthy (in reply to GK):
A compiler implements it. The LLM/GenAI is a rubbish search engine, database, and statistical engine. It regurgitates based on prompts, not formal specifications.
Cogito ergo mecagoendios (in reply to tante):
And I cannot even begin to emphasize how much *it will cost about the same despite being of lower quality*.
Once credit entities realize that GPUs become obsolete very fast, and that five years down the line the early mover needs to buy just as much new processing hardware as a latecomer, they will stop subsidizing today's AI as a gamble to capture tomorrow's market.
And then your 45-minutes-saving boilerplate machine will cost $5 per run.
Niels Abildgaard (in reply to tante):
Good take! But also, "can you create software" was never really an accurate framing of what the hard part of software was.
Most people could "create software" by looking up a Hello World example. That wouldn't help them solve any real problems, though.
LLMs produce software that *looks more like* it solves problems... but security, integrity, and legality were always implied parts of the problem.
Like, it takes a weird subtle reframing of the goal to make LLMs look at all useful.
Ray McCarthy (in reply to Max von Webel):
Then, Max, you have no understanding of LLM/GenAI, or maybe of specifying requirements, designing systems (modules, APIs, etc.), and then writing, testing, and debugging the code. For any sizeable project you need a team and management.
There is also documentation.
Actually writing the code is the easiest bit, and the only bit current LLM/GenAI does. It does it badly, as it relies on code scraped from elsewhere and statistical shuffling of fragments.
Can't work. It's a technological dead end.
Christian Kruse (in reply to Max von Webel):
@343max I think that's kind of the wrong question. Skill degradation and the moral implications (crawling of copyrighted material, climate, etc.) don't go away just because the generated code is good.
But I'm pretty sure you are aware 🙂
degenerating degenerate (in reply to tante):
Well, well done for admitting that, demonstrably, the dog can play the piano. Now we are just talking about how well it plays.
FWIW, these LLMs have no need to stay consistent with what happened in a previous context. The same LLM, in a fresh context, will usefully critique, find, and fix flaws in what it itself produced in the previous context.
The "slop" aspect of LLM output seems to come from blindly shipping what one context produced, when the model could instead iterate, e.g., as its own QA manager.
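That iterate-as-QA idea can be sketched as a small loop. `call_llm` below is a hypothetical placeholder for any chat-completion call; the function name, prompts, and round count are illustrative assumptions, not something from this thread:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion API call.
    A real implementation would send `prompt` to a model with no shared
    history, so every call below starts from a fresh context."""
    raise NotImplementedError

def generate_with_qa(task: str, rounds: int = 2, llm=call_llm) -> str:
    # Fresh context 1: produce a first draft.
    draft = llm(f"Write code for this task:\n{task}")
    for _ in range(rounds):
        # Fresh context: the same model reviews the draft without any
        # memory of having written it.
        critique = llm(f"Review this code for flaws:\n{draft}")
        # Fresh context: apply the critique to produce a new draft.
        draft = llm(f"Rewrite this code:\n{draft}\napplying this review:\n{critique}")
    return draft
```

Each call is its own context, so the model plays author, reviewer, and fixer without shared state, which is the "QA manager" iteration described above.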
John (in reply to tante):
I mean, you can also "build a house" by using deck screws to connect some wet Doug fir 2x4s into a "frame", stapling on some drywall and siding, and draping the whole thing in a plastic tarp.
You will die when it falls on you, but for a time, it was a "house".
Bogdan Buduroiu (in reply to tante):
in reply to tante • • •The best engineers I know just became more ambitious, and so should all of us.
I'll keep repeating this: there are tonnes of proprietary binary blobs in all of our tech. You can shout from the rooftops about how much you love your /e/OS phone, but if your phone's modem relies on a proprietary driver, it's pretty much worthless as "resistance against big tech". European digital sovereignty is equally worthless.
LLMs are good at staring at hexdumps, humans aren't. Use their advantage to build actually open tech.
> But you do pay a hefty price: In lowering quality (security issues, less maintainable), in skill decay in the people "guiding" the stochastic parrots, etc.
Skill issue, idk what more to say. I don't find it any different from managing juniors and reviewing their PRs. Bad code is bad code.
Really Lazy Bear (in reply to tante):
I think LLMs are only good for learning a programming language's basics or for getting yourself interested in some random topic. That's what I do with a local LLM; I do the actual research and learning beyond the basics myself.
Idk. I've never liked being spoonfed information anyway, and I learn best when my effort pays off. I always liked reading and learning on my own even before LLMs started popping up, but idk why people decided to become really lazy.
Lars Marowsky-Brée 😷 (in reply to tante):
I don't believe that to be universally true. I *wish* it were, because it'd be so much easier to argue against them.
Unfortunately, the "mere" fact that all currently existing incarnations are fundamentally evil does not mean they must lead to lower quality software.
A velocity-first mindset has *always* led to lower quality, regardless of GenAI. And these tools make that rush accessible to everyone, regardless of expertise or skill.
[1/2]
Lars Marowsky-Brée 😷 (in reply to Lars Marowsky-Brée 😷):
I'm also unsure skill decay is real as such. I, too, would struggle for a few moments before I could do long division again, or implement a sorting algorithm.
We get the lower quality not because people use LLMs, but because they are pressured to ever-faster velocity by capitalism/fascism that wants to deregulate everything.
LLMs, used right, can be *useful*.
The problem is they are currently a) evil, b) used badly at scale.
One *can* use them for high quality results. [2/3]
Lars Marowsky-Brée 😷 (in reply to tante):
Yes, but is that actually untrue? I know that even Anthropic has shown that people learn less (of what they'd have learned via the traditional method) when completing a task using GenAI, sure.
But are they maybe learning *other* things? Is their use of that tool/method improving, for example? E.g., the Anthropic paper showed that this varied widely for different usage patterns.
IDK. I think it's truly too early to understand those mid- to long-term effects.
LoseFriendsandAlienatePeople (in reply to tante):
I could see a world where coders stop sharing data online, retreat and lock their code into a new "Internet", making the old, open Internet dead.