The question is not whether you can create software using LLMs - you can (most software is just boring CRUD shit).
But you do pay a hefty price: in lower quality (security issues, less maintainable code), in skill decay among the people "guiding" the stochastic parrots, etc.

It's not "can 'AIs' create software?" but "are we willing to accept worse software running more and more of our lives?"

in reply to GK

@gklka @raymaccarthy @map I agree with GK on this. Not all AI is the same, and it's definitely not black and white. With the right expertise and detailed specs, you can achieve great results while keeping the code maintainable and retaining ownership. I really dislike the mindset that everything has to be either absolutely good or 100% bad.
in reply to tante

And I cannot even begin to emphasize how much *it will cost about the same despite it being of lower quality*.

Once credit entities realize that GPUs become obsolete very fast, and that five years down the line the early mover needs to buy just as much new processing hardware as a latecomer, they will stop subsidizing today's AI as a gamble to capture tomorrow's market.

And then your 45-minute-saving boilerplate machine will cost $5 per run.

in reply to tante

Good take! But also, "can you create software" is not really an accurate framing of what the hard part of software ever was.

Most people could "create software" by looking up a Hello World example. That wouldn't help them solve any real problems, though.

LLMs produce software that *looks more like* it solves problems... but security, integrity, and legality were always implied parts of the problem.

Like, it takes a weird, subtle reframing of the goal to make LLMs look useful at all.

in reply to Max von Webel

@343max
Then, Max, you have no understanding of LLM/Gen AI, or maybe of specifying requirements, designing systems (modules, APIs, etc.), and then writing, testing, and debugging the code. If it's a project of any size, you need a team and management.
There is also documentation.
Actually writing the code is the easiest bit, and the only bit current LLM/Gen AI does - and does badly, as it relies on code scraped from elsewhere and statistical shuffling of fragments.
Can't work. It's a technological dead end.
in reply to tante

Well, well done for admitting that, demonstrably, the dog can play the piano. Now we are just talking about how well it plays.

FWIW, these LLMs have no need to be consistent with what happened in a previous context. The same LLM, in a new context, will usefully critique, find, and fix flaws in what it itself produced in the previous context.

The "slop" aspect of LLM output seems to come from just going with what one context produced blind, when it can iterate as, eg, QA manager.

in reply to tante

The best engineers I know just became more ambitious, and so should all of us.

I'll keep repeating this: there are tonnes of proprietary binary blobs in all of our tech. You can shout from the rooftops about how much you love your /e/OS phone, but if your phone's modem relies on a proprietary driver, it's pretty much worthless as "resistance against big tech". European digital sovereignty is equally worthless.

LLMs are good at staring at hexdumps; humans aren't. Use their advantage to build actually open tech.
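
To make "staring at hexdumps" concrete, here's a toy Python sketch that renders the first bytes of a blob in the text form you'd hand to an LLM; `firmware.bin` is a placeholder filename, not a reference to any real artifact:

```python
# Toy sketch: dump the first 256 bytes of a proprietary blob as a
# classic hexdump (offset, hex bytes, printable ASCII) that an LLM
# can read as plain text. "firmware.bin" is a placeholder.
with open("firmware.bin", "rb") as f:
    data = f.read(256)

for offset in range(0, len(data), 16):
    chunk = data[offset:offset + 16]
    hex_part = " ".join(f"{b:02x}" for b in chunk)
    ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    print(f"{offset:08x}  {hex_part:<47}  {ascii_part}")
```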

> But you do pay a hefty price: in lower quality (security issues, less maintainable code), in skill decay among the people "guiding" the stochastic parrots, etc.

Skill issue, idk what more to say. I don't find it any different from managing juniors and reviewing their PRs. Bad code is bad code.

in reply to tante

I think LLMs are only good for learning a programming language's basics or for getting yourself interested in some random topic. That's what I do with a local LLM. Beyond the basics, I do the actual research and learning myself.

Idk. I've never liked being spoonfed information anyway, and I learn best when my effort pays off. I always liked reading and learning stuff on my own even before LLMs started popping up, but idk why people suddenly decided to become really lazy.

in reply to tante

I don't believe that to be universally true. I *wish* it was, because it'd be so much easier to argue against them.

Unfortunately, the "mere" fact that all currently existing incarnations are fundamentally evil does not mean they must lead to lower-quality software.

A velocity-first mindset has *always* led to lower quality, regardless of GenAI. And these tools make that rush accessible to everyone, regardless of expertise/skill.

[1/2]

in reply to Lars Marowsky-Brée 😷

I'm also unsure skill decay is real as such. I, too, would struggle for a few moments before I could do long division again, or implement a sorting algorithm.

We get the lower quality not because people use LLMs.

But because they are pressured toward ever-faster velocity by capitalism/fascism that wants to deregulate everything.

LLMs, used right, can be *useful*.

The problem is they are currently a) evil, b) used badly at scale.

One *can* use them for high-quality results. [2/3]

in reply to tante

Yes, but is that actually untrue? Sure, even Anthropic has shown that people learn less (of what they'd have learned via the traditional method) when completing a task using GenAI.

But are they maybe learning *other* things? Is their use of that tool/method improving, for example? The Anthropic paper showed that this varied widely across different usage patterns.

Idk. I think it's simply too early to truly understand those mid- to long-term effects.