Academics on Mastodon
A list of various lists consisting of academics on Mastodon (academics-on-mastodon)
Elasticsearch is Open Source, Again
[D.N.A] Elasticsearch and Kibana can be called Open Source again. It is hard to express how happy this statement makes me. Literally jumping up and down with excitement here. All of us at Elastic are. Open source is in my DNA. It is in Elastic DNA. Being able to call Elasticsearch Open Source again is pure joy.
[LOVE.] The tl;dr is that we will be adding AGPL as another license option next to ELv2 and SSPL in the coming weeks. We never stopped believing and behaving like an open source community after we changed the license. But being able to use the term Open Source, by using AGPL, an OSI approved license, removes any questions, or fud, people might have.
[Not Like Us] We never stopped believing in Open Source at Elastic. I never stopped believing in Open Source. I’m going on 25 years and counting as a true believer. So why the change 3 years ago? We had issues with AWS and the market confusion their offering was causing. So after trying all the other options we could think of, we changed the license, knowing it would result in a fork of Elasticsearch with a different name and a different trajectory. It’s a long story.
Elasticsearch is Open Source, Again
Elastic is adding AGPL as an open source license option to Elasticsearch alongside ELv2 and SSPL....Elastic
On Rust, Linux, developers, maintainers
There's been a couple of mentions of Rust4Linux in the past week or two, one from Linus on the speed of engagement and one about Wedson...Dave Airlie (Blogger)
It talks about different developer styles – slightly interesting and not too long-winded, I guess – but not much about the actual situation.
I think this is still not such a great look for Rust. I had expected interfacing Rust to C to present fewer problems than it seems to. I had hoped the Rust compiler could produce object code with almost no runtime dependencies, the way C compilers can. If things were as one would hope, integrating Rust code into the kernel would be fairly painless from the C side.
It does sound, from the earlier post, like there was some toxicity going on. Maybe it had something to do with the context being a DRM driver.
I looked at a few Rust tutorials but they seemed to take forever to get to any interesting parts. I will keep looking.
[This comment has been deleted by an automated system]
I regretfully completely understand Wedson's frustrations.
lore.kernel.org/lkml/202408282…
A subset of C kernel developers just seem determined to make the lives of the Rust maintainers as difficult as possible. They don't see Rust as having value and would rather it just goes away.
When I tried to upstream the DRM abstractions last year, that all was blocked on basic support for the concept of a "Device" in Rust. Even just a stub wrapper for struct device would be enough.
That simple concept only recently finally got merged, over one year later.
When I wrote the DRM scheduler abstractions, I ran into many memory safety issues caused by bad design of the underlying C code. The lifetime requirements were undocumented and boiled down to "design your driver like amdgpu to make it work, or else".
My driver is not like amdgpu, it fundamentally can't work the same way. When I tried to upstream minor fixes to the C code to make the behavior more robust and the lifetime requirements sensible, the maintainer blocked it and said I should just do "what other drivers do".
Even when I pointed out that other C drivers also triggered the same bugs because the API is just bad and unintuitive and there are many secret hidden lifetime requirements, he wouldn't budge.
One C driver works, so Rust drivers must work the same way.
Your point about it being a culture issue is spot on. Many maintainers who are established in the kernel have made it clear they'd rather keep the status quo and the comfort of stagnation rather than bring a new technology forward to improve the security of their systems.
If it wasn't Rust, but some other language with similar benefits, the same people would've thrown their hands in the air and complained that they're being forced to rewrite everything or some other hyperbole.
Because it's a FOSS project, for some reason it's acceptable for maintainers to be entitled arseholes who abuse anyone they personally have a vendetta against.
In any other workplace, this behaviour wouldn't be called "nontechnical concerns"; it would be called workplace bullying. And as much as Linus wants to say he's working on his anger issues, he is personally one of the contributors who has set this culture of aggression and politicking as much as any other.
LLMs produce racist output when prompted in African American English
Large language models exhibit racial prejudices on the basis of dialect.Talat, Zeerak
AI generates covertly racist decisions about people based on their dialect - Nature
Despite efforts to remove overt racial prejudice, language models using artificial intelligence still show covert racism against speakers of African American English that is triggered by features of the dialect.Nature
Everyone saying LLMs are bad or just somehow inherently racist is missing the point of this. LLMs, for all their flaws, do show a reflection of language and how it's used. It wouldn't be saying black people are dumb if that wasn't statistically the most likely thing for a person to say on the internet. In this sense they are very useful tools for understanding the implicit biases of society.
The example given is good in that it's probably also how an average person would respond to the given prompts. Your average person who is implicitly racist, when asked "the black man is", would probably understand they can't say violent or dumb, but if you rephrase it to people who sound black then you will probably get them to reveal more of their biases. If you're able to get around a person's superego you can get a sense of their true biases; it's just easier to get around LLMs' "superego" of no-no words and fine-tuned counter-biases with things like hacking and prompt engineering. The id underneath is the same racist drive to dominate that is currently fueling the MAGA / fascist movement.
Rocket launch discovers long-sought global electric field on Earth
Rocket launch discovers long-sought global electric field on Earth - British Antarctic Survey
An international team of scientists, including a researcher from British Antarctic Survey (BAS) has, for the first time, successfully measured a planet-wide electric field thought to be as fundamental to …British Antarctic Survey
On May 11, 2022, Endurance launched and reached an altitude of 477.23 miles (768.03 kilometers), splashing down 19 minutes later in the Greenland Sea. Across the 322-mile altitude range where it collected data, Endurance measured a change in electric potential of only 0.55 volts.
“A half a volt is almost nothing — it’s only about as strong as a watch battery,” Collinson said. “But that’s just the right amount to explain the polar wind.”
Today’s mail delivery included more stickers – in addition to the collection from earlier in the week – and, these are particularly special.
Three NFC stickers, each with the text “Tap For Art” overlaid on a colourful pattern
The first thing I noticed when I opened the envelope was the care and attention that was put into the manner in which they’ve been presented, slid into a similarly-cut sheet of textured paper.
As you can see in the photo, each of these is a “Tap For Art” design, and sure enough, tap an NFC-capable device like a recent iOS or Android phone to one of these, and you get redirected to a generative artwork created by our friend, the artist bleeptrack. There’s an extra cool feature here, as each of these tags also has the ability to carry a tap count, so that goes into the generated URL to create a new variation each time.
Here’s an example link, for scan/tap number 6 of the yellow sticker on the left, and you can scroll back from there to earlier ones.
Really cool, and I’m over the moon to have a few of these to play with and stick in places that I hope will enable more people to discover the work – thank you @bleeptrack 🙏🏻
[folks, go check out her other work – plotter artist, generative creative, maker extraordinaire]
andypiper.co.uk/2024/07/27/tap…
#100DaysToOffload #art #creativity #fun #generativeArt #maker #stickers
Today, I received some fun post from some lovely people in New York City.
Those in the know may recognise these stickers as the logos of Glitch and Fastly.
I’ve been using Glitch to write and host web apps for quite a few years now – it is super helpful when working in a role like developer relations, needing to rapidly spin up demos, examples, or to demonstrate new features. A couple of years ago, Glitch came together with Fastly, and in the past couple of months their new developer platform vision really started to come together.
If you haven’t been keeping up with what they have been up to, and were not able to be at their recent special developer event in NYC (don’t worry, I couldn’t get there either), there’s a helpful ~6 minute video that summarises the announcements. I’m particularly interested and excited about this because I know and respect the folks involved – Anil Dash, Jenn Schiffer, Hannah Aubry, many others across their teams – and I know that they get and they care about developer experience, Open Source, and the free and open web. I’m talking about the big stuff, the infrastructure, the stuff that needs to invisibly just work in order for the web to run; and also the smaller things, the quirky indie little pieces, the fun and new experiences, helping people to learn to code and to be creative. It’s no exaggeration to say that Fastly’s Fast Forward program is a massive supporter of Open Source, open standards and the Fediverse. All of these things are reasons why I love Glitch & Fastly.
I’ve been running my main profile links page on Glitch in Bio for several years now (it’s a bit like a Linktree/link in bio page, but better than one of those closed platforms). Beyond that, I also host some Fediverse examples such as my own Postmarks instance, and a gallery of examples of Mastodon embeds; and also pages that add resources to my recent talks. With Fastly, I can also run things on my own domains, and make sure that things are cached and perform well.
[ if you’re curious about the sorts of things I’ve been building or working on from a code and web perspective, I’ve also spruced up my GitHub bio, and I have a more general gallery page on GitHub that has links to the source and deployments of different projects – some of which are links to those Glitch apps above ]
Thank you for the stickerage, Glitch friends! And, congratulations on the new Fastly Developer Platform! I’m looking forward to continuing to use your cool technologies 👍🏻
andypiper.co.uk/2024/07/24/gli…
#100DaysToOffload #Coding #developerExperience #developerRelations #devrel #fastly #glitch #stickers #Technology #webapps
Talk Resources - Where is the Art?
Resources page for Andy Piper's talk on the history of Computer Art, pen plotters, and more. Explore further with links to exhibitions, contemporary artists, tools, and reading materials.Andy Piper
I’ve given some talks in the past 6 months about the history of computer art, and in particular art created with pen plotters and drawing machines. As I got into building plotters last year, I didn’t initially think too much about the background to what I was doing; but, being an historian, I then started to dig into it, looking back to the emergence of computer art of the 1950s and 1960s.
My personal favourite piece, Schotter (“Gravel”) by Georg Nees, was made using an algorithm.
At a high level, it’s a simple and very effective rule – draw a square; repeat, adding a small but increasing amount of rotation (noise) with each column and row. Such a basic piece of code produces a wonderful and pleasing – to my eyes – “disintegration” effect. My description is a very simplistic way of understanding the code – more recently, Zellyn Hunter has done a fabulous two part deep dive on the program, going as far as recovering the random seed used to create the specific piece of work that is part of the collection at the V&A Museum in London1. Until about a month ago I had an alternate / approximated version2 hanging just inside the door of the studio, and thanks to Zellyn I now have a precise recreation generated from Python, plotted using my own machine, on fine black paper using gold ink.
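For the curious, here’s what that rule looks like as code – a minimal Python sketch that writes an SVG. To be clear, this is my own illustration of the idea, not Nees’s original program or Zellyn’s reconstruction; the grid size, jitter amounts, and SVG output are arbitrary choices.

```python
# A minimal sketch of the rule described above, for illustration only:
# a grid of squares whose rotation and position get noisier row by row,
# written out as a simple SVG. Parameters are my own, not Nees's.
import random

COLS, ROWS, SIZE = 12, 22, 20   # columns, rows, square side length

elements = []
for row in range(ROWS):
    disorder = row / ROWS       # disorder grows from 0 (top) towards 1 (bottom)
    for col in range(COLS):
        angle = random.uniform(-disorder, disorder) * 45           # degrees of rotation
        dx = random.uniform(-disorder, disorder) * SIZE * 0.5      # positional jitter
        dy = random.uniform(-disorder, disorder) * SIZE * 0.5
        x, y = col * SIZE + dx, row * SIZE + dy
        cx, cy = x + SIZE / 2, y + SIZE / 2                        # rotation centre
        elements.append(
            f'<rect x="{x:.2f}" y="{y:.2f}" width="{SIZE}" height="{SIZE}" '
            f'fill="none" stroke="black" '
            f'transform="rotate({angle:.2f} {cx:.2f} {cy:.2f})"/>'
        )

header = (f'<svg xmlns="http://www.w3.org/2000/svg" '
          f'width="{COLS * SIZE + 40}" height="{ROWS * SIZE + 40}">'
          '<g transform="translate(20,20)">')
svg = header + "".join(elements) + "</g></svg>"

with open("schotter.svg", "w") as f:
    f.write(svg)
```

Open the resulting schotter.svg in a browser and you get the same calm-to-chaotic “disintegration” from top to bottom.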
Schotter, in the V&A Museum, London
Recreation of Schotter, Forge & Craft studio, London
In my talks, I’ve joked that we can think of these in-their-time “magical” programs as being like AI, but in old school terms, they are just algorithms.
To say that today’s AI is “just algorithms” would be ridiculously reductive – that form of AI involves not just algorithms, but vast Large Language Models that inform the outcomes of the generated art or text or code; but ultimately it is just super-powered autocorrect and applied statistics3, which derives superpowers from the corpus of data that it has been trained on. I completely understand the concerns around how that data is being acquired and (mis)used, and the skepticism and trepidation and other reasons folks have to disdain “AI”.4
One of the common things we get asked when showing our plotter art is, literally: "where is the art in this?"
(thus the title and inspiration for my talk); or "so, the computer did it / it's all done with AI then?"
(the implication being that we didn’t really make any creative effort of our own).
In our (Forge & Craft) case, the work tends to divide between two types and styles (not exclusively, but mostly):
- things I generated using code, mostly experimenting in different languages and frameworks as I learn – this is usually referred to as “generative art”;
- images taken as photographs by one of us, transformed into plottable line art using algorithms (we use DrawingBotV3 for this, it has an amazing range of path finding modules, excellent support, and solid knowledge of a range of different plotters including output to Inkscape SVG for AxiDraw, or to HPGL for my vintage plotter).
Is that “AI”? No – we’re not using what is known as generative AI in those cases. I have, of course, experimented with tools like Midjourney and so forth for some limited image generation, and for some coding assistance – I’m a curious technologist, and I like to explore new tools – but, in terms of putting down the lines in the plots, we’re using algorithms to derive the pen paths.
We also work with analogue materials i.e. the pens and inks and papers are themselves an artistic set of choices. Take a couple of my other pieces, 1984, and Cellular.
The piece on the left is a Cistercian numeral or cipher. I used Python code to generate the glyph here; I used DrawingBotV3 to process the SVG into a plottable format (a close look would reveal it is a dense squiggly line fill); this was plotted using sepia fineliner onto cotton rag paper. The piece on the right was made using an online generation tool, and plotted using bronze Uniball onto the same cotton rag paper.
In both cases I’m showing the dichotomy of ostensibly old formats (handmade paper, Cistercian numerals) and modernity (the notion of “1984” be it the Orwellian, or the Apple ad; the bronze metallic ink); and doing that via the physicality of analogue output. Oh, as a side note, this cotton rag paper is pretty challenging to plot onto – it undulates so the pen can easily drag unexpectedly, and I found that some “babysitting” of the machine was required. No AI there.
In relation to the analog element, I love these words from Freya Marshall in the recent book Tracing the Line:
… to see an image take shape, line by line, is […] hypnotic. In the age of social media, where hour-long plots can be condensed into 20-second videos that are instantly attention-grabbing, it is not difficult to see how the pen plotter has risen in popularity […]Freya Marshall, Tracing the Line
Another author, Carl Lostritto, writes beautifully about this in Computational Drawing:
… when drawing with ink, time and material are consequential. Even though the volume of ink is almost never enough to provide measurable depth, the behavior of the material ink as it interacts with the material paper participates in cuing and undermining depth. Ink affects paper and paper affects ink. […]
Ironically, pen-plotters, machines that move a physical pen in two axes across paper, were marketed as output devices for architectural and engineering technical drawings. Now, almost all have been re-appropriated by artists, designers, and architects who cherish them for the very qualities that made them obsolete.
Carl Lostritto, Computational Drawing
My blog post today was prompted by being asked whether the generative art stickers I wrote about, created by my friend bleeptrack, use AI. Well, first of all, it’s not my art so I’m not going to go deep into explaining or defending the work itself (there is an explainer and video on the website, and both mention the generative and algorithmic nature of the pieces). Secondly I coincidentally also saw today, an excellent piece by Monokai that seeks to separate algorithmic art from generative art and from AI, for many of the same reasons I’ve done above – it also mentions a number of the original algorithmic artists from the 1960s. I also prefer to specify that I’m talking about computer art in my historical coverage, rather than digital art or net art or demo party art (or or or…), which are distinct genres. By the way, folks like Rev Dan Catt are doing some incredible work with AI: creating a personalised assistant, Kitty; and applying the technology tools and his amazing brain to creating ever more impressive artwork – see his Artist Statement.
I’m not here to bash on “AI”, but I do want to draw some distinctions around how and when and where tools are used, and assumptions are made. I’m no apologist for OpenAI, Anthropic, Perplexity and others; I’m an informed skeptic. A kneejerk reaction of “AI = bad” does not take account of how the term “AI” is being misused everywhere to justify investments and valuations – a lot of stuff we previously labelled “algorithmic” is suddenly a flashy new “AI feature”, when it remains simply a complex set of advanced computing techniques and mathematics; equally, separating out an assumption that all “AI” is the result of misappropriated data in a large language model, is important.
Finally, I don’t have videos from the talks I’ve given (at QCon, and at EMF) available to link to at the moment, but I made a small webpage with more information and links to my sources, if you would like to do more reading around the topics of the history of computer art, pen plotters, and how to get involved. I hope you find that interesting and useful!
- I visited the V&A earlier this year and asked to spend time examining this piece, and several other early computer art pieces, in the reading room – part of my research for the “Where Is The Art?” talk. ↩︎
- I generated a version using the Whiskers library in Rust, which comes with a handy interactive demo GUI for playing with the same style of piece. ↩︎
- It is also not Intelligent, just backed by a lot of data that makes it sound clever. ↩︎
- I am particularly angered by the underhand approaches that various organisations are taking, intentionally subverting and ignoring long-held internet norms and contracts like robots.txt, but that’s a different blog post, for another day. ↩︎
andypiper.co.uk/2024/07/30/art…
#100DaysToOffload #art #Books #computerArt #digitalArt #generativeAi #generativeArt #penPlotter #plotterArt #presentations #Reading #schotter
Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones)
Hundreds of sites have put old Anthropic scrapers on their blocklist, while leaving a new one unblocked.Jason Koebler (404 Media)
Yesterday I came across two fun and interesting uses of large language models, in quick succession.
First, I saw a post on Mastodon commenting about how “brutal” a web app called GitHub Roaster is, in analysing a user’s profile and repository history. That’s a very accurate assessment. The app uses OpenAI’s GPT-4o to create “a short and harsh roasting” for a given profile. The result for my profile was sufficiently uncomfortable to read, that I swiftly moved on!
Very soon afterwards, my friend Tim Kellogg replied to my boost of the original Mastodon post to point out another app, which takes a different angle. Praise my GitHub Profile has a fantastic strapline:
Instead of trying to tear each other down with AI, why not use it to help lift others up?
I love this approach!
(from a technical perspective, I noted that this app uses the prompt “give uplifting words of encouragement” with LLaMa 3.1 70b, to create more positive output)
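For illustration, here’s roughly what that kind of call looks like in code – a sketch, not the actual app’s source; the endpoint and model identifier below are placeholders, and it assumes an OpenAI-compatible API serving a Llama 3.1 70B model.

```python
# A rough sketch of the "praise" approach described above – NOT the app's
# actual code. Assumes an OpenAI-compatible inference endpoint; the base_url
# and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-inference-host/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

def praise_profile(profile_summary: str) -> str:
    """Ask the model for uplifting words about a GitHub profile summary."""
    response = client.chat.completions.create(
        model="llama-3.1-70b",  # placeholder model identifier
        messages=[
            # The system prompt mirrors the wording quoted in the post.
            {"role": "system", "content": "Give uplifting words of encouragement."},
            {"role": "user", "content": profile_summary},
        ],
    )
    return response.choices[0].message.content

# Example usage (with a made-up summary):
# print(praise_profile("20 repos, mostly Python tooling and a few web demos."))
```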
If we’re going to use these sorts of tools and make these kinds of apps – let’s do so in a positive manner. Notwithstanding the very real issues with the overuse of resources, and the moral and legal debates around how the models have been trained – both of which I have huge concerns about – I strongly believe that technology has the capacity to have a positive impact on society when used well, ethically, and thoughtfully. Like everything else though, it is up to us to make the best, and most positive use of what we have access to, what we create, and what we leave behind in the world. It is our individual, and our collective, responsibility.
Thank you, Xe, for being thoughtful about this. You’re inspiring!
andypiper.co.uk/2024/08/05/ai-…
#Blaugust2024 #100DaysToOffload #AI #fun #github #largeLanguageModel #llama #llm #negative #openai #positive #webapp
okay, I finally found a good use for an LLM. no, really.
this thing is brutal
I use a lot of apps, and, I love my iPhone.
BUT
I really love the Web.
A few things lately reminded me of what a great and – so far – durable, open set of technologies the Web is based on.
You can build such cool stuff on the Web! There are whole sites dedicated to collecting together other sites of cool things you can do with the web – see Single Serving Sites, or Neal.fun. And remember, there is no page fold. If you’re itching to build, I wrote about Glitch a few weeks ago, if you want somewhere to try new things.
The writing trigger today was largely prompted by reading the latest edition of Tedium, specifically, commenting on the Patreon situation with the App Store.
[…] it is also reflective of a mistake the company made many years ago: To allow people to support patrons directly through its app. Patreon did not need to do this. It was just a website at first, and for all the good things that can be said about the company, fact is they built on shaky land. To go to my earlier metaphor: They built their foundation on quicksand, perhaps without realizing it, though the broken glass wasn’t thrown in just yet. […] That shaky land isn’t the web, and if Patreon had stayed there, this would not be an issue. It’s the mobile app ecosystem, which honestly treats everyone poorly whether they want to admit it or not.Ernie @ Tedium
In turn, Ernie links to John Gruber’s assessment of the situation, which is also worth reading.
Look at that – hyperlinks between content published freely on open platforms, that can be read, studied, accessed around the world, and discussed, all within minutes and hours of publication. Mind blowing! Thank you, Sir Tim Berners-Lee!
I spend a bunch on apps, and in apps, and with Apple, directly and indirectly. They have a good ecosystem, it is all convenient (but spendy) to me as a consumer… but, I don’t think this whole situation with them milking creators and creatives is OK at all. The trouble is that the lines are all kinds of blurry here – if they carved out a new category and set of rules around apps that sell subscriptions for creators that had, say, a zero or just a lower fee than other categories, then you’d get into situations where others try to find ways into that category to avoid the higher fees.
Plus, of course, with the state of capitalism and big tech, we increasingly don’t own what we buy (per Kelly Gallagher Sims’ excellent Ownership in the Rental Age post; I also again highly recommend Cory Doctorow’s books, Chokepoint Capitalism, and The Internet Con)
I use closed platforms, and I use open platforms.
The closed ones make me increasingly sad and frustrated.
The open ones can take more tinkering and effort, but I get a lot back from them. They need sustaining. They don’t come for free. They need us to contribute, and to find ways to pay to support the creators and makers and builders and engineers.
If you like creative, quirky online sites, you should subscribe to Naive Weekly. I’m still enjoying things I found in it last month.
Now, I’m off to continue exploring… everything.
Long live The Web!
PS the winners of the Tiny Awards 2024 are announced at the weekend… 👀
andypiper.co.uk/2024/08/14/i-l…
#Blaugust2024 #100DaysToOffload #appStores #Apple #capitalism #chokepointCapitalism #coryDoctorow #enshittification #openSource #openTechnology #rentSeeking #Technology #web
Better Call a Website
Internet Phone Book, Crawl Space, PBS of the Internet and more :)Kristoffer (Naive Weekly)
During my recent blogging revival I’ve already written about how I love the web1. I’ve also commented a couple of times about uses of AI and Large Language Models and the kinds of confusion that can be caused.
Today, I noticed an exchange between the brilliant Sara Joy and Stefan Bohacek on Mastodon, in which Stefan accidentally reminded me of something interesting that I hadn’t properly explored the first time around.
Rewind
About 20 years ago – actually 24 years ago, according to this Wikipedia article – there was a thing called FOAF, or Friend-of-a-Friend, an early online vocabulary / ontology for describing relationships between people and things online. There was also a related concept called DOAP, Description of a Project, that I was interested in and implemented in a couple of things I worked on back then. I did some digging, but the only references I can find on this blog are some passing mentions in the early 2000s, and I’ve lost my original foaf.rdf file, but I might have to go hunting for that for posterity, at some stage.
I’m mentioning all of this because it reminds me that I’ve always been interested in the Semantic Web space, and also in the people aspects of the web, beyond just the words and the technology – Who is making What, and How it is all connected.
Humans today
Back to the ~present!
About 10 years ago – actually 12 years ago, according to the last updated date in the original humans.txt file – there was the quiet proposal of an idea, for a humans.txt file that could live in parallel to the robots.txt file on a web server.
The robots.txt file is intended for site owners to provide instructions to web crawlers – “robots”, or automated programs – as to how to behave in relation to the content of the site: this is the agreed-upon standard way in which the web works, and signals to search engines how to index websites, going all the way back to the early days of 1994-7, and later fully documented by Google and others.
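For reference, a robots.txt file is just plain text, a list of per-crawler directives; a tiny illustrative example (the paths and the blocked crawler name are made up):

```
# Illustrative robots.txt – not from any particular site
User-agent: *
Disallow: /drafts/

# Opt one specific (hypothetical) crawler out of everything
User-agent: ExampleAIBot
Disallow: /
```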
The idea for the humans.txt file was simply that we should have a simple way to credit the people who made a website, in a super easy to create and publish format, regardless of the technology stack used to build the site or the URL formats and layout of the site. It was briefly documented and lightly promoted on humanstxt.org. I remember noticing it at the time, loving the idea, but then not really doing anything with it, and I admit that I didn’t end up using it myself.
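To give a flavour of the format, here’s a small example roughly following the layout suggested on humanstxt.org – the names and details below are placeholders, not my actual file:

```
/* TEAM */
  Author: A. N. Example
  Contact: hello [at] example.com
  Location: London, UK

/* SITE */
  Last update: 2024/08/29
  Standards: HTML5, CSS3
  Software: WordPress
```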
However, Stefan is using this on his site (and wrote about it 11 years ago, because of course he’s ahead of me again 😀) and it made me think:
- This is still A Great Idea, and Right Now Could Be Its Time
- We’ve seen deliberate misinterpretations / mis-statements (from big AI players) about the value of robots.txt in relation to AI crawlers/scrapers in the past 6-12 months. Let’s re-emphasise the human aspect here.
- humanstxt.org could do with a bit of a refresh / re-upping and updating, maybe, but it’s on all of us to promote and adopt this idea.
- the IndieWeb is thriving, and I’ve been seeing folks returning from XOXO over the past week enthusing about the greatness of the web.
REMEMBER THE JOY THE INTERNET CAN BRING ❤️
donotreply.cards/en/do-post-wh…
— Dan Hon #xoxofest (@danhon) 2024-08-24T17:53:01.066Z
- Why don’t I add this to my sites? OK then, I will.
- Hold on, is there a browser extension for this? Oh, there is (although with the rollout of the new Chrome updates / Manifest V3 and lack of maintenance, they may not work in the future)
- OK what about a WordPress plugin, for this here WordPress blog of mine? Oh, there is (although it has not been updated lately, and continues to refer to legacy stuff like some site called Twitter; it works, though)
- We really, really need to give credit where credit is due, in a world where things are increasingly being sucked up, mashed together by algorithms, and regurgitated in ways that diminish their creators for the enrichment of others.
What I’m saying is this – Thank You, Stefan, and Sara, and Dan Hon2 and everyone else from XOXO and everywhere all over the internet, for reminding me that the web is great, humans are incredible, and hey, why don’t we all give this humans.txt thing one more try? I’m on board with that.
- In that post, I also mentioned that the Tiny Awards 2024 winners were due to be announced – and as I’m writing now, they have been: One Minute Park, and One Million Checkboxes. ↩︎
- A new edition of Dan’s excellent newsletter literally was published as I was typing this blog post. You need to subscribe to it. ↩︎
andypiper.co.uk/2024/08/29/the…
#Blaugust2024 #100DaysToOffload #author #browser #creativity #credit #handmade #human #humanity #humans #making #Technology #web #wordpress
DO POST WHAT IT FELT LIKE TO MAKE YOUR FIRST WEBSITE ↱
DO USE DO NOT REPLY CARDS FOR BETTER REPLIESdonotreply.cards
Debian Orphans Bcachefs-Tools: "Impossible To Maintain In Debian Stable"
Even before the Bcachefs file-system driver was accepted into the mainline kernel, Debian for the past five years has offered a "bcachefs-tools" package to provide the user-space programs to this copy-on-write file-system. Packaging was simple at first, when the tools were plain C code, but since the Bcachefs tools transitioned to Rust, it's become an unmaintainable mess for stable-minded distribution vendors. As such the bcachefs-tools package has now been orphaned by Debian.
From John Carter's blog, Orphaning bcachefs-tools in Debian:
"So, back in April the Rust dependencies for bcachefs-tools in Debian didn’t at all match the build requirements. I got some help from the Rust team who says that the common practice is to relax the dependencies of Rust software so that it builds in Debian. So errno, which needed the exact version 0.2, was relaxed so that it could build with version 0.4 in Debian, udev 0.7 was relaxed for 0.8 in Debian, memoffset from 0.8.5 to 0.6.5, paste from 1.0.11 to 1.08 and bindgen from 0.69.9 to 0.66.I found this a bit disturbing, but it seems that some Rust people have lots of confidence that if something builds, it will run fine. And at least it did build, and the resulting binaries did work, although I’m personally still not very comfortable or confident about this approach (perhaps that might change as I learn more about Rust).
With that in mind, at this point you may wonder how any distribution could sanely package this. The problem is that they can’t. Fedora and other distributions with stable releases take a similar approach to what we’ve done in Debian, while distributions with much more relaxed policies (like Arch) include all the dependencies as they are vendored upstream."
...
With this in mind (not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit), I decided to remove bcachefs-tools from Debian completely. Although after discussing this with another DD, I was convinced to orphan it instead, which I have now done.
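For anyone unfamiliar with Cargo's version syntax, the "relaxing" described above amounts to widening the requirement ranges in Cargo.toml. A rough illustration using the versions quoted in the blog post (this is not the actual bcachefs-tools manifest):

```toml
# Illustrative only – not the real bcachefs-tools Cargo.toml.
[dependencies]
# Upstream requirement: "0.2" is a caret requirement, i.e. >=0.2.0, <0.3.0
# errno = "0.2"
# A Debian-style relaxation widens it so the crate builds against what the
# archive actually ships (here, the 0.4 series):
errno = ">=0.2, <0.5"
udev  = ">=0.7, <0.9"
```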
Debian Orphans Bcachefs-Tools: "Impossible To Maintain In Debian Stable"
Even before the Bcachefs file-system driver was accepted into the mainline kernel, Debian for the past five years has offered a 'bcachefs-tools' package to provide the user-space programs to this copy-on-write file-systemwww.phoronix.com
Everything is better in Rust. Faster, safer... And also the developer experience is amazing with cargo.
The problem here is not Rust, it's the humans, it seems.
The dependencies are set manually, of course, and the dev was enforcing something too strict, it seems, and that is causing headaches.
But, as the debian dude has learned... Rust programs will 99.999 % work if they can be compiled.
Rust programs will 99.999 % work if they can be compiled.
It's the same in C. Most programs don't test against the exact versions of most C libraries either. I'm not sure why he's so amazed at this.
Debian is the most stable distro and downstream loads of distros rely on Debian being clean. This dev has to be strict if they want to maintain the status quo. Rather let the user DL this as a standalone package and still use it, instead of it being included by default with the possibility of breaking.
And another thing. Version pinning should be normalized. I just can't bend my mind around code which has to be refactored every 12 - 24 months because dependencies were not version pinned and a new thing broke an old thing. Unless this code is your baby and you stare at every day, constantly moving forward, you should write code that lasts.
The thing is that, in C the API could be slightly different and you could get terrible crashes, for example because certain variables were freed at different times, etc.
In Rust that is literally impossible to happen unless you (very extremely rarely) need to do something unsafe, which is explicitly marked as such and will never surprise you with an unexpected crash.
Everything is so strongly typed that if it compiles... It will run without unexpected crashes. That's the difference with C code, and that's why Rust is said to be safe. Memory leaks, etc, are virtually impossible.
The thing is that, in C the API could be slightly different and you could get terrible crashes, for example because certain variables were freed at different times, etc.
In Rust that is literally impossible to happen unless you (very extremely rarely) need to do something unsafe, which is explicitly marked as such and will never surprise you with an unexpected crash.
What? That's utter BS. Maybe the kernel devs aren't wrong about the "rust religion". Not every bug in C is a memory bug.
We're talking about a future version having regressions or different-than-expected behavior from what your application was built and tested on. I guarantee you that can happen with rust.
If library devs do versioning correctly, and you pin to major versions like "1.*" instead of just the "anything goes" of "*", this should not happen.
Your unit tests should catch regressions, if you have enough unit tests. And of course you do, because we're all operating in the dream world of, "I am great and everyone else is shit".
But, as the debian dude has learned… Rust programs will 99.999 % work if they can be compiled.
That's a dumb statement. Every tool needs unit tests. All of them!
If grep complied, but always returned nothing for every file and filter, then it's still not "working". But, hey, it compiled!
You are not wrong of course but it does not really refute what they are saying.
Many people have had the experience with Rust that, if it builds, the behaviour is probably correct. That does not prevent logic errors, but those are not the kinds of bugs that relate to dependencies.
These kinds of dependency shenanigans would be totally unsafe in C but Rust seems to handle them just fine.
This isn't Rust's fault lmao, this is distro maintainers trying to fuck with dependencies on software which has been proven to be a horrible way of managing software distribution for years.
When it's a problem with other languages, we don't pin the blame on them. However, because Linux and its developer community is being dragged by its heels to accept ANYTHING more modern than C99 and mailing lists, the typical suspects are using any opportunity to slow progress.
The same shit has happened/is happening with Wayland. The same shit will happen when the next new technology offers a way for Linux to improve itself. A few jackasses who haven't had to learn anything new for a lifetime are gonna always be upset that new Devs might nip at their heels.
If whatever they are doing has been working for stuff written in languages other than Rust, we have to ask what makes Rust special. Rust is a low level language, so its dependencies if anything should be simpler than most, with just a minimal shim between its runtime and the C world. Why does any production software have a version <= X constraint in any of its dependencies anyway? I can understand version >= X, but the other way implies that the APIs are unstable and you're going to end up with tons of duplicated copies around. I remember seeing that in Ruby at a time when Python was relatively free of it, but now Python has it too. Microsoft at least understood in the 1990s that you can't go around breaking stuff like that.
No it's not all C99. I'm using Calibre (written in Python), Pandoc (written in Haskell), GCC (written in C, C++, and Ada), and who knows what else. All of these are complex applications with many dependencies. Eclipse (written in Java) is also in Debian though I don't use it. Bcachefs though is apparently just special.
Joe Armstrong (inventor of Erlang) said of OOP, "you wanted a banana but what you got was a gorilla holding the banana, and the entire jungle". Rust begins to sound like that too. It might not be inherent in the language, but it looks like the way the community thinks.
I also still don't understand why the Bcachefs userspace stuff is written in Rust. I can understand about the kernel part, but the concept of a low level language is manual resource management that a HLL handles for you automatically. Writing the userspace in a LLL seems like more pain for unclear gain. Are there intense performance or memory constraints or what?
Actually I see now that kernel part of Bcachefs is also considered unstable, so maybe the whole thing is not yet ready for production.
It's a huge tire fire at this point. This issue isn't Rust, per se, but the dev is just being an asshole here. Submitting something that is generally problematic and yelling about how it will EVENTUALLY be good is a good way to get your shit tossed out.
He just lost a good amount of favor with the general community.
Submitting something that is generally problematic and yelling about how it will EVENTUALLY be good is a good way to get your shit tossed out.
What are you hinting at regarding this specific news?
This entire thread:
lore.kernel.org/lkml/sctzes5z3…
tl;dr: bcachefs dev sent in a massive pull request, linus thinks it's too big and touches too much other code for the current state of the release cycle, dev says his filesystem is the future and should just be merged
The OP is about packaging issues with userspace utilities due to version pinning in Rust. It's an issue with Rust in general. Kent is not obligated to lock dependencies in any particular fashion. He could loosen the dependencies, but there is no obligation, and Debian has no obligation to package it.
This is different from the thread you linked in which the bcachefs kernel code and the submission process is discussed, and on which there was a thread here as well in the last days. But your criticism, as valid as it is, only applies there, not in a thread about tooling packaging issue.
The only hint at the other topic I see is this:
(not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit)
I guess this is about reddit.com/r/bcachefs/comments… and while I think the title is too broad, the actual message is
If you're running bcachefs, you'll want to be on a more modern distro - or building bcachefs-tools yourself.
I don't consider Kent's reasoning (also further down the thread) a rant - it might not be the most diplomatic, but he's not the only one who has problems with Debian's processes. The xscreensaver developer is another one for similar reasons.
I think, in fairness, bcachefs and Debian currently aren't a good fit. bcachefs is also in the kernel so users can test it and report, but it wasn't meant to be stable; it's meant to not lose data unrecoverably.
Anyhow, while I think that he's also not the easiest person on the LKML, I don't consider him ranting there; and with the author's and my judgement differing in these points, I'm led to believe that we might also disagree on what qualifies as hostile.
Lastly, while I'm not a big fan of how Rust packaging works, it ensures that the program is built exactly the same on the developer's and other machines (for users and distributors); it is somewhat ironic to see Debian complain about it, since they do understand the importance of reproducibility.
You must have missed the last half of the post then. Especially the last two paragraphs.
There's isn't much more to that issue than that sentence, while all other paragraphs cover the packaging. It's tangential at best.
The OP is about packaging issues with userspace utilities due to version pinning in Rust
No, it's about Bcachefs specifically. It's literally in the title. Discussions around Rust version pinning are a useful side conversation, but that's not what the OP is about.
bcachefs-tools - Fedora Packages
View bcachefs-tools in the Fedora package repositories. bcachefs-tools: Userspace tools for bcachefspackages.fedoraproject.org
Strange, thanks that is nice! The development of bcachefs seems to not be that problematic on rolling or semi-rolling distros.
That guy might just have bad time management, but filesystem-based encryption is really cool.
On the kernel side, there are disagreements between long term C maintainers (who may not know Rust or may actively dislike it) and the new Rust community trying to build in Rust support. To make the Rust parts work, there needs to be good communication and cooperation between them to ensure that the Rust stuff doesn't break.
On the Debian side, they have strict policies that conflict with how Rust development works. Rust has a dependency system called Cargo which hosts dependencies for Rust projects. This is different from C, C++ where there really isn't a centralized build system or dependency hoster, you actually install a lot of dependencies for these languages from your distro's repos. So if your Rust app is built against up to date libraries in Cargo, it's going to be difficult to package those apps in Debian when they ship stable, out of date libraries since Debian's policies don't like the idea of using outside dependencies from Cargo.
So if your Rust app is built against up to date libraries in Cargo, it’s going to be difficult to package those apps in Debian when they ship stable, out of date libraries since Debian’s policies don’t like the idea of using outside dependencies from Cargo.
As they should. You don't just auto-update every package to bleeding edge in a stable OS, and security goes out the window when you're trusting a third-party's third-party to monitor for dependency chain attacks (which they aren't). This is how we get Crowdstrike global outages and Node.JS bitcoin miner injections.
If some Rust tool is a critical part of the toolchain, they better be testing this shit against a wide array of dependency versions, and plan for a much older baseline. If not, then they don't get to play ball with the big Linux distros.
Debian is 100% in the right here, and I hope they continue hammering their standards into people.
Big, old man vitriol was a sad show of ignorance of Rust.
m.youtube.com/watch?t=1529&v=W…
This doesn't seem to be a Rust problem, but a modern development trend appearing in a Rust tool shipped with Cargo. The issue appears to be the way things are versioned and (reading between the lines maybe?) vendoring and/or lockfiles. Lockfiles exist in a lot of modern languages and package managers: Go has go.sum, Rust has Cargo which has Cargo.lock, Python has pip which gives a few different ways to pin versions, JavaScript has npm and yarn with lock files. I'm sure there are tons of others. I'm actually surprised this doesn't happen all the time with newer projects. Maybe it does actually and this instance just gains traction because people get to say "look Rust bad Debian doesn't like it".
This seems like a big issue if you want your code to be packaged by Debian, and it doesn't seem easy to resolve if you also want to use the modern packaging tools. I'm not actually sure how they resolve this? There are real benefits to pinning versions, but there are also real benefits to Debian's model (of controlling all the dependencies themselves, to some extent Debian is a lockfile implemented on the OS level). Seems like a tough problem and seems like it'll end up with a lot of newer tools just not being available in Debian (by that I mean just not packaged by Debian, they'll likely all run fine on Debian).
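To illustrate the distinction being drawn here: the manifest expresses a version requirement (a range), while the lockfile records the exact version that was resolved and will be reproduced on rebuild. Roughly, with made-up entries:

```toml
# Cargo.toml – what the developer asks for (a range)
[dependencies]
serde = "1.0"          # any 1.x release: >=1.0.0, <2.0.0

# Cargo.lock – the exact version that was resolved (illustrative entry)
# [[package]]
# name = "serde"
# version = "1.0.209"
# source = "registry+https://github.com/rust-lang/crates.io-index"
```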
the common practice is to relax the dependencies
I found this a bit disturbing
I find that funny that, since this is rust, this is now an issue.
I have not delved into packaging in a long while, but I remember that this was already the case for C programs. You need to link against libfoo? It better work with the one the distribution ships with. What do you mean you have not tested all distributions? You better have some tests to catch those elusive ABI/API breakages. And then, you have to rely on user-reported errors to figure out that there is an issue.
On one hand, the package maintainer tend to take full ownership and will investigate issues that look like integration issue themselves. On the other hand, your program is in a buggy or non-working state until that's sorted.
And the usual solutions are frowned upon. Vendoring the dependencies or static linking? Are you crazy? You're not the one paying for bandwidth and storage. Which is a valid concern, but that just means we reached a stalemate.
Which is now being broken by
- slower-moving C/C++ projects (though the newer C++ standards did make some waves a few years back), which means that even Debian is likely to have a "recent" enough version of your dependencies.
- Flatpak and the like, which are vendoring everything and the kitchen sink
- newer languages that static link by default (and some distributions being OK with it)
In other words, we never figured out a proper solution for C projects that will link with a different minor than the one the developer tested.
Well, /rant I guess. The point I'm raising does not seem to be the only one, and maybe far from the main one, for which bcachefs-tools is now orphaned. But I've seen very dubious arguments trying to push back against Rust adoption. I feel like people have forgotten where we came from. And while there is no reason to go back per se, any new language that integrates this deeply into the system will face similar challenges.
Because it's Rust, it's now "Rust bad", but Debian and other distros have been fucky with dependency management for YEARS. That's why we're moving to Flatpak and other containerised apps!
Once again, the wider Linux dev community is trying to openly kneecap the first attempt in decades to bring Linux and its ecosystem up to a vaguely modern standard.
Newly added to the Trade-Free Directory:
materialvermittlung.org
We provide material!
#adhesives #artSupplies #cardboard #containers #fabrics #foam #foils #Material #metal #naturalMaterials #paints #paper #polystyrene #rubber #sewingAccessories #textiles #theaterDecoration #wood
More here:
directory.trade-free.org/goods…
TROM reshared this.
Looking for software KVM I can't remember the name of (solved)
Fairly recently, I saw an app that served the same purpose as Barrier or Input-leap, allowing you to use one computer to control the keyboard and cursor of multiple. I'm fairly certain it was designed with GTK 4, or maybe 3, and it had Wayland support. I've had no luck getting input-leap working well on my devices, so if anyone knows what app this was (or any other options), I would really appreciate it.
Update:
Despite searching for 15 minutes before posting, I found it seconds later, thanks to DDG's Reddit bang. It is lan-mouse. I will leave this up in case this software comes in handy for others.
GitHub - feschber/lan-mouse: mouse & keyboard sharing via LAN
mouse & keyboard sharing via LAN. Contribute to feschber/lan-mouse development by creating an account on GitHub.GitHub
like this
DaGeek247 likes this.
GitHub - feschber/lan-mouse: mouse & keyboard sharing via LAN
mouse & keyboard sharing via LAN. Contribute to feschber/lan-mouse development by creating an account on GitHub.GitHub
Thanks for the tip!
This was a long-standing showstopper for me & Wayland. I got rid of my work computer instead, but if I get another one I'll be sure to test this out.
GitHub - QazCetelic/cockpit-boot-analysis: Cockpit plugin that shows information about system / userspace startup in a graph.
GitHub - QazCetelic/cockpit-boot-analysis: Cockpit plugin that shows information about system / userspace startup in a graph.
Cockpit plugin that shows information about system / userspace startup in a graph. - QazCetelic/cockpit-boot-analysisGitHub
EmuDeck team announce Linux-powered EmuDeck Machines
EmuDeck team announce Linux-powered EmuDeck Machines
The team behind EmuDeck, a project that started off to provide easy emulation on Steam Deck / SteamOS have announced their own hardware with the Bazzite Linux powered EmuDeck Machines.Liam Dawe (GamingOnLinux)
like this
Lasslinthar, dandi8, and Rakenclaw like this.
like this
Rakenclaw likes this.
like this
Rakenclaw likes this.
They say that even the cheap Intel version can play most retro games:
Ideal for light indie Steam games and retro gaming up to Wii U
With the Ryzen variant, you can probably play most games.
Where are the fans on this thing? Please don't tell me they intend to passively cool a chip that's expected to run Cyberpunk 2077.
Did we learn nothing from Intel-era Apple? Sure, AMD chips run moderately cooler than Intel ones under the same workload, but still...
And the hardware, of course, but this is mostly off the shelf as well, I would say.
OS-wise this is ready to go; my main concern is that their EmuDeck project is written in a way that needs KDE to be fully functional. Hopefully this is just because it started out as a Steam Deck app, and isn't down to inexperience.
The old frontends like Hyperspin and Launchbox were similar, in that they were developed in a way that really limited where those projects could go; as such, the retro frontends for Linux have been pretty limited.
It'd be really cool to see them expand EmuDeck into a standalone program; it's even cooler that we're already seeing Ublue allowing people to achieve their goals.
Asahi Lina's experience working on Rust code in the kernel
I just wish every programmer completed the rustlings game/tutorial. It doesn't take that long.
I didn't even fully complete it, and it made me a way better programmer, because it forces you to think RIGHT.
It may sound weird to people who haven't experienced it, but it's amazing when you get angry at the compiler and you realise... it is right, and you were doing something that could f*ck you up 2 months in the future.
And after a bit of practice, it starts wiring your brain differently, and now my Python code looks so much better and is way safer, just because of those days playing around in rustlings.
So yeah, Rust is an amazing language for everything, but particularly for kernel development. Either Linux implements it, or it'll probably die in 30 years and get replaced with a modern Rust kernel.
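A toy illustration of that "the compiler was right" moment (not from rustlings, just a minimal sketch): holding a reference into a Vec across a push is exactly the kind of thing rustc refuses to compile, because the push may reallocate the buffer and leave the reference dangling.

```rust
fn main() {
    let mut scores = vec![1, 2, 3];

    // If we held `let first = &scores[0];` across the push below, rustc would
    // reject the program: pushing may reallocate the Vec's buffer and
    // invalidate the reference. In C (or via messier aliasing bugs in Python)
    // this is the sort of thing that bites you two months later.
    let first = scores[0]; // copy the value instead of keeping a borrow alive
    scores.push(4);

    println!("first score: {first}, total scores: {}", scores.len());
}
```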
Georgia Tech Neuroscientists Explore the Intersection of Music and Memory
Georgia Tech Neuroscientists Explore the Intersection of Music and Memory | Research
In two studies, Ph.D. student Yiren Ren's research explores music’s impact on learning, memory, and emotions.research.gatech.edu
like this
originalucifer likes this.
Triple Buffering pushed to Gnome 48
Retargeted triple buffering to GNOME 48 instead of trying to upstream it in 47 at the last minute. Actually upstream wants it in 47 more than we do. But recent code reviews are both too numerous to resolve quickly and too destabilizing if implemented fully. So I’m not going to do that so close to release. There are still no known bugs to worry about and the distro patch for 24.10 only needs to be supported until EOL in July 2025.
discourse.ubuntu.com/t/desktop…
Dynamic triple/double buffering (v4) (!1441) · Merge requests · GNOME / mutter · GitLab
Use triple buffering if and when the previous frame is running late. This means the next frame will be dispatched on time instead of also starting late. It...GitLab
Xaver Hugl, one of the KDE devs, wrote a wonderful post explaining triple buffering. Maybe check this out.
TL;DR
(In the context of KDE Plasma)
With all those changes implemented in Plasma 6.1, triple buffering on Wayland
- is only active if KWin predicts rendering to take longer than a refresh cycle
- doesn’t add more latency than necessary even while triple buffering is active, at least as long as render time prediction is decent
- works independently of what GPU you have
Fixing KWin’s performance on old hardware
KWin had a very long standing bug report about bad performance of the Wayland session on older Intel integrated graphics.Xaver Hugl (Xaver’s blog)
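The core decision both compositors make boils down to something like the following sketch (hypothetical code, not actual mutter or KWin source): only queue a third buffer when the render-time prediction says the next frame would otherwise miss the refresh deadline.

```rust
use std::time::Duration;

/// Decide how many buffers to queue for the next frame.
/// Hypothetical sketch of the idea described above, not real compositor code.
fn buffers_for_next_frame(predicted_render: Duration, refresh_interval: Duration) -> u32 {
    if predicted_render > refresh_interval {
        3 // rendering is predicted to run late: dispatch the next frame early
    } else {
        2 // on time: stay with double buffering to keep latency minimal
    }
}

fn main() {
    let refresh = Duration::from_micros(16_667); // ~60 Hz
    println!("{}", buffers_for_next_frame(Duration::from_millis(20), refresh)); // 3
    println!("{}", buffers_for_next_frame(Duration::from_millis(5), refresh));  // 2
}
```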
like this
DaGeek247 likes this.
GNU Screen 5.0 released
Screen is a full-screen window manager that multiplexes a physical
terminal between several processes, typically interactive shells.
The 5.0.0 release includes the following changes to the previous
release 4.9.1:
- Rewritten authentication mechanism
- Add escape %T to show current tty for window
- Add escape %O to show number of currently open windows
- Use wcwidth() instead of hard-coded UTF-8 tables
- New commands:
- auth [on|off] Provides password protection
- status [top|up|down|bottom] [left|right] The status window is in the bottom-left corner by default. This command can move status messages to any corner of the screen.
- truecolor [on|off]
- multiinput Input to multiple windows at the same time
- Removed commands:
- time
- debug
- password
- maxwin
- nethack
- Fixes:
- Screen buffers ESC keypresses indefinitely
- Crashes after passing through a zmodem transfer
- Fix double -U issue
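A minimal ~/.screenrc sketch using the new commands, going purely off the syntax shown in these release notes (check the 5.0 man page before relying on it):

```
# ~/.screenrc sketch based on the GNU Screen 5.0 release notes above
truecolor on        # enable 24-bit colour support
status top right    # move status messages from the default bottom-left corner
#auth on            # uncomment to require a password (the old `password` command was removed)
```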
GNU Screen - News [Savannah]
Savannah is a central point for development, distribution and maintenance of free software, both GNU and non-GNU.savannah.gnu.org
like this
kbal likes this.
reshared this
Tech Cyborg and Stephane L Rolland-Brabant ⁂⧖⏚ reshared this.
`less` the file and copy with my mouse.
I have a lot of trouble with the window/pane management. Moving panes to a different window is rather difficult. The server>session>window>pane hierarchy also seems way too deep for my humble needs.
The fact that the active window syncs between sessions is also really odd. Why can't I look at different windows on different devices?
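If that hierarchy is tmux (which is what it sounds like), these are the commands I'd reach for; a hedged sketch, since the session and window names here are made up:

```
# move the current pane out into its own window
tmux break-pane

# move a pane from window 2 into window 1 of the current session
tmux join-pane -s :2 -t :1

# grouped sessions share the same windows but track their current window
# independently, so two attached devices can each view a different window
# (assumes a session named "main" already exists)
tmux new-session -t main -s second
```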
The `screen` package is deprecated and not included in RHEL8.0.0. - Red Hat Customer Portal
How to install the screen package in RHEL8.0.0? The screen package is not available in RHEL8.Red Hat Customer Portal
Lol, up until last year it hadn't been touched in a decade, and even then there are only 5 commits less than 14 years old.
only 5 commits less than 14 years old
I think you're looking at the latest commit in each branch. There are ~40 commits this year.
taanegl in reply to geneva_convenience
makingStuffForFun in reply to unknown parent
WhatAmLemmy in reply to unknown parent
Tiuku in reply to geneva_convenience
jbk in reply to Tiuku:
non-reply reply lol
sweng in reply to WhatAmLemmy
leopold in reply to unknown parent
Thoralf Will in reply to geneva_convenience:
Someone got cold feet, as it seems.
I guess OpenSearch started to eat their lunch.
My guess: The reputation is already ruined and this change won’t make much of a difference.
𝒎𝒂𝒏𝒊𝒆𝒍 in reply to Thoralf Will
grue in reply to unknown parent
leopold in reply to unknown parent
Max-P in reply to unknown parent:
The SSPL is irrelevant; you pick the AGPL license and the SSPL doesn't apply to you.
Qt is dual-licensed as proprietary and LGPL and nobody complains about that; KDE is in most distros' repos. You pick the LGPL-licensed version and you're good to go; the proprietary license doesn't apply to you.
like this
TrinitronX likes this.
piotrm in reply to unknown parent
obbeel in reply to geneva_convenience
fmstrat in reply to unknown parent:
I think you are confused. You can use ELK under the AGPL with this news going forward. The fact that they have to retain the SSPL, too, because of previous contributors under that license, has nothing to do with the fact that you can use the AGPL going forward. I've read your other responses, but they all seem to go down the same seemingly incorrect direction.
Am I missing something?
like this
TrinitronX likes this.
Wooki in reply to geneva_convenience:
So anyway.
OpenSearch is the new hotness.
Damage
Is
Done.
gencha in reply to geneva_convenience
𝕸𝖔𝖘𝖘 in reply to Max-P
Max-P in reply to 𝕸𝖔𝖘𝖘:
The developer benefits from reaching more people, some of whom are likely to purchase the proprietary license. Or sometimes you dual-license just so that licenses are compatible. Each license has pros and cons for both the developers and the users.
Qt for example, the LGPL means you need to dynamically link to it, and if you ship your own Qt libraries you must provide the source code for it. But if you're a company that writes proprietary software and can't dynamically link, then you can purchase the proprietary license which allows you to do a lot more, but you're compensating the devs for it. And for the Qt devs that's good because either you pay them, or you use it for free but must share your changes with everyone.
For ElasticSearch, that makes it so Amazon can't just patch it up and sell the modified version without sharing what they changed. They wanted to add back a FOSS license to stop the bleed to OpenSearch which many in the FOSS community switched to purely for the license because even separate software should be compatible license-wise if you want a sustainable FOSS project. But the AGPL requires sources merely for being able to talk to it over the network, so Elastic gets the free dev work, or the juicy license payments. The other free licenses achieve similar goals with technical differences that might matter for the user. But as a developer using ElasticSearch maybe you do want to ship your software under the SSPL, so you can pick the SSPL version.
Dual-licensing MIT/GPL, for example: you can build proprietary software, or GPL software where you can vendor it in as GPL-only as well, and thus guarantee your users their GPL rights.