During my recent blogging revival I’ve already written about how I love the web¹. I’ve also commented a couple of times about uses of AI and Large Language Models, and the kinds of confusion they can cause.
Today, I noticed an exchange between the brilliant Sara Joy and Stefan Bohacek on Mastodon, in which Stefan accidentally reminded me of something interesting that I hadn’t properly explored the first time around.
Rewind
About 20 years ago – actually 24 years ago, according to this Wikipedia article – there was a thing called FOAF, or Friend-of-a-Friend, an early online vocabulary / ontology for describing relationships between people and things online. There was also a related concept called DOAP, Description of a Project, that I was interested in and implemented in a couple of things I worked on back then. I did some digging, but the only references I can find on this blog are some passing mentions in the early 2000s, and I’ve lost my original foaf.rdf file – I might have to go hunting for that, for posterity, at some stage.
I’m mentioning all of this because it reminds me that I’ve always been interested in the Semantic Web space, and also in the people aspects of the web, beyond just the words and the technology – Who is making What, and How it is all connected.
Humans today
Back to the ~present!
About 10 years ago – actually 12 years ago, according to the last updated date in the original humans.txt file – there was the quiet proposal of an idea: a humans.txt file that could live alongside the robots.txt file on a web server.
The robots.txt file is intended for site owners to provide instructions to web crawlers – “robots”, or automated programs – about how to behave in relation to the content of the site. It is the agreed-upon convention that signals to search engines how to index websites, going all the way back to the early days of 1994-97, and later fully documented by Google and others.
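For illustration, a minimal robots.txt might look like the sketch below. The paths are made up, and GPTBot is named purely as an example of an AI crawler; any user agent string could appear there.

```
# Block one named crawler entirely
User-agent: GPTBot
Disallow: /

# Everyone else: stay out of the private area
User-agent: *
Disallow: /private/

Sitemap: https://example.com/sitemap.xml
```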
The idea behind the humans.txt file was simply that we should have a standard way to credit the people who made a website, in a format that is super easy to create and publish, regardless of the technology stack used to build the site or its URL formats and layout. It was briefly documented and lightly promoted on humanstxt.org. I remember noticing it at the time and loving the idea, but I admit that I never ended up using it myself.
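As a sketch – loosely following the structure suggested on humanstxt.org, with made-up names and details – a humans.txt is just a plain text file, for example:

```
/* TEAM */
    Maker: Jane Example
    Contact: jane [at] example.com
    From: London, UK

/* THANKS */
    Everyone who shared what it felt like to make their first website

/* SITE */
    Last update: 2024/08/29
    Standards: HTML5, CSS3
    Software: WordPress
```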
However, Stefan is using this on his site (and wrote about it 11 years ago, because of course he’s ahead of me again 😀) and it made me think:
- This is still A Great Idea, and Right Now Could Be Its Time
- We’ve seen deliberate misinterpretations / mis-statements (from big AI players) about the value of robots.txt in relation to AI crawlers/scrapers in the past 6-12 months. Let’s re-emphasise the human aspect here.
- humanstxt.org could do with a bit of a refresh / re-upping and updating, maybe, but it’s on all of us to promote and adopt this idea.
- the IndieWeb is thriving, and I’ve been seeing folks returning from XOXO over the past week enthusing about the greatness of the web.
REMEMBER THE JOY THE INTERNET CAN BRING ❤️donotreply.cards/en/do-post-wh…
— Dan Hon #xoxofest (@danhon) 2024-08-24T17:53:01.066Z
- Why don’t I add this to my sites? OK then, I will.
- Hold on, is there a browser extension for this? Oh, there is (although with the rollout of the new Chrome updates / Manifest V3 and a lack of maintenance, it may not work in the future)
- OK what about a WordPress plugin, for this here WordPress blog of mine? Oh, there is (although it has not been updated lately, and continues to refer to legacy stuff like some site called Twitter; it works, though)
- We really, really need to give credit where credit is due, in a world where things are increasingly being sucked up, mashed together by algorithms, and regurgitated in ways that diminish their creators for the enrichment of others.
What I’m saying is this – Thank You, Stefan, and Sara, and Dan Hon², and everyone else from XOXO and everywhere all over the internet, for reminding me that the web is great, humans are incredible, and hey, why don’t we all give this humans.txt thing one more try? I’m on board with that.
- In that post, I also mentioned that the Tiny Awards 2024 winners were due to be announced – and as I’m writing now, they have been: One Minute Park, and One Million Checkboxes. ↩︎
- A new edition of Dan’s excellent newsletter literally was published as I was typing this blog post. You need to subscribe to it. ↩︎
#Blaugust2024 #100DaysToOffload #author #browser #creativity #credit #handmade #human #humanity #humans #making #Technology #web #wordpress
I use a lot of apps, and I love my iPhone. BUT
I really love the Web.
A few things lately reminded me of what a great and – so far – durable, open set of technologies the Web is built on.
You can build such cool stuff on the Web! There are whole sites dedicated to collecting other sites showcasing cool things you can do with the web – see Single Serving Sites, or Neal.fun. And remember, there is no page fold. If you’re itching to build, I wrote about Glitch a few weeks ago – it’s a good place to try new things.
The writing trigger today was largely prompted by reading the latest edition of Tedium, specifically its commentary on the Patreon situation with the App Store.
[…] it is also reflective of a mistake the company made many years ago: To allow people to support patrons directly through its app. Patreon did not need to do this. It was just a website at first, and for all the good things that can be said about the company, fact is they built on shaky land. To go to my earlier metaphor: They built their foundation on quicksand, perhaps without realizing it, though the broken glass wasn’t thrown in just yet. […] That shaky land isn’t the web, and if Patreon had stayed there, this would not be an issue. It’s the mobile app ecosystem, which honestly treats everyone poorly whether they want to admit it or not.
– Ernie @ Tedium
In turn, Ernie links to John Gruber’s assessment of the situation, which is also worth reading. Look at that – hyperlinks between content published freely on open platforms, that can be read, studied, accessed around the world, and discussed, all within minutes and hours of publication. Mind blowing! Thank you, Sir Tim Berners-Lee!
I spend a bunch on apps, and in apps, and with Apple, directly and indirectly. They have a good ecosystem, and it is all convenient (but spendy) for me as a consumer… but I don’t think this whole situation with them milking creators and creatives is OK at all. The trouble is that the lines are all kinds of blurry here – if Apple carved out a new category and set of rules for apps that sell subscriptions for creators, with, say, a zero fee or simply a lower fee than other categories, then you’d get into situations where others try to find ways into that category to avoid the higher fees.
Plus, of course, with the state of capitalism and big tech, we increasingly don’t own what we buy (per Kelly Gallagher Sims’ excellent Ownership in the Rental Age post; I also again highly recommend Cory Doctorow’s books, Chokepoint Capitalism and The Internet Con).
I use closed platforms, and I use open platforms.
The closed ones make me increasingly sad and frustrated.
The open ones can take more tinkering and effort, but I get a lot back from them. They need sustaining. They don’t come for free. They need us to contribute, and to find ways to pay to support the creators and makers and builders and engineers.
If you like creative, quirky online sites, you should subscribe to Naive Weekly. I’m still enjoying things I found in it last month.
Now, I’m off to continue exploring… everything.
Long live The Web!
PS the winners of the Tiny Awards 2024 are announced at the weekend… 👀
#Blaugust2024 #100DaysToOffload #appStores #Apple #capitalism #chokepointCapitalism #coryDoctorow #enshittification #openSource #openTechnology #rentSeeking #Technology #web
Better Call a Website
Internet Phone Book, Crawl Space, PBS of the Internet and more :) – Kristoffer (Naive Weekly)
Debian Orphans Bcachefs-Tools: "Impossible To Maintain In Debian Stable"
Even before the Bcachefs file-system driver was accepted into the mainline kernel, Debian had for the past five years offered a "bcachefs-tools" package to provide the user-space programs for this copy-on-write file-system. That was straightforward while the tools were plain C code, but since they transitioned to Rust, the package has become an unmaintainable mess for stable-minded distribution vendors. As such, the bcachefs-tools package has now been orphaned by Debian.
From John Carter's blog, Orphaning bcachefs-tools in Debian:
"So, back in April the Rust dependencies for bcachefs-tools in Debian didn’t at all match the build requirements. I got some help from the Rust team who says that the common practice is to relax the dependencies of Rust software so that it builds in Debian. So errno, which needed the exact version 0.2, was relaxed so that it could build with version 0.4 in Debian, udev 0.7 was relaxed for 0.8 in Debian, memoffset from 0.8.5 to 0.6.5, paste from 1.0.11 to 1.08 and bindgen from 0.69.9 to 0.66.I found this a bit disturbing, but it seems that some Rust people have lots of confidence that if something builds, it will run fine. And at least it did build, and the resulting binaries did work, although I’m personally still not very comfortable or confident about this approach (perhaps that might change as I learn more about Rust).
With that in mind, at this point you may wonder how any distribution could sanely package this. The problem is that they can’t. Fedora and other distributions with stable releases take a similar approach to what we’ve done in Debian, while distributions with much more relaxed policies (like Arch) include all the dependencies as they are vendored upstream."
...
With this in mind (not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit), I decided to remove bcachefs-tools from Debian completely. Although after discussing this with another DD, I was convinced to orphan it instead, which I have now done.
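For readers unfamiliar with Cargo, the "relaxing" described in the quote amounts to loosening version requirements in the crate metadata. Roughly, it might look like the sketch below – the exact syntax upstream uses for pinning, and the mechanics of how Debian applies such a change, are assumptions on my part:

```toml
[dependencies]
# Upstream pins an exact version:
# errno = "=0.2"
# Relaxed so the package builds against the crate version shipped in Debian:
errno = "0.4"
```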
Debian Orphans Bcachefs-Tools: "Impossible To Maintain In Debian Stable"
Even before the Bcachefs file-system driver was accepted into the mainline kernel, Debian for the past five years has offered a 'bcachefs-tools' package to provide the user-space programs to this copy-on-write file-system. – www.phoronix.com
Newly added to the Trade-Free Directory:
materialvermittlung.org
We provide material!
#adhesives #artSupplies #cardboard #containers #fabrics #foam #foils #Material #metal #naturalMaterials #paints #paper #polystyrene #rubber #sewingAccessories #textiles #theaterDecoration #wood
More here:
directory.trade-free.org/goods…
Looking for software KVM I can't remember the name of (solved)
Fairly recently, I saw an app that served the same purpose as Barrier or Input-leap, allowing you to use one computer to control the keyboard and cursor of multiple machines. I'm fairly certain it was designed with GTK 4, or maybe 3, and it had Wayland support. I've had no luck getting input-leap working well on my devices, so if anyone knows what app this was (or any other options) I would really appreciate it.
Update:
Despite searching for 15 minutes before posting, I found it seconds later, thanks to DDG's Reddit bang. It is lan-mouse. I'll leave this up in case this software comes in handy for others.
GitHub - feschber/lan-mouse: mouse & keyboard sharing via LAN
mouse & keyboard sharing via LAN. Contribute to feschber/lan-mouse development by creating an account on GitHub. – GitHub
Thanks for the tip!
This was a long-standing showstopper for me & Wayland. I got rid of my work computer instead, but if I get another one I'll be sure to test this out.
GitHub - QazCetelic/cockpit-boot-analysis: Cockpit plugin that shows information about system / userspace startup in a graph.
Cockpit plugin that shows information about system / userspace startup in a graph. - QazCetelic/cockpit-boot-analysis – GitHub
EmuDeck team announce Linux-powered EmuDeck Machines
EmuDeck team announce Linux-powered EmuDeck Machines
The team behind EmuDeck, a project that started off to provide easy emulation on Steam Deck / SteamOS, have announced their own hardware with the Bazzite Linux powered EmuDeck Machines. – Liam Dawe (GamingOnLinux)
They say that even the cheap Intel version can play most retro games:
Ideal for light indie Steam games and retro gaming up to Wii U
With the ryzen variant, you can probably play most games.
Where are the fans on this thing? Please do not tell me you intend to passively cool a chip you intend to run Cyberpunk 2077 on?
Did we learn nothing from Intel era Apple? Sure, AMD chips run moderately cooler than Intel ones under the same workload, but still...
And the hardware, of course, but this is mostly off the shelf as well, I would say.
OS-wise this is ready to go; my main concern is that their EmuDeck project is written in a way that needs KDE to be fully functional. Hopefully this is just because it started out as a Steam Deck app and isn't inexperience.
The old frontends like Hyperspin and Launchbox were similar, in that they were developed in such a way that it really limited where those projects could go; as such, the retro frontends for Linux have been pretty limited.
It'd be really cool to see them expand EmuDeck into a standalone program, and it's even cooler that we're seeing the intention of uBlue already allowing people to achieve their goals.
Asahi Lina's experience working on Rust code in the kernel
I just wish every programmer completed the rustlings game/tutorial. It doesn't take that long.
I didn't even fully complete it, and it made me a way better programmer, because it forces you to think RIGHT.
It may sound weird for people who haven't experienced it, but it's amazing when you get angry at the compiler and you realise... It is right, and you were doing something that could f*ck you up 2 months in the future.
And after a bit of practice, it starts wiring your brain differently, and now my Python code looks so much better and is way safer, just because of those days playing around in rustlings.
So yeah, Rust is an amazing language for everything, but particularly for kernel development. Either Linux implements it, or it'll probably die in 30 years and get replaced with a modern Rust kernel.
Georgia Tech Neuroscientists Explore the Intersection of Music and Memory
Georgia Tech Neuroscientists Explore the Intersection of Music and Memory | Research
In two studies, Ph.D. student Yiren Ren's research explores music’s impact on learning, memory, and emotions. – research.gatech.edu
Triple Buffering pushed to Gnome 48
Retargeted triple buffering to GNOME 48 instead of trying to upstream it in 47 at the last minute. Actually upstream wants it in 47 more than we do. But recent code reviews are both too numerous to resolve quickly and too destabilizing if implemented fully. So I’m not going to do that so close to release. There are still no known bugs to worry about and the distro patch for 24.10 only needs to be supported until EOL in July 2025.
discourse.ubuntu.com/t/desktop…
Dynamic triple/double buffering (v4) (!1441) · Merge requests · GNOME / mutter · GitLab
Use triple buffering if and when the previous frame is running late. This means the next frame will be dispatched on time instead of also starting late. It... – GitLab
Xaver Hugl, one of the KDE devs, wrote a wonderful post explaining triple buffering. Maybe check this out.
TL;DR
(In the context of KDE Plasma)
With all those changes implemented in Plasma 6.1, triple buffering on Wayland
- is only active if KWin predicts rendering to take longer than a refresh cycle (see the toy sketch after this list)
- doesn’t add more latency than necessary even while triple buffering is active, at least as long as render time prediction is decent
- works independently of what GPU you have
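To make the first bullet concrete, here is a toy sketch of that decision in Rust – purely illustrative, not KWin's actual code (KWin is C++ and its real heuristics are more involved):

```rust
// Toy model of the "when to triple buffer" decision described above.
fn buffer_count(predicted_render_time_ms: f64, refresh_interval_ms: f64) -> u32 {
    if predicted_render_time_ms <= refresh_interval_ms {
        // Rendering is expected to finish within one refresh cycle:
        // double buffering is enough and adds no extra latency.
        2
    } else {
        // Rendering is predicted to run late: allow a third buffer so the
        // next frame can be dispatched on time instead of also starting late.
        3
    }
}

fn main() {
    let refresh_ms = 1000.0 / 60.0; // ~16.7 ms on a 60 Hz display
    assert_eq!(buffer_count(10.0, refresh_ms), 2);
    assert_eq!(buffer_count(20.0, refresh_ms), 3);
}
```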
Fixing KWin’s performance on old hardware
KWin had a very long standing bug report about bad performance of the Wayland session on older Intel integrated graphics. – Xaver Hugl (Xaver’s blog)
GNU Screen 5.0 released
Screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells.
The 5.0.0 release includes the following changes compared to the previous release, 4.9.1:
- Rewritten authentication mechanism
- Add escape %T to show current tty for window
- Add escape %O to show number of currently open windows
- Use wcwidth() instead of UTF-8 hard-coded tables
- New commands (see the example .screenrc sketch after this list):
  - auth [on|off] – provides password protection
  - status [top|up|down|bottom] [left|right] – the status window is in the bottom-left corner by default; this command can move status messages to any corner of the screen
  - truecolor [on|off]
  - multiinput – input to multiple windows at the same time
- Removed commands:
  - time
  - debug
  - password
  - maxwin
  - nethack
- Fixes:
  - Screen buffers ESC keypresses indefinitely
  - Crashes after passing through a zmodem transfer
  - Fix double -U issue
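As a sketch of how the new commands might be used in a ~/.screenrc – untested against 5.0, and the status placement is just an example:

```
# Require a password before (re)attaching (new 5.0 'auth' command)
auth on

# Enable truecolor support
truecolor on

# Move status messages to the top-right corner instead of the default bottom-left
status top right
```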
GNU Screen - News [Savannah]
Savannah is a central point for development, distribution and maintenance of free software, both GNU and non-GNU. – savannah.gnu.org
`less` the file and copy with my mouse
I have a lot of trouble with the window/pane management. Moving panes to a different window is rather difficult. The server>session>window>pane hierarchy also seems way too deep for my humble needs.
The fact that the active window syncs between sessions is also really odd. Why can't I look at different windows on different devices?
The `screen` package is deprecated and not included in RHEL8.0.0. - Red Hat Customer Portal
How to install the screen package in RHEL8.0.0? The screen package is not available in RHEL8. – Red Hat Customer Portal
Lol, up until last year, it hadn't been touched in a decade, and even then there's only 5 commits less than 14 years old.
only 5 commits less than 14 years old
I think you're looking at the latest commit in each branch. There are ~40 commits this year.
Digital banks are being exploited for money laundering. Fully digital banks – which the police and a number of other authorities call "neobanks" in a report – run, according to the police, a significant risk of being exploited for money laundering.
vkd3d 1.13 Released
vkd3d 1.13 · wine / vkd3d · GitLab
The vkd3d team is proud to announce that release 1.13 of vkd3d, the Direct3D to Vulkan translation library, is now available. This release contains improvements that... – GitLab
Maybe not, but that doesn't really answer my question of what this would be used for.
I'm not hating, just interested; the last I knew, if you wanted to play Direct3D 12 games you'd need the Proton fork. But I don't know many other things Direct3D is used for, so...
Asking for donations in Plasma
Asking for donations in Plasma
Why do we ask for donations so often? Because it’s important! As KDE becomes more successful and an increasing number of people use our software, our costs grow as well: Web and server hostin… – Adventures in Linux and KDE
Now this is much better than getting ads in your Start Menu.
Donate and support KDE
Help us create software that gives you full freedom and protects your privacy, and be a part of a great community. – Donate and support KDE
*checks if it's April 1*
no? just no. please don't open a door to Microsoft BS.
I do suspect a small but vocal crowd of people will spread doom and gloom about it on social media anyway, of course.
I see they're here already
I mean, at least I'm not paying $200* for the privilege of being advertised to... I'd like an option to disable it permanently in the popup but it seems mostly reasonable?
^* This is the first price I got for a Windows licence when I searched for it. I know you can probably get them cheaper, but that's the price they're advertising, so eh.^
Why?
People spend countless hours building the software you use for free. Now they need to buy actual hardware to build and test that software, and what? They have to pay with their own money, on top of all the time they have already spent, so that you can continue to use this for free?
You're not forced to pay anything, they're asking for small donations
I personally think once a year is not enough; every 6 months might be better. Also, people already spend a lot during December, so they might not prioritize donating to KDE.
For those complaining... well, I don't know what to say to them. Such a big, complex piece of software which is 100% free should be allowed to remind us that they need money.
Don't forget they said it's running as a daemon specifically so you can easily disable it if it triggers you so much.
It’s implemented as a KDE Daemon (KDED) module, which allows users and distributors to permanently disable it if they like.
Eh. I guess good enough.
But I'm still opposed on principle.
A lot of people here have such a bizarre stance.
People have put work into this, for free. And the moment they ask for support, you immediately bring the pitchforks out, over a singular pop-up you can permanently disable? That's just plain disrespectful, at the very least
Unfortunately, there has always been the issue that a not-insignificant percentage of users of FOSS software believe the FREE part means "free as in beer" and take umbrage when asked to contribute.
I've long been a proponent (and I know I'm in a minority) of a shift in the marketing of FOSS applications from "donation based" to "value based" – meaning that the expectation is that if you enjoy the software, you pay an amount you believe is commensurate to your use. This is voluntary, of course... if you can't pay, then please use it and enjoy it. But those who can pay should pay... at least a little bit, to offset the costs for those who can't.
It's more or less that the wording of FOSS apps needs to change so that you are expected to contribute if you can.
Just my opinion. Like I said, I know I'm in the minority. I'm just not a fan of the percentage of users that has always existed that (falsely) thinks that asking for money for your project is somehow anathema to the Open Source ideal, and whines whenever they're asked to contribute.
Maybe donate 50 cents for every hour you used the software and it was useful to you.
That would be 1000 €$ per year if you work with Linux full time.
Let’s see some commercial software:
Microsoft Office 365 is 70 $€ per year. Adobe Suite around 700 $€ per year. IntelliJ IDEA about 170 $€ per year. Affinity Suite is 170 $€ once. Reaper is 60 €$ for a discounted license. Full featured media player like Elmedia costs 20 $€. BBEdit costs 60.
The FOSS windows and Mac FTP client Cyberduck asks for a minimum 10 €$ donation. It won’t prompt you for a donation if you bought a license. The Duck applications are all pretty nice.
While I absolutely agree with what you are trying to say, and I donate to KDE myself already, the issue with a lot of comments like yours is that the examples you use are almost always commercial software that only sees limited use anyway. I get value out of non-commercial-use applications such as Dolphin, Kate, Konsole, and KDE Connect. Finding examples of popular paid versions of those applications would go a long way, in my opinion, because it would be something that more people can relate to.
The problem I see with the examples you are giving is the same problem I see when someone uses those examples as reasons why they can't switch to Linux in the first place: while those programs are popular, they aren't used by the vast majority of people, who don't have a work-related need for them. Half the people that claim it as an excuse probably don't actually use those programs either.
Your examples such as Cyberduck, Elmedia, and BBEdit are your stronger examples. Again, just my opinion.
The Duck applications are all pretty nice.
They make more apps than just Cyberduck?
Also what the hell is up with everyone saying "free as beer"?
Beer isn't free!
The full saying is "Free as in Speech, not Free as in Beer"
Basically the "Free" in free means that it's free to do with as you please, modify, etc... But not free as in "here's a free product...like getting a free beer"
That's also confusing and it is not the full saying. The full saying is "free as in free speech, not free beer".
From the FSF website:
Free software is a matter of liberty, not price. Think of “free” as in “free speech”, not as in “free beer”. Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software.
I have donated over €1500 to open-source projects.
I have only once bought a commercial software license, worth €7 for a lifetime.
It's not complicated.
It's an ad.
There's no version of advertising I will ever be OK with.
Not an ad. No one is trying to sell you anything.
(If you get the notification) you're already using their product.
Yes, it is an ad. Any call to action is an ad.
And its mere presence will ensure I don't give them any more money. The core concept of inserting any ad in an OS is not behavior I am willing to reward.
So, asking you to VOLUNTARILY donate IF YOU WANT to with a pop-up you can simply ignore and/or disable is advertising? I don't understand... I mean, they give you a product for free, full of good features and updated regularly, and the moment they ask you to donate, again, IF YOU WANT to, it's considered advertising...
You're so sad, dude.
Yes. It is literally impossible for an organization asking for money not to be an ad.
And yes, showing me a single ad once means I never give them money again. I am not OK with ads.
Don't use KDE then🤷♂️
Those assholes! They should make an OS for free!!! How dare they ask for support?!?!
No one forces you to support them; if it's so annoying, just disable it. I wonder if it would make you happy to work for someone for free... Hope it happens to you some day so you'll understand how bad it is :)
Cya
This is not an OS behaviour. KDE is a desktop environment.
If it bothers you so much, remove the DE and use the command line, full time
Ads try to sell you something, there is no "call to action". Here, there is nothing to sell, so by definition it's not an ad.
They are just asking you if you'd like to help them in providing you the product you're already using.
I'm not against the idea, but I do think it's a bit unfair. There are dozens of projects KDE relies on that never even get the chance to ask for donations this way, simply because they don't need a GUI.
I believe KDE should at least offer to share the donations with other projects – projects that would otherwise have no voice. Something like the old Humble Bundle donation method would work really well, and would let users choose how their money is allocated.
The one change I would make would be adding a "never" button to the notification so you don't have to disable it in the settings if you don't want it
Or actually "Don't show again" would probably be better phrasing
I haven't got that notification yet, but when I do, I'll be sure as shit to donate as large an amount as I can afford.
Edit: I know I can donate anyway, and have donated already, but just to highlight the idea.
What are you talking about? Microsoft is charging for HEIF and HEIC, and looking to do the same with AV1 and AVIF. Intel started development of EFI in the Itanium days as a new standard for servers running Unix, before it became an open standard. Microsoft wasn't the only one involved in the standard; Apple, Red Hat, HP, Intel, AMD, and a laundry list of hardware/software manufacturers were too.
Almost all the innovation happens in the enterprise space, with servers running Unix/Linux. User operating systems are an afterthought when these features are initially conceived.
No problem... Once a year is fine. It's a non-profit based in Germany...
Thunderbird shows it once at every startup...
Thunderbird shows it once at every startup
Honestly didn't realise till you pointed that out. I'm so used to seeing it that it doesn't register to me what it's saying anymore. Probably for the best that KDE only does it once a year; if it were daily I'm sure it wouldn't even register to people that it's asking for donations.
Tumbleweed Monthly Update - August 2024
Tumbleweed Monthly Update - August 2024
Welcome to the monthly update for Tumbleweed for August 2024. This month has been a productive period with significant progress and updates. The rolling-rele... – openSUSE News
One Of The Rust Linux Kernel Maintainers Steps Down - Cites "Nontechnical Nonsense"
Wedson Almeida Filho is a Microsoft engineer who has been prolific in his contributions to Rust code for the Linux kernel over the past several years. Wedson has worked on many Rust Linux kernel features and even did an experimental EXT2 file-system driver port to Rust. But he's had enough and is now stepping away from the Rust for Linux efforts.
From Wedson's post on the kernel mailing list:
I am retiring from the project. After almost 4 years, I find myself lacking the energy and enthusiasm I once had to respond to some of the nontechnical nonsense, so it's best to leave it up to those who still have it in them.
...
I truly believe the future of kernels is with memory-safe languages. I am no visionary but if Linux doesn't internalize this, I'm afraid some other kernel will do to it what it did to Unix.
Lastly, I'll leave a small, 3min 30s, sample for context here: youtu.be/WiPp9YEBV0Q?t=1529 -- and to reiterate, no one is trying to force anyone else to learn Rust nor prevent refactorings of C code.
One Of The Rust Linux Kernel Maintainers Steps Down - Cites "Nontechnical Nonsense"
One of the several Rust for Linux kernel maintainers has decided to step away from the project. – www.phoronix.com
"There's no compromise, I'M RIGHT AND YOU'RE WRONG!"
no wonder everyone hates rustphiles
I wouldn't say that. For primitives, yeah, a day or two. But if you want to build a proper program, it'll take time to get used to it. For my first few projects I just used clone everywhere. Passing by reference and managing lifetimes, especially when writing libraries, is something that takes time to get used to. I still don't feel confident.
Besides that, I do like Rust though. Sometimes I feel like "just let me do that, C lets me", but I know it's just adding safety where C wouldn't care.
Isn’t Linux still Linux even though probably a lot of the original code is gone?
The Kernel of Theseus.
You get used to the syntax and borrow checker in a day or two.
As someone who spent a couple months learning rust, this was half true for me. The syntax? Yeah. No problem. The borrow-checker (and Rust's concept of ownership and lifetimes in general)? Absolutely not. That was entirely new territory for me.
The first directory block is a hole. But type == DIRENT, so no error is reported. After that, we get a directory block without '.' and '..' but with a valid dentry. This may cause some code that relies on dot or dotdot (such as make_indexed_dir()) to crash
The problem isn't that the block is a hole. It's that the downstream function expects the directory block to contain `.` and `..`, and it gets given one without them because of incorrect error handling.
You can encode the invariant of "has dot and dot dot" using a refinement type and smart constructor. The refined type would be a directory block with a guarantee it meets that invariant, and an instance of it could only be created through a function that validates the invariant. If the invariant is met, you get the refined type. If it isn't, you only get an error.
This doesn't work in C, but in languages with stricter type systems, refinement types are a huge advantage.
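As a hedged sketch of that pattern in Rust – the type name, the validation details, and the stand-in for make_indexed_dir() are invented for illustration, not taken from the kernel code:

```rust
/// A directory block that is guaranteed to contain the "." and ".." entries.
/// The field is private, so outside this module the only way to obtain an
/// instance is through the smart constructor `parse`.
pub struct ValidDirBlock {
    entries: Vec<String>,
}

impl ValidDirBlock {
    /// Smart constructor: validate the invariant, or refuse to construct.
    pub fn parse(entries: Vec<String>) -> Result<ValidDirBlock, String> {
        if entries.first().map(String::as_str) == Some(".")
            && entries.get(1).map(String::as_str) == Some("..")
        {
            Ok(ValidDirBlock { entries })
        } else {
            Err("directory block is missing '.' and/or '..'".to_string())
        }
    }

    pub fn entries(&self) -> &[String] {
        &self.entries
    }
}

// Downstream code (think of the make_indexed_dir() mentioned in the quote)
// can now take a ValidDirBlock and rely on the invariant without re-checking.
fn build_index(block: &ValidDirBlock) {
    println!("first entry: {}", block.entries()[0]);
}
```

Because the field is private, `ValidDirBlock { entries: ... }` won't compile outside the defining module; every instance has passed the validation, so callers can't forget the check.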
Smart Constructors
(Photo by Michael Dziedzic on Unsplash) The validation problem: At some point, every developer writing user-facing code has asked themselves the question “How should I validate input?” – Andy G's Blog
What kind of type signature would prove the first block of any directory in an ext4 filesystem image isn’t a hole?
I don't know if the type system proves it's not a hole, but the type system certainly seems to force consumers to contend with the possibility by surfacing the outcomes at the type system level. That's what the `Either` is doing in the example's return type, is it not?

fn get_or_create_inode(
    &self,
    ino: Ino
) -> Result<Either<ARef<Inode<T>>, inode::New<T>>>
Sublicense the Linux Mark | Linux Foundation
The Linux Foundation protects the public and Linux users from unauthorized and confusing uses of the trademark. – www.linuxmark.org
That's disingenuous though.
- We're not forcing you to learn Rust. We'll just place code in your security-critical project in a language you don't know.
- Rust is a second-class citizen, but we feel Rust is the superior language and all code should eventually benefit from its memory safety.
- We're not suggesting that code needs to be rewritten in Rust, but the Linux kernel development must internalise the need for memory-safe languages.

No other language community does what the Rust community does. Haskellers don't go to the Emacs project and say "We'd like to write Emacs modules, but we think Haskell is a much nicer and safer functional language than Lisp, so how about we add the capability of using Haskell alongside Lisp?". Pythonistas didn't add Python support to Rails alongside Ruby.
Rusties seem to want to convert everyone by Trojan horsing their way into communities. It's extremely damaging, both to those communities and to rust itself.
Arch people tell you "I use arch BTW"
Rust people make PRs rewriting your code in rust.
Rust people are worse.
There is no "your" new rust kernel. There is a gigantic ship of Theseus that is the Linux kernel, and many parts of it are being rewritten, refactored, removed an added all the time by god knows how many different people. Some of those things will be done in rust.
Can we stop reacting to this the way conservatives react to gay people? Just let some rust exist. Nobody is forcing everyone to be gay, and nobody is forcing everybody to immediately abandon C and rewrite everything in rust.
Of course. Rust isn't immune to logic errors, off-by-one mistakes, and other such issues. Nor is it memory safe in `unsafe` blocks.
Just by virtue of how memory safety issues account for 50%+ of vulnerabilities, it's worth genuinely considering as long as the bindings don't cause maintainability issues.
Unless you're a functional programming purist or coming from a systems programming background, it takes a lot longer than a few days to get used to the borrow checker. If you're coming as someone who most often uses garbage-collected languages, it's even worse.
The problem isn't so much understanding what the compiler is bitching about, as it is understanding why the paradigm you used isn't safe and learning how to structure your code differently. That part takes the longest and only really starts to become easier when you learn to stop fighting the language.
Nobody can maintain a fork of the Linux kernel on their own, or even with a team. It's a HUGE task.
There already is rust in part of the linux kernel. It's not a fork.
But I agree with your first statement, people are dumb as hell, me included lol
There's always going to be pushback on new ideas. He's basically asking people questions like "Hey, how does your thing work? I want to write it in Rust." and getting the answer "I'm not going to learn Rust.".
I think Rust is generally a good thing, and with a good amount of tests to enforce behavior it's possible to make a functionally equivalent copy of the current code with no memory issues in its future maintenance. Rewriting things in Rust will also force people to clarify the behavior and all the possible theoretical paths the software can take.
I'm not gonna lie though: if I had worked on software for 20 years and people introduced a component written in another language, my first reaction would be "this feels like a bad idea and doesn't seem necessary".
I really hope that the kernel starts taking rust seriously, it's a great tool and I think it's way easier to write correct code in rust than C. C is simple but lacks the guardrails of modern languages which rust has.
The process of moving to rust is happening but it's going to take a really long time. It's a timescale current maintainers don't really need to worry about since they'll be retired anyway.
I'll add that even when you're an expert in both languages, it's common to see WTF's in the original and not be sure if something is a bug or just weird behavior that's now expected. Especially when going from a looser to a more strict language.
I've translated huge projects and most of the risk is in "you know the original would do the wrong thing in these x circumstances -- I'm pretty sure that's not on purpose but.... Maybe? Or maybe now someone depends on it being wrong like this?"
From a developer standpoint you're taking someone's baby, cloning it into a language they don't understand and deprecating the original. Worse, if you're not actually interested in taking over the project you've now made it abandonware because the original developer lost heart and the person looking for commit counts on GitHub has moved on.
Obviously these extremes don't always apply, but a lot of open source relies on people taking a personal interest. If you destroy that, you might just destroy the project.
Schemes and Resources - The Redox Operating System
This book carefully describes the design, implementation, direction, and structure of Redox, the operating system. – doc.redox-os.org
I am no visionary but if Linux doesn’t internalize this, I’m afraid some other kernel will do to it what it did to Unix.
Maybe that's not a bad thing? If you ask me the GNU people are missing a trick. Perhaps if they rewrote Hurd in Rust they could finally shed that "/Linux".
I see that my previous comment is not the common reality apparently.
I'm mainly a C# + js dev of a few years, and I would love to see what precisely other people here are having problems with, because I've had a completely different experience to most of the people replying.
I’d like to add that there’s a difference between unsafe and unspecified behavior. Sometimes I write unsafe code that nonetheless has specified behavior, and in that case I want the compiler to produce exactly the unsafe behavior that was specified according to the language semantics.
Especially when developing a kernel or an embedded system, an example would be code that references a pointer at a hardcoded constant address. Perhaps this code then performs pointer arithmetic to access other addresses. It’s clear what the code should literally do, but it’s quite an unsafe thing to do unless you as the developer have some special knowledge that the address is accessible and contains data that makes sense to be processed in such a manner. This can be the case when interacting directly with registers representing some physical device or peripheral, but of course, there’s nothing in the language that would suggest doing this is safe. It’s making dangerous assumptions that are not enforced as part of the program. Those assumptions are only true if the program is running on hardware that makes this a valid thing to do, where that magical address and offsets from it do represent something I can read in memory.
Of course, pointer arithmetic can be quite dangerous, but I think the point still stands that behavior can be specified and unsafe in a sense.
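As a hedged sketch of that kind of code in Rust – the address and register layout are invented for illustration; on real hardware they must come from the device documentation:

```rust
// Reading a memory-mapped status register at a hardcoded address, plus a
// little pointer arithmetic to reach a neighbouring register. The behaviour
// is fully specified by the language, but it is only *safe* under
// hardware-specific assumptions the compiler cannot check.
const STATUS_REG: *const u32 = 0x4000_0000 as *const u32; // made-up address

fn read_status() -> u32 {
    // Volatile read: we assert, outside the type system, that this address
    // is mapped and readable on the target hardware.
    unsafe { core::ptr::read_volatile(STATUS_REG) }
}

fn read_register(offset_words: usize) -> u32 {
    // Pointer arithmetic relative to the base register: specified behaviour,
    // but sound only if the neighbouring address is also a real register.
    unsafe { core::ptr::read_volatile(STATUS_REG.add(offset_words)) }
}
```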
I'll try :) Looks like I still have my code from when I was grinding through The Book, and there's a couple spots that might be illuminating from a pedagogical standpoint. That being said, I'm sure my thought process, and "what was active code and what was commented out and when," will probably be hard to follow.
My first confusion was in ~~deref coercion~~ auto dereferencing (edit: see? it's still probably not 100% in my head :P), and my confusion pretty much matched this StackOverflow entry:
stackoverflow.com/questions/28…
It took me until Chapter 15 of The Book (on Boxes) to really get a feel for what was happening. My work and comments for Chapter 15:
use crate::List::{Cons, Nil};
use std::ops::Deref;
enum List {
Cons(i32, Box<List>),
Nil,
}
struct MyBox<T>(T);
impl<T> Deref for MyBox<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<T> MyBox<T> {
fn new(x: T) -> MyBox<T> {
MyBox(x)
}
}
#[derive(Debug)]
struct CustomSmartPointer {
data: String,
}
impl Drop for CustomSmartPointer {
fn drop(&mut self) {
println!("Dropping CustomSmartPointer with data `{}`!", self.data);
}
}
fn main() {
let b = Box::new(5);
println!("b = {}", b);
let _list = Cons(1, Box::new(Cons(2, Box::new(Cons(3,Box::new(Nil))))));
let x = 5;
let y = MyBox::new(x);
assert_eq!(5,x);
assert_eq!(5, *y);
let m = MyBox::new(String::from("Rust"));
hello(&m);
hello(m.deref());
hello(m.deref().deref());
hello(&(*m)[..]);
hello(&(m.deref())[..]);
hello(&(*(m.deref()))[..]);
hello(&(*(m.deref())));
hello((*(m.deref())).deref());
// so many equivalent ways. I think I'm understanding what happens
// at various stages though, and why deref coercion was added to
// the language. Would cut down on arguing over which of these myriad
// cases is "idomatic." Instead, let the compiler figure out if there's
// a path to the desired end state (&str).
// drop stuff below ...
let _c = CustomSmartPointer {
data: String::from("my stuff"),
};
let _d = CustomSmartPointer {
data: String::from("other stuff"),
};
println!("CustomSmartPointers created.");
drop(_c);
println!("CustomSmartPointer dropped before the end of main.");
// this should fail.
//println!("{:?}", _c);
// yep, it does.
}
fn hello(name: &str) {
println!("Hello, {name}!");
}
Another thing that ended up biting me in the ass was Non-Lexical Lifetimes (NLLs). My code from Chapter 8 (on HashMaps):
use std::collections::HashMap;
fn print_type_of<T>(_: &T) {
println!("{}", std::any::type_name::<T>())
}
fn main() {
let mut scores = HashMap::new();
scores.insert(String::from("Red"), 10);
scores.insert(String::from("Blue"), 20);
let score1 = scores.get(&String::from("Blue")).unwrap_or(&0);
println!("score for blue is {score1}");
print_type_of(&score1); //&i32
let score2 = scores.get(&String::from("Blue")).copied().unwrap_or(0);
println!("score for blue is {score2}");
print_type_of(&score2); //i32
// hmmm... I'm thinking score1 is a "borrow" of memory "owned" by the
// hashmap. What if we modify the blue teams score now? My gut tells
// me the compiler would complain, since `score1` is no longer what
// we thought it was. But would touching the score of Red in the hash
// map still be valid? Let's find out.
// Yep! The below two lines barf!
//scores.insert(String::from("Blue"),15);
//println!("score for blue is {score1}");
// But can we fiddle with red independently?
// Nope. Not valid. So... the ownership must be on the HashMap as a whole,
// not pieces of its memory. I wonder if there's a way to make ownership
// more piecemeal than that.
//scores.insert(String::from("Red"),25);
//println!("score for blue is {score1}");
// And what if we pass in references/borrows for the value?
let mut refscores = HashMap::new();
let mut red_score:u32 = 11;
let mut blue_score:u32 = 21;
let default:u32 = 0;
refscores.insert(String::from("red"),&red_score);
refscores.insert(String::from("blue"),&blue_score);
let refscore1 = refscores.get(&String::from("red")).copied().unwrap_or(&default);
println!("refscore1 is {refscore1}");
// and then update the underlying value?
// Yep. This barfs, as expected. Can't mutate red_score because it's
// borrowed inside the HashMap.
//red_score = 12;
//println!("refscore1 is {refscore1}");
// what if we have mutable refs/borrows though? is that allowed?
let mut mutrefscores = HashMap::new();
let mut yellow_score:u32 = 12;
let mut green_score:u32 = 22;
let mut default2:u32 = 0;
mutrefscores.insert(String::from("yellow"),&mut yellow_score);
mutrefscores.insert(String::from("green"),&mut green_score);
//println!("{:?}", mutrefscores);
let mutrefscore1 = mutrefscores.get(&String::from("yellow")).unwrap();//.unwrap_or(&&default2);
//println!("{:?}",mutrefscore1);
println!("mutrefscore1 is {mutrefscore1}");
// so it's allowed. But do we have the same "can't mutate in two places"
// rule? I think so. Let's find out.
// yep. same failure as before. makes sense.
//yellow_score = 13;
//println!("mutrefscore1 is {mutrefscore1}");
// updating entries...
let mut update = HashMap::new();
update.insert(String::from("blue"),10);
//let redscore = update.entry(String::from("red")).or_insert(50);
update.entry(String::from("red")).or_insert(50);
//let bluescore = update.entry(String::from("blue")).or_insert(12);
update.entry(String::from("blue")).or_insert(12);
//println!("redscore is {redscore}");
//println!("bluescore is {bluescore}");
println!("{:?}",update);
// hmmm.... so we can iterate one by one and do the redscore/bluescore
// dance, but not in the same scope I guess.
let mut updatesingle = HashMap::new();
updatesingle.insert(String::from("blue"),10);
for i in "blue red".split_whitespace() {
let score = updatesingle.entry(String::from(i)).or_insert(99);
println!("score is {score}");
}
// update based on contents
let lolwut = "hello world wonderful world";
let mut lolmap = HashMap::new();
for word in lolwut.split_whitespace() {
let entry = lolmap.entry(word).or_insert(0);
*entry += 1;
}
println!("{:?}",lolmap);
// it seems like you can only borrow the HashMap as a whole.
// let's try updating entries outside the context of a forloop.
let mut test = HashMap::new();
test.insert(String::from("hello"),0);
test.insert(String::from("world"),0);
let hello = test.entry(String::from("hello")).or_insert(0);
*hello += 1;
let world = test.entry(String::from("world")).or_insert(0);
*world += 1;
println!("{:?}",test);
// huh? Why does this work? I'm borrowing two sections of the hashmap like before in the update
// section.
// what if i print the actual hello or world...
// nope. barfs still.
//println!("hello is {hello}");
// I *think* what is happening here has to do with lifetimes. E.g.,
// when I introduce the println macro for hello variable, the lifetime
// gets extended and "crosses over" the second borrow, violating the
// borrow checker rules. But, if there is no println macro for the hello
// variable, then the lifetime for each test.entry is just the line it
// happens on.
//
// Yeah. Looks like it has to do with Non-Lexical Lifetimes (NLLs), a
// feature since 2018. I've been thinking of lifetimes as lexical this
// whole time. And before 2018, that was correct. Now though, the compiler
// is "smarter."
//
// https://stackoverflow.com/questions/52909623/rust-multiple-mutable-borrowing
//
// https://stackoverflow.com/questions/50251487/what-are-non-lexical-lifetimes
//let
}
What are Rust's exact auto-dereferencing rules?
I'm learning/experimenting with Rust, and in all the elegance that I find in this language, there is one peculiarity that baffles me and seems totally out of place. Rust automatically dereferences… – Stack Overflow
I honestly like the cognitive load. Just not when I am at the workplace, having to deal with said load, with the office banter in the background and (not so) occasionally, being interrupted for other stuff.
And my cognitive load is not even about the memory allocations, most of the time.
Off topic:
I think, if one is seriously learning programming from a young age, it is better to start with C: make a project big enough to feel the difficulty, understand what the cognitive load is all about, and get used to it, hence increasing their mental capacity. Then learn the memory-safe language of their choice.
I never made a big enough project in C, but you can get to feel the load in C++ too.
The cognitive load of writing safe C, and the volume of extra code it requires, is the problem of C.
Oh no, i'm having a meltdown with all the cognitive load...
Build all the fancy tools you want. At the end of the day if you put a monkey at the wheel of a Ferrari you'll still have problems.
Nice that Rust is memory-safe – use it if you want – but why the insistence on selling Rust via "C is crap"? It doesn't earn you any points.
How about rustaceans fork the kernel and once it's fully Rust-only then try and get it to be used instead of the current one... win-win, eh?
No idea why you’re being downvoted. Just take a look at all the critical CVSS-scored vulnerabilities in the Linux kernel over the past decade. They’re overwhelmingly due to pitfalls of the C language – rarely architectural issues, but instead because some extra fluff wasn’t added to double-check the size of an int or a struct etc., resulting in memory corruption: use-after-frees, out-of-bounds reads, and so on.
These are pretty much wiped out entirely by Rust and caught at compile time (or at runtime with a panic).
The cognitive load of writing safe C, and the volume of extra code it requires, is the problem of C.
You can write safe C, if you know what you’re doing (but as shown by the volume of vulns, even the world’s best C programmers still make slip ups).
Rust forces safe(r) code without any of the cognitive load of C and without having to go out of your way to learn it and religiously implement it.
You've been blue pilled by null. Once over the hurdle, it's very eloquent.
Null is ugly. Tony Hoare's apology for inventing it should be reason enough to learn to do better.
I admit I'm biased towards C languages out of sheer personal preference and limited exposure to Rust, but I am wondering: are there any major technical barriers left to Rust replacing these languages in its current form?
I know there has been a lot of movement towards supporting Rust in the last 6 years since I've become aware of it, but I also get flashbacks from the early 00's, when I would hear about how Java was destined to replace C++, and the early 2010's, when Python was destined to replace everything, only to realize that the hype fundamentally misunderstood the use-case limitations of the various languages.
It's mainly a matter of stabilizing existing features in the language – there are Rust modules in the Linux kernel as of 6.1, but they have to be compiled with the nightly compiler.
Rust is a very slow-moving, get-it-right-the-first-time-esque project. Important and relatively fundamental stuff has been usable and 99% unchanging for years but hasn't been included in the mainline compiler.
Also, certain libraries would be fantastic to have integrated into the standard library, like tokio, anyhow, thiserror, crossbeam, rayon, and serde. If that ever happens, though, it'll be in like a decade.
Some next level deaf going on. That's not what was being discussed.
The defensiveness proves just how out of touch and unqualified to comment some people are.
What compromise? Half code should be in rust?
What does this even have to do with Rust developers?
The Rust language gives us the ability to have more compile-time checks, and why is that a bad thing? Do you like security issues in your OS because some dev forgot to handle pointers correctly?
That was what he was talking about at the conference, he literally asked for help about how things work, so he could write better APIs that they are more comfortable using.
But the response was we don't want to write rust.
If it were poorly designed and used exceptions, yes. The correct way to design smart constructors is to not actually use a constructor directly but instead use a static method that forces the caller to handle both cases (or explicitly ignore the failure case). The static method would have a return type that either indicates "success and here's the refined type" or "error and this is why."
In Rust terminology, that would be a `Result<T, Error>`.
For Go, it would be `(*RefinedType, error)` (where dereferencing the first value without checking it would be at your own peril).
C++ would look similar to Rust, but it doesn't come as part of the standard library, last I checked.
C doesn't have the language-level features to be able to do this. You can't make a refined type that's accessible as a type while also making it impossible to construct arbitrarily.
Dude, what are you on about? There is no Rust programmer who wants to teach fucking Rust to anyone who doesn't want to learn...
This has nothing to do with C vs Rust; this has to do with security and enabling more people to develop stuff for Linux.
These so-called kernel maintainers you see in the conference are only maintaining the parts that they use for their filesystem – they are maintaining the API, and they are paid by companies who have sold support for ext4, xfs or btrfs etc. Of course they don't want to make their jobs any harder by learning a new language.
And of course they obfuscate the API with random naming and undocumented usage, because they want to make it hard for anyone else trying to use the APIs.
If they don't want to be part of the improvement, then go do something else. Yes, Rust is better than C for this, because guess what – there are still CVEs being made, because it's impossible to catch everything with your eyes.
That's insightful, thank you. It wasn't hard to follow, I did have these exact same "adventures" but I guess I forgot about them after I figured out the ways to do things.
Personally these kinds of things are exciting for me, trying to understand the constraints etc, so maybe that's also why I don't remember struggling with learning Rust, since it wasn't painful for me 😅 If someone has to learn by being forced to and not out of their own will, it's probably a lot harder
the crew on the Ship of Theseus would like a word with you. Because if you strip out every subsystem and replace them with a different language, everyone would still call it Linux and it would still work as Linux.
Linux isn't "a bunch of C code" it's an API, an ABI, and a bunch of drivers bundled into a monorepo.
You're going to need to cite that.
I'm not familiar with C23 or many of the compiler-specific extensions, but in all the previous versions I worked with, there is no type visibility other than "fully exposed" or opaque and dangerous (void*).
You could try wrapping your Foo in

typedef struct {
    Foo validated;
} ValidFoo;
But nothing stops someone from being an idiot about it and constructing it by hand:
ValidFoo trustMeBro;
trustMeBro.validated = someFoo;
otherFunction(trustMeBro);
Or even just casting it.
Foo* someFoo;
otherFunction((ValidFoo*) someFoo);
That's not the point, though. The point is to use a nominal type that asserts an invariant and make it impossible to create an instance of said type which violates the invariant.
Both validation functions and refinement types put the onus on the caller to ensure they're not passing invalid data around, but only refinement types can guarantee it. Humans are fallible, and it's easy to accidentally forget to put a check_if_valid() function somewhere, or to assume that some function earlier in the call stack did it for you.
With smart constructors and refinement types, the developer literally can't pass an unvalidated type downstream by accident.
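For contrast with the C snippets above, here is a rough Rust sketch of the same wrapper, assuming a hypothetical Foo with one integer field and a made-up validation rule; the "trustMeBro" construction from the C example is rejected by the compiler outside the module:

```rust
mod valid_foo {
    pub struct Foo {
        pub value: i32,
    }

    pub struct ValidFoo {
        foo: Foo, // private field: only this module can build a ValidFoo
    }

    impl ValidFoo {
        pub fn new(foo: Foo) -> Result<ValidFoo, String> {
            if foo.value >= 0 {
                Ok(ValidFoo { foo })
            } else {
                Err(format!("invalid value: {}", foo.value))
            }
        }

        pub fn get(&self) -> &Foo {
            &self.foo
        }
    }
}

fn other_function(valid: &valid_foo::ValidFoo) {
    println!("validated value: {}", valid.get().value);
}

fn main() {
    let foo = valid_foo::Foo { value: 42 };
    // let trust_me_bro = valid_foo::ValidFoo { foo }; // error: field `foo` is private
    let validated = valid_foo::ValidFoo::new(foo).expect("validation failed");
    other_function(&validated);
}
```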
I hate how I can't do everything I imagine in Rust.
But researching why something isn't possible makes me realize that the code should never have been written the way I wrote it... so I can't blame Rust for disallowing it.
Azure Linux In, CentOS Out: LinkedIn Switches its Server Operating System
As part of a massive migration campaign, LinkedIn has successfully moved their operations to Microsoft's Azure Linux as of April 2024, ditching CentOS 7 in the process and taking advantage of a more modern compute platform.
As many of you might already know, back on June 30, 2024, CentOS 7 reached the end-of-life status, resulting in no new future updates for it, including fixes for critical security vulnerabilities.
...
The developers have gone with the high-performing XFS filesystem, which was made to work with Azure Linux to fit LinkedIn's use case. In their testing, they found that XFS was performing well for most of their applications, except Hadoop, which is used for their analytics workloads.
When they compared the issues that cropped up, XFS came out as a more stable and reliable choice than the other candidate, Ext4.
...
Additionally, LinkedIn's MaaS (Metal-as-a-Service) team has developed a new Azure Linux Image Customizer tool for automating image generation, which takes an existing generic Azure Linux image and modifies it for use in a given scenario - in this case, a tailored image for LinkedIn.
LinkedIn Engineering Blog: Navigating the transition: adopting Azure Linux as LinkedIn’s operating system
Azure Linux In, CentOS Out: LinkedIn Switches its Server Operating System
LinkedIn embraces Microsoft's Azure Linux for its hosting requirements.
Sourav Rudra (It's FOSS News)
Microsoft has had an impressively positive impact on Linux, including the kernel directly. It started ramping up about 15 years ago. They were the 5th highest contributor to the 3.x kernel.
I recall reading about them working on improving Linux's MS-related features, like FAT32 support, Samba, and things to make Linux run better in Hyper-V, which also helped performance overall.
Top Five Linux Contributor: Microsoft
While you shouldn't expect Windows to be open-sourced in your life-time, Microsoft—yes, Microsoft—is the fifth largest code contributor to Linux 3.0.
Steven Vaughan-Nichols (ZDNET)
Depends how you define evil? If you mean they're continuing to contribute to Linux in an effort to ensure it works well on their Azure platform, which they can charge money for using, then yes?
They’re making all the right decisions though, they know that there is great demand for Linux in the server market, and are happy to allow it to run on their cloud platform to ensure viable competition with the other big players (AWS & Google).
Then in turn, their contributions benefit the open source community as a whole.
The fact they've also made .NET Core cross-platform is another step in the right direction, as is making VSCode cross-platform too.
What would be nice is if they made desktop Office available. It’s one of the few subscription models that would probably work out well for them as many businesses would probably be happy to run Linux clients with native Office 365 support.
All valid points.
I believe in this instance, it’s mainly because they have figured out a way to profit off Linux and that is via their cloud hosting platform. As long as they’re making money, it’s probably fine.
So, somebody that was generating no revenue for Red Hat is not generating revenue for Red Hat? Sounds like a real catastrophe for them.
Also, if I had to guess, I would say that Azure Linux is based on CentOS Stream. So, whatever “halo” they had before is mostly still in place.
Most importantly though, LinkedIn is owned by Microsoft, as is Azure Linux. So I am not sure what kind of bellwether this is.
Are they mostly using Azure Linux? Or Azure? If Azure, no headline. If they are not using Azure, why not? That would be the headline here.
So, somebody that was generating no revenue for Red Hat is not generating revenue for Red Hat? Sounds like a real catastrophe for them.
I'm sure that's how they're thinking. It will cause their platform to slowly fade into irrelevance though.
CentOS 7 reached the end-of-life status, resulting in no new future updates for it, including fixes for critical security vulnerabilities.
Wow are people dumb. We specifically chose the non-IBM source for continuing updates, so that's two counterexamples to whatever this chucklenuts is pushing.
But - speaking as someone who's used RHL since 98 and rhel since 3, el8 is so sketchy and el9 is just not worth it. I'll do Rocky if I have to do anything, as at least it uses the better packaging - albeit requiring to mimic RH's use in the dumbest way to date with a version-switching that pretends the method they fucking invented for doing that better doesn't exist, the same as PCLinuxOS that does it better every day also doesn't exist.
I just hope PCLinuxOS can get a good oVirt/pve template packered before it loses its opportunity to show off how insanely great it is.
*Safer languages
Also both produce single binaries (as opposed to interpreted languages like php, python, js), which is so much easier to deal with for maintenance.
Also both produce single binaries (as opposed to interpreted languages like php, python, js), which is so much easier to deal with for maintenance.
This is the reason for me.
Whenever you have applications where implementations are plentiful, the only real differentiation you can make without creating a different user experience is the technology used to develop it. Why that matters to people comes down to a few things: mostly supporting technologies they like and want to see grow, and possibly being skilled in the underlying technologies so they can actually contribute back.
Certain technologies are also just hot garbage; I swear to God, if I have to install another Electron app for some messaging platform I will shit myself.
Actually... I do :/ Even though I have no idea about the programming realm, most of my self-hosted services (run via Docker) that are written in Go tend to be more "reliable", faster, and easier to use?
I'm always happy to self-host something written in Golang. But I do agree, it's the new-age "I use Arch BTW" meme for programming languages!
It's a sign of a modern approach to solving a problem. Languages like Go and Rust have, by definition and by principle, fewer memory and security issues (not talking about other problems), which are otherwise a huge problem in C, for example. So it's good to know the language being used.
The language itself can play a huge role for non-programmers as well. For example, Python can be a pain to use in some environments, or it can get slow (although for something like an RSS reader, speed would be fine). For people who build software from source, the language they have to compile can have an impact too. It gets even more interesting for people who might want to look at the code itself, audit it or edit it. For example, if a program is written in Python, I know that I can read and make changes to it. In C, I would not be that confident.
Overall, for most people it does not matter. That's true. For people like you, you can just ignore it. Not every title is for you. The title is for those who care about the language.
While I agree that there are not enough good local RSS readers, I also think that some kind of state syncing should exist. I understand why all these hosted server side RSS readers exist, but what I really want is some kind of standard way of doing local first RSS (and not just RSS, this could apply to everything we use «as a service», but let's keep this about RSS for now).
Imagine an RSS reader that keeps its state in a standard, well documented way, like having a folder where plaintext files keep a list of subscriptions, list of articles that are marked as read, tagged and starred articles etc., and you could just use syncthing or git to keep this folder in sync on all your devices, and you could use any RSS reader you want (be it on an android, windows, linux or anything else that follows the standard) and be able to seamlessly read your feeds and have the same state everywhere.
A man can dream I guess…
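As a rough illustration of that idea (nothing here is an existing standard: the folder name, file name and format are all hypothetical), a reader could keep its "read" state as a line-per-entry text file that syncthing or git can merge across devices:

```rust
use std::fs::{create_dir_all, File, OpenOptions};
use std::io::{BufRead, BufReader, Write};
use std::path::Path;

/// Append an article ID to `read.txt` in the (hypothetical) state folder,
/// skipping it if it's already recorded. One plain-text entry per line keeps
/// the file friendly to syncthing, git diffs, and any reader implementation.
fn mark_as_read(state_dir: &Path, article_id: &str) -> std::io::Result<()> {
    create_dir_all(state_dir)?;
    let read_file = state_dir.join("read.txt");

    if read_file.exists() {
        let already_read: Vec<String> = BufReader::new(File::open(&read_file)?)
            .lines()
            .collect::<Result<_, _>>()?;
        if already_read.iter().any(|line| line.as_str() == article_id) {
            return Ok(());
        }
    }

    let mut file = OpenOptions::new().create(true).append(true).open(&read_file)?;
    writeln!(file, "{}", article_id)?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    // The same folder could also hold subscriptions.txt, starred.txt, etc.
    mark_as_read(Path::new("rss-state"), "https://example.com/posts/42")
}
```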
I don't use multiple users or ldap, but miniflux supports many users. And based on this pull request it seems to have the necessary interface for ldap?
github.com/miniflux/v2/pull/57…
I enjoy and recommend miniflux for rss reading.
I have used it for a long time now, together with the Flux News Android app. I also use the save integration with Wallabag sometimes.
Implement support for authentication via Auth Proxy by pborzenkov · Pull Request #570 · miniflux/v2
Auth Proxy allows to authenticate a user using an HTTP header provided by an external authentication service. This provides a way to authenticate users in miniflux using authentication schemes not ...
GitHub
The truth about Pavel Durov's political arrest | World Politics Blog
The truth about Pavel Durov's political arrest
Pavel Durov's arrest in France is stirring up a storm of international controversy, with numerous voices denouncing the political nature of the case. Between accusations of censorship and attacks on…
World Politics Blog
UltimaDark: Firefox Web Extension that uses an aggressive technique to get dark mode everywhere
GitHub - ThomazPom/Moz-Ext-UltimaDark: Web Extension that uses an aggressive technique to get dark mode everywhere
Web Extension that uses an aggressive technique to get dark mode everywhere - ThomazPom/Moz-Ext-UltimaDark
GitHub
I seriously loved Dark Reader & have already used it for years, but sometimes it renders websites into dark mode in weird ways.
Interesting! I've recently struggled with dark reader memory leaks so I'll check this out
Edit: No scheduled/automatic follow system mode yet, but I'll keep watching for updates
Are you the developer? Because the fact that you're hawking this program everywhere on Lemmy (5 different communities) is making me think so.
Which, if that is the case, I can understand you wanting to get your work out there, but you need to be transparent.
solrize
in reply to pnutzh4x0r • • •
unknowing8343
in reply to solrize • • •Everything is better in Rust. Faster, safer... And also the developer experience is amazing with cargo.
The problem here is not Rust, it's the humans, it seems.
The dependencies are set manually, of course, and the dev was enforcing something too strict, it seems, and that is causing headaches.
But, as the Debian dude has learned... Rust programs will 99.999% work if they can be compiled.
atzanteol
in reply to unknowing8343 • • •It's the same in C. Most programs don't test against the exact versions of most C libraries either. I'm not sure why he's so amazed at this.
MrPoopyButthole
in reply to atzanteol • • •Debian is the most stable distro and downstream loads of distros rely on Debian being clean. This dev has to be strict if they want to maintain the status quo. Rather let the user DL this as a standalone package and still use it, instead of it being included by default with the possibility of breaking.
And another thing: version pinning should be normalized. I just can't bend my mind around code which has to be refactored every 12-24 months because dependencies were not version-pinned and a new thing broke an old thing. Unless this code is your baby that you stare at every day, constantly moving it forward, you should write code that lasts.
atzanteol
in reply to MrPoopyButthole • • •
unknowing8343
in reply to atzanteol • • •The thing is that, in C the API could be slightly different and you could get terrible crashes, for example because certain variables were freed at different times, etc.
In Rust that is literally impossible to happen unless you (very extremely rarely) need to do something unsafe, which is explicitly marked as such and will never surprise you with an unexpected crash.
Everything is so strongly typed that if it compiles... It will run without unexpected crashes. That's the difference with C code, and that's why Rust is said to be safe. Memory leaks, etc, are virtually impossible.
atzanteol
in reply to unknowing8343 • • •What? That's utter BS. Maybe the kernel devs aren't wrong about the "rust religion". Not every bug in C is a memory bug.
We're talking about a future version having regressions or different-than-expected behavior from what your application was built and tested on. I guarantee you that can happen with rust.
Dave.
in reply to atzanteol • • •If library devs do versioning correctly, and you pin to major versions like "1.*" instead of just the "anything goes" of "*", this should not happen.
Your unit tests should catch regressions, if you have enough unit tests. And of course you do, because we're all operating in the dream world of, "I am great and everyone else is shit".
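For reference, this is roughly what those choices look like in a Cargo.toml dependency table; the crate names and versions are made up, but the semantics follow Cargo's version-requirement syntax:

```toml
[dependencies]
# Default (caret) requirement: "1.2.3" means >=1.2.3 and <2.0.0,
# so any semver-compatible update may be picked at resolution time.
libfoo = "1.2.3"

# Wildcard within a major version: any 1.x release.
libbar = "1.*"

# Exact pin: only this release will ever be used.
libbaz = "=0.4.7"

# "Anything goes": any published version may be picked.
libqux = "*"
```

Cargo.lock then records the exact versions that were actually resolved for a given build, which is the part that sits most awkwardly with Debian's "we control all the dependencies" model discussed further down.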
atzanteol
in reply to Dave. • • •
gencha
in reply to unknowing8343 • • •
P03 Locke
in reply to unknowing8343 • • •That's a dumb statement. Every tool needs unit tests. All of them!
If grep compiled, but always returned nothing for every file and filter, then it's still not "working". But hey, it compiled!
LeFantome
in reply to P03 Locke • • •You are not wrong of course but it does not really refute what they are saying.
Many people have had the experience with Rust that, if it builds, the behaviour is probably correct. That does not prevent logic errors, but those are not the kinds of bugs that relate to dependencies.
These kinds of dependency shenanigans would be totally unsafe in C but Rust seems to handle them just fine.
someacnt_
in reply to unknowing8343 • • •
P03 Locke
in reply to solrize • • •
merthyr1831
in reply to solrize • • •This isn't Rust's fault lmao, this is distro maintainers trying to fuck with dependencies on software which has been proven to be a horrible way of managing software distribution for years.
When it's a problem with other languages, we don't pin the blame on them. However, because Linux and its developer community are being dragged by their heels to accept ANYTHING more modern than C99 and mailing lists, the typical suspects are using any opportunity to slow progress.
The same shit has happened/is happening with Wayland. The same shit will happen when the next new technology offers a way for Linux to improve itself. A few jackasses who haven't had to learn anything new for a lifetime are gonna always be upset that new Devs might nip at their heels.
solrize
in reply to merthyr1831 • • •
If whatever they are doing has been working for stuff written in languages other than Rust, we have to ask what makes Rust special. Rust is a low-level language, so its dependencies, if anything, should be simpler than most, with just a minimal shim between its runtime and the C world. Why does any production software have a version <= X constraint in any of its dependencies anyway? I can understand version >= X, but the other way implies that the APIs are unstable and you're going to get tons of copies of stuff around. I remember seeing that in Ruby at a time when Python was relatively free of it, but now Python has it too. Microsoft at least understood in the 1990s that you can't go around breaking stuff like that.
No, it's not all C99. I'm using Calibre (written in Python), Pandoc (written in Haskell), GCC (written in C, C++, and Ada), and who knows what else. All of these are complex applications with many dependencies. Eclipse (written in Java) is also in Debian, though I don't use it. Bcachefs, though, is apparently just special.
Joe Armstrong (inventor of Erlang) said of OOP, "you wanted a banana but what you got was a gorilla holding the banana, and the entire jungle". Rust begins to sound like that too. It might not be inherent in the language, but it looks like the way the community thinks.
I also still don't understand why the Bcachefs userspace stuff is written in Rust. I can understand the kernel part, but the point of a low-level language is manual resource management that a HLL handles for you automatically. Writing the userspace in a LLL seems like more pain for unclear gain. Are there intense performance or memory constraints or what?
Actually I see now that kernel part of Bcachefs is also considered unstable, so maybe the whole thing is not yet ready for production.
just_another_person
in reply to pnutzh4x0r • • •It's a huge tire fire at this point. This issue isn't Rust, per-se, but the dev is just being an asshole here. Submitting something that is generally problematic and yelling about how it will EVENTUALLY be good is a good way to get your shit tossed out.
He just lost a good amount of favor with the general community.
Laser
in reply to just_another_person • • •What are you hinting at regarding this specific news?
Ghoelian
in reply to Laser • • •This entire thread:
lore.kernel.org/lkml/sctzes5z3…
tl;dr: bcachefs dev sent in a massive pull request, linus thinks it's too big and touches too much other code for the current state of the release cycle, dev says his filesystem is the future and should just be merged
[GIT PULL] bcachefs fixes for 6.11-rc5 - Kent Overstreet
lore.kernel.org
Laser
in reply to Ghoelian • • •just_another_person
in reply to Laser • • •Wha?
This is exactly what the entire thread is about. Did you not read it?
Laser
in reply to just_another_person • • •The OP is about packaging issues with userspace utilities due to version pinning in Rust. It's an issue with Rust in general. Kent is not obligated to lock dependencies in any particular fashion. He could loosen the dependencies, but there is no obligation, and Debian has no obligation to package it.
This is different from the thread you linked, in which the bcachefs kernel code and the submission process are discussed, and on which there was a thread here as well in the last few days. But your criticism, as valid as it is, only applies there, not in a thread about a tooling/packaging issue.
just_another_person
in reply to Laser • • •
Laser
in reply to just_another_person • • •
The only hint at the other topic I see is this:
I guess this is about reddit.com/r/bcachefs/comments… and while I think the title is too broad, the actual message is
I don't consider Kent's reasoning (also further down the thread) a rant - it might not be the most diplomatic, but he's not the only one who has problems with Debian's processes. The xscreensaver developer is another one for similar reasons.
I think, in fairness, bcachefs and Debian currently aren't a good fit. bcachefs is also in the kernel so users can test it and report, but it wasn't meant to be stable; it's meant to not lose data unrecoverably.
Anyhow, while I think that he's also not the easiest person on the LKML, I don't consider him ranting there; and with the author's and my judgement differing in these points, I'm led to believe that we might also disagree on what qualifies as hostile.
Lastly, while I'm not a big fan of how Rust packaging works, it ensures that the program is built exactly the same on the developer's and other machines (for users and distributors); it is somewhat ironic to see Debian complain about it, since they do understand the importance of reproducibility.
There isn't much more to that issue than that sentence, while all the other paragraphs cover the packaging. It's tangential at best.
just_another_person
in reply to Laser • • •
P03 Locke
in reply to Laser • • •No, it's about Bcachefs specifically. It's literally in the title. Discussions around Rust version pinning are a useful side conversation, but that's not what the OP is about.
Laser
in reply to P03 Locke • • •
thingsiplay
in reply to pnutzh4x0r • • •like this
KaRunChiy likes this.
boredsquirrel
in reply to pnutzh4x0r • • •
bsergay
in reply to boredsquirrel • • •bcachefs-tools - Fedora Packages
packages.fedoraproject.org
boredsquirrel
in reply to bsergay • • •Strange, thanks that is nice! The development of bcachefs seems to not be that problematic on rolling or semi-rolling distros.
That guy might just have bad time managent, but filesystem based encryption is really cool.
bsergay
in reply to boredsquirrel • • •Can't agree more.
superkret
in reply to pnutzh4x0r • • •
Leaflet
in reply to superkret • • •On the kernel side, there are disagreements between long term C maintainers (who may not know Rust or may actively dislike it) and the new Rust community trying to build in Rust support. To make the Rust parts work, there needs to be good communication and cooperation between them to ensure that the Rust stuff doesn't break.
On the Debian side, they have strict policies that conflict with how Rust development works. Rust has a package manager called Cargo, which pulls dependencies for Rust projects from a central registry (crates.io). This is different from C and C++, where there really isn't a centralized build system or dependency host; you install a lot of dependencies for those languages from your distro's repos. So if your Rust app is built against up-to-date libraries from the registry, it's going to be difficult to package it in Debian, which ships stable, older libraries, and whose policies don't like the idea of pulling outside dependencies via Cargo.
P03 Locke
in reply to Leaflet • • •As they should. You don't just auto-update every package to bleeding edge in a stable OS, and security goes out the window when you're trusting a third-party's third-party to monitor for dependency chain attacks (which they aren't). This is how we get Crowdstrike global outages and Node.JS bitcoin miner injections.
If some Rust tool is a critical part of the toolchain, they better be testing this shit against a wide array of dependency versions, and plan for a much older baseline. If not, then they don't get to play ball with the big Linux distros.
Debian is 100% in the right here, and I hope they continue hammering their standards into people.
Wooki
in reply to Leaflet • • •Big, old man vitriol was a sad show of ignorance of Rust.
m.youtube.com/watch?t=1529&v=W…
- YouTube
m.youtube.com
qqq
in reply to superkret • • •
This doesn't seem to be a Rust problem, but a modern development trend appearing in a Rust tool shipped with Cargo. The issue appears to be the way things are versioned and (reading between the lines, maybe?) vendoring and/or lockfiles. Lockfiles exist in a lot of modern languages and package managers: Go has go.sum, Rust has Cargo which has Cargo.lock, Python has pip which gives a few different ways to pin versions, JavaScript has npm and yarn with lock files. I'm sure there are tons of others. I'm actually surprised this doesn't happen all the time with newer projects. Maybe it does, actually, and this instance just gains traction because people get to say "look, Rust bad, Debian doesn't like it".
This seems like a big issue if you want your code to be packaged by Debian, and it doesn't seem easy to resolve if you also want to use the modern packaging tools. I'm not actually sure how they resolve this. There are real benefits to pinning versions, but there are also real benefits to Debian's model (of controlling all the dependencies themselves; to some extent Debian is a lockfile implemented at the OS level). Seems like a tough problem, and seems like it'll end up with a lot of newer tools just not being available in Debian (by that I mean just not packaged by Debian; they'll likely all run fine on Debian).
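To make the lockfile part concrete, a single Cargo.lock entry looks roughly like this (the crate name is made up and the checksum is elided):

```toml
[[package]]
name = "libfoo"
version = "1.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
# checksum = "<sha256 of the downloaded crate, elided here>"
```

The manifest expresses a requirement ("any compatible 1.x"); the lockfile freezes one exact, checksummed answer, which is precisely the kind of decision Debian's own archive wants to own instead.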
CasualTee
in reply to pnutzh4x0r • • •
I find it funny that, since this is Rust, this is now an issue.
I have not delved into packaging in a long while, but I remember that this was already the case for C programs. You need to link against libfoo? It better work with the one the distribution ships with. What do you mean you have not tested all distributions? You better have some tests to catch those elusive ABI/API breakages. And then, you have to rely on user-reported errors to figure out that there is an issue.
On one hand, the package maintainers tend to take full ownership and will investigate issues that look like integration issues themselves. On the other hand, your program is in a buggy or non-working state until that's sorted.
And the usual solutions are frowned upon. Vendoring the dependencies or static linking? Are you crazy? You're not the one paying for bandwidth and storage. Which is a valid concern, but that just means we reached a stalemate.
Which is now being broken by:
- slower-moving C/C++ projects (though the newer C++ standards did make some waves a few years back), which means that even Debian is likely to have a "recent" enough version of your dependencies
- Flatpak and the like, which vendor everything and the kitchen sink
- newer languages that statically link by default (and some distributions being OK with it)
In other words, we never figured out a proper solution for C projects that link against a different minor version than the one the developer tested.
Well, /rant I guess. The point I'm raising does not seem to be the only one, and maybe far from the main one, for which bcachefs-tools is now orphaned. But I've seen very dubious arguments to try and push back against Rust adoption. I feel like people have forgotten where we came from. And while there is no reason to go back per se, any new language that integrates this deep into the system will face similar challenges.
2xsaiko
in reply to CasualTee • • •
merthyr1831
in reply to CasualTee • • •Because it's Rust it's now "rust bad" but Debian and other distros have been fucky with dependency management for YEARS. That's why we're moving to flatpak and other containerised apps!
Once again, the wider Linux dev community is trying to openly kneecap the first attempt in decades to bring Linux and its ecosystem up to a vaguely modern standard.
markstos
in reply to pnutzh4x0r • • •Maybe start over with whole kernel OS in Rust?
drewdevault.com/2024/08/30/202…
Rust for Linux revisited
drewdevault.com
merthyr1831
in reply to pnutzh4x0r • • •