Last Week in Fediverse – ep 82
1 million new accounts on Bluesky as Brazil bans X, premium feeds with Sub.club, and much much more.
Brazil bans X, and a signup wave to Bluesky
The Brazilian supreme court has banned the use of X in an ongoing legal fight with Elon Musk. The ban follows a long trajectory of legal conflict between the Brazilian government and Musk’s X. In April 2024, the Brazilian court ordered X to block certain accounts that were allegedly related to the 2023 coup attempt, which Musk refused to do. In that same period, President Luiz Inácio Lula da Silva opened an account on Bluesky, and an inflow of a Brazilian community into Bluesky was already underway. Now the legal fight has escalated further over X’s refusal to appoint a legal representative in the country, and Musk’s continuing refusal to comply with Brazil’s laws and regulations has resulted in the supreme court banning the use of X in the country altogether.
The ban on X has caused a massive signup wave to Bluesky, with over 1 million new accounts created in just three days, the large majority of them from Brazil. Usage statistics shot up even more than that, suggesting that a lot of people with existing accounts are logging back in as well.
The new inflow of people to Bluesky is having some significant effects on the network, as well as on the state of decentralised social networks more broadly:
- President Lula is putting real focus on Bluesky. In one of his final posts on X, Lula listed, in non-alphabetical order, all the other platforms he is active on, and placed Bluesky at the top of the list. Posts that Lula publishes on both Bluesky (134k followers) and Threads (2.4m followers) get more than 5 times as many likes on Bluesky. Today, Lula explicitly asked people on Bluesky what they thought about the platform, in a post that got over 30k likes and counting. It is hard to imagine that the Brazilian government is not paying attention to all this, and watching which platform(s) the Brazilian community moves towards in the wake of the ban on X.
- Brazilians are a very active community on the internet (see Orkut), and bring their own unique culture to Bluesky. The current decentralised social networks are heavily focused on US politics, judging by the top posts on both Mastodon and Bluesky, and beyond shitposts and memes there is surprisingly little space for mainstream pop culture and sports. The Brazilian community does seem to bring a large amount of pop culture and sports content to Bluesky, significantly diversifying the topics of discussion and, in turn, creating more space for other people who are interested in those subjects in the future. The activity of Brazilians on microblogging can also be seen in the like counts on popular Bluesky posts: before this week, the most popular posts on any given day usually got around 3k likes; this has jumped to 30k to 50k likes. Brazilians are so chatty, in fact, that currently 81% of the posts on the network are in Portuguese, and the share of accounts that post on a given day has gone up from a third to over 50%.
- The Bluesky engineers have built a very robust infrastructure system, and the platform has largely cruised along without issues, even when faced with a 15x increase in traffic, all without having to add any new servers. For third-party developers, such as the Skyfeed developer, this increase in traffic did come with downtime and higher hardware requirements, however. It shows the complications of engineering an open system: while the Bluesky team itself was prepared with its core infrastructure, the third-party infrastructure on which a large number of custom feeds rely was significantly less prepared for the massive increase in traffic.
In contrast, the ban on X in Brazil has made little impact on Mastodon, with 3.5k new signups from Brazil on Mastodon.social. I’d estimate that this week has seen roughly 10k new accounts above the average, going from 15k new accounts the previous week to 25k this week. That places Mastodon two orders of magnitude behind Bluesky in signups from Brazil. There are a variety of reasons for this, which deserve their own analysis; this newsletter is long enough as it is. One thing I do want to point out is that within the fediverse community there are two subcommunities that each have their own goals and ideas about the fediverse and growth. Some people responded to the news that most Brazilians went to Bluesky in a way that indicated they appreciate the small, quiet, and cozy community the fediverse currently provides, and distrust the growth-at-all-costs model for social networks. For other people, however, the goal of the fediverse is to build a global network that everyone is a part of and everyone uses (‘Big Fedi’), a view of the fediverse that is also represented in the latest episode of the Waveform podcast (see news below). And if the goal is to build ActivityPub into the default protocol for the social web, it is worth paying attention to what is happening right now in the Brazilian ATmosphere.
The News
Sub.club is a new way to monetise feeds on the fediverse, with the goal of bringing the creator economy to the fediverse. It gives people the ability to create premium feeds that others can only access via a subscription. People can follow such a feed from any Mastodon account (work on other fediverse platforms is ongoing). Sub.club handles the payment processing and infrastructure, for which it charges 6% of the subscription fee (compared to the 8-12% Patreon charges). Sub.club also makes it possible for other apps to integrate; both IceCubes and Mammoth have this option. Bart Decrem, one of the people behind Sub.club, is also the co-founder of the Mastodon app Mammoth. Sub.club explicitly positions itself as a way for server admins to fund their servers as well. Most server admins rely on donations from their users, often via services like Patreon, Ko-fi, Open Collective, or other third-party options. By integrating payments directly into the fediverse, Sub.club hopes that the barrier for donations will be lower, and more server admins can become financially sustainable.
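As a rough worked example (my numbers, not Sub.club’s): on a $5/month premium feed, Sub.club’s 6% cut comes to $0.30 per subscriber per month, versus $0.40-$0.60 at Patreon’s 8-12%, before any payment-processor fees.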
Newsmast has built a new version of groups software for the fediverse, and the first group is dedicated to the Harris campaign. There are a few types of groups available that integrate with Mastodon, such as those from Friendica or a.gup.pe. These groups function virtually identically to hashtags, boosting posts in which the group account is tagged to everyone who follows the group account. As there is no moderation in these types of group accounts, anyone can hijack the group account, and a group account dedicated to a political campaign is especially vulnerable to this. On Mastodon, a volunteer Harris campaign group used a Friendica group for campaign organising, but the limited moderation tools that are available (blocking a user from following the group) do not work, which allowed blocked users to still get their posts boosted by the group account. Newsmast’s version of groups provides (working) moderation tools, and only boosts top-level posts and not replies, to cut down on the noise. For now, the new group is only available to the Harris campaign group for testing, but it will come later to Mastodon servers that run the upcoming Patchwork plugin.
Bluesky added quite a number of new anti-toxicity features in its most recent app update. Bluesky has added quote posting controls, allowing people to set on a per-post basis whether others can quote the post. There is also the option to remove quotes after the fact: if you’ve allowed quote posts on a post you’ve made, but someone made a quote post that you do not feel comfortable with, you can detach your post from it. Another addition is the ability to hide replies on your posts. Bluesky already hides replies under a ‘show more’ button if the reply is labeled by a labeler you subscribe to; you now have the option to do this for any reply to your posts, and the hidden reply will be hidden for everyone. Finally, Bluesky has changed how replies are shown in the Following feed, which is an active subject of discussion. I appreciate the comments made by Bluesky engineer Dan Abramov here, who notes there are two different ways of using Bluesky, which each prioritise replies in conflicting ways. As new communities grow on Bluesky, prioritising their (conflicting) needs becomes more difficult, and I’m curious to see how this plays out further.
The WVFRM (Waveform) podcast of popular tech YouTuber MKBHD has a special show about the fediverse, ‘Protocol Wars – The Fediverse Explained!’. It is part discussion podcast, part explainer, and part interview with many people within the community. They talk with Mastodon’s Eugen Rochko, Bluesky’s Jay Graber, Threads’s Adam Mosseri, and quite a few more people. It is worth noting for a variety of reasons. The show is quite a good introduction that talks to many of the most relevant names within the community, and MKBHD is one of the biggest names in the tech creator scene, so many people are paying attention to what he and his team are talking about. Furthermore, I found the framing of ‘protocol wars’ interesting, as the popularity of Bluesky in Brazil as an X replacement indicates that there is indeed a race between platforms to be built on top of the new dominant protocol.
Darnell Clayton has a very interesting blog post in which he discovers a discrepancy in follower counts for Threads accounts that have turned on fediverse sharing. Clayton notes that the follower count shown in the Threads app is lower than the one shown in a fediverse client, for both Mastodon and Flipboard. He speculates that the difference is the number of fediverse accounts that follow a Threads account. It should be noted that this is speculation and has not been confirmed, but if it is true, it would give us a helpful indication of how many fediverse accounts are using the connection with Threads. While we’re talking about Threads accounts: Mastodon CEO Eugen Rochko confirmed that the mastodon.social server has made a connection with 15,269 Threads accounts that have turned on fediverse sharing.
The Links
- Threads has figured out how to maximise publicity by making minimal incremental updates to their ActivityPub implementation, edition 500.
- A Developer’s Guide to ActivityPub and the Fediverse – The New Stack interviews Evan Prodromou about his new book about ActivityPub.
- FedIAM is a research project where people can use fediverse and Indieweb protocols for logging in.
- You can now test Forgejo’s federation implementation.
- This week’s fediverse software updates.
- Ghost’s latest update on their work on implementing ActivityPub: “With this milestone, Ghost is for the first time exceeding the functionality of a basic RSS reader. This is 2-way interaction. You publish, and your readers can respond.”
- Dhaaga is a multiplatform fediverse client that adds unique client-side functionalities.
- Lotide, an experimental link-aggregator fediverse platform, ceases development.
- A custom QR code generator, with some pretty examples of custom QR codes for your fediverse profile.
- Custom decentralised badges on atproto with badges.blue, a new work in progress by the creator of the atproto event planner Smoke Signal.
- Smoke Signal will be presenting at the next edition of the (third-party organised) ATproto Tech Talk.
That’s all for this week, thanks for reading.
Rust in Linux lead retires rather than deal with more “nontechnical nonsense”
How long can the C languages maintain their primacy in the kernel? (Ars Technica)
"Old" doesn't have to mean biologically old. In this case, it means people who have been doing it for a long time—long enough that they're set in their ways.
So while I can understand the confusion, it doesn't apply here.
Lol, the out-of-memory error was a joke. A reference to how two people both trying to do the same thing will fill the heap with unnecessary work.
I tried to make a code joke but it failed.
As for what they're unwilling to release? Control. Ownership of whatever bit of the kernel they control.
Kernel maintainer Ted Ts'o emphatically interjects: "Here's the thing: you're not going to force all of us to learn Rust." Lina tried to push small fixes that would make the C code "more robust and the lifetime requirements sensible," but was blocked by the maintainer.
DeVault writes. "Every subsystem is a private fiefdom, subject to the whims of each one of Linux’s 1,700+ maintainers, almost all of whom have a dog in this race. It’s herding cats: introducing Rust effectively is one part coding work and ninety-nine parts political work – and it’s a lot of coding work."
You have to continually practice fluidity and actively learn things lest you solidify and lose that skill like any other.
I’m all for keeping one’s cognitive skills. However it is a fact that this decline happens, and that there is a phase of life where one has wisdom without necessarily having the same raw intelligence they had before. The wisdom is encoded in crystallized intelligence.
By wisdom here I mean “The tendency to make decisions that turn out well”.
My father was an equipment operator well into his 70s. After he retired they kept bringing him back to train the younger guys, and to get things done they couldn’t get done.
That was possible because those machines don’t change too much as time marches on. Because they use a stable platform, his organization was able to do better work by relying on his deep expertise. He could train those younger guys because it was the same platform he’d always used. Same dirt, same physics, mostly the same machines, same techniques, same pitfalls, etc.
His fluid intelligence is almost zero. The man’s practically an ASIC at this point, yet he’s fascinating to talk to and competent in the world. Fluid intelligence is not the only way to get things done.
We of course play plenty of video games together to keep him sharp. We also eat mushrooms, paper when necessary, and he works out a lot. We do all we can, believe me.
PHP is still in use and WordPress is still somehow a behemoth. But the fact is that PHP has fallen out of favor, isn't used by new projects, and there's less demand for people with that skillset.
Also while I’m driving, my Uber app locks up. Siri talks to me in a halting, broken voice, and responds with “something went wrong”. Google Maps shows a brief flash of my home before flipping to my current location. Then back to home again, then back to my current location. Spotify doesn’t remember what song I was listening to. Amazon Prime Video can’t remember what episode I was last watching.
Enshittification is everywhere. Our tech is buggy as fuck and solved problems in project management and devops are recurring. It’s not just about focusing on advertisers’ needs over customers. It’s also about wanting to kick out the greybeards as part of our great cultural revolution. It’s about driving trains into tunnels without adequate ventilation because fuck the previous generation thinking they know better than me.
It is the case that new technologies are introduced all the time, but that doesn’t necessarily make it the right call.
This example doesn't work as well with C/++ since that's older than most people here (though the language has also gone through iterations) and likely won't be going away any time soon. But still, in most cases you probably don't want to use that language for general work.
Why not? Because you won’t be able to hire younger devs? That is a function of this culture of pushing for change in everything. Younger people don’t learn C++ because it’s a little harder to read and because culturally we don’t respect established things. I’m sure there’s a word I don’t know here, but we generally have a culture of hating the past.
The good news is though, that it's relatively easy to transfer core skills between most languages.
I agree. Design patterns, work patterns, these transcend languages. And they’re 99% of the success or failure of a project.
And yet here we are emphasizing how C++ and Rust can’t realistically coexist in a serious project, because there’s some mismatch in their capabilities. I point to the current conundrum as the counter to this idea of transferability. The devil’s in the details and if the wisdom transfers between languages so well then we don’t need new languages.
Fundamentally, the question is: “What are these new things that need to be done by code, that weren’t being done by code 30 years ago, such that they necessitate new languages?”
It’s cool to be able to tell your college buddies you’re building a new programming language.
In fact, it’s great that people are making new languages as a way of keeping language design wisdom alive. It’s great that CS kids build logic circuits from scratch for the same reason.
But then again, Netflix can’t remember what episode I was watching, when I’m almost certain they had that ability a few years ago.
Unfortunately there are a lot of problems created by using C in the kernel and having all of this done manually. Many kernel vulnerabilities, including several severe ones, have been due to issues with memory management. Even the White House has spoken on these issues with C. Rust has been proven to be comparable to C in terms of performance, sometimes even faster. So it doesn't make a great deal of sense to keep using C for new projects.
That all being said, Rust has had its own issues. There was a recent vulnerability in older versions of Cargo, the Rust package manager, for instance. It's a somewhat new language, so teething issues are to be expected, and it might be too soon to use Rust for mission-critical systems. It's also a harder language to learn and understand, which makes adopting it more difficult, especially for very experienced C developers like those who work on the Linux kernel. It might be better to wait and see what other languages like Zig and Carbon manage to do, but those are even newer and will take more time to actually be production-ready.
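As a minimal sketch of the memory-safety difference being claimed here (my own toy example, not kernel code): the equivalent C pattern of reading a buffer after giving it away typically compiles without complaint and misbehaves at runtime, while Rust's ownership rules reject it at compile time.

```rust
fn main() {
    let buffer = String::from("packet data");

    // Ownership of the heap allocation moves to `moved`;
    // `buffer` is no longer valid after this line.
    let moved = buffer;

    // Uncommenting the next line is a compile error, not a runtime bug:
    // error[E0382]: borrow of moved value: `buffer`
    // println!("{}", buffer);

    // Only the current owner may read the data.
    println!("{}", moved);
}
```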
Pointers are not guaranteed to be safe
So I guess they are forbidden in @safe mode?
but it's being replaced by something else instead
Do you know what the replacement is? I tried looking up DIP1000 but it only says "superceded" without mentioning by what.
This makes me wonder how ready D is for someone who wants to extensively use @safe, though.
One detail about Rust in the kernel that often gets overlooked: the Linux kernel supports arches to which Rust has never been ported. Most of these are marginal (hppa, alpha, m68k—itanium was also on this list), but there are people out there who still use them and may be concerned about their future. As long as Rust remains in device drivers only this isn't a major issue, but if it penetrates further into the kernel, these arches will have to be desupported.
(Gentoo has a special profile "feature" called "wd40" for these arches, which is how I was aware of their lack of Rust support. It's interesting to look at the number and types of packages it masks. Lotta python there, and it looks like gnome is effectively a no-go.)
That to me sounds like exactly the reason why developers like the above have left. They are having to take on the burden of gently letting down other devs who are angry over a simple misunderstanding. A misunderstanding that wouldn't have happened if they had been listening or bothered to ask first before jumping to conclusions. Imagine someone heckles you on stage and you have to respond kindly. I certainly wouldn't. If someone had listened to my talk, misinterpreted it, then heckled me over it you can bet I would be angry and would respond in kind. To then see this misinformation being spread again would drive me nuts. I can see why they left.
The bottom line for me is that Rust devs who work on this stuff for free shouldn't be getting hounded by C devs just for asking for proper documentation that frankly they should have provided in the first place. I say this as someone who is skeptical of Rust for various reasons.
This is true, but the differences go even further than that. Redox is intentionally non-posix-compliant. This means that userspace programs written for posix operating systems may or may not need patching to even compile.
Part of the philosophy of Redox is to follow the beaten path mostly, but not be afraid of exploring better ideas when appropriate.
They are having to take on the burden of gently letting down other devs who are angry over a simple misunderstanding.
I feel like, if anyone were happily willing to do that in their free time, they would have become a politician or an HR person, not a developer.
I'm pretty n00b as a dev, but if I were to see someone misinterpreting my explanation, the most I would do is rephrase it in a more understandable manner.
Definitely not going to resort to using "people management tactics", especially not in an open source, free work setting, where the expectation is that the other person wants the good of the project as much as I do (as compared to a corporate setting, where if they are getting money to sit and do nothing, they will prefer that).
Facts are more important than feelings, especially when written text is the medium, where the reader can, at any time, go back and re-read to make sure they are on the same page, which a responsible, non-sleepy, non-drunk person would do in such a case.
On this note, I went and re-read the above comment and I realise the "But that’s the thing where you are wrong." sentence is kinda useless. If the previous commenter read the rest, they would realise that's where they were wrong. Mental note: don't use useless stuff like this as the first sentence in a reply, because I probably have that habit.
Yes, I know I joined both circumstances, this comment thread and the condition of the Rust Linux dev. It seemed relevant to me.
Lots of good insight there. While I disagree with much of it, I get it.
I’m all for keeping one’s cognitive skills. However it is a fact that this decline happens, and that there is a phase of life where one has wisdom without necessarily having the same raw intelligence they had before. The wisdom is encoded in crystallized intelligence.
Yeah, realizing you have that wisdom is eye-opening, and it's actually pretty powerful. I can hunt down bugs by smell now with surprising accuracy. But I'm not convinced it's mutually exclusive with fluidity. I guess I'm just hoping my brain doesn't petrify, and I'm battling against it.
That was possible because those machines don’t change too much as time marches on. Because they use a stable platform, his organization was able to do better work by relying on his deep expertise. He could train those younger guys because it was the same platform he’d always used. Same dirt, same physics, mostly the same machines, same techniques, same pitfalls, etc.
It's a poor analogy for software though. Software is an ongoing conversation. Not a device you build and forget about. User demands change, hardware changes, bugs are found, and performance is improved.
I'm honestly curious what the oldest line of code in the Linux kernel is now. I would be pretty shocked to see that anything survived 30 years. And I don't think that's because of enshittification.
This example doesn’t work as well with C/++ since that’s older than most people here (though the language has also gone through iterations) and likely won’t be going away any time soon. But still, in most cases you probably don’t want to use that language for general work.
Why not? Because you won’t be able to hire younger devs? That is a function of this culture of pushing for change in everything.
No, because C/++ isn't the right tool for every job. If I want to write up something quick and dirty to download a sequence of files, I'm not going to write that in C. It's worth learning other things.
I have to admit though that the conservative approach is more suited to things like a kernel, aerospace applications, or other things with lives riding on it. But also software that doesn't change becomes useless and irrelevant very quickly. For instance, running Windows XP is a bad call in just about any case.
But again I'm also not trying to say all software should be trend following. Just that devs should embrace learning and experiencing new things.
I would consider 4 years to be
- too long for a job you don't like (it took you 4 years to realize you don't like it?)
- too short for a job you are committed to (you give up after only 4 years?)
After almost 4 years, I find myself lacking the energy and enthusiasm I once had
"Once had"? Energy and enthusiasm he had only 3 years ago?! When I started reading the article I was expecting 15 years, 20, 30+.
I'm not against Rust. I'd like to see something less dangerous with memory than C, but I don't think it's time yet for the kernel to leave C.
It's pretty clean and stable, it's working well at the moment, and the C language (or variants of it) is still actively used everywhere. I think the kernel universally going Rust will be a long road, with everything under the sun going there first before the kernel is ported in earnest.
Straw Man Fallacy: A straw man fallacy occurs when someone misrepresents an opponent's argument to make it easier to attack or refute. Instead of addressing the actual issue, the person creates a distorted version of the argument that is easier to discredit.
This is what you have done in every single reply you made when I have made it quite clear that this is about the migration being an urgent security issue that the cyber security community at large has been calling attention to.
You avoid all the core points I make and distort them into trivial things that you can easily argue, like the fact that you "Don't code C much and use Rust occasionally". It's irrelevant to the actual arguments, and you use it to dismiss the real core issues, AKA a straw man fallacy.
You have failed to argue in good faith and are actually a part of the problem. Good job!
Now please explain to me how C works.
That's not what they're asking. It's not about how C works, it's about how specific APIs written in C work, which is hard to figure out on your own for anyone who is not familiar with that specific code. You'll have to explain that to any developer coming new into the project expected to work with those APIs, no matter their experience with C.
Failing to respond in detail to all of the claims you believe to be your most important ones is not what is usually meant by a "straw man."
While I don't mind Rust (although I'm not too good at it yet), I really do find the crowd of overzealous enthusiasts, claiming in the most hyperbolic terms that its universal use is an urgent security necessity, quite off-putting sometimes.
How to Win Friends and Influence People by Dale Carnegie should be required reading for everyone. It's full of things that are so obvious in hindsight but go against our natural instincts so we blunder through attempts to persuade not realizing that we might be increasing resistance rather than decreasing it.
Like the whole, "you might be right but you're still an asshole" thing. Being correct just isn't enough. In some cases you get crucified and then after some time has passed, the point you were trying to convince others of becomes the popular accepted fact. And they might even still hate you after coming around on the point you were trying to make.
That book won't turn you into a persuasive guru, but it will help avoid many of the pitfalls that make debates turn ugly or individuals stubborn.
Or, on the flip side, you can use the inverse of the lessons to become a more effective troll and learn how to act like you're arguing one thing while really trying to rile people up or convince them of the opposite. I say this not so much to suggest it but because knowing about this can make you less susceptible to it (and it's already a part of the Russian troll farm MO).
then complain about them later.
I don't see where they're complaining? They don't seem to be asking anything of the C devs other than help with API definitions.
It also seems to require a GC though...
newxml is GC-only, for simplicity's sake.
Not even that; it will suffocate on its own, with the capitalists keeping their changes from each other. Like a bucket of crabs: if one crab is about to get free, the others grab onto it and pull it down.
Kernels really benefit from being "forced" to share code changes under the GPL license; they are too tied to hardware, and hardware needs a lot of capital when iterating.
I guess the question is, what happens to the kernel when all the people who learned on C are gone? The majority of even the brightest new devs aren't going to cut their teeth on C, and will feel the same resistance to learning a new language when they think that there are diminishing returns to be had compared to what's new and modern and, most importantly, familiar.
I honestly get the hostility; the fast pace of technology has left a lot of older devs being seen as undesirable because they don't know the new stuff, even if their fundamental understanding of low-level languages could be a huge asset. Their knowledge of C is vast and valuable, and they're working on a project that thrives because of it. To have new people come to the project and say "Yeah, we could do this without having to worry about all that stuff" feels like throwing away a lot of the skill they've built. I'm not sure what the solution is; I really don't think there are enough new C developers in the world to keep the project going strong into the future though. Maybe a fork is just the way to go; time will tell which is more sustainable.
Yeah, the Rust guys' proposition is roughly this:
Hey, you guys with 20-30 years of experience doing a single thing very well: let's nullify most of that skillset and replace it with a thing we're good at. Don't worry, we will teach you.
They're not technically wrong about Rust being a better choice for a kernel, of course. They're just incredibly misinformed about the social hurdles they need to climb over for it to happen.
Just go ahead and write a very basic working kernel in rust.
I don't get this stance, really. If I want to write a driver in Rust, should I start by creating a completely new kernel and see if it gains momentum? The idea of allowing Rust in kernel drivers is to attract new blood to the project, not to intentionally divert it to a dummy project.
Rust is sufficiently different that you cannot expect C developers to learn rust to the level they have mastered C
The thing is, no one asked anything from the C developers other than documentation. They just want to know how to correctly make the Rust bindings.
Note that Rust is not replacing C code in the kernel; it is just an added option for writing drivers.
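For readers wondering what "making the Rust bindings" even means, here is a hedged, self-contained sketch (all names invented, and the C function is simulated in Rust so the example runs; in the real kernel the definition would live on the C side and be declared in an extern "C" block). The point is that the safe wrapper's signature has to encode facts that only the C API's documentation can supply.

```rust
use std::os::raw::c_int;

// Stand-in for a hypothetical C function so this sketch compiles and runs
// on its own. In a real binding, this body would live in C and the Rust
// side would only declare it inside an `extern "C" { ... }` block.
extern "C" fn device_reset(id: c_int) -> c_int {
    if id >= 0 { 0 } else { -22 } // errno-style: 0 on success, negative on failure
}

/// Safe wrapper around the raw call. The `Result` in the signature records
/// "this can fail, and this is how", which is exactly the semantic
/// information the binding authors need the C maintainers to document.
fn reset_device(id: i32) -> Result<(), i32> {
    let rc = device_reset(id);
    if rc < 0 { Err(rc) } else { Ok(()) }
}

fn main() {
    assert_eq!(reset_device(3), Ok(()));
    assert_eq!(reset_device(-1), Err(-22));
    println!("binding sketch behaved as documented");
}
```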
C has been around for a very long time. I don't think wanting to replace a 1970s language, that was old when current gray beards were young is a bad thing. People have had more than enough time, and still have a good decade or two to make their careers writing and maintaining C code. Sometimes things have to change, old people be damned. It's diatribes like this that remind me the human race advances one body at a time as those holding us back die out.
Edit: also we aren't talking about people in their 70s and 80s here. Most of these "greybeards" are in their 40s and 50s at most. Linux itself is from the 1990s and is therefore more modern than C.
Netflix is using FreeBSD for servers. You can't blame everything they do wrong as being a problem with the new hires. They are using an OS older than Linux that changes more slowly than Linux, simply because it performs the best for their specific application. Rate of change isn't the issue here.
In fact, that's 90% of what this comment is: blaming new people and new techniques for problems when you aren't a part of that organisation and don't actually know what's happening.
Working with computers is not the same as working with construction equipment. Some degree of fluid intelligence is needed in this field, no matter how experienced you might be, just like how a surgeon needs steady hands. The people you call greybeards aren't nearly as old as your father; we are talking about people who are in their 40s and 50s, and they don't have that level of cognitive decline yet. Likewise, some things like ext4 aren't likely to be ported to Rust now or ever; they can keep maintaining them as they are now for the foreseeable future. Plus, I don't want people to have to keep working into their 70s and 80s. At some point it becomes elder abuse. Let people retire, man.
C has existed for a long time now. We've been trying to replace it for ages, for most of its lifespan even. C++ actually was one of the new options at one point. I get that it seemed immovable only a decade ago, and I think that has lulled people into a false sense of security. In truth it was inevitable that it would have to be replaced one day; it's already well outlived the life expectancy of a programming language. Just think about Ruby: created long after C, yet already mostly irrelevant. You talk about the maximum rate of tool change, but C is one of the oldest tools we have; keeping it around would be almost zero rate of tool change over decades. If you can't see that C is very slowly dying, then you haven't seen the writing on the wall for the past several years. It's on you at that point.
We should look back with pride at everything that has been accomplished with C, and just how long it's been relevant. We can do this while still acknowledging it needs to be phased out gradually.
No one is asking for change that rapid either. Linux started adopting Rust four years ago now, and it's probably still going to have C code inside it for at least a decade from now. This isn't some quick change; it's a gradual process. People have plenty of time to adapt, and those who are too old to do so will be around retirement age, if not already dead, by the time C is fully phased out.
We of course play plenty of video games together to keep him sharp. We also eat mushrooms, paper when necessary, and he works out a lot. We do all we can, believe me.
Honestly you take more care of yourself and your father than I do. I am only in my 20s and suck at video games. If I took mushies or LSD I would probably lose my mind, assuming it's all still there in the first place. I suspect there is a good reason why people like me only have a life expectancy of 58 or so.
That is honestly a decent analogy. So, on what rides is it OK if something goes wrong and a young family member is killed? Rust says it is never OK, so we won't let you do it.
To use your analogy though, the issue is the driver feeling quite confident in their skills and rating the risk as low. Then a tire blows on a corner. Or somebody else runs a red light. Or, there is just that one day when an otherwise good driver makes a mistake. History tells us, the risk is higher than the overconfident “good” drivers think it is.
In particular, history shows that 70% of real-world injuries and fatalities come from passengers without seat belts. So instead of each driver deciding whether it is safe, we as a society decide that seat belt use is mandatory, because it will prevent those 70% of injuries and fatalities (without worrying about which individual drivers are responsible).
Rust is the seat belt law that demonstrably saves lives regardless of how safe each individual driver thinks they are. It is a hard transition with many critics but the generation that grows up with seat belts will never go back. Eventually, we will all realize just how crazy it was that they were not always used.
I saw the clip previously. The rust guys are absolutely assuming that the C guys would go for something because (a) the compiler guarantees it's memory safe and (b) the semantics would be encoded in the type system. They demonstrate this using rust terminology and algebraic data types. Algebraic data types are the bees knees (but not with that syntax and clumsiness), and compiler guarantees are the bees knees, but that's not how a C programmer who's middle aged sees the world; it just isn't. Your typical middle-aged C programmer grew up telling pascal programmers that automatic array bounds checking is for wimps and real men use pointer arithmetic and their programs run five times as fast. They were always right, because their programs really did run significantly faster, but now rust comes along and it's fast and safe. Why wouldn't C programmers like it? Because the speed was the excuse, and the lack of guardrails was the real reason they liked C.
I said it's a massive culture clash that the rust folks didn't realise they were having because they just assume that "memory safe" wins people round, whereas C folks value their freedom from automatic compiler-based safety, and here you are, sounding like a rust person, saying it isn't a culture clash at all and that the rust folks are right about memory safety and the C folks are just being irresponsible.
They aren't asking C devs to write Rust code, which is what the guy being a heckler was claiming. Why don't they want to write Rust? For exactly the reasons you describe. The thing is, though, that's not currently being asked of them; all they actually want is the documentation to create that code themselves.
You really don't have to explain any of the culture clash to me lol. I've written both C/C++ and Rust. My C and C++ coding skills are demonstrably better (or at least used to be, it's been a while) than my Rust skills. Why? Because of how complex those guardrails are. The difference is I have the self awareness to know that my lack of Rust skills doesn't mean that the language is bad, or that C is a safe language to use. Rust tutorials could be improved. Perhaps an easier to use language like Zig might be more useful for some people. I feel like it's a good compromise between safety and ease of use. Rust though is still incredibly progressive for the industry, and will improve systems security, maintainability and reliability going forward if only people would stop getting in the way.
I'm not sure. I remember seeing an example in the docs, but I can't find it now. Actually the docs in general are a lot less opinionated than I remember them.
One thing that I did find is that the ion shell document mentions that it isn't a posix compliant shell because they would have had to leave out a bunch of features.
TL;DR: Vast culture clash that rust guys didn't perceive and C guys hated and false assertion that "you don't need to learn rust" based on inexplicably naive lack of understanding that maintenance might be necessary.
If someone builds a rust API on top of your C code inside your project, you have exactly five choices: (1) preserve the assumptions the rust code is making, (2) only change your code if you have a rust expert handy to collaborate with, (3) edit the rust code yourself, (4) break the rust assumptions, leading to hard-to-find bugs, or (5) break the build. The C guys hated all five of those options, and the rust guys told them they didn't need to worry their pretty little heads about it. OK, they weren't as dismissive as that, but they either didn't understand those as issues, or didn't care about them, or dismissed them.
The rust guys were asking the C guys to tell them the semantics so that they could fix the type signatures for their rust functions, and the C guys were reluctant to do that because they wanted to be able to change the semantics if that turned out to be useful to them. They didn't want to commit to something that was documented in a way they weren't familiar with, because they felt that even if they wanted to, they couldn't ensure their code stayed compliant with this specification going forward, since they didn't understand the rust type signature fully. (They got hung up on the self argument and launched a rant against OOP.)
The rust guys knew instinctively that the Result return type meant that the operation could fail, and could tell from its two type arguments both in what ways it could fail and every kind of answer it could produce if it succeeded, but the C guys found almost none of that obvious. This was for just one function in the rust API, but it also radically changed the way of doing things. This one rust call replaced a whole algorithm of ask, check the answer, if none, check this and that, otherwise do this, blah blah blah. The C guys are used to keeping everything lean and simple with a single purpose, and were being asked to think of a whole collection of procedural knowledge and edge cases as one handle-everything monolith. But they were audibly reluctant to commit to that being all the edge cases, because they don't think of all of those tests as one thing, and instinctively wouldn't write something that checks for all of the edge cases, because (a) in a lot of circumstances the code they're writing only needs to know that there was a problem, and will give up quickly and move on, and (b) they want to be able to freely add other edge cases in the future like they normally do, without having to worry about the rust code breaking.
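To make that concrete, here is a toy illustration in the same spirit (invented types, not the actual API from the talk): the signature alone enumerates every failure mode and the shape of every success, where C would scatter that information across docs, errno values, and call-site checks.

```rust
// Invented for illustration: every way `lookup` can fail is spelled out
// in one enum, and the signature promises there are no other outcomes.
#[derive(Debug, PartialEq)]
enum LookupError {
    NotFound,         // the "ask, get nothing back" branch
    PermissionDenied, // the "check this and that" branch
}

// Success type and failure type are both visible in the signature.
fn lookup(name: &str) -> Result<u64, LookupError> {
    match name {
        "root" => Ok(0),
        "guest" => Err(LookupError::PermissionDenied),
        _ => Err(LookupError::NotFound),
    }
}

fn main() {
    // The compiler forces callers to acknowledge both outcomes.
    match lookup("guest") {
        Ok(inode) => println!("found inode {inode}"),
        Err(e) => println!("lookup failed: {e:?}"),
    }
}
```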
They weren't complaining that they were being asked to write rust; they were complaining that they didn't want to learn rust, and that's because they could see that to preserve all the rust API type signatures they would have to understand them, the expectations around them, and memory safety principles, so that a rust programmer in the future wouldn't have to change the rust type signature.
The rust guys would have gained a lot more traction by just asking the C guys to keep a bunch of comments up to date detailing the semantics and error checking procedures, and promising to edit their rust API if the C code changes, but I suspect they didn't ask for that because they know that no guarantees come from a comment and they want to be sure that the rust code works across all the possible scenarios and in rust culture, that is always documented in the type system where it can be enforced.
The rust guys spoke like it was self-evident that having a monolithic API with a bunch of stuff guaranteed by the rust compiler was best, but seem not to have realised that this is a massive culture clash, because the C guys come from a culture of rejecting the idea of compiler guarantees (they have long had confidence in their ability to hand-optimize their code to be faster than some prescriptive compiler's output, and look down on people who choose to have the guardrails up).
They felt like they were being asked to help write an interface definition in a monolithic style that they have always rejected, to achieve goals that they have long resisted, in a language that they find alien, with no guarantees for them that the rust guys were going to stick around to agree and implement the rust changes necessary if they changed the C code, and with no confidence that they understood what would count as a breaking change at the rust level.
This perceived straitjacket made them particularly cross. They complained about the inability to change their C code and its semantics, and about the need to learn enough rust to understand quickly what not to change. But they didn't want to not change things, and they would need to edit the rust API at the same time as editing the C code if they didn't want the rust build to break, and then there would be even more downstream changes from that. So realistically they would need not only to understand the rust type signatures, they would need to be able to edit both the type signatures and the functions themselves, and basically maintain all the downstream rust, and they would want to be sure they were writing efficient rust, well aware that it took them decades to reach the level of extreme efficiency they write at in pure C, a much simpler language.
The rust guys said "Just tell us what your code means so we can write our type signatures", but the C guys didn't want to help build themselves a prison whose walls were of a strange and intricate design they found hard to perceive, made of materials they had no experience working with. They felt like the rust guys were asking them how all the doors, windows, chimneys, air vents, etc. of the house they built by hand would ever be used, so they could encase it in a stainless steel shell and make it part of a giant steel city. The C guys said "but I might want to build an extension or a wider garage!" The rust guys claimed that the C guys didn't have to learn how to weld or manufacture steel sheets, and that their house would be much safer, but for some reason this didn't win the C guys round to the plan. And there's a bunch of people online calling the C guys tech luddites for not liking the whole thing, and saying that they were incorrect that they needed to learn rust just because the rust guys made that claim, but that claim is actually completely incorrect, unless you think it's OK to stop the project compiling with your pull request, or you think that changes to the C code should be banned wherever a rust API is built on top of it.
The rust guys would have gained a lot more traction by just asking the C guys to keep a bunch of comments up to date detailing the semantics and error checking procedures, and promising to edit their rust API if the C code changes, but I suspect they didn't ask for that because they know that no guarantees come from a comment and they want to be sure that the rust code works across all the possible scenarios and in rust culture, that is always documented in the type system where it can be enforced.
I could be being daft, but I thought this is more or less what the Rust guys were asking for: tell us the current semantics of the system, and if it changes in future, let us know what the new semantics are and we will fix the Rust code accordingly.
I do understand what you mean, though, about enforcing restrictions on what the C guys can do without breaking the Rust code. I think you run into such situations wherever two languages meet. The way most projects handle this is that upstream releases a new version, or a release candidate of a new version, with their breaking changes documented, and then downstream updates their stuff accordingly when they get time. Obviously this is one project, but I imagine it's possible for the C guys to update stuff in a pull request and then drop an email on LKML to the Rust guys so they know stuff needs fixing. None of this seems that hard to me.
Ultimately, though, everything here is Linus's decision. Either you're in or you're out. If Linus says yes to Rust doing whatever, then that's what's going to happen; likewise, if he says no, then it's not going to happen that way. Until he weighs in, no one can really say how this will end.
Personally, though, I disagree with the C guys. Safety features are important and should be used where it is practical to do so. Until now C has had the justification that it's still the fastest language, and by a significant margin. Now that a somewhat safe language like Rust exists with the same speed and capabilities, I don't think we can afford to continue ignoring safety for the sake of a few bruised egos. If this were a proper industry like aviation, safety would always come first, even if that means adopting new technologies and forcing people to adapt. I can understand if C devs have a hard time adapting; I don't expect it to happen overnight. The expectation, though, should be that they learn some Rust eventually, even if it's just enough to know the type signatures and whatnot that they might break with their changes to C code. Kernel devs are supposed to be some of the smartest computer people out there. If they can't learn even that small amount of another language, should they really still be kernel developers?
Ah, but I still agree with the C devs; it creates unnecessary headaches for them. Also, old habits die hard.
I view it as the same way ZFS is supported: Linus and Greg KH are like "you can maintain it, but we don't give a shit about it, and if what we do breaks ZFS support, well too bad."
Yeah, it is a monumental task, but it's also the one with the least pushback. I don't mean start from scratch, but convert the C code to Rust in a dev branch or something and release a Linux-Rust kernel image.
Almost all real-world software development is like this. That's what we do.
I'm aware; I've written my own software even though I'm a SysEng. All I'm saying is that it's not an easy process, and it has potential for disaster. Just look at CrowdStrike (not saying they were attempting to switch languages, just pointing at the scale of the fuck-up and the fallout it caused); we don't want that to happen with Linux.
The long-term goal is for Rust to overtake C in the kernel (from what I understand)
Your understanding is wrong. Rust is limited to some very specific niches within the kernel and will likely not spread out anytime soon.
critical code gets left untouched (a lot of the time) because no one wants to be the one that breaks shit
The entire kernel is "critical". The entire kernel runs - kind of by definition - in kernel space. Every bug there has the potential for privilege escalation or faults, theoretically even hardware damage. So following your advice, nobody should ever touch the kernel at all.
I agree. C isn't going anywhere anytime soon, but if we don't start modernizing the kernel now, we could end up with a future like the US government is in, where all critical systems run on COBOL code and no one wants to touch it for fear of breaking everything.
I'm not sure if it was in my above post or not, but the article said we should start modernizing the kernel now before someone does to Linux what Linux did to Unix.
Redox OS already exists and is functional (meaning it boots and has a GUI, but it's lacking in various aspects); from what I understand it's pretty much Linux/Unix rewritten entirely in Rust, and it looks pretty promising. In 5 or so years it could be a competitor to BSD, and then overtake Linux once it has a proven track record.
"If it ain't broke, don't fix it!"
I'm not a software dev, but I'd imagine the codebase could definitely be reduced once most things are converted to Rust. From what I've heard, the kernel is a huge mess of spaghetti code that most people don't want to touch, for fear of going insane in the process 😂
I think this overall is a better idea. I'm going to say this because I thought I'd look into Rust today, so I installed it, set up VS Code to work with it, etc., and it's all up and running. I thought I would port over a "fairly simple" C# project I wrote recently as a bit of a test.
While I've generally had success (albeit with 30+ tabs open to solve questions I had about how to do certain things, and only making it about 20% into the task), I'm going to say that it's different enough from C, C++, and C# (all of which I can work with) that I really don't think it is fair to expect C developers who have day jobs and work on the kernel in their spare time to learn this. It's fundamentally different, in my opinion.
Now, I don't condone any bad attitude and pushing away of Rust developers from the project. But there's no way they're going to want to do anything to help that involves learning a new language. It's just not going to happen.
Likewise, C is not a language most new developers are learning. So I feel like over time there won't be as much of an influx of new kernel developers, and any Rust-based kernel could find itself with more contributors over time, taking over as the de facto kernel.
In terms of Redox (I haven't looked into it yet): so long as there's a different team working on the userspace tools, I would say the main task should be getting a solid kernel with drivers for most popular hardware in place. The existing GNU tools will do until there's a kernel that can compete with the C one. But that's just my opinion.
Noob Question Thread: Ask Any Questions About Linux!
Ever had a question about Linux but felt too afraid to ask? Well now's your chance, ask any question about Linux, no matter how noob or repeated it is, and I and others will help answer them.
Previous noob question thread: lemmy.ml/post/14261893
Like this: "pactl load-module module-switch-on-connect"?
I am somewhat in the same boat, but more Gentoo-sided. The main repo killed mkstage4 because it's outdated and insecure. So like you, I wanted to back up my data (my Gentoo install) to my NAS or local storage.
Rsync is the magic bullet for this. You can use SSH to securely transfer data to or from the server, and you can automate it via a cron job (I suggest fcron) for an automatic timed backup/sync.
Now I will add: rsync can be used as a backup, but as the name implies, it syncs data from one PC to the other. So if you break your desktop and it syncs to your server, you're SOL PDQ; that's only if you automate it, though.
And for the services, I'd recommend making a directory and adding all the services to a group which owns the directory. Or the lazier solution, which is probably frowned upon: you can rsync your Docker container data to a directory where rsync has permissions to copy/sync.
I'd highly recommend rsync, though, and just syncing off-site to another computer.
Set the game's launch options to "PROTON_LOG=1 %command%". Then, run the game. This will make a log file in your home directory, with the prefix "steam-" followed by the game's appid. If you want to upload the log or paste the output here, I can take a look and try to help.
Yes, good. But what init system? ;)
Gentoo is great
With the recent Microsoft garbage, I'm giving Linux another try. I've been running it on a laptop for a while with no issues. My main rig, however, can't read all of my, um... hard drives.
A live USB of Mint 21 reads 2 of 5 drives fine. The rest are recognized by GParted, but I can't access them. It looks like NTFS-3G is installed.
I've duck-duck-go'd (which apparently is just Bing) for a solution, but haven't succeeded. Long term, I can probably pick up another drive, copy everything over, and reformat to something Linux-friendly. For now, I just want access.
I'm lazy and burned out. I don't want to use the terminal (which I did try). I just want to make a few clicks and have access to all of my files.
If it matters, the drives (roughly) show up as:
- 500 GB and 4 TB: NTFS (readable)
- 3 TB, 12 TB, and 16 TB: unknown (not readable)
Windows says they're all NTFS.
Is there an easy way to mount my drives?
I think the disks could be Dynamic Disks, on which it would not be a good idea to install a Linux distro.
Unfortunately Microsoft's own advice to change it to a basic disk (since it considers dynamic deprecated) WILL RESULT IN DATA LOSS.
Since you only want to access them, it seems to be possible with ldmtool. While it is a CLI tool, there is a corresponding service that, at least according to some AskUbuntu posts and the ArchWiki, should make them behave like normal filesystems.
Double-checked, and all of the drives are basic. I'm very confused as to what is different between the disks that are readable and the ones that aren't.
I've even tried multiple distros. Same scenario.
That's a bummer. Unfortunately I can't think of anything else, since fast startup was already suggested by another user and that's also not the case.
The drives are shown as NTFS by GParted, right? Also, can you confirm that the sizes should be those sizes, as in, do you remember them from when you bought the drives? 16 TB is still a big drive. Additionally, can you confirm that they are all different drives and not partitions on the same disk?
Do they show up on the file explorer sidebar or if you go to "Other Locations" (in the file explorer)? If so do you get an error when you try to access them?
If they don't unfortunately you probably will have to use the terminal to try and mount them so we can hopefully get some error message and hopefully some clue to what is going on.
If you can boot back into Windows: turn off quick startup/shutdown, run chkdsk or whatever on the drives, reboot back into Windows, then boot back into Linux and you'll be okay.
Quick startup is a kind of weird sleep/hibernate mutant that leaves drives in an unclean state when it turns off, so the Linux drivers for ntfs say “I’m not gonna touch that possibly damaged drive”.
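Concretely, the Windows-side fix usually amounts to two commands in an elevated Command Prompt; the drive letter is an example, and you'd repeat chkdsk for each affected volume:

```
REM disable hibernation, which also turns off fast startup
powercfg /h off
REM repair the filesystem and clear the dirty flag so Linux will mount it
chkdsk D: /f
```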
It could be that only the installer has issues. If you're dual booting have you tried launching the already installed program from your windows partition?
Otherwise I would try launching the installer from the wine command line to see if it gives you a specific error there.
How are you launching the exe with WINE? Try doing it via the command line if you aren't already; that way you may get some more information about why it isn't working. It's as simple as wine path/to/your/exe.
You could also try something like Bottles, which will let you use possibly newer versions of WINE without modifying your system's WINE.
First suggestion: commit to using Ubuntu for a set period of time. Could be a week, could be 2 hours. When you encounter issues, force yourself to stay on Ubuntu.
What you'll find is that at first, errors will seem like gibberish, then you'll do some snooping online, and find out how to access some log files or poke around your loaded modules. You'll slowly learn commands and what they do.
Eventually, something will click, i.e. "wait a minute, I just checked to see which kernel modules are loaded, and I'm missing one that was mentioned in my error; that must mean I need to load that module at boot." You load that module, reboot, try your command again, and bam, everything works. You've learned how to troubleshoot an issue.
The best way to learn Linux is to immerse yourself in it. You can't efficiently learn German if, every time you hear a phrase you don't understand, you switch back to English, right?
Just come ask here when you have trouble, and we'll try to help.
When troubleshooting, the biggest thing is searching the web honestly. But some more things to help you out: look for logs. Linux has loads of logs and sometimes can tell you how to fix the problem.
Logs may not be immediately apparent. Some programs have their own log files that you can look into. Sometimes, if you run the program from the terminal, it'll print out logs there. Otherwise, you can look through journalctl, although this has logs for everything, so it might be harder to search.
Another useful tip, particularly for system tools and terminal tools, is manual pages. Just run man ls and replace ls with any command; you'll get the documentation on how to use that tool.
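For instance (the service name is just an example; any unit or command works the same way):

```
journalctl -b -p err    # only error-level messages from the current boot
journalctl -u sshd -f   # follow one service's log live, sshd as an example
man rsync               # the manual page for any command, here rsync
```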
The first thing I'll say is that the reason you're more comfortable with Windows is that you've been using it for however long and learning to deal with the issues it has. The same needs to be done on Linux. You'll have to learn how it works, just like you did (and have since forgotten doing) for Windows.
Second, along with logs like other users said, you have to know how to use a search engine well. Most issues will be easy to solve, but some may take some searching. The Arch wiki is a good resource even if you aren't using Arch.
I had the realization that I've just been putting up with Windows bullshit forever when a friend recently asked me for help with their work PC. They're a Mac user, but they just started working from home and have been provided a Windows laptop. They sent me a bunch of rushed texts when their headset stopped working. They had changed the default audio device after launching the program, which never works on a Windows PC. I never have that problem because I've just learned to live with it; I don't even think about it anymore.
Now I'm really starting to notice all the little things I put up with from Windows on my machine. To be fair my Linux machine is just as janky but at least I can say I made it that way. I keep telling myself to 'tidy' my Linux machine up but I never do, it still plays games just fine. Usually. If I didn't fuck with it.
Ubuntu Wiki
Ask Ubuntu
Ubuntu Forums
The wiki has some information and should correspond to how Ubuntu specifically is configured. You can ask for ubuntu specific help in those communities. You can also ask here and on several Linux communities on Lemmy.
I find the Arch Wiki to be more in-depth than the Ubuntu wiki. Of course, some things may differ from Ubuntu's defaults, but I found it a useful resource when using Ubuntu.
Finally, I suggest you learn a bit about how Linux works in general: what is in what directory, what Wayland and Xorg are, how drives are named, etc., plus some understanding of the terminal (moving around in directories, how to use sudo, etc.; no need to learn to make bash scripts).
I love it so far and I'm having very few issues, but trying to sort and delete photos from a folder without a thumbnail/preview is impossible!
I have tried googling the issue, but apparently it's not that common? I'm sure there is a dumb setting somewhere, but I still haven't found it.
It used to be that someone with MIDI controllers could be assumed to be technical enough to say "you'll be fine, everything will work". But most of the time nowadays, software just automatically figures stuff out, and you don't have to go looking at the implementation chart and using MIDI-OX to see where you're screwing up.
So: I've never seen an interface that didn't work, so even if you're not comfortable troubleshooting MIDI signals, give it a shot and see.
What are you using midi for, a daw?
You should switch to RebeccaBlackOS
I meant to write "socket" instead of "port" because I was tired.
If, for example, a program can take RPC over a socket, which is a file somewhere, is it just the filesystem permissions that determine what can be done, or is there more at play?
- For Linux enthusiasts, how do you decide which distro you would like to try out next among the plethora of options available? The difference I perceive between the majority of distros gets smaller the more I try to understand about them.
- What are the minimum issues I am likely to face using a beginner-friendly distro like Mint for programming and light gaming?
- How customizable is the GUI in Linux Mint specifically? What if I want a start menu like Windows 10 with the app list and the blocky app tiles? What about those custom widgets I see in hardcore Linux users' desktops?
- I heard there is no concept of file extensions in Linux. How am I supposed to work on my projects that I imported from my Windows machine that do contain extensions?
Bonus: Who creates those distro icons in color coded ASCII in the system info command in the terminal?
For #1, I've made the realization that most distros are lightweight skins or addons on top of another distro. Most of the time, if you start with the base distro, all you have to do is install some apps and change some configurations, and suddenly you have that other distro. It is much easier than doing a reinstallation.
If you filter out all of the distros that only do a little on top of an existing one, you're actually left with quite a small number. I'd bet fewer than 10 are not super niche. Fedora, Arch, Debian, Gentoo, and NixOS are the big ones. There are some niche ones, like Void Linux and Alpine.
So I'd say if you try all of those, you don't need to try any more 😁
For #2,
For gaming, if you use Steam, you may not face more than the following:
- the game does not work, with no well-known way to resolve it. You can find this out by checking ProtonDB.
- the game does not work because it needs some options enabled. Very easy to fix, and you can find the options on ProtonDB for each game.
- the game does not work because you didn't set up Steam right. You often need to enable Proton, which, in short, is Steam's emulator for Windows games.
- the game does not work because your GPU drivers did not install. This depends on the distro, and they should all have a guide on how to do it, but usually it is just a matter of installing something.
For programming, you will love your life because everything programming is way easier on Linux.
1) I usually stick with distros that have large userbases. I've tried smaller and niche distros before, and inevitably they stop being maintained, or move in a direction I don't like. The larger distros like Ubuntu, Fedora, OpenSuse, have more resources (people, time, money) to spend on testing updates, and have reliable update schedules. When I was younger I didn't care about that kind of thing, but these days I use my PC almost exclusively for work 10 hours a day, 5 days a week, I need my PC to not break when I update it.
Another technique I use is to go to the vendor site for software I use and look at which Linux distros they officially support. Usually they will publish at least an Ubuntu package, sometimes a universal deb file that works on Ubuntu, Debian, or Mint, and sometimes an RPM package for Fedora/CentOS too. This is getting less relevant these days with AppImage files and Flatpak images that work the same across all distros.
It's natural to get bored or frustrated with one distro and want to try out others. Imagine if Microsoft made many different flavours of Windows that each looked and operated differently: everyone who is bored and frustrated with default Windows would be trying them all out, comparing them, debating the pros and cons, and communities would form around common favourites.
I have a small gaming PC that I use to test out other distros. I'm currently on Nobara, which I actually highly recommend as a gaming-focused distro.
2) This one is really hard to say. It depends on so many factors like what hardware you are running, what software you plan to run, how tech savvy you are, even your definition of what is an issue. Mint is very stable and easy to use, you may run into zero issues getting it installed, running VSCode, playing some Factorio. Or you might run into a small incompatibility between your GPU and the bundled kernel drivers and run into a whole world of hurt spending days tinkering on the command line with no usable graphics driver.
3) I believe Mint still comes with the Cinnamon desktop, which is specifically designed to be familiar and easy for users transitioning from Windows. It's not super customisable, but I think it can do what you described. I'm not the best person to answer; I haven't used Mint or Cinnamon since 2012.
4) File extensions are optional in Linux for some kinds of files. Linux usually tries to identify a file type using a "magic string", meaning it will read the first 8 to 16 bytes at the start of a file and will be able to tell, with a great deal of accuracy, what kind of file it is. Executables, drivers, shell scripts, and many others use this method and do not need a file extension. You can definitely still use extensions, though. E.g., LibreOffice will still save documents with a document extension (.odt). Often Linux will use a combination of both the magic string and the file extension to determine the file type: e.g., the magic string identifies it as an OpenDocument file, and the extension tells you it's the document kind of office file.
Your Linux photo editor will still save images with a .png or .jpeg extension, because these are the convention (and may be required if you will be opening those files on a different OS). Similarly, your project files created on Windows will still work fine on Linux (if the equivalent Linux app supports that file format).
- I rarely distro-hop. I used Linux Mint for a solid decade. I've made the jump to Fedora KDE pretty much entirely because Wayland support is the farthest along there, and that enables me to use more features of my hardware, such as two monitors at different refresh rates, Freesync, etc. I did come to the conclusion a while back that there are a lot of pointless distros out there; a lot of them are just "I want this particular permutation of default software."
- Assuming you're currently a Windows user, I think the main issue you're going to face using Linux Mint Cinnamon Edition for "programming" is going to be general culture shocks. Using a package manager instead of heading to the browser, stuff like that. "Light gaming", depending on what you mean by that, could be no trouble at all, or could mean dealing with some hiccups involving Nvidia's imperfect support. There are some games that require proprietary anti-cheat that doesn't support Linux; Valorant is one that springs to mind.
- Difficult question to concisely answer. Mint has a system they call "Spices", which includes a series of applets and widgets you can add to the UI: choose them from a menu and then configure them. One of these is "Cinnamenu", which replaces the default menu with a somewhat more customizable one, though you might struggle to exactly replicate the Windows look and feel. Beyond that, you might look at Conky for your desktop-customizing needs.
- File extensions do exist in the Linux world, but they're not as important for making things work as they are on Windows. Some files, particularly executable binaries, won't have extensions at all. A text editor might not automatically append .txt to a plaintext file, because it doesn't want to assume you're not writing a bash script or config file or something. But if you record a sound clip with Audacity or something, it'll add a .wav or whatever extension as appropriate.
Bonus: You probably mean Neofetch (or whatever we're using since the developer of Neofetch has "gone farming"). Those are hard-coded into Neofetch by its developer.
For #4, the file extension can be seen just as a note, a little tag that'll help you (or anyone else who receives your file) remember which program you should use to successfully open the file.
From the viewpoint of your computer, in fact, a file is just a sequence of bits, and every program can open every file; it just won't find what it expects or be able to do something useful with it. Just as you can open a book written in any possible language: in most cases you will be unable to understand it, while in some others you will be able to read it without any problem.
The "concept" of extensions was then introduced to allow your file manager (Explorer for Windows, Finder for macOS, Dolphin for KDE or Nautilus for GNOME) to know which program to launch when you double-click on a certain file, through a simple association table (which you can edit in your system preferences).
Regarding Linux, you can sometimes read that file extensions are not a thing, but this is just because on the command line you launch a specific program that you personally point to a certain file, so there is no file manager that needs to guess which app should be launched to open the document you just double-clicked on.
That said, I think it should be pretty clear that in a desktop context (like on a personal computer) that double-click-on-a-file situation pretty much applies to Linux too, so extensions will be useful and respected by the file manager you'll find installed in your distro of choice, even if it can use other means when the extension is missing.
I'm afraid this answer isn't 100% correct. There are ways to find out a file's type beyond looking at an extension. For example, there are lots of file formats where all of the files start with a specific sequence of bytes, known as a file signature (or as "magic bytes" or "magic numbers").
You can try the file command line tool to check that you can find out a file's format without resorting to its extension, and you can read the tool's manpage to learn how it works.
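For instance, output along these lines (the filenames are made up, and the exact wording varies between versions of file):

```
$ file holiday.jpg report backup.tar
holiday.jpg: JPEG image data, JFIF standard 1.01
report:      PDF document, version 1.7
backup.tar:  POSIX tar archive
```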
- I don't distrohop. Instead I just use what works for me and what I find comfortable.
- You will eventually need to use the terminal. And it will be overwhelming at first. But eventually the learning curve flattens a little when you get more comfortable not breaking your system ;þ
- Can't comment
- File extensions are, in essence, nothing but a convention. You don't even need them in Windows, really (you can open a file with any program, for example; you will just not get anything useful from it). So it's far from a big deal.
Depends what your goals are. With Arch, you will need to closely follow a guide to get it installed, if anything goes wrong you will need to search through the Arch Wiki for answers. Arch has an insane amount of customisation options, you will spend a lot of time in the Arch Wiki learning about them. By installing Arch you will learn a lot about Linux. Is that your goal?
You will spend more time reading and learning, but come out further ahead than someone who first installs Ubuntu or Mint.
However, if your goal is to simply install Linux on your PC to try it out (if you don't even know if you will like it, and don't know if you want to learn its mechanics), then Arch wouldn't be my first choice.
First time Linux user you mean?
I wouldn't recommend it, unless you can navigate the terminal well. When you install Arch, it installs no desktop environment, only the ability to talk to a terminal.
It's technically possible and very doable with some googling, but I wouldn't recommend it.
The Arch wiki is difficult to use for beginners. Each page covers a single topic; it is not a guide. Using it daily, it takes at least a month to understand it well enough to "build your own guides". If you want to do that kind of deep dive, jump on in. If not, you'll have a better time using just about any distro other than Arch.
BTW, if you do decide to take that route, don't become one of those miscreants who "uses arch btw". It's a red flag for someone who doesn't know wtf they are talking about.
This is what I did. If you generally know what you’re doing around computers it just requires patience and a willingness to “Read the (Friendly) Manual.”
If you're running Intel, Nvidia, a dual-GPU setup, or some other things, your installation will be more involved.
But the great part is that once you’ve set all that up, things just generally work and the Arch wiki is an amazing resource.
I'm using Arch, I love it. What's absolutely bonkers is that the system belongs to you.
However, if you have never used Linux, it's insane to try to install Arch. The online wiki is tailored for people with at least a decent amount of Linux knowledge.
As a noob, it will likely result in data loss, unless you're already very familiar with terminals or are very fluent in IT.
You might want to try something more user-friendly, such as Zorin, then come back to Arch when you want more power.
Maybe. I'm busy right now.
I might do it later. Maybe I'll do it on
🎵 FRIDAY FRIDAY GOTTA GET DOWN ON FRIDAY 🎵
First, uninstall Steam (sudo pacman -R steam), and remove the .steam folder in your home folder. This deletes your configuration for Steam and might help resolve issues. Then, re-install Steam (sudo pacman -S steam) and sign in again. Download your game and set up Proton as I told you. If that still doesn't work, you'll want to make a post and share the log files.
"Pactl load-module" outputs "you have to specify a module name and arguments."
I duck go'd that command and it seems like it's for pulseaudio. The latest mint release uses pipewire for the audio server. Is the command different for that?
“Pactl load-module” outputs “you have to specify a module name and arguments.”
As I said in an earlier comment, please run "pactl load-module module-switch-on-connect" exactly.
Note that Pactl and pactl are different commands, and the former is invalid.
Is the command different for that?
As the name suggests, pactl is a command for PulseAudio. PipeWire supports applications written for PulseAudio, including pactl. Try "man pipewire-pulse" to get further info.
I didn't know that. Thanks!
Which distros, out of curiosity?
Finally bit the bullet and got a Thinkpad and I'm leaning towards putting Fedora on it. I've never used Linux before but I've done some research and I like the idea of something that updates more often than Debian but isn't as DIY as Arch. Do y'all think Fedora would make a good starting point? I hear it's stable enough and offers enough non-free applications through the RPM file management system.
Also, are there any drawbacks in using the immutable Silverblue version? I'm considering it just so I don't do anything dumb by accident.
Fedora KDE does. I think it's going to go with the DE rather than the distro; I bet Kubuntu also does.
I think, dating back to the Space Cadet keyboard, Unix systems recognize 6 modifier keys: Shift, Control, Alt, Super, Meta, and Hyper. It is my understanding that they chose to bind either Super or Meta to the "Windows" key (or the ⌘ key, whatever that thing is called, on Macs), and in practice it's used as another modifier key, often with Windows-like functionality, such as opening the menu if tapped, tacked on.
module-switch-on-connect"
Which indicates 2 seperate commands.
The other day I learned that you can just grep an unmounted filesystem device. It will read the entire disk sequentially, like it's one huuuuge file, and it will reveal everything on that disk... whether a file inode points to it or not.
I used it to recover data from a file I accidentally clobbered with an errant mv command. It's not reliable, but when you delete a file, it's usually not truly gone yet. With a little luck, as long as you know a unique snippet that was in it, you can find it again before the space gets something else written there. You don't even need special recovery tools to do it; just use dd in a for loop to read the disk in chunks that fit in RAM, and grep -a for your data.
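A minimal sketch of that loop, assuming /dev/sdb1 is the unmounted partition (raw reads need root) and "unique snippet" is text you know was in the lost file:

```
dev=/dev/sdb1
size=$(blockdev --getsize64 "$dev")   # device size in bytes
chunk=$((1024 * 1024 * 1024))         # scan 1 GiB per iteration
for ((off = 0; off < size; off += chunk)); do
    # grep -a treats the raw bytes as text instead of skipping "binary" data
    dd if="$dev" bs=1M skip=$((off / 1048576)) count=1024 2>/dev/null \
        | grep -aq "unique snippet" && echo "hit near byte offset $off"
done
```

One caveat: a snippet that straddles a chunk boundary will be missed, so overlap the reads a little (or grep the whole device in one pass) if the first scan finds nothing.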
When will LibreOffice take over from Microsoft in the office space?
I wanna install Linux on my desktop as the main OS after years of Windows. The last time I tried desktop Linux was Fedora and Ubuntu back in the late 00s; all I remember from back then is playing around with GNOME, KDE, and Compiz...
Most of what I know about Linux distros today is from memes...
How can I quickly learn which distro is best for my needs (general use, some development, some gaming, easy hardware support)? With a toddler and a demanding job, I don't have much time to just experiment with different distros and draw my own conclusions.
Thanks in advance.
Ubuntu and Linux Mint are ideal for people who just want to ignore the OS and get work done.
If you are a dev you should be clear of such problems, unless you need a very specific tool. But many people can't switch because the programs they work with are not supported on Linux. Take a look into that, and in the worst-case scenario you can dual-boot Windows.
Gaming-wise, Proton is a blessing and lets you play most games; check ProtonDB for compatibility. A major portion of the games that don't work are due to crappy anticheat solutions.
Good luck, any other questions feel free to ask.
I agree with Mint. I think Ubuntu has kind of devolved though, and PopOS is the better way to go. Fedora's good too these days.
My recommendation is to try out a few distros in VirtualBox before switching - this was my process, and it can be very gradual.
I don't use Mint, but I would guess that you could change your repos in /etc/apt/sources.list, run sudo apt update, and then sudo apt full-upgrade. Just make sure the full upgrade isn't doing really dumb stuff like deleting a bunch of programs.
I could be completely wrong and this could be terrible advice, but this has become the wisdom for me when I use Debian Testing. Of course, I just did a straight sudo apt update after Bookworm was released, and the upgrade to Trixie went mostly fine. I have never upgraded between stable versions, so I may not be one to say.
I'm familiar with Proxmox, VirtualBox, and KVM/KVM manager.
If I want to set up a PC to virtualize multiple operating systems, but with the feel of a multiboot system, what virtualization software would you suggest?
My goal is to get as close as I can to a multiboot system (Windows, Debian, Fedora), but virtualized so I can make snapshots. It should feel like I'm on bare metal when inside the VM.
VirtualBox is clunky, with lots of pesky UI cluttering the screen, and Proxmox doesn't seem great for this use case.
Is OpenRC meant to be faster than systemd as an init system? I've been thinking of spinning up some non-systemd distros like Artix in a VM on a mini Dell tinbox.
I will say, though, I am not an advanced Linux user, as the distros I've used were:
Ubuntu
EndeavourOS
SpiralLinux (Easy Mode Debian)
Would I need to make configurations in OpenRC, or can it just run without messing with it, like systemd?
Thank you
This is an example of a tab group using Sidebery. If you click links on a page that open in new tabs, it creates a sort of folder from the original tab, with the group of links as children of the parent tab. You can also drag them into these groups manually.
Picture-in-Picture auto-open experiment enables PiP on active videos when switching tabs
Hell yeah
Picture-in-Picture auto-open experiment enables PiP on active videos when switching tabs
Hell no!
You must've gone out of your way to run an obscure OS if you can't run Firefox on it...
And in choosing a super obscure OS, you probably knew software compatibility would be an issue. In other words, it's kind of a problem of your own making, and not related to Linux either.
That's my point.
If the above user is using an OS that Firefox doesn't work on, it mustn't be Linux, and therefore is irrelevant here.
Diamant Salihu is not an expert on crime. He is a journalist and reporter who now works at SVT. He also seems to be used more and more as a so-called expert. But he is no expert, and he also seems to have difficulty analyzing in any depth the course of events or the consequences of various political decisions.
Square Enix invests in Linux distribution
Square Enix invests in Playtron for their Linux-based PlaytronOS - first Alpha out now
Playtron are quietly building up their Linux-based PlaytronOS behind the scenes, and not only have they released their first Alpha, but they've pulled in another investor too. Liam Dawe (GamingOnLinux)
I'm not sure about the others, but I'm pretty sure Hitman isn't Linux-native.
As far as I can find on ProtonDB, neither are Deus Ex or Tomb Raider.
I've never had any issues running those games through Proton though, so that's great.
Although this game has a Linux-native build available, Steam does not list it as having Linux support. This can happen if a game has an unofficial, unfinished, or unsupported build. You may need to force Steam to enable Proton for the game in order to run properly.
Square were early adopters of Linux back when the V1 Steam Machines came out, commissioning ports for a bunch of their Eidos (western) IPs. And then they stopped supporting those ports when Proton came around.
According to Wikipedia, Deus Ex MD, Hitman, Life is Strange 1 & 2, and the new Tomb Raider trilogy all have native Linux ports.
The first hardware that will be actually using it is the SuiPlay0x1, a strange looking and sounding web3 / blockchain handheld.
Oh dear.
It looks like the picture in that article is upside down, judging by the device's logo on the screen. Are the analog sticks above the d-pad and buttons?
Everything about that device confuses me.
SuiPlay0X1 runs Playtron's device-agnostic gaming operating system, enabling gamers to play both Web3 and Web2 games across PC and mobile. GamesBeat have some more details, noting it will have "native Sui blockchain integration via zkLogin and Sui Kiosk SDKs, enabling asset ownership directly connected to a device's account system for the first time in the gaming industry".
What is a web3 game? Something that allows you to grind for NFTs?
TF2 hats but on a block chain instead of an inventory system.
Pros:
- In theory you can still sell the item as a collectible even if the game dies (I doubt in practice though)
- In theory it makes it possible for other games to use the same items to make stuff in their games (I doubt this in practice)
Cons:
- it's a fucking block chain
In theory it makes it possible for other games to use the same items to make stuff in their games (I doubt this in practice)
I've heard this before, but there's literally nothing preventing games from setting up some shared items on their own without NFTs. Nobody does it because companies want to keep their IP, and worrying about external items would be a nightmare to balance.
NFTs solve like 1% of the problem of sharing items. So much more goes into making them actually work. For example: NFT id 5551337 is owned by the player: now what? How do you figure out what 3d model to render? What actions can you perform? How does it integrate with other systems? All of that is going to have to be custom for every game involved on a per-item basis.
Collectibles are non-fungible tokens by definition, and blockchain is just a data structure.
I don't care about collectibles / NFTs, but this is nothing new in the gaming world.
There was an abstract conceptual theory of system-agnostic game add-ons. It isn't... completely inconceivable.
You could work with a relatively prolific engine, like Unreal, and set up a standard character model dummy with designated hard points for attaching accessories and certain default movements. Then any accessory could simply scale to the environment: Master Chief could swing a keyblade while the Elden Ring guy gets to wear Iron Man armor, because these are all "human" models with well-defined structures that could map to the associated equipment. The blockchain becomes a universal registry for these assets that a platform can read from to render the art.
The problem is that nobody ever actually implemented this universal protocol. They all just ran off making jpegs of weird animals and running fake auctions to create the illusion of a secondary market. You had zettabytes of data being processed so some Baked Alaskahole could claim his Kumming Koala was worth $40M.
I don't even strictly begrudge "the blockchain" as an idea for licensing and data storage (just please don't ask me to think about who is generating the licenses or storing the data). But it was all vaporware. None of it was anywhere close to being created, much less delivered. People were throwing billions, with a b, of dollars at entirely empty promises.
Then please, enlighten us!
What is a game that brands itself as a web3 game (not a game that just uses blockchain tech, but one that specifically calls itself web3) that isn't also play-to-earn?
Yep, they literally cannot work any other way than as a ponzi scheme. Because the people "earning" want to take more money out of the system than they put in, and the company is taking money out as well just to keep the game running and the employees paid, as well as to make a profit. So you need substantially more suckers buying into the system than the money that is being paid out.
Eventually, somebody is gonna be left holding an empty bag.
The big thing I've seen for gaming is the idea that you can have tokens not tied to a specific game. Like maybe an achievement earned in Resident Evil unlocks a cool skin in Dead by Daylight.
Or you could implement something like Sword Art Online's item/skill system, where items attach to a user and not a server, and servers can choose which ones to implement in their version of the game.
Of course, it's also possible it could be sold second-hand. So maybe you think the Christmas skin of a popular character is stupid and will never wear it, but someone else just missed the event: you could sell/gift it to them, because the entitlement is yours to do whatever you want with. Maybe you just hate the hell out of it so much you offer a ton of cash just to lock them all up. Idk, this part of it is just not my jam tbh.
Maybe you see your game as a spiritual successor to a previous game and want to honor players' progress or achievements in the previous game.
Maybe you accept third-party assets and need some market for those assets, and NFTs can represent a legally binding proof of ownership of that asset.
Maybe you want to keep your live service relevant by having your assets transferable among a larger ecosystem of games.
It is very much like the Steam marketplace, but unlike Steam items, you actually own it: Steam can't legally shut down your account or block you from transferring that ownership after the fact.
But yeah, it's just a tool; no idea what game makers will want to do with it.
But it's got blockchain!
(does that actually still get any VC excited nowadays?)
Why, Linux?! It is said that you would destroy the Blockchain, not join it!
Linux is a free and open source software ecosystem. It's like handing people free brushes, canvases, and paints: sure, removing the financial hurdles may enable talents otherwise unable to afford indulging their artistic streak, but you also can't really prevent anyone from painting awful bullshit. The best you can do is not give them attention or a platform to advertise their stuff on.
That's the price of freedom: It also extends to assholes. We can't start walling off Linux, so the best we can do is individually wall them off from our own life and hope enough other people around us do it too.
Additionally, the first Alpha version of PlaytronOS has now been released for those of you who wish to test and give feedback. So far they note it has been tested across the AYANEO 2, ASUS ROG Ally, GPD Win 4 (2023), Lenovo Legion Go, Valve Steam Deck LCD and Valve Steam Deck OLED.
Quite a nice list of tested handhelds.
"There's no login screen, how do I unlock it?"
"It's square enix, they expect you to have a keyblade"
I'll do my own CSS framework!
Not because it makes sense, but because of a god complex.
/s
I'm having a hard time understanding how this would work. udev will load kernel modules depending on your hardware, and these modules run in kernel space. Is there an assumption that a kernel module can't cheat? Or do they have a checksum for each possible kernel module that can be loaded?
Also, how do they read the kernel space code? Userspace can't do this afaik. Do they load a custom kernel module to do this? Who says it can't just be replaced with a module that returns the "right" checksum?
Anti-cheat doesn't actually need to eliminate cheating; it just needs to make the masses think it works by slightly raising the bar for entry into cheating. Cheating is still rampant; players just feel better about it and complain about smurfs more, because they don't think it's possible to get around kernel-level anti-cheats.
Honestly, I'd be much happier if the industry moved away from terrible anti-cheat software in general.
Here is the quote I paraphrased in my comment (I'm sure I got something wrong):
The immutable file system from Fedora Silverblue will be very helpful in implementing our anti cheat system but it is not our anti cheat system. We are planning to generate signatures for each version of our OS (easy with Silverblue) as well as all the DLLs we install dynamically. Basically using our SDK, a game developer will be able to obtain a signature of the current config on the device then call our backend to verify that this is a genuine Playtron version.
Ah, so they don't actually say that they read kernel space. They check the versions of all installed packages and checksum the installed DLLs/SOs.
If the user still has root privileges, this may still not prevent sideloading of kernel modules. Even if it would detect a kernel module that has been sideloaded, I believe it's possible to write a kernel module that will still be resident after you unload it. This kernel module can then basically do anything without the knowledge of userspace. It could, for example, easily replace any code running in userspace, and their anticheat would miss that, as it doesn't actually check what code is currently running. Most simply, code could be injected that skips the anticheat.
Of course, in their model, if a user isn't given root privileges, it seems much harder to do anything. Then probably the first thing you'd want to look for is a privilege escalation attack to obtain root. This might not be that hard if they, for example, run Xorg, as it isn't known to be the most secure; there's a reason there's a strong recommendation not to run any graphical UI on servers.
Another way, if you don't have root, is to simply run the game on a system where you do have root and such a kernel module loaded, or perhaps modify the binary itself to skip the anticheat. I don't see anything preventing that in their scheme.
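Purely as an illustration of the scheme quoted above (the library path is made up; Playtron's actual mechanism isn't public beyond that quote), such a "config signature" could be little more than a hash over the OS deployment plus every shipped library:

```
{
  rpm-ostree status --json   # pins the exact immutable OS deployment (Silverblue)
  find /opt/playtron/libs -type f -print0 | sort -z | xargs -0 sha256sum
} | sha256sum                # a single digest for the backend to verify
```

Which is exactly why the objection above holds: a digest like this attests to what is on disk, not to what code is actually executing.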
I don't get all this "gaming on Linux is hard" nonsense. All I have to do is set a specific flag on Steam and click play. That's it. One step, and 99% of my library just works, sometimes better than on Windows.
If it isn't on Steam, I search for it on Lutris, Lutris installs it for me, and I click play. And more often than not, it just works.
Hell, the motherfuckers who make Final Fantasy XIV's quick launcher made that shit a flatpak! And it's so fucking seamless, not a soul would know that game isn't a native Linux game!
Where's the difficulty?
Yeah, not all games work on Linux in all situations though. It depends for example on
- which distro you have,
- whether you have an Nvidia or AMD GPU (for example, SWTOR evidently runs fine through Lutris, but didn't last time I tried with an Nvidia GPU, so that might work better with AMD; the same thing happened with Dragon Age: Origins),
- what driver for either you have installed (Nvidia is getting better, but good gods, the flickering could be better with some of their driver versions; games may play without being playable, after all),
- whether your computer's firmware is even Linux-compatible, let alone Linux-friendly (I know Lenovo laptops used to suck in this regard; they might still, though I don't know).
So, no, although it's gotten a LOT better in the last 5 years, the notion that it "just works" is only situationally correct, and is by no means correct to the extent that justifies ridiculing those who say that it is not so plug-and-play as claimed.
Furthermore, doing so only sets up new Linux users without the optimal hardware or firmware for disappointment due to unrealistic expectations.
I host 2 ejabberd servers. One casual, federated, the other one standalone, for work.
- Conversations is a decent Android client that supports modern XMPP standards.
- Dino on the desktop. It just happens to support the same subset of standards as Conversations, so they work pretty well together.
For Mastodon, I'm using an Akkoma instance hosted by a friend of mine.
- Tusky works pretty well with it. There were certain annoying bugs when I combined the official Mastodon app with Akkoma.
Every once in a while I try Matrix, but each time I try to log in, Synapse is fucked in a different way. I'll have to scrap it and rebuild it from the ground up some day.
- Only the Element-based clients so far, because every alternative lacks certain features.
I'm a big fan of Nostr because of one particular feature: you control your identity without having to self-host a server. The network seems to be occupied by the christian-carnivore-bitcoin-conservatives so far, therefore it's pretty bland when it comes to content.
- Amethyst on Android
- Gossip on the desktop. This one requires a certain knowledge of the protocol. Each action needs to be manually triggered.
For some special use cases I have Signal, but most of the time, Telegram is the best the average person can do to meet me in the middle.
Y'all remember Pidgin?
That proggie was the bomb for all your AOL, ICQ, MSN, etc., so you could keep up with your homies while you updated your LiveJournal.
Pidgin was decent, but remember Miranda? The community around it was fantastic. The plugin system was an absolute blast. Not only were there plugins for any communication network you could think of, the UX was fully customizable.
At one point, somebody even bothered to implement the ICQ flash animations. There has not been anything like it ever since.
Miranda IM was the best messaging software ever. Enormously configurable and super lightweight. The installer was like 1 MB or something.
Apparently Miranda-NG (Next Gen) is a thing.
Mastodon
Bluesky
Nostr
where Akkoma
where Iceshrimp
where Friendica
where Pixelfed
"If it ain't got no for-profit "Inc." and no CEO, we ain't gonna support it."
Too many app devs don't know jack about the Fediverse. Or they didn't when they started developing their apps.
It happens again and again that someone jumps into Fediverse app development, maybe even claims to build a, quote, "Fediverse app," end quote. And then they build it hard against Mastodon, only Mastodon and nothing but Mastodon. Not even just the Mastodon API. Straight against Mastodon with both a frontend and a backend that only supports Mastodon.
Usually because at this point they still think the Fediverse is the Mastodon network, and there's nothing else in the Fediverse than Mastodon. Some 99% of all Mastodon newbies come into Mastodon believing that, the vast majority still spends the first months believing that, and my estimation is that every other Mastodon user still believes it.
And when you tell such a dev that the Fediverse is more than just Mastodon, and there's a whole lot of stuff that isn't Mastodon, but that uses ActivityPub, and that communicates with Mastodon like Mastodon communicates with itself, they're taken off-guard.
"What? What do you say? You aren't on Mastodon? How can you talk to me then? Like, black magic or what? Whaddaya mean, that's normal? There's other stuff connected to Mastodon? But Mastodon is the Fediverse. Whaddaya mean, it isn't? The Fediverse is not only Mastodon? So it's Mastodon forks? No? Is it extra stuff glued onto Mastodon then? Not either? Like, WTF? Yeah, sorry, no. I've built my app hard against Mastodon, and if I wanted to support anything else, I'd have to rip everything out and rewrite everything from scratch. 'Sides, it ain't worth doing anyway. Literally nobody uses that stuff. You're, like, literally the first whom I talk to on Mastodon who isn't on Mastodon. Everyone else I see uses Mastodon.* Over 99% of all people in the Fediverse are on the original, Mastodon. Ain't worth supporting those few others."
All this explains why you have tons of iPhone and Android apps for Mastodon that either only work with Mastodon, or that you can connect other Mastodon API stuff to, but only have Mastodon's features at hand. And at the same time, you barely have any apps that support anything beyond Mastodon's features. Exception: Lemmy apps, often written by people for whom the Threadiverse or the Fediverse as a whole is Lemmy.
*No, they don't. But Mastodon users can't see it unless either non-Mastodon users do something that's _painfully obviously_ not possible on Mastodon, or they rub it straight into Mastodon users' faces.
Common fallacy that the only thing in the Fediverse that people use is Mastodon.
Misskey, for example, is bigger than Lemmy AFAIK.
Overfishing worldwide is underestimated. So writes Natursidan, citing an Australian report that they don't link to, so their claim can't be verified. They don't even name the four research institutes said to be behind the study.
Though photons don't have mass, they can impart momentum when they hit an object; that's what a solar sail takes advantage of.
Sounds like a bug.
In short, even though photons have no mass, they still have momentum proportional to their energy, given by the formula p=E/c. Because photons have no mass, all of the momentum of a photon actually comes from its energy and frequency as described by the Planck-Einstein relation E=hf.
From here: profoundphysics.com/if-photons…
Essentially, momentum is a function of energy, not mass. It's just that massive objects have way more momentum than massless ones.
Essentially, momentum is a function of energy, not mass.
Thanks! That's the critical piece of information.
Because they have mass. They don't have "mass at rest", but they are never at rest anyway.
Do you remember that famous E = mc^2 equation? Everything that has energy has mass.
Do they even exist if they don’t move?
No. Or, at least not from our point of view.
They only exist moving at the speed of light. All particles with no rest mass only exist moving at the speed of light.
They can also be created or absorbed into something else. The mass of whatever absorbs them increases, and the mass of whatever is emitting them decreases when they do that.
The mass of everything is changing all the time. The thing that is constant is the rest mass.
But how do you square this with the Lorentz transformation (i.e. relativistic factors)? You cannot approach the speed of light without considering relativity. It is known that p = γ * m * v, where p is momentum, γ is the gamma factor given by γ = 1/sqrt(1 - v^2/c^2), m is mass, and v is velocity. If you study the gamma factor, you'll realize that it approaches infinity as v approaches c, the speed of light. Since we are actually dealing with light here, where v = c, we are breaking the equation: momentum defined this way cannot be evaluated for any mass moving at the speed of light; it's asymptotic at that speed.
Also note that the same goes for E = mc^2: at relativistic speeds, this equation also needs to consider the gamma factor. So those classical equations break down for light.
The answer is that photons don't have mass, but they have energy. There is a good explanation a bit further up in this thread on how this is possible.
The one that you multiply with gamma is the rest mass, not the total mass.
To be short, p = m_0 * γ * v, where m_0 is the rest mass. Put that in your equation and look what happens.
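For anyone following along, the standard way to reconcile the two comments above is the textbook energy-momentum relation (ordinary special relativity, not something from this thread):

```latex
E^2 = (pc)^2 + (m_0 c^2)^2
% For a photon, m_0 = 0, so E = pc and p = E/c = hf/c,
% recovering the formula quoted earlier in the thread.
% For a massive particle at rest, p = 0 and E = m_0 c^2.
```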
Thankfully for us, the spacecraft that deployed the sail contains four cameras that can capture a panoramic view of both the reflective sail and the accompanying composite booms. The first of the high-resolution imagery is expected to be accessible on Wednesday, Sept. 4.
I can't wait.
I hope it can be sent somewhere neat once they are done testing. But I assume it’s not configured for long range communication.
It would be cool if it could be sent, slowly, to a “nearby” body.
That's actually not that big of a deal!
Since these craft would be small, they wouldn't have the power to transmit back to Earth anyway. So with something like this, you would actually want a string of these kinds of craft, propelled along the same vector, so that they could send the data back using each following craft as the next point in a relay network back to Earth. Each one can also take additional pictures to get a resolvable image at the end!
Now, getting them on the same vector is the hard part, since we're constantly moving through space and won't have the same launch conditions on subsequent launches, but this is all theoretical at this point anyway.
No, because the solar wind drops off around 100 AU, and the power of the solar wind is going to reduce the farther out you are. These kinds of craft would get much more acceleration from a laser array that can put much more concentrated energy into the sail. But just like the solar wind, it will lose power the farther away from the array it is, along with any kind of intermediary debris attenuating the beam or unfavorable angles between the array and the craft.
So you can get these to an appreciable fraction of the speed of light, but I don't think we'd be able to get anywhere close to c with this kind of a setup.
Edit: I was wrong about the solar wind above, it's only like .5% as powerful as the photons emitted by the sun, and that energy drops off at only 1.5 AU, so they'll get much less energy than I thought without an external power source like a laser array.
You could have an eccentric orbit that swings far out into the solar system and then, when it approaches the sun again, accelerates to reach the sun's escape velocity. But that would take years.
And I think it would still be rather slow once it escapes the solar system, since the escape velocity would be almost used up.
There is the concept of a nuclear photonic rocket (see Wikipedia), which I think of as a light sail plus a white-hot glowing lump of nuclear fuel on a string.
Doesn't "solar wind" refer to the physical particles emitted from the sun? Like hydrogen, helium, etc ejected from the sun's outer layers?
My understanding is that the solar sail is propelled mostly by the photons themselves, not the atomic particles that may also be reaching it.
Of course this probably doesn't change your argument at all, since the intensity of light drops off precipitously as you fly further and further from the sun.
I believe this is one of those things that benefits from scale. Theoretically, the larger you make the sail, the better the thrust to mass ratio you can achieve (even before calculating a better payload mass to sail mass ratio). With improved materials, we can make stronger and lighter sails and support structures, and this will in turn result in higher velocities by the time the vehicle has left the effective range of the sun. I think speeds truly approaching c are unlikely, but they can still achieve "really freaking fast".
But then, new advanced materials could also change that; we're developing metamaterials with some fascinating properties, and carbon nanotubes are just the tip of the iceberg. Who's to say that we couldn't some day achieve those speeds?
This is pretty amazing. I was wondering how they did the booms and they are soft carbon fiber tubes rolled up on spools. I was imagining that you could simply spin the sail and use centrifugal force to expand it.
Maybe someone can answer me this: I've always wondered if a solar sail can only generate momentum away from the sun, or if it can be angled to create momentum in another direction. Since the light is reflected and not absorbed, angling it at e.g. 45° changes the momentum of the photons. That should also change the momentum imparted to the solar sail and spacecraft, right?
Or maybe the advantage is not having to counter any sail movement with gyroscopes... which cost energy too.
I think you nailed it right there.
That's such a badass design, I love it.
A variable thrust and thrust vectoring propulsion system with no moving parts. I doubt that's ever been done before...
Maybe someone can answer me this: I've always wondered if a solar sail can only generate momentum away from the sun or if it can be angled to create momentum in other direction.
Yeah, 100%. You can totally steer and control your orientation with a solar sail. This is one of the rare actually intuitive things when it comes to spaceflight. (With physics in space, it seems like nothing actually works the way you'd expect it to, but this basically does.)
We actually have some experience controlling orientation with "solar sails" too. I remember one example of a spacecraft which, long after finishing its official mission, was left tumbling out of control. Years later, some engineers were able to regain control, used the last of its propellant to counter its tumble, and then kept it oriented correctly using only its solar panels as sails; the light pressure was carefully controlled to keep the spacecraft oriented.
Ah thank you, never seen that mentioned. That makes it even more awesome for solar system exploration! The article mentioned expanding to much larger designs too.
PS: Now I wonder if assembling these solar sails from lightweight girders in orbit on a space station would be worth it. The spooling mechanism is awesome, but if you could just send it up disassembled and assemble it in space, they could probably be more efficient.
Yeah, we may well end up seeing a lot more use of solar light pressure to orient spacecraft over the coming years, especially deep space probes and higher orbit satellites. Telecom satellites need a lot of power for their transmitters, so they generally have quite large solar arrays, I wouldn't be surprised to see some of those using this method for station keeping. (Though these days they're already using very efficient ion thrusters for station keeping, so they may not really need to use this method)
But while solar sail propulsion imposes a lot of very specific design requirements, using solar pressure just for maintaining orientation doesn't actually require any fancy hardware (if you already have solar panels); it's an entirely software solution. Which means it's always a viable backup plan when hardware failures occur.
Ok, a side note on etiquette here.
When I saw this reply it had a point score of 0, which means somebody downvoted the post.
When a user is freely admitting a lack of expertise, and defers to another user who seems to know better, I would say it's extremely rude to downvote that reply.
This is an example of a user going out of their way to humbly rescind their previous statements when it appears they were mistaken (this is admirable and not a thing that usually happens on the Internet). They didn't do it for their own benefit, but for the benefit of the community, to not leave misleading or incorrect information in the comments.
So to sum up, downvoting a selfless act is pretty shitty and not good for the community.
First photo looks rather unspectacular: space.com/nasa-solar-sail-spac…
NASA also has a "spot the diamond in the sky" challenge haha: space.com/nasa-solar-sail-spac…
Brazil has shut down X. Elon Musk's far-right microblog has been shut down in the country by Brazil's government, and it is no longer possible to use the platform there. This is because the company has not had a legal representative in the country since 17 August.
Ion Launcher: A beautiful, functional & customizable launcher
Zagura / Ion Launcher · GitLab
A beautiful, functional & customizable launcher https://t.me/IonLauncherGitLab
A list of features would be nice.
Is the calendar widget part of the launcher for example?
Is everything supposed to be black and white? Also, how does one access the purported customisability? I can't seem to find a settings option anywhere, and the readme is just a vague list of selling points.
Edit: NVM, figured it out. For anyone else wondering: long-press on the desktop and select "Tweaks" from the menu that pops up.
It's available on F-Droid.
This looks neat. It'd be sweet if it could get:
- tap gestures (double tap -> launch app)
- Use app shortcuts as icons in pins (My apps shortcut instead of play store app)
- lock desktop mode
Will use this instead of Neo for a few days to see how it goes.
Will leave this one here in case anyone wants a look: a launcher called Neo Launcher, a very smooth and customizable one.
As long as it's not yet another "minimalist" launcher, i.e. ones that're basically each just a glorified apps list.
I'd personally love more launchers that use a tiling system, with built-in clock and/or weather widget.
Kind of like Focus Launcher with the Arcticons icon theme but, you know, with less moon and more grid.
That hasn't been updated in two years. Seems kinda dead if you ask me.
Which is a shame because I used to use that and it was great!
An update on my Thinkpad E16
Original Post: startrek.website/post/13283869
Update: Nope, I'm still having the problem. It seems to be an ACPI problem. I found a potential solution, which I will test soon. The issue seems to only occur when using the charger and Bricklink Studio. This seems to be a common issue on Lenovos.
Another update: I fixed it, but I can't remember what I did. I'm having a great experience again. I'll see if I can find the fix for other owners of this laptop.
Update: I remember what I did, and have detailed it and where I found the fix here: startrek.website/post/14342770 . You should probably update the firmware for the sake of a clean journalctl, though.
After using this laptop for a few weeks, I have one important note. I was having a problem for a while where, usually after waking from sleep, in some rooms my Wi-Fi card would disconnect and I'd have to reboot to get my network connection back. Based on journalctl, it seemed to be some sort of weird firmware error.
I found the fix was to install updated firmware, specifically the version of firmware-realtek from testing, after which the problem has stopped occurring. As firmware packages tend not to have a lot of dependencies, I want to see if I can get a bookworm-backports package uploaded so it's easier to install.
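For other owners wanting to try the same fix: once (or if) firmware-realtek lands in bookworm-backports, installing it would look roughly like this on a stock Bookworm system; the components on the sources line depend on your setup:

```
# enable bookworm-backports (skip if it's already in your sources)
echo 'deb http://deb.debian.org/debian bookworm-backports main non-free-firmware' \
    | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
# -t pulls just this one package from backports
sudo apt install -t bookworm-backports firmware-realtek
```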
We’ve Got Depression All Wrong. It’s Trying to Save Us.
If depression is the emotional expression of the immobilization response, then the solution is to move out of that state of defense. Porges believes it is not enough to simply remove the threat. Rather, the nervous system has to detect robust signals of safety to bring the social state back online. The best way to do that? Social connection.
For people who don’t prefer social connection, I’ve seen that exercise works well
Edit: just want to highlight that polyvagal theory, the main point behind this article, is unsubstantiated thus far
karashta, originalucifer, Maeve, mozz and Doll_Tow_Jet-ski like this.
Maeve and NoneOfUrBusiness like this.
When people are told that depression is an aberration, we are telling them that they are not part of the tribe. They are not right, they don’t belong. That’s when their shame deepens and they avoid social connection.
And that's not the only reason people are made to feel they're not part of the tribe, that they don't belong. There are many things in this modern (post modern?) world that cause us to become alienated from other people, even and especially those in our own community. The nature of community itself has changed. Many relationships and social institutions feel more tenuous or impermanent.
It's a vicious cycle: people feel alienated from others, it causes them stress, the stress causes anxiety, that leads to the immobilization response and depression, the effects of the anxiety and depression cause people to become further alienated from others, and the process accelerates and perpetuates.
Maeve likes this.
People don't depress me, but I only have so much charge on my social battery. And yeah, seems we're the odd ones regarding the outdoors. When people first meet me, they often comment about my running around the creeks and swamps alone.
"Aren't you worried about (panthers, gators, bears, serial killers) ?!"
"Uh, no, they're rare enough and I carry a gun if it comes to it."
Great. On top of thinking me fruity, now I'm an armed fruit.
City people might be at greater risk, being more likely to start filming if they see a bear. Or trying to get a selfie with it.
Less likely to do that with a mugger! Lol
What's wrong with responsibly owning a firearm?
I really fucking hate this culture of us Left-leaning people looking at firearm ownership as stupid; meanwhile we are surrounded by armed unhinged racists, bible-thumping violent anti-LGBTQ religious fanatics, and skinhead right-wing cops, and we hope those fascists protect us if Republicans riot violently.
So fucking dumb to think deterrence is bad.
Open carrying an AK-47 in a Walmart in a suburb is weird
Is this an exaggeration for effect, or something that's actually legal in some places?
NoneOfUrBusiness likes this.
Azathoth likes this.
I suppose the response to that would be that the social interactions which make you anxious are unhealthy social interactions, and that, instead, you should be having social interactions that don't cause anxiety for you.
Of course, without knowing more, that’s just speculation.
I spent quite a while paying for therapy to overcome depression.
Problem was that my depression was basically just caused by poverty: too poor to afford healthy food, a car, or a living situation that didn't include unstable abusive narcissists causing me stress and lack of sleep, constantly guilt-tripping me for things I had no control over, shaming me for enjoying anything they didn't approve of, telling me I have mental issues.
So ... what I need is money, a fresh start, a new living sitch... and I am paying a bunch of money for my therapist to also become depressed at my situation and just give me the same CBT exercises I already know.
Why did I pay for all those sessions?
Oh right, my roommates were gaslighting me.
I am currently extremely not depressed now that I am finally faaaaar away from them.
What's the point of feeling content that my father is a disappointment, or that I feel like one despite the fact I felt I made all the right choices in life? No, spending that cash on therapy is not going to make all the issues go away. It will just make me feel like it's acceptable? F that. My problems wouldn't be an issue if I had more money.
I just had the same experience with therapy and psychiatry. Even Zoloft does nothing, it might dull it slightly but it doesn't fix the problems underlying my depression.
I make $77,000 a year, and I'm living the same as a person making minimum wage in the 70s.
I have a side job too, and I'm still struggling to just feed myself and pay rent. I don't know how people working serving and retail jobs are even affording to live inside.
Porges believes
This is an interesting article and yet you've chosen to quote the most speculative unscientific part of it from the final paragraph.
"Have you tried going outside" is not a scientific cure for depression.
PokyDokie likes this.
That’s not what it’s saying at all, it’s talking about immobilization as a survival strategy as induced by the body’s neurophysiology, think of it as another option after flight vs fight responses.
Here’s the report mentioned in the article explore.bps.org.uk/content/rep…
Edit: looking closely, the report itself doesn't mention anything about the immobilization defense.
Edit 2: on further review, I agree that this article is low quality. Apologies, I was just browsing while half asleep and thought it was interesting
Polyvagal theory itself does not seem promising so far. Oh well, editing this post to highlight that…
NoneOfUrBusiness likes this.
think of it as another option after flight vs fight responses
Usually expressed as fight, flight or freeze...
Overworked is my depression.
I once was a recluse; I didn't go out for months at a time. Most mentally lucid and healthy I've felt.
I have to somewhat agree with the author. My experience and understanding of depression is that it is more of a (sometimes very persistent) symptom than an underlying cause. Ideally, we would all have the guidance to deal with depressing scenarios, but similar to dissociation during trauma, our mind defaults back to disconnection to limit the pain.
I'm not saying this is every case, but I do think as a society we could view depression more as a coping strategy, and try to replace it with healthier practices. As time passes it takes more time, effort, and support to replace those coping strategies, but that is essentially what psychotherapy does.
I think too often in the modern world people tend to just shrug and say "this is who I am," instead of trying to improve their coping skills and quality of life. Like another commenter mentions, this becomes a feedback loop of depression feeding depression; it takes immense support and effort to curb, and should absolutely not be shamed.
Depression has many causes:
- For one, people work too much. It exhausts the body and we feel tired.
- For two, there's the meaninglessness of life. It's difficult to stay motivated when nothing makes sense/there is no future.
- Thirdly, positive sexual experiences strongly cure depression. Since the dating market is largely fucked (no pun intended), well, that option doesn't exist for large parts of the population.
- Fourthly, we're socialized to hide depression. As everybody knows, the first step to solving a problem is to recognize it exists. Stigmatization of depression has held back effective treatment for way too long.
Fourthly, we're socialized to hide depression. As everybody knows, the first step to solving a problem is to recognize it exists. Stigmatization of depression has held back effective treatment for way too long.
"Hey, how's it going?"
"Good, you?"
Honesty about our emotional state (with people who aren't trusted friends / partners) is programmed out of us by social norms.
Seems to be an ambitious rethinking of depression. As someone diagnosed bi-polar, I'll watch the development of this idea with interest.
But if the threat continues indefinitely and there is no way to fight or flee, the immobilization response continues. And since the response also changes brain activity, it impacts people's emotions and their ability to solve problems. People feel like they can't get moving physically or mentally, they feel hopeless and helpless. That's depression[…]
Immobilization has an important role. It dulls pain and makes us feel disconnected. Think of a rabbit hanging limply in the fox’s mouth: that rabbit is shutting down so it won’t suffer too badly when the fox eats it. And the immobilization response also has a metabolic effect, slowing the metabolism and switching the body to ketosis. Some doctors speculate that this metabolic state could help to heal severe illness.
I could see this being the case. If I hate my job and have no other prospects on the horizon, my getting angry at stupid decisions by mgmt threatens my ability to preserve my position. But sarcastically resenting them keeps me in stasis. I don't think that's a great analogy for what is being described here, but that's what I've got off the cuff.
I feel exactly like that rabbit right now. Getting squeezed from every angle
I dropped 20 lb this summer, down to an unhealthy weight now...
Parrot Security
The ultimate framework for your Cyber Security operations
Tech Cyborg reshared this.
I've just created a GUI for a journal viewer
Hi everyone,
I'm excited to share that I've started working on a new project called Journal Helper. It's a journal viewer built from scratch using Qt6 and C++. The goal is to provide a fast, visually integrated journal viewer for Linux, particularly for KDE/Plasma users.
While there are existing tools like journal-viewer (github.com/mingue/journal-view…), which uses WebKit, I found that its GUI doesn’t integrate well with the Qt/Plasma ecosystem. I also wanted to improve performance and create a more seamless visual experience. Therefore, I decided to create a new viewer from scratch that should be quicker and more efficient.
The project is still in its early stages, but I’d love to get more people involved, especially those who are interested in Qt development. As a beginner myself, I’m eager to learn from others and collaborate on making this tool as good as it can be.
How to Get Involved
GitHub Repo: github.com/rughinnit/journal-h…
AUR Package: aur.archlinux.org/packages/jou…
Any contributions, whether it’s in the form of code, design ideas, or feedback, would be incredibly valuable. If you’re experienced with Qt, C++, or even just interested in contributing, please feel free to fork the repo or reach out.
I’m aware of tools like KJournalDBrowser (apps.kde.org/kjournaldbrowser/), but I had some trouble with installation without using Snap. My goal is to create something simple and accessible for all users.
Looking forward to any thoughts, contributions, or advice you might have!
Thanks for sharing!
While there are existing tools like journal-viewer (github.com/mingue/journal-view…), which uses WebKit, I found that its GUI doesn’t integrate well with the Qt/Plasma ecosystem.
I was going to bring that up. I’ve never heard of journal-viewer, but I have used:
- GNOME Logs
- KSystemLog
- QJournalctl
You mentioned specifically integrating well into QtPlasma, what improvements do you hope to have over the native KDE KSystemLog?
Welp, didn't realize those tools existed.
Mainly it's just the journalctl command written in Qt:
- journalctl --list-boots
- journalctl -x
- journalctl -b ...
- journalctl -p
I still find it difficult to add some colors to it...
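A couple of journalctl flags that might help here (a sketch, not taken from the project's code): -o json gives a GUI something structured to parse instead of scraping text, and SYSTEMD_COLORS forces colour output even when journalctl isn't writing to a terminal.
# structured output a Qt viewer can parse, one JSON object per log entry:
journalctl -b -o json --output-fields=MESSAGE,PRIORITY,_SYSTEMD_UNIT | head -n 3
# journalctl only colourizes on a tty; SYSTEMD_COLORS forces it when piping:
SYSTEMD_COLORS=1 journalctl -b -p warning | less -R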
It has been a while since I last tried #StarCitizen. With the new #Neuralnet Tracker plugin (AI haha) for #OpenTrack we get head tracking without annoying IR LEDs or reflective strips, just by reading the webcam video feed. This is apparently fast enough to try #headTracking without a dedicated #headTracker nowadays. And all that on a #Linux PC. Took some fiddling but the concept still works. What a time to be alive.
Demo: makertube.net/w/groS1wpAhP8XYE…
HowTo: simpit.dev/systems/opentrack/
beko.famkos.net/2024/09/02/156…
#gaming #headtracker #Headtracking #linux #Neuralnet #opentrack #StarCitizen
Tech Cyborg reshared this.
that's awesome! #StarCitizen LUG org member, here.. Hopefully some of that detail can get added to github.com/starcitizen-lug/kno… someday!
Cheers, seeya around the 'verse!
Thanks mate 🙂
Following back from main. I’m also in the LUG alas not big on Wiki edits (any more).
Why don't more people use Linux? - DHH
And Linux isn't minimal effort. It's an operating system that demands more of you than does the commercial offerings from Microsoft and Apple. Thus, it serves as a dojo for understanding computers better. With a sensei who keeps demanding you figure problems out on your own in order to learn and level up.
...
That's why I'd love to see more developers take another look at Linux. Such that they may develop better proficiency in the basic katas of the internet. Such that they aren't scared to connect a computer to the internet without the cover of a cloud.
Related: Omakub
Why don't more people use Linux?
A couple of weeks ago, I saw a tweet asking: "If Linux is so good, why aren't more people using it?" And it's a fair question! It intuitively rings true until you give it a moment's consideration.
world.hey.com
DaGeek247 likes this.
On desktop, Linux isn’t the best choice.
People use Linux where it’s the best, servers!
I just installed Manjaro over my windows 10 drive and the effort so far has been way easier than I thought.
KDE Plasma reminds me a lot of Win 10, and nearly everything I did on my Windows system works under Linux without hassle. The only issues I had were certain technical things like overclocking my GPU and setting up my LED lights.
Yeah, I didn't find the Linux install any harder than installing Windows from scratch.
Edit: the only thing was the multiple choices for the home filesystem, which made me do some research on why I would want ext3 or 4, xfs, or btrfs.
I have never bought the idea that free/libre SW in general is just not as easy, including GNU+Linux. I'll leave out open source initially, and come back to it later, not because it doesn't experience the same, but because corporate-wide it doesn't suffer the same fate. And Linux itself is one of the most widely used kernels, if not the most; the same happens with openssl and so many other open source components. So I see no issue with Linux adoption; I can't think of any kernel more widely adopted than Linux...
To me what has really affected free/libre SW is the monopolistic abuse of the corporations, plus their ambitions, and how in today's world they have created the illusion that being a technologist is the same as being a technology consumer, an illusion that reaches into governments and education systems (public education systems most of all). Let me try some practical examples:
- Education systems translate the need to educate students about technology into making them familiar with assorted MS software: the Windows OS, MS Outlook, MS Office, MS Project, MS Visio. Even at the higher levels of education, colleges and universities prefer to use Matlab over Octave, even for plain matrix-operations scripting. Office covers spreadsheets BTW, so people specialized in accounting know Excel, but no other spreadsheet.
- In public education systems, where one would expect more interest in developing the expertise not to depend on proprietary SW, corporations reach deepest, offering "cheap" educational licences.
- From the prior two, keep in mind that educational licenses for proprietary SW usually mean future professionals, and people in general, depending on proprietary SW. They are meant not to educate, but to generate the future dependent population.
- Governments, whether local or nationwide, instead of adhering to open standards for any kind of form submission, or going further and adopting free and open source SW so as to build the technical competency required to judge different technologies (SW, infrastructure, DBs, and so on), prefer to require citizens to use non-free SW to fill in required forms, and prefer to pay for SW solutions that totally lock in the entire stack, usually coming from big corps, or from other companies building on SW and technologies from big corps.
- In their effort to discredit free/libre SW, the claim was spread that its fundamental principles hurt the SW industry, or that they are irrelevant to today's world; worse than that, there were claims that the GPLed kernel was a great threat and GPLed SW a cancer. Now that open source usage has totally overtaken free/libre SW there are no such claims, but the damage is done. There's nothing wrong with people wanting some compensation from corps when developing SW, and thus not using free/libre licenses like GPL-3+ or AGPL, but in the end that can hurt the users' rights protected by such licenses, which such corps don't really care much about (their profit has higher priority, for sure). And experience shows that just because SW is licensed open source doesn't guarantee any compensation for the development whatsoever, so for volunteered SW, releasing it as open source comes nowhere close to getting every developer a decent income out of their contributions. Well, except for the big-corp-backed SW, Linux included, but that's not the majority of open source SW.
- The discrediting of free/libre SW, which allowed the eventual creation of open source, is such that the banning of individuals ends up being treated as an attack on the organizations behind it, and even on their principles and motivation.
- Moving away from the free/libre SW observations: even now with open source, the big corps, which barely compensate the open source developers, complain about the open source supply chain, campaigning against poorly maintained SW and the like; recall the famous image of a complex and heavy structure depending on one weak and deficient leg. Whatever truth there is in that figure, it hides the overall picture: the developer of that leg was never compensated (not to mention paid) for his library or SW component, and perhaps that's one of the reasons the project got abandoned; but now it's easy to blame that situation when talking about FOSS in general.
Paid SW might be more intuitive to use at times, I can understand that. There are paid developers making the UIs more intuitive and attractive; in the end it needs to be bought or massively consumed to earn through its use. But if you look deeper, perhaps it's not that free/libre or open alternatives are unintuitive at all; perhaps people get used to that UI when attending basic or high school, or college/university, perhaps even when exposed to mobile devices before they can barely walk. Everything else, different in nature, will look alien to the future "technologists"...
On a sad (hopeless) note, I don't think there's any indicator of things changing. My only hope is changes in educational systems, which are nowhere to be seen; and not the parents, who as mentioned are already convinced that using Google, MS, Apple, Oracle or whatever prepares their kids for the future and will make them the technologists of the future.
On a funny note, I would answer the motivating question with: Linux is so good that it's most probably the most used kernel worldwide :)
If we're talking about drivers for non-proprietary kernels such as Linux, then again, profit is what regulates it. No wonder Nvidia finally cares about Linux now, Linux being the most used kernel behind the cloud and behind servers of every kind. Meaning: nowadays it's simply not profitable for Nvidia not to support Linux.
The other fundamental factor is lock-in, which is abused by some big corps, such as MS.
But the profit idea is even wrong, though it's what we have been educated with. For an OEM, providing FOSS drivers or FOSS FW doesn't mean less profit, but somehow it's interpreted as such. And there's also our culture, backed by corps again, that tends to make us believe that everything profitable enough has to be a corporate secret, and that if it isn't, others will take advantage of your business. That way of thinking really holds back FOSS adoption at the OEM level, and I don't agree with it. The presence or lack of some HW features might be inferred from the drivers/FW, but that doesn't mean your competitors will know exactly how you provide such a feature, and even less how to match your performance. And usually, once released, you really want to show off your features and your innovation, not keep them secret. So in general, I really see no reason for OEMs not to offer drivers and FW as FOSS, even as free/libre SW.
I can imagine OEMs offering FOSS drivers and FW, but that being less convenient for the major players in the market, since it would risk their position. Just a thought...
Remember the lock-in mechanisms of the corps that feel threatened by open-sourcing drivers... Some of them no longer say it out loud, but still think GPLed licences are a cancer...
- Doesn't have millions to market like the alternatives.
- More technical requirements (historically, anyway)
- Much less likely to be the default on hardware (which is what most ppl stick to)
I figured I'd squeeze in one anticomment among the circle jerk
There's plenty of videos on YouTube of people trying Linux for the first time, and it can be painful to watch how poorly they try to fix something or unintentionally break their system.
That's not to say windows is any better, because they'd do the same thing there.
But people will only switch permanently if windows really falls off hard, which may or may not happen.
You have to think of it like how people first learned to use a mouse and double click back in the 90s. It's not immediately intuitive for everyone, they often have to start over.
That being said, having a big OEM ship linux would do wonders, but Microsoft fights hard to make sure that almost never happens.
iirc due to some anti trust lawsuits, they cannot do that anymore.
But it's still easy to coerce OEMs to run Windows because they offer stuff like quick support and standardized IT support.
If an OEM ships Linux, they don't want to have to make an entire department to help troubleshoot the OS for users who will inevitably call for help. Ignoring them would only result in returns and loss of sales.
I think some thinkpads actually do ship with some distro like redhat or opensuse as an option, but that's because thinkpads are very popular in the business space which means lots of CS people use them, so it helps save some cost from a windows license that won't get used.
Like I said though, if windows really dives into the deep end, I think a potential market would open and some OEM will take a chance on it.
At the same time I think most people don't think about how much prior knowledge you need to just be able to use Windows or Mac. And for someone without ANY prior knowledge all of them are the same.
Story time: my MiL is a zero when it comes to computer literacy, to the point that every week I had to solve something for her. Eventually I gave her a laptop with Linux on it to make it easier for me to do support, and to my surprise she had lots of problems the first months while setting things up and learning the ropes, but afterwards there were almost no problems.
The thing is that people have a lot of Windows knowledge, so when they try Linux they expect it to be Windows and get frustrated when it's not.
That’s why I’d love to see more developers take another look at Linux.
I'd love to see more developers taking a look at writing portable cross-platform code.
every programming language I use being that way
Are there interpreted languages?
Python is the gold standard for cross platform interpreted languages
If what you meant was whether they're all interpreted: no. C#, Rust, and Python are the main three I use right now, and all are cross-platform.
Sickday likes this.
totally honest, I had MUCH more trouble with windows AND Mac.
First off, Linux is so easy to install, while Macos and Windows have all that unnecessary stuff, like iCloud and OneDrive (don't even get me started on OneDrive; it's awful, nobody wants to use it, and where do you disable it? And why did it enable itself again?). Then there's the thing where you can't install anything on Mac without having to change about 500 permissions.
And the main reason why I switched: customization. Windows has none of that; you can change the color and that's it. Even the cursors often get reset when you restart. Macos is even worse in this regard, though.
I think the main reason why people don't switch is because our generation of teenagers is lazy, or maybe they've always been, I don't know.
I know from friends that they just don't care about privacy or functionality and think I'm a conspiracy theorist. And generally they make fun of everything I say regarding this topic and don't take it seriously AT ALL.
A lot of people (regardless of age) have a very fuzzy idea (if any at all) of what a file or a directory is. They wouldn't know an operating system if it sat on their face.
The only way to get them to use Linux is to switch the system on their computers. And they'll probably manage just fine (after a bit of initial grumpiness), since most interfaces are pretty much the same anyway.
But they're never going to change on their own.
When using Windows, I occasionally encounter this weird phenomenon that I never experience using any other type of OS, whereby it generates a problem so stupid on such a fundamental level that there's no way to really work around it.
Like when I recently tried out Windows 11, I made a manual restore point in case it fucked itself up doing a big update. Which it did, and then when I tried to restore it I found out that it only keeps one restore point, and that after it broke itself doing the update it overwrote my manual restore point with its own automatic restore point, ensuring that the fuckup it just did was the only thing to restore to. I tried restoring it anyway to see what would happen, and it said it couldn't do it but didn't explain why.
Like when an allegedly modern OS so utterly misses the point of both system restore and basic error messages, I don't know what to do with it really.
Windows is not difficult to install, it's just tedious and full of anti privacy options most people don't care about
Also don't think 90% of people will ever install an operating system in the first place anyway
Teenagers are not lazy and are definitely installing it; have you seen the Hyprland Discord?
I've seen maybe 4 people using Linux in real life and 2 of them are friends/family
If you're in any of the Linux related discords for long enough to pick up on it you'll realize there are a lot of kids there
After some encouragement, I've been making an effort to switch much of my computing over to Fedora (at least, on weekends until it's got everything I need on it).
My (Framework) laptop fully supports the OS, and even booting it up on an external SSD has been easy, and it works fast and smooth.
But, it's absolutely not as easy to settle into compared to windows.
With Windows, the only "tweaks" that a user might make is installing a different browser, but everything else will work as it should.
Windows power users will spend more time removing bloat and ads, I won't deny that!
But on Fedora, I had to scour the internet to find out how to get a minimize and maximize button on a window (had to install another utility, then an extension...). Then I had to do the same to move things down to a dock.
Annoying, but it wasn't a huge deal. These small add-ons, tweaks, and personalization options all require that you know where to look and how to actually apply these fixes. Thank god I didn't have to fuss around with device drivers.
Then, as I happily watched the Paralympics while multitasking, my screen just went black. No warning, no way to recover it. Hitting my laptop's power button throws up a series of errors, ending with "FAILED TO EXECUTE SHUTDOWN BINARY".
If this is the equivalent to a BSOD on Windows, then it would be my first BSOD in many, many years.
Now I need to figure out how to get some Windows-only software to run, if that's even possible, which adds another layer of time and aggravation.
If I were a novice computer user, I wouldn't even bother with any of this and just stick to Windows. Hell, I wouldn't even know where to begin with any of it!
But I'll see how long I can ride this out, and perhaps I'll be a full-time Linux user some day.
Windows only software
I'm sure by now you know about the troves of compatibility layers that exist in order to make this possible; depending on the software.
Get a minimize and maximize button
This is more of a DE issue than Linux issue, I'm assuming you went with the default Gnome but you might like KDE or Cinnamon for a more windows like experience. I personally loved both of those DEs until I made the mistake of getting comfortable with a window manager
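For what it's worth, on stock GNOME the minimize/maximize buttons can also be restored without any extension, via the standard org.gnome.desktop.wm.preferences schema; a minimal sketch:
# show minimize and maximize next to close on GNOME:
gsettings set org.gnome.desktop.wm.preferences button-layout ':minimize,maximize,close'
# undo it:
gsettings reset org.gnome.desktop.wm.preferences button-layout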
I’m sure by now you know about the troves of compatibility layers that exist in order to make this possible; depending on the software.
Yes, I'll need to do a bunch of experimentation to see if I can get it working. But it's a messy solution to something that isn't even a thing on Windows.
This is more of a DE issue than Linux issue, I’m assuming you went with the default Gnome but you might like KDE or Cinnamon for a more windows like experience. I personally loved both of those DEs until I made the mistake of getting comfortable with a window manager
Fair point, I'm using what Fedora came with, but I can go with something else. Better if I do that sooner, rather than later. LOL
I may have another external SSD I can use, so it should be easy to just install another copy with KDE or whatever on it.
Or... I may just stick with GNOME, since I'd rather keep things simple anyway. Regardless, I'm glad I have options.
That's less about Fedora and Linux than it is about Gnome.
Coming from Windows to Gnome is a shitshow; honestly, I think it's the main reason there aren't more Linux users. If that's your first introduction to Linux, no wonder people run screaming for the exits. It's not an easy transition.
Using DEs like Plasma or Cinnamon is a way more welcoming way to change over. Maybe eventually you'll want to try Gnome and its opinionated workflow, but I think it's a terrible way to start out an already jarring transition.
Nobara is a good distro to use Fedora and have KDE by default, with the option to change later. And it has a pile of video tweaks and fixes for gaming and editing out of the box or via the welcome screen tasks.
Well, I think my experiment might have come to an early end.
Yesterday, when I booted up fedora, I lost my wifi (like, it didn't even give me the option to use wifi). Re-booted and it worked again.
Then I decided to get a copy of Fedora with KDE Plasma loaded up. Seemed fine, started setting it up.
Let's try some Windows software through Wine (Bottles, I believe, is what the actual software was called). Program 1: installed, but won't run. Program 2: installed, but won't run...
Then, out of nowhere: Blank screen.
After waiting several minutes, I hit the power button: FAILED FAILED FAILED messages "Failed to start plymouth-reboot.services" being the last. FFS...
I just don't understand how I can break Linux so quickly without really doing anything. My experience over the last 20 years of trying Linux has always ended the same. Are there no stable distros available? Ubuntu, Mint, Fedora, Elementary, Damn Small... none of them last more than a few days/weeks before they crash and burn.
And when Linux crashes and burns, I really don't know how to fix it.
It's extremely hard to go from Windows 11, which has been absolutely rock solid. Literally no problems, no crashes, no BSOD, no compatibility issues, etc. to Linux, even though I value Linux more.
I would rather not use Windows, but I feel like I'm forced to at this point.
That's the shits. Hardware can be finicky with Linux, especially laptops.
I would try Nobara or Manjaro, both have some pretty good hardware detection and updated/non-free drivers. Fedora itself doesn't have certain things in it that aren't "free" by default.
But you might not be destined to use Linux and no shame in that. Keep trying back if you change hardware.
My laptop is a Framework and has official support for Fedora and Ubuntu. I wouldn't expect these kinds of issues, TBH.
I can probably try a few more distros, but I'm just disappointed that the experience seems to always be the same :(
Their forum is pretty good, and there's a dedicated linux section there, too. They also have extensive support documentation.
I'm sure I can get it working to be more stable, but man, it's an effort for sure.
That is a fair point. I don't expect every feature to match 1:1. But minimize and maximize buttons seem like a no-brainer for basic use. At least, for how I use floating windows.
But... I'm glad that there are options to bring those features (and more) back if someone chooses.
Is this only when using Linux? The drive's S.M.A.R.T. status is all perfect (it's only got like 40 hours of use on it, and tests with no errors).
Maybe I can try another drive.
My drive was brand new when the issue started. I don’t think SMART showed anything wrong with it, apart maybe from the improper shutdowns count.
Not sure if it was Linux only, I never had Windows installed on that drive.
I just wish we could have less ways to do things in Linux.
I get that's one of the main benefits of the eco system, but it adds too much of a burden on developers and users. A developer can release something for Windows easily, same for Mac, but for Linux is it a flatpak, a deb, snap etc?
Also, given how many shells and how much pluggable infrastructure there is, it's not like troubleshooting on Windows or Mac, where you can Google something and others will have the exact same problem. On Linux some may have the same problem, but most of the time it's a slight variation, and there are fewer users in the pool to begin with.
So a lot of stuff is stacked against you. I would love for it to become more mainstream, but to do so I feel it needs to be a bit more like Android, where we just have a singular way to build/install packages, and try to get more people onto a common shell/infrastructure so there are more people in the same setup to help each other. Even if it's not technically the best possible setup, if it's consistent and easy to build for it's going to speed up adoption.
I don't think it's realistically possible but it would greatly help adoption from consumers and developers imo.
I love SteamOS for gaming and I think going forward that may get more and more adoption, but a lot of day to day apps or dev tools I use either don't have Linux releases (and can't be run via wine/Proton). I would love to jump over on host rather than dabbling with it via vms/steamdeck but it's just not productive enough.
One especially painful thing is when certain libs I'm developing with need different versions of glibc or gtk than the ones installed by default on the OS, and then I die inside.
I think Flatpak has done a lot to make this easier, but at the same time... I'll admit I'm not a fan of it (mostly due to random issues).
The way I see it, more distros need something like Arch Linux's AUR. If an application is reasonably easy to build, it really does not take much to get it into the AUR, from where there's also a path towards inclusion in the official repos.
I don't know too much about other distros, but Arch really makes it amazingly easy to package software and publish everything needed for others to use it. I feel like Linux needs more of this, not less; there's a great writeup that puts why Linux maintainers are important way better than I ever could.
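For a sense of scale, a minimal PKGBUILD really is tiny. This is a hedged sketch with hypothetical names and URLs, not a real package:
# Maintainer: you <you@example.com>
pkgname=myapp            # hypothetical
pkgver=1.0
pkgrel=1
pkgdesc="Example of how little the AUR asks for"
arch=('x86_64')
url="https://example.com/myapp"
license=('GPL')
depends=('qt6-base')
makedepends=('cmake')
source=("$pkgname-$pkgver.tar.gz::https://example.com/myapp/archive/v$pkgver.tar.gz")
sha256sums=('SKIP')      # fill in with updpkgsums in practice
build() {
  cmake -B build -S "$pkgname-$pkgver" -DCMAKE_BUILD_TYPE=Release
  cmake --build build
}
package() {
  DESTDIR="$pkgdir" cmake --install build
}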
Yeah it'd be nice if there was a really standardized Linux distro that gave developers a baseline to aim for, and then those of us who use the nerdier distros could just figure out our own stuff from there. I think Ubuntu was on track for that for a while, but they tend to go off on these tangents (Unity, Mir, Snaps etc.) which sometimes work against them, and now distros like Pop!OS and Mint are starting to fill that space a bit more.
Basically it's this lol
Package management in general is a bit of an issue. I think Nix has the right approach, where it's incredibly difficult to create a package that won't work on a given system. I think AppImage, Flatpak and Snap all work in a similar way.
Pip is a right pain in the arse though, if I had a nickel for every time a pip install has failed for some specific package with an esoteric error message...
The working out analogy is great, everyone with a technical job involving computers probably should keep a Linux machine, switching to it has skyrocketed my knowledge on computers in general
It's difficult though, I would compare daily driving it with cycling into work instead of driving, it's fun and good for you but constant effort
Select Audio Output Via Command Line
Does anyone know how I can select my audio output via the command line? I'm frequently switching between using my monitor's inbuilt speakers and a USB audio interface, and I'm finding it laborious to navigate graphically through the settings in GNOME to do so.
What I'd like to do is set up a couple of bash aliases and do it in my terminal.
What's the best way for me to do that?
Many thanks
Tech Cyborg reshared this.
Doesn't alsamixer work?
Anyway, you may wanna try pactl set-default-sink [sink-name] as well.
That depends on which audio system you're running.
Since this can vary depending on your distro, the easiest place to look for that info is going to be your distro's documentation. That documentation may also include instructions for how to accomplish exactly what you want.
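Since pactl speaks to both plain PulseAudio and PipeWire (through pipewire-pulse), one quick way to see which audio system you're actually on, assuming pactl is installed:
pactl info | grep 'Server Name'
# a name like "PulseAudio (on PipeWire 0.3.x)" means PipeWire; otherwise it's plain PulseAudio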
I've been ricing my GNOME DE.
Only joking. I had a bit of fun in GIMP to illustrate this post. You're welcome to use it if you want :)
Oh no, you're going to make me be that guy lol.
Ricing comes from "rice cooker", meaning a Japanese car. The term is so far removed from any racial implications now, that some people say RICE means "Race Inspired Cosmetic Enhancements", though it's just an excuse where one need not exist.
I regularly see people brigade for others to stop saying it, even though the word now exists on its own. People treat it like it's comparable to something like the Washington "Redskins", it isn't.
Pretty cool, I just found it crazy that:
A: nobody commented
B: you didn't mention the image at all
pactl list short sinks
gets you a list of devices with a numerical identifier. And
pactl set-default-sink ID
sets the default sink to the desired ID. I only ever want to swap between two, so I wrote a bash script to do that. I just type 'aud' and it does it for me.
#!/usr/bin/env bash
DEVICE=$1
# read input, parse list of available sinks (outputs)
if [ "$DEVICE" = "pc" ]
then
OUTPUT=($(pactl list short sinks | awk '{print $2}' | grep -i -E 'hdmi|samson|Targus' -v))
elif [ "$DEVICE" = "tv" ]
then
OUTPUT=($(pactl list short sinks | awk '{print $2}' | grep -i -E 'hdmi'))
else
echo "No valid input (must be either 'pc' or 'tv')"
exit -1
fi
# get all currently connected streams
INPUTS=($(pactl list short sink-inputs | awk '{print $1}'))
# change default sink (for new audio outputs)
pactl set-default-sink $OUTPUT
# switch sink for existing audio outputs
for i in "${INPUTS[@]}"
do
pactl move-sink-input $i $OUTPUT
done
# use notify-send to send a visual notification to the user that the sink changed
notify-send -c info "Default sink changed" "Changed default sink and sink-inputs to $OUTPUT"
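Saved as, say, ~/bin/aud and marked executable, usage is then exactly the two-alias workflow the question asked for; note the sink names matched against hdmi/samson/Targus above are specific to this poster's hardware.
chmod +x ~/bin/aud
aud pc   # move everything to the non-HDMI desktop outputs
aud tv   # move everything to the HDMI sink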
Our family mail server quit working today. Maybe it's a bit long in the tooth...
Apparently I installed that thing in 2006 and I last updated it in 2016, then I quit updating it for some reason that I totally forgot. Probably laziness...
It's been running for quite some time and we kind of forgot about it in the closet, until the SSH tunnel we use to get our mail outside our home stopped working, because modern OpenSSH clients refuse to accept the antiquated key type I set up client machines with way back when.
I just generated new keys with a more modern cipher that it understands (ecdsa-sha2-nistp256) and left it running. Because why not 🙂
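For anyone doing the same, the matching ssh-keygen invocations would look something like this; the file names are just examples.
# -t ecdsa -b 256 is what produces an ecdsa-sha2-nistp256 key:
ssh-keygen -t ecdsa -b 256 -f ~/.ssh/id_ecdsa_mailbox
# if the old server's OpenSSH is 6.5 (2014) or newer, ed25519 also works:
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_mailbox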
Get_Off_My_WLAN and Noxious like this.
It's behind a firewall. The only thing exposed to the outside is port 22 - and only pubkey login too.
And gee dude... It's been running for 18 years without being pwned 🙂
DaGeek247 likes this.
Did you really only use it when you were home? If you used it outside the firewall then port 25 must have been open also.
I used to run my own server and this was in the early 90s. Then one day, perusing the logs I realized I was not smart enough on the security front to even attempt such a thing. It was quickly shut down and the MX record moved to an outsourced mail provider.
Davel23 likes this.
If you used it outside the firewall then port 25 must have been open also.
Do you know what an SSH tunnel is?
Very very aware.
So you had another mail server elsewhere that port forwarded port 25 via port 22 to your internal mail server's port 25.
I take it that outside mail server was secure.
That's an impressive setup.
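For readers following along, the kind of tunnel being described looks roughly like this; the host name and ports are illustrative, not the poster's actual setup.
# forward local ports to the home server's IMAP and SMTP over SSH,
# so only port 22 is ever exposed to the internet:
ssh -N -L 1143:localhost:143 -L 1025:localhost:25 user@home.example.com
# then point the mail client at localhost:1143 (IMAP) and localhost:1025 (SMTP)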
Davel23 likes this.
We're not talking about some punch card COBOL machine he jimmy rigged with network access, it's an old Debian Linux box with SSH enabled.
It's not like Metasploit would have a tough time finding unpatched vulnerabilities for it...
DaGeek247 likes this.
Unless it's for SMTP only, it's probably a back-end server to some other front-facing box, or service, that has IP addresses whitelisted for email.
I'm pretty sure I read one of his comments elsewhere talking about tunneling everything over SSH, so I assume that's what he meant, but I could be mistaken.
Regardless, using an EOL distro as an internet facing SSH server that's 8 years behind on SSH updates, is probably a bad idea.
DaGeek247 likes this.
I've started up new domains and never had an issue getting mail accepted.
There's a right way to do it, and most people that complain that hosting email is impossible don't know how to configure it correctly.
Patience. It really helps to have all the latest set up: SPF, DKIM, DMARC. Then after that it's a matter of IP reputation, you can email the various blocklists and you wait for the rest of them to clear on their own.
I've had that IP for 10 years and it has never sent spam, and I've sent enough emails that people open that it actually does get through fine. I haven't had to think about it for a long time, it just keeps on working. Barely had to even adjust my Postfix config through the upgrades.
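For anyone attempting the same, the three DNS records look roughly like this; example.com and the selector are placeholders, and the DKIM public key itself comes from a generator such as opendkim-genkey.
example.com.                  TXT  "v=spf1 mx -all"
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key data>"
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"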
DaGeek247 likes this.
security-tracker.debian.org/tr…
Depending on how it was configured, it may or may not have been compromised. Probably better to go with the nuclear option.
Not to be that guy but why not use Curve25519?
I still remember all the conspiracies surrounding NIST and now 25519 is the default standard.
In 2013, interest began to increase considerably when it was discovered that the NSA had potentially implemented a backdoor into the P-256 curve based Dual_EC_DRBG algorithm.[11] While not directly related,[12] suspicious aspects of the NIST's P curve constants[13] led to concerns[14] that the NSA had chosen values that gave them an advantage in breaking the encryption.[15][16]
I'm fairly certain that SSH and whatever else you're exposing has had vulnerabilities fixed since then, especially if modern distros refuse to use the SSH key you were using; that screams "we found something so critical here we don't want to touch it". If your server exposes anything on a standard port, e.g. SSH on 22, you probably should do a fresh install (although I would definitely not know how to rebuild a system I built almost 20 years ago).
That being said, it's amazing that an almost 20 year old system can work for almost 10 years without touching anything.
They normally are isolated systems with controlled access. Same with shipping and any other critical industry.
Not to say that there aren't exceptions but these days there is a required level of compliance
A family email server? Your family has an email server to themselves? You've managed to deal with block lists for over two decades?
My utmost respect to your dedication