Cheatsheet script for displaying Linux command examples
You can use the cheat.sh web service to show cheatsheets for all kinds of commands — just replace the command name: curl -s cheat.sh/date. I also wrote a simple script, saved under the filename ?, to get a working command quickly: with no argument it shows all commands in an fzf menu, and with a command name it shows that cheatsheet in the less pager.
Usage:
?
? -l
? date
? grep

Script ?:

#!/usr/bin/env bash
cheat='curl -s cheat.sh'
menu='fzf --reverse'
pager='less -R -c'
cachefile_max_age_hours=6
# Path to temporary cache file. If your Linux system does not support /dev/shm
# or if you are on MacOS, then change the path to your liking:
cachefile='/dev/shm/cheatlist' # GNU+LINUX
# cachefile="${TMPDIR}/cheatlist" # MacOS/Darwin
# Download list file and cache it.
listing () {
    if [ -f "${cachefile}" ]
    then
        local filedate=$(stat -c %Y -- "${cachefile}")
        local now=$(date +%s)
        local age_hours=$(( (now - filedate) / 60 / 60 ))
        # -gt does a numeric comparison; > inside [[ ]] compares strings.
        if [[ "${age_hours}" -gt "${cachefile_max_age_hours}" ]]
        then
            ${cheat}/:list > "${cachefile}"
        fi
    else
        ${cheat}/:list > "${cachefile}"
    fi
    cat -- "${cachefile}"
}
case "${1}" in
    '')
        if selection=$(listing | ${menu})
        then
            ${cheat}/"${selection}" | ${pager}
        fi
        ;;
    '-h')
        ${cheat}/:help | ${pager}
        ;;
    '-l')
        listing
        ;;
    *)
        ${cheat}/"${1}" | ${pager}
        ;;
esac

cheat.sh - The only cheat sheet you need
cheat.sh - The only cheat sheet you need. GitHub Gist: instantly share code, notes, and snippets. (Gist)
Basic examples for the Linux date command
I rarely ever use the date command, but when I need it I almost always struggle to get the right incantation. So I wrote a blog post for easy reference.
Do you use a cheatsheet as well?
Basic examples for the Linux date command
Examples and resource links for the Linux date command. (learnbyexample.github.io)
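For quick reference, here are a few incantations of the sort such a cheatsheet typically covers — all standard GNU coreutils date behavior:

```shell
#!/usr/bin/env bash
# Print the current time in a custom format:
date '+%Y-%m-%d %H:%M:%S'

# Epoch seconds, and converting an epoch value back:
date +%s
date -u -d @0              # Thu Jan  1 00:00:00 UTC 1970

# Relative dates (GNU date's -d flag):
date -d 'next friday' '+%A %F'
date -d '2 weeks ago' '+%F'
```

The -d strings are GNU extensions; on BSD/macOS date the flags differ.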
date is the command for setting the system date and time from the command line. Nothing to do with formatting, beyond the fact that it presumably applies system locale settings when echoing date-time info.
Besides tldr, which someone else suggested, there's also the cheat command. It's pretty easy to add your own cheat sheets to it if you have custom commands or want to keep a specific example. I've never kept a physical cheat sheet... they're just too inconvenient, and my fingers are probably already at the keyboard.
For all I know, new versions probably run fine in current OSs. But I don't own new versions. I could use open source stuff that has less features and less creature comforts, but then I also need to dedicate a newer laptop to the go box.
The whole point of that hobby is reliability and stability. Those old lenovos are tanks and I have spares for days.
A new way to develop on Linux - Part II
A new way to develop on Linux - Part II
In our last update, we shared how to improve the developer experience for building and testing software using system extensions with sysext-utils. This time, we want to share how to leverage that work to enhance end-to-end testing. (www.codethink.co.uk)
Article picture is of a mac and even better, touchbar controls are for photos.
I love those very real tech pics.
Andries Brouwer on the OOM killer
Via Andy Miller (2007), an amusing metaphor for Linux memory overcommit. Originally posted by Andries Brouwer to the linux-kernel mailing list, 2004-09-24, in the thread titled “oom_pardon, aka don’t kill my xlock”:

An aircraft company discovered that it was cheaper to fly its planes with less fuel on board. The planes would be lighter and use less fuel and money was saved. On rare occasions however the amount of fuel was insufficient, and the plane would crash. This problem was solved by the engineers of the company by the development of a special OOF (out-of-fuel) mechanism. In emergency cases a passenger was selected and thrown out of the plane. (When necessary, the procedure was repeated.)

A large body of theory was developed and many publications were devoted to the problem of properly selecting the victim to be ejected. Should the victim be chosen at random? Or should one choose the heaviest person? Or the oldest? Should passengers pay in order not to be ejected, so that the victim would be the poorest on board? And if for example the heaviest person was chosen, should there be a special exception in case that was the pilot? Should first class passengers be exempted?

Now that the OOF mechanism existed, it would be activated every now and then, and eject passengers even when there was no fuel shortage. The engineers are still studying precisely how this malfunction is caused.
Twenty years later, as far as I know, the OOM killer is still going strong. In fact, if you don’t like the airline’s policy on what counts as an “emergency” (for example, that it might exhaust your swap partition too before killing any bad actor at all), you can hire your own hit man, in the form of the userspace daemon earlyoom.
Explanation of the OOM-Killer: Understanding Out of Memory Killer (OOM Killer) in Linux
What is Out of Memory Killer (OOM Killer) in Linux?
Learn about the Linux kernel's out-of-memory handling mechanism. Abhishek Prakash (Linux Handbook)
Can it not be disabled? I've heard so many horror stories about the OOM killer that I'm really not a fan at this point.
And might as well add one of my own.
I needed to do an unpacking of a very large file, which I kept running in the background, but it used a ton of memory and took a ton of time. So, to make sure I wasn't bored for 30 minutes, I opened up the browser. Around 10 minutes later, I went to check on the window where the operation was running, only to find the operation had... stopped? So after that, I just started the operation again, closed all other windows and background programs, and checked out stuff on my phone while I waited.
I mean, this is literally what someone in the original mailing list said:
How about a sysctl that does "for the love of kbaek, don't ever kill these processes when OOM. If nothing else can be killed, I'd rather you panic"?
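That wish more or less exists today: each process has /proc/<pid>/oom_score_adj (range -1000 to 1000), and -1000 exempts it from the OOM killer entirely, while the "I'd rather you panic" part is the vm.panic_on_oom sysctl. A quick sketch (raising your own score needs no privileges; lowering it requires root):

```shell
#!/usr/bin/env bash
# Current "badness" score and adjustment for this shell:
cat /proc/self/oom_score
cat /proc/self/oom_score_adj   # default 0

# Raising the score (volunteering as a preferred victim) needs no privileges:
echo 500 > /proc/self/oom_score_adj

# Exempting a process entirely requires root (CAP_SYS_RESOURCE):
#   echo -1000 | sudo tee /proc/<pid>/oom_score_adj
# And the "just panic instead" option:
#   sudo sysctl vm.panic_on_oom=1
```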
Open Source extension that greatly increases streaming speeds
GitHub - Andrews54757/FastStream: Stream videos without buffering in the browser. An extension that replaces bad video players on the internet with a better, accessible video player designed for your convenience.
Stream videos without buffering in the browser. An extension that replaces bad video players on the internet with a better, accessible video player designed for your convenience. - Andrews54757/Fas... (GitHub)
Automatic fragmentation and parallel requests for up to 6x faster download speeds. Watch videos without interruptions by predownloading the video in the background.
Looks like it mostly just buffers the video more aggressively. If you have a good Internet connection it won't do anything useful other than peg the server's connection downloading the entire video file at gigabit speeds and make it worse for other viewers.
"You're trying to be efficient and you use a computer too well, you must be a bot! You're banned."
My experience every time I try to use most ecommerce sites like amazon
I get that it's abnormal for me to open twenty tabs for a bunch of products and a bunch of different queries simultaneously. That's just being good at computers, and it should be encouraged.
Don't ban people who are abnormal. Machine learning anomaly detection is making the internet unusable.
If you have good internet it could make it significantly better
TCP transfers are limited by window size divided by round-trip latency; if I'm in Australia with gigabit internet downloading from Europe, a single connection could be limited to mere megabits!
I've pulled multi gigabit between Australia and Europe back when I worked at PIA, over OpenVPN over TCP. You just need the appropriately sized buffers and window sizes.
You need extra-large buffers because you have to hold on to each packet until it's acknowledged, in case you need to retransmit it. At gigabit+ speeds with some 300ms of latency to deal with, that can be hundreds of MB of packets, on top of the regular queue.
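The ceiling being described is the bandwidth-delay product: a sender can only have one window of unacknowledged data in flight per round trip, so throughput tops out at window/RTT, and filling a fast long path needs rate × RTT of buffering. Plugging in this thread's example numbers (shell arithmetic only):

```shell
#!/usr/bin/env bash
# Bandwidth-delay product: bytes that must be "in flight" to fill the pipe.
rate_bits=1000000000   # 1 Gbit/s
rtt_ms=300             # Australia <-> Europe round trip

bdp_bytes=$(( rate_bits / 8 * rtt_ms / 1000 ))
echo "BDP: ${bdp_bytes} bytes (~$(( bdp_bytes / 1024 / 1024 )) MiB)"

# Conversely: a small 64 KiB window over the same path caps out at
max_bits=$(( 64 * 1024 * 8 * 1000 / rtt_ms ))
echo "64 KiB window over 300 ms: ~$(( max_bits / 1000000 )) Mbit/s"
```

So a single gigabit flow over 300 ms needs roughly 37 MB in flight, and a small fixed window really does collapse to single-digit megabits, as described above.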
But fair enough, that will workaround the issue.
Bingo, that's the core issue. Big fat windows decrease operating efficiency at large scale and if most clients are nearbyish it's unnecessary.
I've noticed PIA do a good job, maybe that's your work at play!
Nope. Everything has been replaced by Kape's infrastructure, from what I've heard. I worked there 2016-2019, so 5 years ago already.
They did do an alright job with the app though vs what we had to deal with back then. I ran those kinds of speed tests between regions over OpenVPN in part to disprove speed complaints which, Windows + OpenVPN + TCP on Windows 7, yeah the speeds weren't amazing but nothing I could do about that.
It was kind of funny in retrospect though, everyone online was like PIA is faster, no AirVPN is faster, no ExpressVPN and flame wars of which one had the worst speeds. I measured it, it was all the fucking speed/latency curve on the client's side 😂
Default audio level and SponsorBlock integration · Issue #16 · Andrews54757/FastStream
It's 100% by default. I am deaf now. I don't want others to be deaf too. I missed the audio config option. Also, if possible, maybe add SponsorBlock support. The extension is working great for me. (GitHub)
Prosecutors have filed charges in a case in which a man in his 20s was shot dead in Sätra centrum in southern Stockholm on 7 August 2023. Seven people have been charged with involvement in the murder. In addition, three people are charged with aggravated weapons offences. All of the accused are in custody.
blog.zaramis.se/2024/08/22/sju…
Seven people charged with murder in Sätra, Stockholm - Svenssons Nyheter
Prosecutors have filed charges in a case in which a man was shot dead in Sätra centrum in southern Stockholm. Seven people have been charged with involvement. Anders_S (Svenssons Nyheter)
Why I Prefer Minetest To Minecraft - YouTube
On YouTube you'll find the most popular videos and tracks. You can also upload your own content and share it with friends or with the whole world. (www.youtube.com)
If you run your own server it's as simple as setting
online-mode=false
And installing an account management plugin if you open it to other players (so people can't just easily change names and impersonate someone)
I exclusively played on and ran cracked servers for 6 years (2011-2017), shit rules
I completely get where people are coming from with that opinion, but I've been playing MC for almost 15 years, and I'm having just as much if not more fun with the game now as I did at any other point in its development.
Minetest is super cool and can be very fun. I play a bit of it as well, but exclusively advertising it from a platform of "hur dur mineshit sucks" (which isn't necessarily what you're doing, I just see that a lot) definitely isn't the best way to go.
Sounds super cool :o ... Am still kinda salty about M$ blocking my account and holding my copy of Minecraft (that I paid Mojang for, well before it was Microsoft's!) hostage because they want my phone number, though. 😠
... Also I kinda wanna know if it's got the moddage I love about Minecraft, but am afraid to ask because I'm stuck on a laptop that can't really run much without getting all melty 😅
Yep, I didn't convert either of my accounts over as well.
I would just try it and see what you think of it! It's completely free. Minetest is the program you install on your computer, and then there are lots of different games that you can download and try inside of Minetest. There's more besides Minecraft-likes that you can try, and there are definitely mods available. I never modded Minecraft though, so I'm not sure how they compare.
As to system requirements, it could run pretty well on a six year old Android phone the last I tried. It might be worth a shot on your laptop! Be aware that it'll probably be a somewhat different experience than Minecraft, but not necessarily in a bad way!
Haven't watched the video - just my thoughts...
Minetest (specifically Mineclone2) is an impressive feat, and a very faithful reproduction of the original. I pretty much used the Minecraft fandom wiki to progress through the game. Hours of fun was had without handing money to M$.
I only really stopped because the redstone functionality wasn't fully implemented.
Hats off to the devs on that project regardless
The redstone is a large feat indeed. I started working on that but had to stop due to time constraints. It's still in my head though.
All in all there is just too much great stuff someone with a little drive and a little coding knowledge can do in the foss ecosphere.
Thanks :) it is a bit confusing
Irrlicht is discontinued but I think it is under a different name now
wayland.freedesktop.org/xserve…
It provides backwards compatibility for running X apps under Wayland.
Individual apps, particularly full-screen games, shouldn't need "Wayland support" (quotes because what that means will vary between implementations).
Now, if you have to install Xorg on a system that doesn't have it in order to play a game? Yeah, that would suck, although games are on my personal shortlist of application categories that should always be run from a Flatpak/equivalent and/or containerized wherever possible.
Now that I think about it, why don't (anti-cheat) games just run their own VMs and "calibrate" those against any weird system variables? Seems like a better anti-cheat than hacking-my-kernel-to-make-sure-I'm-not-hacking-the-game...
Even if you use Flatpak, you need XOrg / XWayland on the host system.
Fedora Kinoite/KDE and the KDE Plasma desktop on its own are especially annoying, as I have no idea how to stop those legacy support services, like XWaylandVideoBridge (never used) or XWayland entirely, from constantly running.
I think Windows is just too bloated to also use Containers. With WSL they found a good way and apps should totally run in containers, but this is simply not yet done.
VMs would suck for efficiency as they rely on CPU virtualization and GPU passthrough. The former will never give native performance
Not all dependencies. Flatpak is an application, and a display server is outside of an application.
Closing an app should not result in a black screen XD
Not that hard to stop Wayland or Xorg at the launch of a given application and restart it at that application's exit. Of course, I only did it on the Raspberry Pi, because the hardware lagged horribly running such apps with a GUI/compositor/desktop the app wasn't using in the background, but it wasn't hard for me to get working, and it's exactly how we did things with DOS apps and even some Windows games back in the WFWG 3.11 days.
Basically, there's no technical reason the host operating system should have to provide, say, X, KDE, Plasma, GNOME, GTK, Wayland, whatever, to a Flatpak app that needs those things. Yes, the result is a larger Flatpak, but that's why Flatpaks do dependency consolidation.
Unless... Unless, you just really want to run your games windowed with smooth window-resizing, minimization, maximization, etc.
Does anyone know how to "fly" with double space?
And how to fly faster?
These were big issues I had with Mineclone2, or whatever it is called now.
While in game, Escape>Change Keys> in the right corner checkbox called "Double tap 'jump' to toggle fly". For flying faster, you can press J (by default?) to enable fast mode, you can change how fast it is in the settings menu, in the main menu. This is all assuming that you have the 'fly' and 'fast' privileges.
All players should check the settings menu at least once; I changed many things in there, and you should too. One of them was enabling the crosshair in the Touchscreen menu on mobile, which makes for a much smoother experience.
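The same toggles can also be set in minetest.conf directly, if you prefer. A sketch, assuming the key names of recent Minetest releases (the 'fly' and 'fast' privileges are still required in multiplayer):

```
# minetest.conf
doubletap_jump = true       # double-tap 'jump' (space) to toggle fly
free_move = true            # fly mode on
fast_move = true            # fast mode (default key: J)
movement_speed_fast = 20.0  # how fast "fast" is
```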
Thanks, friend! Also, do you mean the creator of Minecraft, the creator of Minetest, or the creator of the video?
Curse you, syntactic ambiguity!!! xD
So I tried VoxelLibre recently and I have three main papercuts:
* The lack of dual wielding (and perhaps crits and other Combat Update things).
* Shift clicking items doesn't do the same thing as Minecraft in a lot of cases. Shift clicking armour doesn't equip it, for example.
* I think sometimes there's a keyboard combination for opening the inventory (shift+I?) that I keep accidentally hitting when I try to move.
Still, it's an interesting project and I look forward to how it continues.
It's open, it's free, and it's fun! It's got a ton of mods and custom games to make it whatever you want out of a voxel game. That's everything I need.
Shoutouts to the Asuna game.
I played quite a bit of solo mineclone2/voxelibre. Really good stuff with a surprisingly short wishlist on my part.
It's silly, but one of my favorite things is that it fires up the launcher in under a second. Reminds me of when software wasn't bloated halfway to hell. 😁
Played some Voxellibre for the first time after seeing this. I fell to the most classic of blunders: I tried to spam click to kill an enemy.
My Minecraft skill did not translate.
Pixel ImPerfection - ContentDB
Pixel Im(proved)Perfection aims to be close and familiar to the Minecraft textures. (content.minetest.net)
Hell yeah.
Let's fucking goooooooo!
Replacing M.2 system drive (btrfs) on motherboard with single slot
I finally have the budget to build my first NAS and upgrade my desktop PC. I have used Linux for quite some time, but am far from an expert.
One of the steps is to move my M.2 NVME system drive (1TB) from my desktop to my NAS. I want to replace it with a bigger NVME drive (2TB). My current motherboard only has a single M.2 slot, that's why I bought a M.2 enclosure.
My goal is to put my new drive into the enclosure, clone my whole system disk onto it and then replace the old drive. At first I found several posts about using clonezilla to clone the whole drive, but some posts mentioned it not working well with btrfs (/ and /home subvolume), which is the bulk of my drive.
I have some ideas how I might pull it off. My preliminary idea is:
1. clone my boot partition with clonezilla
2. use btrfs-clone or "moving my butter" to transfer the btrfs partition
3. resize the partitions with gparted (and add swap?)
The two aspects I'm uncertain about are:
1. UUIDs
2. fstab
I plan to replace the old drive, so the system will not have two drives with the same UUID. If the method results in a new UUID I need to edit fstab.
As you can see I'm not sure how to proceed. Maybe I can just use clonezilla or dd to clone my whole drive?
If someone has experience with such a switch or is just a lot more familiar with the procedures, I would love some tips and insight.
Thanks for reading.
////////////////////////////////////////////////////////////////////////////////////////////////////////////
EDIT: Thinking about how to do it might have actually taken longer than the procedure itself.
For anyone in a similar situation, I was able to replace the drive with these steps:
- clone the whole drive (new drive has a bigger capacity) with clonezilla
- physically switch the drives
- boot into a live medium and resize the btrfs partition on the new drive with gparted
- boot into the main system and adjust the filesystem size with
sudo btrfs filesystem resize max /
With two NVME drives (even though one was in a USB M.2 enclosure) everything took about 30 minutes. About 300 gigs of data were transferred. I haven't found any problems with the btrfs partition thus far. Using dd like others recommended might work as well, but I didn't try that option.
GitHub - mwilck/btrfs-clone: A tool for copying a btrfs file system with all its subvolumes to another btrfs file system
A tool for copying a btrfs file system with all its subvolumes to another btrfs file system - mwilck/btrfs-clone (GitHub)
Is your system drive really that: just a system drive? Then you'd better install it from scratch and have a clean, shiny and new system.
Backup a few settings maybe. Or maybe not.
Then you’d better install it from scratch and have a clean, shiny and new system.
You know how it is, I just got my system right. Of course lots of settings can just be duplicated, but I would prefer not to set up some systemd services, cron jobs, etc. again.
When you say system drive, this will also have your EFI system partition (usually FAT-formatted, as that's the only standard all UEFI implementations support), maybe also a swap partition (if not using a swap file instead), etc... so it's not just copying the btrfs partition your system sits on.
Yes, clonezilla will keep the same UUID when cloning (and I assume your fstab properly uses UUIDs to identify drives). In fact clonezilla uses different tools depending on the filesystem and data... at the lowest level (for example for unlocked encrypted data it can't handle otherwise), clonezilla is really just using dd to clone everything. So cloning your disk with clonezilla, then later expanding the btrfs partition to use up the free space, is an option.
But on the other hand just creating a few new partitions, then copying all data might be faster. And editing /etc/fstab with the new UUIDs while keeping everything else is no rocket science either.
The best thing: just pick a method and do it. It's not like you can screw it up, as long as you're not stupid and don't accidentally clone your empty new drive over your old one...
I had a similar case.
My minipc has a microSD card slot and I figured if it could be done for a RPI, why not for a mini PC? :P
After a few months I bought a new M.2 NVMe drive, but I didn't want to start from scratch (maybe I should've looked into Nix?)
So what I did was sudo dd if=/dev/sda of=/dev/sdc bs=1024k status=progress
And that worked perfectly!
Things to note:
- both drives need to be unmounted, so you need a live OS or another machine.
- The new drive will have the same exact partitions, which means the same size, so you need to expand them after the copy.
- PS: this was for a drive with ext4 partitions, but in theory dd works on raw bytes, so it shouldn't matter which filesystem you use.
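Whichever cloning route you take, it's cheap to verify the copy before wiping the source. A sketch with scratch files standing in for block devices (on real hardware you'd point cmp at the /dev nodes, limiting the comparison to the smaller drive's size):

```shell
#!/usr/bin/env bash
# Files standing in for the old and new drives.
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 status=none
dd if=/tmp/src.img of=/tmp/dst.img bs=1M status=none   # the "clone"

# cmp -n limits the comparison to the source's size
# (the clone may be larger, like a 1TB -> 2TB drive swap).
size=$(stat -c %s /tmp/src.img)
cmp -n "$size" /tmp/src.img /tmp/dst.img && echo "clone verified"
```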
I would recommend using this as an opportunity to build out and use a backups system. Whenever I get a new laptop, for example, I just make a(nother) backup on the old laptop and restore whatever I want to the new one. If there are any files I want that are normally excluded from backups, I either tweak my rules to include those files/put them in a different directory and repeat the process or just make a new manual external backup copy temporarily.
If you have good backups then your new drive can be populated from them after creating new partitions. Optionally, you can also take this opportunity to reinstall the OS, which I personally prefer to do because it tends to clean up cruft.
Also, if you go this route, your data on your old drive is 100% intact throughout the process. You can verify and re-verify that all the files you want are backed up + restored properly before finally formatting the old drive for use in the NAS.
Personally, if the NAS is up and running, I'd migrate the home directory and anything else important from the desktop to it and host those folders over the network; set aside the 1TB, install the 2TB, do a fresh install, and see if I can still get to everything happily.
Alternatively--if you want to preserve stuff locally--new drive in an enclosure, attach to desktop, boot from an install USB, fresh install to the 2TB, reboot from the 2TB, mount the 1TB, migrate data, install the 2TB. I don't think there should be a UUID problem doing that, but even if there was, you could still boot from the install stick and try to fix it manually.
Use rsync to copy everything over. Updating /etc/fstab with the new UUIDs isn't a big deal (though you can also manually specify the partition UUID at format time: mkfs.btrfs --uuid ...). You didn't say what filesystem your /boot partition is using, so I don't want to guess.
you didn’t say what file system your /boot partition was using, so I don’t want to guess
It's actually easy to guess. There is exactly one filesystem UEFI has to support by its specification; everything else is optional... so unless you produce for Apple (they demand APFS support for their hardware), no vendor actually cares to implement anything but FAT.
Do you have pci-e slots?
I had to decide between an M.2 enclosure and a PCIe card. Since I plan to build a new system (with more M.2 slots), I will have more slots in the future. And maybe I won't like the M.2 enclosure and will return it. 😉
Clonezilla can clone BTRFS without issues
Afterwards on the system use sudo btrfs filesystem resize max / to make it use that space. Maybe add a balance.
If you're feeling adventurous:
- You can use a thumb drive to boot.
- Verify the device path for your normal boot disk and for your new drive using gnome disks or similar. In this example I'll call them /dev/olddisk0n1 and /dev/newdisksda
- really, really don't mix up the in file and out file. In file (if) is the source. Out file (of) is the destination.
sudo dd if=/dev/olddisk0n1 of=/dev/newdisksda bs=128M
- or, if you want a progress indicator: sudo pv /dev/olddisk0n1 > /dev/newdisksda
- wait a long time
Note that this is not the recommended method if you're new to the terminal, but it's totally viable if you have limited tools or are comfortable at the command prompt.
Unless you're using the new disk on the same system as the old one, you don't have to worry about UUIDs, though they will be identical on both drives.
Your system is likely using UUIDs in fstab. If so, you don't have to worry about fstab. If not, there's still a damned good chance you won't have to worry about fstab.
To be sure, check fstab and make sure it's using UUIDs. If it's not, follow a tutorial for switching fstab over to using UUIDs.
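Switching an fstab entry to a new UUID is a one-liner once you have both values. A sketch against a scratch copy rather than the live file (the UUIDs and device path here are made up; on a real system you'd get the new one with blkid):

```shell
#!/usr/bin/env bash
# On the real system:  new_uuid=$(blkid -s UUID -o value /dev/nvme0n1p2)
old_uuid="1111aaaa-2222-bbbb-3333-cccc4444dddd"
new_uuid="5555eeee-6666-ffff-7777-00008888aaaa"

# Build a sample fstab to operate on.
printf 'UUID=%s /     btrfs defaults,ssd 0 1\nUUID=%s /home btrfs defaults,ssd 0 2\n' \
  "$old_uuid" "$old_uuid" > /tmp/fstab.test

# Swap every reference to the old UUID for the new one.
sed -i "s/${old_uuid}/${new_uuid}/g" /tmp/fstab.test
cat /tmp/fstab.test   # both entries now point at the new UUID
```

On the real file you'd edit /etc/fstab the same way (after backing it up), then run findmnt --verify to sanity-check it before rebooting.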
What the fuck is an SBAT and why does everyone suddenly care
Follow up to: “Something has gone seriously wrong,” dual-boot systems warn after Microsoft update
SBAT was developed collaboratively between the Linux community and Microsoft, and Microsoft chose to push a Windows update that told systems not to trust versions of grub with a security generation below a certain level. This was because those versions of grub had genuine security vulnerabilities that would allow an attacker to compromise the Windows secure boot chain, and we've seen real world examples of malware wanting to do that (Black Lotus did so using a vulnerability in the Windows bootloader, but a vulnerability in grub would be just as viable for this). Viewed purely from a security perspective, this was a legitimate thing to want to do.
...
The problem we've ended up in is that several Linux distributions had not shipped versions of grub with a newer security generation, and so those versions of grub are assumed to be insecure (it's worth noting that grub is signed by individual distributions, not Microsoft, so there's no externally introduced lag here). Microsoft's stated intention was that Windows Update would only apply the SBAT update to systems that were Windows-only, and any dual-boot setups would instead be left vulnerable to attack until the installed distro updated its grub and shipped an SBAT update itself. Unfortunately, as is now obvious, that didn't work as intended and at least some dual-boot setups applied the update and that distribution's Shim refused to boot that distribution's grub.
...
The outcome is that some people can't boot their systems. I think there's plenty of blame here. Microsoft should have done more testing to ensure that dual-boot setups could be identified accurately. But also distributions shipping signed bootloaders should make sure that they're updating those and updating the security generation to match, because otherwise they're shipping a vector that can be used to attack other operating systems and that's kind of a violation of the social contract around all of this.
So they claimed it wasn't supposed to affect dual boots, yet it was specifically to patch a vulnerability in GRUB, something a Windows-only user has no reason to ever use (that I'm aware of)?
So how could this have affected anyone but people who dual boot? Sketchy.
And I do, generally. But like I said, I did not read it carefully because I had no reason to.
So if they addressed what I said, I didn't read that part. 🤷🏻
I have secure boot and tpm disabled on my rig. I’ve been called a fool for this. But I don’t understand how it works, and this is an example.
If I was smart enough to code a new OS or a new boot loader (which I’m not) - how does it become different than a virus? Who approves my code is “safe” to run?
Clearly in this case Microsoft said “those versions of grub are not safe.” So what does that mean? I’m not allowed to run them now because Microsoft decided? That’s all it takes? The whole “what’s safe to run” thing baffles me.
Am I supposed to believe that a govt agency like the nsa could NEVER put malicious backdoors into Microsoft’s products, that Microsoft would NEVER allow that to happen, and that code would NEVER be flagged as safe?
I get it…. It helps with obvious viruses and whatnot. But in my experience, all secure boot has ever done for me is cause problems and lock me out of my computer.
Microsoft, by default, decides which code is safe to run, yes.
However, that's not the only way to use Secure Boot; I enroll my own certificates in addition to Microsoft's, allowing code that I sign to be booted into. This requires some UEFI setup once.
For most machines, Secure Boot should never lock you out completely; you can always disable it, fix your boot chain and reenable.
I think it's actually sensible technology, but as every security feature, it usually comes at the cost of some convenience.
It's to protect the user against malware that would insert itself in the boot chain and run at higher privilege than the kernel. Just booting a malicious ISO can insert malware in the boot chain without your knowledge. Once you're in the boot chain, you boot before the kernel, so you can inject whatever drivers you want.
That's particularly important on corporate computers where they don't want users to bypass IT policies, but also important for the average Windows user that won't stop loading malware on their computers. Without secure boot there's nothing stopping you from forcing yourself local admin privileges or even silently exfiltrate data.
That's been a thing forever: DOS boot sector malware for example. By only booting signed bootloaders and kernels, you can ensure this doesn't happen.
I have a friend that abused an insufficiently locked down GRUB to root his workstation at work by using the init=/bin/sh trick to patch a SUID binary to make his own sudo.
However, that’s not the only way to use Secure Boot; I enroll my own certificates in addition to Microsoft’s, allowing code that I sign to be booted into. This requires some UEFI setup once.
Do you by chance have a guide or documentation you followed to do this that you could link?
Don't know how much this would help you; I did this on NixOS, but the steps for creating the key pair and enrolling it are the same on all distributions, while the UEFI steps can vary depending on the manufacturer.
github.com/nix-community/lanza…
wiki.archlinux.org/title/Unifi… for Arch
lanzaboote/docs/QUICK_START.md at master · nix-community/lanzaboote
Secure Boot for NixOS [maintainers=@blitz @raitobezarius @nikstur] - nix-community/lanzabooteGitHub
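The key-creation part is plain openssl regardless of distro; enrolling the resulting certificate (via your firmware UI, sbctl, or efitools) is the vendor-specific step those guides cover. A minimal sketch of generating a signing key and self-signed certificate (filenames and subject are arbitrary):

```shell
#!/usr/bin/env bash
# Generate a 2048-bit RSA key and a self-signed cert valid ~10 years.
openssl req -new -x509 -newkey rsa:2048 -nodes -sha256 -days 3650 \
  -subj "/CN=My Secure Boot db key/" \
  -keyout /tmp/db.key -out /tmp/db.crt

# The .crt gets enrolled in the firmware's db; the .key then signs boot
# images, e.g. with sbsigntools:
#   sbsign --key db.key --cert db.crt --output vmlinuz.signed vmlinuz
```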
I don't think Microsoft cares that much anymore. The OS wars are over.
Every Windows now ships with a one-button Linux installer.
Powershell has default aliases so you can use bash commands for basic stuff.
Microsoft is one of the top contributors to the Linux kernel.
They provide documentation on how to install Linux.
They have published a Linux distro.
They don't care because that's not where they make their money. Their focus is on keeping their market dominance in Office, Exchange and AD, or M365, Exchange Online and Entra, respectively (all of which can be accessed from Linux). With those products, they can basically demand a tax of ~$20-30/month/employee from every business in the world.
How to download and install Linux
Download and install Linux in this tutorial that covers how to choose a distribution, how to use the install command with Windows Subsystem for Linux, create a bootable USB for Bare-metal, or set up a Virtual Machine.learn.microsoft.com
FOSS JS extension? (blocking non-FOSS JS by default)
I am a long-time user of the NoScript extension (noscript.net/). For those who don't know it: it automatically blocks all JavaScript and lets you accept scripts (temporarily or permanently) based on their origin domain.
NoScript has some quality-of-life options like 'accept scripts from the current page's domain by default', so only third parties are blocked (useful on mobile, where it is tedious to go through the menu).
When I saw LibreJS (gnu.org/software/librejs/) I thought it would be a better version of NoScript, but it is quite different in usage: it cares about the license, not about whether the code is actually open source (maybe it can't).
Am I the only one who has thought about filtering for open-source JS scripts (at least by default)? This would require reproducibility of the 'compilation'/packaging. I think with lock files (npm, yarn, etc.) this could be doable, and we could have some automatic checks for the code.
Maybe the trust system for who does the checking could be a problem. I have wanted to discuss this matter for a while.
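For what it's worth, npm lock files already carry per-package integrity hashes, which is the raw material such a check would build on. A sketch of recomputing an npm-style integrity string for a downloaded tarball (the file and its contents are stand-ins for demonstration):

```shell
# package-lock.json records each package as "integrity": "sha512-<base64 digest>".
# Create a stand-in "tarball" so the example is self-contained:
printf 'example package contents' > package.tgz

# Recompute the npm-style integrity value for it:
actual="sha512-$(openssl dgst -sha512 -binary package.tgz | openssl base64 -A)"
echo "$actual"

# An automated check would compare $actual against the "integrity" field
# copied out of package-lock.json for that package.
```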
What is it? - NoScript: Own Your Browser!
The NoScript Security Suite is Free Software protecting Firefox (on Android, too!), Chrome, Edge, Brave and other web browsers. Install NoScript now!noscript.net
like this
timlyo likes this.
reshared this
Open Source reshared this.
Publishing lock files of running services would be a big security risk for the service owner as it gives an easily parsable way for an attacker to check if your bundle includes any package versions with vulnerabilities.
You then also have tools like Snyk, used by many big organisations, which can patch dependencies before the actual dependency's maintainers publish the patch themselves. This would lead to a version not corresponding to the bundled code.
In fact, given that bundling is pretty ubiquitous but infinitely configurable at this point, even validating the integrity of the bundle vs the versions in a lock file is a problem that will be hard to solve. It's kinda like wanting to untoast bread.
Also, given that many JS projects have a lock file describing the dependencies of the front-end bundle, the server and the build tooling all together, there is a risk of leaking information about those too (it's best practice to make as little as possible about your server configuration publicly viewable).
IMO, the solution to this problem today is to use a modern, updated browser that sandboxes execution, run an adblocker with appropriately trusted blocklists for what you're avoiding, try to only use sites you trust, and, if you can, push web developers to use CSP & SRI to prevent malicious actors from injecting code into their sites without them knowing. Many sites already take advantage of these features, so if you trust the owner, you should be able to trust the code running on the page. If you don't trust the owner with client-side JS, you probably shouldn't trust them with whatever they're running on the server side either.
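As a concrete example of the SRI mechanism: the integrity value a site owner puts on a script tag is just a base64-encoded hash the browser re-checks on load. A sketch of generating one (the file and its contents are stand-ins):

```shell
# Create a stand-in script file so the example is self-contained:
printf 'console.log("hi");' > app.js

# Generate a Subresource Integrity value for it (sha384 is the common choice):
hash="sha384-$(openssl dgst -sha384 -binary app.js | openssl base64 -A)"
echo "$hash"

# The site owner then embeds it so browsers refuse a tampered file:
#   <script src="https://cdn.example.com/app.js"
#           integrity="$hash" crossorigin="anonymous"></script>
```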
I believe you missed the point; I am not defending security through obscurity (en.wikipedia.org/wiki/Security…), quite the opposite.
The point "[...] risk for the service owner as it gives an easily parsable way for an attacker to check [...]" is well known and not the discussion here. You can choose closed source for 'security', but this is the open-source community, so I am wondering about such a tool.
Maybe I have missed your point, but based on how I've understood what you've described I think you may have also missed mine, I was more pointing out how the practicalities prevent such a tool from being possible from a few perspectives. I lead with security just because that would be the deal breaker for many service owners, it's simply infosec best practice to not leak the information such a tool would require.
Your filtering idea would require cooperation from those service owners to change what they're currently doing, right?
Perhaps I've completely got the wrong end of the stick with what you're suggesting though, happy to be corrected
Thanks for your answer.
First, I don't even grasp what a "service owner" is.
Second, for JS front-end openness there are already a bunch of apps (web, Android) that are open source and secure. Everything has dependencies nowadays; that doesn't prevent good security. Think of all the Python apps and their dependencies, Rust, Android... even C/C++ packages are built with dependencies, and security updates are necessary (bash had security issues).
I think with JS scripts it's actually even easier to have good security, because the app runs in our web browser, so the only possible attacker is the website we are visiting itself. If they are malicious, then a closed-source JS script is even worse. Unless you count third-party scripts that a careless dev embeds in their website without even thinking about whether to trust them. That is also awful in both open- and closed-source environments.
So even with imperfect security (which happens regardless of openness), who is the attacker here? I would rather run a JS script on my end if the code can be checked.
First, I don't even grasp what a "service owner" is.
The people who build & run the software & servers that serve the website, who amongst other things have an interest in keeping the service available, secure, performant, etc.
Particularly with laws like GDPR, these services owners are motivated to be as secure as practically possible otherwise they could receive a bankrupting fine should they end up leaking someone's data. You'll never be able to convince anyone to lower the security of their threat model for that reason alone, before anything else.
there are already a bunch of apps (web, Android) that are open source and secure.
The code published and the code running on a server cannot be treated as equivalent for several reasons, but here's two big ones:
Firstly, there's the similar issue as with compiled binaries in other languages: it's tough (or impossible) to verify that the code published is the same code that's running. Secondly the bundled and minified versions of websites are rarely published anyway, at most you get the constituent code and a dependency list for something completely open source. This is the bit I referred to before as trying to untoast bread, the browser gets a bundle that can't practically be reversed back into that list of parts and dependencies in a general purpose way. You'd need the whole picture to be able to do any kind of filtering here.
who is the attacker here?
The only possible attacker is not the website itself (though it's a lot more limited if the site implements CSP & SRI, as mentioned in my other comment). XSS is a whole category of attacks which leverage an otherwise trusted site to do something malicious, this is one of the main reasons you would run something like noscript.
There have also been several instances in recent years of people contributing to (or outright taking over) previously trusted open source projects and sneaking in something malicious. This then gets executed and/or bundled during development in anything that uses it and updates to the compromised version before people find the vulnerability.
Finally there are network level attacks which thankfully are a lot less common these days due to HTTPS adoption (and to be a broken record, CSP & SRI), but if you happen to use public WiFi, there's a whole heap of ways a malicious actor can mess with what your browser ultimately loads.
OK, I got it, you are completely out of the loop here.
You do not grasp the idea of NoScript and other JS-filtering extensions. This is not about server code; all your arguments are baseless here.
By the way, JS referred to JavaScript, not NodeJS.
Anyway, I get your whole company/business talk about "keeping the service available, secure, performant" and "GDPR [...] bankrupting fine"... yeah, lemmy.world.
No need to get aggravated, I completely grasp it, you've possibly misunderstood or not entirely read my comment if that's your takeaway.
I'm not talking about server code specifically, I'm going through the stages between the source code repo(s) and what your browser ends up receiving when you request a site.
NodeJS is relevant here because it's what runs nearly all major JS bundlers (webpack, vite, etc), which are what produces the code that ultimately runs in the browser for most websites you use. Essentially in a mathematical sense, the full set of dependencies for that process are a part of the input to the function that outputs the JS bundle(s).
I'm not really sure what you mean by that last part; really, anyone hosting something on the internet has to care about that stuff, not just businesses. GDPR can target individuals just as easily as for-profit companies; it's about the safety of the data, not who has it. I'm assuming you would not want to go personally bankrupt due to a deliberate neglect of security? Similarly, if you have a website that doesn't hit the performance NFRs that search engines set, no one will ever find it in search results because it'll be down on page 100. You will not be visiting websites which don't care about this stuff.
Either way, all of that is wider reasoning for the main point which we're getting away from a bit, so I'll try to summarise as best I can:
Basically unless you intend your idea to only work on entirely open source websites (which comprise a tiny percentage of the web), you're going to have to contend with these JS bundles, which as I've gone into, is basically an insurmountable task due to not having the complete set of inputs.
If you do only intend it to work with those completely open source websites, then crack on, I guess. There's still what looks to me like a crazy amount of things to figure out in order to create a filter that won't be able to work with nearly all web traffic, but if that's still worth it to you, then don't let me convince you otherwise.
Edit: typo
I'm a full-stack software developer working in the financial sector, their statement is factual.
Companies will never want to take on liability that has the potential to bankrupt them. It is in their best interest to not reveal the version of libraries they are using as some versions may have publicly known vulnerabilities, and it would make it incredibly easy for attackers to build an exploit chain if they knew the exact versions being used.
Securing client code is just as important as securing server code, as you don't want to expose your users to potential XSS attacks that could affect the way the page gets displayed, or worse, leak their credentials to a third party. If this happened in the EU or some parts of Canada, and it's been found that the company reduced their threat model "for the sake of openness", they would likely be fined into bankruptcy or forced to leave the market.
Unfortunately, this is one of those cases where your interests and ethics will never be aligned with those of service owners as they are held to a certain standard by privacy laws and other regulations.
Can't say that what you are looking for is common. This is the first time I've heard this requirement being described.
LibreJS started a long while back. I'm no JS historian, but I reckon things have changed a ton in JS-land since then. My guess is that their assumption was that since JavaScript files are just scripts, they contain the source code, and therefore all it checks for is the license.
I don't know at which point things like obfuscation through minification and systems like webpack came along. I'm only theorising, but I feel LibreJS has not been able to keep up with the times.
As a consequence of Elon Musk's takeover of Twitter, the platform degenerated. Moderation was abolished in practice, and right-wing extremism, racism, antifeminism and more spread across it.
blog.zaramis.se/2024/08/22/elo…
Elon Musk enters the arena - The history of the Fediverse - Svenssons Nyheter
Elon Musk enters the arena. As a consequence of Elon Musk's takeover of Twitter, the platform degenerated. Moderation was abolishedAnders_S (Svenssons Nyheter)
A small detail that is only mentioned briefly at the beginning, but worth noting:
It wasn't that moderation ceased, but that it changed. Musk let recently banned right-wing extremists back in (Trump, among others) and started suspending a lot of left-wing accounts.
This has been spun away in mainstream media, where it has instead been claimed that moderation simply stopped. A debate then arose on false premises about whether this is good or bad, in which right-wing cranks got to play the role of anti-censorship champions.
I've personally heard several people on the left get drawn into this and start debating whether moderation/censorship of right-wing extremists is right or wrong, when the real story is that Elon Musk censors the left, anti-racists and his own critics.
Resume work from backup on another device?
I use 2 different computers in 2 different locations both running Universal Blue.
I was wondering if there is any way to create a backup system where I could back up Computer1 over the internet to Computer2 and continue working like nothing happened, with all the user data and installed applications being there. The goal is to only transfer the user data/applications and no system data (that should be the same for both because of Ublue, right?), to keep the backup size small.
To be clear, I need help figuring out the backup part, not the transferring-over-the-internet part.
If I were to backup the directories on Computer1, which store user data, with for example borgbackup, could I restore them on Computer2 and have a working system? Or would there be conflicts because of more low level stuff missing like applications and configs? Which directories would I need and which could be excluded?
Is there a better option? Any advice is appreciated!
I also came across btrfs snapshot capabilities and thought they could possibly be used for this. But as far as I understand it, that would mean transferring the whole system and not only the data and applications. Am I missing something?
Universal Blue - Powered by the future, delivered today
Universal Blue is a diverse set of images using Fedora Atomic's OCI support as a delivery mechanism. That's nerdspeak for the ultimate Linux client!universal-blue.org
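On the borgbackup question specifically: restoring selected home directories onto another machine is what it's designed for. A sketch, with repository location, archive name and directory choices all illustrative (borg's `ssh://host/./path` form means a path relative to the remote home directory):

```shell
# On Computer1: push an archive of user data to a repo hosted on Computer2
borg init --encryption=repokey ssh://computer2/./backups/main   # once
borg create --stats \
    ssh://computer2/./backups/main::'{hostname}-{now}' \
    ~/Documents ~/Projects ~/.config \
    --exclude ~/.cache

# On Computer2: list the archives, then restore into the home directory
# (borg extracts into the current working directory)
borg list ~/backups/main
cd ~ && borg extract ~/backups/main::computer1-2024-08-22T10:00
```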
Regardless of what technical solution you decide to rely on, e.g. borgbackup, Syncthing or rsync, the biggest question is "what" you actually need. You indeed do not need system files; you probably don't need applications either (you can fetch them back anyway), so what's left is actually data. You might then want to save your ~ directory, but that might still conflict with some things, e.g. ~/.bashrc or ~/.local, so instead you might want to start with individual applications, e.g. Blender, and see where they implicitly, or you explicitly, save the .blend files and all their dependencies.
How I would do it:
- over the course of a day, write down each application I'm using, probably a dozen at most (excluding CLI tools)
- identify for each where data is stored, and possibly simplify that, e.g. all my Blender files in a single directory with subdirectories
- using whatever solution I have chosen, synchronize those directories
- test on the other device while on the same network (should be much faster, with a chance of fixing problems)
Then I would iterate over time. If I often had to move and couldn't really iterate, I would make the entire ~ directory available, even though it's overkill, and only pick from it on a per-need basis. I would also make sure to exclude some directories that could be large, e.g. ~/Downloads.
PS: I'd also explore Nix for the system and applications side of things, but honestly only AFTER taking care of what's actually unique to you, i.e. data.
Thank you for the detailed response!
Yes, the "what data" question and how to avoid conflicts have been troubling me the most.
I think I might first narrow it down with test VMs, to skip the transfer part, before I actually use it "in production".
Honestly, a very imperfect alternative that's been sufficient for me for years is... Nextcloud for documents.
There are a few dozen documents I need regardless of the device, e.g. national ID, billing template, but the vast, VAST majority of my files I can get from my desktop... which is why I replied to you in depth rather than actually doing it myself. I even wrote some software for a "broader" view on resuming across devices, including offline, namely git.benetou.fr/utopiah/offline… as a network of NodeJS HTTP servers, but... same, that's more intellectual curiosity than pragmatic need. So yes, explore with VMs if you prefer, but I'd argue remain pragmatic, i.e. focus on what you genuinely need versus an "idealized" system that you don't actually use, yet which makes your workflow and setup more complex and less secure.
Nya Mediafonden's purpose is to help new progressive media grow. They support many different projects each year with a small sum of money. My blog received 1,000 kronor from Nya Mediafonden this year, for which we are grateful.
The torture crimes in “Ofer” camp are no less severe than the torture crimes recorded in “Sde Teman” camp.
The Prisoners and Ex-Prisoners Affairs’ Commission and the Palestinian Prisoners Club announced in an alarming statement yesterday, that the level of abuse in “Ofer” camp against Palestinian prisoners is not less severe than the testimonies that emerged from the “Sde Teman” concentration camp, which has been the most prominent site of torture crimes against Gaza detainees.
“Ofer,” which holds hundreds of Gaza detainees, is one of several camps and prisons where prisoners face systematic and unprecedented crimes since the start of the genocide war.
Details of the torture crimes against prisoners and detainees in the occupation’s prisons and camps are ongoing, with daily shocking and horrifying testimonies being recorded by various organizations.
Testimony from the detainee G.W. about torture crimes following his arrest on March 2, 2024, at a checkpoint in the city of Hamad (Gaza):
“During my interrogation, the interrogators tried to drown me using the water from the toilet bowl.”
“To this day, we are subjected to torture, humiliation, and beatings.”
“The occupation forces stripped me of my clothes, tied my hands behind my back, blindfolded me, and transferred me to a truck. They assaulted me and all the other detainees who were with me. We were then taken to a roofed area (zinko), where the occupation kept us for 100 days along with dozens of other detainees. This phase was the most severe, in terms of the torture methods used against us.”
For the 100 days, the detainees were beaten for any movement they made. During this time, the detainees’ hands remained tied, and their eyes were blindfolded, or they were forced to sit on their legs or stomachs. Prolonged standing as a punishment method was commonly used against the detainees.
The detainee continued: “During my interrogation, I was subjected to an attempt to drown me using the water inside the toilet, in addition to prolonged standing as a punishment method.”
The detainee is currently held in “Ofer” camp. The management of this camp is under the occupation army, and according to several visits made to Gaza detainees there, each room holds at least 20 prisoners who are subjected to torture, humiliation, abuse, and beatings.
One detainee reported that the prison administration recently raided the room-cell where he was held after the detainees hid bread slices. The raid lasted several hours, during which the forces used methods such as bending the hand and severe beating on the shoulder and fingers, resulting in a broken hand for one detainee and a broken nose for an elderly detainee.
Visits conducted under high levels of surveillance.
The lawyer confirmed that the visits to Gaza detainees in “Ofer” camp are conducted under high levels of strict surveillance. Most of the detainees refused to give any details about their detention conditions, and signs of fear and intimidation were evident on them. One detainee refrained from talking about anything, fearing being beaten, and only mentioned that he experiences severe shaking for long periods after any attack.
The Commission and the Club confirmed that the level of surveillance imposed on lawyers, detainees, and prisoners – in various prisons – is unprecedented, casting a shadow over the work of legal teams and affecting the behavior of detainees and their testimonies during visits, especially since some prisons have adopted assaults on prisoners during their transfer for visits, with Naqab Desert prison being one of the most notable. The level of surveillance has been one of several systematic obstacles and policies that have significantly impacted lawyers’ visits.
The most prominent crimes reflected in the testimonies of Gaza detainees over the past period:
• The crime of enforced disappearance, which has been the most prominent crime imposed by the occupation on the majority of Gaza detainees.
• Using detainees as human shields for extended periods during ground military operations.
• Committing torture crimes against them through various methods, including electric shocks, prolonged standing, continuous handcuffing, repeated beatings that caused broken limbs for many detainees, and the use of police dogs during attacks.
• Systematic medical crimes by depriving them of basic treatment, performing surgeries without anesthesia, and amputating limbs of detainees due to continuous restraint.
• Committing rape and sexual assault crimes at various levels.
• Committing the crime of starvation against them.
• Forcing detainees to utter demeaning words that insult their dignity and offend their families.
• Forcing them to sit in certain positions that cause severe pain and aim to humiliate them.
• No detainee is allowed to speak to another detainee, and those who do are subjected to severe beatings.
• Depriving them of practicing any religious rituals.
Key facts about Gaza detainees in the occupation’s prisons:
Since the start of the genocide war, the occupation has detained thousands from Gaza, with the occupation’s prison administration acknowledging the detention of 1,584 detainees from Gaza, whom the occupation classified as unlawful combatants. This figure does not include all Gaza detainees, particularly those in camps under military management. Despite some legal amendments made by the occupation regarding Gaza detainees, which allowed institutions to reveal the fate of Gaza detainees through a specific mechanism, the vast majority of them remain in enforced disappearance, including martyrs who ascended due to torture, estimated to be in the dozens. The specialized institutions face significant challenges in following up on the issue of Gaza detainees, especially since visits are still limited.
As of early August, the number of prisoners in the occupation’s prisons exceeds 9,900, a figure that does not include all Gaza detainees, particularly those held in military camps.
The Commission and the Club renew their ongoing call to the international human rights system to reclaim the role for which it was established, and to move beyond merely documenting the occupation’s crimes and issuing statements and calls, to another level that upholds the values of human justice. This begins with holding the occupation leaders accountable for their ongoing systematic crimes as part of the ongoing extermination war against our people in Gaza, and its other face represented by the torture crimes and severe violations against prisoners and detainees in “israeli” prisons.
Microsoft’s latest security update has ruined dual-boot Windows and Linux PCs
Microsoft’s latest security update has ruined dual-boot Windows and Linux PCs
Microsoft has issued a security update that has broken dual-boot Linux and Windows machines. The update wasn’t supposed to reach dual-boot PCs.Tom Warren (The Verge)
Microsoft breaks bootloader and nixes Linux partition
Microsoft: "patch seems to be working as intended"
I don't think dual boot has ever been a good solution (unless you also run one or both of the OS's under the other in a VM).
Like, if you are unsure about linux, trying it out, learning, whatever, you can just boot a live"cd", or maybe install it on an external (flash) drive.
If you are kinda sure you want to switch, just nuke Windows; it's easier to switch that way than to keep everything on two systems and constantly switch between them.
I recently moved from Proton to a W11 KVM with my 4080 passed through.
Unfortunately, those hostile GaaS titles would probably be able to detect and block you (I don't play those games).
You can have an EFI partition per drive (and on it whatever bootloader you want). You then need to use the UEFI boot menu if you want to boot, e.g., the Windows one.
If you have two different OSes on different drives, they should never interfere with each other.
Well, I mean, you could of course use the Linux boot manager to then chain-load the Windows boot manager on the other disk, but I never experimented with that.
Even if you have two drives, you still have only one bootloader, no?
The idea is to have completely separate boot and OS drives. You select which one you want to boot through the BIOS boot selection (ie. pressing F10 or F11 at the BIOS screen).
This functionally makes each OS "unaware" of the other one.
Unfortunately it really doesn’t. And it’s actually Linux that’s the bigger problem: whenever it decides to update GRUB, it looks for OSes on all of your drives to make GRUB entries for them. It also doesn’t necessarily modify the version of GRUB on the booted drive.
Yes I’m sure there’s a way to manually configure everything perfectly but my goal is a setup where I don’t have to constantly manually fix things.
If you install each OS with its own drive as the boot device, then you won't see this issue.
Unless you boot Windows via the grub boot menu. If you do that then Windows will see that drive as the boot device.
If you select the OS by using the BIOS boot selection then you won't see this issue.
I was bitten by Windows doing exactly this almost 15 years ago. Since that day, whenever I have a need for dual boot (even if running different distros), each OS gets its own dedicated drive, and I select what I want to boot through the BBS (BIOS Boot Selection). It's usually invoked with F10 or F11 (but could be a different key combo).
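From the Linux side, the firmware entries behind that boot-selection screen can be inspected and reordered with efibootmgr. A sketch (the entry numbers are illustrative and come from the listing on your own machine; needs root and EFI variable access):

```shell
# Show all firmware boot entries and the current BootOrder
efibootmgr

# Example: prefer entry 0002 (say, the Linux drive), then 0000 (the Windows drive)
efibootmgr --bootorder 0002,0000

# Boot a specific entry once, on the next reboot only, without changing the order
efibootmgr --bootnext 0000
```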
My install does not seem to do this. I removed the Windows drive when installing Linux on a new drive, then put both drives in and select which one to boot in the BIOS. It's been that way for about a year and, so far, GRUB updates have never noticed the Windows install nor added it to the menu.
That's with Bazzite; I can't speak for any other distro, as that is the only dual-boot machine I own. Bazzite does mention they do not recommend traditional dual boot with the bootloader and recommend the BIOS method, so maybe they have changed something to avoid that?
Oh you sweet sweet summer boy....
We're talking Microsoft here, they'll make sure they're aware and they'll make sure to f you over because Microsoft
While I generally agree with that, that's not what seems to be happening here. What seems to be happening is that anyone who boots Windows via grub is getting grub itself overwritten.
When you install Linux, boot loaders like grub generally are smart and try to be helpful by scanning all available OSes and provide a boot menu entry for those. This is generally to help new users who install a dual-boot system and help them not think that "Linux erased Windows" when they see the new grub boot loader.
When you boot Windows from grub, Windows treats the drive with grub (where it booted from) as the boot drive. But if you tell your BIOS to boot the Windows drive, then grub won't be invoked and Windows will boot seeing it's own drive as the boot drive.
This is mostly an assumption as this hasn't happened to me and details are still a bit scarce.
I did that and a Windows update nuked Linux from the BIOS boot loader a few weeks ago.
The only safe option is to have completely separate machines. Thankfully with the rise of ridiculously powerful minipcs that's easier than ever.
This means that it is impossible for them to make a patch or PR because it would conflict with the project's licence and the fact it's open source.
That's not how it works. It just means the company owns the code for all intents and purposes, which also means that if they tell you that you can release it under a FOSS license / contribute to someone else's project, you can absolutely do that (they effectively grant you the license to use "their" code that you wrote under a FOSS license somewhere else).
Also, you're telling somebody who has worked with big companies not allowing it in their employment contract that he is lying? Riiiight...
A lot of Google devs are also not allowed to do any Linux work outside of work without explicit permission, because of all the internal docs, teams and other work being done on Linux within Google. Development rights are an absolute mess, legally.
I usually don't care and do what is right, despite what my employment contract says, but I have gotten in trouble for it.
They can forbid you from working on open-source stuff in your free time? I mean, I understand that you are not allowed to publish open code that utilises private know-how of the company you work for. But not being allowed to work on Linux in your free time seems very strange to me 😮
Edit: deleted wrong “Edit:”
I keep Linux and Windows on separate disks; GRUB and the Windows boot manager don't know about each other.
I have the Linux disk as the primary boot; if I need to boot into Windows I use the BIOS boot selection screen.
It's a bit of a pain at times (have to mash F12 to get the BIOS boot menu), but it's less of a headache than trying to fix GRUB.
Your assessment of probability is speculation
It is, but anecdote is insufficient to counter it.
And yes, some companies need to give you a green light to work on projects in your free time, because they might have a team doing similar things somewhere, it might compete in something they would like to do in the future or like you said, might use company know how which is a huge nono.
It's BS imo, but those clauses and rules are found in some employment agreements.
Remember: always read your employment agreements!
Legit, I have never had an issue with multi-boot and Windows, like, ever. Tbf, I don't go into Windows that frequently anymore, but it's never given me grief in at least a decade. I know my experience isn't universal though, so sorry to anyone who does have boot issues after Windows updates.
In the worst case, you could use bcdedit and the Windows boot loader (tbh I have no idea if that works here, but it could be worth a try).
bcdedit
Reference article for the bcdedit command, which creates new stores, modifies existing stores, and adds boot menu parameters.learn.microsoft.com
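A hedged sketch of what that could look like from an elevated Windows command prompt; the GUID is a placeholder you would copy out of the listing, and whether this survives the problematic update is untested:

```shell
:: List the firmware boot entries the UEFI knows about, with their GUIDs
bcdedit /enum firmware

:: Move a specific firmware entry (e.g. a Linux bootloader, identified by
:: the GUID shown in the listing above) to the top of the boot order
bcdedit /set {fwbootmgr} displayorder {REPLACE-WITH-GUID} /addfirst
```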
I'm not saying you're lying, but you said
do not allow software developers to send a patch or PR to open source projects.
But that sentence in particular was misleading. Maybe you specifically did not have the right to do so, but in the Linux and BSD codebases there are a lot of @microsoft @netflix @oracle contributions, so at least someone in those companies is authorized to do so.
Yeah if you write proprietary code and then work on a similar project in your spare time, your company might sue you because you're likely reusing code you've seen or written at work.
For example Windows developers are forbidden from working on ReactOS
Fair, and I'll edit my post accordingly!
There are teams that are allowed, and within those companies there are teams directly related to FOSS projects, because those companies are in the foundation or are supporters of the foundation. However, that doesn't mean every (product) team in the company is allowed to, or that they can do or change whatever they like. It's a complex mess.
EFI can also live in firmware memory.
You can pull the Linux drive, boot from the Windows drive, and if one of the firmware updates was for EFI, Windows will trash the entry for your Linux disk.
This has happened to me many times; I had to use a GRUB rescue disk to rebuild the EFI boot entries.
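For anyone hitting the same thing: if the firmware boot entries get trashed but the EFI system partition itself is intact, the entry can often be recreated from any live Linux USB with efibootmgr instead of a full GRUB reinstall. A hedged sketch only; the disk, partition number, loader path, and entry number below are all assumptions that must be adjusted to your layout:

```
#!/bin/env bash
# Requires an EFI system and root. First, list the current firmware entries:
efibootmgr -v

# Recreate a GRUB entry. Assumptions: the ESP is partition 1 of /dev/sda and
# the distro installed its loader at \EFI\grub\grubx64.efi (check with lsblk
# and by mounting the ESP before running this).
efibootmgr --create \
    --disk /dev/sda --part 1 \
    --label "GRUB" \
    --loader '\EFI\grub\grubx64.efi'

# Put the new entry first in the boot order, e.g. if it came back as Boot0003:
efibootmgr --bootorder 0003,0001
```

The upside over a rescue-disk reinstall is that nothing on disk changes; you are only rewriting NVRAM variables.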
Semi-O/T: I feel Microsoft is such a violation of personal security that I would not dual boot anything with Windows. I forget exactly what happened (the details), but I remember that when I upgraded my desktop from Win7 Pro to W10 Pro via the free upgrade, it broke the MBR/GRUB. From that day on, I've kept my OSes completely separated by device.
If it's just sandboxing / VMs, that's whatever, not sweating that at all.
Remove your Microsoft installation, done.
Yes but...
But what? This is Microsoft, they fucked it up so many times that it's either incompetence or sabotage, and knowing Microsoft, it's probably both.
This is the same company that invested millions to sabotage Linux through the legal system (hello, SCO), and the same company that on purpose left gaping security holes open so as not to lose any money, letting China hack the US government through said holes.
Then they decided that even that money isn't enough, so they spy on you every step of the way, force-feed you ads, and use you to train their shitty AI.
Frack Microsoft, frack any and all of their software.
Really depends on the virtualization technology, hardware, configuration and game. Not a gamer myself.
Gaming on linux has come a long way in recent years though, in no small part thanks to Steam.
stupidity is a once-off
🎶 ...this iiiiis my one and only wiiiiiiish! 🎶
I recently discovered that Rufus has an option to set up a Windows ISO as "Windows To Go", so I dug out an old 500GB SSD that came with a USB adapter and installed Windows on that. So now, instead of dual booting, I can just hit F12 and boot from USB on the rare occasions when I need to run something in Windows.
It's also quite satisfying to be able to physically remove Windows and shove it into a drawer when it goes full Windows too lol.
Rufus - Create bootable USB drives the easy way
Rufus: Create bootable USB drives the easy wayrufus.ie
I should have been more clear.
Assuming /dev/sda is Linux and /dev/sdb is Windows, I have GRUB on sda and the Windows bootloader on sdb. I use a hotkey at boot to tell the BIOS which drive to boot from.
Theoretically, Windows thinks it's the only OS unless it goes scoping out that second hard disk.
Windows update breaks Linux dual boot - but there is a fix for some users
If you've recently updated Windows and found your system no longer dual boots, here's why and what you can do about it.Jack Wallen (ZDNET)
So, excusing my ignorance as a fairly recent Linux convert, what does this mean for my dual boot system?
I haven’t booted windows for weeks and am pretty sure there have been no updates since it was freshly reinstalled (maybe 6 months ago) as a dual boot with Debian.
Is this only a problem if I allow Windows to update?
Are Microsoft likely to fix the issue in a subsequent release?
Yes, you don't have to worry as long as you don't boot up Windows and let it install the update.
This is not the first time they've broken dual boots by touching the partitions, but it is the first time they've deliberately broken it (that I know of).
I always had Windows on its own drive because of that. If you don't use Windows a lot, then I would suggest doing the same. You have to switch to Windows through the BIOS, but it isn't that much more work.
Thanks for the reply, and good to know!
I think I’ll blow away the windows install on this machine completely.
I still have another pc for some audio tools that don’t run under Linux, but this machine is my daily driver now and I couldn’t be happier.
Not saying you're wrong, but you took the wrong project as an example, hehe.
Visual Studio Code is not open source. Its core is, but VS Code isn't.
The difference is what VS Code ships with on top of its core.
It's like saying Chrome == Chromium (it isn't).
VS Code comes with a lot of features, add-ins, and other stuff that isn't in the core.
The .NET debugger, for example, is not found in VSCodium (a build of the VS Code core). And there is more stuff I can't think of now but have come across.
Source: been using VSCodium for a few months instead of VS Code.
Also, it's more complex than that. Some teams can, some can't. And if they can, it all depends on the project or context. The business world isn't that cut and dried, hehe.
Episode 250 of The Linux Lugcast Podcast is out – The Linux Tiredcast
This episode we talk about using your Samsung Gear VR in 2024, Apache’s name and logo change drama, haunted computers, a Linux application that lets you use your tablet as a touch and drawing interface for your PC, vertical tabs in Firefox, hacking the Kindle to use as a second monitor, and the 2020 straight to streaming movie “Virtual Death Match”.
Clips of the show:
tech.minnix.dev/episode-250-of…
#apache #bmovies #Firefox #gearvr #hardware #linux #linuxpodcast #podcast #raspberrypi #weylus
The Linux TiredCast - The Linux Lugcast Ep 250 for 8.16.24
Welcome to Episode 250 of the LinuxLUGcast. We are an open podcast/LUG that meets at 9 PM EST every first and third Friday of the month using Mumble.
We encourage anyone listening to join us and participate on the podcast. Check out linuxlugcast.com/ for the server details. If you have any questions, comments, or topics you would like us to discuss and are unable to join us during the live show, you can send us email at feedback@linuxlugcast.com
Lemmy community at lemux.minnix.dev/c/linux_lugca…
Join in the conversation at matrix.to/#/#lugcast:minnix.de…
Join the Jitsi meeting at jitsi.minnix.dev/lugcast
Youtube Channel: youtube.com/@thelinuxlugcast
Peertube Channel: nightshift.minnix.dev/c/linux_…
Previous Movie: Virtual Death Match 2020 - imdb.com/title/tt8444510/
Next Movie: Kimi 2022 - imdb.com/title/tt14128670/?ref…
Netminer: new Lenovo machine
Joe: Samsung Gear VR - still usable with Google Cardboard; Oculus Quest apps
Honkey: Weylus - github.com/H-M-H/Weylus
minnix: Apache drama - fossforce.com/2024/07/apache-f…; Firefox tabs - blog.nightly.mozilla.org/2024/…; Kindle as e-ink monitor - gist.github.com/adtac/eb639d3c…
Our next recording date will be Sept 6, 2024.
Our music is "Downright" provided by Klaatu and Broam, and we would like to thank Minnix for the Mumble server.
For Good Reason, Apache Foundation Says 'Goodbye' to Iconic Feather Logo - FOSS Force
The Apache Software Foundation attempts to right a wrong it unintentionally created when it adopted its name 25 years ago.Christine Hall (FOSS Force)
Linux Market Share Reaches New Peak: July 2024 Report
The Linux operating system has reached a notable milestone in desktop market share, according to the latest data from StatCounter. As of July 2024, Linux has achieved a 4.45% market share for desktop operating systems worldwide. While this percentage might seem small to those unfamiliar with the operating system landscape, it represents a significant milestone for Linux and its dedicated community. What makes this achievement even more thrilling is the upward trajectory of Linux's adoption rate.
...
According to the statistics from the past ten years, it took eight years for Linux to go from a 1% to a 2% market share (April 2021), 2.2 years to climb from 2% to 3% (June 2023), and a mere 0.7 years to reach 4% from 3% (February 2024). This accelerating growth pattern suggests that 2024 might be the year Linux reaches a 5% market share.
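Out of curiosity, those milestone figures can be extrapolated. This is my own back-of-the-envelope toy model, not anything from the article: assume each one-percent step takes a fixed fraction of the previous step's duration, average that fraction from the article's numbers, and project the 4% to 5% step:

```shell
#!/bin/env bash
# Years per one-percent step, per the article:
# 1% -> 2%: 8.0 years, 2% -> 3%: 2.2 years, 3% -> 4%: 0.7 years.
# Average shrink ratio between consecutive steps:
ratio=$(awk 'BEGIN { r1 = 2.2 / 8.0; r2 = 0.7 / 2.2; printf "%.3f", (r1 + r2) / 2 }')
# Projected duration of the 4% -> 5% step, in years:
next=$(awk -v r="$ratio" 'BEGIN { printf "%.2f", 0.7 * r }')
echo "shrink ratio ~${ratio}, projected 4%->5% step ~${next} years"
```

Under this (very rough) model the 4% to 5% step takes roughly a quarter of a year, which is at least consistent with the article's guess that 5% could land in 2024.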
Linux Market Share Reaches New Peak: July 2024 Report
The Linux operating system has reached a notable milestone in desktop market share, according…sk (OSTechNix)
People are converting. Not entirely on Linux's own merit, of course: its competition keeps enshittifying the user experience, pushing people to try other options. Combine that with Steam and their work on Linux's compatibility layer and you get most of the movement.
That said, once you hit a certain market share, developers become more willing to port or provide binaries for the growing platform. It can accelerate further from there. Linux isn't mainstream yet, but it's starting to get within striking distance of its competition.
It is finally upon us.
THE YEAR OF THE LINUX DESKTOP!
Terms and conditions apply. It could be the next year, or the year after, or not at all.
Did anybody bother to look at the numbers?
I checked the stats for the last 4 years here and it looks really strange.
Statistics isn't my thing... But it looks like it's wise to be cautious and not to fully trust the numbers.
Around the beginning of last year there was a huge dip in the Windows market share that seemed to correlate with a peak in "unknown".
Windows then caught up in a somewhat erratic way.
Mac OS also shows weird behavior.
It starts at 16%, goes up to 21%, and then down to 14% between October and November...
It's not likely that a huge number of people decided to buy a Mac and then trash it one month later. The same, but in reverse, goes for the Windows stats.
It looks like the uncertainty is larger than the total market share Linux is shown to have.
Not saying that Linux isn't increasing in desktop market share.
Just saying that the numbers seem to have quite a bit of error margin, so be cautious when referring to them.
Desktop Operating System Market Share Worldwide | Statcounter Global Stats
This graph shows the market share of desktop operating systems worldwide based on over 5 billion monthly page views.StatCounter Global Stats
Consolo is a modular cyberdeck with a Raspberry Pi 5 for brains, a 7 inch display and 7 hours of battery life - Liliputing
Consolo is a modular cyberdeck with a Raspberry Pi 5 for brains, a 7 inch display and 7 hours of battery lifeBrad Linder (Liliputing)
Einar
in reply to thingsiplay:
This looks great.
Suggestion: a step-by-step "howto" with an example or three to make it more useful for beginners.
thingsiplay
in reply to Einar:
The ? script or the output from the cheat.sh web service? Because I'm not the author of the web service itself, I just created this script to make use of it.