Steam: Controller not working anymore in Forza Horizon 4
So I've been playing Forza Horizon 4 for a while without any issues using an Xbox One controller via Bluetooth. First I used Proton Experimental and later Proton GE. Absolutely no problem.
Now, however, the controller is no longer recognized in game (the on-screen buttons show keyboard keys, not gamepad buttons) and I can't use any of the buttons (except the screenshot one). In the Steam menu, the controller test settings, Big Picture mode etc. it works fine and is recognized normally.
I didn't make any changes before it happened (now, of course, I've tried a bunch of stuff), but I did upgrade the system normally.
Any ideas what might have caused this issue?
Why can ffmpeg kmsgrab capture the tty without root permissions?
I'm using Sunshine for remote gaming on my Linux PC. Because I use Wayland and don't have an Nvidia GPU, I use kmsgrab for capture (under the hood Sunshine uses ffmpeg).
I have noticed that when I switch to a tty, kmsgrab captures that as well. If it only captured things after I logged in as my user I wouldn't be surprised, but it also captures the login screen.
I autostart it at login using my systemd user configuration (not system-wide), so it should only have my user's permission level. I get the same results if I put it in KDE's autostart section, so it's not a systemd thing.
Why does that work? Shouldn't you need special privileges to capture everything?
The installation instructions tell you to run sudo setcap -r $(readlink -f $(which sunshine))
Is this the reason why it works? What does the command do exactly?
GitHub - LizardByte/Sunshine: Self-hosted game stream host for Moonlight.
Enable permissions for KMS capture.

Warning: Capture of most Wayland-based desktop environments will fail unless this step is performed.

Note: cap_sys_admin may as well be root, except you don't need to be root to run it. It is necessary to allow Sunshine to use KMS capture.

Enable:
sudo setcap cap_sys_admin+p $(readlink -f $(which sunshine))

Disable (for Xorg/X11 only):
sudo setcap -r $(readlink -f $(which sunshine))
Their install instructions are pretty clear to me. The actual instruction is to run
sudo setcap cap_sys_admin+p $(readlink -f $(which sunshine))
This is vaguely equivalent to setting the setuid bit on programs such as sudo, which lets you run them as root, except that the program does not need to be owned by root. There are also some other subtleties, but as they say, it might as well be the same as running the program directly as root. For the exact details, see here: man7.org/linux/man-pages/man7/… and look for CAP_SYS_ADMIN.
In other words, the command gives all powers to the binary, which is why it can capture everything.
Using KMS capture seems way overkill for the task, I would say. But maybe the Wayland screen-capture protocol wasn't there yet when this came around, or they need every bit of performance they can gain. Seeing the project description, I would guess the latter, as a cloud provider would dedicate a machine per user and would then wipe and reinstall between two sessions.
setcap adds Linux capabilities to an executable. Capabilities are elevated privileges within the kernel for specific privileged actions.
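As a rough illustration of what a capability set looks like in practice, a process's effective capabilities can be read straight out of /proc. This is a Linux-only sketch, assuming a bash-style shell; the bit position for CAP_SYS_ADMIN (21) comes from linux/capability.h:

```shell
#!/bin/bash
# Read the effective capability bitmask of the current process.
# CAP_SYS_ADMIN is bit 21 (see linux/capability.h).
capeff=$(awk '/^CapEff:/ {print $2}' /proc/self/status)
echo "CapEff: 0x$capeff"
if (( (0x$capeff >> 21) & 1 )); then
    echo "CAP_SYS_ADMIN: present"
else
    echo "CAP_SYS_ADMIN: absent"
fi
```

Run as a regular user this prints "absent"; a process started from a binary that carries cap_sys_admin (and raises it) would show the bit set.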
docs.redhat.com/en/documentati…
man7.org/linux/man-pages/man7/…
Chapter 8. Linux Capabilities and Seccomp | Red Hat Product Documentation
I completely broke Kubuntu
So like
I was trying to install DaVinci Resolve (an editing program) and while doing so it basically said "removing" followed by what appears to be everything installed on my computer
So I nope right out of there and I notice a bunch of important things are missing, e.g. the terminal, file manager, etc.
So I just decided
Maybe if I reboot everything will be a-ok
And now I'm on this screen and it won't even let me enter my login
This was the latest update of Kubuntu
And idk what I did wrong or how I got here
I've only been using Kubuntu for probably about 4 months ish
Edit: please help
Edit 2: I got it working by reinstalling Kubuntu as suggested, Thank you for the help :>
There is not enough information in your post to help you. Here's a preliminary list of questions that need an answer before anyone can give you a meaningful contribution.
Where did you get "Davinci resolve" from?
What instructions were you following to install it?
Did the installation finish?
Have you attempted to login using a text console?
Which version of Kubuntu were you using, and which version of "Davinci resolve" were you attempting to install?
1. Directly from the website Link
2. It was a basic installer, except it was angry about some dependencies; specifically I installed libasound2 I believe and it started removing stuff
3. Nope
4. I'm not sure how
5. Whatever the latest is
6. Again, whatever the latest is
DaVinci Resolve 19 | Blackmagic Design
Professional video editing, color correction, visual effects and audio post production all in a single application. Free and paid versions for Mac, Windows and Linux. (www.blackmagicdesign.com)
For number 4, since it is very useful in such situations: press Ctrl + Alt + one of the F keys (usually F3, F4 or F5) to get a text console.
To go back to the graphical session, it is usually F1, F2, F7 or F8.
It saved my ass many times.
1, directly from the website Link
I hope you've now understood why -on Linux- you should never try to install stuff like how you were used to on Windows. Unless, you 100% know what you're doing.
On your phone, do you search the software you want to install through your browser? After which, do you download the install script and try to run it?
No, of course not. Instead, you pay a visit to the accompanying software center. Searching, installing and upgrading all occur through that.
Similarly, on Linux, your chosen distro comes with a (or perhaps multiple) package manager(s) and a software center. Those should first and foremost be consulted. And for 99% of the cases; this is the intended, supposed and supported way of installing said software.
This should suffice for the sake of brevity. If you've still got questions, please feel free to ask them.
On your phone, do you search the software you want to install through your browser?
Yes. Not everything I have is installed through the Google store. I grew up in an era before walled-gardens.
Similarly, on Linux, your chosen distro comes with a (or perhaps multiple) package manager(s) and a software center. Those should first and foremost be consulted. And for 99% of the cases; this is the intended, supposed and supported way of installing said software.
I should clarify - I know what a package manager is. But you're acting like one needs to have some expert skills to install things outside of the package manager. It's generally preferred for a number of reasons but it's not bad "per se" to install something outside of it.
Used to be a time where the install instructions were ./configure && make && make install
...
Yes. Not everything I have is installed through the Google store.
I understand from this, that it is implied, that the majority of what you have installed, has been done through the Google store though. By extension, I assume that -by default- you entrust installing software to the Google store. Hence, if all of the above is correct, then you actually don't commit to 'the Windows-way' by default; but only by exception. Which is exactly my point.
But you're acting like one needs to have some expert skills to install things outside of the package manager.
I feel you're reading too much into it. In my first comment, I didn't even mention package managers. In the second comment, I only wrote -and I quote- "Those should first and foremost be consulted. And for 99% of the cases; this is the intended, supposed and supported way of installing said software.". I don't see where expert skills are implied if one chooses to go outside of it. Please feel free to help me understand where I did.
It's generally preferred for a number of reasons but it's not bad "per se" to install something outside of it.
I never implied otherwise.
I hope you’ve now understood why -on Linux- you should never try to install stuff like how you were used to on Windows. Unless, you 100% know what you’re doing.
That's pretty strong language and what I was responding to. Perhaps you were being hyperbolic.
Thanks for clarifying!
That’s pretty strong language
I agree. But in this case it was 100% justified as OP just (hopefully reversibly) destroyed their installation.
and what I was responding to.
Thanks for properly nuancing my stance. Though, perhaps consider to do so right away next time 😜.
Perhaps you were being hyperbolic.
It was deliberate. But I wouldn't refer to it as hyperbolic. Perhaps more in the style of an elder sibling scolding their younger sibling to be better next time 😉. Apologies if I missed the mark, though.
I agree. But in this case it was 100% justified as OP just (hopefully reversibly) destroyed their installation.
And yet they did so using the package manager. They just installed an apt source that they shouldn't have. THAT I would say one should not do unless one really knows what they are doing. If they had just installed some .appimage or compiled something from source they would have been fine.
Thanks for properly nuancing my stance. Though, perhaps consider to do so right away next time 😜.
And yet:
It was deliberate. But I wouldn’t refer to it as hyperbolic.
So... I'm not going to nuance your stance if it shouldn't be nuanced. It's a bit up to you to be clear about your nuance. And in this case you're being very ambiguous about it.
I do
But I could not find it in the intended ways
I in fact did not 100% know what I was doing, obviously, lol, despite having complete confidence that I did
What guide did you follow to install Davinci?
It probably contained something that removes a lot of stuff, like replacing a dependency with a DaVinci-specific one, which uninstalled most of the system.
Possibly this contains the reason why it broke:
wiki.debian.org/DontBreakDebia…
I don't know how you went about installing davinci, but if you added a repo or ppa that is incompatible with the version you had, apt would try to resolve it by removing everything incompatible.
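One habit that would have caught this before anything was removed: apt can simulate a transaction without committing to it. A hedged sketch (the package name here is just an example):

```shell
# -s/--simulate prints what apt WOULD do (Inst/Remv lines) without
# touching the system. Review the Remv lines before answering yes
# to the real command.
apt-get -s install vlc 2>/dev/null | grep -E '^(Inst|Remv)' || true
```

If the simulated run lists half your desktop under Remv, that is the moment to abort and figure out which repo or dependency is pulling the system apart.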
Easiest way to fix it would be to reinstall Kubuntu and all the packages you had, while keeping your old home partition/folder. That way all your data, downloads and most of the configs will stay.
The installer used to have a checkbox for that somewhere, at least back in the day when I used Kubuntu. Afaik it would automatically detect that a home already exists, even if it is not on a separate partition.
But just to be extra safe, I'd recommend just live booting some other OS and backing up your home to an external drive.
I fixed it
It's finally working
It took me longer than I'm willing to admit, but
There's no reinstall button in the installer
The way to do it is to select manual partitioning and simply set the original partition as /
Amazingly everything is exactly how I left it
I expected to have to reconfigure my settings n such but it managed to retain my previous configurations
The top answer here worked for me a long (~10 years) time ago, it might still work. Backup your home folder with a livecd before trying anything though.
unix.stackexchange.com/questio…
Can I rollback an apt-get upgrade if something goes wrong?
Is there a way, before starting an aptitude upgrade or apt-get upgrade, to set up something so that you can "easily" rollback your system to the "apt" state it was before the actual upgrade, if something goes wrong? (Unix & Linux Stack Exchange)
As far as I can tell, DaVinci Resolve is not available in a Debian/Ubuntu package. The standard installer, designed for Red Hat, doesn't seem to interact with the package manager either. This makes me think some kind of wrapper script you downloaded from the internet was the culprit here.
There are some guides online that will make Resolve into a package, but they seem to be pulling all kinds of weird tricks. I would not recommend using those guides without some kind of backup and recovery tool set up for your computer.
It's hard to tell what exactly got removed, so I don't know what you need to reinstall. If you use a tool like Timeshift or Snapper, now would be the time to restore a previous system snapshot. If you don't, you'll need to do the recovery manually. Either way, this isn't an easy fix, especially if this was caused by a script like MakeResolveDeb which seems to also modify other system files.
To get a running Kubuntu install back, you basically have two options: either use the command line to sudo apt install every package you notice missing (sudo apt install dolphin konsole…) to reinstall them, or, what I would do in your case, do a clean reinstall to get everything back in working order. First make a copy of your entire home folder (and any other folder you may want to save) to another drive, then do a clean install, and copy the files back to where they're supposed to be.
If you can't log in, try logging into the console (Ctrl+Alt+F3, type username and password when prompted). From there, you can run a command like sudo apt install kubuntu-desktop. That should fetch most Kubuntu files if it installs successfully. If it refuses because of package conflicts, you'll need to remove the conflicting packages first (e.g. sudo apt remove davinci-resolve if apt complains about kubuntu-desktop conflicting with Resolve).
A reinstall is probably quicker and easier, but you'll need to make sure to copy over everything (including hidden files!) you may need off the broken system. You can do this from the Kubuntu installer by running the "try kubuntu" option when prompted and simply launching a file manager. Any system modifications you made to your system (additional drivers and programs, configuration) will need to be made again. If you haven't messed with the system too much, this shouldn't take long; all you need is to install your old programs, and the config files from your backup should leave you right where you left off.
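For the backup step, the one thing people trip over is hidden files. A small sketch, demonstrated on throwaway temp directories standing in for the real home folder and the external drive:

```shell
# Stand-ins for /home/youruser and the backup drive; swap in real paths.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/.config_example" "$src/Documents_example"

# Copying "src/." (rather than "src/*") makes cp -a pick up dotfiles
# too, and -a preserves permissions and timestamps.
cp -a "$src/." "$dst/"
ls -A "$dst"   # lists both entries, dotfile included
```

The same pattern works from the "try kubuntu" live session: mount the broken root, then copy its home directory onto the external drive.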
As for system snapshot tools:
If you're comfortable with messing around with partition layouts, I highly recommend looking into setting up BTRFS+TimeShift; it could undo the damage in seconds after rebooting.
Unfortunately, Kubuntu doesn't offer this tool as a simple option in the installer, so there's a bit of manual work involved to get it to work, and if you don't know what BTRFS is you may not want to deal with that nerd shit.
I think setting the partition type to btrfs during setup is all you need to do (that, and installing timeshift of course), but I haven't verified that this still works.
I'm lucky enough to have other systems around to back up the drive with for the reinstall
I am absolutely going to figure out how to set up Timeshift now
You can go to /var/log/apt/ and read the history.log, as it will contain every single package that you installed or removed.
Based on that, you can restore the system to a working state by manually undoing the changes (removing what was installed, installing what was removed).
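A hedged sketch of turning those "Remove:" lines back into a reinstall list. The sample string here mimics the history.log format; on a real system you would pipe `grep '^Remove:' /var/log/apt/history.log` through the same awk:

```shell
# One history.log "Remove:" line holds comma-separated
# "name:arch (version)" entries; keep only the package names.
log='Remove: dolphin:amd64 (23.08.5-0ubuntu1), konsole:amd64 (23.08.5-0ubuntu1)'
printf '%s\n' "$log" | awk -F': ' '/^Remove:/ {
    n = split($2, entries, /, /)
    for (i = 1; i <= n; i++) {
        split(entries[i], parts, ":")
        print parts[1]      # package name only
    }
}'
```

The resulting list can be fed straight back to apt, e.g. `sudo apt install $(…)`.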
Lol
Please install Davinci Resolve in a Podman/Docker container.
- install podman and distrobox
- clone the git repo
- place the davinci binary in there
- run setup.sh
And this looks like just sddm-breeze is missing
GitHub - zelikos/davincibox: Container for DaVinci Resolve installation and runtime dependencies on Linux
If you can intercept boot ( press a key to get to the grub menu or whatever... I haven't used Ubuntu in a while so maybe it's not so simple anymore) you may be able to enter rescue / single-user mode and let apt complete the changes and then revert them.
A clean reinstall may be easier depending on how much you've changed on the system. Easier isn't always better; fix this and you'll know how to do it again in the future.
Find (and reinstall) packages with corrupted files (without breaking anything)
I usually prefer to fix a Linux system rather than reinstall from scratch. My computers have seen many distribution upgrades and a list of PPAs or third-party repositories. APT usually makes sure that… (Ask Ubuntu)
On your next OS reinstall, perhaps consider using an atomic distro. They’re WAY harder to break in this fashion - primarily because you can just roll them back to the previous known-good state.
Edit: genuinely curious what the downvotes are for - I thought atomics were quite popular here?
Easiest fix:
1.- Download Fedora
2.- Install Fedora
3.- Never look back
4.- Be happy the rest of your life
At login, press ctrl-alt-f4 or f5 or one of the F* keys until you get a text based login screen. (Might need to press enter on a blank screen for the login to appear)
Log in with your username and password.
sudo apt install kubuntu-desktop
Sweden's two largest fishing companies have their head offices on Rörö, in the northern archipelago of Gothenburg, in Öckerö municipality. But most of the large fishing companies are based in Fiskebäck, which has long been a district of Gothenburg. The largest fishing company in Fiskebäck is Fiskeri AB Ginneton.
fiske.zaramis.se/2024/07/24/st…
The largest fishing company in Fiskebäck - Fiskeri AB Ginneton - Svenssons Nyheter - Njord
China is a capitalist country with a planned economy. The government draws up plans and subsidizes various kinds of production, for example the car industry, so that China becomes the dominant and leading producer of electric cars. The more than 100 companies that produce electric cars receive enormous subsidies from the state, regions and cities.
blog.zaramis.se/2024/07/24/pla…
A planned economy leads to a surplus of cars - Svenssons Nyheter
Kill a Process Running on a Specific Port in Linux (via 4 Methods)
Learn different ways to kill a process running on a specific port in Debian, Ubuntu, Linux Mint, Red Hat, Fedora, Arch, and other distros. (Linux TLDR)
As a German, it's always fun to use the ss command. The SS was the organization that did most of the genocide under Hitler. That's a bad name around here, so people are always surprised that a command is named that.
But what's even more fun is that we can memorize the standard set of flags as -tulpn, because it's basically spelled and pronounced like "Tulpen", which is German for tulips.
So, occasionally I get to tell people to type "SS-tulips" into their terminal and it always confuses the hell out of them. 🙃
I think most Americans think of that as well. It's even the first several Google search results for "ss". Bad name choice.
Though we (Americans) didn't get the fun "tulip" bit.
There are so many poor names in FOSS, but people refuse to change them out of attachment to history. One other example comes to mind: Gimp.
Basically, devs are terrible at naming things.
gimp: 1. an unpleasant or stupid person; 2. a person with a physical disability… (dictionary.cambridge.org)
ss -tulpn was a welcome find for me. I have the same flags memorized for netstat and dislike always having to install that on a new box. Very handy tool.
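For anyone who hasn't internalized the tulips yet, the flags expand like this (a sketch assuming iproute2's ss is installed):

```shell
# -t TCP sockets      -u UDP sockets     -l listening sockets only
# -p owning process   (needs privileges to see other users' sockets)
# -n numeric ports/addresses instead of resolved service names
ss -tulpn | head -n 5
```

The netstat equivalent takes the same -tulpn string, which is why the mnemonic transfers between the two tools.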
And it's used for killing... processes.
On a separate note, I wish such tutorials explained what the commands are abbreviations of. Would make it easier to remember.
Everyone Wants to Control the Internet.
Everyone wants to prevent us from using the Internet the way we want, but we still have options. (cheapskatesguide.org)
Need help to find OSTree-like project
Couldn't find the project in my browser history or Lemmy saves. I'm pretty sure it was Lemmy though that led me to find a GitHub project similar to OSTree. It sounded like it was maintained by one person and it hasn't been updated in a long time because the author thought it was "done" and they used it frequently.
It was a tool that let them basically create images that could be booted from, and it was easy to layer software on top of a base image. I think there were config files similar to Containerfiles, but they didn't look the same. Don't think it was "goldboot" either, but that might be a little closer to what the project does. I don't think it was something Fedora-specific either, like bootc.
Update: Found it! It was in the history of a laptop I rarely use (of course). The project is github.com/godarch/darch and it does appear to be those things I said: layered, docker-like, bare metal, and OS agnostic.
GitHub - godarch/darch: A tool for building and booting stateless and immutable images, bare metal.
Could've been something Fedora-ish but based on the GitHub I don't think that's it. The most distinct thing I remember is that it appeared abandoned but the author just didn't feel it needed any changes.
I use like four different devices to browse and some have multiple browsers so checking history has been rough.
What exactly am I doing by adding deb http://ftp.us.debian.org/debian sid main to my /etc/apt/sources.list? Trying to install the newest yt-dlp on Debian 12.6.
All I wanted was to install the current yt-dlp (2024.07.16-1) on Debian 12.6.
The suggested way to do that, according to packages.debian.org/sid/all/yt…, is to add that line to that file (/etc/apt/sources.list), but do I really need to download the 1600 files that upgrade would entail?
I don't want to download the tar.gz 'cause upgrading that would be a pain.
~/.bin or ~/.local/share/bin and dropping it in there. As long as you have permission to that directory, yt-dlp should be able to easily update itself.
this is the way. easy. no install. no extra steps. update when you want.
or you can add the ppa that's listed in the yt-dlp install instructions (scroll down to third-party package managers > apt) and use apt to install it like any other package.
In the best-case scenario you'll turn your Debian into sid. In the worst-case scenario you'll break your system.
I do not suggest this operation unless you're sure of what you're doing.
Alternatively, you can install yt-dlp using snap or using the Nix package manager.
Debian sid is their unstable branch; it contains all new packages before they are tested. As such, if you try to install updates from it, you'll likely get a very unstable system.
You can set it up so that you only get a specific package ( wiki.debian.org/DebianUnstable… ), but honestly, if you need the very latest version, I'd recommend just grabbing it from github or wherever. Iirc, yt-dlp has a -U flag which will automatically update it.
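The DebianUnstable wiki approach boils down to apt pinning. A sketch of the kind of stanza involved (the file name and priority here are illustrative, and a sid entry still has to exist in sources.list for it to matter):

```
# /etc/apt/preferences.d/99-sid-pin  (illustrative)
Package: *
Pin: release a=unstable
Pin-Priority: 100
```

With a priority below 500, nothing gets pulled from sid automatically; you opt in per package with something like apt install -t unstable yt-dlp, and everything else keeps tracking stable.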
github.com/yt-dlp/yt-dlp/wiki/…
Normally I try to use apt for everything, but yt-dlp is an exception since when you want it, you probably do actually want the latest version. I think the only thing it depends on is python, so simple enough to get it from git one way or another.
PS: Now that I actually look at that page I linked to, I see there's a PPA repo you could use. I don't know who runs it or how up-to-date it is, but it's probably a better bet than what you were trying.
$PATH.
pipx, since it'll create python virtual environments for each app installed, and won't mess with system packages.
pipx install yt-dlp
This will install yt-dlp with everything it needs but without fucking anything else up, both system-wise and for your user (because installing python packages in your home manually can cause problems). You must have $HOME/.local/bin in $PATH to then be able to run yt-dlp, but I think pipx will check and warn you.
pipx upgrade yt-dlp to update it (or upgrade-all)
pip install yt-dlp. No messing up with my system.
pipx does that without this manual process - it's meant for these standalone apps that are in your $PATH.
If you're not running sid, do not look for install instructions on the sid page. If you're on 12.6, that's Bookworm (current stable name), look there for help with 12 stuff.
Best way to use the current #yt-dlp is to uninstall the one from the repo, and grab the current release from the github page and drop it somewhere in $PATH.
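The manual route amounts to three commands. A hedged sketch: the URL is the project's "latest release" redirect on GitHub, and the PATH check assumes a bash-style shell:

```shell
mkdir -p "$HOME/.local/bin"

# Fetch the latest release binary; -f makes curl fail cleanly offline.
if curl -fsSL https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp \
        -o "$HOME/.local/bin/yt-dlp"; then
    chmod a+rx "$HOME/.local/bin/yt-dlp"
fi

# Make sure the directory is actually searched by the shell.
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                      # already present
  *) export PATH="$HOME/.local/bin:$PATH" ;;      # add for this session
esac
```

After that, yt-dlp -U can update the binary in place, since your own user owns the file.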
The latest yt-dlp is in bookworm-backports.
Just install it via pip and then symlink its binary file to /usr/bin.
t. Am running a live stream 24/7 on my orange pi zero 3 (via ffplay/yt-dlp) since forever.
"Why not simply add $HOME/.local/bin to $PATH?"
Because it breaks things. While symlinking it does not.
"Why?"
No idea, honestly.
Also, you can take it a step further and make a tmpfs partition at $HOME/.local and then add the following line to your .bash_profile file: TMPDIR=$HOME/.local pip install --break-system-packages -I --no-input yt-dlp &&
pipx install or your distro's package instead of pip install --break-system-packages
What you are doing: adding the unstable repository to your Debian system. Debian has three levels of software stability, stable, testing and unstable.
Stable does what is says on the tin. It’s stable, but older. Testing is gonna be the next major version when it’s deemed stable enough to be called stable. Unstable is for trying out new shit and seeing what breaks. It has the most recent packages and the most problems.
Stable and testing will be named after different characters from Toy Story, unstable will always be named after the character “Sid” from Toy Story.
In the context of what you’re trying to do, you are fucking up.
Yt-dlp can (and should in most cases) update itself by using the command “yt-dlp -U”. But it will only update itself that way if you manually install it from the git page.
You can do this by downloading it and putting it somewhere in your user's $PATH. This is just like putting a program folder in Windows in c:\program files and making a start menu entry manually, except you won't make the start menu entry, because your shell will always look in $PATH to see if it can run what you just typed. If you're familiar with Macs, it's literally like copying the program to your Applications directory.
There’s instructions how to manually install on the yt-dlp git.
You should do yt-dlp this way unless you have a good reason to use the Debian repos or pip.
E: once you get yourself straightened out, make sure to add “yt-dlp -U” to all your scripts before they actually run. It keeps you from getting the wrong quality profile or downloads from failing or whatever.
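The $PATH lookup described above can be demonstrated end to end. A self-contained sketch (the directory and command names are made up for the demo):

```shell
# Create a throwaway "program" and a directory to hold it.
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello from demo\n' > /tmp/demo-bin/demo-cmd
chmod +x /tmp/demo-bin/demo-cmd

# Put the directory on $PATH; the shell now finds demo-cmd by name.
PATH="/tmp/demo-bin:$PATH"
command -v demo-cmd   # shows the resolved path under /tmp/demo-bin
demo-cmd              # prints: hello from demo
```

Dropping the real yt-dlp binary into a directory already on $PATH (such as ~/.local/bin on most distros) works exactly the same way, no start-menu equivalent needed.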
Proton Experimental gets fixes for Ubisoft Connect, Once Human, Burnout Paradise Remastered
July 22nd saw a new release of Proton Experimental from Valve, as work continues as always towards the next main release of the Windows compatibility layer for Linux desktop and Steam Deck. (Liam Dawe, GamingOnLinux)
NVIDIA 560 Linux Driver Beta Released - Defaults To Open GPU Kernel Modules
NVIDIA today released their first Linux beta driver in the new R560 driver release branch. (www.phoronix.com)
I tried Linux on my desktop end of last year (like I always did on about a yearly basis) and decided that if I was gonna make the switch, I needed an AMD card. NVIDIA + Wayland had a lot of flickering issues and whatnot, but I didn't want to use X11 because Wayland has way better support for multi-monitor with different refresh rates and also VRR.
So, I sold my RTX 3080 and got a Radeon 7800 XT and switched to Linux on my main desktop full-time January 1st. A few months later and NVIDIA finally decides to stop fucking around and properly improve their Linux driver. Could've saved a few bucks there (sold the 3080 for like 350,-€ to a friend and got the 7800 XT for like 550,-€, and the 7800 XT is pretty much in the same performance ballpark, so I spent 200,-€ on better compatibility/less pain).
Good to know that NVIDIA will be an option for me for a GPU upgrade in the future. It's always good to have more choice. While my experience with AMD Radeon under Linux was okay, it wasn't really perfect either. I had the odd crash here and there with kernel versions from earlier in the year (6.6), 6.7 had black screen issues with RDNA3 (maybe RDNA2 as well) after standby and hot restarts (fixed in 6.7.4 or 6.7.5 iirc), and ever since 6.7 I have stability issues with enabled VRR and multi-monitor as well, unless I force the memory clock to stay at a higher frequency. Then there's also this issue that just got fixed with 6.10 it seems.
So if NVIDIA really ups their game now and consistently improves their Linux driver, I could see myself going NVIDIA again. I'm also excited to see what Intel has in store though.
With vrr and atomic modesetting, some cursor plane updates are dropped (#2186) · Issues · drm / amd · GitLab
Whenever the cursor does not drive the refresh rate with VRR and the refresh rate is below the maximum refresh rate of the monitor, amdgpu only updates the… (GitLab)
I'm not regretting the switch, no worries :). Overall the Radeon 7800 XT is still a great card, it's a decent step up in terms of efficiency compared to the RTX 3080 as well and the PowerColor Hellhound model I got is the first card I ever had (well, with active cooling at least) where I actually agree with the reviews that the card stays pretty quiet even under load.
I also know how to work around each problem: KDE has a built-in workaround for the cursor stutters (as of version 6.something), and in GNOME you can disable the hardware cursor, which can decrease performance, but so far I haven't really noticed anything. The artifacting and eventual crashing after standby with enabled VRR can be worked around by reconfiguring any display: I usually change the refresh rate of my second display between 144 and 165 hertz.
The frequency of random crashes decreased a lot with newer kernel versions, and I'm not even sure if the crashes I had in KDE 6/6.1 were caused by the AMD driver or by KDE, which seems quite a bit more moody to me than the more mature KDE 5. That's also why I'm trying GNOME now (which I actually enjoy using way more than I thought).
A few days ago AV1 decoding on AMD was borked in Mesa 24.1.something, but was hotfixed a few days later. My self-compiled kernel 6.10 refused to boot with errors related to a network card, but I'll check it out again as soon as Fedora releases their official test build (potentially this weekend) and will report the bug should it still occur. As soon as 6.10 is working, that's one less workaround for me to worry about (unless that fix somehow doesn't work for me).
My comment was more about the fact that I'm happy NVIDIA is starting to take Linux seriously (again). It's probably not quite there yet, but NVIDIA seems to be committed to delivering a good Linux driver now, and their latest releases each brought big improvements. There still seem to be some bigger issues (like the one you described), but now I'd assume we'll get there sooner rather than later.
I tried the new installer out the other day to see if it made ALVR more stable for doing Steam VR with my Quest 3...
The installer was very user friendly, and ALVR is way more stable now.
I'm pretty happy, installing the Nvidia drivers can now be done with a single one-liner, which is ideal.
FPS is double of what it was on Windows on the same machine.
I honestly don't believe you.
Hahaha, we have straight up liars lurking in the thread!
A first for the Lemmy platform! /s
on Win 11 I was getting 30 to 40 fps on maximum settings. on CachyOS i'm getting 75+ fps on the exact same settings...
(....On upgraded hardware) ;p
NVIDIA's user-space components remain the same and are closed-source, but great to see the NVIDIA open-source kernel driver bits being mature enough to now be preferred over the proprietary ones on supported GPUs.
How is it open source? In the whole history of the repository, there were 11 merged PRs in 2022 (when the project began) and none merged since, even though lots of PRs have been submitted. No issue-fixing PR has ever been merged, and the maintainers themselves never open issues or PRs.
All of their commits are tagged versions, none of which describe in words what changed. It's clear they still do their actual development internally, and the GitHub repository doesn't contain that incremental work. Because the commits are releases only, there are just 65 commits on the main branch from May 2022 to the latest commit/release 4 days ago.
:::spoiler so NVIDIA,
:::
Pull requests · NVIDIA/open-gpu-kernel-modules
NVIDIA Linux open GPU kernel module source. Contribute to NVIDIA/open-gpu-kernel-modules development by creating an account on GitHub.GitHub
Same for me with Black Mesa. Native version has all sorts of graphical glitches while Proton looks as it should.
OTOH some games like Valheim run very well natively.
What is Firefox supposed to do?
What is Firefox supposed to do?
Firefox now collects data for advertisers. It's not actually scary, but there is a bigger problem.Corbin Davenport (The Spacebar)
I do know that Mozilla's Privacy Preserving Attribution is not something you should worry about
I believe Corbin is correct, based on my own assessment of this feature, though he isn't providing any evidence either.
Advertisers aren't going to give up their existing tracking methods unless the alternative is cheaper and more effective, or they're driven to it by regulation.
With only 3% market share and little ability to sway regulators, PPA could be the best solution in the world and still wouldn't see significant adoption.
So no, you don't need to be concerned about it... because it will be forgotten in a few years.
like this
DaGeek247 likes this.
Right, Apple doesn't have an ad-revenue & tracking empire to protect, and should Safari adopt PPA, the discussion changes. It would no longer be the API used merely by Firefox with its (estimated) 2.7% user base trying to gain any traction, it could be Chrome holding back the tech used by a cumulative (estimated) 20% of web users. That's a very different conversation.
Also, despite advertisers and big tech's best efforts, the chance remains that legislation is passed somewhere imposing stricter privacy protections on the web. Again, should that happen, PPA might be well positioned as an alternative to past methods of measuring ad effectiveness that advertisers wouldn't necessarily like... but any alternative that works could make them less resistant to such an important change.
All hypothetical, of course, but if you never consider future possibilities, what are you even aiming for?
Browser Market Share Worldwide | Statcounter Global Stats
This graph shows the market share of browsers worldwide based on over 5 billion monthly page views.StatCounter Global Stats
"A good compromise leaves everybody mad," as Calvin would say. So, what is Firefox (or Mozilla) supposed to do here? What are any web browsers supposed to do here?
The solution isn't simple
The advertising industry swallows up too much personal data so it can make valuable targeted advertisements. Websites, publishers, and independent creators now rely on the elevated income that comes from targeted advertising, and it's difficult to convince people to pay for content.
We need only look at the amount of profit made by these different parties to see where the problem might lie, and therefore who might have to take a hit. The advertising industry makes humongous amounts of profit, and it makes it on the backs of users and content creators. You can easily see that by imagining the effect of removing either of those from the equation. Removing the advertising companies, on the other hand, has no such effect; in fact, prior to the Internet there was no third-party advertising middleman between, say, newspapers and the actual advertisers paying for ads.
If we abandon the nonsense notion that everyone gets paid what they deserve, then we can clearly point to the redistribution needed: from the advertising companies to the content creators and perhaps users. For the latter, either in the form of less data collection or direct payments for data.
We probably wouldn't be in this position if we didn't live with an advertising-industry oligopoly, as some companies would have paid more to content creators and preserved privacy for users. However, the free market doesn't tend to produce competitive equilibria in the long run. So it has to be redistribution: get these fuckers by their necks and shake 'em down for a big chunk of the profits they make, and subsidize content and data privacy.
And you know how much it would cost any OECD government to publicly fund the development of a web browser? Yeah, exactly. But we've been brainwashed to the point of not even imagining such solutions.
like this
Atelopus-zeteki likes this.
I agree that advertising companies take too much off the top and a lack of competition has probably made that worse. That's also an issue with a lot of publishers, many of them make buckets of money but still pay writers/editors/other staff poorly. That's just normal capitalism stuff that won't be fixed until there's a major global economic shift.
In fact prior to the Internet there was no third party advertising middle man between say newspapers and the actual advertisers paying for ads.
Right, because there were very few newspapers, and all of them were well-known enough that finding advertisers was not difficult. Independent creators and smaller publishers don't have the brand recognition or massive initial audience to make that happen. You can see this in action with a lot of YouTube channels; most of them only have access to YouTube's own ad system and offers for in-video ads from shady companies and mobile games (Better Help, Raid Shadow Legends, Opera, etc).
until there's a major global economic shift
Like when the Joker burnt a Trillion dollars?
Not use 100% of my CPU at idle and become a zombie process when I kill it.
I think it might be a packaging problem but still I'm salty...
Nvidia 560 beta driver release
Linux x64 (AMD64/EM64T) Display Driver | 560.28.03 | Linux 64-bit | NVIDIA
Download the English (US) Linux x64 (AMD64/EM64T) Display Driver for Linux 64-bit systems. Released 2024.7.23www.nvidia.com
Lols. :)
Nvidia programmers strike again...
AMD used to have the same issue - their drivers were proprietary and buggy (anyone remember fglrx?). The difference is that they did something about it. Their modern drivers are open-source and mainlined so it's easy for anyone to work on them. New kernel display/GPU features always come to AMD first, because the kernel developers working on the new feature can just add it to the AMD driver themselves.
Nvidia has open-source drivers now, but they're still out of tree (so they'll always lag behind the kernel) and AFAIK they have no plans to merge them into the kernel.
I appreciate Nvidia's efforts, and their newer drivers are much better than older ones (especially now that they support explicit sync), but they're just not as good as AMD's.
So instead of accepting that the driver should be GPL and part of the kernel, you turn things around and pretend the development of the kernel is the way that it is because of a conspiracy against Nvidia?
The bit regarding Wayland doesn't make sense, no idea what you're getting at. Though maybe you don't follow Linux developments?
It's not a conspiracy. Here's Linus, himself, publicly picking a fight with NVidia. All because of a driver not being open source. I love open source, I love the GPL, but no individual or company should be required to do business that way. It's up to them, as is their right.
All because of a driver not being open source
Do you even assemble the sentences in your head before you post?
That is precisely the issue, it's closed source.
Now you're just trolling. Did your dad block all the porn in your home network and now you're bored?
Closed source isn’t a crime. However trying to ruin a company with exclusionary tactics can be. Linux kernel devs and Wayland devs have all conspired to harm a company.
NVIDIA kinda shot themselves in the foot on Linux and excluded themselves, by refusing to support generally supported APIs like:
- VA-API
NVIDIA would rather the OSS community use their VDPAU or NVENC/NVDEC APIs, whilst everything and their dog uses VA-API.
- GBM
Not true anymore (for drivers above 495), but in the past NVIDIA refused to support GBM (for Wayland) and would rather have had compositors use EGLStreams instead.
Next to that, modern NVIDIA hardware (GTX 900 and 1000 series) cannot be reclocked on the open-source Nouveau drivers because it needs some magically blessed signature from NVIDIA. NVIDIA refuses to supply that signature for that hardware, but did release it for the 1600 series and up.
That's just two things where I am like, dafuq are you doing NVIDIA....
Looks like the birdie has escaped phoronix...
In the small chance that this comment is serious: Nvidia is doing this because their corporate server-based customers need the ability to troubleshoot and debug the driver.
The actual trade secrets are being moved into the proprietary firmware blob and out of the driver.
like this
chameleon likes this.
How is it nonsense? Linus himself, on the kernel mailing list and in public speaking, has repeatedly gone after NVidia over their licensing. In the kernel, he's repeatedly cut NVidia off from using various kernel internals because they aren't open source, attempting to cripple their driver. That's fact. Check your history on it.
As for Wayland, it could have been written to do absolutely anything they wanted it to do and be. They chose not to support NVidia due to the licensing, purposely choosing an incompatible way to display, to try to force NVidia to change or to fall from its spot as market leader.
I feel bad for NVidia caving like this: an open-source driver coming out, them adding features to work with Wayland instead of the other way around. It reeks of extortion by the kernel and Wayland devs, damaging market share if the devs don't get what they want. I hope they get sued for it and lose everything. It casts a terrible light on the open-source community that it would make companies either capitulate, or else the community tries to cut the company off at the knees. It was wrong and should be severely punished to prevent it ever happening again. As it is, no hardware company should trust Linux or offer to support it in any way, because it might turn around and bite you as it did NVidia.
I would love to buy an AMD, but I can't afford it, so I'm stuck with the Nvidia I have.
It. blows.
Effectively Use History Commands in Linux
Effectively Use History Commands in Linux
Master the history command and learn some interesting usage of the bash history feature in this tutorial.Abhishek Prakash (It's FOSS)
I think it's the only shell shortcut I know haha
You can install fzf to make it fancier.
GitHub - PatrickF1/fzf.fish: 🔍🐟 Fzf plugin for Fish
🔍🐟 Fzf plugin for Fish. Contribute to PatrickF1/fzf.fish development by creating an account on GitHub.GitHub
GitHub - cantino/mcfly: Fly through your shell history. Great Scott!
Fly through your shell history. Great Scott! Contribute to cantino/mcfly development by creating an account on GitHub.GitHub
Fortunately, you can just use your favourite package manager instead: docs.atuin.sh/guide/installati…
With fzf installed, it is easy to integrate it with your bash history. In my .bashrc, I have:
# Introduce fzf-driven functionality as described here: https://wiki.archlinux.org/title/fzf.
source /usr/share/fzf/key-bindings.bash
source /usr/share/fzf/completion.bash
Also, you may be interested in zoxide, which keeps track of paths you have navigated to. Also from my .bashrc:
# Enable an autojump-like 'j' command. Use 'ji M' to select paths starting with M using fzf.
# This needs to always come last.
eval "$(zoxide init --cmd j bash)"
GitHub - dvorka/hstr: bash and zsh shell history suggest box - easily view, navigate, search and manage your command history.
bash and zsh shell history suggest box - easily view, navigate, search and manage your command history. - dvorka/hstrGitHub
To use the last argument of the last-run command, press the Alt+. keys.
Sounds like a poor man's !$ to me!
$_ also works. I love Alt+. but sadly it doesn't work in any Mac terminal emulator I've found and, even more sadly, I am forced to use a Mac at work.
I haven't tried !$ so I'm not familiar with its function, but one nice thing about Alt+. is that you're not limited to the last argument of the most recent command; instead, it allows you to scroll backwards through history like Ctrl+R.
No, it's a shell feature. Terminal emulators typically don't even know what shell is running, and I haven't heard of them adding shell features. That would require the terminal emulator knowing you're using bash, knowing how to interrogate history, etc.
From man bash
:
yank-last-arg (M-., M-_)
Insert the last argument to the previous command (the last word
of the previous history entry). With a numeric argument, behave
exactly like yank-nth-arg. Successive calls to yank-last-arg
move back through the history list, inserting the last word (or
the word specified by the argument to the first call) of each
line in turn. Any numeric argument supplied to these successive
calls determines the direction to move through the history. A
negative argument switches the direction through the history
(back or forward). The history expansion facilities are used to
extract the last word, as if the "!$" history expansion had been
specified.
is there a way to save commands from history? i tried to figure this out when i was starting to use linux regularly, to help learn commands and to make a reference for myself as to what the commands do. i'm familiar with things like man, info, tldr and others but i wanted to put things in my own words since i remember better that way.
what i'm wanting but can't seem to automate:
-save commands from bash history to a file with only the command and arguments used, no line numbers or time stamps.
-filenames can be kept, but if they could be stripped out easily, that would be better.
-the saved file should have the list sorted with any duplicates removed, and this should happen after any terminal session ends.
-i've read about changing the prompt but not done it correctly and not sure if possible or the safest way.
-i've tried using .bash_logout but it doesn't seem to do anything and i'm not sure why.
this isn't too important anymore, as i've grown more comfortable with linux and bash but it bugs me that i never got it to work. i can copy and paste more detailed notes of what i tried but i'd need to redact a bunch of cursing and frustrated whining.
You mean sth like cat <(history | cut -c 8-) history.txt | sort -u > history.tmp && mv history.tmp history.txt? (Redirecting straight back into history.txt would truncate it before cat gets to read it.) Not sure if it is possible to remove the file names.
It should probably work to put it in .bash_logout.
yeah that looks exactly like what i wanted, thanks! i probably should have asked my question a couple years ago but i was still very new to linux and didn't quite know the lingo. i'm still not quite sure how < works in general but i get the pipe and other redirects at least.
putting it in .bash_logout doesn't always work. something involving login shells i don't quite understand yet but i'll read more about it. i saw mention of putting
exit_session() { . "$HOME/.bash_logout"; }
trap exit_session SIGHUP
in .bashrc to make it always work but i also don't understand trap yet either so i'll look into that too.
thanks again, your reply helped point me in the right direction of things i want to learn!
With cat <(echo data from the stdin stream) from_file.txt, you get the data in the first argument from a stream.
With .bash_logout I do not have much experience yet.
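To make the <( ) part concrete, here's a minimal demo (the /tmp path is just for illustration; note that process substitution is a bash/zsh feature, not plain POSIX sh):

```shell
# <(cmd) expands to a /dev/fd/* pseudo-file containing cmd's output,
# so cat can mix stream data with a regular file:
echo 'from a file' > /tmp/ps_demo.txt
cat <(echo 'from a stream') /tmp/ps_demo.txt
# -> from a stream
# -> from a file
```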
As a noob, where do I find more handy tips like this? Along with handy/popular apps?
Almost every windows app I had was on Linux (most were FOSS already) but I know there will be some unique or interesting ones.
For example, on Android there is Obtainium now to update apps directly from git, or the many ways to use YT without ads.
This is not bad for a start (common commands):
linuxblog.io/90-linux-commands…
90 Linux Commands frequently used by Linux Sysadmins (updated to 100+)
Linux Commands listed and explained with examples. Browse over 90 Linux Commands frequently used by Linux Sysadmins.Hayden James (linuxblog.io)
Depending how deep you want to dive into Linux, there is a great ebooks collection available:
humblebundle.com/books/linux-f…
Humble Tech Book Bundle: Linux for Seasoned Admins by O'Reilly
Get 15 books from O’Reilly on a range of topics, including DevOps, containerization, version control with Git & more! Your purchase helps Code for America.Humble Bundle
Here's something I use to search history for commands or keywords. I have this as a function in my profile:
function hgr() {
history | grep "$1"
}
history grep
Usage: hgr git to search for commands containing git.
Someone more knowledgeable may be able to point out ways to improve this.
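One small hardening worth considering (just a suggestion, demoed below against a canned history dump since history is empty in non-interactive shells): use grep -- "$1" so patterns starting with a dash aren't parsed as grep options, and filter out hgr's own invocation so searches don't match themselves.

```shell
# Same filtering hgr would do, applied to fake history lines:
printf '  1  git status\n  2  hgr git\n  3  ls\n' | grep -- git | grep -v ' hgr '
# -> prints only:   1  git status
```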
With fzf you can even get fuzzy history searching (the first search result has a video). atuin puts history into a proper db, optionally syncs across hosts, and, like fzf, enhances Ctrl+R.
Fuzzy Search Your Bash History in Style with fzf
If you spend a lot of time in a terminal then knowing how to search your history efficiently saves a ton of time. Here's how.Nick Janetakis
I have a function fuzzy_arg that I bind to Alt-a, which uses fzf for interactively inserting arguments from previous commands. It's Ctrl-r for Alt-. -- I've found it super useful for essentially inserting partial commands (single arguments) from the history.
GitHub - WillForan/fuzzy_arg: Ctrl-r for Alt-.
Ctrl-r for Alt-. Contribute to WillForan/fuzzy_arg development by creating an account on GitHub.GitHub
GitHub - atuinsh/atuin: ✨ Magical shell history
✨ Magical shell history. Contribute to atuinsh/atuin development by creating an account on GitHub.GitHub
From Linux to NetBSD with SSH only
CloudBSD.xyz
Overcome most cloud providers' limitations and use the system of your choice (NetBSD)cloudbsd.xyz
BSD is on its death bed
netbsd.org/releases/formal-10/…
Considering OpenBSD and NetBSD have had two new releases just this year, and how well-funded the BSDs are by major corpos who like ripping source code, I think their so-called "deaths" have been majorly overstated.
Give a BSD a try, it's a lot less like shoving systemd/apache2/red hat together and reading 300000 line long config files with documentation that clearly was never intended to be read and more like using an actual operating system designed to be cohesive.
First Pop!_OS 24.04 Alpha with COSMIC DE Drops on August 8 - OMG! Ubuntu
First Pop!_OS 24.04 Alpha with COSMIC DE Drops on August 8
Course set: the first alpha of Pop_OS 24.04 is scheduled for release on August 8th. So if you’ve been counting the days until you can try the new COSMIC desktop environment first hand… Well...Joey Sneddon (OMG! Ubuntu!)
like this
Rakenclaw, timlyo, massive_bereavement and jwr1 like this.
Depends on your point of view.
Their motivation was “we have a vision for our UX and GNOME won’t let us do it — so let’s write our own.”
It was only after deciding to write their own that they decided to write it in Rust.
They like Rust, but that is not what motivated them to make COSMIC.
My view is that if the goal was to effectively make good software they wouldn’t start from scratch.
If they used wlroots the desktop would be usable today with a good feature set.
If they used Qt or GTK they would have feature rich well supported software. (GTK4 could have been an improvement for them, it’s designed around being minimal and having platform libraries implement design choices)
They didn’t take a practical approach imo. You could argue its a long term investment but because of it it’s probably years off of feature parity. The only upside today is.. it’s written in Rust.
usable
No current distro is installable by blind users due to Wayland.
They did not build the compositor from scratch; they built it on top of Smithay, a library similar to wlroots but written in Rust.
I don't know if you've actually tried to use GTK or Qt, but it's insanely painful. There is a reason almost all apps are written in Electron: native GUI toolkits suck. If they had used GTK, they would still have had an outdated and hard-to-maintain toolkit, plus Gnome politics to deal with. Using GTK was actually the initial idea.
If we want Linux Desktop to succeed, at some point we have to build tools that people want to use. I'm glad they're doing it.
Yeah.
Don't get me wrong I guess I'm glad to see a bit more diversity in the DE space, but the design of cosmic has always been "Gnome but a bit dated and uglier" to me.
Still, theming exists despite the quirks it can cause sometimes, so it's not the end of the world.
I'm still going to have a little mess around with it and see what it's like though.
like this
massive_bereavement likes this.
What I am most excited for in COSMIC is the promise of tiling in a full DE. I like the idea that you can switch back and forth.
I started trying it out a month or so ago. Still pretty incomplete. Promising though.
The fact that it may drive the Rust GUI ecosystem forward is exciting as well. I do not need to see everything re-written in Rust but it will be great if Rust is a realistic option for new app dev.
Tiling
It's actually really good. I've been running the prealpha at times, and I've had no issues with tiling.
I'm missing 2 things from a real tiler: sloppy focus (WIP), and static workspaces.
Like, crazy far or just far?
Yes, I agree. Pop!_OS gets a lot of hate for some reason, but it's actually a really, really good distro.
I was asking about COSMIC though, since I'm really looking forward to try it!
More importantly to me, can blind users even install the OS.
All current mainstream distros now use Wayland, which has broken screen reading, so the OS cannot be installed.
Honestly, it's not as important. These projects are working with very limited resources, typically dependent on free labour. Accessibility is incredibly hard to get right and half arsing it isn't going to work. The priority should be pushing out a reliable, working prototype that people want to use. Once that's accomplished you can refocus on expanding the features.
Demand for reliable multi monitor support is going to be far higher than screen reading capabilities.
Let's bring back the webring.
Back in the day the best way to find cool sites when you were on a cool site was to click next in the webring. In this age of ailing search engines and confidently incorrect AI, it is time for the webring to make a comeback.
This person has shared their code to get started: Webring
like this
echomap, massive_bereavement, Lasslinthar, timlyo, and TheFederatedPipe like this.
like this
timlyo likes this.
Then the entire browser becomes useless. I couldn't even post this comment without JavaScript.
Edit: I wish a search engine that only showed websites without JavaScript existed.
noscript is like a screwdriver. umatrix is the whole toolbox.
both have their place
The idea comes up again and again on the fediverse. It feels ripe for some app/platform to kinda nail it.
I’m not sure this is it or even something that does exactly the old web ring thing. I think a simple enough system for the human curation of web pages in a standardised way that can easily be consumed and aggregated would go a long way though. The fediverse feels like its close to something.
That seems interesting!
In the end, I'm wondering if all the pieces are here on something like the fediverse but just need to be connected. I haven't thought about this at all until now (so I'm just riffing here) ... but the essence of such a system seems to me:
- Recommendations are human curated
- Recommendations come from a single human (or well defined collective)
- Recommendations are organised in a navigable structure
Point 3 seems to be the unclear part. A "ring" is obviously a bunch of connections (not unlike a linked list). But other structures probably have a lot to provide here, especially if they're amenable to some basic search facility.
You might be overthinking it, or I might be underthinking it.
When I hear "webring" I think of a simple list of sites, curated by the ring creator. And all members have a badge on their site, complete with a few nav buttons.
It was never broke, why fix it?
It was never broke, why fix it?
Totally fair! I don't claim to know what I'm talking about! I'm just riffing on what I suspect would work for me, but also motivated by what I feel is a relatively urgent need to create some robust and diverse human curation of the internet. So in a way I'm not really interested in remaking web rings, but more coming from the perspective of what else can be done with the same general idea along side webrings.
other structures probably have a lot to provide here, especially if they’re amenable to some basic search facility
I got real excited about the webring returning. This... not so much. Keep it simple.
well the central site of the web ring could be searched for any particular page that's part of the ring, and that search could be surfaced on any page that's part of the ring.
The full set of pages could be decentralised and cached across all members for robustness, and even include each page's own description and recommendations for every other page if they like.
And then, of course ... rings of webrings with as many levels of aggregation as people are interested in maintaining, again with decentralised caches of pages, their links and descriptions (all human curated of course) that can all be searched whenever a member page or aggregating page opts into it.
Tech capabilities have advanced since the 90s enough now that basic text search in a web page over a small data set is not hard or too much to ask.
And nested rings of rings of rings are scalable because at each level the data will just be links (and descriptions or names if available) while it would be on the user to navigate the various layers however they wish until they find something they're interested in.
I'm aware of it (and while I'm not super enthused about it, I can see my personal interest growing over time as the internet keeps going the way it is).
But how does it help with a page recommendation system? Is there a strong culture of that sort of thing on Gemini?
Iirc it's Geminispace.info
I could be wrong since I only visit their Gemini capsule.
like this
and Proddedcow like this.
one of my favorite things back in the day was the old-school "StumbleUpon" which was like webrings on crack.
Unfortunately, advertising and profit-seeking happened.
like this
tiredofsametab likes this.
Stumbleupon was great. I remember having a browser plug in for it. Then I stopped using it for a little while and never went back to it.
Does it still exist?
What Happened To StumbleUpon? Here's Why It Was Shut Down
StumbleUpon failed because of intense competition, leadership turnover, as well as due to a buggy product.Viktor Hendelmann (productmint)
FTA
eBay announced that it had agreed to acquire StumbleUpon for a whopping $75 million. The acquisition ultimately went through on May 30th, 2007. One of the major reasons why the team decided to sell to eBay was that it was promised complete autonomy and independence from its mother company
...sad trombone
You didn't have a good experience with it; many of us did have some good experiences with it.
But it made going out on the Internet interesting. Today I'm not sure if it's less or more risky to view a sketchy site: is it more risky now, with ransomware, data scrapers, and such?
I'd consider viruses to be less of a risk today, but my results probably vary.
My experience was that those webrings were often worth checking out if you didn't have something specific you were looking for that day.
It's not the same at all, but there's a sense of that experience when I suddenly realize I'm on Wikipedia and have opened 50+ tabs after I've finished what I was reading, and then just go through the tabs I have open.
like this
falseprophet and like this.
Webrings were themed though, so if your interest was cars, or cats, or ham radio, you could get on a webring for one of those topics and cycle through them.
And it wasn't all random, you could move left or right on the ring, or jump randomly. So a good webring manager could group sites together as you went around the ring as well.
like this
dhhyfddehhfyy4673 and like this.
Yeah! StumbleUpon was cool. Something about how it tried to engender serendipity.
Such a pity that so many other good recommendation engines died or succumbed to enshittification.
So would people having webpages instead of social media accounts
And there's your problem... (in the voice of Jamie Hyneman, Mythbusters). To see a real return of webrings, people would need to have (make) their own pages and curate some links.
Thinking about it, with the rise of self-hosting it's actually really viable: cobble together a docker stack with a WYSIWYG HTML editor somewhat oriented to the task (pretty sure something out there can be repurposed), a web server, a proxy, and that's about it. (Probably missing a fair bit, not my bailiwick; still, once the stack is made and solid, I'm guessing many would host it. I would.) Set a threshold of how many people you're willing to host, say 50 or whatever, so you're able to check for CSAM or other legal minefields, and Bob's your uncle. Stir in some solid security to keep it isolated if you're running it at home (or on a VPS) and it's golden.
OK, more complicated than I initially thought, and it's way less friction to use something like faceplant, which is entirely their point. Still, I think, if given the opportunity, and functional tools, and low enough friction, many would prefer to have a hand curated presence on the web above a facebook page.
I'll stop, but thanks for the interesting thought seed.
It will happen out of necessity once LLMs make search engines useless. Bookmarks and human-curated content will be the only way to find stuff.
It's already affecting small businesses worldwide, who aren't being discovered anymore by searches in their local area.
like this
and TheFederatedPipe like this.
like this
falseprophet likes this.
how would you federate? it comes naturally for lemmy to have each community on a separate server, but how would you do this for a project like dmoz?
i don't think it would be a good idea that one server could own "art" for example, and no one else could contribute.
and on the other side it would not be a good idea if everyone could add sites for "art", as then it's just a federated wiki? you'd still have to fight spam? do all entries in "art" have the same priority? or should there be some voting, or verifying from other instances maybe? but then rogue instances could vote for each other?!
how big is the spam problem on lemmy?
I don't know, but it could be interesting to try. I could easily imagine topic-focussed servers that go into more depth on specific topics. Perhaps you would only federate things that are at a high level, or directly linked. Kinda like a wiki, but with each community doing its own decentralised curation and moderation.
I haven't seen any spam on Lemmy yet, and only a tiny amount on mastodon (I'm much more active there).
Stumbleupon was fun.
I miss old web shit.
Ninety zeros dot com was one of the Internet's weirdest best things.
[guide] How to install Aslain's modpack for World of Warships on Steam
I just spent half an hour trying to figure this out so I thought I'd write it down somewhere in case it helps someone else in the future.
Aslain's modpack contains a whole lot of quality-of-life mods for WoWs, for example Battle Expert (formerly known as Navigator) which shows the exact relative angles between your ship and the enemy's. Almost feels like cheating to me, but Wargaming has endorsed this modpack and it even has a dedicated channel on the official discord server. Theoretically you have the same information without the mod, but it can be difficult to see how a ship is turning or changing speed by just looking at it.
These instructions are for when the game is installed through Steam, which looks like it uses some kind of overlay filesystem. That's why the game install folder didn't show up for the modpack installer when I tried other methods.
- Install protontricks, I used the version available in Fedora's repos.
- Download the modpack installer from the official site
- Find the WoWs install folder in Steam. Right-click World of Warships in the Steam games list, select Manage and "Browse local files" and the folder should open in your default file manager.
- In a terminal, run the modpack installer .exe file in the game's Wine prefix. I'm not entirely sure this makes any difference compared to running it in a new prefix as long as it can access the game files, it mostly seemed convenient to me. The app id for WoWs is 552990 and it should never change, but you can get it with
protontricks -l
if you're curious. Change the file path so that it matches the file you downloaded, then run:
protontricks-launch --appid 552990 ~/Downloads/Aslains_WoWs_Modpack_Installer_v.13.6.1_01.exe
It will print a lot of "failed to create" error messages for system dlls and exes, but that appears to be normal, and the setup window should open after a while.
- After some release notes etc. the installer will eventually ask you for the game's install dir. As far as I can tell, the game files do not show up anywhere on C:, but Steam mounts your Linux file system on Z: so we can use that instead. Browse to the game install folder, which we located in step 3, and select it. My install folder on Linux is
/mnt/faststore/SteamLibrary/steamapps/common/World of Warships/
so I select
Z:\mnt\faststore\SteamLibrary\steamapps\common\World of Warships
in the modpack installer.
- Either manually select the mods you want or use the recommended selection. As I wrote before, many of these mods feel like they give you an in-game advantage over other players, but WG has said they're legal...
- The first time I ran the installer it hung on "Finishing installation". It appears to happen to a few Windows users too but the mod dev doesn't know what causes it. I noticed that there was a cleanup process running in Wine
C:\windows\system32\cmd.exe /C DEL /s /f *.orig
which shouldn't take that long, so I killed it (from Linux) and the installer continued. The next time I ran it this didn't happen, and the installation finished in a few seconds.
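The Z: mapping in step 4 is mechanical: the path you type into the installer is just your Linux path with Z: prepended and the slashes flipped. A minimal sketch; the helper name is my own invention, not part of protontricks or Wine (Wine also ships `winepath -w`, which does this properly inside a prefix):

```shell
# Hypothetical helper: convert a Linux path to the Z:\ drive path Wine exposes.
to_wine_path() {
  # Prepend the Z: drive letter and turn every / into \
  printf 'Z:%s\n' "$1" | tr '/' '\\'
}

to_wine_path "/mnt/faststore/SteamLibrary/steamapps/common/World of Warships"
# prints: Z:\mnt\faststore\SteamLibrary\steamapps\common\World of Warships
```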
If you have the game installed standalone, e.g. through Lutris, then I think you can just run the modpack installer in the same Wine prefix, and you should see the game's install folder under C:\Program Files as you would on Windows. I.e. select the game in Lutris, click the tiny arrow next to the wine glass button, select "Run EXE inside Wine prefix" and then choose the installer you downloaded. But I haven't done this so I promise nothing.
Please don't take this as an endorsement of World of Warships, I borderline hate this game and only play it because some of my friends are obsessed with it. The gameplay is a bit too slow paced for my taste, there are a lot of hard counters which you can't do anything about in random matchmaking, and carriers (planes) can turn any game into pure suffering. I also dislike the game's monetization scheme, lootboxes are expensive and most have a tiny chance to give something really good and a big chance to give you complete garbage. The game might be f2p, but at higher tiers it becomes unplayable without a premium subscription (€10/month) since ship maintenance gets more expensive than your earnings. To maximize your ship's performance you need a high level captain, expensive modules and also buffs which are consumed each game. My friend tries to argue that the game is not pay-to-win because you can also grind ingame resources to buy those, but you'll spend many hours playing at a disadvantage if you don't buy your way past it. Just my personal opinion of course.
If, despite my warnings, you feel an urge to try this game (honestly, I thought it was quite fun at lower tiers), then check if any of your friends already play it and ask them for a referral code. Both of you get free stuff from the recruitment, but once you've created an account it's too late, unless you stop playing completely for 3 months. If you do that, your friend can send you a recruiting link if you want to start playing again.
Just a heads up, I've read that it's impossible to connect an existing wargaming.net account to a Steam account on Linux, so make sure you authenticate through Steam when you create the account if you plan on playing it through Steam. Though if you have Windows dual boot then I think you can link the accounts there if you need to.
Download ★ World of Warships ★ Modpack
Is Pegasus the best Steam alternative for launching games with a gamepad?
Do you know of interesting and equally customizable alternatives like Pegasus that are not proprietary and are comfy with the gamepad?
ES-DE Frontend (EmulationStation Desktop Edition)
ES-DE (EmulationStation Desktop Edition) is a gaming frontend for Linux, macOS, Windows and Android.
(edit)
Ok there's an AppImage. Btw looks like it can't see my games.
You need to put games into specific folders, but that's the whole setup. Everything else is automated.
For PC games you can put the .desktop shortcut files for the games into the steam folder.
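That setup can be sketched like this, assuming ES-DE's default ~/ROMs directory (adjust to your install); the game name and the Exec line are placeholders I made up, reusing a steam rungameid URL as an example:

```shell
# ES-DE scans per-system folders under its ROMs directory; the "steam"
# folder can hold .desktop launchers for PC games.
ROMS_DIR="${ROMS_DIR:-$HOME/ROMs}"
mkdir -p "$ROMS_DIR/steam"

# A minimal .desktop shortcut; "MyGame" and the app id are placeholders.
cat > "$ROMS_DIR/steam/MyGame.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=MyGame
Exec=steam steam://rungameid/552990
EOF
```

After that, ES-DE should pick the game up on its next scan like any other ROM.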
There is github.com/ShadowBlip/OpenGame… as well.
RetroDECK uses ES-DE but doesn't support native games yet, so pure ES-DE is better for this.
GitHub - ShadowBlip/OpenGamepadUI: Open source gamepad-native game launcher and overlay
Open source gamepad-native game launcher and overlay - ShadowBlip/OpenGamepadUI (GitHub)
So... no idea what happened but it works again.
Just in case anyone stumbles upon this thread at some point here are my current settings:
- The compatibility layer is proton-ge-custom (from AUR)
- Steam Input Translation in the Forza Horizon 4 Controller Settings is enabled using the "Official Layout for Forza Horizon 4 - Gamepad"
- Regular mode (not big picture mode)
- Steam Overlay is enabled
I still have the issue that some screens show keyboard keys instead of controller buttons, but since the controller works anyway I don't really care (it's purely cosmetic).
Thanks everyone for your suggestions! :)