A decision without consequences on EU fisheries. Today the EU has a restriction rule on fishing quotas which in principle says that all targeted fishing of stocks at risk of collapse can be stopped entirely. In practice it has almost never had any effect, and there was a proposal to remove the rule. The proposal was, however, voted down in the EU fisheries committee.
Announcing: frog-protocols for wayland
From the repo
Wayland Protocols has long had a problem with new protocols sitting for months to years at a time, even for basic functionality. This is hugely problematic when some protocols implement very primitive and basic functionality, such as frog-fifo-v1, which is needed for VSync to not cause GPU starvation under Wayland and also to fix the dreaded application freezing when windows are occluded with FIFO/VSync enabled. We need to get protocols into end-users' hands quicker! The main reason many users are still using X11 is missing functionality that we could be shipping today, but that is blocked for one reason or another.
Mesa MR to add support for the 'frog-fifo-v1' protocol: (github.com/misyltoad/frog-prot…)
GitHub - misyltoad/frog-protocols
Octopuses and fish caught on camera hunting as a team (ft. octopus punching fish)
"Octopuses normally hunt alone, but footage captured by divers has revealed that they can collaborate with fish to find their next meal. The videos, described today in Nature Ecology & Evolution (citation 1), show that the different species even adopt specific roles to maximize the success of joint hunting expeditions."
Associated research article (open access):
Sampaio E et al. Multidimensional social influence drives leadership and composition-dependent success in octopus–fish hunting groups. Nature Ecology & Evolution (2024). doi.org/10.1038/s41559-024-025…
Same news that was independently reported by Science News (might need membership):
science.org/content/article/so…
Earth may have breached seven of nine planetary boundaries, health check shows
Ocean acidification close to critical threshold, say scientists, posing threat to marine ecosystems and global liveability. Damien Gayle (The Guardian)
Sorry to be a pessimist, but, I truly believe Earth will smother the vast majority of humanity within 6-10 years. I'm squeezing in as many adventures as I can until then.
Good luck y'all.
Announcing: Frog Protocols for Wayland
Unauthenticated RCE vs all GNU/Linux systems to be fully disclosed in 2 weeks with no working fix yet
https://nitter.poast.org/evilsocket/status/1838169889330135132
A 9.9 is pretty bad no matter what. They wouldn't rank it almost a 10 if it was some obscure bug that is very hard to exploit.
With that being said, it is hard to know without details.
This link should be working.
Quoting from the OP tweet:
* Unauthenticated RCE vs all GNU/Linux systems (plus others) disclosed 3 weeks ago.
* Full disclosure happening in less than 2 weeks (as agreed with devs).
* Still no CVE assigned (there should be at least 3, possibly 4, ideally 6).
* Still no working fix.
* Canonical, RedHat and others have confirmed the severity, a 9.9, check screenshot.
* Devs are still arguing about whether or not some of the issues have a security impact.

I've spent the last 3 weeks of my sabbatical working full time on this research, reporting, coordination and so on with the sole purpose of helping and pretty much only got patronized because the devs just can't accept that their code is crap - responsible disclosure: no more.
Since this affects Linux and others, I'm guessing this is about OpenSSH. But I'm not very certain. Just can't think of another candidate.
But holy sh, if your software has been running on everything for the last 20 years
This doesn't sound like glibc as someone in the thread guessed.
Could be quite a few different things.
Could be the kernel itself, gnupg, openSSH or even bash.
But we won't know for sure until it's publicly disclosed.
Could be the kernel itself
Wouldn't make sense to me because the thread says GNU/Linux and others, though this could relate to Android or distros not using any GNU.
gnupg
Usually not exposed to the network, though. But it's generally a mess, so it wouldn't be too surprising.
Another candidate I have in mind is ntpd, but again that is usually not easily accessible from outside and not used everywhere, as stuff like systemd-timesyncd exists.
Just want to stress that I'm not sure about it being OpenSSH, it was more supposed to be a fun guess than a certain prediction
I can't think of anything except the kernel that is genuinely obligatory on all Linux systems, including embedded. Not glibc (musl). Not udev (mdev). Not systemd (OpenRC/runit/etc). My guess is that this is another exploit of something the reporter hasn't realized isn't mandatory because they're not familiar with non-mainstream distros. I suppose it could be a kernel issue that Android has specifically patched, but if that's it it'll be fixed in short order.
In other words, not exactly holding my breath.
If it's only GNU Linux - and not regular Linux - then we know it's not the Linux where the issue occurs.
(Just analyzing what's said. It's probably all linuxes if it's not a glibc issue)
It says GNU/Linux but also says "and others", which could mean anything. E.g. it doesn't specify whether something like Alpine would be affected—is that "and others"?
In any case, I'll wait 2 weeks and find out.
Looks like it's out there now:
evilsocket.net/2024/09/26/Atta…
Short version (correct me if I'm wrong):
If you have the CUPS service cups-browsed on your machine and you for some reason exposed it to the internet (port 631), you are about to get pwned.
EDIT: It also requires the user to print to the malicious fake printer.
Disappointment? Only if you mean the person that came up with FoomaticRIP.
For those who did not read the entire thing, it's a so called "filter" that converts the document before it's sent to certain nasty types of printers. Except it's not executed on the print server. The unauthenticated print server can just ask a client to run it on their side. And it's designed to be able to execute ANY command.
Getting very close?
At this point they should just have that announcement when they actually have the thing.
God, idk what version I'm even using. I never update programs like this unless I have to. From breaking things to confusing my workflow and moving things around, I've always been more frustrated than thankful. I haven't updated Reaper in ages either, and I'm certain they're better about keeping continuity than anyone.
Edit: lol people are mad about my own software updating habits? why and how are .worlders this way? I can't imagine the realities they live in, and am glad so
Maybe some people just click dislike because it has lots of dislikes? Who knows :)
we may see more US politically motivated investment to stay ahead in the game vs our global enemies.
Yes Boeing et al should be given more taxpayer money to waste like
Ain't the free market grand
This is expensive CGI; it screams it.
Not quite sure why they made it but someone needs to be convinced of something for sure.
(Just compare it to any footage of SpaceX landings; this video is far too clean, the sky alone is "perfect")
I'm thinking of building a PC - any advice?
My laptop is running out of storage space and I don't have anything I can remove anymore to increase it by much, so I'm thinking about building a pc. I'd also like to find a better gpu for doing video editing.
It will be the first one I've built, so I don't really know what I need. Also, does it matter for compatibility for Linux whether I go with AMD or Intel?
The high end of what I want to use it for is video editing with Kdenlive or Davinci Resolve, some modeling and animation in Blender, and some light gaming, like Minecraft or TUNIC.
I figure one of these guides might be useful, but I don't really know which.
Is there anything else I should know for setting up a PC to run Linux?
Edit: Maybe these guides from Logical Increments can help actually.
Just don’t bother with a 13th/14th gen intel right now. Either go 12th gen intel, or straight up AMD which is what I’d recommend.
Uh… are you not aware of the catastrophically bad lithography issues Intel has had lately across both the 13th and 14th gen, and the subsequent ass-tier fashion in which they handled it?
Do not buy a 13th or 14th gen Intel CPU.
Whatever you do, do not get an Nvidia GPU. I've only ever had problems with Nvidia drivers on Linux. Meanwhile, the AMD drivers (both the ones baked into the kernel and proprietary) work nearly flawlessly.
Intel's most recent generations of CPUs were also frying themselves, and Intel (at least last I checked) was not accepting RMAs from affected customers. Something to consider for your CPU at least.
pcpartpicker.com is a good place to start and can help you know if specific parts are compatible but it's just a place to start and is often still missing important info.
So you still need to do due diligence and do things like check measurements to make sure, for example, your video card will actually fit inside your case, etc.
Also, since it's your first time, you want to avoid any motherboards that require you to do a BIOS update to handle a newer processor, because that's just complicated stuff that you're going to want to skip as a beginner.
It's more expensive but go for a newer motherboard that is compatible with your processor out-of-the-box. BIOS updates are a pain and scary even for advanced users.
Heavy Blender users tend to avoid AMD for the reasons you point out.
This leads to fewer updates, because AMD users aren't that interested in the community.
It is an issue without any practical solution. As I need a long-overdue upgrade myself, Nvidia again seems the only real choice.
Everyone is sort of forced to do that unless we can convince AMD users to just try out Blender and submit results.
So hi, any AMD users who don't care about Blender:
give it a try and submit performance data please.
I'd avoid a 13th or 14th gen Intel processor right now because they've had a lot of problems with their manufacturing process. Otherwise, there's not really much difference between AMD and Intel in terms of like, OS compatibility or anything.
I've done some basic work with Davinci Resolve on linux and I haven't really had any issues with my Radeon 7800XT. I can't really speak for using the proprietary drivers for AMD, but with the open source drivers, as long as you install rocm-opencl through your package manager, Davinci Resolve should be fine. Overall, I'd recommend an AMD GPU.
Edit: You mentioned blender in a comment. For AMD's open source drivers you'd need to install rocm-hip for Cycles to work
Edit 2: I hadn't tried blender in a bit and I realized apparently at least on Fedora 40, you also need rocm-hip-devel at least as of 09/24/24 for supported AMD GPUs to show up in Blender. Idk how that would translate to other distros
PC Part Picker is good cuz when you start a new build, you start with the CPU and then it'll only show you parts compatible with that CPU. As someone else mentioned tho, it's not perfect and you still may want to check clearances between parts, like that your CPU cooler isn't too tall for your case, or that your power supply isn't too long (been there, lmao).
From my own personal experience with buying brand new RAM and it being bad a few times, I'd probably run memtest86+ for a few hours once the computer is together to make sure that the RAM actually works. You can download the linux ISO w/ GRUB option and make a bootable flash drive out of that and let it run. Afterwards, I usually install my OS. Might save you a few headaches down the road if you get into your new OS and things behave strangely, but its up to you.
Other than that, the setup shouldn't be too hard.
- Steer clear of Nvidia. Go AMD if you buy a graphics card; if you just use integrated graphics, both AMD and Intel are fine.
- When picking a motherboard, look at what wifi chipset is used and check Linux compatibility. Some wifi chipsets require you to manually install drivers, and some just don't work at all.
ebay, ebay, ebay (and also pcpartpicker).
Unless you want to frag people at 4k@140Hz in the latest AAA game, you probably don't need the latest generation of components (and I'd say your requirements are quite low here, considering the only thing you complain about is storage space).
Unless you really want to assemble everything yourself, consider buying one of the second-hand, previous-gen gaming rigs on ebay (but watch out for scams!). Even if you do want to assemble the PC yourself, consider buying used parts on ebay (or buying a full PC to cannibalize, reselling the excess).
What are the specs of your current rig?
Except for storage, are you satisfied with how it runs?
How much storage do you need for the projects you are working on? How much to archive things?
Do you want to do anything about backups?
Is a full size tower ok?
How good a video do you want?
What is your budget?
Here's a video with some good builds at different price points. That should be a decent starting point.
As you have in your post, Logical Increments is a good place to start.
As others have said, AMD is your best bet currently, mostly because of raw performance compared to recent Intel offerings. Assuming no hard budget or power constraints, here are my recommendations:
If you have the paid version of DaVinci Resolve, AMD does not have the best selection of hardware encode/decode options, but people have reported that Intel Arc GPUs work, so I would get an Intel A310 as a secondary GPU if that is something that you need.
If you want the best of the best GPU, without going Nvidia, the AMD RX 7900XTX is it. Also, AMD has stated publicly that they are moving away from high-end GPUs, so there probably won't be a better one coming out anytime soon.
If you want to plan for more gaming than you stated in your post, the Ryzen 7800X3D is the best gaming CPU on the market, so I would get that. If you plan to focus on video editing, the 9950X is the best, but probably not worth the cost compared to cheaper 9000 or 7000 chips.
If you go with a Ryzen 7000 or 9000 CPU, get DDR5-6000 CL30 memory.
If you're getting an air cooler for your CPU, don't pay more than $50. There are a ton of great, cheap options these days.
Get either the new Antec Flux Pro case (when it's available, probably this month) or the Fractal Torrent if you care about best thermals and quiet operation. Everything else is a compromise.
If you need HDMI 2.1, you'll need a DP -> HDMI adapter on an AMD GPU because of a licensing squabble.
Those are things I could think of off the top of my head. I don't think I missed anything big.
What's ROCm? Lol
ROCm is basically AMD’s GPU compute system, like CUDA but worse but better because the card is actually usable for desktop stuff.
However, they only support it on specific distros, and they’re really weird about what cards they support. This should be changing soon - Debian’s been working on packaging it natively, and I think so has Fedora.
Some build advice:
- Be safe - don't wear socks, stand on a hard floor if possible, ground yourself if you have a wrist strap for that, and discharge any static by touching metal and/or the case before touching any components. And no matter what, DO NOT open the power supply, and definitely don't touch anything in it!
- The huge motherboard connector probably requires more force than feels comfortable.
- Watch through at least one build guide before starting. That way you know the process.
Hope that helps, and don't let it scare you away - it's really fun to do and if you're careful, chances are nothing major will go wrong.
I built my current PC using one of those PCPartPicker guides, and I'm very happy with it.
The only issue I had was the video driver. I use the Linux Mint Long Term Support version, and the kernel didn't have a recent enough driver for my card. I just needed to switch to the latest kernel and it was good to go. I actually had no idea how to troubleshoot it, and went to the LM forum to ask for help. I was reading through the guide on what info to supply with help requests and realised that the example fault and solution were the exact ones I was facing!
I've noticed that when I am speccing out a new computer I typically fall into the trap of wanting the absolute best computer I can get for the money.
I've always been on the cheaper side, so I have found myself spending days or weeks researching various parts at various quality levels at various prices.
It becomes a huge drag.
Set the budget that you're comfortable with, find the motherboard that has the features that you want, then get a CPU that fits in that price range, a case that fits your use cases, and then, if you're going to splurge on anything, splurge on the power supply, as a good power supply can last you through multiple computers.
If you have to save money somewhere, save money on RAM, as you can always order more or upgrade the RAM that you have relatively inexpensively. Maybe, if you're going Intel, purchase an i5 CPU and then consider upgrading if you max out its abilities or find yourself frequently running at 100% utilization.
And don't overlook pre-builts. There are lots of refurbished computers that you can purchase for far less than the cost of the individual parts that have all of the minimum specs that you want in exchange for little things like only having a single stick of ram or having a low quality SSD.
There's nothing that stops you from upgrading later should your use case change.
I've used Logical Increments in the past and found it very useful to meet a budget. Now I aim for "price to performance" sweet spots (since GPU prices have been crazy I'm now well overdue for a new GPU).
Both CPU manufacturers are changing their naming schemes (to make it difficult to know what a chip actually is; I wish this were hyperbole). GPU manufacturers also make some weird choices when naming GPUs (same-name GPUs with different VRAM). Reading/watching reviews of specific parts will likely be the best way to know what you ought to buy.
If you're confident in your technical knowledge or want to then narrow down your choices then I would recommend watching videos from:
- (GPU, CPU, Case)
- Hardware Unboxed (GPU, CPU, Case, Monitor)
For a casual overview of CPUs/GPUs video review I'd recommend something like Linus Tech Tips (even with the prior controversy).
GPU: go with AMD; I don't think I need to give much explanation here.
CPU: you can do either, BUT AMD is usually better for multi-threaded applications (like video editing, modeling, or animation). Also, an AM5 socket should last you years to come; AMD stayed with AM4 for a long time (I had most of the same PC for almost a decade thanks to that; it's still the same AM4, but I had to replace the mobo since the old one broke). So I would also choose AMD here, although Intel is not bad either, and if you get it on sale it might come out cheaper.
My first question is about your laptop; is the SSD removable, because if so, even a pretty large SSD is cheap these days.
Also, the GPU question is complicated. For most use cases, AMD is better on Linux. However, since you’re doing Resolve and Blender, that gets a bit murky. It depends on if ROCm support is less dismal on later AMD cards - I have an RX 580, which AMD quickly dropped support for and I am bitter about.
This is not to say I like NVidia, but for fast video encoding and rendering, as far as I know, it’s the easier option. Someone correct me if I am wrong, please.
As for actually building the thing, you'd start by looking at what CPU you want, then find a compatible motherboard, then read the board's compatibility list for RAM. They usually have compatibility lists for storage too; those don't matter, as it's pretty universal. Then choose a graphics card, a case with the right form factor, a PSU, and a cooler. I tend to go with liquid cooling, as it's not that expensive anymore.
Like others have said, check kernel support for your hardware, but also, it’s generally much easier on desktop. The main things to look out for are ethernet and WiFi controllers. By the way, what distro do you prefer, because that’s definitely a factor.
you are getting advice that will make a good gaming pc but not a good workstation for what you said you're gonna do.
do the opposite of what most everyone in this thread is saying:
intel over amd (this could actually go either way depending on the price point), nvidia over amd, start at 32gb of ram and go up from there. prioritize cores over threads, sneak a rotational hard disk in, spend more on your power supply than you planned to.
plan on not using wayland.
I am not going to fight you on if x is better than Wayland.
The ops use case involves operations, software and hardware that function best with x.
The op should avoid Wayland.
The op asked for help to make their experience as painless as possible and listed two primary use cases that not only are often related to the problems people encounter with Wayland but function best with hardware that is also related to the problems people encounter with Wayland.
If someone said they need to haul hay I wouldn’t say “try it in your Saturn first and see if it works!” I’d say “make sure you have a truck or a trailer.”
The harm is in setting a person up for failure when they asked for help.
You mainly want to be able to do 3d and video editing right?
Those two, specifically with DaVinci Resolve and Blender, work best with NVENC and libcuda(?), the software libraries that let you take advantage of your Nvidia card's encoders and CUDA cores.
So if you were building for that workload, you’d have an nvidia card and many problems people encounter in Wayland come from using it with an nvidia card.
So yeah it’s the nvidia support. Most people will say “fuck nvidia, just don’t buy their hardware” but it’s the best choice for you and would be a huge help, so choosing between Wayland and nvidia is a no brainer.
It is a bummer that you’ll need to install x specially, but I’d be really surprised if there isn’t decent support for that.
There’s always the hope that Wayland will get better over time and you’ll be able to use it in a few years.
E: a word on encoding: both AMD and Intel CPUs have video encode and decode support, but Intel's QSV is more widely supported and tends to be faster most of the time. When people suggest Intel's Arc GPUs, they're saying it because those GPUs use QSV, and for a video editing workstation they'd be a good choice.
Part of the reason I put Intel and AMD CPUs on an even footing for you is that any cost savings you get from going AMD would likely be offset by the performance decrease. There are some good breakdowns of CPU encoder performance out there if you want to really dive in; just remember that you're also in a good place to buy Intel because of the crazy deals from the sky-is-falling people.
That kinda ties into the cores-over-threads thing too. If your computer's workload is a bunch of little stuff, then you can really make hay of a scheduler that is always switching stuff around. One of the things that makes AMD's 3D processors so good at that stuff is that they have a very big cache, so they're able to extend the benefit of multi-threading schedulers up to larger processes. You're looking at sending your computer a big ol' chunk of work though, so you're not usually gonna be multithreading with that powerful scheduler and instead just letting cores crunch away.
Part of the reason I didn’t suggest intels arc stuff is that you’re also doing 3d work and being able to take advantage of the very mature cuda toolchain is more important.
Plus nvidia encoding is also great and if you were to pair it with an intel cpu you could have the best of both worlds.
You’re really looking to build something different than most people and that’s why my advice was so against the grain. Hope you end up with a badass workstation.
For DaVinci Resolve, you will need an Nvidia GPU; even their AMD support is half-ar3ed, and Intel doesn't work at all (they don't support it under Linux, while they do on Windows). So you need to decide if you're going to use Resolve, or Kdenlive (which works with everything, since it's not really accelerated; it's slower, and their acceleration is buggy). However, if you're going with Nvidia, you will probably experience problems on the everyday desktop. So I'd suggest an AMD GPU, and possibly CPU.
Alternatively, just get a refurbished Dell laptop, or an older Zenbook. These usually work great with Linux.
Blender and DaVinci Resolve work better on Nvidia. AMD might work, but it will be a hassle and you'll likely need the proprietary AMD drivers anyway.
With Nvidia supporting Wayland and the open-source NVK continuing to get better, you could even switch to open source drivers for gaming at some point, if you prefer.
Edit: I've had enough issues with AMD GPUs clocking down while gaming, leading to micro stuttering. So don't buy AMD just because everyone tells you they work flawlessly.
For CPU and mainboard, everything works well — just don't buy a random unknown SSD from Amazon, then you're asking for data loss and random issues.
Not sure if this would help, but I found this channel helpful for understanding the basics and mostly avoiding the wrong parts. He also has some videos where he explains why you should choose one part over another.
Appeals court ruling on a murder in Huddinge. The Svea Court of Appeal has today delivered its judgment in a case concerning a murder in Huddinge. The murder was part of a violent conflict between two different criminal groups with roots in Södertälje.
China Is Rapidly Becoming a Leading Innovator in Advanced Industries
China Is Rapidly Becoming a Leading Innovator in Advanced Industries
There may be no more important question for the West’s competitive position in advanced industries than whether China is becoming a rival innovator.Robert D. Atkinson (Information Technology and Innovation Foundation | ITIF)
Foreigners from outside assaulted the wholly innocent local and German population? Please don't let the AFD* hear about that!!1 😱
(The AfD is a far-right/right-wing populist German political party which tries to blame foreigners for all the violence happening there, and which proclaims that all "foreigners" are "Messermänner", i.e. men who violently attack with their knives anybody who doesn't agree with them in all their views.)
PS Sorry, could not resist. But that indeed is quite an interesting article. Thanks!
- Cool story. I liked it, and the visual of the skullbone with an arrowhead in it was welcome, as well as sufficiently out of context not to feel gruesome.
- I think the headline of "Europe's Oldest Battlefield" is more likely to be accurate than the article's "world’s oldest battlefield," but there may be some nuance of meaning (oldest with war dead actually found in situ?) I'm missing. Neat thing to learn about either way.
- The iamverysmart contingent that refuses to read the entire articles is out in full force in the Gizmodo comments, with several people suggesting that the foreign arrow heads were from trade ("The foreign arrowheads have not been found in tombs in the Tollense area, indicating that the arrowheads from elsewhere didn’t simply make their way to the region through trade."), and several others musing on what the metal arrowheads might have been made of ("The arrowheads were flint and bronze.").
As the team noted in their paper, no helmets and breastplates typical of the time have shown up from archaeological excavations of the site, so more digs may be necessary to reveal more about the ancient combatants at Tollense, the remains of many of whom remain on the site.
Probably picked clean after the battle. I would think scavengers knew that this was a location that yielded scrap metal following the battle.
QMK (and Kanata)
I'm posting here because I have nowhere else to post. If you squint, this meets the community rules because my current keyboard is a Piantor/42, and my issue stems from a combination of 40% and QMK behavior. Although, to be honest, this is mostly about QMK, but using Discord is painful, and I'll go there only as a last resort.
For a long while I used Kanata on my laptop and an ErgoDox on my desktop, having replaced kmonad because of one particular feature: tap-hold key sequence behavior. It's best described here, but the tl;dr is that (press lsft) (press a) (release lsft) (release a), where a is a tap-hold key, should output "A" and not "a" -- kmonad outputs "a".
A few months ago, when I got my Piantor, I discovered that this sequence outputs no character, and although there's an option that makes it output "a", I can't find a combination that makes it output "A". I'm asking whether, in the bewildering set of QMK variables, there is a way to configure QMK s.t. the sequence (press lsft) (press a) (release lsft) (release a) outputs "A".
That's the main thrust of my question. As a sort of addendum, I think this behavior is behind another of my QMK irritations: I'm a reasonably fast typist, and often will be typing the next key before I've completely released the previous key. This means I have to set a large-ish timeout before tap-hold engages, which introduces an annoying delay whenever I want to chord a layer and get at, e.g., numbers. I do understand that this may be an unsolvable issue, that it's just an unavoidable limitation of small keyboards having so many common keys on layers (numbers, punctuation, and arrows are the worst -- when coding, nearly half the text is characters from layers). Either I have a long timeout and live with an annoying delay when I want to type (many) punctuation characters or numbers, or I have a short timeout and frequently shift layers by accident. However, I feel as if this might be mitigated somewhat with the Kanata-style key sequence handling, because even though my Kanata configuration is nearly an exact mirror of my QMK layer configuration, I never have this problem with Kanata.
I suppose I could give up on using QMK for anything except the most fundamental mapping, and use Kanata instead. However, there's an appeal to the portability of having the programming in the keyboard itself; it makes me a little less dependent on the computer to which the keyboard is attached.
It looks like the feature is called Auto Shift in QMK/VIAL (I had to open up VIAL to remember the term).
I'm not sure I have a good answer for quick typing, I was in the process of implementing urob's timeless homerow mods for ZMK when I fried my ZMK board. I know there are QMK implementations (maybe in userspace?), but my quick googling didn't come up with what I was thinking.
For QMK, my go-to hrm-embetterment came from Achordion. Not sure if that helps OP, but it made hrm amazing for me, and it does the sort of timing tweaks that might help here.
Otherwise, the HOLD_ON_OTHER_KEY_PRESS mode might help.
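If it helps to see what those knobs look like in an actual config, here's a minimal sketch. The option names come from the QMK tap-hold docs; the keycode and the timing values are just placeholders for illustration, not a tested fix for the shift-rollover case:

```c
/* config.h -- pick the tap-hold decision mode and base timing */
#define TAPPING_TERM 175            /* base tap-vs-hold window in ms */
#define TAPPING_TERM_PER_KEY        /* enables get_tapping_term() below */
#define PERMISSIVE_HOLD             /* a full press+release of another key inside the term counts as a hold */
/* #define HOLD_ON_OTHER_KEY_PRESS     stricter alternative: any other key press settles it as a hold */

/* keymap.c -- optional per-key tuning */
#include QMK_KEYBOARD_H

uint16_t get_tapping_term(uint16_t keycode, keyrecord_t *record) {
    switch (keycode) {
        case LSFT_T(KC_A):              /* hypothetical home-row shift; substitute your own tap-hold key */
            return TAPPING_TERM + 75;   /* give the mod-tap a bit more slack than the layer-taps */
        default:
            return TAPPING_TERM;
    }
}
```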
Thanks for the suggestions.
Auto Shift is a neat idea; I suspect adding yet another delay factor would be counter-productive for me, though. The whole shift-down/key-down/shift-up/key-up issue is more about typing faster than my brain actually works, such that I'm not always consistent in which order I release keys. Adding a lag to get caps would be, I think, infuriating.
However, the timeless homerow mod looks fantastic. Need to think a little more about their solution, but it's encouraging in that (a) they sound like they have exactly my problem, and (b) they found a solution for it. It's a great resource, thank you!
I... had not. The permissive hold option looks like it might address my second issue, that of the fact that the fixed timing is incongruent with my far less consistent human timing.
Someone else mentioned the timeless homerow mods, wherein positional hold-tap is used to address rolling issues, which is what I think I'm seeing with the shift-down/char-down/shift-up/char-up sequence not outputting the expected shifted character.
Thanks for pointing me back at the documentation. TBH, I'm a bit overwhelmed by the number of configurable options in QMK, and suffer information overload: pointers like that help me figure out where to start fiddling. Working with QMK, even through the incredibly convenient Vial application, makes me feel like sitting in front of a Moog synthesizer.
Open Source AI Definition – Weekly update September 23
The more maintainers are paid, the more improvements they make to their projects
In the second finding of the 2024 Tidelift state of the open source maintainer survey, we found that the more maintainers are paid, the more improvements they make to their projects.
...
In the previous finding, we reported that 60% of maintainers describe themselves as unpaid hobbyists, and 36% of maintainers describe themselves as paid (professional or semi-professional) maintainers, earning some or all of their income from their open source work.
...
When you break down the paid maintainers into professional (earning most or all of their income from their maintenance work) and semi-professional (earning some of their income from maintaining projects), it becomes clear that the amount of money a maintainer is making for their work has a large impact on the types of improvements they are able to make. Across nearly all major categories, professional maintainers are on average over 20 percentage points more likely to make key improvements to their projects than semi-professional maintainers.
...
In the previous study, 81% of professional maintainers earning most or all of their income from maintaining projects spent more than 20 hours a week maintaining their projects. This year, the percentage was nearly identical (82%). Conversely, in last year's survey, we found that the vast majority of unpaid hobbyists spend ten hours or less per week on their maintenance work (81%). This percentage also stayed consistent in this year's survey, with 78% of unpaid hobbyist maintainers working ten hours or less per week.
...
We've heard from many maintainers that how they are paid for their work also matters. For many maintainers there is a huge difference between getting a one-time "airdrop" of money, perhaps right after a high-profile incident where people are paying attention to their projects, compared to ongoing recurring income that they can count on. So this year, for the first time, we asked maintainers to tell us whether they would prefer to get predictable monthly income or a one-time lump payment. An overwhelming majority of maintainers prefer to receive predictable monthly income, with 81% choosing that option.
My first instinct is to say "No shit Sherlock", of course people who get paid more for their projects can afford to contribute more time to them...
but I do understand that having empirical, documented evidence of something, even if it should be common sense, is really important, because common sense isn't as common as people think it is (especially when a lot of people in power seem to quite intentionally lack it)
Official Plasma 6 Breeze UI Refresh Mockups
cross-posted from: lemmy.world/post/20092494
GujjuGang7 on Reddit found this, saying:
Found this link while looking through the upcoming theming engine (Union) repository. It has mockups for several core apps (dolphin, Kate, konsole and more) and general components such as modals and titlebars.

KDE contributor Manueljlin would like to remind you:
hey folks, it's really early still. we didn't even properly show it at Akademy. there's no design system to properly back it up yet - only some tokens and components that are definitely subject to change. please keep that in mind
It’s so sad to hear that KDE died of figma.
(For real though I’m looking forward to seeing how this turns out, love Plasma a lot so hoping for the best from it)
How expensive would it be to make similar spacecraft now?
Assuming it's relatively cheap, what could we learn from sending out thousands today?
The Voyager probes only got as far as they did because their trajectory got some massive (and rare) slingshots; it will take ages for the New Horizons probe to get anywhere near as far.
We could probably spam missions to some other planets, but who will pay for it? We are not at the stage where an 'out of the box' mission can do that, I think.
- Billions.
- Little to nothing, because they wouldn't make it as far as fast as the Voyager probes, which got a massive gravitational assist from a rare alignment that only happens every 176 years. All the other planets needed to be aligned appropriately for this journey at this speed. New Horizons may leave the solar system in '43 if we don't lose contact, and they already want to shut the program down. NH is about 10,000 km/h slower than Voyager 1.
Best to use targeted probes to explore things we haven't before. Ask different questions and if they leave the solar system, good on them. But I'd prefer orbital data satellites around all the ocean moons in the outer solar system.
Maybe I could have been more explicit. Without the planetary alignment that made the Voyager probes possible, an equivalent mission would be ridiculously expensive/impossible due to the fuel requirements (and wouldn't be able to visit all of the planets).
If Starship/New Glenn/the Rocket Lab one work, it might become more feasible.
Instead, sending smaller, simpler probes that just visit one planet/moon would be much more cost effective, but still expensive.
We have already picked a lot of the low-hanging planetary science fruit with existing missions. New missions would need new/novel sensors or need landers/aircraft, which make them much more expensive.
Even just a 'standard' interplanetary mission isn't an out-of-the-box job like current Earth satellites are becoming.
A drop in the bucket isn't even close to a good understanding of how big space is. A satellite in the ocean is grossly misleading when it comes to the scope of space.
Maybe a single O2 molecule in the ocean might be closer but even then that’s not even close to the scope of space.
Space is big. So big that the light cone of our “pollution” can’t physically interact with most of it even if we did our best to “pollute” as much as we can and some alien species did their best to find that “pollution”. Space is so big that physics dictate the impossibility of our “pollution” interacting with most of space.
Fun fact this is why the chance of aliens visiting us here on earth is basically 0.
You can’t use earth scale thinking, that’s how big space is.
This all being said we should do our best to not pollute the earth. We should use earth scale thinking when it comes to earth.
There's about 4.6*10^46 molecules in the ocean. There are about 8.5*10^47 cubic meters in a cubic light-year
Surprisingly close orders of magnitude
For reference, the closest next star system is 4.25 light years away. The diameter of our Galaxy is about 105 700 light years, with a thickness of about 1000 light years (much less than the diameter, since our Galaxy lies on a plane)
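As a rough sanity check on those two figures (taking the ocean's mass as about 1.4×10^21 kg and a light year as about 9.46×10^15 m):

```latex
N_{\text{ocean}} \approx \frac{1.4\times10^{24}\,\mathrm{g}}{18\,\mathrm{g/mol}} \times 6.02\times10^{23}\,\mathrm{mol^{-1}} \approx 4.7\times10^{46}\ \text{molecules}
\qquad
V_{1\,\mathrm{ly}^3} \approx \left(9.46\times10^{15}\,\mathrm{m}\right)^{3} \approx 8.5\times10^{47}\ \mathrm{m^{3}}
```

So the two counts really are within an order of magnitude of each other.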
Huh. That’s crazy. And that’s just one cubic light year.
Now if we multiply that cubic light year to match the volume of space we have a similar comparison. Infinite oceans to sift through for a single molecule.
Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.
Douglas Adams
I've always enjoyed this video to give a perspective on size
I usually increase playback speed, it's a bit slow at start.
Fun fact this is why the chance of aliens visiting us here on earth is basically 0.

You can't use earth scale thinking, that's how big space is.
But that is earth scale thinking. You know, in a "things heavier than air can't fly" way.
That's what I meant. Even our civilization, with our limited understanding of physics, can think of theoretical workarounds.
Dunno if aliens are on earth. But that argument against it is only guesstimating.
Fair. But that’s not really earth scale thinking in my book. It’s more our best understanding based on what we know.
I know of these theoretical work arounds. They’re more mathematical models that say if such a thing as negative mass exists, then we might be able to go faster than the speed of light. Issue is that the model does nothing to show that negative mass exists.
That and everything we know shows that it does not exist. If it did I would be incredibly happy. It’s just wishful hoping at this point though. We don’t even have a model or theory that shows how negative mass could exist. We only have theories that show what could happen if it did exist.
It's like saying: we know how F = m*a works, so what would happen if we set m to a negative number? Yeah, in the math we can, but that does not mean we can in reality.
The only thing we can realistically pollute is our immediate orbit
Everything beyond would be impossible for us to pollute effectively even if we tried. You might not know this, but space is very very very big LMAO
So, comparing to the New Horizons mission:
- The New Horizons mission cost is estimated at around $780 million over 2001-2017
- Voyager's cost is estimated at $850 million in 1977, which is ~$2.8 billion in 2006 dollars
CZ: forest care, natural history
Drought and bark beetles threaten them, they are vanishing before our eyes, they are in crisis: that is what people say about forests. But perhaps what is really in crisis is our view of forests in the form we ourselves have created. We cover the topic in the current issue of the magazine A / Věda a výzkum. Mostly bare, plundered soil. Only a few trees remain in hard-to...
drwankingstein (in reply to captainkangaroo): Yay, another set of protocols that will just lead to more and more fragmentation.
You do acknowledge one issue with Wayland, probably the biggest issue with Wayland, but then fail to acknowledge the second-biggest issue with Wayland, which is fragmentation.
Solve one issue by making another issue worse.
merthyr1831 (in reply to drwankingstein): Wayland's approach has always been to make 3rd-party protocols easier to opt in and out of. Sway and Hyprland both used custom protocols whilst official solutions were being designed, iirc. Nothing is stopping anyone from switching from one protocol to another if they implement the same thing down the line.
At least this way, compositors may be able to use something like frog as a shared "experimental branch" which can be enabled for users who need them, but otherwise disabled whilst Wayland core isn't pressured to work faster.
It's up to Wayland to make these projects obsolete if it causes them or users a problem.
drwankingstein (in reply to merthyr1831): That's just the thing: this is, again, more fragmentation. Some compositors support always-on-top, some don't; you choose x protocol for your app, and now your app works great on Sway but not on KDE or GNOME, or it works great on GNOME and not KDE or Sway, etc. As an app developer the situation is a bloody joke. My current stance is "just use XWayland because Wayland will never be suitable", and thankfully with COSMIC and KDE both supporting "don't scale XWayland" this seems to work well.
EDIT: they also make enough deviations from the upstream protocols that this can't really be considered an "experimental branch"
EX: github.com/misyltoad/frog-prot… vs gitlab.freedesktop.org/wayland…
merthyr1831 (in reply to captainkangaroo): I like the approach here, but the requirements are a little vague and prone to bikeshedding. Stuff like "could this be used by multiple clients" might mean a protocol is held in limbo whilst it's given extra scope, for example.
It'll need some strong moderation which might rub people the wrong way, but if this keeps Wayland's cutting edge moving whilst the official solutions are found, I'm all for it.