Linux Patches Would Allow RISC-V To Use A 64K Page Size
Patches from a Bytedance engineer for the Linux kernel allow for overcoming the current 4K page size limitation of RISC-V and introduce a new 64K page size option. (www.phoronix.com)
DND confirms malfunction of new anti-tank missiles heading to Latvia. There were problems with five out of eight of the new Canadian Forces anti-tank missiles.
There were problems with five out of eight of the new Canadian Forces Spike missiles. — David Pugliese (Ottawa Citizen)
Syrian rebel commander: Israel, opposition 'fighting a common enemy'
One of the commanders of rebel forces in Aleppo, known as Abu Abdo, gave a special interview with i24NEWS’ Matthias Inbar on Wednesday, revealing that, despite differences, “we are fighting against a common enemy.”
“We look at Israel and the US, with the arrival of President Donald Trump, and we have a lot of respect and sympathy for them, for their actions against Iran – the country that leads terrorism in the region and all over the world.”
He said that his faction “looks forward to cooperate and eliminate this enemy and restoring stability.”
Man, this is a really bad precedent to be setting. First pardoning family members, then handing out pardons for "future crimes" to political allies. Imagine if Trump considered doing this shit, how poorly that would be received.
Seems like a quick way to make a farce of the pardon system.
“There is no terrorism, there is [only] France,” says president of the West African Peoples’ Organization
Philippe Noudjènoumè blames France for the terrorist presence in the Sahel and criticizes the Benin government's collusion with Macron. — Pedro Stropasolas (Peoples Dispatch)
7.0 magnitude earthquake reported off Northern California coast, tsunami warning canceled
The epicenter is near Petrolia, in Humboldt County, the USGS said. — Meredith Deliso (ABC News)
Live California earthquake updates: 7.0 quake felt in Bay Area, Tsunami Warning canceled
A magnitude 7.0 earthquake that hit Humboldt County prompted a Tsunami Warning and was felt in the Bay Area. — ABC OTV Website
Taylor Lorenz Says 'We Want These Executives Dead' Hours After Health Insurance CEO Murdered
ZeroHedge - On a long enough timeline, the survival rate for everyone drops to zero. — Tyler Durden (www.zerohedge.com)
Okay?
I don't see a problem here. Right wing personalities encourage harming and killing LGBTQ+ people and liberals, far-right people bomb abortion clinics and kill doctors, but now it's suddenly bad to say that oh, yeah, some people deserve a good killin'?
Seems like crocodile tears to me.
The funny thing about this response is that I bet you'll keep criticizing the right for this kind of rhetoric even after greenlighting it from the left, since the right does it too.
(Side note: it's fair to acknowledge that both Taylor Lorenz and abortion clinic bombers are extremists in their own respective political wings)
I haven't seen nearly as many right-wing people praising/encouraging obvious murder as I have here and on Bluesky. If something horrible happened to Nancy Pelosi, you sure wouldn't see me posing with a model of her decapitated head - unlike a certain comedian.
Even granting what you say, is putting out veiled death threats under one's real name really the kind of thing leftist figures should be doing in the wake of an election where Trump was nearly killed twice and the US decided that the left is too extreme for them?
the US decided that the left is too extreme for them?
Er. No. That's not what happened. When you look at vote totals from 2020, you can see that Biden got 81M votes to Trump's 74M. Looking at vote totals from 2024, Harris got 74M votes--the same as Trump in 2020--and Trump eked out a slightly improved performance at 77M. This isn't a 'mandate'; this is fewer people showing up to the polls. If the same number of people had voted in 2024 as voted in 2020, it's probable that Trump would have lost again. That isn't people saying the left is "too extreme"; that's apathy.
Joe Kernen of CNBC scolds network for negative coverage of Trump
(www.youtube.com)
What happened to actor-relative URLs proposal?
I saw this FEP months ago and thought it was pretty promising. However, it seems that there is no update from then. Does anyone know about this?
link: codeberg.org/fediverse/fep/src…
For those who don't know, this proposal brings portable identity across the fediverse.
I skimmed the first few pages. And it seems it's just concerned with the content? You can store your notes (posts, file uploads, ...) on arbitrary instances and move them around. But you still need a fixed instance that hosts your actor identity (your account) which then tells where to go to fetch a post. And that one can't change. So your account and username would still be tied to a fixed domain handle. And you can't move it. And even for the content, it seems like you'd need that fixed instance to do the 302 forward, so it needs to be contacted to resolve each location.
Edit: But you might be right. I don't grasp the full concept. Maybe it enables us to configure a webserver on our own domain to forward a user handle to some external server. Meaning we don't have to install a server ourselves. And the servers would then be interchangeable (if this translates to fetching everything). You'd still be tied to your domain name. But not to a service anymore. That'd be great.
Joe Biden reads Rashid Khalidi, and so should American Jews
from TheForward
[Jewish publication from USA]
Rob Eshman - Senior Columnist
December 3, 2024
[this might be an article to be shared with someone who still has illusions about Israel. Also has links to some excellent interviews with Khalidi.]
Joe Biden is reading Rashid Khalidi's book "The 100 Years' War Against Palestine." American Jews should do the same. (The Forward)
What can the android app see when running in Waydroid?
cross-posted from: leminal.space/post/12999238
Hi, when running an Android app in Waydroid, what data can it see? Can it read my local hard drive? Can it scan my network? Can I manage it so it just lives in its container and gets nothing but an internet connection?
If you want Waydroid to see files on the host, you need to muck around with bind-mounting a directory, or just use adb to move files manually.
I think waydroid can't see anything beyond itself normally. I had a hell of a time trying to get files on there, so if there's an easy way to get Waydroid to see files on the host, I couldn't find it.
COSMIC Alpha 4 Released For System76's Rust-Based Desktop
System76 today released the newest development/testing version of their Rust-based desktop environment designed for their Pop!_OS Linux distribution. (www.phoronix.com)
At their self-imposed rate of one new alpha on the last Thursday of every month, and assuming only two betas, and assuming they can get Alpha 5 done in December, we're looking at the end of April for release, though I'd realistically expect Epoch 1 at the end of June or July, or maybe even later.
This is normal in Software Development.
I know. I'm a CS student, but they are still in a pretty early stage of their project and don't have anywhere near the technical debt or size of projects like Plasma or GNOME, and as such, I think they should still be able to keep on going at a pretty fast pace.
And we got Alpha 4 a week late, compared to the old release schedule of the last Thursday of every month
I bet it was most likely due to the US holiday; 28/11 and 29/11 were Thanksgiving and Black Friday in the US.
Generating the image on the Windows side?
Forgot to include the boot/system volume. It's a lovely time waster when you're dealing with disk images that are hundreds of gigabytes in size that have to be copied over the network. 😆
I'll add Disk2vhd screenshots when I get a sec.
The situation gets slightly more complicated if you had multiple drives in your system when you installed Windows, of course. The installer might put the system volume on a different drive, so you'd have to image more than one drive to get a working system. It might get a little confusing as to which volumes should go in which image. There's a PowerShell cmdlet, gwmi (Get-WmiObject), that might help with that, since afaik the volume GUIDs don't show up in the Windows Disk Management snap-in.
Edit: The promised screenshot. In my case, I knew the volume labelled SYSTEM resided on the same disk as my C: drive. Probably don't have to include the recovery partition, strictly speaking, but I did.
pastermil
in reply to unknowing8343 • • •
in reply to unknowing8343 • • •Don't worry, it's quite esoteric to begin with. The only reason I can comprehend this is the years-long following news like this, on top of my computer science degree.
Also, this wouldn't matter (yet) to your daily life.
Croquette
in reply to unknowing8343 • • •
Unless you are at the edge of the firmware and software, this isn't something you work with a lot.
When you transfer files or data into a memory space, you can't drop the whole file into memory at once, because resources on the CPU/MCU are limited. It wouldn't make sense to have a page as big as your biggest theoretical data size.
Page size determines how much data can be transferred into memory at a time.
In terms of performance, writing pages to memory is usually the bottleneck, so with 4K pages you need roughly 16 times as many writes as with 64K pages, which is why the larger page size performs better.
Markaos
in reply to Croquette • • •
That's more of a storage thing; RAM does much smaller transfers. For example, DDR5 memory has two independent 32-bit (4-byte) channels with a minimum burst of 16 transfers in a single "operation", so it moves 64 bytes at once (or more). And CPUs don't waste memory bandwidth by transferring more than absolutely necessary, as memory is often the bottleneck even without writing full pages.
The page size is relevant for memory protection (where the CPU will stop the program execution and give control back to the operating system if said program tries to do something it's not allowed to do with the memory) and virtual memory (which is part of the same thing, but they are two theoretically independent concepts). The operating system needs to make a table describing what memory the program has what kind of access to, and with bigger pages the table can be much smaller (at the cost of wasting space if the program needs only a little bit of memory of a given kind).
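The table-size tradeoff described above is easy to put numbers on. A rough sketch in Python (assuming, purely for illustration, a flat single-level table; real page tables are multi-level, but the entry-count ratio between page sizes is the same):

```python
def pte_count(mapping_bytes: int, page_size: int) -> int:
    """Number of page-table entries needed to map a region,
    rounding up to whole pages (ceiling division)."""
    return -(-mapping_bytes // page_size)

# Mapping 1 GiB of memory:
gib = 1 << 30
entries_4k = pte_count(gib, 4 << 10)    # 262144 entries with 4K pages
entries_64k = pte_count(gib, 64 << 10)  # 16384 entries with 64K pages, 16x fewer
```

Sixteen times fewer entries also means far fewer TLB misses for the same working set, which is where much of the real-world speedup comes from.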
chaos
in reply to unknowing8343 • • •
Back in the olden days, if you wrote a program, you were punching machine codes into a punch card and they were being fed into the computer and sent directly to the CPU. The machine was effectively yours while your program ran, then you (or more likely, someone who worked for your company or university) noted your final results, things would be reset, and the next stack of cards would go in.
Once computers got fast enough, though, it was possible to have a program replace the computer operator, an "operating system", and it could even interleave execution of programs to basically run more than one at the same time. However, now the programs had to share resources, they couldn't just have the whole computer to themselves. The OS helped manage that, a program now had to ask for memory and the OS would track what was free and what was in use, as well as interleaving programs to take turns running on the CPU. But if a program messed up and wrote to memory that didn't belong to it, it could screw up someone else's execution and bring the whole thing crashing down. And in some systems, programs were given a turn to run and then were supposed to return control to the OS after a bit, but it was basically an honor system, and the problem with that is likely clear.
Hardware and OS software added features to enforce more order. OSes got more power, and help from the hardware to wield it. Now instead of asking politely to give back control, the hardware would enforce limits, forcing control back to the OS periodically. And when it came to memory, the OS no longer handed out addresses matching the RAM for the program to use directly, instead it could hand out virtual addresses, with the OS tracking every relationship between the virtual address and the real location of the data, and the hardware providing Memory Management Units that can do things like store tables and do the translation from virtual to physical on its own, and return control to the OS if it doesn't know.
This allows things like swapping, where a part of memory that isn't being used can be taken out of RAM and written to disk instead. If the program tries to read an address that was swapped out, the hardware catches that it's a virtual address that it doesn't have a mapping for, wrenches control from the program, and instead runs the code that the OS registered for handling memory. The OS can see that this address has been swapped out, swap it back in to real RAM, tell the hardware where it now is, and then control returns to the program. The program's none the wiser that its data wasn't there a moment ago, and it all works. If a program messes up and tries to write to an address it doesn't have, it doesn't go through because there's no mapping to a physical address, and the OS can instead tell the program "you have done very bad and unless you were prepared for this, you should probably end yourself" without any harm to others.
Memory is handed out to programs in chunks called "pages", and the hardware has support for certain page size(s). How big they should be is a matter of tradeoffs; since pages are indivisible, pages that are too big will result in a lot of wasted space (if a program needs 1025 bytes on a 1024-byte page size system, it'll need 2 pages even though that second page is going to be almost entirely empty), but lots of small pages mean the translation tables have to be bigger to track where everything is, resulting in more overhead.
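The 1025-byte example works out like this; the same ceiling-division arithmetic shows both sides of the tradeoff (a toy illustration, not how any real allocator is implemented):

```python
def pages_needed(request_bytes: int, page_size: int) -> int:
    """Pages required for a request; pages are indivisible."""
    return -(-request_bytes // page_size)

def wasted_bytes(request_bytes: int, page_size: int) -> int:
    """Internal fragmentation: the allocated-but-unused tail of the last page."""
    return pages_needed(request_bytes, page_size) * page_size - request_bytes

# 1025 bytes on 1024-byte pages: 2 pages, 1023 bytes wasted
print(pages_needed(1025, 1024), wasted_bytes(1025, 1024))          # 2 1023
# The same request on 64 KiB pages: 1 page, but 64511 bytes wasted
print(pages_needed(1025, 64 * 1024), wasted_bytes(1025, 64 * 1024))  # 1 64511
```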
This is starting to reach the edges of my knowledge, but I believe what this is describing is that RISC-V chips and ARM chips have the ability for the OS to say to the hardware "let's use bigger pages than normal, up to 64k", and the Linux kernel is getting enhancements to actually use this functionality, which can come with performance improvements. The MMU can store fewer entries and rely on the OS less, doing more work directly, for example.
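If you're curious what page size your own kernel uses, you can ask at runtime. A quick check from Python (equivalent to running `getconf PAGE_SIZE`; most x86_64 kernels report 4096, while some ARM64 and, with these patches, RISC-V kernels can be built with 16K or 64K pages):

```python
import os

# Query the page size the running kernel was built with
page = os.sysconf("SC_PAGE_SIZE")
print(f"kernel page size: {page} bytes ({page // 1024} KiB)")
```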
MTK
in reply to petsoi • • •
My stupid brain reading that Linux can now use AK47
Me: the fuck does that mean?