I had hoped – and really wanted – to be at Wuthering Bytes / Open Source Hardware Camp this weekend, but didn’t get that sorted out because of Reasons.
So, I spent today doing various other things instead.
- I’ve been invited to a couple of events recently that are organised using Gathio. Gathio supports ActivityPub federation (there’s an ActivityStreams type for an Event, so this is all another piece of how the fediverse can be used to follow and participate in a broad range of activities). I noticed that there’s a new documentation site for Gathio, but certain links and messages still point to the project’s now-discontinued wiki, so I sent a pull request to fix those.
- I’ve been playing a bit with Picotron, a fantasy virtual workstation from the folks that also make the PICO-8 virtual retro gaming system. I’ll probably write a bit more deeply on that sometime soon (I hope / if I get around to it).
- Our elderly neighbour asked me to go in to help with some issues with her email. It turned out that Microsoft has completely turned off the old DeltaSync service that her mail client had been set up with (the clue was a 404 error which was part of the otherwise incomprehensible message from her computer), so I reconfigured things to avoid going near Outlook / Live Mail – there was absolutely no way she would have been able to work any of that out. Really, I would like to refresh her machine with a nice clean and fast Linux distribution, but she’s older and very used to how things currently work, so I tried to keep the changes to a minimum.
- I learned that CloudHiker is a thing. Many rabbit holes to investigate.
- I tried out a few different Wayland compositors on the Pocket Reform. wlmaker is interesting – old-style Window Maker, but on Wayland (although I don’t think it supports rotation yet, which is a deal-breaker for now). I also tried Hyprland, but couldn’t get it to display anything – the debug log checked out, but the display stayed black. I’m not too bothered, as I’m aware of certain issues with that project. The author of cpufetch (https://github.com/Dr-Noob/cpufetch) has a patch ready that adds support for the CPU module the Pocket Reform uses, so I tested that today as well. I spotted a couple of config improvements that we can make in the default Pocket Reform image, and I’ll aim to submit merge requests for those. I posted some wallpapers to the community forum.
- I quickly made a Fediwall to aggregate conversations about #OSHCamp, since I was missing out!
- The great folks at WeDistribute have made an Events hub for Fediverse (and adjacent) events, which is excellent. I spotted there’s an ATProto talk in London on Tuesday, and I intend to get along to that. They also posted an interesting piece on Flohmarkt, a marketplace instance with Fediverse compatibility. So much interesting development lately.
- Caught up to date on The Repair Shop.
Tomorrow: a Makeroni meetup in Wimbledon.
(Bank Holiday) Monday: Hampton Court Food Festival.
#Blaugust2024 #100DaysToOffload #activitypub #activitystreams #atproto #debian #fediverse #fediwall #gathio #Linux #London #OSHCamp #pico8 #picotron #pocketReform #retro #techSupport #Technology #wayland
A couple of weeks ago I received something I’ve been eagerly expecting from a Crowd Supply crowdfunding campaign. MNT have been making devices that aim to be “open source, accessible and modular” for a number of years. I didn’t get their original Reform laptop, but I’ve seen those around at events like FOSDEM and I’ve been following the team’s progress with interest. When the Pocket Reform was announced I was immediately intrigued – a smaller form factor: a 7-inch, full Linux system with open hardware that is easily portable.
makertube.net/videos/embed/744…
I went for the Hyper edition, which came with a beautiful Piñatex sleeve, SSD, and printed manual. Purple, of course, because I want to be on brand 🤓 and also because I love it!
I posted a very brief unboxing video. I’ve now had a couple of weeks to occasionally tinker, and I’ve been putting off writing about it all in part because there’s a lot of things to dig into!
One of the things I like, surprisingly, is the chunkiness of the machine. It is really well constructed, solid, and feels great. Fits in a small cross-body bag or satchel. It’s less than half the size of a 14-inch MacBook Pro – you can more than fit two of them side by side on top of the Mac – but it is about 3 times thicker, and that’s OK, because it is in service of making the innards very accessible and user serviceable. It comes with a complete manual and schematic, which is something I’ve not had in a computer since the 8-bit machines of my youth! The top half contains the mainboard and display (all the ports are in the top half), and the bottom contains the battery cells and mechanical keyboard. The upper panel has a copper layer and acts as a large heatsink for the processor module.
The keyboard is ortholinear, which means it is a direct grid layout rather than offset row-by-row. It’s the first time I’ve used this format, which – along with the smaller keycaps – has made it a little challenging to learn, but I’m doing pretty well now. The trackball is nice and responsive. The backlit keys are easy to adjust.
The screen is excellent – bright, and sharp. Actually, I think the screen is probably the aspect that has impressed me most so far. In terms of ports and connectivity – I’ve yet to hook up to an additional screen via the micro-HDMI connector, but I’ve used the USB-C connections (one of which is used to charge / power the machine), and the micro SD slot. The industrial iX connector for ethernet is likely a good choice for the target niche, and certainly does give more space on the motherboard than an RJ45 socket would… but I’ve yet to put my hands on a passive adapter to enable me to plug in to a wired connection, so it’s currently not as useful to me.
There’s a lot more to talk about, primarily (but not exclusively) on the software side, and also around hardware enablement. Out of the box the Pocket Reform runs Debian unstable from MMC, with some customisations to provide a nice getting-started wizard. It is important to point out that this is a machine for hackers and tinkerers – although it works very nicely, it’s not all fully baked in places. For example, the firmware for the system controller (a Raspberry Pi RP2040 chip) is being tweaked and tested, and I’ve already tested one update to that. The other day I posted about some issues with an NVMe SSD – that, unfortunately, was on this machine. I actually think there was a physical hardware issue with the drive, as I’ve now replaced it with a higher-performance NVMe and things are moving along nicely (while the problematic SSD continues to report errors when accessed in an adapter over USB). I also managed to temporarily brick the machine by corrupting the U-Boot in flash, and needed to rig it up with Dupont wires on headers and access it from another computer via USB to get back to where I wanted to be. Very Open Source! 😎 But, I’m comfortable with this, and knew what I was purchasing. The forums and IRC have proven to be useful so far, and I’m enjoying learning as well as hopefully (!?) helping the MNT team through my feedback and bug reports. I have a huge amount of respect for what they have built, their ethos, and their commitment to making this as open hardware as they can.
I should be receiving a modem / WWAN card for the second internal expansion slot shortly. I’ve been both learning the Sway desktop environment and also working out how best to organise my setup, so there will be more to cover in future. I particularly want to play more with the onboard I2C, and other hardware opportunities, as well – for example, potentially swapping in a Raspberry Pi CM4 if that becomes a modular option in the future.
andypiper.co.uk/2024/08/06/mnt…
#Blaugust2024 #100DaysToOffload #debian #firstImpressions #hacker #Linux #maker #mnt #openHardware #openSource #pocketReform #purple #review #rp2040
John Lewis memorial unveiled on the Decatur Square
From Decaturish:
Decatur, GA — Hundreds of spectators, including various local, state and federal officials, celebrated the unveiling of the John Lewis memorial on the Decatur Square on Saturday, Aug. 24. The 12-foot statue replaces a Confederate obelisk that stood on the Square in front of the Historic DeKalb County Courthouse. The obelisk was removed in 2020. […]
https://decaturish.com/2024/08/john-lewis-memorial-unveiled-on-the-decatur-square/
SurrealEngine: Open-source reimplementation of Unreal Engine with playable UT99
13ft: Self-Hosted 12ft.io Alternative
How does this (or 12ft.io, for that matter) actually work? Client-side trickery? Magic cookies? Something like adblock?
EDIT: Apparently it just blocks JavaScript and disguises itself as a search engine crawler. This still doesn't work on sites like Bloomberg, and if I understand correctly nothing can be done there.
magnolia1234/bpc_uploads
Issues · wasi-master/13ft
Copilot AI calls journalist a child abuser, Microsoft tries to launder responsibility
Martin Bernklau is a German journalist who reported for decades on criminal trials. He looked himself up on Bing, which suggests you use its Copilot AI. Copilot then listed a string of crimes Bernklau had supposedly committed — saying that he was an abusive undertaker exploiting widows, a child abuser, an escaped criminal mental patient. [SWR, in German] These were stories Bernklau had written about. Copilot produced text as if he was the subject. Then Copilot returned Bernklau’s phone number and address! (Pivot to AI)
and there’s fucking nothing in place to prevent this utterly obvious failure case, other than if you complain Microsoft will just lazily regex for your name in the result and refuse to return anything if it appears
Bypass Paywalls Add-On Takedown Notice
dmca/2024/08/2024-08-09-news-media-alliance.md at master · github/dmca
How is the accused project designed to circumvent your technological protection measures?
The identified Bypass Paywalls technology circumvents NM/A’s members’ paywalls in one of two ways.
[private] For hard paywalls, it is our understanding that the identified Bypass Paywalls technology automatically scans web archives for a crawled version of the protected content and displays that content.
If the web archives have the content, then a user could just search them manually. The extension isn’t logging users in and bypassing your login process; it’s just running a web search for them.
Dunno. Regardless of the method used by the extension, I think any extension called "Bypass Paywalls" that does what it says on the tin can pretty unambiguously be said to be designed to circumvent "technological protection measures". In this case, it circumvents the need to login entirely and obviously it circumvents the paywall.
Though as you said, these guys should probably be sending DMCAs to the Internet Archive if they actually want to stop their paywalls from being bypassed. I know they do honor takedown requests. Maybe archive.today is the problem? Maybe they don't honor DMCA requests. I very often see them used on Hacker News whenever someone wants share a paywall-free link to an article.
Dunno, I think regardless of the method used by the extension, I think any extension called "Bypass Paywalls" that does what it says on the tin can pretty unambiguously be said to be designed to circumvent "technological protection measures".
“Bypass” and “Circumvent” are nearly synonymous in some uses - they both mean “avoid” - but that’s not really the point.
From a legal perspective, it’s pretty clear no circumvention of technological protection measures is taking place*. Yes, bypassing or circumventing a paywall to get to the content on the site itself would be illegal, were that content effectively protected by a technological measure. But they’re not doing that. Rather, a circumvention of the entire site is occurring, which is completely legal (an obvious exception would be if they were hosting infringing content themselves or something along those lines, but we’re talking about the Internet Archive here).
* - to be clear, I’m referring to what was detailed in the request, not the part that was redacted. That part may qualify as a circumvention.
In this case, it circumvents the need to login entirely and obviously it circumvents the paywall.
Following the same logic, Steam could claim that a browser extension showing where you can get the same game for cheaper or free circumvents their technological protection measure. It doesn’t. It circumvents the entire storefront, which is not illegal.
That’s the same thing that’s happening here - linking to the same work that’s legally hosted elsewhere.
Though as you said, these guys should probably be sending DMCAs to the Internet Archive
Yes - if they don’t want their content available, that’s what they should do. They might not want to do that, because they appreciate the Internet Archive’s mission (I wonder if it’s possible to ask that content be taken down until X date, or for content to be made inaccessible but for it to still be archived?) or they might be taking a multi-pronged approach.
Maybe archive.today is the problem? Maybe they don't honor DMCA requests.
Good point. If so, and if their site isn’t legally compliant in the same ways, then the extension becomes a lot less legally defensible if it’s linking there. That’s still not because it’s circumventing a technological protection, though - it’s because of precedent that “One who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, going beyond mere distribution with knowledge of third-party action, is liable for the resulting acts of infringement by third parties using the device, regardless of the device’s lawful uses,” (Source), where “device” includes software. Following that precedent, plaintiffs could claim that the extension promoted its use to infringe copyright based off the extension’s name and that it had knowledge of third-party action because it linked directly to sites known to infringe copyright.
The Digital Media Law Project points out that there are two ways sharing links can violate the DMCA:
- Trafficking in anti-circumvention tools - which is obviously not what’s going on here
- Contributory copyright infringement - which is basically doing something described by the precedent I shared above.
I’m not sure how the extension searches web archives. If it uses Google, for example, then it would make sense to serve Google a DMCA takedown notice (“stop serving results to the known infringing archive.piracy domain”), but if the extension directly searches the infringing web archive, then the extension developers would need to know that the archive is infringing. Serving them a DMCA takedown (“stop searching the known infringing archive.piracy domain”) would give them notice, and if they ignored it, it would then be appropriate to send the takedown directly to their host (Github, the browser extension stores, etc) citing that they had been informed of the infringement of a site they linked to and were de facto committing contributory infringement themselves.
Given that they didn’t do that, I can conclude one of the following:
- The lawyers are incompetent.
- The lawyers are competent and recognize that engaging in bad faith like this produces faster results; if this is contested they’ll follow up with something else, possibly even the very actions I described.
- The archives that are searched by the extension aren’t infringing and this was the best option the lawyers could come up with.
Eligible libraries, archives, and museums have a few exemptions to the DMCA’s anti-circumvention clauses that aren’t available to ordinary citizens, but these aren’t unique to the Internet Archive. For example:
Literary works, excluding computer programs and compilations that were compiled specifically for text and data mining purposes, distributed electronically where:
(A) The circumvention is undertaken by a researcher affiliated with a nonprofit institution of higher education, or by a student or information technology staff member of the institution at the direction of such researcher, solely to deploy text and data mining techniques on a corpus of literary works for the purpose of scholarly research and teaching;
(B) The copy of each literary work is lawfully acquired and owned by the institution, or licensed to the institution without a time limitation on access;
(C) The person undertaking the circumvention views the contents of the literary works in the corpus solely for the purpose of verification of the research findings; and
(D) The institution uses effective security measures to prevent further dissemination or downloading of literary works in the corpus, and to limit access to only the persons identified in paragraph (b)(5)(i)(A) of this section or to researchers affiliated with other institutions of higher education solely for purposes of collaboration or replication of the research.
This exemption doesn’t allow them to publish the content, though, nor would it provide them immunity to takedown requests, if it did.
These exemptions change every three years and previously granted exemptions have to be renewed. The next cycle begins in October and they started accepting comments on renewals + proposals for expanded or new exemptions in April, so that’s why we’re hearing about companies lobbying against them now.
Rulemaking Proceedings Under Section 1201 of Title 17 | U.S. Copyright Office
Car thefts and safe bombings. The police have arrested four men aged 20–35 who are suspected of being behind a large number of thefts and break-ins in Skåne and Halland. Three of the men were remanded in custody at Helsingborg District Court on 14 August.
blog.zaramis.se/2024/08/24/bil…
Newly added to the Trade-Free Directory:
Technology without Borders
Technik ohne Grenzen e.V. (Technology without Borders, Germany) has one goal: improving living conditions in developing countries. This goal is pursued principally through the following three areas of activity:
Coordinating on-site, tailored cooperative development work that makes the most effective use of the available resources.
Delivering education and training that empower local people to effect change themselves.
Stimulating sustainable development, for example through microbusiness initiatives.
Through these activities, we aim to put our technical expertise to meaningful use in the service of others. This is the guiding principle behind the foundation of our organisation, our motto being “as sophisticated as necessary, as simple as possible”. In the organisation’s name, the term “technology” represents an invitation for all technical enthusiasts, as well as tradespeople, technicians, artisans and engineers, to participate in our work. Our organisation also places great importance on offering students the opportunity to make a difference through the application of technical and engineering skills to a variety of challenges in different locations and cultures. If these ideas inspire you, we would love to have you work with us!
#developmentCooperation #education #training
More here:
Newly added to the Trade-Free Directory:
Sana Mare
Sana Mare is an international environmental organisation that works to protect the oceans. Our focus is on combating the discharge of civilisation’s waste into the ocean. Poverty is the biggest environmental toxin. In developing countries in Africa and Asia, we therefore combine our efforts to protect the ocean with the reduction of poverty.
We are organised as an association. The association was founded in 2020 by oceanographer and climate scientist Lucas Schmitz. We do not strive for profit, but to maximise environmental protection. If you identify with the goals of our association, you are very welcome as a member.
#humanitarianAid #oceanprotection
More here:
directory.trade-free.org/goods…
Newly added to the Trade-Free Directory:
Cochrane
Cochrane is a British international charitable organisation formed to synthesize medical research findings to facilitate evidence-based choices about health interventions involving health professionals, patients and policy makers. It includes 53 review groups that are based at research institutions worldwide. Cochrane has over 37,000 volunteer experts from around the world.
More here:
directory.trade-free.org/goods…
The visit to Ålesund also included a visit to the Geirangerfjord, one of Norway’s World Heritage sites. The Geirangerfjord, Sunnylvsfjord, Tafjord and Aurlandsfjord, together with the Nærøyfjord in Vestlandet, were inscribed on UNESCO’s World Heritage List in 2005 on account of their natural environment.
blog.zaramis.se/2024/08/24/gei…
Linus Torvalds Begins Expressing Regrets Merging Bcachefs
There's been some Friday night kernel drama on the Linux kernel mailing list... Linus Torvalds has expressed regret over merging the Bcachefs file-system, with an ensuing back-and-forth between him and the file-system's maintainer.
ext4 aims to not lose data under the assumption that the single underlying drive is reliable. btrfs/bcachefs/ZFS assume that one/many of the perhaps dozens of underlying drives could fail entirely or start returning garbage at any time, and try to ensure that the bad drive can be kicked out and replaced without losing any data or interrupting the system. They're both aiming for stability, but stability requirements are much different at scale than a "dumb" filesystem can offer, because once you have enough drives one of them WILL fail and ext4 cannot save you in that situation.
Complaining that datacenter-grade filesystems are unreliable when using them in your home computer is like removing all but one of the engines from a 747 and then complaining that it's prone to crashing. Of course it is, because it was designed under the assumption that there would be redundancy.
This is actually a feature that enterprise SAN solutions have had for a while; being able to choose your level of redundancy & performance at a file level is extremely useful for minimising downtime and not replicating ephemeral data.
Most filesystem features are not for the average user who has their data replicated in a cloud service; they're for businesses where this flexibility saves a lot of money.
and not lose files
Which is exactly why you'd want to run a CoW filesystem with redundancy.
It's not that obscure - I had a use case a while back where I had multiple rocksdb instances running on the same machine and wanted each of them to store their WAL only on SSD storage with compression and have the main tables be stored uncompressed on an HDD array with write-through SSD cache (ideally using the same set of SSDs for cost). I eventually did it, but it required partitioning the SSDs in half, using one half for a bcache (not bcachefs) in front of the HDDs and then using the other half of the SSDs to create a compressed filesystem which I then created subdirectories on and bind mounted each into the corresponding rocksdb database.
Yes, it works, but it's also ugly as sin and the SSD allocation between the cache and the WAL storage is also fixed (I'd like to use as much space as possible for caching). This would be just a few simple commands using bcachefs, and would also be completely transparent once configured (no messing around with dozens of fstab entries or bind mounts).
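For comparison, the bcachefs equivalent of that setup really is only a handful of commands – a rough sketch, with invented device names, and options as I understand them from the bcachefs documentation:

```shell
# Sketch only (device names invented): one bcachefs filesystem where the
# SSDs take foreground writes and act as a promote cache, the HDDs hold
# bulk data, and compression is enabled filesystem-wide.
bcachefs format \
  --label=ssd.ssd1 /dev/nvme0n1 \
  --label=ssd.ssd2 /dev/nvme1n1 \
  --label=hdd.hdd1 /dev/sda \
  --label=hdd.hdd2 /dev/sdb \
  --foreground_target=ssd \
  --promote_target=ssd \
  --background_target=hdd \
  --compression=lz4

# All member devices are mounted together as a single filesystem:
mount -t bcachefs /dev/nvme0n1:/dev/nvme1n1:/dev/sda:/dev/sdb /mnt/data
```

The SSD/HDD split is then managed dynamically by the filesystem itself, rather than being fixed at partitioning time.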
One point: ext4 has a maximum file size of 16TiB. To a regular user that is stupidly huge and of no concern but it's exactly the type of thing you overlook if you "just use ext4" on anything and everything then end up with your database broken at work because of said bad advice.
Use the filesystem that makes the most sense for your use case. Consider it every single time you format a disk. Don't become complacent! Also fuck around with the new shit from time to time! I decided to format my Linux desktop partitions with btrfs over a decade ago and as a result I'm an excellent user of that filesystem but you know what? I'm thinking I'll try bcachefs soon and fiddle around more with my zfs partition on my HTPC.
BTW: If you're thinking about trying out btrfs, I would encourage you to learn about its non-trivial maintenance tasks. btrfs needs you to fuck with it from time to time or you'll run out of disk space "for no reason". You can schedule cron jobs to take care of everything (as I have done) but you still need to learn how it all works. It's not a "set it and forget it" FS like ext4.
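A minimal sketch of what those scheduled jobs might look like (paths and usage thresholds are illustrative, not recommendations):

```shell
# Reclaim under-used data/metadata chunks so unallocated space doesn't
# run out while `df` still shows free space:
btrfs balance start -dusage=50 -musage=50 /

# Verify checksums of all data and metadata (run e.g. monthly):
btrfs scrub start /

# Check how space is really being used (more honest than plain `df`):
btrfs filesystem usage /
```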
I wouldn't say, "repairing XFS is much easier." Yeah, fsck -y
with XFS is really all you have to do 99% of the time but also you're much more likely to get corrupted stuff when you're in that situation compared to say, btrfs which supports snapshotting and redundancy.
Another problem with XFS is its lack of flexibility. By that I don't mean, "you can configure it across any number of partitions on-the-fly in any number of (extreme) ways" (like you can with btrfs and zfs). I mean it doesn't have very many options as to how it should deal with things like inodes (e.g. tail allocation). You can increase the total amount of space allowed for inode allocation but only when you create the filesystem and even then it has a (kind of absurdly) limited number that would surprise most folks here.
As an example, with an XFS filesystem, in order to store 2 billion symlinks (each one takes an inode) you would need 1TiB of storage just for the inodes. Contrast that with something like btrfs with max_inline
set to 2048 (the default), and 2 billion symlinks will take up a little less than 1GB (assuming a simplistic setup on at least a 50GB single partition).
Learn more about btrfs inlining: btrfs.readthedocs.io/en/latest…
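For anyone wanting to experiment, inlining is a mount-time setting on btrfs, while the XFS inode ceiling is fixed at mkfs time – a rough sketch, with invented device paths:

```shell
# btrfs: max_inline is a mount option (in bytes); 2048 is the usual
# default. Small files and symlink targets under the limit live inside
# metadata rather than consuming separate extents.
mount -o max_inline=2048 /dev/sdb1 /mnt/btrfs

# XFS by contrast sets its inode space ceiling at creation time:
# mkfs.xfs -i maxpct=25 /dev/sdc1   # allow up to 25% of space for inodes
```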
One of the best filesystem codebases out there. Really a top-notch file system if you don't need to resize it once it's created. It is a write-in-place filesystem, not copy-on-write, so some features such as snapshots are not possible using XFS. If you don't care about features found in btrfs, zfs or bcachefs, and you don't need to resize the partition after creating it, XFS is a solid and very fast choice.
Ext4 codebase is known to be very complex and some people say even scary. It just works because everybody's using it and bugs have been fixed years ago.
For me the reason was that I wanted encryption, raid1 and compression with a mainlined filesystem to my workstation. Btrfs doesn't have encryption, so you need to do it with luks to an mdadm raid, and build btrfs on top of that. Luks on mdadm raid is known to be slow, and in general not a great idea.
ZFS has raid levels, encryption and compression, but doesn't have fsck. So you'd better have a UPS for your workstation in case of power outages. If you do not unmount a ZFS volume cleanly, there's a risk of data loss. ZFS also has a weird license, so you will never get it in the mainline Linux kernel. And if you install the module separately, you're not able to update to the latest kernel before ZFS supports it.
Bcachefs has all of this. And it's supposed to be faster than ZFS and btrfs. In a few years it can really be the golden Linux filesystem recommended for everybody. I sure hope Kent gets some more help and stops picking fights with Linus before that.
He just needs to know that sometime his changes will get pushed to the next cycle.
This. Well said.
Kent is reasonable, and sees Linus's need to keep order. I think he just pushes it sometimes, and doesn't understand how problematic that can be.
That said - he has resubmitted an amended version of the patch, that doesn't touch code outside of bcachefs, and is less than 1/3 the size.
if you don't need to resize it once it's created
xfs_growfs is a thing. I know nothing about xfs. Is this something I should avoid for some reason?
docs.redhat.com/en/documentati…
8.4. Increasing the Size of an XFS File System | Red Hat Product Documentation
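As far as I can tell it's fine to use – a sketch of typical usage (mount point invented), with the caveat that XFS can only grow, never shrink:

```shell
# Grow a mounted XFS filesystem to fill its (already enlarged) block
# device. Note it operates on the mount point, not the device:
xfs_growfs /srv/data

# Or grow to an explicit size, given in filesystem blocks:
xfs_growfs -D 26214400 /srv/data
```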
Bruh, you can't just submit entirely new data structures as "fixes", let alone past the merge window.
It should not be hard at all to grasp that.
He accepted Linus's needs as the project head to keep order. He resubmitted the patch set without the contentious parts. It's less than 1/3 the size and doesn't touch code outside of bcachefs. Problem solved.
Honestly, Kent seems pretty reasonable (though impassioned), and bcachefs will probably make it, and Kent will get used to just submitting things at the right time in the cycle.
Honestly I'm fine with ZFS on larger scale, but on desktop I want a filesystem that can do compression (like NTFS on windows) and snapshots.
I have actually used compression a lot, and it has spared me a lot of space. No, storage is not cheap, or else I'm awaiting your shipment.
Other than that I'm doing differential backups on windows, and from time to time it's very useful that I can grab a file to which something just happened. Snapshots cost much less storage than complete copies, which I couldn't afford, but this way I have daily diffs for a few years back, and it only costs a TB or so.
Ext4 is faster, but I love BTRFS not just because of CoW but because of subvolumes as well. You could probably get something similar going with LVM, but I prefer that to be baked in, hence why I'm waiting for bcachefs, because it'll up the ante with tighter integration, so that might translate to better performance.
Notice my use of the word might. BTRFS performance is not so great.
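A sketch of the kind of daily snapshot job described above (paths invented): read-only snapshots are cheap because only blocks that change afterwards consume new space.

```shell
# Take a read-only, dated snapshot of the home subvolume:
btrfs subvolume snapshot -r /home /snapshots/home-$(date +%F)

# Old snapshots can be dropped as part of the same scheduled job, e.g.:
# btrfs subvolume delete /snapshots/home-2022-08-24
```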
Btrfs doesn't have encryption, so you need to do it with luks to an mdadm raid, and build btrfs on top of that. Luks on mdadm raid is known to be slow, and in general not a great idea.
Why involve mdadm? You can use one btrfs filesystem on a pair of luks volumes with btrfs's "raid1" (or dup) profile. Both volumes can decrypt with the same key.
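Sketched out, with hypothetical device names (and skipping the crypttab/fstab plumbing you'd want for automatic unlocking at boot):

```shell
# Encrypt both disks; the same passphrase or keyfile can unlock both
cryptsetup luksFormat /dev/sda
cryptsetup luksFormat /dev/sdb
cryptsetup open /dev/sda crypt_a
cryptsetup open /dev/sdb crypt_b

# One btrfs filesystem spanning both mappings, mirroring data and
# metadata -- no mdadm layer involved
mkfs.btrfs -d raid1 -m raid1 /dev/mapper/crypt_a /dev/mapper/crypt_b
mount /dev/mapper/crypt_a /mnt
```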
ZFS doesn't support tiered storage at all. Bcachefs is capable of promoting and demoting files to faster but smaller or slower but larger storage. It's not just a cache. On ZFS the only option is really multiple zpools. Like you can sort of do that with the persistent L2ARC now but TBs of L2ARC is super wasteful and your data has to fully fit the pool.
Tiered storage is great for VMs and games and other large files. Play a game, it gets promoted to NVMe for fast loading. Done playing, it gets moved back to the HDDs.
The ext4 codebase is known to be very complex, and some people even call it scary. It just works because everybody's using it and the bugs were fixed years ago.
I heard that ext4's best feature was its fsck utils being extremely robust and able to recover from a lot of problems. Which does not shine a great light on the filesystem itself :/ and is probably a result of the complex codebase.
FAT32 does not just work for my Linux OS.
To people who just want to browse the web, use Office applications and a few other things, ext4 just works and FAT32 really just doesn't.
I get the point you're trying to make: FAT32 also has a small maximum file size and is missing some features, and ext4 is like that relative to, for instance, bcachefs.
But FAT32 (and exFAT and a few others) have completely different use cases; I couldn't use FAT32 for Linux and expect it to work, and I also couldn't use ext4 for my USB stick and expect it to just work as a USB stick.
Bcachefs has all of this. And it's supposed to be faster than ZFS and btrfs. In a few years it could really be the golden Linux filesystem recommended for everybody.
ngl, the number of mainline Linux filesystems I've heard this about. ext2, ext3, btrfs, reiserfs, ...
tbh I don't even know why I should care. I understand all the features you mentioned and why they would be good, but i don't have them today, and I'm fine. Any problem extant in the current filesystems is a problem I've already solved, or I wouldn't be using Linux. Maybe someday, the filesystem will make new installations 10% better, but rn I don't care.
ZFS doesn't have fsck because it already does the equivalent during import, reads and scrubs. Since it's CoW and transaction based, it can rollback to a good state after power loss. So not only does it automatically check and fix things, it's less likely to have a problem from power loss in the first place. I've used it on a home NAS for 10 years, survived many power outages without a UPS. Of course things can go terribly wrong and you end up with an unrecoverable dataset, and a UPS isn't a bad idea for any computer if you want reliability.
Totally agree about mainline kernel inclusion, just makes everything easier and ZFS will always be a weird add-on in Linux.
I know, that was an example of why it doesn't work on ZFS. That would be the closest you can get with regular ZFS, and as we both pointed out, it makes no sense, it doesn't work. The L2ARC is a cache, you can't store files in it.
The whole point of bcachefs is tiering. You can give it a 4 TB NVMe, a 4 TB SATA SSD and an 8 TB HDD and get almost the whole 16 TB of usable space in one big filesystem. It'll shuffle the files around for you to keep the hot data set on the fastest drive. You can pin data to the storage medium that matches the performance needs of the workload. The roadmap claims they want to analyze usage patterns and automatically store the files on the slowest drive that doesn't bottleneck the workload. The point is, unlike regular bcache or the ZFS ARC, it's not just a cache, it's also storage space available to the user.
You wouldn't copy the game to another drive yourself directly. You'd request the filesystem to promote it to the fast drive. It's all the same filesystem, completely transparent.
phoronix.com/review/bcachefs-l…
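For a sense of what that looks like in practice, here's a sketch based on the bcachefs documentation. Device names and labels are made up, and option spellings should be checked against your version's man page:

```shell
# Labels group devices into targets; the target options then steer
# writes to the fast group and migrate cold data to the slow group
bcachefs format \
  --label=ssd.nvme0 /dev/nvme0n1 \
  --label=ssd.sata0 /dev/sda \
  --label=hdd.hdd0  /dev/sdb \
  --foreground_target=ssd \
  --promote_target=ssd \
  --background_target=hdd

# All member devices mount together as one filesystem
mount -t bcachefs /dev/nvme0n1:/dev/sda:/dev/sdb /mnt
```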
Steam library
backups
media library
Wonderful.
But these are libraries. Not single files.
It is only in TLS where you have to disable compression, not in HTTP.
security.stackexchange.com/que…
Could you explain how a CRIME attack can be done to a disk?
CRIME - How to beat the BEAST successor?
With the advent of CRIME, BEAST's successor, what possible protection is available for an individual and/or system owner in order to protect themselves and their users against this new attack on TLS? (Information Security Stack Exchange)
Looks to be an exploit only possible because compression changes the length of the response, and the data can be injected into the request and is reflected in the response. So an attacker can guess the secret byte by byte by observing a shorter response from the server.
That seems like something not feasible to do to a storage device or anything that is encrypted at rest as it requires a server actively encrypting data the attacker has given it.
We should be careful of seeing a problem in one very specific place and then trying to apply the same logic to everything broadly.
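The length side channel described above is easy to demonstrate with plain zlib. A toy sketch, with a made-up secret and no real server or TLS involved:

```python
import zlib

# Hypothetical secret that the "server" embeds in every response
SECRET = "sessionid=31337deadbeef"

def compressed_response_len(attacker_input: str) -> int:
    """The CRIME setup: attacker-controlled input is reflected into
    the same compressed stream as the secret."""
    return len(zlib.compress(f"{attacker_input}{SECRET}".encode()))

# A guess sharing a longer prefix with the secret compresses better,
# because DEFLATE back-references the repeated bytes. Observing the
# response length therefore leaks the secret one byte at a time.
right = compressed_response_len("sessionid=31337")
wrong = compressed_response_len("sessionid=99999")
print(right < wrong)
```

This also illustrates the point above: the attack needs an active compressor mixing attacker data with the secret, which data at rest on a disk doesn't provide.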
It's a filesystem that supports all of these features (and in combination):
- snapshotting
- error correction
- per-file or per-directory "transparently compress this"
- per-file or per-directory "transparently back this up"
If that is meaningless to you, that's fine, but it sure as hell looks good to me. You can just stick with ext3 - it's rock solid.
Brand-new anything will not show up with amazing performance, because the primary focus is correctness, with features secondary.
Premature optimisation could kill a project's maintainability; wait a few years. Even then, despite Kent's optimism I'm not certain we'll see performance beating a good non-CoW filesystem; XFS and ext4 have been eking out performance for many years.
Are CDDL and GPL really incompatible?
Wikipedia seems to suggest that CDDL and GPL are incompatible, yet no one knows for sure why or how. Why and how are the CDDL and GPL incompatible? (Open Source Stack Exchange)
Not true
The only condition is that CDDL and GPL don't apply to the same file. Wifi works just fine and the source code isn't GPL, yet wifi drivers are in the kernel.
opensource.stackexchange.com/q…
I also couldn't use ext4 for my USB stick and expect it to just work as a USB stick.
Why not? It can be adapted to a smaller drive size fairly easily during filesystem creation.
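For example (device name hypothetical, and you'd pick your own label):

```shell
# mkfs.ext4 sizes the filesystem to the partition automatically;
# -m 0 drops the 5% root reservation, which is rarely useful on
# removable media, and -L sets a label most file managers display
mkfs.ext4 -m 0 -L usbstick /dev/sdX1
```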
Yes, but note that neither the Linux Foundation nor OpenZFS are going to put themselves at legal risk on the word of a Stack Exchange comment, no matter who it's from. Even if their legal teams all have no issue, Oracle has a reputation for being litigious, and the fact that they haven't resolved the issue once and for all, despite being able to, suggests they're keeping the possibility of litigation in their back pocket (regardless of whether such a case would have merit).
Canonical has said they don't think there is an issue and put their money where their mouth was, but they are one of very few to do so.
A rather overly simplistic view of filesystem design.
More complex data structures are harder to optimise for pretty much all operations, but I'd suggest the overwhelmingly most important metric for performance is development time.
The two works can live harmoniously together in the same repo, therefore, not incompatible by one definition and the one that matters.
There are already big organisations doing it and they haven't had any issues.
Btrfs has architectural issues that cannot be fixed. It is fine for smaller raid 0/1 setups, but as soon as you try to scale it up you run into performance issues. This is because of how it was designed.
Bcachefs is like btrfs and has all the features btrfs does. However, it is also likely to be much faster. Additionally it has some extra features like tiered storage, which lets you combine different storage media.
ZFS doesn't have a Linux fsck as it is its own thing. It instead has ZFS scrubbing, which fixes corruption. Just make sure you have at least raid 1, as without a duplicate copy ZFS will have no way of fixing corruption, which will cause it to scream at you.
If you just need to get data off you can disable error checking. Just use it at your own risk.
I hope so.
It looks really promising for home users. At this point I've moved to zfs because of proxmox though, so it isn't as relevant to me as it once was.
But scrub is not fsck. It just goes through the checksums and corrects if needed. That's why you need ECC ram so the checksums are always correct. If you get any other issues with the fs, like a power off when syncing a raidz2, there is a chance of an error that scrub cannot fix. Fsck does many other things to fix a filesystem...
So basically a typical ZFS installation is with a UPS, and I would avoid using it on my laptop just because it kind of needs ECC RAM and you should always unmount it cleanly.
This is where bcachefs comes into play. It will implement whatever we love about ZFS, but also be kind of feasible for mobile devices. And its fsck is pretty good already; it even gets online checks in 6.11.
Don't get me wrong, my NAS has and will have zfs because it just works and I don't usually need to touch it. The NAS sits next to UPS...
Me neither, but the risk is there and well documented.
The point was, ZFS is not great as your normal laptop/workstation filesystem. It kind of requires a certain setup, can be slow in certain kinds of workflows, expects disks of the same size, and is never available immediately for the latest kernel version. Nowadays you actually can add more disks to a pool, but for a very long time you needed to build a new one. Adding a larger disk to a pool will still not resize it until all the disks are replaced.
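For what it's worth, the grow-by-replacing dance looks roughly like this. Pool and device names are hypothetical; a sketch, not a tested recipe:

```shell
# Let the vdev grow automatically once every member has been replaced
zpool set autoexpand=on tank

# Swap each smaller disk for a larger one, resilvering in between
zpool replace tank sda sdc
zpool status tank   # wait for the resilver before the next replace
```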
It shines with steady and stable raid arrays, which are designed to a certain size and never touched after they are built. I would never use it in my workstation, and this is the point where bcachefs gets interesting.
Yeah, same :D
It was a typo, I meant compression. Specifically per-file controlled compression, not per-directory or per-dataset.
This week in KDE: per-monitor brightness control and “update then shut down”
This week was all about the quality of life features! As we close in on Plasma 6.2 (the soft feature freeze is in four days, eek!), some great work that’s been in progress for a long time got… (Adventures in Linux and KDE)
same here! kde is one of the reasons im feeling like hopping. they really polished it a lot in areas where it was needed.
i just need a little free time, any day now.
I wonder how that will play together with Distros like OpenSUSE Tumbleweed where you basically do a whole OS upgrade and are not supposed to do "just" updates.
I hope we can easily supply our own script to run.
You know what else would be awesome? "Update, reboot, and (just this once) automatically login"
It would be super useful for when I'm alone at home working but want to do updates over my lunch break.
arm64 / aarch64 compatibility
I'm about to step into the wonderful world of ARM Linux. I work with ARM32 as an embedded developer professionally (Cortex-M3 specifically), so I'm not a complete newbie. But I've never used ARM64, and I've never used it with a desktop OS. So I'm doing my research, as one does, to know roughly what I'll be dealing with.
I have a few questions regarding backward compatibility and architecture-naming. Maybe you specialists out there could shed some light.
From what I could find, I understand the following:
- arm64 and aarch64 are the same thing: the former is what Linus likes to say while the latter is what ARM calls their own stuff.
- arm64 / aarch64 really mean "compatible with ARMv8" as a least common denominator, meaning ARMv8.x-y (x being the extension, y being A for application or R for realtime) will run it, just without taking advantage of any extension or realtime instructions.
- ARMv9.x will run arm64 / aarch64 kernels and applications, as it's (supposedly) backward-compatible with ARMv8, just without taking advantage of the ARMv9 ISA.
- If I want to create arm64 software that takes advantage of this-or-that extension or realtime instructions, I have to compile it in explicitly. I'm not sure if gcc handles special instructions, I haven't checked yet, but I suppose it does since it knows about the Thumb mode for instance.
Do I understand correctly?
If I do create some software that relies on extended ARMv8 or ARMv9 features and I want to release my software as a package, how should I name the package's architecture? Is there even a standard for that? Will it get rejected by the package managers of the few ARM distros out there, or will it be recognized as a subset of the wider arm64 / aarch64 architecture?
Pretty much. From v8.0 onwards all the extra features are indicated by ID flags. Stuff that is relevant to kernel mode will generally be handled automatically by the kernel patching itself on boot, and in user space some libraries will select appropriately accelerated functions when the ISA extensions are probed. There are a bunch of advisory instructions encoded in the hint space that will be effectively NOPs on older hardware but will enhance execution if run on newer hardware.
If you want to play with newer instructions, have a look at QEMU's "max" CPU.
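As a concrete sketch of both points (the cross toolchain name and extension choice are illustrative; check your toolchain's docs for the exact `-march` feature modifiers):

```shell
# Opt in to an ARMv8.2-A baseline plus the dot-product extension at
# compile time; the binary may fault on CPUs lacking that extension
aarch64-linux-gnu-gcc -march=armv8.2-a+dotprod -O2 -o demo demo.c

# QEMU's "max" CPU model enables every extension QEMU implements,
# handy for testing newer instructions without the hardware
qemu-aarch64 -cpu max ./demo
```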
Thanks!
libraries will select appropriately accelerated functions when the ISA extensions are probed.
Yeah okay, that sounds like how it's always been done. I don't know why I figured it would be any different with ARM. But that makes complete sense.
This past week I visited, among other places, the art nouveau town of Ålesund in western Norway. A small town known for fishing, and for the fact that its central parts consist mainly of Jugendstil buildings, or art nouveau as the style is usually called in English.
As Developer Advocate on the Mastodon team, I try to check out as many different apps and experiences built using the platform as I can. The other day I noticed an app for the Apple Watch that I had not seen yet (Odous) – so I installed it.
I posted from that app today, and liaizon asked for a screenshot… so I thought I would do a quick screenshot comparison of the three watchOS apps that I know about.
Posting from the Odous app on Apple Watch— Andy Piper (@andypiper@macaw.social) 2024-08-19T11:42:57.026Z
Each of these apps needs to be configured by authenticating with your Mastodon account on the phone first. Once that is done, you can scroll through your timeline on your Watch, view images, and even post or reply, if that is something you find helpful. I don't have a lot to say about any of them specifically, but they are all perfectly nice and useful apps. I've listed the screenshots in the order in which I installed each app: Stomp was the first one I found, then Oxpecker, and Odous is the one I installed most recently.
Stomp
Oxpecker
Odous
andypiper.co.uk/2024/08/19/mas…
#Blaugust2024 #100DaysToOffload #app #appleWatch #client #fediverse #mastodon #screenshots #watchos
The company behind Mastodon
Our story, mission, annual reports, interviews, press releases and more. (joinmastodon.org)
Cattleya A/S is a pelagic fishing company in Esbjerg, the last remaining pelagic fishing company there. Esbjerg was once Denmark's largest fishing port and, together with Skagen, the centre of industrial fishing and the fishmeal industry. Today the fishmeal factory is gone and Thyborøn has taken over Esbjerg's role.
Mike Macgirvin is a veteran of fediverse development. He worked at Facebook until 2010, and before that at AOL and Netscape, among others. He is also behind the development of nomadic identity.
I'm still a bit confused by the use of this "Driver Store". Since when does Wine support device drivers? Or are we talking about something else?
Phoronix seems to explain a bit more, but I still did not understand: phoronix.com/news/Wine-9.16-Re…
Could anyone share their insights?
Wine 9.16 Begins Working On Driver Store Implementation, Pbuffer Support For Wayland
Wine 9.16 is out as the newest bi-weekly development snapshot for this open-source software that enables running Windows games and applications on Linux. (www.phoronix.com)
Wine 9.16 released with more Wayland work and an initial Driver Store implementation
The Wine 9.16 development release is now available for this compatibility layer to run Windows apps and games on Linux. Here's all that's changed. (Liam Dawe, GamingOnLinux)
I'm actually wondering if it's not just applications. That text talks of installing drivers for devices, so I'm wondering if this is about better support for hardware that's paired with specific software. The recent use case that's got it on my mind is Rekordbox with Pioneer DJ decks. My housemate was curious, so I tried running it under WINE and it launches just fine, but it could not see the decks at all, nor could it do the encrypted license-key verification it performs with its driver. And I did manually install the driver into the prefix first.
However, I'm not positive this is it. It's just a hunch.
taanegl, in reply to anarchrist: Listen, if there isn't a server you can download maps and umods on that allows you to fly a shark in the sky with redeemer rockets over a Burger King, then I'm not interested...
UT99 was too good for us.