

If you’ve ever been through the audit process CAs are subjected to, you’ll know that not violating the compliance controls and staying audit-ready eats up a massive chunk of your attention for a lot of the year.


Can I ask what client you’re using?


I assumed that the primary account had full control over secondary user profiles, will have to revisit and confirm - thanks for the tip!


I’m aware of what’s happening in the states. I’m talking from a resourcing perspective. You’d already have to know what you were after to confirm its absence from the phone, if the wipe can be done silently.
If entering the right unlock password could load you into your dummy profile while silently deleting the keys to your main profile, whose space could then be reclaimed as free storage, that’d be pretty hard to prove in a way that warranted arresting everyone.
That would limit this charge to those who announced it as a political statement, or who were already being targeted specifically.


What would be really cool is if it binned the storage keys for one user and not the other, silently. That way you could actually protect your data without being martyred.
They’d have to prove a lot in the first instance to warrant arresting you then and there, like that they knew you’d done it.
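As a toy model of what I mean (entirely hypothetical - no stock phone works like this, and the paths and codes are made up): the code you enter decides which profile’s storage key survives.

    #!/usr/bin/env python3
    """Toy model of a duress unlock: the entered code picks which profile's
    storage key survives. Purely illustrative, not how any real phone works."""
    import hashlib
    import hmac
    import os

    KEYSTORE = {"main": "/keys/main.bin", "decoy": "/keys/decoy.bin"}  # made-up paths

    # Stored verifiers for the two unlock codes (a real design would salt and stretch these).
    VERIFIERS = {
        "main": hashlib.sha256(b"real-code").digest(),
        "decoy": hashlib.sha256(b"duress-code").digest(),
    }

    def unlock(code: str) -> str | None:
        digest = hashlib.sha256(code.encode()).digest()
        if hmac.compare_digest(digest, VERIFIERS["decoy"]):
            # Duress path: destroy the main profile's key, then open the decoy
            # profile normally so nothing looks different on screen.
            os.remove(KEYSTORE["main"])
            return "decoy"
        if hmac.compare_digest(digest, VERIFIERS["main"]):
            return "main"
        return None  # wrong code: nothing unlocks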
Looks like that might have changed, libc-gconv-modules-extra has an i386 package for 2.42-5 added at like midnight UTC+1. Given the sources only update every 6 hours, might be you found an unlucky update in between?
I struggled to find a timestamp for the release, but the changelog has one; I’m not sure how closely that tracks when the package actually became available:
glibc (2.42-5) unstable; urgency=medium

  [ Martin Bagge ]
  * Update Swedish debconf translation. Closes: #1121991.

  [ Aurelien Jarno ]
  * debian/control.in/main: change libc-gconv-modules-extra to Multi-Arch:
    same as it contains libraries.
  * debian/libc6.symbols.i386, debian/libc6-i386.symbols.{amd64,x32}: force
    the minimum libc6 version to >= 2.42, to ensure GLIBC_ABI_GNU_TLS is
    available, given symbols in .gnu.version_r section are currently not
    handled by dpkg-shlibdeps.

 -- Aurelien Jarno <aurel32@debian.org>  Sat, 06 Dec 2025 23:02:46 +0100

glibc (2.42-4) unstable; urgency=medium

  * Upload to unstable.

 -- Aurelien Jarno <aurel32@debian.org>  Wed, 03 Dec 2025 23:03:48 +0100
That is 500 MB+ because the ‘designer’ just stuffed in the highest-quality image they could as a background for the whole thing.
I thought about this for a long while, and realised I wasn’t sure why, just that most of my work had been gravitating towards Arch for a while.
Eventually, I decided the reason for the move comes down to three specific issues that are really all the same problem - namely, I don’t want to learn the nix config language to do the things I want to do right now.
I’ve read lots of material on flakes, and even first modified and then wrote a flake to get not-yet-packaged nvidia 5080 modules installed (for a corporate local llm POC-turned-PROD; I was very glad I could use nix for it!). I still just don’t really get how all the pieces hang together intuitively, and my barrier is interest and time.
Lanzaboote for secure boot. I’m going to encrypt disks, and I’m going to use the TPM for unlocking after a measured UKI, despite the concerns about cold-boot attacks, because they aren’t a problem in my threat model. Like the nvidia flake, I don’t really get how it hangs together intuitively.
Home management and home-manager. Nix config language is something I really want to get and understand, but I’ve been maintaining my home directory since before 2010, and I have tools and methods for dealing with lots of things already. The conversion would take more time than I’m prepared to devote.
Most of the benefits of nix are things I already have in some format, like configuration management and package tracking with git/stow, ansible for deployment, btrfs for snapshots, rollback and versioning. It’s not all integrated in one system, but it is all known to me, and that makes me resistant to change.
I know that if I had a week of personal time to dig in and learn, to shake off all the old fleas and crutch methods learned for admin on systems that aren’t declarative, I’d probably come away with a whole new appreciation for what my systems actually look like, and have them all reproducible from a readable config sheet. I’m just not able to make that time investment, especially for something that doesn’t solve more problems than I’ve already solved.
You are right to be afraid. I had a similar story, and am still recovering and sorting out what data is recoverable. I nearly lost the media from roughly ages 0.5-1.5 of my daughter’s life this way.
As others have said, don’t just replicate your existing backup. Do two backups, preferably on different media, e.g. spinning disk and SSD.
If you only replicate and one copy is corrupted or something nasty is introduced, you will lose both. This is one of the times it is appropriate to do the work twice.
I’ve built two backup mini PCs, and I replicate to them pretty continuously. Otherwise, look at something like BorgBase or its alternatives.
Remember, 3-2-1 and restore testing. It’s not a backup unless you can restore it.
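For what it’s worth, here’s a minimal sketch of the ‘two independent backups’ idea using borg; the source path, repository paths and archive naming are placeholders, not a recommendation.

    #!/usr/bin/env python3
    """Push the same data to two independent borg repositories, then run an
    integrity check on each. Paths and repo URLs below are placeholders."""
    import subprocess
    import sys

    SOURCE = "/home"  # what to back up
    REPOS = [
        "/mnt/backup-ssd/borg",                     # local repo on a different medium
        "ssh://backup@example.net/./offsite-repo",  # off-site repo (placeholder)
    ]

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    failures = 0
    for repo in REPOS:
        try:
            # Create a new archive; borg's {now} placeholder timestamps the name.
            run(["borg", "create", "--stats", f"{repo}::snapshot-{{now}}", SOURCE])
            run(["borg", "check", repo])
        except subprocess.CalledProcessError:
            failures += 1
            print(f"backup to {repo} failed", file=sys.stderr)

    sys.exit(1 if failures else 0)

The check is no substitute for restore testing; pulling an archive back out with borg extract into a scratch directory is the only way to know the backup is real.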
This is the most important thing. Over time, you develop opinions about software and methods of solving problems. I have strong opinions on how I want to manage a system, but almost no opinions on flags I want to switch when I compile software. This is why I’m on arch not gentoo. I’m sure I’ll make the leap eventually…
Before I switched back to Arch for my daily driver, I’d frankensteined my Fedora install on my laptop to replace power management, all the GUI bits, most of the networking stack and a fair chunk of the package system. Fedora, and Gnome in that case, is opinionated software. That’s a good thing as far as I’m concerned; having a unified vision helps give the system direction and a unique feel. These days, I have my own opinions that differ in some ways from the available distros.
I wanted certain bits to work a certain way, and I kept having to replace other parts to match the bits I was changing. When you ask the question ‘can I swap daemon X out for Y?’, the answer on Fedora was: sure, but you’ll have to replace a, b and c too, and figure out the rest for yourself. Good luck when updates come along.
The answer on arch is, yeah, sure, you can do that - and here’s a high level wiki naming some gotchas you’ll want to watch out for.
I’ve also reached a stage in my computer usage where I don’t want things to happen automatically for me unless I’ve agreed to them or designed them. For example, my machines don’t auto-mount USB drives, even in GUI user sessions, or auto-connect via DHCP. I understand what needs to be done, and I do it the way I want to do it, because I have opinions on networking and USB mounting.
My work laptop is a living build that I just keep adding to and changing every day. Btrfs snapshots are available for rollback…
I’ve got two backup machines - Beelink mini-me’s running reproducible builds created using archinstall. The OS sits on the internal eMMC, and each has a 6-disk ZFS raidz2 on internal NVMe drives, all locked behind LUKS encryption with the keys in the fTPM module (roughly the enrolment sketched below), without the damn Microsoft key shim. One is off-site. Trying to get secure boot working on Debian was an exercise in frustration.
I’ve modified a version of that same build for my main docker host on another mini PC.
My desktop runs nixos, but will be transferred to arch at the next rebuild.
I’ve got a steamdeck, which runs an arch based distro.
I used to run raspberry pi’s on arch because the image to flash to the SD cards used to be way smaller than what was offered by the default Pi OS image.
That’s all using arch. It’s flexible, has the tool sets I need, and almost never tells me ‘No, you can’t do that’.
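For anyone curious, the ‘keys in the fTPM’ part is roughly one systemd-cryptenroll call per LUKS device; this is a sketch with made-up device paths, not my exact setup.

    #!/usr/bin/env python3
    """Enrol a TPM2-sealed key slot on each LUKS device (sketch; device paths are made up)."""
    import subprocess

    # Placeholder NVMe partitions sitting under LUKS; the real machines have more.
    DEVICES = ["/dev/nvme0n1p2", "/dev/nvme1n1p2"]

    for dev in DEVICES:
        # Bind the new key slot to PCR 7 (Secure Boot state), so tampering with
        # the boot chain drops you back to the passphrase prompt.
        subprocess.run(
            ["sudo", "systemd-cryptenroll", "--tpm2-device=auto", "--tpm2-pcrs=7", dev],
            check=True,
        )

    # Each /etc/crypttab entry also needs tpm2-device=auto for automatic unlock at boot.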
I’d agree that a hardware solution would be best. Something designed specifically to do it. I’ve been eyeing up the biometric yubikey for a while.
I do this for ssh keys, VPN certs and pgp keys. My solution is pretty budget: I generate the keys on a LUKS-encrypted USB stick and run a script that loads them into agents and flushes them on sleep. The script unlocks and mounts the LUKS partition, adds the keys to the agents, then unmounts and locks the USB. The unlock passwords I just remember and type in, but they’re ripe for stuffing into KeePassXC - I need to look at the Secret Service API and have the script fetch the unlock passwords directly from KeePass.
I have symlinks in the default user directories to the USB’s mount points, like ~/.ssh/id_ed25519 -> /run/media/<user>/<mount>/id_ed25519. By default, when you run ssh-add with no arguments, it looks for keys in the default places.
The way it works for me is: I keep break-glass spares in a locked cabinet in my house and at the office, both with different recovery keys.
I do this because it’s my historical solution, and I haven’t evaluated the hardware options seriously yet.
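A rough sketch of that unlock-and-load flow; the partition label, mount point and key filename are stand-ins, and my real script differs in the details.

    #!/usr/bin/env python3
    """Unlock a LUKS USB stick, load a key into ssh-agent, then lock the stick again.
    The device label, mount point and key filename below are placeholders."""
    import subprocess

    DEVICE = "/dev/disk/by-partlabel/KEYS"  # hypothetical partition label
    MAPPER = "portable-keys"
    MOUNT = "/run/keys"

    def run(cmd):
        subprocess.run(cmd, check=True)

    # cryptsetup prompts for the LUKS passphrase on the terminal.
    run(["sudo", "cryptsetup", "open", DEVICE, MAPPER])
    try:
        run(["sudo", "mkdir", "-p", MOUNT])
        run(["sudo", "mount", "-o", "ro", f"/dev/mapper/{MAPPER}", MOUNT])
        # ssh-add prompts for the key's passphrase if it has one.
        run(["ssh-add", f"{MOUNT}/id_ed25519"])
        # PGP and VPN material would be imported here the same way.
    finally:
        # Unmount and relock even if a step above failed.
        subprocess.run(["sudo", "umount", MOUNT])
        subprocess.run(["sudo", "cryptsetup", "close", MAPPER])

The flush-on-sleep half could be as simple as a systemd sleep hook that calls ssh-add -D to drop the loaded identities.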


I have never understood this fork argument. All it takes to make it work is a clear division within the project.
If you want to make something, and it requires modifying the source of a GPL project you want to include, why not contribute that modification back upstream? Then keep anything that isn’t a modification of that piece of your project separate, and license it appropriately. It’s practically as simple as maintaining a submodule.
I’d like to believe this is purely a communication issue, but I suspect it’s more likely conflated with being a USP and argued as a potential liability.
These wasteful practices of ‘re-writing and not-cloning’ are facilitated by a total lack of accountability for security on closed-source commercialised projects. I know I wouldn’t be maintaining an analogue of a project if there were security updates available from upstream.


You know what I would buy? Hitman set in ancient Egypt.
Infiltrating a workgang forced to build a pyramid, putting a spitting cobra into a nasty enforcer’s chamber pot because he owes the Potiphar some serious myrrh?
Sign me up.


The thin end of the wedge for anything is always sex


The thing I don’t understand about any of this is: why can’t you comment on ongoing dialogues with the gatekeepers?
I understand the basic tenets of keeping a discussion closed until official statements can be prepared, to prevent the press and the public from going off half-cocked. That makes sense for private matters.
This is not private. I can’t understand the point of negotiating law on people’s behalf if they can’t even see the ongoing process.


They’ve snapified coreutils too, switching to the Rust rewrite (uutils). It’s proving to be a challenging transition…
Edit: While the article mentions Rust’s vaunted memory safety as a driver, I can’t help but notice that uutils is MIT-licensed, as opposed to GNU coreutils’ GPL v3.
While snapd is licensed GPL v3, it’s important to note that despite the ‘d’ suffix, it’s barely a daemon. It’s mostly a client for the snap backend - which is proprietary and only hosted by Canonical. The snapd client could be replaced at any time.


Everything’s a trade-off, as you already know. I still use Let’s Encrypt, despite knowing that attackers watch CT logs and will see it as soon as I mint a cert.
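If you’ve never looked at what those watchers see, it’s trivial to query; here’s a quick sketch against crt.sh (a third-party service whose response format may change, and example.com is a stand-in for your domain).

    #!/usr/bin/env python3
    """List recently logged certificates for a domain via crt.sh's JSON output."""
    import requests  # pip install requests

    DOMAIN = "example.com"  # stand-in for your own domain

    resp = requests.get(
        "https://crt.sh/",
        params={"q": DOMAIN, "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()

    for entry in resp.json()[:10]:
        # Each entry is one logged (pre)certificate for the domain.
        print(entry["not_before"], entry["issuer_name"], entry["name_value"])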


Also, according to the propaganda model, in developed democratic societies, the propaganda is assumed to be true, and if you’re not on board with that, you’re not part of the debate.


Not without Deno anymore, google just took a great big shit on it :(
Mine (Thunder) doesn’t recognize tagging the code block with a specific syntax; it just shows it as a preformatted block with no highlighting.