That sounds a bit suspicious.
Wait until you need to validate the installed state of files on the machine.
I don’t think I’ve ever had to do that in all my years of using Debian. What does it even mean?
Oh and another point: on Debian every package you get is from Debian. On Arch, the stuff in the AUR is not Arch and is not supported by Arch; it's unstable, experimental stuff and you take your chances with it.
In practice, generally, the AUR stuff tends to mostly work fine but it's never guaranteed. It can and does break spontaneously from time to time.
This applies to ALL Arch-based distros. So if you plan on counting on AUR to supplement your app needs, please reconsider.
Debian stable has ~100k stable packages included. Arch has ~15k bleeding edge packages included and ~80k “varies wildly” in the AUR. It will not be the same experience.
Debian with Steam and other popular desktop apps (like LibreOffice and Firefox) installed from Flatpak will be a much more reliable experience.
Yeah, they like forgot to upload a new cert 3 times.
It happens to everybody, including Microsoft, Google, Amazon etc.
That thing hasn’t been “valid” in half a decade.
There are three distros derived from Arch that try to do very different things:
But seriously, I have mixed feelings recommending Manjaro to a beginner. The distro itself is super-stable and easy to use because you basically have to do nothing. I have non-computer savvy family members on Manjaro without admin privileges and it works perfectly.
But the trick is that "doing nothing" part. You have to leave it alone and not modify the way it works, and beginners often feel the need to tinker with the system… Not only that, but as a beginner it's hard to figure out online what's generic Arch advice and what's Manjaro-specific, which of it can be applied safely on Manjaro and which is an Arch-ism that will ruin your install.
If you’re set on trying Manjaro I can offer a list of recommendations to give you an idea of how to navigate the dos and don’ts.
The Deck is configured by Valve in a way uniquely suited to it, and they also make sure it works properly. It’s not going to be the same on vanilla Arch installed by you on your own PC.
Common wisdom for a beginner is to use something like Debian or Debian-based like Mint or Ubuntu because they’re popular and stable so you can get a safe start. I wouldn’t recommend Arch or Arch-based to a complete beginner.
Depends on the distro, some have started to offer btrfs by default.
Things you can’t do with the website:
They had some sort of point-based scheme, nuf said. It was so monumentally stupid I can’t even describe it without getting upset.
You forgot to mention the absolutely idiotic way kmanga works.
This is not a new problem, .internal is just a new gimmick but people have been using .lan and whatnot for ages.
Certificates are a web-specific problem but there’s more to intranets than HTTPS. All devices on my network get a .lan name but not all of them run a web app.
Everybody should be using DNS over HTTPS (DoH) or over TLS (DoT) nowadays. Clear DNS is way too easy to subvert and even when it’s not being tampered with most ISP snoop on it to compile statistics about what their customers visit.
DoH and DoT aren’t a foolproof solution though. HTTPS connections still leak domain names when the target server doesn’t use Encrypted Client Hello (ECH), and you need to be using DoH for ECH to work.
Even if all that is in place, a determined ISP, workplace or state actor can identify DoH/DoT servers and compile block lists, perform deep packet inspection to detect such connections regardless of server, or set up their own honey trap servers.
There’s also the negative side of DoH/DoT, when appliances and IoT devices on your network use it to bypass your control over your LAN.
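As an aside, the JSON flavor of DoH is easy to poke at by hand. A minimal sketch in Python, assuming Cloudflare's JSON endpoint (the URL and field names below are that service's convention, not part of RFC 8484 proper):

```python
import json
import urllib.parse

# Build the GET URL for a JSON-format DoH query (Cloudflare/Google style).
def doh_query_url(name, rtype="A", endpoint="https://cloudflare-dns.com/dns-query"):
    qs = urllib.parse.urlencode({"name": name, "type": rtype})
    return endpoint + "?" + qs

# Pull the answer records out of a JSON DoH response body.
def parse_doh_answers(body):
    data = json.loads(body)
    return [rr["data"] for rr in data.get("Answer", [])]
```

Fetching the URL is just an ordinary HTTPS request with an `accept: application/dns-json` header, which is the whole point: to a middlebox it looks like any other web traffic, unlike clear-text port 53.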
If you mean properly signed certificates (as opposed to self-signed) you’ll need a domain name, and you’ll need your LAN DNS server to resolve a made-up subdomain like lan.domain.com. With that you can get a wildcard Let’s Encrypt certificate for *.lan.domain.com, and all your https://whatever.lan.domain.com URLs will work normally in any browser (for as long as you’re on the LAN).
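For the LAN DNS side, a one-line sketch if your resolver happens to be dnsmasq (the IP is a placeholder for whatever box runs your reverse proxy):

```
# dnsmasq.conf fragment: answer *.lan.domain.com with the LAN server's address
# (192.168.1.10 is a made-up example IP)
address=/lan.domain.com/192.168.1.10
```

The wildcard cert itself has to be obtained via a DNS-01 challenge (Let’s Encrypt only issues wildcards that way, and it can’t reach your LAN hosts over HTTP anyway), so you prove ownership with a TXT record on the public domain.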
Ubuntu and Kubuntu are nice distros, the problem with Ubuntu is that Canonical makes snaps mandatory. But on Kubuntu you can make them optional.
Kubuntu comes with snap support but you can uninstall it and the default snaps, pin the snapd package so apt never reinstalls it, and that’s pretty much it.
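A sketch of the pin, assuming apt pinning via a preferences file (this mirrors the "nosnap" approach Linux Mint ships; the filename is arbitrary):

```
# /etc/apt/preferences.d/nosnap.pref -- any name ending in .pref works
# A negative priority tells apt to never install snapd, even as a dependency
Package: snapd
Pin: release a=*
Pin-Priority: -10
```

After dropping that file in and running `sudo apt purge snapd`, packages that would pull snapd back in simply become uninstallable instead of silently reintroducing snaps.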
I did Linux From Scratch once. I got it to the point it was booting a kernel that supported everything I needed, had a working init (sysv), a helper script that “installed” packages (symlinked stuff to integrate them into the system) and kept “recipes” for whatever I compiled.
If I had kept going and compiled everything I needed and kept maintaining that I’m guessing it would have been pretty close to the Slackware experience, right?
It was very cool to know I can do all that and I learned a lot but if I had kept going I feel like it would have become limiting rather than empowering.
Like, it’s cool to go camping and catch your food, and cook it, and sleep outdoors and to know you can survive in the wild, but I wouldn’t want to have to do that every day.
The problem with making the core immutable is that you have to decide where you draw the line between immutable and regular packages.
It sounds nice to be able to always have an immutable blob with some built-in functionality that you can fall back to, but the question is how far do you want to take that blob?
Things that go into the immutable blob don’t offer much (if any) choice to the user. I can see it being used for something like the kernel and basic drivers, coreutils, basic networking. It starts getting blurry when you get to things like systemd and over-reaching when it gets to desktop functionality.
Also, you say it’s more reliable but you can get bugs in anything. Version x.y.z of the kernel can have bugs whether it’s distributed as part of an immutable core or as a package.
I definitely think distributing software as immutable bulk layers can be useful for certain device classes such as embedded, mobile, gaming etc. The Steam Deck for example and other devices where the vendor can predefine the partition table and just image it with a single binary blob.
On the desktop however I struggle to see what problems immutable solves that are not already solved some other way. Desktop machines require some degree of flexibility.
It’s using work done for the Arkdep toolkit by Arkane Linux. The immutable image is based on btrfs.
I don’t know, rats are pretty smart.