For many systems out there, /bin and /lib are no longer a thing. Instead, they are just symlinks to /usr/bin and /usr/lib. And on some systems even /sbin has been merged into /bin (which in turn links to /usr/bin).
Not just Linux… 99% of the time you see something weird in the computing world, the reason is going to be “because history.”
The C developers are the ones with the ageist mindset.
The Rust developers certainly are not the ones arguing “C has always worked, so why should we use another language?” – a point that ignores the objective advantages of Rust and leans solely on C being the older language.
They very rarely have memory and threading issues
It’s always the “rarely” that gets you. A program that doesn’t crash is awesome, a program that crashes consistently is easy to debug (and most likely would be caught during development anyway), but a program that crashes only once a week? Wooo boy.
People vastly underestimate the value Rust brings by ensuring the same class of bugs will never happen.
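A minimal Rust sketch of the point: sharing unsynchronized mutable state across threads is a compile error, so the version the compiler accepts has to spell out the locking.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Handing a plain `&mut i64` to several threads is rejected at compile
    // time, so the classic once-a-week data race simply cannot be written.
    // The accepted version makes the synchronization explicit:
    let counter = Arc::new(Mutex::new(0i64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```

Drop the Mutex and try to mutate the counter directly, and rustc rejects the program before it ever gets the chance to crash once a week.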
It really depends.
If I know I will never open the file in the terminal or batch-process it in some way, I will name it using title case: “Cool Filename.odt”.
Anything besides that, snake case, preferably prefixed with the current date: “20240901_cool_filename”
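For what it’s worth, a tiny Rust sketch of that convention (the `batch_friendly` helper is hypothetical, and it assumes the `chrono` crate for the date prefix):

```rust
use chrono::Local; // external crate, assumed here for date formatting

/// Hypothetical helper: "Cool Filename" -> "20240901_cool_filename".
fn batch_friendly(name: &str) -> String {
    let snake = name
        .trim()
        .to_lowercase()
        .split_whitespace()
        .collect::<Vec<_>>()
        .join("_");
    format!("{}_{}", Local::now().format("%Y%m%d"), snake)
}

fn main() {
    println!("{}", batch_friendly("Cool Filename"));
}
```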
People back then just grossly underestimated how big computing was going to be.
The human brain is not built to predict exponential growth!
One of the issues at hand is that X11, the predecessor of Wayland, has no standardized way to tell applications what scale they should use. Applications on X11 get the scale from environment variables (completely bypassing X11), or from Xft.dpi, or from in-application settings, or they guess it by some unorthodox means, or they simply don’t scale at all. It’s a huge mess overall (a sketch of that fragmentation follows below).
It is one of the more-or-less fundamentally unfixable parts of the protocol, since it wants everything to be in the same coordinate space (i.e. 1 pixel is 1 pixel everywhere, which is… quite unsuitable for modern mixed-DPI systems).
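A minimal sketch of that fragmentation – the `guess_scale` helper is hypothetical, and the variable names and the `xrdb` fallback are common toolkit conventions, not anything X11 itself defines:

```rust
use std::env;
use std::process::Command;

// Hypothetical helper showing the fragmented ways an X11 app
// discovers its scale factor; X11 itself provides none of this.
fn guess_scale() -> f64 {
    // 1. Toolkit-specific environment variables (bypass X11 entirely).
    for var in ["GDK_SCALE", "QT_SCALE_FACTOR"] {
        if let Ok(value) = env::var(var) {
            if let Ok(scale) = value.parse::<f64>() {
                return scale;
            }
        }
    }

    // 2. Xft.dpi from the X resource database (96 dpi == scale 1.0).
    if let Ok(output) = Command::new("xrdb").arg("-query").output() {
        let stdout = String::from_utf8_lossy(&output.stdout);
        for line in stdout.lines() {
            if let Some(dpi) = line.strip_prefix("Xft.dpi:") {
                if let Ok(dpi) = dpi.trim().parse::<f64>() {
                    return dpi / 96.0;
                }
            }
        }
    }

    // 3. Give up and don't scale at all.
    1.0
}

fn main() {
    println!("guessed scale: {}", guess_scale());
}
```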
Wayland does operate the way you describe, and applications supporting it will work properly in HiDPI environments.
However, a lot of people and applications are still on X11 for various reasons.
LoDPI applications either end up tiny or get upscaled (= blurry), don’t they?
Yeah, I get the display server part. What I meant was that on a 4K panel, 200% scaling gets you a 1920x1080 logical resolution – LoDPI applications stay blurry just as if you had set the actual resolution to 1080p, but HiDPI applications will enjoy the enhanced visual acuity.
Even on smaller screens like the 14" ones, the quality of very high resolution (e.g. 4K) is still quite visible IMO, especially when it comes to text rendering. But it could very well just be my eyes.
It’s not even Linux’s fault. Plenty of apps support HiDPI on Linux.
It’s the developers who still think LoDPI-only is acceptable in 2024.
Isn’t scaling to 200% the same as lowering the resolution to half? And you lose the high DPI for apps that support it too.
Agreed. HiDPI is the way to go, and we should appreciate Framework for putting it in their laptops instead of continuing to use shitty 1366x768 screens.
Xorg is the reason OP is facing the scaling issues. OP, try to force the apps to run on native Wayland if they support it but don’t default to it. The Wayland page on the Arch wiki has instructions on that. It immensely improved my HiDPI experience.
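For reference, a minimal sketch of launching an app with the usual “prefer native Wayland” switches set (the app name is a placeholder; which variable actually matters depends on the toolkit):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Common per-toolkit switches to prefer the native Wayland backend.
    let status = Command::new("some-app") // placeholder
        .env("QT_QPA_PLATFORM", "wayland")           // Qt apps
        .env("GDK_BACKEND", "wayland")               // GTK apps
        .env("MOZ_ENABLE_WAYLAND", "1")              // Firefox
        .env("ELECTRON_OZONE_PLATFORM_HINT", "auto") // Electron apps
        .status()?;
    println!("exited with: {status}");
    Ok(())
}
```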
Agencies that are still living in the 90s…
Assuming the entire US court system isn’t in the corporate pocket
I love your optimism
People who do work for themselves
Did you notice that I said “merge request” earlier? Your neighbours were kindly helping you to make a cake and you responded to their kindness with GTFO.
Did I say “some”? I think I did.
GNOME developers seem to have some sort of weird “vision” for their software. If your bug report falls within that vision, good for you. When it doesn’t, it’s an insta-WONTFIX.
The FDO icon theme fiasco occurred merely a few days ago.
Entitled brat? What… Have you ever seen how GNOME developers respond to some bug reports and merge requests?
Since when has reporting bugs and contributing to the project become an entitlement?
Neither was hit by the backdoor. Arch doesn’t apply the systemd-notification patch to OpenSSH, so sshd was never linked against the compromised liblzma.
It’s not a fork of wlroots. wlroots is a library to assist developers in creating Wayland compositors.
Last time I asked around about this, the answer was surprisingly “probably not much”! When a low-power x86 chip (like those mobile chips) is idling (which is pretty much all the time if all you’re doing is hosting a server on it), it consumes very little power, about the same as an idling Pi. It’s when the frequency ramps up that performance-per-watt gets noticeably worse on x86.
Edit: My personal test (see the sketch below) showed that my x86 laptop fared slightly worse than my Pi 3 in idle power (~2 watts higher, it seems), but that laptop is oooooooold.
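For anyone who wants to reproduce the x86 side of that comparison, a rough Rust sketch that samples package power through the RAPL powercap interface (x86 only and usually root-only, so the Pi still needs a wall meter; the sysfs path is an assumption that may differ per machine):

```rust
use std::fs;
use std::thread;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // Package-level energy counter in microjoules; usually needs root to read.
    let path = "/sys/class/powercap/intel-rapl:0/energy_uj";
    let secs = 10;

    let start: u64 = fs::read_to_string(path)?.trim().parse().expect("not a number");
    thread::sleep(Duration::from_secs(secs));
    let end: u64 = fs::read_to_string(path)?.trim().parse().expect("not a number");

    // Ignores counter wraparound, which is fine for a short idle sample.
    let watts = (end - start) as f64 / 1e6 / secs as f64;
    println!("average package power: {watts:.2} W");
    Ok(())
}
```

Note this measures the CPU package only, not the whole system, so wall-meter numbers will read higher.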