• 1 Post
  • 62 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Back in the day with dial-up internet, man pages, readmes and other included documentation were pretty much the only way to learn anything, as the web was in its very early stages. And ‘man <whatever>’ is still way faster than trying to search for the same information on the web. Today at work I needed the man page for setfacl (since I still don’t remember every command’s parameters), and I found out that the WSL2 Debian on my office workstation doesn’t have the ‘man’ command out of the box, which left me more than mildly annoyed that I had to search for it.

    Of course today it was just an alt+tab to the browser, a new tab and a few seconds for results, which most likely consumed enough bandwidth that on dial-up it would’ve taken several hours to download, but it was annoying enough that I’ll spend some time on Monday fixing this on my laptop.
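    In case it helps someone else, this is roughly what I expect the Monday fix to look like; a minimal sketch assuming the stock WSL2 Debian image simply ships without man-db and the manual page packages:

    ```
    # Install the man-page tooling and the basic page sets
    # (package names are the stock Debian ones)
    sudo apt update
    sudo apt install -y man-db manpages manpages-dev

    # After that the original goal works again, e.g.
    man setfacl
    ```

    If pages are still missing after that, the image may also exclude documentation through a dpkg filter under /etc/dpkg/dpkg.cfg.d/, which is worth checking before reinstalling anything.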


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Man pages maintenance suspended
    4 days ago

    I mean that the product being made here is not the website, and I can well understand that the developer has no interest in spending time on it, as it’s not beneficial to the actual project he’s been working on. I can also understand that he doesn’t want to receive donations from individuals, as that would bring in even more work to manage, which is time taken away from the project. A single sponsor with clearly agreed boundaries is far simpler to manage.




  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · The Insecurity of Debian
    6 days ago

    The threat model seems a bit like fearmongering. Sure, if your container gets breached and the attacker can (on some occasions) break out of it, it’s a big deal. But how likely is that, really? And even if it happened, isn’t the data in the containers far more valuable than the base infrastructure underneath in almost all cases?

    I’m not arguing against the SELinux/AppArmor comparison; SELinux can be more secure, assuming it’s configured properly, but there are quite a few steps in hardening a system before you get to that. And as others have mentioned, neither of those is really widely adopted, and I’d argue that when you design your setup properly from the ground up you don’t really need either, at least unless the breach happens through some obscure 0-day or other bug (there’s a quick sketch at the end of this comment for checking which of them, if any, is actually enforcing on a host).

    For the majority of data leaks and other breaches that’s almost never the root cause. If your CRM or e-commerce software has a bug (or a misconfiguration, or a ton of other options) which allows dumping everyone’s data out of the database, SELinux wouldn’t save you.

    Security is hard indeed, but that’s a bit of an odd corner to look at it from, and it doesn’t have anything to do with Debian or RHEL specifically.
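    For what it’s worth, here’s a minimal sketch of the check I mean; it assumes the usual SELinux and AppArmor userland tools, which may well not be installed on a given box:

    ```
    # SELinux: prints Enforcing/Permissive/Disabled when the tools are present
    getenforce 2>/dev/null || echo "no SELinux userland on this host"

    # AppArmor: summary of loaded profiles and their modes (needs apparmor-utils)
    sudo aa-status 2>/dev/null | head -n 5 || echo "no AppArmor userland on this host"
    ```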


  • If I had to guess, I’d say that e1000 cards are pretty well supported on every public distribution/kernel without any extra modules, but I don’t have one around to verify it. At least on this Ubuntu machine I can’t find any e1000-related firmware package or anything else, so I’d guess it’s supported out of the box.

    For ifconfig, if you omit ‘-a’ it doesn’t show interfaces that are down, so maybe that’s the obvious thing you’re missing? The card should show up in NetworkManager (or any other graphical tool, as well as nmcli and other CLI alternatives), but as you’re going through the manual route I assume you’re not running any of those. mii-tool should pick it up on the command line too (there’s a short sketch of the commands I’d try at the end of this comment).

    And if it’s not that simple, there seems to be at least something around the internet if you search for ‘NVM checksum is not valid’ and ‘e1000e’, specifically related to Dell, but I didn’t dig into that path too deeply.
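    The sketch I mentioned; the interface name enp0s25 is just a placeholder, yours will differ:

    ```
    # Is the NIC visible on the PCI bus, and did a driver claim it?
    lspci -k | grep -iA3 ethernet

    # List every interface, including the ones that are down
    ip link show           # roughly the same as: ifconfig -a

    # Any driver complaints, e.g. the NVM checksum error mentioned above?
    sudo dmesg | grep -iE 'e1000|eth'

    # If the interface exists but is down, try bringing it up
    sudo ip link set enp0s25 up
    ```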




  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · 33 years ago...
    15 days ago

    I read Linus’s book several years ago, and based on that flimsy knowledge in the back of my head, I don’t think Linus was really competing with anyone at the time. Hurd was around, but it was (and still is) coming soon™ to widespread use, and things with AT&T and BSD were “a bit” complex at the time.

    BSD obviously brought a ton of stuff to the table which Linux greatly benefited from, and their stance on FOSS shouldn’t go without appreciation, but assuming my history knowledge isn’t too badly flawed, BSD and Linux weren’t direct competitors. They started to gain traction around the same time (despite BSD’s much longer history) and grew stronger together instead of competing with each other.

    A ton of us owe our current corporate lives to the people who built the stepping stones before us, and Linus is no different. Obviously I personally owe Linus a ton for enabling my current position at the office, but the whole thing wouldn’t have been possible without the people who came before him. RMS and the GNU movement play a big part in that, but an equally big part is played by a ton of other people.

    I’m not an expert on the history of Linux/Unix by any stretch, but I’m glad that the people preceding my career did what they did. Covering all the bases on the topic would require a ton more than I can spit out on a platform like this; I’m just happy that we have the FOSS movement at all instead of everything being a walled garden today.


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · 33 years ago...
    16 days ago

    That kind of depends on how you define FOSS. The way we think of it today was in its very early stages back in 1991, and the original source was distributed as free, both as in speech and as in beer, but commercial use was prohibited, so strictly speaking it doesn’t qualify as FOSS (as we understand it today). About a year later Linux was released under the GPL, and the rest is history.

    Public-domain code, the academic world sharing source code and things like that predate both Linux and GNU by a few decades, and even the Free Software Foundation came 5-6 years before Linux, but Linux itself has been pretty much as free as it is today from the start. The GPL, GNU, the FSF and all the things Stallman created or was a part of (regardless of his conflicting personality) just laid down a set of rules on how to play this game, pretty much before the game or any rules for it existed.

    Minix was a commercial thing from the start, Linux wasn’t, and things just got refined along the way. You are of course correct that the first release of Linux wasn’t strictly speaking FOSS, but the whole ‘FOSS’ mentality and its rules weren’t really a thing back then either.

    There’s of course academic debate to be had for days about what came first, who followed which rules and which release counts as FOSS or not, but for all intents and purposes, Linux was free software from the start and the competition was not.


  • As a rule of thumb, if you pay more money you get a better product. With spinning drives that almost always means that more expensive drives (on average) run longer than cheaper ones. Performance is another metric, but balancing those is where the smoke and mirrors come into play. You can get a pretty darn fast drive for a premium price which will fail in 3-4 years, or for a similar price you can get a slightly slower drive which will last you a decade. And that’s on average. You might get a ‘cheap’-brand high-performance drive that runs without any issues for a long, long time, and you might also get a brand-name NAS drive which fails in 2 years. Those averages only start to play a role if you buy drives by the dozen.

    Backblaze (among others) publishes very real-world statistics on which drives to choose (again, on average), but a home gamer usually can’t run enough drives to get any benefit from the statistical point of view. Obviously something from HGST or WD will most likely outlast any no-name brand from AliExpress, and personally I’d only get something rated for 24/7 use, like WD Red, but that’s no guarantee they will actually run any longer, as there are always deviations from the gold standard.

    So, long story short, you will most likely get significantly different results depending on which brand/product line you choose, but it’s not guaranteed, so you need to work around that with backups, different RAID scenarios (likely RAID 5 or 6 for a home gamer) and an acceptable downtime window (how fast you can get a replacement, how long it’ll take to pull data back from backups and so on). I’ll soon migrate my setup from a somewhat professional setting to a more hobbyist one, and with my pretty decent internet connectivity I’ll most likely go with a 2-1-1 setup instead of the ‘industry standard’ 3-2-1 (for a serious setup you should learn what those really mean, but in short: number of copies - number of different storage media - number of offsite copies).

    As for what you really should use, that depends heavily on your usage. For a media library a bigger 5400 rpm drive might be better than a slightly smaller 7200 rpm drive, and then there are all kinds of edge cases plus potential options for SSD caching and a ton of other stuff, so, unfortunately, the actual answer has quite a few variables, starting from your wallet. And whatever you end up buying, keep an eye on its SMART data; there’s a short sketch of that below.
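    The SMART sketch I mentioned; smartctl comes from the smartmontools package, and /dev/sda is just a placeholder for whatever drive you want to look at:

    ```
    # Overall health verdict plus the attributes worth watching on spinning rust
    # (reallocated, pending and uncorrectable sector counts)
    sudo smartctl -H /dev/sda
    sudo smartctl -A /dev/sda | grep -iE 'realloc|pending|uncorrect'

    # Kick off a long self-test every now and then; results end up in the log
    sudo smartctl -t long /dev/sda
    sudo smartctl -l selftest /dev/sda
    ```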



  • In theory you just send a link to click and that’s it. But, as there always is a but, your Jitsi setup most likely doesn’t have massive load balancing, dozens of server locations and all the jazz that works around random network issues and everything else that keeps the internet running.

    There are a ton of things well outside your control and they may or may not bite you in the process. Big players have tons of workforce and money to make sure those kinds of things don’t happen, and they still do now and then. Personally, for a one-off scenario like yours, I wouldn’t bother, but I’m not stopping you either; it’s a pretty neat thing to do. My (now dead) Jitsi instance once saved a city council meeting when Teams had issues, and that earned me some pretty good bragging rights, so it can be rewarding too.


  • Jitsi works, and they have open relays to test with, but as the thing here is very much analog and I’d assume she just needs to see your position, how your hands move and so on, the platform’s audio quality isn’t the most important thing here. Sure, it helps, but personally I’d just use Zoom/Teams/Hangouts/something readily available and invest in a decent microphone (and audio chain in general) plus a camera.

    That way you don’t need to provide helpdesk support on how to use your own thing and eat into the actual lesson time, nor debug server issues while you’re scheduled to train with your teacher.


  • > Linux, so even benchmarking software is near impossible unless you’re writing software which is able to leverage the specific unique features of Linux which make it more optimized.

    True. I have no doubt that you could set up a Linux system to calculate pi to 10 million digits (or something similar) more power-efficiently than a Windows-based system, but that would involve compiling your own kernel with everything unnecessary for that particular system left out, shutting down a ton of daemons that commonly run on a typical desktop, and so on, and you’d waste far more power on the testing than you could ever save. It might not even be faster, just less power hungry, but no matter what, it would be far, far away from any real-world scenario and would instead be a competition to build hardware and software to do that very specific thing with as little power as possible.
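    If someone really wanted to measure it, this is roughly the kind of comparison I have in mind; a minimal sketch assuming an Intel CPU with RAPL energy counters exposed to perf, where pi_benchmark is just a placeholder for whatever workload you’d compare:

    ```
    # Package-level energy (in joules) consumed while the workload runs;
    # RAPL events are system-wide, hence the -a
    sudo perf stat -a -e power/energy-pkg/ -- ./pi_benchmark

    # Fewer running services means less idle noise in the numbers
    systemctl list-units --type=service --state=running
    ```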


  • Interesting thought indeed, but I highly doubt the difference is anything you could measure, and there are a ton of contributing factors, like what kind of services are running on a given host. So, in order to get a reasonable comparison, you’d have to run several different pieces of software with pretty much identical usage patterns on both operating systems to get any kind of comparable results.

    Also, hardware support plays a big part. A laptop with dual GPUs and “perfect” driver support on Windows would absolutely wipe the floor with a Linux install that couldn’t switch GPUs on the fly (I don’t know how well that scenario is supported on Linux today). Same with multi-core CPUs and their efficient usage, though I think the operating system plays a much smaller role there.

    However, changes in hardware, like ARM CPUs, would make a huge difference globally; at least traditionally that’s where Linux shines on compatibility, and it’s part of why Macs run on battery for longer. But in reality, if we could squeeze our CPU cycles globally into doing stuff more efficiently, we’d just throw more stuff at them and still consume more power.

    Back when cellphones (and other rechargeable things) became mainstream, their chargers were so inefficient that unplugging them actually made sense, but today our USB bricks consume next to nothing when idle, so it doesn’t really matter.



  • At work, where cable runs are usually done by maintenance people, the most common problem is poor termination. They often just crimp on a connector instead of using patch panels/sockets and untwist too much of the cable before the connector, which causes all kinds of problems. With proper termination the problems usually go away.

    But it can be a ton of other stuff too. A good cable tester is pretty much essential for figuring out what’s going on. I’m using the 1st-gen Pocketethernet and it’s been pretty handy, but there are a ton of those available; just get something a bit better than a simple blinking-LED indicator that can only tell you whether the cable is completely broken.
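    Before breaking out a hardware tester, the NIC’s own counters can already hint at a marginal run; a minimal sketch, assuming an interface called eth0, ethtool installed, and counter names that vary by driver:

    ```
    # Negotiated speed/duplex: a gigabit run limping along at 100 Mb/s
    # half duplex is a classic symptom of bad termination
    ethtool eth0 | grep -E 'Speed|Duplex'

    # Per-NIC error counters; CRC/alignment errors climbing under load
    # usually point at the cabling rather than the software
    sudo ethtool -S eth0 | grep -iE 'err|crc|drop'
    ```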



  • It depends heavily on what you do and what you’re comparing yourself against. I’ve been making a living in IT for nearly 20 years and I still don’t consider myself an expert on anything, but it’s a really wide field, and what I’ve learned is that the things I consider ‘easy’ or ‘simple’ (mostly with Linux servers) are surprisingly difficult for people who would (for example) wipe the floor with me if we competed on planning and setting up server infrastructure or building enterprise networks.

    And of course I’ve also met the other end of the spectrum: people who claim to be ‘experts’ or ‘senior techs’ at something but are so incompetent at their tasks, or whose field of knowledge is so ridiculously narrow, that I wouldn’t trust them with anything above first-tier helpdesk, if even that. And the sad part is that those ‘experts’ often make way more money than me because they happened to score a job at some big IT company and their hours are billed accordingly.

    And then there’s a whole other can of worms on forums like this, where ‘technical people’ range from someone who can install an operating system by following instructions to the folks who write assembly code for some obscure old hardware just for the fun of it.