• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • While security has nothing to do with my disgust for Docker and the people advocating its use, Docker adds a layer of complexity, which means it is not necessarily more secure.

    What is extremely bad about Docker:

    1. It enables extremely shitty configuration control on the developer’s side. There are way too many developers with a chaotic approach to configuration: instead of being forced to write a proper installation and configuration guide from scratch, and thereby making themselves(!) aware of the configuration changes they actively made to get their system working, they just roll out the Docker container they develop in, without remembering most of the configuration they changed. Which, naturally, means they are unable to assist in troubleshooting or to reproduce issues that users might have.

    In general, if you can’t write a good user manual, or at least clearly identify needed dependencies and configurations, you should not be developing software for other people.

    2. It combines the disadvantages of a VM (shitty performance) and of running directly on the host OS (sandboxing is not nearly as good as in a VM).

    3. It creates insane bloat by completely bypassing the concept of shared libraries and making people download copies of software they already have on their system.

    4. It adds a lot of security risks, because the user would not only have to review the source code they are compiling and installing, but also scan all the dependencies and what-not, and would basically have to trust the developer and/or anyone distributing an image that they did not add any malware.


  • For the keys - do you mean something like

    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 00000000

    where 00000000 is replaced with the fingerprint of the key you want to fetch?

    I do agree - the apt-key command is kinda dangerous because it imports keys that become generally trusted, IIRC. So a similar command that fetches a key by fingerprint and makes it available as a signing key only for the repositories we configure for a single application (suite) would be nice.

    I always disliked that signing keys are available for download from the same websites that host the repository. What’s the point in that? If someone can inject malicious code into the repository, they sure as hell can generate a matching signing key & sign the code with that.

    Hence I always verify signing keys / fingerprints against somewhat trustworthy third parties.

    What we really need, though, is a crowdsourced, reputation-based code review system, where open source code is stored in a git-like versioning history and each function has clear documentation of what it should and should not do. A reviewer could then pick something as small as an individual function and review the code to confirm (or refute) that the function

    1. does exactly what the interface documentation claims it does
    2. does nothing else
    3. performs input validation (range checks etc)
    4. is well-written (in terms of performance)

    Then your reputation score would increase when other users concur with your assessment (and decrease when they disagree), and your reputation would be used as a weighting factor contributing to the “review thoroughness” of a code module you reviewed. E.g.: a user with a reputation of 0.5 confirms that a module does exactly what it claims to do. The module’s review count goes +1, its total score goes +0.5, its total weight becomes ( combined previous weights + 0.5 ), and the average review score is “total score” / “total weight”.

    Something like that. And if you have a reputation of “0.9”, the review count goes +1, total score +0.9, total weight +0.9 (so the average score stays between 0 and 1).

    Independent of the user reputation, the user’s review conclusion is stored as “1” (= performs as claimed) or “0” (= does not perform as claimed) for this module.
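
    Roughly, in code (a minimal Python sketch of the module-side bookkeeping described above; all names are made up for illustration):

    # Hypothetical sketch of the module-side review bookkeeping described above.
    class Module:
        def __init__(self):
            self.review_count = 0    # how many reviews this module has received
            self.total_score = 0.0   # sum of (assessment * reviewer reputation)
            self.total_weight = 0.0  # sum of reviewer reputations

        def add_review(self, assessment: int, reputation: float) -> None:
            # assessment: 1 = performs as claimed, 0 = does not
            self.review_count += 1
            self.total_score += assessment * reputation
            self.total_weight += reputation

        def average_score(self) -> float:
            # Reputation-weighted average; stays between 0 and 1.
            return self.total_score / self.total_weight if self.total_weight else 0.0

    # Example from above: a reviewer with reputation 0.5 confirms the module.
    m = Module()
    m.add_review(1, 0.5)
    print(m.review_count, m.total_score, m.total_weight, m.average_score())
    # -> 1 0.5 0.5 1.0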

    The reputation of a reviewer could then be calculated as the average of all their individual review scores (at the time the reputation is needed), where the score for each review is 1 minus the absolute difference between the module’s average review score and the reviewer’s own review conclusion.

    E.g.:
    User A concludes the module does what it claims to do: User A’s assessment is 1 (score for the module).
    User B concludes the module does NOT do what it claims to do: User B’s assessment is 0 (score).

    Module score is 0.8 (most reviewers agreed that it does what it claims to do)

    User A’s reputation gained from their review of this module is 1 - abs( 1 - 0.8 ) = 0.8.
    User B’s reputation gained from their review of this module is 1 - abs( 0 - 0.8 ) = 0.2.

    If both users have previously gained a reputation of 1.0 from 10 reviews (where everyone agreed on the same assessment, thus full scores):

    User A’s new reputation: ( 1 * 10 + 0.8 ) / 11 = 0.982
    User B’s new reputation: ( 1 * 10 + 0.2 ) / 11 = 0.927
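
    In code, the reviewer-side update could look like this (again just an illustrative Python sketch, reproducing the numbers above):

    # Hypothetical sketch of the reviewer reputation update described above.
    class Reviewer:
        def __init__(self):
            self.review_scores = []  # one score per past review

        def reputation(self) -> float:
            # Average of all individual review scores so far.
            if not self.review_scores:
                return 0.0
            return sum(self.review_scores) / len(self.review_scores)

        def credit_review(self, own_assessment: int, module_average: float) -> None:
            # Score for this review: 1 minus the distance to the module's consensus.
            self.review_scores.append(1.0 - abs(own_assessment - module_average))

    # Both users start with reputation 1.0 from 10 unanimous reviews.
    user_a = Reviewer(); user_a.review_scores = [1.0] * 10
    user_b = Reviewer(); user_b.review_scores = [1.0] * 10

    module_average = 0.8                     # most reviewers agreed it performs as claimed
    user_a.credit_review(1, module_average)  # agreed with the consensus
    user_b.credit_review(0, module_average)  # disagreed with the consensus

    print(round(user_a.reputation(), 3))     # 0.982
    print(round(user_b.reputation(), 3))     # 0.927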

    The basic idea is that all modules in the decentralized review database would have a review count that everyone could filter by, to find the least-reviewed modules (presumably the weakest links) and focus their attention on those.

    If technically feasible, a decentralized database should prevent any given entity (secret services, botfarms) from falsifying the overall review picture too much. I am not sure this can be accomplished - especially with the sophistication of the climate-destroying large language model technology. :/


  • Makefiles/automake isn’t a reasonable expectation these days, with a plethora of languages and build toolchains, but good, clear instructions are definitely something to include.

    As for the Makefiles, I meant that for whatever build toolchain the project uses - the rules to build a project are an essential part of it, linking the source code into a working library or executable. Whether it is CMake, GNU Make, or whatever else there is - that’s not so important, as long as those build toolchains are available across platforms.

    I think what is really missing in the open source world is a distribution-agnostic standard for describing application dependencies, so that package maintainers can auto-generate distro packages with the distribution-specific dependencies based on that “dependencies” file.

    Similar to Debian dependencies like

    Depends: libstdc++6 (>= 10.2.1)

    but in a way that identifies code modules, not packages, so that distributions that group software into packages differently will still be able to resolve findPackageFor( dependency ).

    I would really like to add this kind of info to my projects and have a tool that can auto-build a repo package from it.
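
    Purely as an illustration of what I have in mind (a hypothetical Python sketch - the file format, module names and package mappings are all made up):

    # Hypothetical sketch only - no such distro-agnostic standard exists today.
    # Dependencies are declared against code modules, not distro packages.
    dependencies = [
        {"module": "libstdc++", "min_version": "10.2.1"},
        {"module": "openssl",   "min_version": "3.0"},
    ]

    # Each distribution would maintain its own mapping from module names to the
    # package(s) that ship them (package names here are just examples).
    DISTRO_MODULE_MAP = {
        "debian": {"libstdc++": "libstdc++6", "openssl": "libssl3"},
        "fedora": {"libstdc++": "libstdc++",  "openssl": "openssl-libs"},
    }

    def find_package_for(distro: str, dependency: dict) -> str:
        # Resolve a module-level dependency to a distro-specific package requirement.
        package = DISTRO_MODULE_MAP[distro][dependency["module"]]
        return f'{package} (>= {dependency["min_version"]})'

    # A packaging tool could then auto-generate e.g. a Debian-style Depends field:
    print("Depends: " + ", ".join(find_package_for("debian", d) for d in dependencies))
    # -> Depends: libstdc++6 (>= 10.2.1), libssl3 (>= 3.0)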