• 1 Post
  • 54 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • You’re missing the big impact here, which is that bots can shift public opinion en masse, and that affects you directly.

    Gone are the days when individuals formed their own opinions; today, opinions are simply absorbed by osmosis through social media.

    And if social media is essentially just a message bought by whoever can pay for the biggest bot farm, then anyone who thinks for themselves and wants to push back immediately becomes the enemy of everyone else.

    This is not a future that you want.


  • Mhm, I love dismissive “Look, it already works, and there’s nothing to improve” comments.

    Lemmy lacks the capabilities needed to effectively handle even the bots of 10+ years ago, never mind the bots of today.

    The controls that are implemented are based on “classic” bot concerns from nearly a decade ago. And even then, they’re shallow and only “kind of” effective. They wouldn’t have been considered effective for a social media platform in 2014, and they are nowhere near capable today.




  • douglasg14b@lemmy.world to Selfhosted@lemmy.world · Mozilla grants Ente $100k
    1 month ago

    The issue here is that these are solvable problems; release compatibility isn’t a new problem. It’s just one that takes dedicated effort to solve, like any other feature.

    This is something FOSS apps tend to lack, simply due to the nature of how contributions tend to work for free software. Which is an unfortunate reality, but a reality nonetheless.


  • douglasg14b@lemmy.world to Selfhosted@lemmy.world · Mozilla grants Ente $100k
    1 month ago

    People really underestimate the value of stability and predictability.

    There are some amazing FOSS projects out there run by folks who don’t give a crap about stability or the art of user experience. It holds them back, and unfortunately helps drive a fragmented ecosystem where we get two, three, five major projects all trying to do the same thing.


  • Because the majority of my traffic and services are internal with internal DNS? And I want valid HTTPS certs for them, without exposing my IP in the DNS for those A records.

    If I didn’t care about leaking my IP in my A records, then this would be pretty easy. However, I don’t want to do that for various reasons, one of them being that I engage in security-related activities and have no desire to put myself at risk by leaking it.

    Even for services that I expose to the internet, I still don’t want my local network traffic to go out to the internet and back when there is no need for that. SSL termination at my own internal proxy solves that problem.

    I now have this working by using the Cloudflare DNS ACME challenge. For the services I expose to the internet, Cloudflare provides HTTPS termination and then communicates with my proxy, which also provides HTTPS termination. My internal communication with those services is terminated at my proxy.
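    For reference, a minimal sketch of what that DNS challenge setup can look like in a Caddyfile, assuming Caddy was built with the caddy-dns/cloudflare plugin (e.g. `xcaddy build --with github.com/caddy-dns/cloudflare`); the domain, backend address, and token variable are placeholders:

    ```
    # Wildcard cert via the Cloudflare DNS-01 challenge: Caddy publishes a
    # TXT record through the Cloudflare API, so no inbound HTTP reachability
    # is needed and the A records never have to point at a real public IP.
    *.home.example.com {
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }
        reverse_proxy 192.168.1.50:8080
    }
    ```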



  • I stated in the OP that Cloudflare HTTPS is off :/

    I’m not using Cloudflare for the certificate. I also can’t use Cloudflare for the certificate anyway for internal services accessed through a loopback.

    Similarly, you can have SSL termination at multiple layers. That works; I have services that proxy through multiple SSL terminations. The issue I’m having is that the ACME challenge is failing. These failures are documented and explained in various GitHub threads; however, the solutions seem to differ and get convoluted across environments.

    This is why I’m asking this question here after having done a reasonable amount of research and trial and error.


  • I am doing SSL termination at the handoff, which is the Caddy proxy. My internal servers have their SSL terminated at Caddy, and my traffic does not go out to the internet… it loops back from my router to my internal network.
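    One common way to get that internal loopback behavior is split-horizon DNS: the public Cloudflare zone keeps the proxied records, while an internal resolver answers the same names with the LAN address of the proxy. A hypothetical dnsmasq one-liner, with placeholder names and addresses:

    ```
    # LAN clients resolve every subdomain of example.com to the internal
    # Caddy proxy instead of the Cloudflare-proxied public address.
    address=/example.com/192.168.1.50
    ```

    External clients never see this resolver, so the public records stay behind the Cloudflare proxy.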

    However, DNS still needs to have the subdomains in order to get those certificates; hence Cloudflare DNS. I do not want my IP to be associated with those subdomains, which would expose it; hence the Cloudflare proxy.

    > You’re seeing the errors because the proxy backend is being told to speak HTTPS with Caddy, and it doesn’t work like that.

    You can have SSL termination at multiple points. Cloudflare can do SSL termination, and Cloudflare can also connect to your proxy, which has its own SSL termination. This is allowed, it works, and I have services that do this already. You can have SSL termination at every hop if you want, with different certificates.

    That said, I have Cloudflare SSL off, as stated in the OP. Cloudflare is not providing a cert, nor is it trying to communicate with my proxy via HTTPS.

    Contrary to your statement that it doesn’t work that way, Cloudflare has no issue proxying to my proxy, where I already have valid certs, or even self-signed ones, or no certs at all. The only thing that doesn’t work is the ACME challenge…


    Edit: I have now solved this by using the Cloudflare DNS ACME challenge, with Cloudflare SSL turned back on. Everything works as expected now: external clients terminate SSL at Cloudflare, Cloudflare communicates with my proxy over HTTPS, and internal clients terminate SSL at Caddy.
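    As a rough sketch, the setup described in that edit could look like this on the Caddy side (placeholder names; again assumes the caddy-dns/cloudflare plugin is compiled in), with Cloudflare’s “Full (strict)” SSL mode re-encrypting external traffic to the same listener:

    ```
    {
        # Global option: use the Cloudflare DNS-01 challenge for every site.
        acme_dns cloudflare {env.CF_API_TOKEN}
    }

    svc.example.com {
        reverse_proxy 127.0.0.1:8080
    }
    ```

    Internal clients resolve svc.example.com straight to the proxy and terminate TLS at Caddy; external clients terminate at Cloudflare, which then connects onward to the same vhost over HTTPS.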


  • I’m not sure why you’re so dismissive of this? It’s kind of asinine.

    Does everyone everywhere only ever use computers in an enclosed room? Is everyone with something of value to exfiltrate easily accessible enough to kidnap and beat with a wrench?

    This is valuable for corporate espionage, political purposes, or nation states. If miniaturized, it would be even easier to use for targeted attacks where it might be difficult to inject malware, or for broad attacks on office workers.

    And the best part is that it doesn’t leave a trace, which beating someone with a wrench or deploying malware would.


  • They could, but as it currently stands media hosting on the fediverse… Sucks.

    It’s obscenely expensive for everyone involved, and scales poorly. It’s just not ready to operate at scale at this point.

    I’m sure it will get better, but large storage costs are better off being handled by a distributed file-system where a minimal level of duplication is baked in, but the storage load is reasonably spread out instead of fully duplicated on each peer.

    There are technologies for this, but they all have their own issues. And tomorrow there will be n+1 distributed filesystems, fragmenting it further.