• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 24th, 2023

  • I haven’t used Tailscale enough to know how well it works, but as a current ZeroTier user I’ve been considering moving away from it.

    I actually love the idea and it’s super simple to set up, but it has some very annoying pitfalls for me:

    1. It’s a lot of “magic”. When it fails to work, the ZeroTier software gives you very little information on why.
    2. The NAT tunneling can be iffy. I’ve had it fail to work on some public Wi-Fi networks, and occasionally on mobile internet (same phone and network where it otherwise works). Restarting the app, reconnecting and so on can often help, but it’s not super reliable IMO.
    3. Just recently I had to uninstall the app, restart my Mac, and reinstall the app to get it to work again - there were no changes that made it stop, it just decided it had had enough from one day to the next, and as in point 1, it doesn’t tell you much beyond whether it’s connected or not.

    Pretty much all of the issues I’ve had were with devices that have to disconnect from and reconnect to the network and/or move between different networks (like a laptop or phone). On my router, it’s been super stable. Point is, your mileage may vary - it’s worth trying, but there are definitely issues.


  • Would you accept a certificate issued by AWS (Amazon)? Or GCP (Google)? Or Azure (Microsoft)? Do you visit websites behind Cloudflare with CF-issued certs? Because all 4 of those certificates are free. There is no identity validation for signing up for any of them beyond having access to some payment method (and I don’t even think all of them require that). And you could argue those 4 companies account for about 80-90% of the traffic on the internet these days.

    Paid vs free is not a reliable proxy for trust. If anything, a non-automated process where a random engineer just gets the new cert and then hopefully remembers to delete it has a number of risk factors that don’t exist with LE (or other ACME-supporting providers).
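
    For comparison, an ACME setup is automated end to end; here’s a minimal sketch using certbot (the domain and webroot path are placeholders, not something from this thread):

    # Hypothetical example: issue a cert via the ACME HTTP-01 challenge
    sudo certbot certonly --webroot -w /var/www/example -d example.com
    # Renewals run unattended (certbot packages typically set up a cron job or systemd timer for this)
    sudo certbot renew --quiet

    No human ever handles the key or the issued cert by hand, which removes exactly the risk the manual process carries.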




  • I have no experience with this, but I happened to see an interview with Ludwig Minelli, the founder of Dignitas (an organisation for assisted death). The man is 90+ and still fighting for this right. I believe I saw it in video format, but I think this was the interview - it’s worth a read.

    I’d suggest you look up the contact details for the various organisations and reach out with your situation and questions to see what they say. They’re likely to be much better sources of information.






  • Honestly, even if you don’t terminate SSL until your very own app server, it’s still based on the assumption that whoever holds the root cert for your certificate is trustworthy.

    The thing that has actually scared me with CF is the way their page rules work. I’m not even sure what verification step is involved in getting to this, but if a different CF account has a page rule configured for your domain and your domain points at Cloudflare (i.e. the orange cloud), you essentially can’t control your domain as long as it’s pointing at CF. An alternative explanation, since that sentence is a bit confusing: your domain’s DNS points at your own CF account, you have proxying enabled for the domain in your account, and some other CF account has a page rule for your domain - the rule in that other account is now in control.

    This has happened to us at work, and I had to escalate with their support to get them to remove the rule from the other Cloudflare account so we could get back control of our domain while still using CF. Their standard response is that you should find the other CF account and ask them to remove the rule for your domain.

    This is a pretty common issue with GitBook; even the GitBook CEO was surprised that CF does this.


  • I wonder if this will also have a reverse tail end effect.

    Company uses AI (with devs) to produce a large amount of code -> code is in prod for a few years with incremental changes -> dev roles rotate or get further reduced over time -> company now needs to modernize and change a very large legacy codebase that nobody really understands well enough to even feed it into the AI -> now hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.

    Writing greenfield code is relatively easy; maintaining it over years, keeping it up to date and well understood while twisting it to fit all the new requirements - now that’s hard.


  • I have never seen contributors get anything for open source contributions.

    In larger, more established projects, they explicitly make you sign an agreement that your contributions are theirs for free (in the form of a GitHub bot that tells you this when you open a PR). Sometimes you get as much as a mention in a readme or changelog, but that’s pretty much it.

    I’m sure there may be some examples of the opposite, I just… Wouldn’t hold my breath for it in general.


  • I think I misunderstood your problem - I assumed the issue was the volume mounts, and after testing it I was indeed wrong: the docker CLI now accepts relative paths, so your original command does the same as what I suggested. After re-reading your issue I have a different idea of what’s wrong, but I’d have to see your Dockerfile (or for you to confirm) to be sure.

    Do you add 10f.py to the docker image when you build it, and do you specify the command/entrypoint in the Dockerfile? There are possibly two issues I can think of with how you do that (although considering the docker compose works, it’s probably the 2nd):

    1. You do add it, and you add it to /data in the image - mounting a volume over it would make the script no longer exist in the container.
    2. You do add it and it’s not in /data - in this case the issue with running docker run -v ./:/data -w /workdir tenfigers_10f:v1 10f.py is the last bit: you override the command, which makes it look for the script at /data/10f.py. If you omit the last part (10f.py), it should run whatever the original command was, and assuming you set the cmd/entrypoint correctly in the Dockerfile, it should see /data as ./ in Python (see the sketch after this list).

    (Also, when you run it with the CLI you might want to add -it --rm to the docker command, otherwise it won’t really behave like a regular command.)
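
    A minimal Dockerfile along the lines of option 2 might look like this; the file name comes from your command, but the base image and the /app path are assumptions on my part:

    # Hypothetical Dockerfile sketch - base image and /app path are assumptions
    FROM python:3.12-slim
    # Put the script outside /data so a volume mounted at /data can't hide it
    COPY 10f.py /app/10f.py
    # Start in /data so the mounted host directory looks like ./ to the script
    WORKDIR /data
    ENTRYPOINT ["python", "/app/10f.py"]

    With that, docker run -it --rm -v $(pwd):/data tenfigers_10f:v1 should run the script against the current directory without overriding the command.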


  • It works in docker compose because compose handles relative paths for the volumes; the docker CLI doesn’t.

    You can achieve this by doing something like

    docker run -v $(pwd):/data ...
    

    pwd is a command that returns the current path as an absolute path; you can run it by itself to see this. The $() syntax executes the inner command first, before the shell runs the rest of the line. (Same as backticks, just better practice.)

    I imagine that wouldn’t work on Windows, but it would on macOS, Linux, or WSL.

    Generally speaking, if you need filesystem access and your CLI requires some setup, I’d recommend either writing it in a statically compiled language (e.g. Go, Rust) or researching how to compile a Python script into an executable.
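
    For the latter, a tool like PyInstaller can bundle the script and the interpreter into one binary; a rough sketch, reusing the script name from the other comment in this thread (everything else is just the tool’s defaults):

    # Hypothetical example of compiling a Python script into a standalone executable
    pip install pyinstaller
    pyinstaller --onefile 10f.py
    # The resulting binary lands in ./dist/ and runs without a separate Python install
    ./dist/10f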

    If you’re just mounting your script into the container, you’re better off adding it directly at build time.



  • I haven’t had any experience with Eweka, but this is the reason people tend to have multiple providers from different backbones and multiple indexers - to increase your chances of completion. Weirdly, Eweka follows NTD rather than DMCA, which I’ve seen regarded as slower to take down content, so in theory the experience should be better, especially on fresh content.

    Your mileage will vary greatly depending on what indexers/providers you pick, and unfortunately it’s very difficult to say whether it will meet your expectations until you try different options.

    If you’re willing to spend some more on it, you could try just looking for a small and cheap block account from a different backbone to see if it helps with the missing articles, but there are no guarantees.