While technically true, bridge is ultimately an IMAP server you run yourself … and they do have good reasons for this design.
Plex is moving in the app direction… so it’s probably moving away from what you want, despite being one of the easiest options.
It would probably be helpful to know what you’re trying to accomplish beyond the “what.” Like, why do you want to host your music and play it via a web browser?
There are Ukrainian and Russian ties… AFAIK it’s used heavily on both sides of the conflict. The founder had some commentary as to why the stance they’ve taken is the stance they’ve taken.
His mother is from Ukraine herself:
… and Pavel is a French / UAE citizen (as additionally demonstrated by the French government holding him for questioning). The “Telegram is a Russian puppet” arguments are fairly weak.
Their crypto is still AES; it’s just the stuff around it that’s home-brewed… And even then, Telegram has been around 10+ years now with no known breaches via the encryption.
That argument was a lot stronger years ago.
For my grandfather… the issue isn’t the shows in general; he specifically wants a few news programs and will not, under any circumstances, go without them.
This was a problem even for moving to Internet-based streaming options, because he just will not accept anything without those shows for more than a few months.
Meanwhile he also complains he doesn’t have enough to watch and says he can’t afford it (he can, he just doesn’t like what it costs)… But those dang news channels… and just his outlook on TV in general.
So, the web uses a system called a chain of trust. There are public keys (CA certificates) stored in your system or browser that are used to validate the certificates presented to you by various websites.
Both Let’s Encrypt and traditional SSL providers work because their root certificates are already on your system in the appropriate place, which is what makes the certificates they issue trustworthy.
All that to say, you’re always trusting a certificate authority on some level unless you’re doing self-signed certificates… And then nobody trusts you.
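A quick way to see those pre-installed trust anchors for yourself (nothing here is site-specific; Python’s ssl module just loads whatever CA bundle your OS ships):

```python
import ssl

# The "keys stored in your system" are CA certificates bundled with the OS.
# create_default_context() loads that trust store; a server's certificate is
# only accepted if it chains up to one of these CAs.
ctx = ssl.create_default_context()

# Counts of trusted certs currently loaded, e.g. {'x509': 147, 'crl': 0, 'x509_ca': 147}
# (the exact numbers vary by OS and distro)
print(ctx.cert_store_stats())
```

Delete a CA from that store and every site it signed for stops validating — that’s the whole trust model in one file.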
The main advantage to a paid cert authority is a bit more flexibility and a fancier certificate for your website that also perhaps includes the business name.
Realistically… There’s not much of a benefit for the average website or even small business.
Even more so, the FBI wants to know where the money grandma paid to get her pictures back from the ransomware went.
All this money tracking stuff AFAIK was originally more about organized crime than tax revenue.
So the local machine doesn’t really need the firewall; it definitely doesn’t hurt, but your router should be covering this via port forwarding (IPv4) or just straight-up firewall rules (IPv6).
You can basically go two routes to reasonably harden the system IMO. You can either just set up a user without administrative privileges and use something like a systemd system-level service to start the server as that user (and provide control over it from other users) … OR … if you’re really paranoid, use a virtual machine and forward the port from the host machine into the VM.
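As a sketch of that first route (the user name, paths, and memory flag are assumptions, not anything specific to your setup), the systemd unit could look roughly like:

```ini
; /etc/systemd/system/minecraft.service — runs the server as an unprivileged user
[Unit]
Description=Minecraft server
After=network.target

[Service]
User=minecraft
Group=minecraft
WorkingDirectory=/opt/minecraft
ExecStart=/usr/bin/java -Xmx4G -jar server.jar nogui
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now minecraft` from any admin account, and a compromised server process is stuck with the `minecraft` user’s (minimal) permissions.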
A lot of what you’re doing is … fine stuff to do, but it’s not really going to help much (e.g. building system packages with hardening flags is good, but it only helps if those packages are actually part of the attack surface, i.e. exposed to the remote users in some way).
Your biggest risk is going to be unvetted plugins doing bad things (and really only the VM or the dedicated user account provides an insulation layer there; the VM only adds protection against privilege escalation, which is pretty hard to pull off on a patched system).
My advice for most people:
For Minecraft in particular, to properly back things up on a busy server you need to disable auto-save, manually force a save, do the backup, and then re-enable auto-save afterward. Kopia can issue commands around a snapshot to do that, but you need a plugin running on the server that can react to those commands (or possibly use the server console via stdin). Realistically though, that’s overkill, and you’ll be just fine periodically backing up the files exactly as they are.
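If you do want the save-off/save-on dance, Minecraft’s rcon speaks the Source RCON protocol, so it can be scripted without a plugin. A minimal packet-building sketch (host, port, and password would be your own, and rcon has to be enabled in server.properties — this only shows the wire format, not a full client):

```python
import struct

SERVERDATA_AUTH = 3         # first packet: authenticate with the rcon password
SERVERDATA_EXECCOMMAND = 2  # subsequent packets: run a console command

def rcon_packet(req_id: int, ptype: int, payload: str) -> bytes:
    """Build one Source-RCON packet: <len><id><type><payload>\\x00\\x00.

    The leading int32 length is little-endian and excludes itself.
    """
    body = struct.pack("<ii", req_id, ptype) + payload.encode("ascii") + b"\x00\x00"
    return struct.pack("<i", len(body)) + body

# Over a TCP socket you'd send e.g.:
#   rcon_packet(1, SERVERDATA_AUTH, "your-rcon-password")
#   rcon_packet(2, SERVERDATA_EXECCOMMAND, "save-off")   # before backup
#   rcon_packet(3, SERVERDATA_EXECCOMMAND, "save-all")
#   ... run the backup ...
#   rcon_packet(4, SERVERDATA_EXECCOMMAND, "save-on")    # after backup
```

That script would slot into Kopia’s before/after hooks (or a plain cron wrapper) either way.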
Kopia in particular will do well here because of its deduplication of backed-up data plus a chunking algorithm that breaks up files. That has saved me a crazy amount of storage vs other solutions I’ve tried. Kopia-level compression isn’t needed because the Minecraft region files themselves are already highly compressed.
I think this is the main technology behind that, and it is open source… I heard something about it years ago too. I’ve similarly never used it and am curious, now that you mention it, whether anyone has. I’m unsure how to actually “use” IPFS and/or what tools might use it.
I’m kind of inclined to believe it doesn’t work (or doesn’t work well) otherwise it probably would be a bigger deal by now and there would be a lot to show off on the ipfs website.
Edit: It looks like this provides S3 compatible storage to IPFS. However, it seems more expensive than B2… So I’m not really sure why one would use it. You’d think IPFS would be attempting to undercut traditional providers.
rclone or rsync is probably better, but see my reply a few comments down (the very long one) about protocol-aware cloning vs just cloning things at the file-system level.
They do have versioning: https://docs.syncthing.net/v1.27.7/users/versioning
Of course, you actually have to use that, it has to work, and you have to have a strategy for reverting the state (I don’t know if they have an easy way to do that – I’ve never used the versioned side of things).
I have had some situations where Syncthing seems to get confused and doesn’t do its job right. I ran into this particularly when trying to sync RuneLite configurations and music. There were a few times I had to “force push” … and I vaguely recall one time where I was fighting gigs of “out of sync” in both directions on something and just destroyed the sync and rebuilt it to stop … whatever it was doing.
Don’t get me wrong, it’s a great tool for syncing things between computers, but I would not rely on it for backup (I prefer having a backup solution on top of the synced directories). There are real backup tools out there that are far better suited to this sort of thing. With Kopia, which I suggested, you get some integrity checking through its built-in sync (it won’t be able to figure out what to sync if your origin is corrupted); you won’t get that with a straight-up rsync or Syncthing, since they’re not application-aware enough to know they’re about to screw you over.
Restic has a similar feature, but I’ve always found Restic’s approach much more frustrating and not at all friendly for anyone less than a veteran in systems administration. Kopia keeps configuration in the repository itself, has a GUI for desktop use that runs jobs for you automatically, automatically uses the secrets manager appropriate for your operating system, etc. … With Restic you kind of have to DIY a lot of basic things, and the “quick start tutorial” just kinda ignores these various concerns.
Even if you plan to just use cron jobs, Kopia will do sane things with maintenance. With Restic, last I checked, you still need to run maintenance tasks manually, and if any job (maintenance or otherwise) fails, you need to make sure to unlock the repository (and if you haven’t set up notifications … well, now you’ve got a silent backup failure and your backups aren’t running).
I just kept running into a sea of “oh this could be bad” footguns with Restic that made me uncomfortable trusting it as my primary backup. I’m sure Restic can be a great tool if used in expert hands with everything appropriately setup; but nobody tells you how to do that … and I get the feeling a lot of people are unaware of what they’re getting into.
The folks making Kopia … they seem like they really know what they’re doing, and I’ve been very happy with it. We’re moving from rsnapshot to Kopia at work now as well (rsnapshot is also fairly good if you’ve got a bunch of friends with NASes that support hard links and SSH, but it’s CHATTY, has no deduplication or encryption, and data integrity verification is basically left to the file system — so you’d better be running ZFS — etc).
Duplicati’s developer is back too, so that might be something to keep an eye on … but as it stands, the project has been bit-rotting for a while and AFAIK still has some pretty significant performance issues when restoring data.
You could use kopia for this (but you would need to schedule cron jobs or something similar to do it).
The way this works with kopia… You configure your backups to a particular location, then in-between runs there’s a sync command you can use to copy the backup repository to other locations.
Kopia also has the ability to check a repository for bad blobs via its verify function (so you can make sure the stored backups are actually at least X% viable).
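Roughly what those two steps look like on the CLI (the paths here are assumptions; the commands are Kopia’s `repository sync-to` and `snapshot verify`):

```shell
# Copy the connected repository to a second location (could also be b2/s3/sftp)
kopia repository sync-to filesystem --path /mnt/offsite-copy

# Spot-check the repository: read back 10% of stored file data and verify it
kopia snapshot verify --verify-files-percent=10
```

Both of these are fine to run from cron between backup runs.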
Using ZeroTier or Tailscale (for this, probably Tailscale because of the multithreading) would let you create a virtual network between the devices that lets them talk directly to each other. That would allow you to use Kopia’s sync functionality with devices in their homes.
Syncthing is not a backup tool and may very well destroy all your data on its own (though this is rare).
But with that small tweak to their front end they can “VERY CLEARLY SEE that the platform is being misused.” So per your own argument, the government should force them to do so (and presumably anyone that’s uncomfortable with that can “just not use Signal”).
Signal can very clearly see all the messages you send if they just add a bit of code.
This seems like a weird thing to be concerned about. In any given time zone there are going to be millions if not billions of people.
Git also “leaks” your system username and hostname by default IIRC, which might be your real name. A fake name and email would pretty much be sufficient to make any “leaked” time zone information irrelevant.
Granted… I wonder if stuff like this is how they caught those North Korean “employees.”
https://arstechnica.com/?p=2042326
FWIW, I’d also suggest just picking the wrong time zone (but a close one) over UTC or something like that. UTC seems like it’s just “HEY LOOK AT ME! I’M TRYING TO HIDE SOMETHING!” A zone on the other side of the world could be defeated, if you sleep like most people, by analyzing when your commits were made on average vs commits from random repositories, finding your average time of day, and reversing that information into a time zone.
It’s better to be “Jimmy Robinson in Houston, Texas” than “John Smith in UTC+0.”
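A sketch of that (the identity is obviously made up): set a fake per-repo identity so your global config is untouched, and let the `TZ` environment variable pick the zone git stamps into commits:

```shell
# Hypothetical identity, scoped to this repo only (no --global)
git config user.name  "Jimmy Robinson"
git config user.email "jimmy.robinson@example.com"

# git records your local zone in each commit; TZ overrides what "local" means
TZ="America/Chicago" git commit --allow-empty -m "example commit"

# The author date now carries a -0500/-0600 offset instead of your real one
git log -1 --format=%ad
```

Same trick works for `GIT_COMMITTER_DATE`/`GIT_AUTHOR_DATE` if you want to shift the clock too, not just the zone.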
“The news” is too vague a source to dispute.
which is how Bellingcat got to the FSB officers responsible for the poisoning of Navalny via their mobile phone call logs and airline ticket data
Was that a bad thing? I’ve never heard the name Bellingcat before, but it sounds like this would’ve been partially responsible for the reporting about the Navalny poisoning?
They used the two highly popular bots called Ha and the E ** G, which allow you to get everything known to the government and other social networks on every citizen of Russia for about $1 to $5.
Ultimately, that sounds like an issue the Russian government needs to fix. Telegram bots are also trivial to launch and duplicate, so … actually detecting and shutting that down without it becoming a massive, expensive money pit is difficult.
It’s easy to say “oh they’re hosting it, they should just take it down.”
https://www.washingtonpost.com/politics/2018/10/16/postal-service-preferred-shipper-drug-dealers/
Should the US federal government hold themselves liable for delivering illegal drugs via their own postal service? I mean there’s serious nuance in what’s reasonable liability for a carrier … and personally holding the CEO criminally liable is a pretty extreme instance of that.