Backups need to be reliable and I just can’t rely on a community of volunteers or the availability of family to help.
So yeah I pay for S3 and/or a VPS. I consider it one of the few things worth paying a larger hosting company for.
Yeah the golden age of streaming has long passed. Now it’s an expensive, ad-ridden fragmented mess of data harvesting.
Unironically Powershell is great and learning it has propelled me through the last 12 years of my career as a Sysadmin. My biggest complaints with it are generally Windows complaints or due to legacy powershell modules.
My default is to generate a 32 character password and store it in a password manager. Doesn’t matter to me how many characters it has since I’m just going to copy and paste it anyway.
Pretty surprising how many places enforce shorter passwords though… I had a bank that had a maximum character limit of 12. I don’t bank with them anymore. Short password limits are definitely an indicator of bad underlying security practices.
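For reference, generating a long random password is a one-liner territory in most languages; here’s a minimal Python sketch using the standard library’s secrets module (the alphabet choice is just an example, trim punctuation if a site rejects it):

```python
import secrets
import string

def generate_password(length: int = 32) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # 32 random characters, new every run
```

Since it goes straight into a password manager, there’s no reason to make it memorable.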
I intentionally do not host my own git repos mostly because I need them to be available when my environment is having problems.
I make use of local runners for CI/CD though which is nice but git is one of the few things I need to not have to worry about.
No need to optimize when you can just push people to upgrade their hardware more frequently so you make fat stacks of cash from OEM’s.
Linux has been easier to install than Windows for a while now, particularly with all the goofy hacks you have to pull out just to make an offline account on Win11.
Alternatively what you’re describing sounds like SponsorBlock but for podcasts. You probably wouldn’t have to rehost the actual audio files to accomplish this, just have a podcast client/addon that allows user submissions for ad segments and a database somewhere that can host the metadata for ad breaks.
Biggest issue is that you’re probably building or forking an existing podcast app to do it, and some podcasts dynamically insert ads, so people’s downloaded files could have different ad segments/times.
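The client-side part is the easy bit once the metadata exists; a rough sketch of the skip logic (the AdSegment schema and names here are made up for illustration, and real submissions would probably need to be keyed by file hash because of the dynamic-ad problem above):

```python
from dataclasses import dataclass

@dataclass
class AdSegment:
    start: float  # seconds into the episode
    end: float

def next_position(current: float, segments: list[AdSegment]) -> float:
    """If the playback position lands inside a submitted ad segment,
    jump to the end of that segment; otherwise leave it alone."""
    for seg in sorted(segments, key=lambda s: s.start):
        if seg.start <= current < seg.end:
            return seg.end
    return current

# Example: two crowd-sourced ad breaks for one episode
segments = [AdSegment(120.0, 185.5), AdSegment(1460.0, 1530.0)]
print(next_position(130.0, segments))  # 185.5 -- skips past the first break
```

The hard part, like with SponsorBlock, is the moderation/voting layer on top of the submission database, not the skipping itself.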
Well it may not be accurate or effective, but at least it’s expensive.
Do you have any links or guides that you found helpful? A friend wanted to try this out but basically gave up when he realized he’d need an Nvidia GPU.
I’ve been testing Ollama in Docker/WSL with the idea that if I like it I’ll eventually move my GPU into my home server and get an upgrade for my gaming PC. When you run a model it has to load the whole thing into VRAM. I use the 8GB models, so it takes 20-40 seconds to load the model, and then each response is really fast after that and the GPU hit is pretty small. After about five minutes (the default) it will unload the model to free up VRAM.
Basically this means you either need to wait a bit for the model to warm up, or you need to extend that timeout so it stays warm longer. It also means I can’t really use my GPU for anything else while the LLM is loaded.
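Extending that timeout can be done per request: Ollama’s /api/generate endpoint accepts a keep_alive field (the default is "5m"). A minimal sketch against that API, using only the standard library; the model name is just an example:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "llama3:8b",
                  keep_alive: str = "30m") -> dict:
    """Payload for Ollama's /api/generate endpoint. keep_alive controls
    how long the model stays loaded in VRAM after responding."""
    return {"model": model, "prompt": prompt,
            "stream": False, "keep_alive": keep_alive}

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

There’s also an OLLAMA_KEEP_ALIVE environment variable on the server side if you’d rather set it once than per request.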
I haven’t tracked power usage, but besides the VRAM requirements it doesn’t seem too intensive on resources, but maybe I just haven’t done anything complex enough yet.
I’ve been using ZFS now for a few years for all my data drives/pools but I haven’t gotten brave enough to boot from it yet. Snapshotting a system drive would be really handy.
Expecting other people to build the online communities you want to use is how we got the corporate social media bait and switch in the first place.
I thought it was just a meme.
I see way more complaints about ‘elitist Arch users’ than I ever do comments from actual elitist Arch users.
DuckDNS is great… but they have had some pretty major outages recently. No complaints, I know it’s an extremely valuable free service but it’s worth mentioning.
Cloudflare has an api for easy dynamic dns. I use oznu/docker-cloudflare-ddns to manage this, it’s super easy:
docker run \
-e API_KEY=xxxxxxx \
-e ZONE=example.com \
-e SUBDOMAIN=subdomain \
oznu/cloudflare-ddns
Then I just make a CNAME for each of my public facing services to point to ‘subdomain.example.com’ and use a reverse proxy to get incoming traffic to the right service.
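Under the hood all these DDNS containers really do is push your current public IP to Cloudflare’s DNS records API on a loop. A rough Python sketch of the same update call (the endpoint path follows Cloudflare’s v4 API, but treat the details as illustrative, not a drop-in replacement for the container):

```python
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def build_update(record_name: str, ip: str) -> dict:
    """Request body for updating an A record via Cloudflare's DNS API."""
    return {"type": "A", "name": record_name, "content": ip,
            "ttl": 60, "proxied": False}

def update_record(token: str, zone_id: str, record_id: str,
                  name: str, ip: str) -> None:
    """PUT the current public IP onto an existing DNS record."""
    body = json.dumps(build_update(name, ip)).encode()
    req = urllib.request.Request(
        f"{API}/zones/{zone_id}/dns_records/{record_id}",
        data=body, method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

The container just wraps this in a loop with a public-IP lookup, which is why the setup is so light.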
765 movies (~4.5 TB)
161 TV series (~7.2 TB)
About a year ago 6TB of storage was no longer cutting it, since I was constantly having to hunt for media to delete or downgrade in quality to make more room. I bought five 14TB drives and put them in a big ZFS pool so I don’t have to do that anymore.
I thought we’d already collectively settled on the tinfoil hat.
They also specifically warn that it’s not optimized for a VM right now. It’s still not quite ready on bare metal, and even less so in a VM.
I’ve had exactly this happen to me. It was my own fault, but it took a bit of work to figure out.