No that’s really not possible. I’d recommend tossing the similar ones after you pick the “best”.
I gotcha, I misunderstood. Cheers!
You’re right, but that universal hardware support is still lacking. It’s spotty across TV boxes, phones, and consoles. It’ll get there!
I agree. One day we’ll hit a point where no matter how much money you toss at it, it won’t get much better; at that point open source will catch up, and companies will opt for it because it’s free. Until then, that license buys you x% better quality-per-byte ratios, which gives your company an “edge” worth going all in on.
Video is a can of worms. Video files have the concept of a container (like .mp4, .mkv) and codecs (h.264, h.265). Making matters worse, you also have embedded audio, which has its own codec (mp3, ac3, aac, ogg, flac).

For me, the mp4 container with h.264 video and aac audio has the widest hardware support: videos encoded that way play well on pretty much everything.

It’s going to be a couple more years before good hardware support for h.265 is ubiquitous. I see .mkv with h.265 used a lot for 4K stuff, and while it’s well supported on desktop, device support is still spotty.
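In case it’s useful, here’s a typical ffmpeg invocation for getting something into that widely compatible combo (assuming ffmpeg is installed; the file names are just placeholders):

```shell
# Re-encode into the widely supported mp4 + h.264 + aac combo.
# -crf 23 is a reasonable quality default; lower = better quality, bigger file.
ffmpeg -i input.mkv -c:v libx264 -crf 23 -c:a aac -b:a 192k output.mp4

# To inspect what container/codecs a file already uses:
ffprobe -hide_banner input.mkv
```

If the video stream is already h.264 and only the container or audio is the problem, swap `-c:v libx264 -crf 23` for `-c:v copy` and it’ll remux in seconds instead of re-encoding.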
What’s wrong with zsh? I was using it for 5+ years before it became the default over bash, mainly because of the autocomplete features and oh-my-zsh, and later just plugins and powerlevel10k.
You can get gigabit over Cat 5e; you don’t need super expensive cables. That said, I ran Cat 6 through my whole house and can fully saturate the link: about 115 MB/s (920 Mb/s), which accounts for TCP overhead. I haven’t tried 2.5/5/10G on it, but I’ll probably upgrade in a few years and don’t expect much trouble getting good speeds. Your problem was most likely one of these: not all the cable pairs connected in the run, crusty cable ends, bad kinks in the wire causing packet loss, or just absolute trash-quality wire. In general, 5e and 6 are plenty for most people/situations to get good speeds (1 Gb/s+).
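For anyone sanity-checking those numbers, the MB/s vs. Mb/s conversion is just a divide-by-8, and iperf3 (if you have it installed on two hosts) is the usual way to measure the real link speed:

```shell
# 920 Mb/s of TCP goodput on a gigabit link, expressed in megabytes/second.
# (~5-8% of the raw 1000 Mb/s goes to Ethernet/IP/TCP framing overhead.)
echo "$((920 / 8)) MB/s"

# To actually measure a link between two machines with iperf3:
#   on one host:    iperf3 -s
#   on the other:   iperf3 -c <server-ip>
```

If iperf3 reports something way under ~900 Mb/s on supposedly gigabit gear, that’s when it’s worth re-terminating the ends or checking the run for damage.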
In our case cloud is fine, as long as it’s within our security boundary: external SaaS is out, but anything hosted within our cloud is fine. I’m still not super excited about the prospect of managing and maintaining it though :/ We’re going down this path because AWS is killing CodeCommit and other pipeline stuff, which sucks, because even though other tools are better, CodeCommit was FedRAMP-authorized and from the same vendor.
Redundancy is your best option regardless. That said, when those Western Digital Easystores go on sale, I like to grab them for offline storage. Something like rsync every couple of months and you have a decent second copy of your data to keep on a shelf. The $/GB was hard to beat; I haven’t bought any in a year or two, but there were sales where you could get the drive with enclosure for about $130 for 8 TB. At the time, that was far less than I was paying for internal NAS drives. Since it’s not a daily driver, you don’t need super high runtime or performance.
Don’t know if they continued to renew it, but macOS was officially certified as UNIX for a few years!
I’d say adaptability would be the priority in an environment that’s subject to frequent change. Environments that are largely static probably favor efficiency.
https://www.localstack.cloud/ emulates a bunch of AWS services, perfect for local testing.
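The usual workflow, assuming Docker is available: run the LocalStack container, then point the regular AWS CLI at its edge port (4566 by default) with `--endpoint-url`:

```shell
# Start LocalStack in the background (exposes the edge port on 4566).
docker run --rm -d -p 4566:4566 localstack/localstack

# Use the normal AWS CLI, just redirected at the local endpoint:
aws --endpoint-url=http://localhost:4566 s3 mb s3://test-bucket
aws --endpoint-url=http://localhost:4566 s3 ls
```

Your app code typically only needs its endpoint config changed to hit LocalStack instead of real AWS, so the same SDK calls work in both places.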