Didn’t pursue codification into law in his first hundred days
As (again) a non-american, doesn’t that require both chambers to support the legislation?
I’m not even american, so I’m not sure what you are on about right now. All I asked was how Roe v. Wade being repealed was Biden’s fault, and the answer apparently is that he did not pack the court.
How genocide fits into Roe v. Wade, or how calling me names somehow helps, I’m still unsure of.
Never let it be forgotten that Roe v. Wade was struck down during a Democrat administration
Ok, but what does that have to do with said Democrat administration? What say did they have in the matter? What could they have done to change the outcome?
They can be, and are being made. E.g. the state of accessibility in GNOME.
I think you are replying to the wrong person?
I did not say it helps with accuracy. I did not say LLMs will get better. I did not even say we should use LLMs.
But even if I did, none of your points are relevant for the Firefox use case.
Wikipedia is no less reliable than other content. There’s even academic research about it (no, I will not dig for sources now, so feel free to not believe it). But factual correctness only matters for models that deal with facts: for e.g. a translation model it does not matter.
Reddit has a massive amount of user-generated content it owns, e.g. comments. Again, the factual correctness only matters in some contexts, not all.
I’m not sure why you keep mentioning LLMs since that is not what is being discussed. Firefox has no plans to use some LLM to generate content where facts play an important role.
What do you mean “full set if data”?
Obviously you cannot train on 100% of material ever created, so you pick a subset. There is a lot of permissively licensed content (e.g. Wikipedia) and content you can license (e.g. Reddit). While not sufficient for an advanced LLM, it certainly is for smaller models that do not need wide knowledge.
I’d say the main differences are at least
Feel free to assume that, but don’t claim an assumption as a fact.
You recommended using native package managers. How many of them have been audited?
You know what else we shouldn’t assume? That it doesn’t have a security feature. And then we additionally shouldn’t go around posting that incorrect assumption as if it were a fact. You know, like you did.
There is no general copyright issue with AIs. It depends entirely on the training material (if even that), so it’s not possible to make blanket statements like that. Banning a technology because a particular implementation is problematic makes no sense.
I’m confused why you think it would be anything else, and why you are so dead set on this. Repos include a signing key. There is an option to skip signature checking. And you think that signature checking is not used during downloads, despite this?
Ok, here are a few issues related to signatures being checked by default, when downloading: https://github.com/flatpak/flatpak/issues/4836 https://github.com/flatpak/flatpak/issues/5657 https://github.com/flatpak/flatpak/issues/3769 https://github.com/flatpak/flatpak/issues/5246 https://askubuntu.com/questions/1433512/flatpak-cant-check-signature-public-key-not-found https://stackoverflow.com/questions/70839691/flatpak-not-working-apparently-gpg-issue
Flatpak repos are signed and the signature is checked when downloading.
It’s OK to be wrong. Dying on this hill seems pretty weird to me.
From the page:
It is recommended that OSTree repositories are verified using GPG whenever they are used. However, if you want to disable GPG verification, the --no-gpg-verify option can be used when a remote is added.
That is talking about downloading as well. Yes, you can turn it off, but so can you usually do it with native package managers, e.g. pacman: https://wiki.archlinux.org/title/Pacman/Package_signing
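To illustrate what “turning it off” looks like on the pacman side: verification is on by default and disabling it is an explicit config change. A rough sketch of the relevant `pacman.conf` lines (defaults as documented on the Arch wiki; treat the exact values as illustrative for your distro):

```ini
; /etc/pacman.conf (illustrative; check your distro's shipped defaults)
[options]
; Arch default: package signatures required, database signatures optional
SigLevel = Required DatabaseOptional

; Disabling verification entirely is possible, but must be opted into:
; SigLevel = Never
```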
That doesn’t seem to be true? https://flatpak-testing.readthedocs.io/en/latest/distributing-applications.html#gpg-signatures
In what way don’t they “securely download” ?
Do you happen to know where? Searching seems to give no results.
In theory, if you have the inputs, you get reproducible outputs, modulo perhaps some small deviations due to non-deterministic parallelism. But if those effects are large enough to make your model perform differently, you already have big issues, no different than if a piece of software performed differently each time it was compiled.
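To make that concrete, here is a toy sketch (not any real training framework; the “model” and update rule are invented for illustration): with fixed inputs and a fixed seed, two runs produce bit-identical weights.

```python
import numpy as np

def train_toy_model(data, seed):
    """Toy 'training' loop: seeded random init plus deterministic updates.
    Stands in for a real training run; same inputs + same seed -> same weights."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=3)       # seeded initialization
    for x in data:                     # deterministic update steps
        weights += 0.1 * (x - weights)
    return weights

data = [np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.5, 0.5])]
run_a = train_toy_model(data, seed=42)
run_b = train_toy_model(data, seed=42)
assert np.array_equal(run_a, run_b)   # identical inputs reproduce identical outputs
```

Real runs on GPUs can diverge slightly from non-deterministic kernel scheduling, but as argued above, a model whose behavior depends on that noise has bigger problems.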
The analogy works perfectly well. It does not matter how common it is. Patching binaries is very hard compared to e.g. LoRA, but it is still essentially the same thing: making a derivative work by modifying parts of the original.
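As a sketch of what that “patch” looks like in the LoRA case (toy dimensions, plain NumPy rather than any real fine-tuning library): the adapted weights are the original matrix plus a low-rank update, and the patch itself is tiny relative to the original.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 4                      # full dimension vs. low rank
W = rng.normal(size=(d, d))        # the "original work": base weight matrix

# LoRA-style patch: a low-rank update, analogous to patching a binary
A = rng.normal(size=(r, d))
B = rng.normal(size=(d, r))
W_adapted = W + B @ A              # derivative work = original + small modification

full_params = W.size               # parameters in the base matrix
patch_params = A.size + B.size     # parameters in the patch alone
assert patch_params < full_params  # the patch is a small fraction of the base
```

The point of the analogy: in both cases you distribute a small delta against someone else’s work, and the result only functions combined with the original.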
I don’t see your point? What is the “source” for the Mona Lisa that I would use? For LLMs I could reproduce them given the original inputs.
Creating those inputs may be an art, but so could any piece of code. No one claims that code being elegant disqualifies it from being open source.
But isn’t it obvious that if a presidential candidate promises some legislation, that it is contingent on the legislative branch?