

One option is to do this for them, and just send the link to the instance most suited to your current audience when recommending using Lemmy, rather than trying to explain what instances are, because they don’t need to know that to use it.


More corrupt and less democratic than the countries that have got it probably


An advantage of GitHub is that its social-network-style features give you ways to make more educated guesses about whether a project is legitimate or a malware trap. It’s great that alternative hosting for Git repos is easy enough to find to evade Microsoft’s censorship, but I don’t think an ideal alternative exists for the trust infrastructure, which is extra important for anything piracy related.


It isn’t normal for human beings to modify their behavior just because someone scolded them about it rationally, especially when popular approval is still on the side of what they’re doing, but that doesn’t mean nothing moves the needle on people changing their behavior. They also need positive reinforcement when they do something else instead, and stuff like that.


Actual avocado life hack: buy way too many avocados, then when they decide to be ripe on their own time, cut them into cubes and freeze them. Now you have avocado on demand whenever you want.


Are there any equivalent quarantine subs that would have a similar effect on the threadiverse? Right-wing people in particular seem to be convinced it is not what they are looking for, fortunately, judging from comments I saw when I browsed r/RedditAlternatives.


Oil is fungible, so Oilcoin would make more sense than a non-fungible token. It might be tricky to figure out a way to transport physical fuel over the blockchain, but annoying details like that are what vibecoding is for.
A rule of thumb I think is good for most sorts of investment is, what choice can you feel good about making whether or not it works out? I can handle not getting 1k, but I would feel like a real chump missing out on an easy 1m without giving my best effort. If I pick just the mystery box and win, I feel like that win is deserved. If I pick just the mystery box and I walk away with nothing, then at least I don’t have to live with the shame of being a 2-boxer, which is more valuable than $1k. If I pick both boxes, I most likely get a little bit of money and a lifetime of bitter regrets, or in the less likely case get 1.001 million dollars and a sense of having barely avoided disaster and not really “deserving” it. Choosing only the mystery box is the clear choice because it is the choice I am more able to handle having made, on an emotional level.
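For context, the payoffs implied by the comment are the classic Newcomb setup ($1k in the visible box, $1M in the mystery box if the predictor expected one-boxing). A rough expected-value check can be sketched like this; the 90% predictor accuracy is an assumption purely for illustration:

```python
# Payoff table for the classic Newcomb setup: the predictor fills the
# mystery box with $1,000,000 only if it predicted you would one-box.
payoffs = {
    # (choice, predictor_was_right): payout in dollars
    ("one-box", True): 1_000_000,   # predictor foresaw one-boxing
    ("one-box", False): 0,          # predictor wrongly expected two-boxing
    ("two-box", True): 1_000,       # predictor foresaw two-boxing
    ("two-box", False): 1_001_000,  # predictor wrongly expected one-boxing
}

def expected_payout(choice: str, accuracy: float) -> float:
    """Expected dollars when the predictor is right with probability `accuracy`."""
    return (accuracy * payoffs[(choice, True)]
            + (1 - accuracy) * payoffs[(choice, False)])

# With an assumed 90%-accurate predictor, one-boxing dominates on expectation:
print(expected_payout("one-box", 0.9))   # 900000.0
print(expected_payout("two-box", 0.9))   # 101000.0
```

As long as the predictor is right much more often than not, the expected-value math lines up with the comment's emotional framing: one-boxing wins by a wide margin.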


Both, incidentally, are categories where I will never be happy with slopcode.
The point here isn’t necessarily that any particular use of LLMs is a good tradeoff (I can accept that many are not, especially where security and correct operation matter), just that quantity clearly matters, contrary to your earlier point that it doesn’t.
We are actively building a history of cases where LLM usage correlates heavily with that slope you mentioned, but hey that’s OK, we aren’t allowed to call things out before they happen, judgement may only be passed once the damage is done right?
Out of curiosity, we know that LLM usage increases cognitive deficit and in some cases leads to psychosis. How many fatalities would you say is an acceptable number before governments act? How degraded do we let our societies get before we rein it in?
I think it’s a mistake to consider all LLM usage as one thing, and that thing as some kind of sin to be denounced as a whole rather than in part, and not considered beyond thinking of ways to get rid of it (which is effectively impossible). There were people who had this attitude towards for example electricity, which is actually very dangerous when misused and caused lots of fires and electrocutions, but the way those problems eventually got mitigated was by working out more sensible ways to use it rather than returning to an off-grid world.


One example of a place where quantity is lacking is web browsers. Another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it’s obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor. As for the idea that any LLM use is a slippery slope: to avoid the slippery-slope fallacy, you’d need compelling evidence or rationale that any use really does lead naturally to problematic use. Without that, the argument could apply to basically any programming tool that gets associated with things done badly (e.g. Java), and it isn’t usually the case that a popular tool has genuinely no good or safe ways to use it. I don’t think that’s true of AI either.


I will complain about quantity: in many areas where open source projects compete with closed source commercial products, they have not achieved feature parity or a comparable level of polish, so quantity matters. So do, as someone else touched on, quality-of-life improvements to the process of writing code, like ease of acquiring and synthesizing information. That doesn’t mean it’s necessarily a worthwhile tradeoff, but how much is really being sacrificed depends on what exactly is being done with an LLM. To me, one part of what’s described here that clearly goes too far is using it to automate communication with other people contributing to the project; there’s no way that is worth it.
As for the gun thing, I will support entirely banning LLM-powered weapons intended to kill people; that’s an easy choice.


I’ll argue that it is a tool, and object to automatic zealous hostility towards anyone using it, but that doesn’t mean criticisms of how that tool is being used aren’t valid. It seems like that is what people are focusing on here, and they definitely aren’t Luddites for doing so.


If unusual behaviours are detected, for example a large group of people moves suddenly or in an unexpected way, security teams on the ground are alerted and can check if there is a problem.
Yes this will definitely be used only for its intended purpose


The pitch for their AI-generated newsletter kind of makes me suspicious about the rest of the article


MSG + Citric Acid


Don’t younger people prefer streaming and direct-download pirate sites to torrenting anyway?


because I don’t know jackshit about coding and I am not gonna pretend I do.
But if OP does know and applies that knowledge to what they are doing, it’s not the same thing and doesn’t make sense to have the same disclaimer.


Not sure what your point is. Do you not like how I worded that? I’m saying it’s a bad thing; do you think it’s a good thing, or did you miss the second half of the sentence? Not using AI to write comments is something I take pretty seriously, so please don’t cast doubt on its humanity just because what I write is long and verbose and not in complete agreement with you. I am a real person who has put effort into laying out my thoughts, and this hurts my feelings.
If your point is further restrictions to children’s access to social media being broadly unpopular, unfortunately that isn’t accurate. This is why I’m taking a contrarian position here despite believing free computing should take priority; if people want this, and it’s going to happen in some form, maybe a compromise that doesn’t involve the worst losses of privacy and control is the best available path forward. If not, I want to hear arguments why not, or alternative plans, because the ones I can think of aren’t totally convincing.
I sent an email a few weeks ago, hopefully they’re at least noticing that people are aware of these laws as a potential problem.