  • I think this is a gross oversimplification of the issues, but that’s a big part of the problem. Currently, the problems facing both the DNC and the country are very complex, and one very large and diverse side has to present complex solutions to complex problems to an electorate that collectively can barely tie its shoes, while the other side just handwaves over this complexity, others a whole bunch of folks, and says they’ll “fix” it by pushing the bad, mean “others” out.

    Government actually DID work for a huge swathe of people in subtle, often taken-for-granted ways. It had vast room for improvement, but it was nevertheless better than the alternative of simply not existing. The VA is a good example of this. But when you try to explain that to folks, they just glaze over. Another example: I’ve had discussions with multiple people now who were convinced that Harris just “never even had any policy,” despite this being objectively untrue and very easily refuted.

    This is all to say, no, “people” don’t know or see shit. The average voter is wildly uninformed, uninformable, and cannot engage in anything beyond magical thinking. The long-term solution is to make politics at the individual level more engaging in people’s day-to-day lives and to increase general competence and access to true information. No idea what the short-term solution is–as the saying goes, you can’t use reason to convince someone out of a stupid fucking opinion.


  • For sure. I personally think our current IP laws are well equipped to handle AI-generated content, even if there are many other situations where they require a significant overhaul. And the person you responded to is really only sort of maybe half correct. Those advocating for, e.g., there to be some sort of copyright infringement in training AI aren’t going to bat for current IP laws--they’re advocating for altogether new IP laws that would effectively further assetize, and allow even more rent seeking in, intangibles. Artists would absolutely not come out ahead on this, and it’s ludicrous to think so. Publishing platforms would make creators sign those rights away, and large corporations would be the only ones financially capable of acting in this new IP landscape. The likely compromise would also attach a property right to the model outputs, so it would actually become far more practical to leverage AI-generated material at commercial scale, since the publisher could enforce IP rights on the product.

    The real solution to this particular issue is to require that all models outputting material to the public at large be open source, and that all outputs distributed at large be marked as generated by AI and thus effectively in the public domain.





  • It could of course go up to SCOTUS and a new right could effectively be legislated from the bench, but that is unlikely, and the nature of these models, combined with what has counted as a copy under the rubric US copyright has operated under effectively forever, means that merely training and deploying a model is almost certainly not copyright infringement. This is a pretty common consensus among IP attorneys.

    That said, a lot of other very obvious infringement is coming out in discovery in many of these cases. Like torrenting all the training data. THAT is absolutely an infringement, but it is effectively unrelated to the question of whether lawfully accessed content being used as training data retroactively makes its access unlawful (it really almost certainly doesn’t).


  • Even in your latter paragraph, it wouldn’t be an infringement. Assuming the art was lawfully accessed in the first place, like by clicking a link to a publicly shared portfolio, no copy is being encoded into the model. There is currently no intellectual property right invoked merely by training a model-- if people want there to be, and it isn’t an unreasonable thing to want (though I don’t agree it’s good policy), then a new type of intellectual property right will need to be created.

    What’s actually baffling to me is that these pieces presumably are all effectively public domain as they’re authored by AI. And they’re clearly digital in nature, so wtf are people actually buying?


  • What we need is robust, decentralized, multimodal energy production fit for the local area where it is installed, contributing to a well-maintained distributed grid with multiple redundancies and sufficient storage, so that incidental costs are minimized and uptime is effectively 100%. Energy is a tool and its generation is a category of tools; whining about people developing a better screwdriver rather than only using hammers is counterproductive when we’re trying to build a house for as many people as possible that doesn’t fucking kill everyone.




  • Nearly all personal insurance should fundamentally be a government function. Insurance is principally there to assist you when shit goes south. This is also effectively one of the foundational reasons humans collect into societies. But we’ve let runaway American capitalism, and the abstraction of money to the point where it is treated as a resource in and of itself rather than the allocation algorithm it is, completely pervert our understanding of how government and private enterprise should each interact with society.


  • ??? It is literally impossible for any voter to not know the devil they chose. No, over 70 million voters actively chose to elect perhaps the most incompetent and transparently stupid president in history back into office, this time with a well-known and well-documented playbook for how literally every metric of American life, from domestic policy to foreign policy, will be made worse to the sole benefit of big corporate actors and 1%ers. A whole bunch of others were too apathetic to be concerned by this.

    Voters ultimately made their choice. A lot of folks are going to die as a result, but unfortunately it won’t be limited to just the idiots that actually chose this.


  • Summary judgment is not a thing separate from a lawsuit. It’s literally a standard filing made in nearly every lawsuit (even if just as a hail mary). You referenced “beyond a reasonable doubt” earlier. That is also not the standard used in (US) civil cases–the standard is typically a preponderance of the evidence.

    I’m also not sure what you mean by “court approved documentation.” Different jurisdictions approach contract law differently, but courts don’t “approve” most contracts–parties allege there was a binding contractual agreement, present their evidence to the court, and a mix of judge and jury determines whether, under the jurisdiction’s laws, an enforceable agreement occurred and how it can be enforced (i.e., are the obligations severable, what are the damages, etc.).


  • There’s plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, but you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert the path, and I’m sure plenty more that an actual domain and subject matter expert might come up with–or a whole team of them. A rough sketch of that fallback idea is below.

    But while we’re on the topic, it’s not really right to even label these “confidences” as such–they’re just output weightings associated with the respective labels. We’ve sort of decided they vaguely match up to something kind of sort of approximate to confidence values, but they aren’t based on a ground truth like I’m understanding your comment to imply–they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
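    A minimal sketch of that threshold-and-fallback idea, assuming a classifier whose softmax outputs are treated as “confidences.” The class names, threshold value, and fallback callbacks are hypothetical illustrations, not any actual vehicle stack.

    ```python
    import numpy as np

    def softmax(logits):
        # Normalize raw model outputs into weights that sum to 1. These get
        # called "confidences," but they are not calibrated probabilities
        # tied to any ground truth; they fall out of the trained weights.
        logits = np.asarray(logits, dtype=float)
        exp = np.exp(logits - np.max(logits))
        return exp / exp.sum()

    # Hypothetical perception classes and per-frame decision threshold.
    CLASSES = ["pedestrian", "cyclist", "vehicle", "debris", "unknown"]
    THRESHOLD = 0.85

    def classify_or_fallback(logits, slow_down, stop_safely):
        """Return a label if the top output weight clears the threshold;
        otherwise trigger a conservative fallback: slow down to buy more
        frames, and escalate to a safe stop if slowing isn't possible."""
        weights = softmax(logits)
        top = int(np.argmax(weights))
        if weights[top] >= THRESHOLD:
            return CLASSES[top]
        if not slow_down():
            stop_safely()
        return None
    ```

    The only point of the sketch is that “low confidence” doesn’t have to mean “do nothing”; the system can degrade gracefully instead, even though the threshold is being applied to uncalibrated output weights.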