Bio field too short. Ask me about my background/beliefs/etc. if you want to know. Or just look at my post history.

  • 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: August 3rd, 2023

  • I’m happy you provided a few examples. This is good for anyone else reading along.

    Equifax in 2017: the penalty was, let’s assume the worst case, $700M. The company made $3.3B in 2017, and I’d assume that was after the penalty, but even if it wasn’t, that’s a penalty of roughly 21% of revenue. That actually seems like it would hurt.

    TSB in 2022: fined ~£48.6M by two separate agencies. TSB made £183.5M in revenue in 2022 (still unclear if that was pre- or post-penalty), but at roughly 26% of revenue this probably actually hurt.
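    To keep my arithmetic honest, here’s the ratio math spelled out (a back-of-the-envelope sketch in Python, using only the figures cited above):

    ```python
    # Penalty as a share of that year's revenue, using the figures cited above.
    cases = {
        "Equifax 2017": (700e6, 3.3e9),     # ~$700M penalty vs ~$3.3B revenue
        "TSB 2022":     (48.6e6, 183.5e6),  # ~£48.6M fines vs ~£183.5M revenue
    }

    for name, (penalty, revenue) in cases.items():
        print(f"{name}: penalty is {penalty / revenue:.0%} of revenue")
    # Equifax 2017: penalty is 21% of revenue
    # TSB 2022: penalty is 26% of revenue
    ```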

    Uber in 2018: your link suggests Uber avoided any legal discovery that might have exposed their wrongdoing. There are no numbers in the linked article, and a search suggests the numbers are not public. Fuck that. A woman was killed by an AI-driven car; the family deserves respect and privacy, but Uber DOES NOT. Because it’s not a public record, I can’t tell how much they paid out for the death of the victim, and since Uber is one of those modern venture-capital loss-leader companies, this is hard to respond to.

    I’m out of time, and I likely won’t be able to finish before the weekend, so I’m trying to wrap up. Boeing seems complicated, and while I’m more familiar with CrowdStrike and know they fucked up, in both cases I’m not sure how much of a penalty they paid relative to income.

    I’ll cede the point: there are some companies that have paid a price for making mistakes. When you’re talking about companies, though, the only metric is money-paid/money-earned. I would really like there to be criminal penalties for leadership who chase profit over safety, so there’s a bit of ‘wishful thinking’ in my worldview. If you kill someone as a human being (or 300 people, Boeing), you end up with years in prison; a company just pays 25% of its profit that year instead.

    I still think Cassandra is right, and that more often than not, software companies are not held responsible for their mistakes. And I think your other premise, ‘if software is better at something’, does a lot of heavy lifting: software is good at explicit computation, such as math, but is historically incapable of empathy (a significant part of the original topic… I don’t want to be a number in a cost/benefit calculation). I don’t want software replacing a human in the loop.

    Back to my example of a Flock camera telling the police that a stolen car was identified… the software was just wrong. The police department didn’t admit any wrongdoing, and maaaaybe at some point the victim will be compensated for their suffering, but I expect Flock will not be on the hook for that. It will be the police department, which is funded by taxpayers.

    Reading your comments outside this thread, I think we would agree on a great many things and have interesting conversations. I didn’t intend to come across as snide, condescending, or arrogant. You made the initial point, Cassandra challenged you, and I agreed with them, so I jumped in where they didn’t.

    The “bizarre emotional reaction” is probably that I despise AI and want it nowhere near any decision-making capability. I think that as we embed “AI” in software, we will find that real people are put at more risk and that software companies will be able to deflect blame when things go wrong.


  • The burden of proof is on you. Show me one example of a company being held liable (really liable, not a settlement/fine for a fraction of the money they made) for a software mistake that hurt people.

    The reality is that a company can make X dollars with software that makes mistakes, and then pay X/100 dollars when that hurts people and goes to court. That’s not a punishment; that’s a cost of doing business. And the company pays that fine, and the humans who made those decisions are shielded from further repercussions.
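    A toy expected-value sketch makes the incentive explicit (all numbers are invented purely for illustration, not drawn from any real case):

    ```python
    # If the expected fine is a fraction of the extra profit, the "rational"
    # corporate move is to ship the risky software. Numbers are invented.
    extra_profit = 100_000_000  # X: revenue gained by shipping fast and loose
    fine         = 1_000_000    # X/100: the penalty when it goes to court
    p_fined      = 0.5          # chance of being fined at all

    expected_penalty = fine * p_fined
    print(f"expected penalty: ${expected_penalty:,.0f}")
    print(f"net gain anyway:  ${extra_profit - expected_penalty:,.0f}")
    # The fine is a line item on the budget, not a deterrent.
    ```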

    When you said:

    “the idea that the software vendor could not be held liable is farcical”

    We need YOU to back that up. The rest of us have never seen it hold true.

    And it gets worse when the software vendor is a step removed: see Flock cameras making big mistakes. Software decided that a car was stolen, but it was wrong. The police intimidated an innocent civilian because the software was wrong. Not only were the police not held accountable, Flock was never even in the picture.
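    Some quick base-rate arithmetic shows why these false hits are routine, not freak events: even a very accurate plate reader drowns the real hits in false ones at scale. The volumes and rates below are hypothetical, just to show the shape of the problem:

    ```python
    # Base-rate sketch with hypothetical numbers: high accuracy at scale
    # still flags far more innocent drivers than actual stolen cars.
    plates_scanned_per_day = 1_000_000   # one city's camera network
    stolen_rate            = 1 / 10_000  # share of scanned cars actually stolen
    false_positive_rate    = 0.001       # 99.9% specificity, generously

    true_hits  = plates_scanned_per_day * stolen_rate
    false_hits = plates_scanned_per_day * false_positive_rate

    print(f"real stolen cars flagged: {true_hits:.0f}")    # 100
    print(f"innocent drivers flagged: {false_hits:.0f}")   # 1000
    # Roughly 9 out of 10 flags point at an innocent driver.
    ```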



  • ^^ This is my take. Behave like an adult.

    99% of the time, I don’t have anywhere to be in a hurry, so I let others (who may or may not need to) go first.

    I often travel with kids at this time in my life, but we just chill in our row until things get calm. Then we can grab stuff from overhead if needed, even if it’s behind us.

    On the occasions where I’ve needed to rush to make a connecting flight, I just say it out loud and get some buy-in from those around me, or it’s already obvious and the whole cabin is probably aware. In those cases, getting non-pressured people to give you priority tends to work if you just ask.

    I can imagine a class of passenger who is super dependent on timing – but those people have already failed. PSA: When traveling, assume that you will not depart or arrive at the exact time on your ticket. Give yourself an hour or two to absorb delays and then you can just be chill.




  • korazail@lemmy.myserv.one to Fuck AI@lemmy.world · On Exceptions · 3 months ago

    Sadly, I don’t have enough money to turn this shit-hose off.

    Gen AI is neat, and I use it for personal projects including code, image generation, and LLM chat; but it is sooooo faaaar awaaaay from being a real game changer (while all the people poised to profit off it claim it is) that it’s just insane to call it the next wave. Evidence: all the creative people (photo/art/code/etc.) who are adamantly against it and have laid out their reasoning.

    There’s another story on my feed about a 10-year-old refactoring a code base with an LLM. Go look at the comments from actual experts, which take into account things like unit tests, readability, maintainability, and security. Humans have more context than any AI will.

    LLMs are not intelligent. They are patently not. They make shit up constantly, since that is exactly what they do. Sometimes, maybe even most of the time, the shit they make up is mostly accurate… but do you want to rely on them?
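    A rough compounding sketch makes the point (the per-step accuracy is a made-up figure): “mostly accurate” decays fast once you chain steps together.

    ```python
    # If each LLM step is right 95% of the time (made-up figure), the odds
    # that a multi-step task comes out fully correct collapse quickly.
    per_step_accuracy = 0.95

    for steps in (1, 5, 10, 20):
        p_all_correct = per_step_accuracy ** steps
        print(f"{steps:>2} chained steps: {p_all_correct:.0%} chance all correct")
    # 1 step: 95% ... 5 steps: 77% ... 10 steps: 60% ... 20 steps: 36%
    ```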

    When a doctor prescribes you the wrong drug, you can sue them as a recourse. When a software company has a data breach, there is often a class-action (better than nothing) as a recourse. When an AI tells you to put glue on your pizza to hold the toppings, there is no recourse, since the AI is not a legal thing and the company disclaims all liability for its output. When an AI denies your health insurance claim because of inscrutable reasons, there is no recourse.

    In the first two, there is a penalty for being wrong, which is in effect an incentive to be correct – to be accurate, to be responsible.

    In the last case, with an AI LLM/agent/fucking-buzzword, there is no penalty and no incentive. The AI is only as good as its input, and half the world is fucking stupid, so if we average out all the world’s input, we get “barely getting by” as a result. A coding AI is at least partially trained on random Stack Overflow posts asking for help. The original code there is wrong!

    Sadly, it’s not going anywhere. But people who rely on it will find short-term success at the price of long-term failure. And a society relying on it is doomed. AI relies on the creative works that already exist. If we don’t make any new things, AI will stagnate and die. Where will we be then?

    There are places AI/LLM/machine learning can be used successfully and helpfully, but they are niche. The AI bros need to figure out how to quickly meet a specific need instead of trying to meet all needs at the same time. Think early-2000s Folding@home, how to convince Republicans to wear a fucking mask during COVID, why we shouldn’t just eat the billionaires*.

    *Hermes-3 says cannibalism is “barbaric” in most cultures, but otherwise doesn’t give convincing arguments.