  1. - Top - End - #301
    Halfling in the Playground
    Join Date
    Jan 2009

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Garwain View Post
    "Just like yer Pa did" also points in that direction. He was a sapper who found a weak spot to cause a collapse.
    ooh thank you, very helpful

  2. - Top - End - #302
    Bugbear in the Playground
    Join Date
    Feb 2009
    Location
    Germany
    Gender
    Male

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Peelee View Post
    I have it on very good authority that within 20 years, everyone will be speaking German. Or a German-Chinese hybrid.
    Quote Originally Posted by faustin View Post
    Back in the '80s and '90s, everyone used to say the same regarding Japanese.
    And pessimists in the decades before that said this about Russian.

    I want to see that unholy German-Chinese hybrid language.
    German is a language that relies heavily on grammar, with low context; Chinese is just the opposite. Writing the hybrid would be awkward, and so would speaking it.

    ...
    你 lieber lernst das Hybrid 现在! (you better learn that hybrid now!)


    ...
    Back to topic, does domination end once the vampire is destroyed?
    I would think not.

  3. - Top - End - #303
    Bugbear in the Playground
     
    Lizardfolk

    Join Date
    Jul 2018

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by mjasghar View Post
    I mean, if you haven’t actually paid attention that’s exactly what is happening
    https://www.technologyreview.com/s/6...olley-problem/
    I specifically stated the trolley problem as an example of a scenario where a human would also be deciding who lives and who most likely dies, by virtue of it being a scenario where it doesn't seem possible to choose an option where nobody's life is in danger. So I'm not entirely sure what point you're trying to make.

    If you're trying to say that AIs are being built with deciding who dies as their main purpose, that's most likely wrong, because their main feature will be driving safely, so trolley-problem scenarios are kept to a bare minimum. In fact, self-driving cars won't become a public thing until they are less likely to cause accidents than human drivers. That they will have a priority list, should the unfortunate happen, doesn't mean it's their primary function.

    If you're trying to say that we're programming AI who will drive us to our deaths outside of scenarios where danger is unavoidable... I think you linked the wrong article.
    Last edited by Worldsong; 2019-07-30 at 10:58 AM.

  4. - Top - End - #304
    Ogre in the Playground
     
    The MunchKING's Avatar

    Join Date
    Mar 2009

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Worldsong View Post
    if you're in a situation where it seems unavoidable that someone gets hurt or dies a human also has to decide which option is best, otherwise the trolley problem would never have been conceived in the first place regardless of how accurate it is.
    I think the trolley problem is kind of dumb, because if you get into a situation where the loss of human life is inevitable, you weren't driving safely enough. And if a computer has better reaction times than a human and is following laws designed for human reaction time (and I mean ACTUALLY following them, not just ignoring them when inconvenient like many humans do), it should always have enough time to stop safely before the problem comes down to "human deaths were inevitable".

    Like the post a couple pages back about the car slowing down to make sure a deer the human didn't see wouldn't run across the road. An AI car may be slower, but it should be significantly safer. If it's not, people just need to work on the tech.
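    The reaction-time argument can be put in numbers with the standard stopping-distance formula, d = v·t_react + v²/(2a). A rough sketch (the reaction times and deceleration below are illustrative assumptions, not measured values):

    ```python
    # Rough stopping-distance comparison: human vs. computer reaction time.
    # Assumed figures (illustrative only): human reaction ~1.5 s, computer
    # ~0.1 s, braking deceleration ~7 m/s^2 on dry pavement.

    def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
        """Distance covered during the reaction time, plus braking distance."""
        return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

    speed = 50 * 1000 / 3600  # 50 km/h expressed in m/s (~13.9 m/s)
    human = stopping_distance(speed, reaction_s=1.5)
    computer = stopping_distance(speed, reaction_s=0.1)
    print(f"human:    {human:.1f} m")    # ~34.6 m
    print(f"computer: {computer:.1f} m") # ~15.2 m
    ```

    Under these (made-up) numbers the computer stops in less than half the distance, which is the whole point: the margin that makes "deaths were inevitable" situations rare is bought with reaction time.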
    "Besides, you know the saying: Kill one, and you are a murderer. Kill millions, and you are a conqueror. Kill them all, and you are a god." -- Fishman

  5. - Top - End - #305
    Ogre in the Playground
     
    The MunchKING's Avatar

    Join Date
    Mar 2009

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Onyavar View Post
    Back to topic, does domination end once the vampire is destroyed?
    I would think not.

    RAW says no, OOTS seems to go with "yes".

    (Note: Hilgya has swirly eyes in the first 7 panels; then, right after Durkon's body gets destroyed, her eyes blink back to normal.)

  6. - Top - End - #306
    Pixie in the Playground
    Join Date
    Nov 2017

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Onyavar View Post
    Back to topic, does domination end once the vampire is destroyed?
    I would think not.
    It did for Hilgya, who was dominated by Durkon back in strip #1131, which I still can't link, sorry, because apparently I'm not nearly active enough on here.

    Edit: I don't type quickly enough, either...
    Last edited by Dogcula; 2019-07-30 at 11:10 AM.

  7. - Top - End - #307
    Titan in the Playground
     
    Jasdoif's Avatar

    Join Date
    Mar 2007
    Location
    Oregon, USA

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    You are grossly overestimating the capabilities of single-system control. You've just described a far more complex version of the global flight management system. There are about 100,000 flights per day across the entire world. Los Angeles alone, it seems, has four times that many cars per day on a single one of its roads. The global flight system is run by some of the best, most reliable computer systems we've ever built, and is still heavily dependent on human beings doing the heavy lifting. And unlike cars, it can avoid vehicle intersection in three dimensions, with far greater margins of error than you find on any road.

    The idea that you could control all the cars in a city with a single computer, with little enough communication delay to make safe decisions (where decisions need to be made and applied in fractions of a second), is at this point sci-fi.

    ETA: To illustrate the issue, what exactly do you think this computer would do if a child in a bike swerved into traffic? How would the computer 1) identify the problem, 2) calculate the rerouting for every car in the vicinity, avoiding them crashing into other already-calculated paths and 3) transmit the corrections to all vehicles that might be involved in the approximately second and a half it takes for that kid to be run over?

    AI cars deal with this by each carrying a computer of their own, processing its own visual data in real time, driving defensively when called for and aggressively when it must. Centralizing the problem would require a computer that would dwarf Google's entire operation, and it'd still be incapable of dealing with the sheer raw data from billions of vehicles all needing second-to-second decisions.
    Yeah, centralizing approaches is a much more complicated subject than it looks. On one hand, having all a system's decisions made in one place allows for really fast decisions. On the other hand, getting that decision communicated out across a large system can be really slow. On the other hand (tentacle?): having components in the system detect something out of the ordinary, communicate that up to the central point, have the central point make a decision, and then getting that decision communicated back out... well, that can be inordinately slow. Centralization gets economies of scale along with diseconomies of scale... so it suffers as systems get bigger.

    Not that decentralization is automatically better, of course....A decentralized component can react quickly, but a decentralized component isn't always going to have all the information it would need to make the best decision...and the effects of that decision propagating through the system (like, if one car swerves to avoid a child suddenly on the road, other cars on the road will need to react to the car swerving...possibly by swerving themselves) can take a lot of time to resolve, and/or cause new situations to react to. Like most things, there's a trade-off.

    And it's not like it's a binary decision, either.... Hybrid approaches (with some aspects decentralized, some centralized... and/or some decentralized decisions being communicated up to a central location for review and possible distribution), and/or breaking a large system into a set of smaller systems connected by a new system (so the size-based overhead that causes the delays/problems is more manageable), are options as well.
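    A toy latency model makes the scale problem concrete (every number here is an invented assumption for illustration, not a measurement of any real system):

    ```python
    # Toy model of the centralized-vs-local trade-off: a central decision pays
    # a network round trip plus queueing that grows with the number of
    # components reporting in; an on-board decision pays a flat cost.
    # All figures are made up for illustration.

    def centralized_latency_ms(n_components, round_trip_ms=20.0,
                               per_event_processing_ms=0.01):
        # Round trip to the central point, plus queue depth that scales
        # with how many components are sending events at once.
        return round_trip_ms + n_components * per_event_processing_ms

    def local_latency_ms():
        # On-board decision: no network hop (assumed constant).
        return 5.0

    for n in (1_000, 100_000, 1_000_000):
        print(f"{n:>9} components: central {centralized_latency_ms(n):>10.1f} ms"
              f" vs local {local_latency_ms():.1f} ms")
    ```

    The crossover is the whole argument: at small scale the central point can be competitive, but its per-component overhead eventually swamps the round trip, while the local decision stays flat (at the cost of incomplete information).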
    Feytouched Banana eldritch disciple avatar by...me!

    The Index of the Giant's Comments VI―Making Dogma from Zapped Bananas

  8. - Top - End - #308
    Bugbear in the Playground
     
    Lizardfolk

    Join Date
    Jul 2018

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by The MunchKING View Post
    I think the trolley problem is kind of dumb, because if you get in a situation where human life being lost is inevitable, you weren't driving safely enough. Which if a computer has better reaction times than a human, following laws designed for human reaction time (and I mean ACTUALLY following them, not just ignoring them when inconvenient like many humans do), you should always have enough time to safely stop before the problem comes to "human deaths were inevitable".

    Like the post a couple pages back about the car slowing down to make sure a deer the human didn't see wouldn't run across the road. An AI car may be slower, but it should be significantly safer. If it's not people just need to work on the tech.
    I think the idea is that while all the cars could realistically avoid harm from each other, they can't be expected to get a perfect score on dealing with sudden changes brought on by external sources. And even then, the idea would be that the number of accidents would be minimal; but just in case such a situation does occur, the car should have some idea of how to react, if only to minimize the tragedy.

  9. - Top - End - #309
    Bugbear in the Playground
     
    HalflingRogueGuy

    Join Date
    May 2018

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    Wow, straight for the strawman.
    I wrote a post where I said that computers are going to maximize what they’re told to maximize, and I very strongly implied that the system that maximizes market share and return on investment is extremely unlikely to also maximize the preservation of human lives.

    To tie this back to the comic, these dwarves are in a meeting where they may have doomed the world by being extra polite and trying to go along with the rules, and not go against what everyone else is doing, even if what they’re doing is objectively awful.

    You can expect us to accept automation in much the same way.

  10. - Top - End - #310
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: OOTS #1172 - The Discussion Thread

    I realize this has nothing to do with the topic of self-driving automobiles -- surely a spinoff thread in mad science and grumpy technology would be appropriate? -- but I'm going to officially join team Smash the Ceiling and Kill the Vampires with Sunlight. We've had the Chekhov's gun on the table for a while now that it was the rapid vampire rising spell Durkula pulled out of the staff before it was broken, and not the protection from daylight spell. Which means that all of these vamps are vulnerable to sunlight. Breaking the ceiling to let the light in would fulfill that with style.

    It will probably also kill everyone in the room except for the petrified Durkon. Unless they do something like ...

    Dwarf elder: *takes food wrapper out of pocket*
    *casually drops food wrapper on floor*

    ROOM: LITTERING! PETRIFY!

    *Dwarf elder is safe as statue*

    It would be HILARIOUS if the elders had a get-out-of-death free plan prepared for just such an eventuality, but I suspect it won't happen.

    ETA: Oh, I guess I can take a little bit of interest in the side debate.

    Quote Originally Posted by Dion
    I very strongly implied that the system that maximizes market share and return on investment is extremely unlikely to also maximize the preservation of human lives.
    It occurs to me that human casualties are going to result in a net loss to market share and consequently to ROI. People might flee the vehicles as "death traps". So while preserving human life may not be the absolute optimal path, it's got to be of great concern even to an amoral car company. I mean, car customers don't forget. There are still people who won't buy vehicles from GM or Chrysler because of bad experiences they had decades ago.

    Respectfully,

    Brian P.
    Last edited by pendell; 2019-07-30 at 11:49 AM.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  11. - Top - End - #311
    Titan in the Playground
     
    Grey_Wolf_c's Avatar

    Join Date
    Aug 2007

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by CriticalFailure View Post
    It's crazy to me that the Boeing thing was able to happen - my understanding is that the software was made more aggressive and the back up sensors were removed without there being additional review and oversight? I guess it goes to show how important it is to have these kinds of design decisions appropriately reviewed.
    No. As I have said already, that is blaming the band-aid, not the problem.

    The problem is that the Boeing 737 can't compete with Airbus's medium-haul models in fuel economy. To address that, they designed a bigger engine. The new engine didn't fit under the current wings, so they lifted the wings. That changed the center of gravity of the plane, which Boeing chose to address via software. The software then had no easy way of communicating what it was doing, because the 737's systems are relics of the past. They didn't fix that because, as with the wings, the imperative was to keep the changes "cheap" and thus not require re-training of the pilots.

    So to avoid retraining, they automated in software. And now people blame the software, rather than blaming Boeing for not biting the bullet and accepting that, to compete with the new Airbus models, they need a new design of their own; that every 737 pilot needs to accept that their four-decade-old model is obsolete, can't compete with modern designs, and that it is time to learn to fly a new plane; and that the people who employ them need to accept paying for the retraining as well as for the new planes.

    Quote Originally Posted by Dion View Post
    I wrote a post where I said that computers are going to maximize what they’re told to maximize, and I very strongly implied that the system that maximizes market share and return on investment is extremely unlikely to also maximize the preservation of human lives.
    This doom-and-gloom prediction fails to consider that aviation, like pretty much every other transportation endeavor, runs on safety, and that said safety is not merely good but crucial for their bottom line, market share, and RoI. The only companies that can make a business out of killing their own clients are recreational drug companies, such as tobacco.

    Grey Wolf
    Last edited by Grey_Wolf_c; 2019-07-30 at 11:55 AM.
    Interested in MitD? Join us in MitD's thread.
    There is a world of imagination
    Deep in the corners of your mind
    Where reality is an intruder
    And myth and legend thrive
    Quote Originally Posted by The Giant View Post
    But really, the important lesson here is this: Rather than making assumptions that don't fit with the text and then complaining about the text being wrong, why not just choose different assumptions that DO fit with the text?
    Ceterum autem censeo Hilgya malefica est

  12. - Top - End - #312
    Ogre in the Playground
     
    RedWizardGuy

    Join Date
    Jan 2015
    Location
    Brazil
    Gender
    Male

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by pendell View Post
    I realize this has nothing to do with the topic of self-driving automobiles -- surely a spinoff thread in mad science and grumpy technology would be appropriate? -- but I'm going to officially join team Smash the Ceiling and Kill the Vampires with Sunlight. We've had the Chekhov's gun on the table for a while now that it was the rapid vampire rising spell Durkula pulled out of the staff before it was broken, and not the protection from daylight spell. Which means that all of these vamps are vulnerable to sunlight. Breaking the ceiling to let the light in would fulfill that with style.

    It will probably also kill everyone in the room except for the petrified Durkon. Unless they do something like ...

    Dwarf elder: *takes food wrapper out of pocket*
    *casually drops food wrapper on floor*

    ROOM: LITTERING! PETRIFY!

    *Dwarf elder is safe as statue*

    It would be HILARIOUS if the elders had a get-out-of-death free plan prepared for just such an eventuality, but I suspect it won't happen.

    Respectfully,

    Brian P.
    I laughed at the "LITTERING" stuff, but I think another way to pull the same stunt, if they believed turning to stone would save them, would be simply to slap one another (with nonlethal damage, of course).
    Each one of us, alone, is but a drop in the sea
    Our powers pale compared with the great heroes
    Our battles don’t hit the headlines or shake the earth
    But they are few, can’t be everywhere, and we, many
    So, when the world or universe needs saving, they come
    But when people need saving, we are the ones to appear
    We're underdogs, but we rise up to the challenge to be heroes.
    (Wishing Joe, a low-powered superhero)

    "I really like the Geek Math'ology we do here"

  13. - Top - End - #313
    Dwarf in the Playground
     
    diremage's Avatar

    Join Date
    Sep 2015

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Peelee View Post
    You think you already don't? In cars, no less?

    During a crash, the vehicle's crash sensors provide crucial information to the airbag electronic controller unit (ECU), including collision type, angle, and severity of impact. Using this information, the airbag ECU's crash algorithm determines if the crash event meets the criteria for deployment and triggers various firing circuits to deploy one or more airbag modules within the vehicle.
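    The deployment decision described in that quote can be caricatured in a few lines. A deliberately simplified toy sketch: real crash algorithms are proprietary and fuse many sensor channels, and every threshold and criterion below is invented for illustration.

    ```python
    # Toy caricature of an airbag ECU deployment decision: combine crash
    # type, impact angle, and severity into a go/no-go. Real crash
    # algorithms are proprietary; these thresholds are invented.

    def should_deploy(severity_g, angle_deg, crash_type):
        """Return True if the (toy) deployment criteria are met."""
        frontal = crash_type == "frontal" and abs(angle_deg) <= 30
        side = crash_type == "side"
        hard_enough = severity_g >= 20  # invented severity threshold, in g
        return hard_enough and (frontal or side)

    print(should_deploy(35, 10, "frontal"))  # True: hard frontal hit
    print(should_deploy(5, 0, "frontal"))    # False: below severity threshold
    print(should_deploy(35, 90, "frontal"))  # False: angle outside the window
    ```

    Even in this cartoon version, the point stands: the firing circuits only trigger when the algorithm says the event meets its criteria, so the "decision" is already in software today.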

    Spoiler: And if you want to see how important those decisions are, check it:
    Spoiler: Airbags

    Airbags have between a 0.07% and 2% failure rate, based on a quick Google search. For a system that is one element of several designed to help mitigate an accident, where the airbag generally causes no additional harm if it fails to deploy (unless it explodes and launches shrapnel into someone's face, but that hardly ever happens), that's totally acceptable.

    For a system where failure means that someone, or several someones, die, I would hope we can do better than a 2% failure rate. And the consequences of releasing a faulty batch of self-driving cars could be much worse than "4 people get exploding airbags to the face and die, auto manufacturer issues safety recall."

    Spoiler: Centralized systems

    Centralized systems can potentially make better-informed decisions than individual elements of the system deciding on their own, but more data per decision means slower decisions, because the system has to think about it for longer.

    I could definitely see a secondary element to self-driving cars where they broadcast a map of obstacles they can see and their current pathing decisions to other nearby self-driving cars, which could help eliminate blind spots and prevent them from doing a small number of stupid things like both changing into the same lane at the same time, but ultimately I think they're going to take more cues from bees and ants and rely on swarm intelligence rather than Skynet.
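    That broadcast idea can be sketched as a simple conflict check on advertised lane changes (the message format and the conflict rule here are invented for illustration; nothing below reflects a real V2V protocol):

    ```python
    # Sketch of the "broadcast your intentions" idea: each car advertises
    # (current_lane, target_lane), and nearby cars flag pairs that plan to
    # enter the same lane, so one of them can yield. Invented format.

    def detect_conflicts(intents):
        """intents: {car_id: (current_lane, target_lane)}.
        Return (first_car, second_car, lane) for each contested target lane."""
        first_claimant = {}
        conflicts = []
        for car, (_, target) in intents.items():
            if target in first_claimant:
                conflicts.append((first_claimant[target], car, target))
            else:
                first_claimant[target] = car
        return conflicts

    intents = {"A": (1, 2), "B": (3, 2), "C": (4, 4)}
    print(detect_conflicts(intents))  # A and B both want lane 2
    ```

    This is swarm-style coordination in miniature: no central planner, just peers exchanging enough state to avoid one specific class of stupid mistake.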

    Generally the rule of thumb is that robots do great in controlled environments where every variable can be accounted for. Automated warehouses and factories are good examples of that. In uncontrolled environments where a bird can poop on your sensors and a biker can jump out from behind a tree, robots tend to do less well.

    Spoiler: Boeing

    So... Boeing decided to put bigger engines on the 737, which made it tend to nose up. They should have redesigned the airframe, because the bigger engines screwed with the center of mass, but that would have required certifying a whole new plane and would have been slower and more expensive. So instead they wrote a little chunk of software to nose the plane down every now and again and sent pilots a little paragraph about the new system. Regulators OK'd it, and the plane bypassed the recertification process.

    The software relied on a sensor to tell it when to nose the plane down, and there was no backup sensor. If the sensor went bad, the software would crash the plane based on the bad data coming in. There was supposed to be a way to bypass the software, but it was a little counter-intuitive and if a trained pilot did the things that they would normally do to stop the plane from crashing, the software would take over and crash the plane anyway.
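    The single-sensor failure mode maps onto a classic redundancy argument: vote among several sensors instead of trusting one. A sketch (this simplifies the real system enormously; the threshold and readings below are invented):

    ```python
    # With one angle-of-attack sensor, a stuck reading drives the decision.
    # With three sensors and a median vote, one faulty sensor can't dominate.
    # Threshold and readings invented for illustration.
    from statistics import median

    def commanded_pitch_single(aoa_reading, limit_deg=15.0):
        # Trusts one sensor: a stuck-high reading forces nose-down.
        return "nose down" if aoa_reading > limit_deg else "hold"

    def commanded_pitch_voted(aoa_readings, limit_deg=15.0):
        # Median vote: a single outlier reading is simply ignored.
        return "nose down" if median(aoa_readings) > limit_deg else "hold"

    faulty = [4.0, 70.0, 5.0]  # middle sensor stuck high
    print(commanded_pitch_single(faulty[1]))  # "nose down" -- the bad outcome
    print(commanded_pitch_voted(faulty))      # "hold" -- outlier outvoted
    ```

    The design lesson is old: for life-critical inputs you want enough redundant sensors to out-vote a single failure, plus an override the crew's trained instincts actually reach.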

    Spoiler: petrification

    Why are the dwarves relying on what amounts to a piece of software to act as judge, jury and executioner, anyhow? Durkon should totally appeal his petrification on the basis that the destruction of property was necessary and justified to save the lives of the council (and incidentally everyone else in the world) from immediate death.

    For a supposedly Lawful society, their legal system is awfully arbitrary. Having some random batch of runes spontaneously petrify people on the basis of an unfounded accusation is no way to run a system of justice.


    Here is a little comic about how software engineers feel about electronic voting systems. https://xkcd.com/2030/

    We generally feel the same way about self-driving cars, I think. "Don't trust them, and don't listen to anyone who tells you they're safe. If you rely on them, everyone will die. There are lots of very smart people working on these very difficult projects. We should fund and encourage them, and do things the other way until everyone currently working in the field has retired."
    Last edited by diremage; 2019-07-30 at 12:12 PM.

  14. - Top - End - #314
    Titan in the Playground
     
    Grey_Wolf_c's Avatar

    Join Date
    Aug 2007

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by diremage View Post
    Why are the dwarves relying on what amounts to a piece of software to act as judge, jury and executioner, anyhow?
    They are not. They are relying on the runes to prevent the voting from being interrupted. When voting is over, the person is de-petrified (automatically, as far as I know), and then justice can take place.

    Quote Originally Posted by diremage View Post
    Here is a little comic about how software engineers feel about electronic voting systems. https://xkcd.com/2030/

    We generally feel the same way about self-driving cars, I think.
    No, "we" don't. The issues faced by electronic voting and AI driving are completely different, and the concerns unrelated.

    Grey Wolf
    Last edited by Grey_Wolf_c; 2019-07-30 at 12:21 PM.

  15. - Top - End - #315
    Bugbear in the Playground
     
    Fish's Avatar

    Join Date
    Oct 2007
    Location
    Olympia, WA

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by mjasghar View Post
    Maybe this is a cultural thing but I’m amazed so many people are willing to let a computer rule their life
    It’s not the computer I mind. Automated systems can respond faster to more data than any human. By any measure they are safer and possibly more efficient (when you eliminate such aberrations as satnav directing drivers along non-existent routes through fields and private property and lakes). Computers aren’t yet good at the little things humans do, like “Hey, isn’t that your friend Maria in the next lane?” and “Cruise around the block until the wife comes out of the shoe store” and “Pull over real quick, I want to get the mail” and “Stop, I think I saw our cat hiding under that bush” and “I’m using this exit from the parking lot instead because it’s easier to make a left turn from there at this time of day” and “I see some guys fishing. Look for a trail down the river and we’ll park as close as we can” and “Slow down for a minute — is the engine making a funny noise?” Computers will get there, if we build the right systems.

    No, it’s the people who program the computer — rather, it’s the corporations that pay the people to program the computer and set the programmers’ priorities — that I don’t completely trust.
    The Giant says: Yes, I am aware TV Tropes exists as a website. ... No, I have never decided to do something in the comic because it was listed on TV Tropes. I don't use it as a checklist for ideas ... and I have never intentionally referenced it in any way.

  16. - Top - End - #316
    Bugbear in the Playground
     
    Fish's Avatar

    Join Date
    Oct 2007
    Location
    Olympia, WA

    Default Re: OOTS #1172 - The Discussion Thread

    On topic: what I want to see is this.

    Gontor: So you’re the foolish old woman who sold all her jewels to save others.

    Sigdi: Nope. Didn’t sell ‘em all. (stands in sunlight, holds up diamond)

  17. - Top - End - #317
    Dragon in the Playground Moderator
     
    Peelee's Avatar

    Join Date
    Dec 2009
    Location
    Birmingham, AL
    Gender
    Male

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Fish View Post
    It’s not the computer I mind. Automated systems can respond faster to more data than any human. By any measure they are safer and possibly more efficient (when you eliminate such aberrations as satnav directing drivers along non-existent routes through fields and private property and lakes). Computers aren’t yet good at the little things humans do, like “Hey, isn’t that your friend Maria in the next lane?” and “Cruise around the block until the wife comes out of the shoe store” and “Pull over real quick, I want to get the mail” and “Stop, I think I saw our cat hiding under that bush” and “I’m using this exit from the parking lot instead because it’s easier to make a left turn from there at this time of day” and “I see some guys fishing. Look for a trail down the river and we’ll park as close as we can” and “Slow down for a minute — is the engine making a funny noise?” Computers will get there, if we build the right systems.

    No, it’s the people who program the computer — rather, it’s the corporations that pay the people to program the computer and set the programmers’ priorities — that I don’t completely trust.
    Yep. The best thing about computers is they do exactly what you tell them to, and the worst thing about computers is they do exactly what you tell them to.
    Cuthalion's art is the prettiest art of all the art. Like my avatar.

    Number of times Roland St. Jude has sworn revenge upon me: 2

  18. - Top - End - #318
    Dwarf in the Playground
     
    diremage's Avatar

    Join Date
    Sep 2015

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    They are not. They are relying on the runes to prevent the voting from being interrupted. When voting is over, the person is de-petrified (automatically, as far as I know), and then justice can take place.

    No, "we" don't. The issues faced by electronic voting and AI driving are completely different, and the concerns unrelated.

    Grey Wolf
    Do you write life-critical software? A healthy dose of paranoia is completely rational and justified if I know that, when I make an off-by-one error or hit an edge case, my code will literally kill someone. Yes, we can patch it later, and there are processes in place to catch my screw-ups, but at the end of the day that someone will be dead.

    Self-driving cars have all the unsolved security concerns of electronic voting booths, and ALSO have the problem that they sometimes mistake a pedestrian for a spot on the ground and drive over them. Or they sometimes decide, "I can totally fit under that truck," and decapitate their driver.
    Last edited by diremage; 2019-07-30 at 12:34 PM.

  19. - Top - End - #319
    Titan in the Playground
     
    Jasdoif's Avatar

    Join Date
    Mar 2007
    Location
    Oregon, USA

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    No, "we" don't. The issues faced by electronic voting and AI driving are completely different, and the concerns unrelated.
    For starters, car crashes are way easier to detect after the fact than vote fraud is.

  20. - Top - End - #320
    Titan in the Playground
     
    Grey_Wolf_c's Avatar

    Join Date
    Aug 2007

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by diremage View Post
    Do you write life-critical software?
    None of your damned business. Don't claim to speak for all software engineers if you meant a tiny subset thereof. And even then, I suspect even "life-critical software engineers" might not agree with your broad assessment of AI driving.

    Quote Originally Posted by diremage View Post
    Self-driving cars have all the unsolved security concerns of electronic voting booths
    No, they don't. The concerns of electronic voting stem from the dual, irresolvable needs of anonymous voting and a verifiable vote count. There is no software that can both be verified to have counted correctly and at the same time not reveal who voted for what (other than "the world's most expensive pencil": code that simply prints out the vote, so the person who punched it can verify it before carrying it to the ballot box).

    AI driving doesn't need to keep secret who told the car to drive from A to B. Its security concerns are not comparable to the issues of electronic voting.
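    The tension can be illustrated with a toy example (not a real voting protocol; the names and ballots below are invented):

    ```python
    # Toy illustration of the verifiability/anonymity tension: a record that
    # lets anyone recount the election names every voter's choice, while a
    # record with the names stripped can't be checked by any individual voter.

    ballots = [("alice", "A"), ("bob", "B"), ("carol", "A")]

    # Verifiable: anyone can recount from the public record...
    public_record = list(ballots)
    tally = {}
    for voter, choice in public_record:
        tally[choice] = tally.get(choice, 0) + 1
    # ...but every vote is now public.

    # Anonymous: strip the names...
    anonymous_record = [choice for _, choice in ballots]
    # ...and now "alice" has no way to confirm her ballot is among those
    # counted; any record with the same multiset of choices looks identical.

    print(tally)                     # {'A': 2, 'B': 1}
    print(sorted(anonymous_record))  # ['A', 'A', 'B']
    ```

    Real e-voting research tries to thread this needle cryptographically; the toy version just shows why naive designs must give up one property or the other.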

    Grey Wolf
    Last edited by Grey_Wolf_c; 2019-07-30 at 12:38 PM.

  21. - Top - End - #321
    Pixie in the Playground
    Join Date
    Sep 2008

    Default Re: OOTS #1172 - The Discussion Thread

    And lo, 11 pages into the thread, a clear need for specificity has become the bane of anyone wishing not to be verbose.

    On the issue of self-driving cars, I think we need to work on "cars with a third-person camera" before we even broach the subject of vehicular intelligence. I can avoid what I can see, and it's not fair to compare me to a car/system that has a spherical frame of reference, if I can't see all the same things it can.

    And as long as we're on the camera problem, therein lies the trouble with speculating on the future of a comic medium: everything that's about to happen may not have a precedent in-frame.

  22. - Top - End - #322
    Dwarf in the Playground
     
    diremage's Avatar

    Join Date
    Sep 2015

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    None of your damned business. Don't claim to speak for all software engineers if you meant a tiny subset thereof. And even then, I suspect even "life-critical software engineers" might not agree with your broad assessment of AI driving.
    And this is why software engineers are unlikely to ever successfully unionize, even if it were in our own best interests to do so xD

    Don't put words in my mouth about who I'm claiming to speak for. If there weren't people who disagreed with me, we wouldn't have self-driving cars running people over and killing their drivers and passengers right now.

    AI cars don't need to keep secret who tells them to do what, but they DO need to avoid getting hit with ransomware that tells the car's passengers, "Please deposit 3 bitcoins in the next 5 minutes or your car is going to drive you off a cliff. Thank you for your business!"

    The security concerns are a bit overshadowed right now by the difficulty of getting the systems to work at all, but only a bit.

  23. - Top - End - #323
    Titan in the Playground
     
    Grey_Wolf_c's Avatar

    Join Date
    Aug 2007

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Crookwise View Post
    On the issue of self-driving cars, I think we need to work on "cars with a third-person camera" before we even broach the subject of vehicular intelligence. I can avoid what I can see, and it's not fair to compare me to a car/system that has a spherical frame of reference, if I can't see all the same things it can.
    ...

    Doesn't your car come with mirrors that allow you to see to your sides and back?

    Also, the large, large majority of accidents are not caused by blind spot collisions, but by human error when the information was available, but was not properly consumed (whether because of overconfidence {"I can make this red light"}, tiredness, impairment from legal or illegal drugs or distractions such as phones). Unless you have a way to enforce 100% attention on the road on humans, the AI will have multiple advantages over you, not just its 360 degree vision. That's kinda the point.

    Quote Originally Posted by diremage View Post
    Don't put words in my mouth about who I'm claiming to speak for.
    Then don't use "we" when you mean "me" unless you are royalty.

    Quote Originally Posted by diremage View Post
    If there weren't people who disagreed with me, we wouldn't have self-driving cars running people over and killing their drivers and passengers right now.
    Irrelevant to the issue of voting security.

    Quote Originally Posted by diremage View Post
    AI cars don't need to keep secret who tells them to do what, but they DO need to avoid getting hit with ransomware that tells the car's passengers, "Please deposit 3 bitcoins in the next 5 minutes or your car is going to drive you off a cliff. Thank you for your business!"
    And this has happened how often? Ah, yes, never.

    But what has happened is non-AI cars being turned off in the middle of highways. So the problem here is not with the self-driven AI at all.

    Grey Wolf
    Last edited by Grey_Wolf_c; 2019-07-30 at 12:55 PM.

  24. - Top - End - #324
    Bugbear in the Playground
     
    HalflingRogueGuy

    Join Date
    May 2018

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Jasdoif View Post
    For starters, car crashes are way easier to detect after the fact than vote fraud is.
    Unless vampires!

    You can detect when vampires are manipulating the vote.

  25. - Top - End - #325
    Pixie in the Playground
    Join Date
    Sep 2008

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    Doesn't your car come with mirrors that allow you to see to your sides and back?

    Also, the large, large majority of accidents are not caused by blind spot collisions, but by human error when the information was available, but was not properly consumed (whether because of overconfidence {"I can make this red light"}, tiredness, impairment from legal or illegal drugs or distractions such as phones). Unless you have a way to enforce 100% attention on the road on humans, the AI will have multiple advantages over you, not just its 360 degree vision. That's kinda the point.
    Heck if I know what kind of sensors cars have these days, but I know my mirrors don't let me see outside the human-visible spectrum. And mirrors are great but gauging distance isn't one of the functions they're ideal for.

    Quote Originally Posted by Grey_Wolf_c View Post
    Then don't use "we" when you mean "me" unless you are royalty.
    Our schismatic self prefers "we", since it's crowded in here. EDIT: Which I'm pretty sure is what Walt Whitman was copping to in that poem. Either that, or that he was Legion. Perhaps both.
    Last edited by Crookwise; 2019-07-30 at 01:03 PM.

  26. - Top - End - #326
    Titan in the Playground
     
    Grey_Wolf_c's Avatar

    Join Date
    Aug 2007

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Crookwise View Post
    Heck if I know what kind of sensors cars have these days, but I know my mirrors don't let me see outside the human-visible spectrum. And mirrors are great but gauging distance isn't one of the functions they're ideal for.
    So you'd want some kind of helmet that provided you with 360 degree vision, and allowed you to see in infrared and magnetic vision? Isn't that how MechWarriors operate their BattleMechs?

    Ok, I'll grant you it'd be cool, but I'm not sure it'd address the problem of overconfidence, distraction and impairment I mentioned above.

    Grey Wolf

  27. - Top - End - #327
    Ogre in the Playground
     
    GreatWyrmGold's Avatar

    Join Date
    May 2009
    Location
    In a castle under the sea
    Gender
    Male

    Default Re: OOTS #1172 - The Discussion Thread

    Spoiler: Comical Quandries
    Show

    Quote Originally Posted by Ironsmith View Post
    Alternatively, trick him into violating dwarven law. Granted, being petrified will probably protect him from sunlight, but as long as nobody burns a Remove Curse or some such on him, he's a neutralized threat.
    I wonder what happens to dominated people if the dominator is petrified. Probably the same thing that happens if they die, re-die, or are otherwise stopped, but I'd find it amusing if they just stood there, waiting for their next instructions from the newest statue in the chamber.


    Quote Originally Posted by WindStruck View Post
    I've been talking about non-dwarves the entire time.
    If I actually thought you were talking about the creature type, I probably wouldn't have asked for clarification until after a confused reply from you. I was trying to flag the response as a joke.


    Quote Originally Posted by Dion View Post
    Unless vampires!
    You can detect when vampires are manipulating the vote.
    Yeah, but the dwarves clearly can't.

    Spoiler: Car Stuff
    Show

    Quote Originally Posted by Rogar Demonblud View Post
    No, there are states where self-serve gas is actually illegal. New Jersey and Oregon, IIRC.
    Huh, didn't know that. I guess that's one way to bolster the number of minimum-wage jobs nobody with other options wants in your state...can't think of any other reason to make self-serve illegal.


    Quote Originally Posted by diremage View Post
    -snip, I'm talking about the part about accidents specifically-
    Probably worth pointing out two things.
    1. Humans cause accidents, too. The reasons are different (not noticing pedestrians instead of mistaking them for puddles), but they happen.
    2. There's one important point I buried in the middle of a joke a few posts ago. When a human gets in an accident, nobody except maybe the people in the accident can learn from it. It's harder to figure out why self-driving cars made the mistakes they did and teach them not to do it again, but once you do, every self-driving car (with sufficiently-compatible hardware and an owner who bothers to download the software updates) learns from the mistakes of one.
    I wish there was a way to figure out where the faults in current self-driving systems lie without risking lives, but from my understanding, the state of the art is beyond the point where it can learn from driving around on controlled test tracks.


    Quote Originally Posted by Mightymosy View Post
    In my scenario all cars are controlled by a central computer: thus, they won't run into each other because the PC controls all of them.
    From my understanding, moving cars seem to be the thing self-driving cars tend to have the least issue with. They're big, they're shiny, and they're a big part of the controlled-test-drive tracks people started training them on. The biggest remaining problem for self-driving cars is pedestrians, and the only reason wildlife and "miscellaneous inanimate obstructions" don't tie with them is because pedestrians are everywhere.
    Also, there's the communication problem. Wireless communication with enough bandwidth to handle everything you'd need for driving every car isn't available in many places; you'd need to have most of the driving software in the car itself (both for non-auto-automobile obstacles and for not overloading local wireless). Of course, wired connections could solve that issue, but at that point it makes more sense to just build tram cars or light rail. (Which is a good idea, mind you, it just solves different problems in our transportation network.)

    Don't get me wrong, an all-auto-automobile road would have massive advantages over a semi-auto, semi-manual-automobile road. But they'd be things like "We can iron out inefficiencies caused by making central signals that change slowly enough for semi-distracted humans to notice," not "Self-driving programming is easy now".


    Quote Originally Posted by Grey_Wolf_c View Post
    Blaming the problem on the software rather than in companies being skinflints is missing the whole point.
    Of course, skinflint companies can skimp on software development...but the nice thing about software is that (barring licensing fees) one line of code is as cheap as any other. It'll be hard for new auto-automobile programmers to enter the market (assuming there's no publicly-available database), but once the giants have a solid self-driving program, there won't be any reason for them to provide a worse product.
    ...Unless they wanted to skimp on the sensors, I guess, so here's hoping future automotive lawmakers make minimum sensor requirements for self-driving cars.


    Quote Originally Posted by Worldsong View Post
    I think the idea is that while all the cars could realistically avoid harm from each other they can't be expected to get a perfect score on dealing with sudden changes brought upon by external sources. And even then the idea would be that the amount of accidents would be minimal but just in case such a situation does occur the car should have some idea of how to react, if only to minimize the tragedy.
    I've always thought this was kind of a silly point. I don't see how switching to a different logic and calculating the number of deaths caused by each course of action during the last fraction of a second before a crash would be meaningfully better than just keeping the "avoid crashing" functions running during that fraction of a second.
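    That point can be sketched as a controller that evaluates one objective, predicted collision severity, on every tick, so nothing changes in the last fraction of a second. The physics model and the available braking levels here are invented placeholders, not anything from a real system.

```python
def predicted_severity(speed: float, brake: float, distance: float) -> float:
    """Toy model: speed remaining (m/s) when the gap closes; 0 if we stop in time."""
    stopping_distance = speed ** 2 / (2 * brake)   # constant deceleration
    if stopping_distance <= distance:
        return 0.0
    return (speed ** 2 - 2 * brake * distance) ** 0.5  # v_i^2 = v^2 - 2*a*d

def choose_action(speed: float, distance: float) -> float:
    """Pick the gentlest braking level (m/s^2) that minimizes predicted severity."""
    options = [2.0, 4.0, 6.0, 8.0]
    return min(options, key=lambda b: predicted_severity(speed, b, distance))

# Same loop far from the obstacle and right before impact:
print(choose_action(20.0, 50.0))  # 4.0: moderate braking already stops in time
print(choose_action(20.0, 5.0))   # 8.0: crash unavoidable; same objective picks hardest braking
```

    No mode switch is needed: when a crash becomes unavoidable, minimizing the same severity estimate already yields the least-bad action.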


    Quote Originally Posted by pendell View Post
    It occurs to me that human casualties are going to result in a net loss to market share and consequently to ROI. People might flee the vehicles as "death traps".
    I thought they already did. Isn't that what happened to the Pinto?


    Quote Originally Posted by Grey_Wolf_c View Post
    So to prevent retraining, they automatized in software. And now people blame the software, rather than blaming Boeing for not biting the bullet and accepting that to compete with the new airbus models, they need a new design of their own, and every 737 pilot just needs to accept their 4 decade old model is obsolete, can't compete with modern designs, and it is time to learn to fly a new plane. And the people that employ them need to accept they need to pay for the retraining as well as for the new planes.
    The last point is the biggest one. The way this went down is that American Airlines made an order for a bunch of 737 upgrades (which Boeing later designed and called the MAX), because they didn't want to retrain pilots but did want better planes. The pilots don't call the shots in what planes they fly, and Boeing could have turned down the contract but mumble business mumble numbers mumble okay they bear some of the responsibility for not accepting that the customer can be wrong.
    Also, out of curiosity, are we all not-citing the same video? Because a lot of the talking points sound familiar, just viewed through another lens


    Quote Originally Posted by Grey_Wolf_c View Post
    No, "we" don't. The issues faced by electronic voting and AI driving are completely different, and the concerns unrelated.
    Wait, you mean software engineers aren't some kind of hivemind hellbent on linking everything to that Skynet social media OS in the crappy Terminator movie I saw part of once?

    Quote Originally Posted by Grey_Wolf_c View Post
    AI driving doesn't need to keep secret who told them to drive from A to B. It's security concerns are not comparable to the issue of electronic voting.
    They aren't even linked to the same definition of "security"! I'd say that they're both things the public is overly-paranoid about, except that the public seems underly-paranoid about electronic voting machines.


    Quote Originally Posted by Crookwise View Post
    On the issue of self-driving cars, I think we need to work on "cars with a third-person camera" before we even broach the subject of vehicular intelligence. I can avoid what I can see, and it's not fair to compare me to a car/system that has a spherical frame of reference, if I can't see all the same things it can.
    Quote Originally Posted by Crookwise View Post
    Heck if I know what kind of sensors cars have these days, but I know my mirrors don't let me see outside the human-visible spectrum. And mirrors are great but gauging distance isn't one of the functions they're ideal for.
    Why? I don't care if you have a perfectly good excuse for not seeing that car—if a computer with 360 degree vision or radar or whatever wouldn't have collided with it, it's a better driver. (Assuming all else is held equal, of course.)
    Okay, the reasons for your inferiority are out of your control. Great, it shouldn't make you feel like a bad person. It's still going to be the difference between life and death in a significant proportion of circumstances.


    Quote Originally Posted by diremage View Post
    AI cars don't need to keep secret who tells them to do what, but they DO need to avoid getting hit with ransomware that tells the car's passengers, "Please deposit 3 bitcoins in the next 5 minutes or your car is going to drive you off a cliff. Thank you for your business!"
    That requires an absurdly specific list of things the self-driving car can't do that it should have to do/can do that it shouldn't have a reason to, plus a hacker who knows the auto-automobile code well enough to manipulate it into doing something hilariously outside its intended function and can't find a better way to make money with his skills. (Also for bitcoins to be consistently valuable and common enough for random commuters to have a few, but that's a less-important issue.)

    Spoiler: mjasghar
    Show

    Quote Originally Posted by mjasghar View Post
    That’s what someone said
    Let the AIs learn by themselves
    No consideration of collateral damage as a result and even saying only 2 performance indicators matter
    Probably safe to assume "safety" was one of the performance indicators. It's safe to assume that because the other two options are "speed" and..."scenery," I guess?

    Do you say oh the driver can take over? That would require a constantly alert driver who can override at any time for any reason - which invalidates the whole idea and causes issues with liability
    Issues with liability, yes. Requires an alert driver, yes. Invalidates the whole idea, no. Have you ever been on a long road trip? Imagine being able to relax in the driver's seat and talk with your friends and family instead of navigating through rush hour traffic because you got too close to a big city at the wrong time! I've never been in the driver's seat for that situation, but "driver needs to put his full attention on the road" isn't fun for anyone. Just being able to relax and keep an eye out for the "Warning, we need human intervention!" siren would be a vast improvement over manual automobiles.
    Also, most self-driving cars have dozens of redundant sensors. The Tesla has everything from cameras to radar. Bird poop on one sensor won't make it crash.
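    The redundancy claim sketches easily: fuse overlapping distance estimates with a median, and one fouled sensor cannot drag the result. Sensor names and readings below are made up for illustration.

```python
from statistics import median

def fuse(readings: dict) -> float:
    """Median over independent distance estimates (metres) of the same obstacle."""
    return median(readings.values())

healthy = {"camera": 24.8, "radar": 25.1, "ultrasonic": 25.0}
fouled = {"camera": 2.0, "radar": 25.1, "ultrasonic": 25.0}  # bird poop on the lens

print(fuse(healthy))  # 25.0
print(fuse(fouled))   # 25.0: the outlier is simply outvoted
```

    Real systems use far more elaborate fusion, but the principle of outvoting a single bad channel is the same.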

    It’s amazing that people can read a comic which puts forward a situation where blindly following a system leads to disaster are arguing for blindly putting their lives in the hands of a system
    It's amazing that people can read arguments that self-driving cars are imperfect but better than human drivers and argue that those people are blindly trusting the cars.
    Alternatively, it's amazing that people can read a comic which puts forward a situation where blindly trusting other people leads to disaster are arguing for blindly putting their lives in the hands of other people. Because that's what driving is—you put your life in the hands of every other driver on the road.

    No, that's a lie, both of those are what society is. You need to put your trust in the individuals around you and the systems we're all part of. You need to trust that nobody in the grocery store parking lot will run you over to steal your groceries, and that if they did your health insurance/national health plan would work with your local hospital to ensure you didn't die while the courts properly punished the a-hole who ran you over. You cannot function in a society without some level of trust.
    But trust does not require blindness. You can acknowledge that a system is imperfect while still trusting it. I sure hope you can, at least, because doing otherwise means you either blindly trust the modern sociopolitical arrangement (let's not go into details about why that's unwise) or you live in paranoia about either government agents screwing you or normal people screwing you while the government does nothing.


    Quote Originally Posted by mjasghar View Post
    When someone says it’s okay for people to die to get progress it’s hardly a straw man
    Someone else must die for the greater good
    Let's assume that's what SOMEONE is saying. Let's also consider that the progress is stopping a system where even more people are being killed. In fact, from what I understand of the statistics, the tests are killing fewer people than would have died if the tests were conducted by human drivers in manual automobiles!
    See how details matter?


    Quote Originally Posted by mjasghar View Post
    So who programs the software that decides what an acceptable fatality rate is?
    And who stops some rich person getting the software hacked so their life is always the priority?
    Maybe this is a cultural thing but I’m amazed so many people are willing to let a computer rule their life
    1. Nobody. What, do you think there's a point where the software goes "Okay, I've prevented enough fatalities this month, time to stop trying"? No, that's what people do. An AI would always do the best it could to accomplish its goals, 24/7. That is, in fact, what people consider the greatest risk of AI to be. (Also, the standard answer is "When the fatality rate is lower than human drivers." Which is easier than it sounds, since as noted AIs work at peak effectiveness 24/7 due to not getting hungry or drunk or cranky.)
    2. That would require the rich guy to hack in a ridiculously complex chunk of code through a ridiculously unprotected connection that should not exist. If that existed, so many people would be stuffing untested code into the system that it would bug out, conveniently removing it from the equation. (Besides, the software would need to be updated to account for the airborne swine.)
    3. I'm amazed so many people are willing to let people rule their lives. Let's face it, people are kinda terrible. Pol Pot? Stalin? That prick who cut me off in traffic? All people.
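    For what it's worth, the "lower fatality rate than human drivers" bar from point 1 is just a per-mile rate comparison. The figures below are illustrative placeholders, not real statistics.

```python
def per_100m_miles(fatalities: float, miles: float) -> float:
    """Fatalities per 100 million vehicle-miles, the usual normalization."""
    return fatalities / miles * 100_000_000

# Placeholder numbers, chosen only to show the comparison:
human = per_100m_miles(fatalities=36_000, miles=3_200_000_000_000)
ai = per_100m_miles(fatalities=3, miles=500_000_000)

print(round(human, 3), round(ai, 3))  # 1.125 0.6
print(ai < human)  # True
```

    The normalization matters: raw fatality counts mean nothing without the miles driven behind them.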


    Spoiler: The longer a GitP discussion goes on, the more likely it is to devolve into Star Wars
    Show

    Quote Originally Posted by Schroeswald View Post
    I though TFA was really good and that Solo was okay, it’s much easier to write an okay movie than a really good one so my chatbot has only reached that far.
    Fair enough. My personal quality ranking put them a bit lower, but that's mostly because I usually dislike standard Hollywood blockbusters (e.g, most Star Wars) unless they do something unique to spice things up. (Though there are exceptions; the more Aquaman accepted the colorful over-the-top nature of its source material, the more I enjoyed it.)
    Incidentally, that point is why The Last Jedi is my favorite Star Wars movie, bar none. It has structural flaws, but the flaws that actually matter aren't the ones most people whine about*, so I tend not to take critics seriously until they provide a detailed explanation of why they dislike it.

    *Which I'd say is evidence that Star Wars critics are chatbots, but the complaints mirror Social Injustice Warrior talking points that even chatbots know not to em
    Quote Originally Posted by The Blade Wolf View Post
    Ah, thank you very much GreatWyrmGold, you obviously live up to that name with your intelligence and wisdom with that post.
    Quotes, more

    Winner of Villainous Competitions 8 and 40; silver for 32
    Fanfic

    Pixel avatar by me! Other avatar by Recaiden.

  28. - Top - End - #328
    Bugbear in the Playground
     
    Ironsmith's Avatar

    Join Date
    Mar 2017
    Location
    US
    Gender
    Male

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Dion View Post
    Unless vampires!

    You can detect when vampires are manipulating the vote.
    Unless they're digital vampires. They've got some nasty bytes.
    Who're you? ...Don't matter.

    Want some rye? 'Course ya do!


    Here's to us.
    Who's like us?
    Damn few,
    and they're aaall dead.


    *gushes unintelligibly over our cat, Sunshine*

    [Nexus characters, grouped by setting:
    Ouroboros: here
    Maesda: here
    Others: here
    ]

  29. - Top - End - #329
    Dragon in the Playground Moderator
     
    Peelee's Avatar

    Join Date
    Dec 2009
    Location
    Birmingham, AL
    Gender
    Male

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    Also, the large, large majority of accidents are not caused by blind spot collisions, but by human error when the information was available, but was not properly consumed (whether because of overconfidence {"I can make this red light"}, tiredness, impairment from legal or illegal drugs or distractions such as phones).

    Grey Wolf
    I would say the biggest cause of accidents can be summed up in one simple phrase that encapsulates all those things: "Not being predictable." If someone is predictable, then a collision is very likely avoidable, regardless of what they're doing. Driving the wrong way on a one-way street? If everyone can predict that the wrong-doer is going to stay in their lane, or change to a specific lane, they can all avoid getting hit. Car changing lanes while you're in their blind spot? If you can tell they're going to try, you can speed up, slow down, change lanes yourself, or honk as they start. Tiredness, impaired judgement, overconfidence, all these things wreck predictability.
    Cuthalion's art is the prettiest art of all the art. Like my avatar.

    Number of times Roland St. Jude has sworn revenge upon me: 2

  30. - Top - End - #330
    Pixie in the Playground
    Join Date
    Sep 2008

    Default Re: OOTS #1172 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    So you'd want some kind of helmet that provided you with 360 degree vision, and allowed you to see in infrared and magnetic vision? Isn't that how MechWarriors operate their BattleMechs?

    Ok, I'll grant you it'd be cool, but I'm not sure it'd address the problem of overconfidence, distraction and impairment I mentioned above.

    Grey Wolf
    Oh heck, as long as we're in the realm of science-ish stuff, why not ocular implants? Some people are already working on that, after all.

    Quote Originally Posted by GreatWyrmGold View Post
    Why? I don't care if you have a perfectly good excuse for not seeing that car—if a computer with 360 degree vision or radar or whatever wouldn't have collided with it, it's a better driver. (Assuming all else is held equal, of course.)
    Okay, the reasons for your inferiority are out of your control. Great, it shouldn't make you feel like a bad person. It's still going to be the difference between life and death in a significant proportion of circumstances.
    Yeah and if an AI can do it better we should at least demand the drivers on the road be given the means to approach its capabilities if we're going to call them qualified to drive. Which is really the only reason I jumped into this conversation; unaugmented humans are a liability even if you accept that no one needs to be ideal.
