PowerSwitch
The UK's Peak Oil Discussion Forum & Community
Driverless cars

biffvernon
Posted: Sun Mar 06, 2016 9:49 am    Subject: Driverless cars

There's a rather good letter in this week's New Scientist

Keith Macpherson wrote:


Driverless cars could lead to unintended consequences. At present, pedestrians are reluctant to step out into traffic: they don't want to be hit by a car. But in the future, they will learn they can freely cross busy roads. Driverless cars will stop because of Isaac Asimov's First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Gridlock will ensue.
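
The failure mode the letter describes is easy to caricature in code. Here's a toy sketch (Python, purely illustrative, nothing to do with any real vehicle's control software): if the rule is an unconditional "yield to any detected pedestrian", a steady trickle of crossers pins the car's speed at zero.

Code:

# Toy caricature of the letter's point, not real control software.

def plan_speed(detected_pedestrians, current_speed_mps):
    """Return a target speed given pedestrians detected in the car's path."""
    if detected_pedestrians:      # Asimov's First Law, crudely encoded:
        return 0.0                # never risk harming a human - stop.
    return min(current_speed_mps + 1.0, 13.4)  # otherwise creep up to ~30 mph

print(plan_speed(detected_pedestrians=["person crossing"], current_speed_mps=8.0))  # -> 0.0

# On a busy street there is nearly always someone stepping out, so
# detected_pedestrians is rarely empty and the target speed stays
# pinned at zero: the gridlock the letter predicts.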


Little John
Posted: Sun Mar 06, 2016 10:31 am

Another major philosophical issue with driverless cars is the making of moral judgements between the value of different human lives. For example, imagine I am driving along and a pedestrian steps out into the road. I am faced with the following moral dilemma: swerve to the left and career down a ravine to my very likely death, swerve to the right and run a very real risk of hitting an oncoming vehicle, or carry on forwards, almost certainly killing the pedestrian but probably sparing both myself and the occupants of any oncoming vehicles.

None of the above choices is morally consequence-free. Nevertheless, I must make one of them, justify my actions afterwards and possibly face the legal consequences of those actions. How, in a driverless car, is a computer to be held morally or legally responsible for those actions?

One answer, presumably, would be to build some kind of very complex moral algorithm into the computer in advance. But that raises two questions. First, who gets to decide what is an acceptable moral algorithm and what is not? Second, would any manufacturer of such devices ever be persuaded to sign up to such an unpredictable and potentially unlimited moral and legal liability?
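
To make the point concrete, here is the sort of thing such a "moral algorithm" would have to reduce to. This is a hypothetical sketch with invented casualty estimates; every number in it is a moral judgement that somebody, somewhere, would have had to sign off in advance.

Code:

# Hypothetical sketch of a "moral algorithm" - every figure below is an
# invented value judgement, not anything a real manufacturer has published.

OPTIONS = {
    "swerve_left":  {"occupant_deaths": 0.9, "pedestrian_deaths": 0.0, "third_party_deaths": 0.0},
    "swerve_right": {"occupant_deaths": 0.3, "pedestrian_deaths": 0.0, "third_party_deaths": 0.5},
    "continue":     {"occupant_deaths": 0.0, "pedestrian_deaths": 0.95, "third_party_deaths": 0.0},
}

# Who decided these weights? Equal weighting is itself a moral stance;
# so would be weighting occupants above pedestrians, or vice versa.
WEIGHTS = {"occupant_deaths": 1.0, "pedestrian_deaths": 1.0, "third_party_deaths": 1.0}

def choose(options=OPTIONS, weights=WEIGHTS):
    """Pick the option with the lowest weighted expected deaths."""
    def cost(outcome):
        return sum(weights[k] * p for k, p in outcome.items())
    return min(options, key=lambda name: cost(options[name]))

print(choose())  # -> "swerve_right" under these invented numbers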

It strikes me that the technology behind driverless cars is entirely driven by economics, and that the moral dilemmas I have outlined above are very real but will be bulldozed over by the juggernaut of that economic imperative, leaving the rest of us to pick up the moral and legal debris somewhere down the line.

All of the above taps into a wider debate about where it is and is not appropriate to use AI. For me, it is appropriate only where the "decisions" made by such technologies carry no immediate moral consequences, since a computer simply cannot be held morally responsible for those decisions and it is utterly impractical to expect manufacturers to predict every individual moral nuance of such decisions in advance.

Driverless cars clearly fall foul of these principles.

clv101 (Site Admin)
Posted: Sun Mar 06, 2016 10:55 am

Little John wrote:
It strikes me that the technology behind driverless cars is entirely driven by economics ...

Indeed, economics drives it. However, looking at the wider moral picture, today's human-driven cars kill around 2,000 people each year in the UK and 34,000 in the US (with around twice the fatality rate per km). If AI can cut that rate in half, saving a thousand lives per year in the UK, isn't there a moral imperative to deploy the technology, even if individual decisions are questionable?

AI cars don't have to be perfect, they just have to be better than the average human driver.

Little John
Posted: Sun Mar 06, 2016 11:07 am

clv101 wrote:
Little John wrote:
It strikes me that the technology behind driverless cars is entirely driven by economics ...

Indeed, economics drives it. However, looking at the wider moral picture, today's human-driven cars kill around 2,000 people each year in the UK and 34,000 in the US (with around twice the fatality rate per km). If AI can cut that rate in half, saving a thousand lives per year in the UK, isn't there a moral imperative to deploy the technology, even if individual decisions are questionable?

AI cars don't have to be perfect, they just have to be better than the average human driver.
Individual moral responsibility does not work like that. What you are talking about is utilitarianism, which is questionable in principle. Though I am personally not so troubled by it where the moral consequences of utilitarian policies are not immediate and directly relatable to individual moral decisions and actions. That's a kind of fudge in itself, I concede, but it's just about possible to use that kind of fudge as a justification of the "greater good".

An example would be where it is NHS policy to divert resources into one area more than another as part of a wider strategy to promote the greater good. By the time such a policy filters down to the actual NHS practitioners on the ground, they do not have to face the moral dilemma of deciding to treat one person over another; they simply use the available resources in the manner in which they have been allocated.

Where it becomes completely untenable is where the moral consequences of a utilitarian policy are directly linked to specific decisions made in real time in the minutiae of people's lives, decisions that directly lead to someone dying, as in the driverless-car dilemma I outlined in my previous post.

You are conflating individual moral responsibility with utilitarianism. They are different things and cannot simply be swapped for one another.



clv101 (Site Admin)
Posted: Sun Mar 06, 2016 11:19 am

Would you support driverless cars in the UK if they were shown to halve total road deaths, even if their complex moral decision-making algorithm, when faced with a choice between likely fatalities, simply chose at random?
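
In code terms, the whole "moral module" could be as crude as this deliberately minimal sketch (Python, purely illustrative):

Code:

import random

# Deliberately crude sketch: when every option carries a likely
# fatality, make no moral judgement at all and pick at random.
def choose(options):
    return random.choice(list(options))

print(choose(["swerve_left", "swerve_right", "continue"]))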

Little John
Posted: Sun Mar 06, 2016 11:21 am

No, for the reasons I have given. But I accept that a random "choice" would be the least morally unacceptable way in which such "decisions" could be made.

clv101 (Site Admin)
Posted: Sun Mar 06, 2016 11:27 am

It just seems odd to allow an extra 1000 people to die each year when their deaths could be avoided by introducing the technology.

Why put the onus on the moral aspects of the AI rather than just treating it as a black-box technology? Why treat AI differently from ABS brakes, seat belts, airbags etc. (all of which I presume you support)?

Little John
Posted: Sun Mar 06, 2016 11:30 am

clv101 wrote:
It just seems odd to allow an extra 1000 people to die each year when their deaths could be avoided by introducing the technology.

Why put the onus on the moral aspects of the AI rather than just treating it as a black-box technology? Why treat AI differently from ABS brakes, seat belts, airbags etc. (all of which I presume you support)?

ABS brakes, seat belts and airbags do not have to make real-time moral decisions. Come on CLV, this really is elementary philosophy.

biffvernon
Posted: Sun Mar 06, 2016 11:42 am

Airbags and seat belts do make real-time decisions:
IF input from inertia sensor exceeds pre-set value THEN deploy.
A driverless car does the same:
IF camera detects pedestrian within pre-set spatial range THEN alter course and/or apply brakes.
The logic is the same, if a bit more complicated in the processing, and in the case of airbags there are consequential risks of deployment.
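
Written in the same (illustrative) code shape, with both thresholds invented for the example, the structural similarity is plain:

Code:

# Illustrative sketch only; both thresholds are made-up numbers,
# not real ECU parameters.
AIRBAG_G_THRESHOLD = 25.0   # assumed crash-pulse threshold, in g
PEDESTRIAN_RANGE_M = 20.0   # assumed reaction envelope, in metres

def airbag_step(inertia_g):
    # IF inertia exceeds pre-set value THEN deploy
    return "deploy" if inertia_g > AIRBAG_G_THRESHOLD else "standby"

def driverless_step(pedestrian_range_m):
    # IF pedestrian within pre-set range THEN alter course / brake
    return "brake_and_steer" if pedestrian_range_m < PEDESTRIAN_RANGE_M else "continue"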

But the point raised by the letter to NS is harder to address. It works on the Docklands Light Railway because jaywalking pedestrians are rather rare in that environment. On the DLR, do the trains have sensors that apply the brakes if an unexpected item is detected in the travelling area?

clv101 (Site Admin)
Posted: Sun Mar 06, 2016 11:51 am

I don't think that's the right framework for looking at this. ABS has algorithms to decide when to apply, airbags have algorithms to decide when to deploy, AI has algorithms to decide which way to swerve to avoid a collision. There's nothing magic about AI; it's just software.

I'm happy to treat AI as a black box, or for the AI to do nothing more complex than make a random choice, if it saves many lives. I certainly couldn't justify lots of extra avoidable deaths over philosophical issues with a car's software.

Little John
Posted: Sun Mar 06, 2016 12:06 pm

biffvernon wrote:
Airbags and seat belts do make real-time decisions:
IF input from inertia sensor exceeds pre-set value THEN deploy.
A driverless car does the same:
IF camera detects pedestrian within pre-set spatial range THEN alter course and/or apply brakes.
The logic is the same, if a bit more complicated in the processing, and in the case of airbags there are consequential risks of deployment.

But the point raised by the letter to NS is harder to address. It works on the Docklands Light Railway because jaywalking pedestrians are rather rare in that environment. On the DLR, do the trains have sensors that apply the brakes if an unexpected item is detected in the travelling area?
The "decision" by the air bags is not a moral one, it is a mechanistic one You did read the word "moral" next to the word "decision" littered throughout my previous posts, right? Or, are you trying to imply that you do not understand the difference between mechanistic decisions and moral ones?


Little John
Posted: Sun Mar 06, 2016 12:12 pm

clv101 wrote:
I don't think that's the right framework for looking at this. ABS has algorithms to decide when to apply, airbags have algorithms to decide when to deploy, AI has algorithms to decide which way to swerve to avoid a collision. There's nothing magic about AI; it's just software. ...


I shall repeat here my answer to biffvernon: the "decision" by the airbags is not a moral one; it is a mechanistic one. You did read the word "moral" next to the word "decision" throughout my previous posts, right? Or are you trying to imply that you do not understand the difference between mechanistic decisions and moral ones?

Quote:
..... I'm happy to treat AI as a black box, or for the AI to do nothing more complex than make a random choice, if it saves many lives. I certainly couldn't justify lots of extra avoidable deaths over philosophical issues with a car's software.

So, the greater good is everything then.

Tell me, would you euthanise all children born with certain congenital conditions, thus freeing up much-needed resources which, if deployed elsewhere, would lead to the "greater good" of the health of the rest of the population? Furthermore, I take it you would have no objection to leaving the decision as to which child was worthy of saving and which was not up to a computerised "black box"... right? After all, there's no need to worry ourselves with the trivial philosophical issues that such decisions may be contingent upon? Or, if we are concerned about such issues, we could just randomly select a number of children equal to the number in the population with congenital illnesses, since freeing up said resources would improve the health of the population overall.

If not, why not?

You don't get away with making out that this is merely a technology issue that involves no philosophical judgements. You are making a philosophical judgement when you say that the greater good is what matters most. You just don't seem to want to admit that this is what you are doing, or what the potential wider moral consequences of such an approach are; I have just outlined one example.

Either you are being disingenuous or you really haven't thought about this as hard as you think you have.



adam2 (Site Admin)
Posted: Sun Mar 06, 2016 3:18 pm

Driverless HGVs also.
http://www.bbc.co.uk/news/uk-politics-35737104

Though in this case not completely automatic: convoys are proposed in which the lead vehicle will have a human driver and those following will be automatic.
Apart from the labour saving, fuel would be saved through reduced wind resistance (rough illustrative arithmetic below).
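
Rough arithmetic for the fuel saving, with every figure an assumption for illustration rather than measured platooning data:

Code:

# Back-of-envelope sketch; all three figures below are assumptions.
baseline_l_per_100km = 35.0     # assumed HGV motorway consumption
drag_share = 0.40               # assume ~40% of fuel goes to aerodynamic drag
follower_drag_cut = 0.30        # assume ~30% drag reduction for a close follower

saving = baseline_l_per_100km * drag_share * follower_drag_cut
print(f"Follower saves roughly {saving:.1f} l/100km "
      f"({saving / baseline_l_per_100km:.0%} of total fuel)")
# -> roughly 4.2 l/100km, about 12% of total fuel, on these assumptions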

biffvernon
Posted: Sun Mar 06, 2016 4:45 pm

It'll be fun when a convoy of artics tries to go through a medieval town. :)

johnhemming2
Posted: Sun Mar 06, 2016 5:50 pm

I worry about the lack of jobs for taxi drivers (including private hire).