1 Abstract:
The concept of Artificial Intelligence (AI) is transforming almost every part of modern, technology-oriented society and making everyday life easier. Many researchers are applying deep learning techniques to the development of autonomous driving. The main objective of this project is to investigate the emerging trend of self-driving cars, examine their capacity for judgment, and ask whether ethics can be implemented in these machines so that they can replace experienced human drivers. The report addresses the social implications of this disruptive technology and identifies its core issues regarding safety and other concerns. It also aims to make feasible recommendations for resolving a few of the major issues, since autonomous cars are widely considered the future of transportation. (Qing Rao, Jelena Frtunikj, 2018)
2 Introduction:
Self-driving cars are no longer just an idea; they are becoming a reality, and in the coming decades they are expected to reach high levels of automation. Having a machine replace human driving skills is what is known as vehicle intelligence. It enables the vehicle to learn from experience, improve continuously, and eventually become better than a human driver. Many organizations are competing with vehicle manufacturers to raise the level of automation by adding layers of cognitive intelligence on top of basic vehicle performance.
Current safety policies fail to address the new concerns raised by autonomous driving. Many researchers state that their main worry is that ever more control is being handed to these machines, even though humans remain more experienced and more aware of the situation around them. The machines receive information about their environment from sensors, cameras, and LIDAR. Programmers therefore reconstruct the events around car crashes with great precision in order to analyze the logic behind each decision and to train other machines to the same level of accuracy.
2.1 Problem Statement and Context:
Autonomous cars may be the latest trend, but, like any new technology, they come with many problems and risks. The technology is evolving rapidly, so staying up to date is essential for this project. The trust and safety involved in this technology are, moreover, subjective concepts.
Some of the problems associated with self-driving cars are:
- Responsibility: Who will be responsible for the accidents caused – the man or the machine? Who is going to bear the legal responsibility?
- Changing road infrastructures: How are the broken roads or unpredictable road conditions going to be handled by the vehicle?
- Ethics: Can ethics be programmed into self-driving cars? (Noah J. Goodall, 2016)
- Judgmental calls: Will they be able to make the right judgment?
- Eye contact: Can a vehicle replace the non-verbal communication, such as eye contact, that human drivers use with pedestrians and other road users?
- Cost: How affordable is this rising technology going to be?
- Lack of trust: Will humans be able to completely trust the machines with their lives?
This report touches on all of the above issues at a high level and focuses mainly on the ability of autonomous self-driving cars to make correct judgment calls, and on whether ethics can be programmed into these machines and made a central part of their decision making.
2.2 Aim and Significance:
Self-driving cars are believed to be the future of transportation, but there are many scenarios for which they are not ready just yet. They are also not completely accepted or trusted by humans, and they face various issues when subjected to different situations with regard to judgment calls, liability, and ethics. Drivers sometimes have to make instantaneous decisions based on the obstacles they face on the road. This can range from something as simple as an emergency stop, which can easily be programmed into an autonomous vehicle, to something more ethical, such as choosing between hitting a child on the road or hitting a cardboard box blocking the lane.
Self-driving cars often find it difficult to judge the repercussions of their actions, such as a pile-up on the highway caused by sudden braking. (Anna Skarbek-Żabkin, Marcin Szczepanek, 2018) Not every judgment can be reduced to a technical rule with a pre-derived solution, because such a solution may not be the best judgment in a real-world situation. This requires a human component that is familiar with the situation and understands the consequences.
It is therefore essential to understand how a machine (an autonomous vehicle) would differ from a human in its judgment. Issues such as judgment calls and ethics are a major part of any human decision. Understanding them helps a user decide whether this technology is safe and how it can be improved; it also helps us work out how these issues can be rectified and how the human component can be incorporated into these machines to improve their functionality.
2.3 Expected Outcome:
Automated vehicles face the challenge of having to decide quickly, often with incomplete information and in situations that no programmer can fully predict and program into the machine. The main challenge is enabling them to make rational and ethical decisions while giving them the equivalent of years of human experience.
The outcome of this report is to identify the various risks regarding functional safety in self-driving cars, investigate the fundamental capability of this evolving technology, and consider how it could change our lives. The report also examines its advantages and aims to provide recommendations for overcoming its disadvantages. These recommendations may not be perfect solutions, but they aim to be thoughtful and practical.
3 Background:
Until an Uber self-driving car struck and killed a 49-year-old pedestrian, Elaine Herzberg, in March 2018, autonomous vehicles were widely regarded as a flawless and highly successful innovation. The incident has raised further questions about their safety, the techniques they follow, and who should be held responsible when such tragedies occur.
3.1 Literature Review:
3.1.1 Categorizing:
Researchers categorize autonomous cars into levels based on their degree of automation, from Level 0 (no automation) to Level 5 (a completely automated vehicle). They also specify that Levels 0-2 are monitored by a human, while from Level 3 onwards the machine takes over monitoring, with the eventual goal of no human interference at all. The vehicles use their input systems, output devices, and control systems to identify situations similar to those in their programming and to react accordingly. (Anna Skarbek-Żabkin, Marcin Szczepanek, 2018) A similar approach is used to classify risks into classes from 1 to 4, where 1 is intolerable and 4 is negligible (Roger Woodman, Alan F.T. Winfield, Chris Harper, and Mike Fraser, 2012). A minimal sketch of this two-way classification is given below.
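In the sketch below, the automation levels and risk classes are encoded as simple enumerations; the names chosen for the intermediate risk classes are illustrative assumptions rather than terms taken from the cited studies.

```python
from enum import IntEnum


class AutomationLevel(IntEnum):
    """Automation levels as described above: 0 = no automation, 5 = fully automated."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5


class RiskClass(IntEnum):
    """Risk classes 1-4: 1 = intolerable, 4 = negligible.

    The two intermediate names are illustrative placeholders only.
    """
    INTOLERABLE = 1
    UNDESIRABLE = 2
    TOLERABLE = 3
    NEGLIGIBLE = 4


def monitored_by_human(level: AutomationLevel) -> bool:
    """Levels 0-2 rely on a human monitor; from Level 3 the machine monitors itself."""
    return level <= AutomationLevel.PARTIAL_AUTOMATION


print(monitored_by_human(AutomationLevel.CONDITIONAL_AUTOMATION))  # False
```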
3.1.2 Biasing:
Many researchers highlight the issue of data completeness. Incomplete data leads to major problems with judgment capability, such as biased judgments and a lack of ethics in those judgments, which in turn lead to a lack of public trust.
They describe an autonomous vehicle as an application of supervised machine learning: the model must be taught what to do. For example, if the vehicle was trained only to identify cars, it will not be able to recognize a pedestrian. Real-world traffic scenes keep changing, and predicting every possible situation to cover 100% of them is close to impossible. (Qing Rao, Jelena Frtunikj, 2018) This also leads to algorithmic bias: the model only does what it has been told, so even though autonomous vehicles are in principle neutral and impartial, they can end up making biased decisions. Some examples have demonstrated systematic racial bias in algorithms used for prediction. (David Danks, Alex John London, 2017)
One way bias can occur is when the data used to train the vehicle's algorithm is itself biased, which leads to biased decisions. This kind of bias is very difficult to identify. One example would be collecting all the data used for prediction from a single location, such as building a Google car intended for worldwide use with data gathered only in Pittsburgh. The resulting model will of course be biased, since it is intended for the entire world but trained on data from one place. (David Danks, Alex John London, 2017) A simple check for this kind of geographic skew is sketched below.
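As a minimal illustration of how such geographic skew could be surfaced before training, the sketch below counts samples per region; the dataset layout and the `region` field are hypothetical and not taken from any cited system.

```python
from collections import Counter


def region_distribution(samples):
    """Return the share of training samples per region to expose geographic skew.

    `samples` is assumed to be an iterable of dicts with a 'region' key,
    e.g. {"image": ..., "label": "pedestrian", "region": "Pittsburgh"}.
    """
    counts = Counter(s["region"] for s in samples)
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}


# Example: a dataset collected almost entirely in one city.
dataset = [{"region": "Pittsburgh"}] * 95 + [{"region": "Mumbai"}] * 5
shares = region_distribution(dataset)
if max(shares.values()) > 0.8:
    print("Warning: training data is dominated by a single region:", shares)
```

A check like this does not remove the bias, but it makes the imbalance visible before a model intended for worldwide use is trained on it.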
3.1.3 Ethics and Trust:
Bias leads to another important concept under discussion: ethics. A key assumption made by most researchers is that a good person with good judgment knows when it is acceptable to disregard the letter of the law in order to preserve its spirit; building this ability into a machine is the major challenge for engineers. For instance, the law generally relies on a driver's common sense, and a self-driving car will rarely be able to make such decisions on its own. To meet this challenge, engineers must predict possible ethical situations in advance and tell the machine what to do in each of them. (Noah J. Goodall, 2016) One company trying to incorporate such exceptions is Google, which has acknowledged allowing its self-driving cars to exceed the speed limit under certain traffic conditions and in emergencies such as critical trips to hospital. This is a step towards programming ethics into self-driving cars. (R. McAllister, Y. Gal, A. Kendall, M. van der Wilk, A. Shah, R. Cipolla, A. Weller, 2017)
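To make the idea of such an exception concrete, here is a purely hypothetical sketch of a speed-limit exception rule; the margins and conditions are invented for illustration and do not describe Google's actual implementation.

```python
def allowed_speed_kmh(posted_limit_kmh, traffic_flow_kmh, emergency=False):
    """Hypothetical speed-limit exception rule.

    Permits a small margin above the posted limit when surrounding traffic is
    faster (so the vehicle does not become a hazard) and a larger margin in a
    declared emergency. All thresholds are illustrative assumptions.
    """
    if emergency:
        return posted_limit_kmh * 1.2
    if traffic_flow_kmh > posted_limit_kmh:
        return min(traffic_flow_kmh, posted_limit_kmh * 1.1)
    return posted_limit_kmh


# Example: traffic is flowing at 115 km/h on a 100 km/h road.
print(allowed_speed_kmh(100, 115))            # 110.0
print(allowed_speed_kmh(100, 115, True))      # 120.0
```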
Building ethics into self-driving cars would help increase trust. The general public does not expect the cars to have superhuman wisdom, but trust will grow enormously if a vehicle can make rational decisions that take the ethical implications of its judgments into account.
3.1.4 Robustness:
Autonomous vehicles are expected to be robust to some extent: they should be able to handle errors and continue normal operation under abnormal conditions and in unseen situations. It is therefore expected that the vehicle can accommodate change and adapt to a dynamic environment. (Roger Woodman, Alan F.T. Winfield, Chris Harper and Mike Fraser, 2012)
3.1.5 Model:
The autonomous car follows a fairly standard model, as shown in Fig. 1. Cameras spot the lanes and signals, and a LIDAR measures range. Software uses these inputs to predict collisions and then sends signals to adjust the steering wheel position according to the situation, based on the rules and algorithms programmed into it. (Ionut Alexandru Budisteanu, 2018) A simplified sketch of this sense-decide-act loop is given after Fig. 1 below.
Why is it essential to consider a general model of an autonomous car? Analyzing the existing features shows where changes can be made to make the model more workable and more trustworthy. It is evident that only the situations most likely to occur are considered and that no ethical decision component is incorporated; the focus of this report is to recommend ways in which such a component can be made part of the model.
Fig. 1 - Autonomous car model (Source: Ionut Alexandru Budisteanu, 2018)
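The following sketch shows one iteration of the sense-decide-act loop described above in highly simplified form; the sensor fields, thresholds, and actuator interface are placeholders invented for illustration, not part of the cited model.

```python
import random  # stands in for real camera/LIDAR drivers in this sketch


def read_sensors():
    """Placeholder for the camera (lane position) and LIDAR (range) inputs."""
    return {"lane_offset_m": random.uniform(-0.5, 0.5),
            "obstacle_range_m": random.uniform(5.0, 100.0)}


def plan(sensor_data, braking_range_m=15.0):
    """Very simplified decision software: brake if an obstacle is close,
    otherwise steer back toward the lane centre."""
    if sensor_data["obstacle_range_m"] < braking_range_m:
        return {"brake": True, "steering_angle_deg": 0.0}
    return {"brake": False,
            "steering_angle_deg": -10.0 * sensor_data["lane_offset_m"]}


def actuate(command):
    """Placeholder for the signals sent to the steering and brake actuators."""
    print("Command:", command)


# One iteration of the loop; a real controller runs this continuously.
actuate(plan(read_sensors()))
```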
4 Research Focus:
The overall concept of autonomous cars has deep flaws. It aims to replace humans and their years of experience in order to improve the standard of living and make driving easy, yet it struggles to develop skills equivalent to human experience; even bridging a small gap requires years of training. The vehicles are also at high risk of being hacked. Trusting an autonomous vehicle is a further issue, since its judgment calls are only those that have been programmed, and not everything can be programmed. And in the case of an accident, who is responsible: the machine or the human?
4.1 Research Question:
This research will mainly focus on the ability of these autonomous vehicles to make good judgmental calls, thus trying to answer the question:
- Will autonomous vehicles be able to make good judgment calls in critical situations, and can they substitute for human intelligence in making ethical and situational decisions?
4.2 Methodology:
The methodology used to compile this report is a traditional literature review based on thematic reviews of previous research in the relevant fields, namely self-driving cars, their safety, and the safety standards that apply to them. The report also analyzes the social implications of practical applications that have occurred in the past, drawing on journals, publications, and newspaper articles to understand those events and to support the recommendations. It is qualitative research based on data collected systematically from the literature, classified according to the analysis, and summarized in the literature review.
5 Critical Analysis:
Most studies suggest that autonomous vehicles are designed for clear roads and good conditions, that is, for well-developed, wealthy nations, whereas real-world roads are far less tidy: they are filled with hazards, human and animal interference, and frequently broken rules. Because many autonomous-car designs focus on the West, they are difficult to deploy in other parts of the globe, and as a result autonomous cars perform poorly in nations with heavy traffic.
Various researchers have also questioned the reliability of these programs. For instance, if a truck's color blends with the color of the clouds, will the system be able to distinguish its contour from the background? (Anna Skarbek-Żabkin, Marcin Szczepanek, 2018)
5.1 Recommendations:
The problems with autonomous cars and their effects have been discussed above. The report now attempts to provide recommendations for solving them. Some measures that could reduce the negative effects are:
Ethics:
The model should include an ethical component. This can be achieved by not relying solely on predictable or already-encountered situations but by including ethical situations as well. How can these be incorporated?
A possible solution would be a VR simulation of ethical situations in which participants decide what they would do, for example choosing which of several obstacles (a human, an animal, an object) they would run over in a critical situation. This would give rise to categories, and observations based on those categories would help explain the reasoning behind each judgment. These reasons could then be coded into the machine as rules for the same or similar situations, which would address at least some of the ethical issues that might arise. A sketch of how such category-based rules could be encoded is given below.
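The sketch below shows one way the categories derived from such a study could be turned into a simple priority rule; the category names, the ordering, and the path representation are hypothetical examples, not results of any actual study.

```python
# Hypothetical harm-priority ordering derived from a VR study:
# lower number = obstacle that must be avoided at all costs.
HARM_PRIORITY = {"human": 0, "animal": 1, "vehicle": 2, "object": 3}


def choose_path(paths):
    """Pick the path whose obstacle ranks lowest on the harm scale.

    `paths` maps a path identifier to the obstacle class blocking it,
    e.g. {"left": "human", "straight": "object", "right": "animal"}.
    Unknown obstacle classes default to 0, i.e. they are treated as
    cautiously as a human.
    """
    return max(paths, key=lambda p: HARM_PRIORITY.get(paths[p], 0))


# Example: swerve toward the cardboard box rather than the child or the dog.
print(choose_path({"left": "human", "straight": "object", "right": "animal"}))  # "straight"
```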
Human Component:
An alert should warn the human inside the car before a critical decision is taken, so that the person is aware of the decision, and human decisions should be given priority so that they can override the machine. This addresses both the human component and the responsibility issue, since the human is aware of the decision and has the authority to change it if needed. A simple sketch of this alert-and-override priority follows.
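As a minimal sketch of the alert-and-override priority (the function names and the way the occupant's response is collected are assumptions made for illustration):

```python
def execute_decision(machine_decision, human_decision=None):
    """Announce a critical machine decision and give the human the final say.

    `human_decision` is whatever the occupant entered before the action
    deadline (None if they did not respond in time); how that input is
    captured in a real vehicle is outside this sketch.
    """
    print(f"ALERT: vehicle intends to: {machine_decision}")
    if human_decision is not None:
        print(f"Human override accepted: {human_decision}")
        return human_decision
    print("No override received; executing machine decision.")
    return machine_decision


# Example: the occupant overrides an automatic lane change.
execute_decision("change lane left", human_decision="stay in lane and slow down")
```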
Standards:
Either all models should be built to a uniform, agreed standard that can be applied throughout the world, or each model should be country- or city-specific, with its use restricted elsewhere to avoid mishaps.
Trust:
If the above components are incorporated successfully, the level of human trust will rise automatically, since more of the concerns will have been addressed.
6 Results:
The report summarizes the various issues pertaining to autonomous self-driving cars and the ways in which these issues can possibly be rectified.
Most researchers conclude that the major issues with self-driving cars are the ambiguity of their judgments, the lack of ethics in those judgments, and their inability to keep track of sudden departures from ideal conditions, all of which lead to a lack of public trust.
This report addresses these problems and recommends including ethics as a major component of decision making, which can be done simply by studying the decisions an ordinary person would make in VR scenarios and generalizing them into categories. These solutions are not meant to be perfect; they aim to reduce the negative effects of autonomous cars and to increase trust in them, since they are a major part of the future of transportation.
7 Conclusion and Future Outlook:
The project addresses the previously mentioned problems at a high level and focuses mainly on the judgment capability of autonomous cars and on whether ethics could ever be incorporated into them as an integral part of their decisions.
Not every situation can be replicated and used to predict solutions, since the data is never complete and is always changing, so judgments made in changing situations may not be the best ones. This calls for a human component that is familiar with the situations and understands the consequences. It is therefore essential to understand how a machine's judgment would differ from a human's, to examine its reliability, and to address all of its problems in order to see whether they can be corrected or tolerated if the technology is to endure.
References
- Qing Rao, Jelena Frtunikj, 2018, BMW Group, ACM/IEEE 1st International Workshop on Software Engineering for AI in Autonomous Systems, Deep Learning for Self-Driving Cars: Chances and Challenges, pp. 1-3.
- Anna Skarbek-Żabkin, Marcin Szczepanek, 2018, Autonomous Vehicles and Their Impact on Road Infrastructure and User Safety, pp. 1-3.
- Noah J. Goodall, 2016, Can You Program Ethics into Self-Driving Cars?, pp. 1-3.
- Md. Saif Ullah Miah, 2017, The Issues and the Possible Solutions for Implementing Self-Driving Cars in Bangladesh, pp. 1-3.
- Raj Gautam Dutta, Feng Yu, Teng Zhang, Yaodan Hu, Yier Jin, 2018, Security for Safety: A Path Toward Building Trusted Autonomous Vehicles, pp. 1-4.
- Roger Woodman, Alan F.T. Winfield, Chris Harper, Mike Fraser, 2012, Building Safer Robots: Safety Driven Control, pp. 1605-1607.
- David Danks, Alex John London, 2017, 26th International Joint Conference on Artificial Intelligence, Algorithmic Bias in Autonomous Systems, pp. 1-2.
- Amir A. Mokhtarzadeh, Zhao Jian Yangqing, 2018, Human-Robot Interaction and Self-Driving Cars Safety Integration of Dispositif Networks, pp. 494-496.
- Ionut Alexandru Budisteanu, 2018, Using Artificial Intelligence to Create a Low-Cost Self-Driving Car, pp. 4-5.
- R. McAllister, Y. Gal, A. Kendall, M. van der Wilk, A. Shah, R. Cipolla, A. Weller, 2017, Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning, pp. 4745-4747.
Appendix 1
Project Plan: