1 Abstract:
The concept of Artificial Intelligence (AI) is revolutionizing every part of the modern, technology-oriented society and making life easier to live. Many researchers are applying deep learning techniques to the development of autonomous driving. The main objective of this project is to investigate the emerging trend of self-driving cars, examine their judgmental capability, and ask whether ethics can be implemented in machines that aim to replace experienced human drivers. The focus of this report is to address the social implications of this disruptive technology and to identify its core issues regarding safety and other concerns. The project also aims to make feasible recommendations to resolve a few of the major issues, since autonomous cars are considered the future of transportation. (Qing Rao, Jelena Frtunikj, 2018)
2 Introduction:
Self-driving cars are no longer just an idea; they are becoming reality, and in the coming decades they are expected to reach high levels of automation. Having a machine replace human driving skills is what is known as vehicle intelligence. This enables the vehicle to learn from experience, continuously improve, and eventually become better than a human driver. Many organizations are competing with manufacturing companies to raise the level of automation, adding layers of cognitive intelligence on top of basic vehicle performance.
Current safety policies fail to address the new concerns raised by autonomous driving. Many researchers state that their main worry is that more and more control is being handed to these machines, even though humans are more experienced and more aware of the situation around them. The machines receive information from their environment through sensors, cameras, and LIDAR. Programmers therefore reconstruct events around car crashes with great precision in order to analyze the logical decisions behind them and train machines to the same level of accuracy.
2.1 Problem Statement and Context:
Autonomous cars might be the latest trend, but they come with many problems and risks, as any new technology would. The technology is evolving rapidly, so staying up to date is essential to carrying out this project. The trust and safety involved in this technology are also subjective concepts.
Some of the problems involved with self-driving cars as noted would be:
- Responsibility: Who will be responsible for the accidents caused – the man or the machine? Who is going to bear the legal responsibility?
- Changing road infrastructures: How are the broken roads or unpredictable road conditions going to be handled by the vehicle?
- Ethics: Can ethics be programmed into self-driving cars? (Noah J. Goodall, 2016)
- Judgmental calls: Will they be able to make the right judgment?
- Eye contact: Will a vehicle be able to replace actual human intervention?
- Cost: How affordable is this rising technology going to be?
- Lack of trust: Will humans be able to completely trust the machines with their lives?
This report addresses all the above issues at a high level and focuses chiefly on the ability of autonomous self-driving cars to make the correct judgmental calls, and on whether ethics can be programmed into these machines and made a major part of their decision making.
2.2 Aim and Significance:
Self-driving cars are believed to be the future of transportation, but there are many scenarios where they aren’t ready for use just yet. They are also not yet completely accepted or trusted by humans. They face various issues when subjected to different situations, with regards to judgmental calls, liability, and ethics. Sometimes drivers must make instantaneous decisions based on the obstacles they are facing on the road. This could vary from being something as simple as an emergency stop that could be easily programmed into an autonomous vehicle to something more ethical such as choosing between hitting a child on the road or hitting a cardboard box blocking the lane.
Self-driving cars often find it difficult to judge what repercussions their actions could cause, such as a pile-up on the highway due to sudden braking. (Anna Skarbek-Żabkin, Marcin Szczepanek, 2018) Not every judgment can be reduced to specific algorithmic rules with a derived solution, since such a judgment might not be the best one in a real-world situation. This requires a human component that is familiar with the situation and understands the consequences.
It is therefore essential to understand how a machine (autonomous vehicle) would differ from a human in its judgment. Issues such as judgmental calls and ethics are major parts of any human decision; examining them helps a user decide whether this technology is safe, shows how it can be made better, and clarifies how these issues can be rectified, how the human component can be incorporated into these machines, and how their functionality can be improved. These are the aspects this report tries to cover, along with recommendations for overcoming the issues.
2.3 Expected Outcome:
Automated vehicles face great challenges: they must decide quickly, often with incomplete information, in situations that no programmer can fully predict and program into the machine. Providing them with the equivalent of years of human experience so that they make rational and ethical decisions is the main challenge here.
The outcome of this report is to identify and address the various risks regarding functional safety in self-driving cars, to investigate the fundamental capability of this evolving technology, and to consider how it could change our lives, either making them easier or causing more trouble in the near future.
Apart from these, this report will also venture into the various advantages of this rising technology since it is inevitably considered the future of transportation and its growth cannot be stopped. The report also aims at providing additional recommendations to overcome some of the major problems to try and make the decision-making process like that of humans.
3 Research Focus:
The concept of autonomous cars is itself deeply flawed. It aims to replace humans and their years of experience in order to improve the standard of living and make driving easy, yet it fails to develop skills equivalent to human experience; bridging even a small gap requires years of training. These vehicles also have a high possibility of being hacked. Trusting an autonomous vehicle is a further issue, since its judgmental calls are only the ones programmed into it, and not everything can be programmed. In case of an accident, who is responsible: the machine or the human? The research focuses on these aspects at a high level and mainly examines, and provides recommendations on, the judgmental capabilities of these vehicles and the programming of ethics into them.
3.1 Research Question:
This research will mainly focus on the ability of these autonomous vehicles to make good judgmental calls, thus trying to answer the question:
- Will autonomous vehicles be able to make good judgmental calls in critical situations, and can ethics be programmed into these vehicles to help them substitute for human intelligence in making ethical and situational judgment calls?
3.2 Methodology:
The methodology used to compile this report is a traditional literature review based on thematic reviews of previous research in the relevant fields and areas of interest, namely self-driving cars, safety in self-driving cars, and safety standards in self-driving cars. The report also analyzes the social implications of practical applications that have occurred in the past. This involves analyzing journals, publications, and newspaper articles to understand past occurrences and quoting them in support of recommendations. It is qualitative research: data are collected systematically from the literature, grouped and classified during the analysis, and summarized thematically in the literature review.
The steps involved in completing this report were, first, analyzing the current literature to understand the model and working of autonomous cars; second, identifying the gaps in this research that could help in providing recommendations and in understanding the underlying issues; and third, the major task of handling these issues by making useful recommendations that can be incorporated and worked on in the future to resolve them. The detailed plan carried out for this report can be seen in Appendix 1.
4 Background:
Up until an Uber self-driving car struck and killed a 49-year-old pedestrian, Elaine Herzberg, in March 2018, autonomous vehicles were widely considered a flawless and very successful innovation. The incident has raised more questions about their safety, the techniques they follow, and who should be held responsible when such terrible situations occur.
4.1 Literature Review:
4.1.1 Categorizing:
Various researchers categorize autonomous cars into levels based on the degree of automation, from Level 0, involving no automation, to Level 5, where the vehicle is completely automated. They also specify that Levels 0-2 are monitored by a human while Level 3 onwards is monitored by the machine, thereby moving towards zero human interference. The input systems, output devices, and control systems are interlinked and aim to identify situations that match the programmed rules and react to them in a specified manner. (Anna Skarbek-Żabkin, Marcin Szczepanek, 2018) A similar technique is used to classify risks into classes from 1 to 4, where 1 is intolerable and 4 is negligible (Roger Woodman, Alan F.T. Winfield, Chris Harper, and Mike Fraser, 2012).
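As a simple illustration of this categorization, the sketch below encodes the automation levels and the monitoring party described above; the level names and the helper function are assumptions made for illustration, not definitions taken from the cited papers.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Automation levels as described above (0 = no automation, 5 = fully automated)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def monitoring_party(level: AutomationLevel) -> str:
    """Levels 0-2 are monitored by the human driver; Level 3 onwards by the machine."""
    return "human" if level <= AutomationLevel.PARTIAL_AUTOMATION else "machine"

print(monitoring_party(AutomationLevel.CONDITIONAL_AUTOMATION))  # -> "machine"
```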
4.1.2 Model:
The autonomous car follows a very standard model, as shown in Fig. 1. It has cameras to spot lanes and signals, and a LIDAR for calculating range. These inputs are used by software to calculate potential collisions, and the software then sends signals to alter the steering wheel position according to the situation, based on the rules and algorithms programmed into it. (Ionut Alexandru Budisteanu, 2018)
The LIDAR measures distance using pulses of light; the measurements are collected to build a map for navigation. LIDARs are not always sharp at detecting objects beside the autonomous car and are affected by various factors, one of which is other autonomous cars whose signals interfere with each other; hence sensors are used as back-ups to detect any movement in the surroundings (Cade Metz, 2018).
Fig.1-Autonomous car model. (Source: Cade Metz,2018)
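To make the sense-plan-act loop implied by the model in Fig. 1 concrete, the sketch below shows one possible loop iteration; the sensor readings, the braking threshold, and the function names are hypothetical placeholders, not the interface of any real vehicle stack.

```python
# Hypothetical sense-plan-act loop for the model in Fig. 1 (illustrative only).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_distance(pulse_round_trip_s: float) -> float:
    """A LIDAR estimates range from the round-trip time of a light pulse: d = c * t / 2."""
    return SPEED_OF_LIGHT * pulse_round_trip_s / 2.0

def control_step(lane_offset_m: float, lidar_ranges_m: list[float],
                 brake_distance_m: float = 5.0) -> dict:
    """One loop iteration: read sensors, check for collisions, output an actuation command.

    lane_offset_m  -- lateral offset from the lane centre reported by the camera pipeline
    lidar_ranges_m -- distances to surrounding objects reported by the LIDAR
    """
    nearest_obstacle = min(lidar_ranges_m)
    if nearest_obstacle < brake_distance_m:          # rule-based collision check (assumed threshold)
        return {"brake": 1.0, "steering": 0.0}
    # Otherwise steer proportionally back towards the lane centre.
    return {"brake": 0.0, "steering": -0.5 * lane_offset_m}

# Example: a pulse returning after 200 ns corresponds to roughly 30 m.
print(round(lidar_distance(200e-9), 1))              # -> 30.0
print(control_step(lane_offset_m=0.4, lidar_ranges_m=[12.0, 30.0, 7.5]))
```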
4.1.3 Decision Making:
The machine uses machine learning algorithms (deep neural networks) to learn continuously and to predict decisions as a human would. The main deep learning stages used for self-driving cars are data preparation, model generation, and model deployment. In preparation the main aim is to get the data ready for training and testing, which includes cleaning the data, labeling it, and storing it. Model generation involves developing, training, and evaluating network architectures and minimizing the error in the output. The trained model is then pruned and standardized for the specific target hardware during the deployment stage. (Qing Rao, Jelena Frtunikj, 2018)
The machine uses supervised algorithms, which learn from a training dataset and continue learning until the error is minimal. It uses unsupervised algorithms to derive value from the available data by detecting patterns and dividing the data into subgroups based on similarity. It is thus suggested that these precise learning processes help reduce driving errors considerably. (Savaram Ravindra, Tekslate, 2017)
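The hedged sketch below walks through the three stages on synthetic stand-in data, combining a supervised classifier with an unsupervised K-means step that groups scenes into subgroups; the library, the features, and the labels are assumptions made for illustration and are not the cited authors' pipeline.

```python
# Illustrative only: synthetic data stands in for recorded driving scenes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans

# 1. Data preparation: clean, label, and split the recorded scenes.
rng = np.random.default_rng(0)
X = rng.random((1000, 16))                        # stand-in scene features
y = rng.integers(0, 3, size=1000)                 # stand-in labels: 0=keep lane, 1=brake, 2=steer
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 2. Model generation (supervised): fit a network and evaluate it on held-out data.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Unsupervised step: group unlabeled scenes into subgroups by similarity.
scene_groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("scenes per subgroup:", np.bincount(scene_groups))

# 3. Deployment: the trained network would be pruned/standardized for the target
#    hardware; here we simply query it once, as the in-car software would.
print("predicted action for a new scene:", model.predict(X_test[:1]))
```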
Having discussed how this reduces errors, many researchers also point out that none of these methods treat ethics as a major part of decision making. The ethical dilemmas these autonomous cars face lie beyond the scope of general rules and predictable datasets, yet the cars are expected to resolve such situations instantaneously if the technology is to be accepted worldwide. (Peter Nowak, 2018)
4.1.4 Biasing:
Many researchers elucidate the issue of data completeness. A lack of data completeness leads to major issues with judgmental capability, such as bias in judgment and a lack of ethics in judgments, and thus to a lack of trust among people.
They note that an autonomous vehicle is a supervised machine learning system and the model must be taught what it has to do. For example, if the vehicle was trained only to identify cars, it will not be able to recognize a pedestrian. Traffic scenes change continuously in real life, so predicting all possible situations and covering 100% of real-world scenes is nearly impossible. (Qing Rao, Jelena Frtunikj, 2018) This also leads to algorithmic bias: since the model is only told what it should do, autonomous vehicles that are meant to be neutral and impartial can end up biased in some decisions. Some examples have demonstrated systematic racial bias in algorithms used for prediction. (David Danks, Alex John London, 2017)
One possible way in which bias can occur is when the data used to train the vehicle's algorithm is itself biased, which leads to biased decisions. This type of bias is very difficult to identify. One example would be a training dataset collected entirely from one location, such as gathering all the data from Pittsburgh to build a Google car intended to be used worldwide. The resulting model will of course be biased, since the model is intended for the entire world but the data come from only one place. (David Danks, Alex John London, 2017)
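One simple, assumed way to surface this kind of geographic sampling bias before training is to inspect where the recorded scenes come from. The sketch below is purely illustrative (the field names and the over-representation threshold are assumptions) and is not a complete fairness audit.

```python
# Illustrative check for geographic sampling bias in a training set (assumed field names).
from collections import Counter

training_scenes = [
    {"city": "Pittsburgh", "label": "pedestrian"},
    {"city": "Pittsburgh", "label": "car"},
    {"city": "Pittsburgh", "label": "car"},
    {"city": "Mumbai", "label": "pedestrian"},
]

counts = Counter(scene["city"] for scene in training_scenes)
total = sum(counts.values())
for city, n in counts.most_common():
    share = n / total
    flag = "  <-- over-represented" if share > 0.5 else ""
    print(f"{city}: {share:.0%}{flag}")
# A model trained on such a skewed sample will tend to reflect the dominant
# location's traffic scenes rather than the worldwide conditions it is meant for.
```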
4.1.5 Ethics and Trust:
Bias in turn leads to another important concept under discussion: ethics. A key assumption made by most researchers is that a good person with good judgment knows when it is acceptable to disregard the letter of the law in order to maintain its spirit; incorporating this ability into a machine is the major challenge for engineers. For instance, laws generally rely on a driver's common sense, and a self-driving car will rarely be able to make such calls. Engineers must be ahead of the times and predict possible ethical situations so that the machine can be told what to do under them. (Noah J. Goodall, 2016) One company trying to incorporate these exceptions is Google, which has acknowledged allowing its self-driving cars to exceed speed limits under certain traffic conditions and in emergencies such as critical trips to a hospital. This is a step towards programming ethics into self-driving cars. (R. McAllister, Y. Gal, A. Kendall, M. van der Wilk, A. Shah, R. Cipolla, A. Weller, 2017)
Programming ethics into self-driving cars would pave the way to increased trust: the general public does not expect cars to have superhuman wisdom, but trust will increase enormously if a vehicle can make rational decisions that consider the ethical implications of its judgments.
4.1.6 Robustness:
Autonomous vehicles are expected to be robust to some extent; that is, they should handle errors and continue normal operation even under abnormal conditions and unseen situations. The vehicle is therefore expected to support change and to adapt to a dynamic environment. (Roger Woodman, Alan F.T. Winfield, Chris Harper and Mike Fraser, 2012)
4.1.7 Critical Analysis:
The literature review suggests several gaps in the field of autonomous vehicles, identified as follows:
Firstly, most studies describe autonomous vehicle operation designed for clear roads and good conditions typical of well-developed, wealthy nations, whereas real-world scenarios are far less tidy: they are filled with hazards, human and animal interference, and frequently broken rules. With many autonomous car designs focused on the West, these designs become very difficult to implement in other places around the globe, and autonomous cars fail miserably in heavy-traffic nations.
Secondly, various researchers have raised questions regarding the reliability of these programs. For instance, if the situation involves a truck whose color blends with the color of the clouds, will the system be able to separate it from the contour? (Anna Skarbek-Żabkin, Marcin Szczepanek,2018)
Thirdly, we can see from the design model in Fig. 1 that the instances programmed into the vehicle are situations that are most likely to occur. Considering only high-probability incidents does not remove the risk involved in autonomous cars, and having no component of ethical decision making only makes them less reliable in the eyes of the general public. The focus of this report is to provide recommendations to overcome these gaps.
5 Recommendations:
The various problems with autonomous cars and their effects have been discussed above. The report now attempts to provide recommendations for solving at least some of them; some solutions that could reduce the negative effects are:
- Human Component:
Ensuring that human control can always overrule machine decisions is one way to strengthen the human component. However, the ultimate aim of a Level 5 autonomous vehicle is to have no human intervention at all, and a permanent override would undermine that goal. A staged approach could therefore be used: start with a human component, for example an alert that warns the occupant before a critical decision is taken so that they are aware of it, give human decisions higher priority so they can override the machine, and then slowly remove this component as trust develops. This would increase the level of trust in autonomous cars among people, and it would also help the car learn from situations where the predicted decision differs from the human decision, updating its decision-making ability over time (a minimal sketch of such an override policy is given after the figures at the end of this section).
- Standards:
Either the models should all be built to a uniform, agreed standard that can be applied throughout the world, or they should be country- or city-specific, with their use restricted elsewhere to avoid mishaps. If the above components are incorporated successfully, the level of trust automatically increases, since more of the concerns are addressed and taken care of.
- Ethics:
A model with an ethical component can be achieved by relying not just on predictable or already observed situations but also on the day-to-day ethical situations the vehicle might face. How these can be incorporated is the next hurdle to analyze.
For the data preparation and model generation stages, a possible solution would be a VR simulation of possible ethical situations: participants are placed in the simulation and asked to decide what they would do, for example choosing which of several obstacles (a human, an animal, an object) they would run over in a critical situation. This would give rise to many categories, and observations made on these categories would aid in understanding the reasoning behind each judgment; these reasons can then be coded into the machine as rules to address the same or similar situations. This attempts to include ethics and would address some of the ethical issues that might come up.
For example, suppose the simulated situation is a choice between hitting a child who jumps into the path and hitting a box in the adjacent lane. The participants' solutions and attributes form the dataset. A clustering algorithm such as K-means would divide the dataset into categories such as age, gender, and decision, and a classification algorithm such as a deep neural network or an SVM (Support Vector Machine) applied afterwards would give classification rules linking a person of a given gender or age group to the solution they would choose. These rules can be programmed as exception rules for the autonomous car, so that its decisions include an ethical component and it avoids hitting the child by hitting the box, thereby avoiding an accident. A minimal illustrative sketch of this pipeline is given below.
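The sketch below is a minimal version of the suggested pipeline using fabricated stand-in data for the VR responses; the feature encoding, the cluster count, and the classifier settings are all assumptions made for illustration.

```python
# Illustrative sketch of the suggested pipeline: cluster VR participants' responses,
# then learn classification rules from them. All data here is synthetic stand-in data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
# Each row: [age, gender (0/1), chosen obstacle (0=box, 1=animal, 2=human)]
responses = np.column_stack([
    rng.integers(18, 80, n),        # age
    rng.integers(0, 2, n),          # gender
    rng.integers(0, 3, n),          # decision made in the VR scenario
])

# Unsupervised step: group participants into categories based on age/gender patterns.
categories = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(responses[:, :2])
print("participants per category:", np.bincount(categories))

# Supervised step: learn rules linking participant attributes to the decision taken.
clf = SVC(kernel="rbf").fit(responses[:, :2], responses[:, 2])

# The fitted rules could then be coded into the vehicle as exception rules,
# e.g. "in this situation, swerve into the box rather than the child".
print("predicted decision for a 35-year-old participant:", clf.predict([[35, 1]]))
```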
Fig.2 Ethics Inclusion-Suggested Model
Fig.3 Ethics Inclusion-Suggested Model
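Returning to the human-component recommendation above, the following is a minimal sketch of the suggested arbitration policy: warn the occupant before a critical decision, let an explicit human command override the machine, and log disagreements so the model can later learn from them. The class, field names, and actions are assumptions for illustration, not a real vehicle API.

```python
# Illustrative arbitration between machine and human decisions (assumed policy, not a real API).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Arbitrator:
    disagreements: list = field(default_factory=list)  # logged for later retraining

    def decide(self, machine_action: str, human_action: Optional[str], critical: bool) -> str:
        if critical:
            # Alert the occupant before a critical decision is carried out.
            print(f"ALERT: vehicle intends to '{machine_action}' - occupant notified")
        if human_action is not None and human_action != machine_action:
            # Human input takes priority; record the case so the model can learn from it.
            self.disagreements.append((machine_action, human_action))
            return human_action
        return machine_action

arb = Arbitrator()
print(arb.decide(machine_action="maintain speed", human_action="brake", critical=True))
print("cases to retrain on:", arb.disagreements)
```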
6 Results:
The report tries to answer the questions concerning the judgmental capability of an autonomous self-driving car and whether it could replace a human in making ethical and situational judgments. Most researchers conclude that the major issues of self-driving cars are ambiguity in making the correct judgment, a lack of ethics in those judgments, and an inability to keep track of sudden departures from ideal situations; this in turn leads to a lack of trust among the public.
This report has tried to address these problems and provides recommendations for incorporating ethics as a major component in decision making, which can be done by studying human decisions through a simple VR situation analysis and generalizing the resulting categories. The report also identified flaws in the decision-making processes of an autonomous car, its bias, and its failure under dynamic conditions, all of which have led to a lack of public trust in the technology. The gaps in the literature were identified as design focus, ethics, and judgmental capability in critical situations. The model structure and the decision-making process were examined in detail to support the recommendations made in this report. These solutions are not aimed at being perfect but at reducing the negative effects of autonomous cars and increasing trust, since they are a major part of future transportation. A complete Level 5 autonomous vehicle without any human interference is still some way off; some form of human interaction will be needed everywhere if the technology is to be trusted and applied worldwide.
7 Conclusion and Future Outlook:
The report addresses the problems mentioned above, focuses mainly on the judgmental capability of autonomous cars, and tries to develop a method to incorporate ethics into self-driving cars as an integral part of their decision-making process. It aims to answer the research question of whether self-driving cars have good judgmental capability and to provide methods for incorporating ethics into their decisions. The report finds that self-driving cars have many flaws in their processes and have failed badly in several instances, but the technology remains promising. Even if Level 5 vehicles are hard to imagine, because a human component familiar with the situation and its consequences is still the better option, a vehicle with an ethical component incorporated can win more trust among people and avoid many errors.
It is therefore essential to understand how a machine's judgment would differ from a human's, to assess its liability, and to address all its problems to see whether they can be fixed or tolerated if the technology is to be sustained. Future work would be to incorporate these recommendations at the software stage, test the results, identify the accuracy of the method, and determine how much accuracy it would in turn add to the autonomous vehicle, thereby determining the reduction in accidents.
References
- Qing Rao, Jelena Frtunikj, 2018, BMW Group, ACM/IEEE 1st International Workshop on Software Engineering for AI in Autonomous Systems, Deep Learning for Self-Driving Cars: Chances and Challenges, P.1-3
- Anna Skarbek-Żabkin, Marcin Szczepanek, 2018, Autonomous Vehicles and Their Impact on Road Infrastructure and User Safety, P.1-3
- Noah J. Goodall, 2016, Can You Program Ethics into Self-Driving Cars?, P.1-3
- Md. Saif Ullah Miah, 2017, The Issues and the Possible Solutions for Implementing Self-Driving Cars in Bangladesh, P.1-3
- Raj Gautam Dutta, Feng Yu, Teng Zhang, Yaodan Hu, Yier Jin, 2018, Security for Safety: A Path Toward Building Trusted Autonomous Vehicles, P.1-4
- Roger Woodman, Alan F.T. Winfield, Chris Harper and Mike Fraser, 2012, Building Safer Robots: Safety Driven Control, P.1605-1607
- David Danks, Alex John London, 2017, 26th International Joint Conference on Artificial Intelligence, Algorithmic Bias in Autonomous Systems, P.1-2
- Amir A. Mokhtarzadeh, Zhao Jian Yangqing, 2018, Human-Robot Interaction and Self-Driving Cars Safety Integration of Dispositif Networks, P.494-496
- Ionut Alexandru Budisteanu, 2018, Using Artificial Intelligence to Create a Low-Cost Self-Driving Car, P.4-5
- R. McAllister, Y. Gal, A. Kendall, M. van der Wilk, A. Shah, R. Cipolla, A. Weller, 2017, Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning, P.4745-4747
- Savaram Ravindra, Tekslate, 2017, The Machine Learning Algorithms Used in Self-Driving Cars
- Peter Nowak, 2018, Special to The Globe and Mail, The Ethical Dilemmas of Self-Driving Cars
- Cade Metz, 2018, The New York Times, How Driverless Cars See the World Around Them
Appendix 1
Project Plan: