Literature Review: Challenges in Autonomous Vehicles

Chibili Mugala
5 min read · Jun 7, 2023


[3D illustration by Shubham Dhage]

Introduction

Autonomous vehicles rely heavily on artificial intelligence (AI) technologies to navigate and make decisions in real-world environments. However, there are several challenges that need to be addressed in the development of autonomous vehicle AI:

  1. Perception and Sensing: Autonomous vehicles need accurate and robust perception systems to interpret and understand the surrounding environment. Challenges include reliably detecting and recognizing objects, estimating depth and distance, and handling adverse weather or lighting conditions (a minimal detection sketch follows this list).
  2. Decision-Making and Planning: Making complex decisions in real-time while accounting for factors such as traffic rules, road conditions, and pedestrian behaviour is a significant challenge. Developing AI systems that can handle diverse and unpredictable scenarios, make safe decisions, and plan optimal routes is critical.
  3. Safety and Reliability: Ensuring the safety and reliability of autonomous vehicles is paramount. Addressing issues like system failures, cyber threats, and vulnerabilities in AI algorithms is crucial to minimize risks and build trust in autonomous technologies.
  4. Legal and Ethical Considerations: Autonomous vehicles raise important legal and ethical questions. Determining liability in case of accidents, establishing regulations and standards, addressing privacy concerns, and making ethical decisions in challenging situations are complex issues that need careful consideration.
  5. Data Collection and Training: Autonomous vehicle AI systems require vast amounts of high-quality training data to learn from. Collecting diverse and representative data, annotating it accurately, and continuously updating the AI models with new scenarios and edge cases pose significant challenges.
  6. Human Interaction and User Experience: Designing intuitive and effective human-machine interfaces for autonomous vehicles is crucial to ensure user acceptance and trust. Ensuring smooth transitions between autonomous and manual driving modes and effectively communicating the vehicle’s intentions to pedestrians and other road users are essential challenges.
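
To make the perception challenge concrete, the sketch below runs a pretrained object detector over a single camera frame. It is a minimal, illustrative example rather than a production pipeline: the frame.jpg file name and the 0.8 confidence threshold are arbitrary assumptions, and a real AV stack would fuse camera, lidar, and radar under hard real-time constraints.

```python
# Minimal perception sketch: detect objects in one camera frame with a
# pretrained torchvision detector. Illustrative only; the input file and
# score threshold are placeholder assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Faster R-CNN with COCO labels (person, car, truck, ...).
weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

image = Image.open("frame.jpg").convert("RGB")  # hypothetical camera frame
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep confident detections only. Rain, glare, or night scenes typically
# depress these scores, which is exactly the robustness problem above.
for box, label, score in zip(prediction["boxes"], prediction["labels"],
                             prediction["scores"]):
    if score >= 0.8:
        print(f"{weights.meta['categories'][label]}: "
              f"{score:.2f} at {box.tolist()}")
```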

Literature Review

Prominent studies that have attempted to resolve the moral quandary of AI in autonomous vehicles are the Moral Machine experiment (Awad et al., 2018) and the socio-technical framing of the trolley problem (Himmelreich, 2018). The former collected millions of data points on how people believe autonomous vehicles should react in a crash, asking, for instance, whether a quasi-robot should spare the lives of the many older pedestrians or the few young ones. However, the publicly available data exposes the theoretical nature of the experiment: it captured participants’ intentions about how they would like the vehicle to react, but lacked data on the hazards actually confronting the driver and on the potential mitigation or avoidance of near-crashes. Himmelreich’s work, in turn, explores the many mundane scenarios in which the trolley problem, a moral dilemma for self-driving vehicles, arises.

More recent research, however, shifts from the trolley-problem framing towards a technological and more ethically realistic framing of autonomous vehicle perception and decision-making (Cunneen et al., 2020). Recent work has also explored various constructs of explainable artificial intelligence (XAI) for autonomous vehicles. The essence of explainability approaches is to satisfy the specific requirements of stakeholders, referred to as stakeholders’ desiderata. Langer et al. (2021) present a systematic approach to satisfying these desiderata, arguing that current XAI efforts are vague and driven by largely disconnected disciplines. They offer a model that explicitly conceptualizes and relates the main investigations and considerations in the evaluation, adjustment, and selection of explainability approaches. This model will be the bedrock of this research: following a well-guided, all-inclusive model negates ambiguity and unfounded assumptions in both the design and the implementation of explainability approaches.
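
As a toy illustration of what such a model could look like in code, the sketch below encodes stakeholders with weighted desiderata and scores candidate explainability approaches against them. The stakeholder names, desiderata, and numbers are invented placeholders, not values from Langer et al. (2021).

```python
# Toy encoding of stakeholders' desiderata and a naive weighted score for
# selecting an explainability approach per stakeholder. All names and
# numbers are illustrative assumptions, not Langer et al.'s model.
from dataclasses import dataclass

@dataclass
class Approach:
    name: str
    satisfaction: dict[str, float]  # desideratum -> degree satisfied (0..1)

# Each stakeholder weights the desiderata differently.
stakeholder_weights = {
    "driver":    {"understandability": 0.6, "timeliness": 0.4},
    "regulator": {"traceability": 0.7, "understandability": 0.3},
}

approaches = [
    Approach("saliency maps",  {"understandability": 0.4, "timeliness": 0.9,
                                "traceability": 0.2}),
    Approach("decision rules", {"understandability": 0.8, "timeliness": 0.5,
                                "traceability": 0.7}),
]

def score(approach: Approach, weights: dict[str, float]) -> float:
    """Weighted degree to which an approach satisfies one stakeholder."""
    return sum(w * approach.satisfaction.get(d, 0.0)
               for d, w in weights.items())

for stakeholder, weights in stakeholder_weights.items():
    best = max(approaches, key=lambda a: score(a, weights))
    print(f"{stakeholder}: {best.name} ({score(best, weights):.2f})")
```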

While the European Commission has published assessment criteria for trustworthy AI in the autonomous vehicle domain (Fernandez Llorca & Gomez Gutierrez, 2021), the EU has no obligatory regulation governing the development of autonomous vehicles (Othman, 2022). This research proposes to build on intercontinental work such as China’s self-driving car legislation study (Ziyan & Shiguo, 2021), which gives a detailed analysis of legislative trends and practical suggestions for promoting vehicle autonomy in China and the US. Extending this analysis to the EU would yield a more universal outcome, thereby facilitating the development of AI with fewer acceptance issues.

With great power comes great responsibility, and this can be said of autonomous vehicles navigating complex highways. Autonomous vehicles will need to understand and react optimally in the event of a near-accident or an unavoidable accident, and a recurring challenge is the explainability of the decisions their AI makes. An exploratory study by Hussain et al. (2021) leverages an engineering perspective in devising interpretable and transparent XAI. This is significant because it provides a basis for furthering current knowledge on explainability by relating it to scientific, ethical, and user evaluations of AVs.
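
As a simplified, concrete example of interpretable decision-making, the sketch below fits a shallow decision tree to synthetic near-crash situations; its entire rule set can be printed as a human-readable explanation of the braking policy. The features, data, and thresholds are invented for illustration and do not come from Hussain et al. (2021).

```python
# Transparent-by-design braking policy: a shallow decision tree whose
# rules double as the explanation. Data and features are synthetic
# placeholders invented for this sketch.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["gap_m", "closing_speed_mps", "pedestrian_near"]
X = [[40, 2, 0], [15, 8, 0], [8, 5, 1],   # [gap, closing speed, pedestrian?]
     [30, 1, 1], [25, 9, 0], [25, 2, 0]]
y = [0, 1, 1, 0, 1, 0]                    # 1 = emergency brake, 0 = continue

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules are the explanation: every decision can be traced.
print(export_text(tree, feature_names=features))

action = tree.predict([[10, 7, 1]])[0]    # a new near-crash situation
print("brake" if action else "continue")
```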

Conclusion

As detailed especially in the introduction, AVs still have a long way to go to fulfil stakeholder demands. These problems require years of research and development, iterative improvement of technologies, and systematic, collective resolution of the issues. Addressing them calls for interdisciplinary collaboration, continuous research, robust testing, and a strong focus on safety, reliability, and ethical considerations.

References

Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6

Cunneen, M., Mullins, M., Murphy, F., Shannon, D., Furxhi, I., & Ryan, C. (2020). Autonomous Vehicles and Avoiding the Trolley (Dilemma): Vehicle Perception, Classification, and the Challenges of Framing Decision Ethics. Cybernetics and Systems, 51(1), 59–80. https://doi.org/10.1080/01969722.2019.1660541

Fernandez Llorca, D., & Gomez Gutierrez, E. (2021). Trustworthy Autonomous Vehicles. Publications Office of the European Union. https://doi.org/10.2760/120385

Himmelreich, J. (2018). Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations. Ethical Theory and Moral Practice, 21(3), 669–684. https://doi.org/10.1007/s10677-018-9896-4

Hussain, F., Hussain, R., & Hossain, E. (2021). Explainable Artificial Intelligence (XAI): An Engineering Perspective. 1–11.

Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., & Baum, K. (2021). What do we want from Explainable Artificial Intelligence (XAI)? — A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, 103473. https://doi.org/10.1016/j.artint.2021.103473

Othman, K. (2022). Exploring the implications of autonomous vehicles: A comprehensive review. Innovative Infrastructure Solutions, 7(2). https://doi.org/10.1007/s41062-022-00763-6

Ziyan, C., & Shiguo, L. (2021). China’s self-driving car legislation study. Computer Law & Security Review, 41, 105555. https://doi.org/10.1016/j.clsr.2021.105555
