Recommender Systems: Legal and Ethical Issues.

Bibliographic Details
Main Author: Genovesi, Sergio.
Other Authors: Kaesling, Katharina; Robbins, Scott.
Format: eBook
Language: English
Published: Cham: Springer International Publishing AG, 2023.
Edition: 1st ed.
Series:The International Library of Ethics, Law and Technology Series
Table of Contents:
  • Intro
  • Contents
  • Chapter 1: Introduction: Understanding and Regulating AI-Powered Recommender Systems
  • References
  • Part I: Fairness and Transparency
  • Chapter 2: Recommender Systems and Discrimination
  • 2.1 Introduction
  • 2.2 Reasons for Discriminating Recommendations
  • 2.2.1 Lack of Diversity in Training Data
  • 2.2.2 (Unconscious) Bias in Training Data
  • 2.2.3 Modelling Algorithm
  • 2.2.4 Interim Conclusion and Thoughts
  • 2.3 Legal Frame
  • 2.3.1 Agreement - Data Protection Law
  • 2.3.2 Information - Unfair Competition Law
  • 2.3.3 General Anti-discrimination Law
  • 2.3.4 Interim Conclusion
  • 2.4 Outlook
  • 2.4.1 Extreme Solutions
  • 2.4.2 Further Development of the Information Approach
  • 2.4.3 Monitoring and Audit Obligations
  • 2.4.4 Interim Conclusion and Thoughts
  • 2.5 Conclusions
  • References
  • Chapter 3: From Algorithmic Transparency to Algorithmic Choice: European Perspectives on Recommender Systems and Platform Regulation
  • 3.1 Introduction
  • 3.2 Recommender Governance in the EU Platform Economy
  • 3.2.1 Mapping the Regulatory Landscape
  • 3.2.2 Layers of Terminology in EU Law: "Rankings" and "Recommender Systems"
  • 3.3 Five Axes of Algorithmic Transparency: A Comparative Analysis
  • 3.3.1 Purpose of Transparency
  • 3.3.2 Audiences of Disclosure
  • 3.3.3 Addressees of the Duty to Disclose
  • 3.3.4 Content of the Disclosure
  • 3.3.5 Modalities of Disclosure
  • 3.4 The Digital Services Act: From Algorithmic Transparency to Algorithmic Choice?
  • 3.4.1 Extension of Transparency Rules
  • 3.4.2 User Control Over Ranking Criteria
  • 3.5 Third Party Recommender Systems: Towards a Market for "RecommenderTech"
  • 3.6 Conclusion
  • References
  • Chapter 4: Black Hole Instead of Black Box?: The Double Opaqueness of Recommender Systems on Gaming Platforms and Its Legal Implications
  • 4.1 Introduction
  • 4.2 The Black Box-Problem of AI Applications
  • 4.2.1 Transparency and Explainability: An Introduction
  • 4.2.2 Efficiency vs. Explainability of Machine Learning
  • 4.2.3 Background of the Transparency Requirement
  • 4.2.4 Criticism
  • 4.2.5 In Terms of Recommender Systems
  • 4.3 The Black Hole-Problem of Gaming Platforms
  • 4.3.1 Types of Recommender Systems
  • 4.3.1.1 Content-Based Filtering Methods
  • 4.3.1.2 Collaborative Filtering Methods
  • 4.3.1.3 Hybrid Filtering Methods
  • 4.3.2 Black Hole Phenomenon
  • 4.4 Legal Bases and Consequences
  • 4.4.1 Legal Acts
  • 4.4.2 Digital Services Act
  • 4.4.2.1 Problem Description
  • 4.4.2.2 Regulatory Content Related to Recommender Systems
  • 4.4.3 Artificial Intelligence Act
  • 4.4.3.1 Purpose of the Draft Act
  • 4.4.3.2 Regulatory Content Related to Recommender Systems
  • 4.4.4 Dealing with Legal Requirements
  • 4.4.4.1 User-Oriented Transparency
  • 4.4.4.2 Government Oversight
  • 4.4.4.3 Combination of the Two Approaches with Additional Experts
  • 4.5 Implementation of the Proposed Solutions
  • 4.5.1 Standardization
  • 4.5.2 Control Mechanisms
  • 4.6 Conclusion
  • References
  • Chapter 5: Digital Labor as a Structural Fairness Issue in Recommender Systems
  • 5.1 Introduction: Multisided (Un)Fairness in Recommender Systems
  • 5.2 Digital Labor as a Structural Issue in Recommender Systems
  • 5.3 Fairness Issues from Value Distribution to Work Conditions and Laborers' Awareness
  • 5.4 Addressing the Problem
  • 5.5 Conclusion
  • References
  • Part II: Manipulation and Personal Autonomy
  • Chapter 6: Recommender Systems, Manipulation and Private Autonomy: How European Civil Law Regulates and Should Regulate Recommender Systems for the Benefit of Private Autonomy
  • 6.1 Introduction
  • 6.2 Autonomy and Influence in Private Law
  • 6.3 Recommender Systems and Their Influence
  • 6.4 Manipulation
  • 6.5 Recommender Systems and Manipulation
  • 6.5.1 Recommendations in General
  • 6.5.2 Labelled Recommendations
  • 6.5.3 Unrelated Recommendations
  • 6.5.3.1 In General
  • 6.5.3.2 Targeted Recommendations
  • 6.5.3.2.1 In General
  • 6.5.3.2.2 Exploiting Emotions
  • 6.5.3.2.3 Addressing Fears Through (Allegedly) Harm-Alleviating Offers
  • 6.5.4 Interim Conclusion: Recommender Systems, Manipulation and Private Autonomy
  • 6.6 Regulation Regarding Recommender Systems
  • 6.6.1 Unexpected Recommendation Criteria
  • 6.6.2 Targeted Recommendations Exploiting Emotions or Addressing Fears
  • 6.6.3 Regulative Measures to Take Regarding Recommender Systems
  • 6.7 Conclusion
  • References
  • Chapter 7: Reasoning with Recommender Systems? Practical Reasoning, Digital Nudging, and Autonomy
  • 7.1 Introduction
  • 7.2 Practical Reasoning, Choices, and Recommendations
  • 7.3 Recommender Systems and Digital Nudging
  • 7.4 Autonomy in Practical Reasoning with Recommender Systems
  • 7.5 Conclusion
  • References
  • Chapter 8: Recommending Ourselves to Death: Values in the Age of Algorithms
  • 8.1 Introduction
  • 8.2 Distorting Forces
  • 8.2.1 Past Evaluative Standards
  • 8.2.2 Reducing to Computable Information
  • 8.2.3 Proxies for 'Good'
  • 8.2.4 Black Boxed
  • 8.3 Changing Human Values
  • 8.4 Same Problem with Humans?
  • 8.5 Conclusion
  • References
  • Part III: Designing and Evaluating Recommender Systems
  • Chapter 9: Ethical and Legal Analysis of Machine Learning Based Systems: A Scenario Analysis of a Food Recommender System
  • 9.1 Introduction
  • 9.2 An Example Application: FoodApp - the Application for Meal Delivery
  • 9.3 Current Approaches to Ethical Analysis of Recommender Systems
  • 9.4 Ethical Analysis
  • 9.5 Legal Considerations
  • 9.5.1 Data Protection Law
  • 9.5.2 General Principles and Lawfulness of Processing Personal Data
  • 9.5.3 Lawfulness
  • 9.5.4 Purpose Limitation and Access to Data
  • 9.5.5 Data Minimization and Storage Limitation
  • 9.5.6 Accuracy, Security and Impact Assessment
  • 9.6 Results of the Combined Ethical and Legal Analysis Approach
  • 9.7 Conclusion and Outlook
  • References
  • Chapter 10: Factors Influencing Trust and Use of Recommendation AI: A Case Study of Diet Improvement AI in Japan
  • 10.1 Society 5.0 and Recommendation AI in Japan
  • 10.2 Model for Ensuring Trustworthiness of AI Services
  • 10.3 Components of a Trustworthy AI Model
  • 10.3.1 AI Intervention
  • 10.3.2 Data Management
  • 10.3.3 Purpose of Use
  • 10.4 Verification of Trustworthy AI Model: A Case Study of AI for Dietary Habit Improvement Recommendations
  • 10.4.1 Subjects
  • 10.4.2 Verification 1: AI Intervention
  • 10.4.3 Verification 2: Data Management
  • 10.4.4 Verification 3: Purpose of Use
  • 10.4.5 Method
  • 10.4.6 Results
  • 10.4.6.1 AI Intervention
  • 10.4.6.2 Data Management
  • 10.4.6.3 Purpose of Use in Terms of Service Agreements
  • 10.5 Necessary Elements for Trusted AI
  • References
  • Chapter 11: Ethics of E-Learning Recommender Systems: Epistemic Positioning and Ideological Orientation
  • 11.1 Introduction
  • 11.2 Methods of Recommender Systems
  • 11.3 Recommender Systems in e-Learning
  • 11.3.1 Filtering Techniques: What Implications on Social and Epistemic Open-Mindedness?
  • 11.3.2 Model Selection: A Risk of Thinking Homogenization?
  • 11.3.3 Assessment Methods: What Do They Value?
  • 11.4 Problem Statement
  • 11.5 Some Proposals
  • 11.5.1 Knowledge-Based Recommendations
  • 11.5.2 A Learner Model Coming from Cognitive and Educational Sciences
  • 11.5.3 A Teaching Model Based on Empiric Analyses
  • 11.5.4 Explainable Recommendations
  • 11.6 Discussion and Conclusion
  • References