The xplAInr framework follows 7 stages of a system life cycle, which have been broken down into 22 cards of action. See below for the full cards. Each card provides considerations, activities or tools for developing and managing more ethical and explainable AI systems.

Developers of autonomous and intelligent systems don’t need to adhere to all stages/cards, but should identify during the Pre-Planning stage which ones they will follow.

1) Pre-Planning

  • Resource Planning

  • Stakeholder Needs Analysis

  • Values-based Procurement

  • Environmental Impact Assessment

2) Project Design

  • Task Representation Documentation

  • Change Management Documentation

3) Risk, Security, Compliance

  • Compliance & (Hard/Soft) Governance

  • Pre-Deployment Internal Audit

  • Risk Mitigation Protocols

  • Security Assessment & Monitoring

4) Data & System Setup

  • Scalable Data Architecture

  • Input/Output Benchmarking

  • Model Training

  • Sensor Calibration

5) System Operation

  • ML Deployment

  • Sensor/Data Fusion

  • Moderation, Review, QA/QC

6) User Safety & Agency

  • UX/UI Safety & Controls

  • System Auditability

  • System Sustainability

  • Interoperability & Portability

7) System End-of-Life

  • Responsible Decommissioning

1) Pre-Planning

C1: Resource Planning

Premeditated training & tooling for developers to critically assess epistemic and ontological frameworks that might contribute to bias or other design flaws.

 

C2: Stakeholder Analysis

Assessment of needs and protections for all stakeholder groups, including but not limited to “users” and “customers”

 
  • Map out all stakeholders for the application.

  • Analyze multi-stakeholder needs for potentially marginalized users (e.g. people with disabilities, minors).

  • Analyze cultural variables for each stakeholder group (e.g. a global user base).

  • Analyze accessibility for non-able-bodied users.

  • Analyze non-user privacy protection protocols.

C3: Values-Based Procurement

Ensure procurement (e.g. data, hardware, API, cloud storage, etc.) adheres to organizational values.

 

C4: Environmental Impact Assessment

Minimize ecological footprint and prioritize sustainable practices throughout the development workflow and product/service life cycle.

 
  • Factor in externalities like energy use when conceptualizing products and services.

  • Optimize energy use (a rough estimation sketch follows this list).

  • Identify environmentally-conscious hosting procurement.
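
As a rough illustration of the energy-use consideration above, the following Python sketch estimates the energy and carbon footprint of a training run. The function name, power draw, PUE, and grid-intensity figures are illustrative assumptions, not measurements; substitute values for your own hardware and region.

```python
# Hypothetical back-of-envelope estimate of training energy use and CO2e.
# All figures (power draw, PUE, grid intensity) are illustrative assumptions.

def training_footprint(gpu_count: int,
                       gpu_power_kw: float,
                       hours: float,
                       pue: float = 1.5,
                       grid_kg_co2_per_kwh: float = 0.4) -> dict:
    """Return estimated energy (kWh) and emissions (kg CO2e) for a training run."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue   # facility overhead via PUE
    co2e_kg = energy_kwh * grid_kg_co2_per_kwh            # regional grid carbon intensity
    return {"energy_kwh": round(energy_kwh, 1), "co2e_kg": round(co2e_kg, 1)}

# e.g. 8 GPUs drawing 0.3 kW each over a 72-hour run
print(training_footprint(gpu_count=8, gpu_power_kw=0.3, hours=72))
```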

2) Project Design

C5: Task Representation

Ensure each process, task system, stack layer, etc. is planned and documented to ensure all technical features, sensor specs, etc. are in sync with planned objectives.


 
  • Create a documentation framework for self-reporting specific technical features.

  • Select management tools that enable more than one staff member to operationalize any single task.

C6: Change Management

Protocols to log and track system specs and technical features across the lifecycle of development and deployment, and across all relevant staff.

 
  • Organizational redundancy so roles don’t become single points of failure.

  • Cross-team knowledge sharing to ensure critical system information isn’t known exclusively by select individuals.

  • Incorporate change management strategies into task documentation reporting.

  • Identify change management policies to ensure that each segment of each task is documented to allow for redundant capacity.

  • Change management tasks are continuous even after deployment; consider using AI to collect data and make predictions.

3) Risk, Security, Compliance

C7: Compliance & Governance (Hard/Soft)

Adhere to relevant regulatory governance and technical standards compliance.

 
  • List legal compliance with territory-specific data/privacy regulation (e.g. GDPR).

  • Identify multi-territorial compliance (where applicable).

  • Select the standards and compliance monitoring to be used.

C8: Internal Audit

Ensure scalability so that rapid scaling up doesn’t cause service interruption. Establish protocols for expanding language support, geographic diversity, etc. Ensure arbitration mechanisms are still applicable for “non-tr…”

 
  • Establish a QA/scalability test plan.

  • Identify redundancy protocols.

  • Schedule a translation audit and localization review.

  • Schedule a technical & legal review of the Terms of Service.

  • Perform a dry run of arbitration mechanisms.

C9: Risk Mitigation

Create internal protocols that provide a holistic approach to risk management that spans from legal compliance to network architecture.

 
  • Establish a multijurisdictional legal compliance assurance plan.

  • Establish user device safety best practices and a verification & validation test plan (covering physical health, mental health, etc.).

  • How are data resilience, robustness, and redundancy being tested?

  • How is network traffic being perpetually monitored for suspicious activity?

  • How is employee and third-party data access being managed?

C10: Security Assessment & Monitoring

Full-stack resilient security architecture, mitigation protocols, and a ready-to-deploy incident response system.

 
  • Diverse cybersecurity ecosystem

  • Create full-stack penetration test plan

  • Employ red-teaming principles

  • Build redundancy capacity

  • Utilize strong encryption and salted+hashed databases to mitigate compromise in case of an attack (a minimal sketch follows this list).

  • Design & deploy opsec best-practices to mitigate social engineering and other HUMINT-based attacks
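
As a minimal illustration of the salted-and-hashed storage practice noted above, the sketch below derives and verifies a salted hash in Python. The choice of PBKDF2-HMAC-SHA256, the iteration count, and the salt length are assumptions to be set against your own threat model.

```python
# Minimal sketch of the "salt + hash" storage idea; parameters are illustrative assumptions.
import hashlib
import hmac
import os

def hash_secret(secret: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Return (salt, digest) so only the derived hash is ever stored."""
    salt = os.urandom(16)                      # unique salt per record
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, iterations)
    return salt, digest

def verify_secret(secret: str, salt: bytes, digest: bytes, *, iterations: int = 600_000) -> bool:
    """Constant-time comparison against the stored salted hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_secret("correct horse battery staple")
assert verify_secret("correct horse battery staple", salt, digest)
```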

4) Data & System Setup

C11: Scalable Data Architecture

Ensure that full-stack data architecture is privacy-preserving, bias-mitigated, auditable, and stored with high-assurance cryptographic security.

 
  • Create a Data Bill of Materials (dBOM); a minimal sketch follows this list.

  • Data collection minimization.

  • Understand bias mitigation procedures

  • Data structure audits (e.g. no plain text / unencrypted databases).

  • Capacity for reproducibility & replicability assurance.

  • Privacy-preserving best practices (e.g. databases salted & hashed).

  • Validation & verification of 100% encrypted cloud buckets.
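
There is no universal dBOM schema; the Python sketch below is one assumed shape for a dBOM entry, recording provenance, consent basis, known bias notes, and encryption status. Field names and values are illustrative, not a standard.

```python
# Illustrative dBOM entry: field names and values are assumptions, not a standard schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class DbomEntry:
    dataset_name: str
    source: str            # where the data came from
    license: str           # usage terms
    consent_basis: str     # legal/consent basis for collection
    contains_pii: bool
    known_bias_notes: str  # documented gaps or skews
    encryption_at_rest: str

entry = DbomEntry(
    dataset_name="voice-commands-v2",
    source="third-party vendor",
    license="commercial",
    consent_basis="opt-in user consent",
    contains_pii=True,
    known_bias_notes="under-represents non-native accents",
    encryption_at_rest="AES-256, server-side",
)
print(json.dumps(asdict(entry), indent=2))
```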

C12: Input/Output Benchmarking

I/O benchmarks for sensor calibration, model thresholds, user/account throughput.

 
  • Dataset benchmarking: bias mitigation for public datasets, novel datasets, and third-party datasets.

  • Verifiable, transparent data underpinning “accuracy” claims. 

  • I/O benchmarks for sensor calibration based on dynamic real-world conditions.

  • Sensor calibration for diverse user bases (e.g. skin tones, accents, etc.).

  • Multi-baseline benchmarking to contrast the output of individual users and groups of users (see the sketch after this list).
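
A minimal sketch of the multi-baseline idea above: compute per-group accuracy and contrast it with the system-wide figure. The group labels, example records, and flagging threshold are illustrative assumptions.

```python
# Illustrative multi-baseline benchmark: compare per-group accuracy against the
# system-wide baseline; groups, records, and the 0.1 threshold are assumptions.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, predicted, actual)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
]
overall = sum(p == a for _, p, a in records) / len(records)
for group, acc in per_group_accuracy(records).items():
    flag = "  <-- review" if abs(acc - overall) > 0.1 else ""
    print(f"{group}: {acc:.2f} (system-wide {overall:.2f}){flag}")
```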

C13: Model Training

Establish algorithmic model thresholds, best practices, transparency, accountability and oversight procedures to protect against unforeseen compounded bias at the model-training stage.

 
  • Human-readable documentation for various learning approaches (e.g. self-supervised).

  • Use recursive artifact design.

  • Create a model training plan and understand bias.

  • Automated anomaly detection (see the sketch after this list).
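
A minimal sketch of the automated anomaly detection item above: flag training-loss values that deviate sharply from a rolling window. The window size, z-score threshold, and example loss curve are assumptions.

```python
# Illustrative anomaly check on a training-loss curve; window and threshold are assumptions.
import statistics

def loss_anomalies(losses, window=5, z_threshold=3.0):
    flagged = []
    for i in range(window, len(losses)):
        recent = losses[i - window:i]
        mean, stdev = statistics.mean(recent), statistics.pstdev(recent) or 1e-9
        if abs(losses[i] - mean) / stdev > z_threshold:
            flagged.append((i, losses[i]))
    return flagged

losses = [2.3, 2.1, 1.9, 1.8, 1.7, 1.65, 1.6, 4.2, 1.55, 1.5]  # step 7 spikes
print(loss_anomalies(losses))  # expect the spike at index 7 to be flagged
```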

C14: Sensor Calibration

Ensure that inputs from multiple combined sensors don’t lead to bias or anomalies, while creating transparency for users and non-users.

 
  • Minimize data & model issues from adding further sensors and combining signals.

  • Minimize passive sensor collection (e.g. using event-based sensors instead of active capture).

  • Activation assurance for device-specific input.

  • Calibration benchmark transparency.

  • User notification of sensor-capture status

  • Sensor-variable synchronicity (e.g. variable spectrum capture, event-based visual sequencing, etc.)

5) System Operation

C15: ML Deployment

ML systems designed for regression explainability, output replicability & reproducibility, and decision oversight

 
  • Verifiable adherence to performance benchmarking.

  • Plan multi-phase (recursive) ML operations.

  • Score the replicability & reproducibility of ML-operationalized results (see the sketch after this list).

  • Create Human-readable explainability artifacts throughout ML processes.

  • Oversight of model-based data extrapolation.
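
A minimal sketch of scoring replicability: run the same pipeline twice with a fixed seed and compare output fingerprints. The stand-in pipeline and the SHA-256 fingerprinting choice are assumptions, not a prescribed method.

```python
# Illustrative replicability check: identical fingerprints across seeded runs
# indicate a replicable pipeline. The pipeline below is a stand-in assumption.
import hashlib
import random

def run_pipeline(seed: int) -> list[float]:
    rng = random.Random(seed)                # deterministic given the seed
    return [round(rng.random(), 6) for _ in range(100)]

def fingerprint(outputs: list[float]) -> str:
    return hashlib.sha256(repr(outputs).encode()).hexdigest()

run_a = fingerprint(run_pipeline(seed=42))
run_b = fingerprint(run_pipeline(seed=42))
print("replicable:", run_a == run_b)
```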

C16: Sensor / Data Fusion

Ensure that the confluence of signals being input into a system doesn’t contribute to performance issues.

 
  • Verification and validation of each input (ensure that one sensor input doesn’t create static for another sensor input).

  • Calibration of sensor/data fusion (passive, active, real-time, etc.) to create multimodal inferences based on more than one input classifier (see the sketch after this list).

  • Human-readable terms of service to ensure transparent sensor intake for on-device sensor deployment.

  • Transparent use of any/all human-facing sensors: overt notification that sensors are being used to collect affective state data and/or for emotion recognition / detection.
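
A minimal sketch of sensor/data fusion producing a multimodal inference from more than one input classifier, here via a weighted late fusion of two probability distributions. The modalities, labels, and weights are assumptions.

```python
# Illustrative late fusion: combine confidence scores from two independent
# classifiers (e.g. audio and vision) with a weighted average; weights are assumptions.
def fuse_scores(audio_probs: dict, vision_probs: dict, w_audio=0.4, w_vision=0.6) -> dict:
    labels = set(audio_probs) | set(vision_probs)
    return {
        label: w_audio * audio_probs.get(label, 0.0) + w_vision * vision_probs.get(label, 0.0)
        for label in labels
    }

audio = {"speech": 0.7, "silence": 0.3}
vision = {"speech": 0.4, "silence": 0.6}
fused = fuse_scores(audio, vision)
print(max(fused, key=fused.get), fused)   # the multimodal inference and its scores
```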

C17: Moderation, Review, QA/QC

Formalized protocols for: safety-oriented content policies, transparency around content review, and arbitration mechanism for users to contest system decisions.

 
  • Does the user-generated content undergo automated safety review? If not, what internal protocols trigger a review (e.g. sentiment analysis)? A minimal trigger sketch follows this list.

  • Quality assurance/control (QA/QC) for emotion-tagged content display

  • Does the system flag “vulnerable” or at-risk users? If so, how is this tested?

  • Reporting system for users to flag specific content/features. What arbitration mechanisms are in place for users to contest flagged/reported content?

  • Transparency for content provenance: can users query how specific content was generated?
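
A minimal sketch of an internal protocol that triggers human review of user-generated content. The stand-in sentiment scorer, keyword list, and threshold are assumptions standing in for a real moderation model.

```python
# Illustrative review trigger: escalate content when a (hypothetical) scorer
# falls below a threshold; scorer, keywords, and threshold are assumptions.
RISK_TERMS = {"threat", "harm"}            # placeholder keyword list

def sentiment_score(text: str) -> float:
    """Stand-in scorer: 1.0 = benign, 0.0 = highly negative."""
    hits = sum(term in text.lower() for term in RISK_TERMS)
    return max(0.0, 1.0 - 0.5 * hits)

def needs_human_review(text: str, threshold: float = 0.6) -> bool:
    return sentiment_score(text) < threshold

queue = ["have a nice day", "this is a threat of harm"]
for post in queue:
    print(post, "->", "escalate" if needs_human_review(post) else "auto-approve")
```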

6) User Safety & Agency

C18: UX/UI Safety & Controls

Create safety-centric user-experience / user-interaction (UX/UI) features (e.g. notifications and user controls).

 
  • On-device user notifications for affective state data capture.

  • Push/serve user notifications to indicate algorithmic processes utilizing affective state data are operating.

  • User controls to toggle on/off collection/capture of affective state data and/or emotion recognition/detection.

  • Transparent display mechanics to avoid using manipulative UX interventions like dark patterns.

C19: System Auditability

Transparency of system analysis, regulatory compliance, safety & wellness oversight, and arbitration.

 
  • Do users have the ability to query various benchmarks (e.g. to compare personal output against system-wide output)? A minimal query sketch follows this list.

  • Regulatory compliance (e.g. ensuring no personally identifiable data).

  • Safety & Wellness oversight: safeguard protocols to ensure system output doesn’t endanger physical or mental health of stakeholders.

  • Ability for users (and non-users) to arbitrate system output in accordance with pre-defined terms of service.
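
A minimal sketch of a user-facing benchmark query that compares a user’s own average against the system-wide average. The data layout, user IDs, and metric are assumptions.

```python
# Illustrative benchmark query: personal average vs. system-wide average.
from statistics import mean

scores = {
    "user_123": [0.82, 0.79, 0.91],
    "user_456": [0.64, 0.70],
    "user_789": [0.88, 0.85, 0.90, 0.87],
}

def compare_to_system(user_id: str) -> dict:
    personal = mean(scores[user_id])
    system_wide = mean(s for user_scores in scores.values() for s in user_scores)
    return {"personal": round(personal, 3), "system_wide": round(system_wide, 3)}

print(compare_to_system("user_456"))
```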

C20: System Sustainability

Ensure mitigation of future issues with continued system use.

 
  • Do you have a performance continuity plan for continuous monitoring after deployment?

  • Sensor & model threshold monitoring (e.g. thresholds being exceeded or becoming redundant in the future).

  • Identify future potential for demonetization, security breach, etc.

C21: Interoperability & Portability

Secure & fair interoperability & data portability (internally and with third parties)

 
  • Identify parameters on cross-database integration.

  • Cryptographically mediated interoperable datasets.

  • Identify plug-and-play models (e.g. training models, API outputs for third-party model training, device-portable).

  • Delineated protocols for model reuse.

  • Interoperability: data, API, device-portable (see the sketch after this list).

  • User awareness & control of integration & data sharing with third parties.
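
A minimal sketch of data portability: bundling a user’s records into a portable, machine-readable export on request. The record fields, file name, and format version are assumptions.

```python
# Illustrative data-portability export to a machine-readable JSON bundle;
# fields and file name are assumptions.
import json
from datetime import datetime, timezone

user_records = {
    "profile": {"user_id": "user_123", "locale": "en-GB"},
    "preferences": {"sensor_capture": False},
    "activity": [{"date": "2024-05-01", "events": 12}],
}

def export_user_data(records: dict, path: str = "user_123_export.json") -> str:
    bundle = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "format_version": "1.0",
        "data": records,
    }
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(bundle, fh, indent=2)
    return path

print("wrote", export_user_data(user_records))
```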

7) System End-of-Life

C22: Responsible Decommissioning

Ensure safe, private and ecologically sound system end-of-life.

 
  • Identify responsible disposal: protocols to eliminate e-waste (ecologically responsible hardware disposal).

  • Validate & verify the wiping and/or overwriting of drive data upon decommissioning, and ensure models are not reused without consent of data subjects.

  • Protocols for ensuring that the disposal of data, models, or physical devices doesn’t create an attack surface.