Artificial Intelligence Policy & Governance Framework

Our Commitment to Responsible AI

Our Vision
At TRG Screen, we believe artificial intelligence (AI) has the power to transform industries—making software and services more accessible, efficient, and secure for everyone. As we develop and deploy AI across our products and services, we are committed to doing so responsibly, transparently, and in ways that earn and maintain the trust of our customers and their clients.

This document outlines our principles for responsible AI development and the governance framework that guides our work. These principles are not merely aspirational—they are embedded in our processes, tools, and culture.

Our Core AI Principles
We have established five foundational principles that guide every aspect of how we design, develop, deploy, and maintain AI systems. These principles reflect our commitment to our customers, their clients, regulators, and the broader global ecosystem we serve.


1. Trust and Transparency
Our Actions:

  • We clearly disclose when AI is being used in our products and services
  • We provide meaningful explanations of AI-driven decisions and recommendations
  • We maintain comprehensive documentation of our AI systems, including their design, training data, and decision-making processes
  • We enable our customers to understand and explain AI impacts to their own clients and regulators
  • We are honest about the limitations and potential errors in our AI capabilities


2. Privacy and Data Protection
Our Actions:

  • We adhere to all applicable data protection regulations, including GDPR, CCPA, and other relevant laws across the jurisdictions we serve
  • We never use customer data to train AI models without explicit, informed consent
  • We implement robust data security measures, including data isolation and encryption at rest and in transit
  • We maintain data retention and deletion policies aligned with best practices
  • We provide customers with meaningful control over their data used in AI systems
  • We minimize data collection to only what is necessary for the AI system's intended purpose


3. Security and Reliability
Our Actions:

  • We implement comprehensive security measures to protect AI systems from cyber threats, adversarial attacks, and data poisoning
  • We conduct regular security assessments, vulnerability testing, and penetration testing
  • We maintain audit trails of AI system interactions and decisions
  • We design AI systems to be robust against unexpected or adversarial inputs
  • We implement human oversight mechanisms for critical AI-driven decisions


4. Accountability and Governance
Our Actions:

  • We maintain an AI Governance Council with executive oversight of all AI activities
  • We define clear roles and responsibilities for AI development, deployment, and monitoring
  • We maintain comprehensive documentation and version control for all AI systems
  • We conduct routine reviews of AI systems and their impacts
  • We provide accessible channels for reporting AI-related concerns
  • We hold individuals and teams accountable for AI system performance and compliance


5. Regulatory Compliance and Industry Standards
Our Actions:

  • We design AI systems to comply with regulations across the industries and jurisdictions we serve
  • We maintain alignment with emerging AI governance frameworks and standards
  • We support our customers in meeting their own regulatory obligations related to AI use
  • We participate in industry working groups to help shape responsible AI practices
  • We conduct impact assessments before deploying AI in sensitive use cases

How We Implement Responsible AI

Our principles are embedded throughout our organization through three interconnected pillars: Governance, Engineering, and Culture.

Governance

  • AI Governance Council: Executive-level oversight body that reviews high-risk AI applications, sets policies, and ensures alignment with our principles
  • Risk Classification Framework: Systematic approach to categorizing AI use cases by potential impact, with appropriate controls for each tier
  • Third-Party AI Assessment: Rigorous evaluation process for any AI systems or models from external providers before integration
  • Incident Response: Established protocols for identifying, escalating, investigating, and remediating AI-related incidents


Engineering Practices

  • Responsible AI by Design: Security, privacy, fairness, and transparency requirements integrated from the earliest stages of development
  • Testing and Validation: Comprehensive testing protocols including bias detection, adversarial testing, and performance evaluation across diverse scenarios
  • Continuous Monitoring: Real-time monitoring of AI system performance, drift detection, and automated alerting for anomalies
  • Documentation Standards: Comprehensive documentation requirements for all AI systems, including model cards and data sheets


Culture and Training

  • Mandatory Training: All personnel involved in AI development receive training on responsible AI practices, ethics, and this policy
  • Cross-Functional Collaboration: AI development teams include diverse perspectives from engineering, legal, compliance, security, and user experience
  • Open Dialogue: We encourage employees to raise concerns about AI systems without fear of retaliation

Our Commitments to Customers

We recognize that our customers have their own obligations to clients, regulators, and stakeholders. We are committed to supporting their responsible use of AI.

  • Documentation: We provide comprehensive documentation for all AI features
  • Transparency: We clearly identify AI-powered features and provide explanations of how they work
  • Data Sovereignty: We respect customer data boundaries and provide clear controls over data use in AI systems
  • Regulatory Support: We provide documentation and support to help customers meet their regulatory obligations related to AI
  • Continuous Updates: We keep customers informed about significant changes to AI systems and provide advance notice when possible
  • Explainability: We design AI tools to clearly explain how each output is generated, enabling users to validate the underlying logic or take corrective action
  • Accuracy: We test and validate AI outputs to minimize inaccuracy and ensure expected performance
  • Human in the Loop: We use human-in-the-loop safeguards to ensure humans validate AI outputs before critical actions are taken

Continuous Improvement

Responsible AI is not a destination but a journey. We are committed to continuously improving our AI practices, governance, and systems as technology, regulation, and customer needs evolve.

Questions and Feedback
We welcome questions, feedback, and dialogue about our AI principles and practices. Customers, partners, and stakeholders may contact us through their account representatives or at ai@trgscreen.com.

For concerns about AI system behavior or potential issues, please contact our AI Governance team directly.

