AI principles at PulseAI

At PulseAI, we strive to make a positive impact on patient care by enabling all doctors to access state-of-the-art decision support tools. We believe that artificial intelligence has the power to revolutionise healthcare by reducing physician workload, cutting clinical errors and enabling new systems for the diagnosis of disease. We recognise that AI technology raises significant ethical challenges, and we are committed to responsible innovation in this field. 

At PulseAI we believe that all AI should: 

Be for the good of humanity. 

  • We only build AI that helps people. 

  • We assess both the positive and negative consequences of AI we build and only make available that which we believe is positive for society. 

  • We provide our services at a fair price. 

Be accountable to people.

  • All AI services provided by PulseAI are accountable to human oversight. 

  • We strive to allow customers to provide feedback and request details about our AI decision-making processes. We seek to enable this type of disclosure wherever possible. 

  • We actively attempt to understand all reported errors in our AI and provide feedback to users about changes we make to address them.

Be transparent about what is and what is not AI content. 

  • We believe it should always be clear which content is generated by AI and which is not.  

  • In all interactions with our systems, we will clearly identify any data that results from AI analysis. 

Interact with people in the way they expect. 

  • We believe AI should be designed to fit into existing social and moral norms, wherever it is being applied. 

  • We endeavour to provide AI analysis in plain language that is easily understandable without further processing. 

  • We tailor the language used based on the target audience of each AI service. 

Be explainable by people.

  • We accept that the black-box nature of some AI makes explainability a real challenge. 

  • We believe that carefully designed AI can mitigate explainability problems, even if the model itself is not explainable.  

  • We aspire to build only AI systems in which an expert could readily detect an error in the AI analysis.

  • We treat all reported errors with urgency and seek to understand and improve our AI at every opportunity. 

Be fair.

  • We consistently seek to remove bias and promote inclusive representation in our AI systems. 

  • We collect data from large populations and seek to understand the impact of diversity on our AI models. 

  • We have a special reporting process for anyone using our services to provide feedback on any analysis they think is impacted by bias or incorrect assumptions. 

Incorporate appropriate data privacy policies and regulatory oversight. 

  • We seek to minimize the amount of data our systems require for analysis and only provide our AI with information relevant to its designed task. 

  • We endeavour to explain why any required data is needed and what we use it for. 

  • We hold all patient data securely and have strict policies defining which people or services are allowed to access it. 

  • We will always provide an option that allows users to opt-out of their data being used to improve our services. However, we may still be required to hold a copy of this data for regulatory or compliance purposes. 

  • We comply with all relevant data protection laws and regulations in every market where we operate.

Be built using good software engineering and security practices.

  • We build our systems to be robust and extensively tested in-house prior to release. 

  • We provide clear instructions for use for all of our services, even if they are not required for regulatory purposes. 

  • We implement systems to automatically detect user input errors or other faults where feasible to do so. 

  • We have an internal risk assessment and management process to ensure we understand and mitigate risk within our services. 

  • We apply industry-accepted best practices for cybersecurity across all of our systems. 

  • We closely monitor our active products for performance and signs of retraining drift where relevant. 

Have its performance assessed in a representative way. 

  • We faithfully apply industry-accepted best practices for machine learning and artificial intelligence. 

  • We ensure that all AI services are trained on data that is representative of the intended patient population. 

  • We ensure all training and test datasets are completely independent. 

  • We will always report sufficient information about our datasets and model performance to allow users to fairly assess the performance of our services against others. 

  • We will always report performance metrics transparently and will never seek to use misleading statistics. 

  • Where possible, we try to assess not only the AI model performance but the performance of the human-AI team for clinical decision support services. 

In Conclusion

We use these principles as the guiding foundation for everything we do at PulseAI. We are always open to feedback on how we can improve our values or services, so don’t be afraid to get in touch.
