Balancing AI: doing good and avoiding harm

Growing up, my father always said, “do good.” As a child, I thought it was cringeworthy grammar and would correct him, insisting it should be “do well.” Even my children tease me when they hear his “do good” advice, and I’ll admit I give him a pass on the grammar front.

In the case of responsible artificial intelligence (AI), organizations should make avoiding harm a central priority. Some organizations may also aim to use AI for “doing good.” Often, however, AI needs clear guardrails in place before its use can genuinely be called “good.”

Read the “Presidio AI Framework” paper to learn how to address generative AI risks with guardrails across the expanded AI life cycle

As generative AI continues to go mainstream, organizations are excited about its potential to transform processes, reduce costs and increase business value. Business leaders are eager to redesign their business strategies to serve customers, patients, employees, partners or citizens more efficiently and improve the overall experience. Generative AI is opening doors and creating new opportunities and risks for organizations globally, with human resources (HR) leadership playing a key role in managing these challenges.

Adapting to the implications of increased AI adoption could include complying with complex regulatory requirements such as NIST, the EU AI Act, NYC 144, the US EEOC and the White House AI Act, which directly impact HR and organizational policies, as well as social, job-skilling and collective bargaining labor agreements. Adopting responsible AI requires a multi-stakeholder strategy, as affirmed by leading international resources including NIST, the OECD, the Responsible AI Institute, the Data and Trust Alliance and IEEE.

This is not just an IT role; HR plays a key role

HR leaders now advise businesses on the skills required for today’s work as well as future skills, taking AI and other technologies into account. According to the WEF, employers estimate that 44% of workers’ skills will be disrupted in the next five years. HR professionals are increasingly exploring AI’s potential to improve productivity by augmenting the work of employees and empowering them to focus on higher-level work. As AI capabilities expand, there are ethical concerns and questions every business leader must consider so that AI use does not come at the expense of workers, partners or customers.

Learn the principles of trust and transparency recommended by IBM for organizations to responsibly integrate AI into their operations.

Worker education and knowledge management are now tightly coordinated with IT, legal, compliance and business operators as a multi-stakeholder strategy and an ongoing process, rather than a once-a-year check box. As such, HR leaders need to be intimately involved in developing programs to create policies and grow employees’ AI acumen, identifying where to apply AI capabilities, establishing a responsible AI governance strategy and using tools like AI and automation to help ensure thoughtfulness and respect for employees through trustworthy and transparent AI adoption.

Challenges and solutions in adopting AI ethics within organizations

Although AI adoption and use cases continue to expand, organizations may not be fully prepared for the many considerations and consequences of integrating AI capabilities into their processes and systems. While 79% of surveyed executives emphasize the importance of AI ethics in their enterprise-wide AI approach, fewer than 25% have operationalized common principles of AI ethics, according to IBM Institute for Business Value research.

This discrepancy exists because policies alone cannot curb the prevalence and growing use of digital tools. Workers increasingly use smart devices and apps such as ChatGPT or other black-box public models without proper approval, and this has become a persistent issue, often unaccompanied by the change management needed to inform workers about the associated risks.

For example, workers might use these tools to write emails to clients using sensitive customer data, or managers might use them to write performance reviews that disclose personal employee data.

To help reduce these risks, it may be useful to embed responsible AI practice focal points or advocates within each department, business unit and functional level. This is an opportunity for HR to drive and champion efforts to head off potential ethical challenges and operational risks.

Ultimately, it is imperative to create a responsible AI strategy with common values and principles that are aligned with the company’s broader values and business strategy and communicated to all employees. This strategy needs to advocate for employees and identify opportunities for organizations to embrace AI and innovation that push business objectives forward. It should also help educate employees to guard against harmful AI effects, address misinformation and bias, and promote responsible AI, both internally and within society.

Top 3 considerations for adopting responsible AI

The top 3 considerations business and HR leaders should keep in mind as they develop a responsible AI strategy are:

Make people central to your strategy

Put another way, prioritize your people as you plot your advanced technology strategy. This means identifying how AI works alongside your employees, communicating specifically to those employees how AI can help them excel in their roles and redefining ways of working. Without education, employees may worry that AI is being deployed to replace them or eliminate the workforce. Communicate directly and honestly with employees about how these models are built. HR leaders should address potential job changes, as well as the realities of new categories of jobs created by AI and other technologies.

Enable governance that accounts for both the technologies adopted and the enterprise

AI is not a monolith. Organizations can deploy it in many ways, so they must clearly define what responsible AI means to them, how they plan to use it and how they will refrain from using it. Principles such as transparency, trust, equity, fairness, robustness and the use of diverse teams, in alignment with OECD or RAII guidelines, should be considered and designed into each AI use case, whether it involves generative AI or not. Additionally, routine reviews for model drift and privacy measures should be conducted for each model, along with specific diversity, equity and inclusion metrics for bias mitigation.
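
To make the bias-mitigation point concrete, the sketch below computes group selection rates and the adverse impact ratio behind the EEOC’s four-fifths rule, one common fairness check for AI-assisted hiring or screening decisions. It is a minimal illustration: the function names, sample data and 0.8 threshold are assumptions for demonstration, not a prescribed IBM method or tool.

```python
from collections import Counter
from typing import Dict, Iterable, Tuple

def selection_rates(decisions: Iterable[Tuple[str, bool]]) -> Dict[str, float]:
    """Share of favorable outcomes (e.g., candidates advanced) per group."""
    totals: Counter = Counter()
    favorable: Counter = Counter()
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            favorable[group] += 1
    return {group: favorable[group] / totals[group] for group in totals}

def adverse_impact_ratio(rates: Dict[str, float]) -> float:
    """Lowest group selection rate divided by the highest.

    Values below 0.8 flag potential adverse impact under the four-fifths rule.
    """
    return min(rates.values()) / max(rates.values())

# Illustrative data: (group label, whether the AI-assisted screen advanced the candidate)
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(decisions)
ratio = adverse_impact_ratio(rates)
print(f"Selection rates: {rates}")
print(f"Adverse impact ratio: {ratio:.2f} "
      f"({'review needed' if ratio < 0.8 else 'within the four-fifths rule'})")
```

In practice, a check like this would run routinely alongside model drift and privacy reviews, with results feeding back into the governance process described above.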

Identify and align the right skills and tools needed for the work

The reality is that some employees are already experimenting with generative AI tools to help them perform tasks such as answering questions, drafting emails and performing other routine work. Therefore, organizations should act immediately to communicate their plans to use these tools, set expectations for employees using them and help ensure that the use of these tools aligns with the organization’s values and ethics. Organizations should also offer skill development opportunities to help employees build their AI knowledge and understand potential career paths.

Download the “Unlocking Value from Generative AI” paper for more guidance on how your organization can adopt AI responsibly

Practicing and integrating responsible AI into your organization is essential for successful adoption. IBM has made responsible AI central to its AI approach with clients and partners. In 2018, IBM established the AI Ethics Board as a central, cross-disciplinary body to support a culture of ethical, responsible and trustworthy AI. It comprises senior leaders from various departments, such as research, business units, human resources, diversity and inclusion, legal, government and regulatory affairs, procurement and communications. The board directs and enforces AI-related initiatives and decisions. IBM takes the benefits and challenges of AI seriously, embedding responsibility into everything we do.

I’ll allow my father this one broken grammar rule. AI can “do good” when managed correctly, with the involvement of many humans, guardrails, oversight, governance and an AI ethics framework. 

Watch the webinar on how to prepare your business for responsible AI adoption

Learn how IBM helps its clients on their talent transformation journey
