Balancing AI: Do good and avoid harm - IBM Blog


Growing up, my father always said, “do good.” As a child, I thought it was cringeworthy grammar and I would correct him, insisting it should be “do well.” Even my children tease me when they hear his “do good” advice and I’ll admit I let him have a pass on the grammar front.

When it comes to responsible artificial intelligence (AI), organizations should make avoiding harm a central priority. Some organizations may also aim to use AI for “doing good,” but AI often needs clear guardrails before anyone can agree on what “good” means.

Read the “Presidio AI Framework” paper to learn how to address generative AI risks with guardrails across the expanded AI life cycle

As generative AI continues to go mainstream, organizations are excited about the potential to transform processes, reduce costs and increase business value. Business leaders are eager to redesign their business strategies to better serve customers, patients, employees, partners or citizens more efficiently and improve the overall experience. Generative AI is opening doors and creating new opportunities and risks for organizations globally, with human resources (HR) leadership playing a key role in managing these challenges.

Adapting to the implications of increased AI adoption could include complying with complex regulatory and policy requirements such as the NIST AI Risk Management Framework, the EU AI Act, New York City Local Law 144, US EEOC guidance and the White House Executive Order on AI, which directly impact HR and organizational policies, as well as social, job skilling and collective bargaining labor agreements. Adopting responsible AI requires a multi-stakeholder strategy, as affirmed by top international resources including NIST, the OECD, the Responsible Artificial Intelligence Institute, the Data and Trust Alliance and IEEE.

This is not just an IT role; HR plays a key role

HR leaders now advise businesses about the skills required for today’s work as well as future skills, considering AI and other technologies. According to the World Economic Forum (WEF), employers estimate that 44% of workers’ skills will be disrupted in the next 5 years. HR professionals are increasingly exploring AI’s potential to improve productivity by augmenting the work of employees and empowering them to focus on higher-level work. As AI capabilities expand, there are ethical concerns and questions every business leader must consider so their AI use does not come at the expense of workers, partners or customers.

Learn the principles of trust and transparency recommended by IBM for organizations to responsibly integrate AI into their operations.

Worker education and knowledge management are now tightly coordinated as a multi-stakeholder strategy with IT, legal, compliance and business operators, treated as an ongoing process rather than a once-a-year check box. As such, HR leaders need to be intimately involved in creating policies, growing employees’ AI acumen, identifying where to apply AI capabilities, establishing a responsible AI governance strategy and using tools such as AI and automation to ensure employees are treated thoughtfully and respectfully through trustworthy, transparent AI adoption.

Challenges and solutions in adopting AI ethics within organizations

Although AI adoption and use cases continue to expand, organizations may not be fully prepared for the many considerations and consequences of adopting AI capabilities into their processes and systems. While 79% of surveyed executives emphasize the importance of AI ethics in their enterprise-wide AI approach, less than 25% have operationalized common principles of AI ethics, according to IBM Institute for Business Value research.

This discrepancy exists partly because policies alone cannot keep pace with the spread of digital tools. Workers’ unapproved use of smart devices and apps such as ChatGPT and other black-box public models has become a persistent issue, and it often occurs without the change management needed to inform workers of the associated risks.

For example, workers might use these tools to write emails to clients using sensitive customer data or managers might use them to write performance reviews that disclose personal employee data. 

To help reduce these risks, it may be useful to embed responsible AI focal points or advocates within each department, business unit and functional level. This is an opportunity for HR to drive and champion efforts to head off potential ethical challenges and operational risks.

Ultimately, creating a responsible AI strategy with common values and principles that are aligned with the company’s broader values and business strategy and communicated to all employees is imperative. This strategy needs to advocate for employees and identify opportunities for organizations to embrace AI and innovation that push business objectives forward. It should also assist employees with education to help guard against harmful AI effects, address misinformation and bias and promote responsible AI, both internally and within society.

Top 3 considerations for adopting responsible AI

The top 3 considerations business and HR leaders should keep in mind as they develop a responsible AI strategy are:

Make people central to your strategy

Put another way, prioritize your people as you plot your advanced technology strategy. This means identifying how AI works with your employees, communicating specifically to those employees how AI can help them excel in their roles and redefining the ways of working. Without education, employees could become overly worried that AI is being deployed to replace them or to eliminate the workforce. Communicate directly and honestly with employees about how these models are built. HR leaders should address potential job changes, as well as the realities of new categories and jobs created by AI and other technologies.

Enable governance that accounts for both the technologies adopted and the enterprise

AI is not a monolith. Organizations can deploy it in many ways, so they must clearly define what responsible AI means to them, how they plan to use it and how they will refrain from using it. Principles such as transparency, trust, equity, fairness, robustness and the use of diverse teams, in alignment with OECD or RAII guidelines, should be considered and designed into each AI use case, whether it involves generative AI or not. Additionally, routine reviews for model drift and privacy should be conducted for each model, along with specific diversity, equity and inclusion metrics for bias mitigation.
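As a concrete illustration of what a routine drift or bias review might compute, the sketch below implements two commonly used checks: the Population Stability Index (a standard drift metric comparing training-time and production score distributions) and the disparate impact ratio (the “80% rule” often used in employment contexts). This is a minimal, hypothetical example, not an IBM- or OECD-prescribed method; thresholds and function names are illustrative assumptions.

```python
# Illustrative sketch only: two metrics a routine model review might track.
# Thresholds below are common rules of thumb, not regulatory requirements.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the training-time score distribution ('expected') with
    recent production scores ('actual'). As a rule of thumb, PSI < 0.1
    is read as stable and PSI > 0.25 as significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; clip to avoid log(0) for empty bins.
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def disparate_impact_ratio(selected, protected):
    """Selection rate of the protected group divided by that of the
    reference group; the common '80% rule' flags ratios below 0.8."""
    selected = np.asarray(selected, dtype=bool)
    protected = np.asarray(protected, dtype=bool)
    return float(selected[protected].mean() / selected[~protected].mean())
```

In practice, a governance process would run checks like these on a schedule for every deployed model and route out-of-threshold results to the review board rather than leaving detection to ad hoc inspection.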

Identify and align the right skills and tools needed for the work

The reality is that some employees are already experimenting with generative AI tools to help them perform tasks such as answering questions, drafting emails and performing other routine tasks. Therefore, organizations should act immediately to communicate their plans to use these tools, set expectations for employees using them and help ensure that the use of these tools aligns with the organization’s values and ethics. Also, organizations should offer skill development opportunities to help employees upskill their AI knowledge and understand potential career paths.

Download the “Unlocking Value from Generative AI” paper for more guidance on how your organization can adopt AI responsibly

Practicing and integrating responsible AI into your organization is essential for successful adoption. IBM has made responsible AI central to its AI approach with clients and partners. In 2018, IBM established the AI Ethics Board as a central, cross-disciplinary body to support a culture of ethical, responsible and trustworthy AI. It comprises senior leaders from various departments, such as research, business units, human resources, diversity and inclusion, legal, government and regulatory affairs, procurement and communications. The board directs and enforces AI-related initiatives and decisions. IBM takes the benefits and challenges of AI seriously, embedding responsibility into everything we do.

I’ll allow my father this one broken grammar rule. AI can “do good” when managed correctly, with the involvement of many humans, guardrails, oversight, governance and an AI ethics framework. 

Watch the webinar on how to prepare your business for responsible AI adoption

Explore how IBM helps clients in their talent transformation journey
