Balancing Act: The Value of Human Expertise in the Age of Generative AI - DATAVERSITY

Humans are considered the weakest link in the enterprise when it comes to security, and rightfully so: upwards of 95% of cybersecurity incidents are caused by human error. Humans are fickle, fallible, and unpredictable, making them easy targets for cybercriminals looking to gain entry to organizations’ systems.

This makes our reliance on machines that much more important. Up until this point, we’ve been able to trust machines to operate with code as the source of truth. Even though they can be compromised through vulnerabilities in the code or through the social flaws of their human operators, such problems usually have a clear-cut solution.

However, with the rise of generative AI (GenAI) and large language models (LLMs), organizations are now facing social engineering attacks that trick the AI into doing things it wasn’t intended to do. As we offload more to AI, it will be interesting to see these new attack patterns play out.

In the face of this dilemma, it’s once again up to humans to navigate this complex and evolving AI security landscape. This calls on CISOs to communicate clearly the benefits as well as the shortcomings of AI and to recognize the long list of security considerations tied to AI-powered products and capabilities. 

Rushed Implementation of Generative AI Brings New Cybersecurity Challenges

To begin, a common issue with GenAI and LLMs is broad overreliance on AI-generated content. Trusting such content without human verification or oversight can lead to the propagation of erroneous data, which in turn drives poor decision-making and erodes critical thinking. And because LLMs are known to hallucinate, some of this misinformation may not even result from malicious intent.
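
One lightweight way to operationalize that oversight is to treat every piece of AI-generated content as a draft that cannot be published without sourcing and explicit human sign-off. The sketch below is purely illustrative; the Draft record and release gate are hypothetical, not any particular product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated draft held until it clears review."""
    text: str
    citations: list[str] = field(default_factory=list)  # sources the model claims
    human_approved: bool = False

def release(draft: Draft) -> str:
    """Release AI output only if it cites sources and a human signed off."""
    if not draft.citations:
        raise ValueError("no citations: route to a human fact-checker")
    if not draft.human_approved:
        raise ValueError("pending review: a human must sign off first")
    return draft.text

draft = Draft(text="Q3 revenue grew 12%.", citations=["finance-report-q3"])
draft.human_approved = True  # set only after a person verifies the claim
print(release(draft))
```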

In the same vein, the volume of insecure code being introduced as GenAI evolves will become a significant challenge for CISOs if not proactively addressed. AI engines are known to write buggy code containing security vulnerabilities. Without proper human oversight, GenAI enables people who lack the necessary technical foundations to ship code, increasing security risk throughout the software development lifecycle for organizations that use these tools improperly.
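
To make the risk concrete, here is a hypothetical example of a pattern code assistants are known to emit, building a SQL query through string interpolation, alongside the parameterized fix a security-aware reviewer would insist on (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name: str) -> list:
    # Interpolating input straight into SQL: a name such as
    # "' OR '1'='1" turns the WHERE clause into a tautology
    # and returns every row (classic SQL injection).
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str) -> list:
    # Parameterized query: the driver treats the input as data,
    # never as executable SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks the whole table
print(find_user_safe("' OR '1'='1"))    # returns nothing
```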

Data leakage is another prevalent issue. In some cases, attackers can use prompt injection to extract sensitive information that the AI model has learned from another user. Often this is harmless, but malicious use is certainly not precluded: bad actors can deliberately probe an AI tool with meticulously crafted prompts, aiming to extract and leak sensitive or confidential information the tool has memorized.
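
Mitigations here typically sit on the output side, screening model responses for sensitive patterns before they reach the user. The guard below is a simplified, hypothetical sketch; a production system would pair it with proper data isolation between users and a vetted secret-scanning library rather than hand-rolled patterns:

```python
import re

# Illustrative patterns only; a real deployment would tune these
# to its own data and use a maintained secret-scanning ruleset.
LEAK_PATTERNS = [
    re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),  # email address
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                          # card-like number
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),                    # API key assignment
]

def screen_model_output(text: str) -> str:
    """Withhold a model response that appears to contain sensitive data."""
    for pattern in LEAK_PATTERNS:
        if pattern.search(text):
            return "[response withheld: possible sensitive data detected]"
    return text

print(screen_model_output("The forecast calls for rain."))          # passes through
print(screen_model_output("Sure! The admin uses api_key=sk-12345"))  # withheld
```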

AI May Increase Some Cybersecurity Gaps but Has Significant Potential to Close Others

Lastly, the propagation of GenAI and LLMs is likely to undo some of our industry’s progress on attack surface reduction, for a few reasons. First, the ability to generate code with GenAI lowers the bar for who can be a software engineer, resulting in weaker code and even weaker security standards. Second, GenAI requires vast amounts of data, which means the scale and impact of data breaches will grow exponentially. Third, as with any emerging technology, developers may not be fully aware of the ways their implementation can be exploited or abused.

Nevertheless, it’s essential to adopt a balanced perspective. While GenAI’s facilitation of code generation may raise concerns, it also brings positive attributes to the cybersecurity landscape. For instance, it can effectively identify security vulnerabilities such as Cross-Site Scripting (XSS) or SQL injection. This dual nature underscores the importance of a nuanced understanding: rather than viewing AI as solely detrimental, CISOs should recognize the complementary relationship between artificial intelligence and human involvement in cybersecurity. They must grasp the risks associated with GenAI and LLMs while concurrently exploring human-centric approaches to implementing GenAI and fortifying their organizations.
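
For instance, given a snippet like the hypothetical one below, an LLM-assisted review can flag the unescaped interpolation as a stored XSS risk and propose the escaped version (both rendering functions are invented for illustration):

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # The kind of flaw a GenAI-assisted review can flag: user input
    # interpolated into HTML unescaped, enabling stored XSS.
    return f"<div class='comment'>{comment}</div>"

def render_comment_safe(comment: str) -> str:
    # Escaping user input neutralizes any injected markup.
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>alert('xss')</script>"
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # rendered as inert text
```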

Humans Pick Up What AI Leaves Behind

CISOs are not just tasked with unraveling the complexities of GenAI. They must pave a way forward for their organization and demonstrate to leadership how their organization can continue to thrive in a GenAI-dominated world. 

While end users are often responsible for many security vulnerabilities, there is no better defense against cybercrime than a well-trained and security-minded human. No matter what threat detection tools an organization has in place, there is simply no replacing the person behind the screen when it comes to testing software.

Organizations can outpace cybercriminals by harnessing the power of ethical hacking. While some are hesitant to invite hackers into their networks because of outdated misconceptions, these law-abiding cybersecurity experts are the best match for taking on bad actors – because, unlike AI, they can get inside cyberattackers’ heads.

In fact, hackers are already supplementing automated tools in the fight against cybercriminals, with 92% of ethical hackers saying they can find vulnerabilities that scanners cannot. By pulling back the veil on hacking for good, business leaders can embrace ethical hacking and human support to strike a more effective balance between AI and human experts in fighting modern cybercrime. Our recent Hacker-Powered Security Report highlights this, with 91% of our customers saying that hackers provide more impactful and valuable vulnerability reports than AI or scanning solutions. As AI continues to shape our future, the ethical hacker community will stay committed to ensuring its safe integration.

The combination of automation with a network of highly skilled hackers means companies can pinpoint critical application flaws before they are exploited. When organizations effectively blend automated security tools with ethical hacking, they close gaps in the ever-evolving digital attack surface. 

This is because humans and AI can work together to improve security team productivity: 

  1. Attack surface reconnaissance: Modern organizations accumulate extensive, complex IT infrastructure comprising both authorized and unsanctioned hardware and software. Developing an all-inclusive index of IT assets is important for reducing vulnerabilities, streamlining patch management, and aiding compliance with industry mandates. It also helps identify and analyze the points through which an attacker might target an organization (a minimal inventory sketch follows this list).
  2. Continuous assessments: Moving beyond point-in-time security, organizations can combine the ingenuity of human security experts with real-time attack surface insights to achieve continuous testing of the digital landscape. Continuous penetration testing lets IT teams view the results of constant simulations that show how a breach would look in the current environment, revealing weak spots where teams can adapt in real time.
  3. Process enhancements: Trusted human hackers can hand security teams detailed findings about vulnerabilities and affected assets, informing lasting improvements to development and patching processes.
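
As a minimal illustration of the inventory idea in item 1, the sketch below models assets as simple records and flags both unsanctioned software and versions that drift from an approved baseline. All hostnames, packages, and versions are hypothetical, and a real inventory would be fed by discovery tooling rather than hand-entered:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    """One entry in the IT asset inventory."""
    hostname: str
    software: str
    version: str
    sanctioned: bool  # approved by IT, or shadow IT?

def unsanctioned(inventory: list[Asset]) -> list[Asset]:
    """Surface shadow-IT assets that silently widen the attack surface."""
    return [a for a in inventory if not a.sanctioned]

def version_drift(inventory: list[Asset], approved: dict[str, str]) -> list[Asset]:
    """Flag assets running something other than the approved version."""
    return [
        a for a in inventory
        if approved.get(a.software) not in (None, a.version)
    ]

inventory = [
    Asset("web-01", "nginx", "1.24.0", sanctioned=True),
    Asset("dev-laptop-7", "ngrok", "3.6.0", sanctioned=False),
]
print(unsanctioned(inventory))                        # the shadow-IT tunnel tool
print(version_drift(inventory, {"nginx": "1.25.4"}))  # the outdated web server
```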

Conclusion

As generative AI continues to evolve at a rapid pace, CISOs must leverage their understanding of how humans and AI can collaborate to enhance AI security, and use it to garner support from their board and leadership team. With that support, organizations can secure the staffing and resources needed to tackle these challenges effectively. Striking the right balance between swift AI implementation and comprehensive security through collaboration with ethical hackers strengthens the argument for investing in appropriate AI-powered solutions.
