AI and security: It is complicated but doesn’t need to be

AI is growing in popularity, and the trend is only set to continue: Gartner predicts that approximately 80% of enterprises will have used generative artificial intelligence (GenAI) application programming interfaces (APIs) or models by 2026. AI is a broad term, however, covering a range of technologies. What they share is a breakthrough ability to process information differently, and that is attracting businesses and consumers alike, who are already experimenting with various forms of AI. The same technology is attracting threat actors, who recognise that it can expose weaknesses in a company’s security, even as it gives companies a tool to identify and address those weaknesses.

Security challenges of AI

One way companies are using AI is to review large data sets, identify patterns and sequence the data accordingly, typically by assembling it into tabular datasets containing row upon row of records. While this brings significant benefits, from improved efficiency to new insights, it also increases security risk: should a breach occur, the data is already organised in a way that is easy for threat actors to exploit.
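
To make the idea concrete, here is a minimal sketch of the kind of tabular pattern analysis described above, using pandas on an invented transaction log; the column names and values are illustrative assumptions, not anything drawn from the article.

```python
import pandas as pd

# Hypothetical tabular dataset: each row is one customer transaction.
df = pd.DataFrame({
    "customer_id": [101, 102, 101, 103, 102, 101],
    "region":      ["EU", "US", "EU", "EU", "US", "EU"],
    "amount":      [25.0, 310.5, 42.0, 18.9, 99.0, 57.3],
})

# Pattern discovery in its simplest form: aggregate and compare segments.
by_region = df.groupby("region")["amount"].agg(["count", "mean", "sum"])
print(by_region)

# The same neatly structured rows that make this analysis easy would,
# after a breach, be just as easy for a threat actor to query.
```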

The threat grows further with large language model (LLM) technologies, which remove a security barrier: data submitted to a public service effectively enters the public domain, where anyone using the tool could stumble upon it. An LLM does not understand the detail it is given; it simply produces the most probable response from the information at hand. For this reason, many companies are preventing employees from putting any company data into tools like ChatGPT, keeping that data within the confines of the company.
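
One way to operationalise such a policy is to redact obvious identifiers before a prompt ever leaves the network. The sketch below is a minimal, regex-based filter of our own devising; the patterns and the example prompt are illustrative assumptions, nothing like a complete data loss prevention product.

```python
import re

# Illustrative patterns only; real DLP tooling uses far broader rule sets.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Summarise this complaint from jane.doe@example.com about card 4111 1111 1111 1111."
print(redact(prompt))  # identifiers stripped before the prompt leaves the network
```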

Security benefits of AI

While AI may present a risk, it could also be part of the solution. Because AI processes information differently from humans, it can look at problems from new angles and arrive at breakthrough solutions, for example by producing better algorithms or cracking mathematical problems that have resisted human effort for years. In information security, algorithms are king, and AI, machine learning (ML) or a similar cognitive computing technology could well devise new ways to secure data.

This is a real benefit: AI can not only identify and sort massive amounts of information, it can surface patterns that organisations never noticed before, bringing a whole new element to information security. Threat actors will use AI to hack into systems more effectively, but ethical hackers will use the same capability to probe defences and find ways to improve them, to the considerable benefit of businesses.
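
Pattern-spotting of this kind is already routine on the defensive side in the form of unsupervised anomaly detection. As an illustration, the sketch below applies scikit-learn’s IsolationForest to invented login telemetry; the features, values and contamination rate are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical login telemetry: [session length in minutes, failed attempts].
normal = rng.normal(loc=[30.0, 0.5], scale=[8.0, 0.7], size=(200, 2))
suspicious = np.array([[2.0, 12.0], [240.0, 9.0]])  # short bursts, many failures
events = np.vstack([normal, suspicious])

# Isolation forests flag points that are easy to separate from the bulk.
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)  # -1 marks likely anomalies

print(events[flags == -1])  # the injected outliers should stand out
```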

The challenge of employees and security

Employees, having seen the benefits of AI in their personal lives, are using tools like ChatGPT to perform their jobs more effectively. In doing so, they add to the complexity of data security. Companies need to know what information employees are putting into these platforms and understand the threats that come with it.

Because these tools bring real workplace benefits, companies may choose to allow non-sensitive data into them, limiting the exposure of internal data sets while still driving efficiency across the organisation. Organisations must realise, however, that they cannot have it both ways: data placed in such systems will not remain private. Companies will therefore need to review their information security policies and work out how to safeguard sensitive data while still giving employees access to the data they need.

Not sensitive but useful data

Companies are aware of the value AI can bring, and of the security risk it adds to the mix. To gain value from the technology while keeping data private, they are exploring ways to anonymise data, for example through pseudonymisation, which replaces identifiable information with a pseudonym (a substitute value) so the individual cannot be directly identified.
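
A minimal sketch of pseudonymisation follows, assuming a keyed-hash (HMAC) tokenisation scheme, which is one common approach though not one the article specifically prescribes. Each identifier maps to a stable pseudonym, so records remain linkable for analysis, but reversing the mapping requires the secret key.

```python
import hmac
import hashlib

# Assumed key for illustration; in practice it would live in a key
# management system, never in source code.
SECRET_KEY = b"example-key-for-illustration-only"

def pseudonymise(identifier: str) -> str:
    """Replace an identifier with a stable, non-reversible pseudonym."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

# The same input always maps to the same pseudonym, preserving joins
# across tables without exposing who the individual actually is.
print(pseudonymise("jane.doe@example.com"))
print(pseudonymise("jane.doe@example.com"))  # identical output
```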

Another way companies can protect data is to use generative AI to create synthetic data. If a company needs to share a customer data set with a third party for analysis and insights, it can point a synthetic data generation model at the dataset. The model learns the dataset’s structure, identifies its patterns and then produces a new dataset of fictional individuals who represent no one in the real data, yet allow the recipient to analyse the whole set and return accurate findings. In this way companies can share statistically faithful but fake information without exposing anything sensitive or private. The approach lets massive amounts of information feed machine learning models for analytics and, in some cases, serve as test data for development.
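
Production synthetic data generators learn the joint distribution of a dataset, including correlations between columns. The toy sketch below only resamples each column independently, which is enough to show the core idea of fictional rows that mirror the statistics of the original; the data is invented and the method is an illustration, not a privacy guarantee.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)

# Hypothetical real customer data that cannot be shared directly.
real = pd.DataFrame({
    "age":    rng.integers(18, 80, size=500),
    "region": rng.choice(["EU", "US", "APAC"], size=500, p=[0.5, 0.3, 0.2]),
    "spend":  rng.gamma(shape=2.0, scale=150.0, size=500).round(2),
})

# Toy generator: resample each column's empirical distribution on its own.
# Real synthetic-data models also learn cross-column correlations.
synthetic = pd.DataFrame({
    col: rng.choice(real[col].to_numpy(), size=len(real))
    for col in real.columns
})

# Aggregate statistics match closely, but no row describes a real person.
print(real["spend"].mean().round(1), synthetic["spend"].mean().round(1))
```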

With several data protection methods available today, companies can leverage the value of AI technologies with confidence that personal data remains safe and secure. That matters, because it lets businesses realise the true benefits data brings to efficiency, decision-making and the overall customer experience.

Article by Clyde Williamson, chief security architect, and Nathan Vega, vice president of product marketing and strategy, at Protegrity.

