Transforming the Landscape: The AI Revolution in Cybersecurity

The pace of technological progress in recent years has been breathtaking, and cybersecurity in particular has made groundbreaking strides in outsmarting digital threats. The industry has been on a transformative journey, from bidding farewell to traditional anti-malware signatures to harnessing the power of cutting-edge threat intelligence.

But amid these impressive feats, one phenomenon stands tall: Artificial Intelligence (AI). Long by our side in various forms, AI now emerges as a game-changer, pushing boundaries beyond imagination. Once limited to weather forecasts and shuffled music playlists, AI has evolved into a dynamic force shaping education, business, and even governmental policy.

We sat down to chat with curious students, seasoned CISOs, and professionals in roles in between to find out what they think of AI in cybersecurity and to consider what may lie ahead.

Insights from Cybersecurity Pros

Ravit Jain

Cybersecurity Podcast Host, Gartner Ambassador, LinkedIn Instructor, Author

LinkedIn | Twitter

Ravit Jain has seen the development and application of AI firsthand in his varied tech-focused roles, and he finds it all very exciting. He spoke with us about the potential for AI to revolutionize work and communication and to help solve complex problems. Of course, that excitement doesn’t mean we should abandon caution or ignore ethics as these tools become more widespread.

Ravit uses AI himself and finds it particularly powerful for marketing, data analysis, and predictive modeling. AI helps Ravit and his team gain insight into consumer behavior, personalize and target his messaging, automate previously time-consuming tasks, and increase efficiency in reaching new people.

There are a few key challenges on Ravit’s mind, including data privacy and security, potential bias in AI algorithms, and losing the human touch in marketing (and, as a result, the connection with customers).

Anastasios Arampatzis

Cybersecurity Copywriter

LinkedIn | Twitter

As a veteran content writer, Anastasios Arampatzis covers cybersecurity from all angles. He also reads – a lot – and has heard many viewpoints on the rise of AI. Whether this new technology is our best ally or our most cunning adversary depends on who you ask. If you ask him, the answer is somewhere in between.

Anastasios is concerned that, if we’re not mindful, global populations will not be ready for AI’s impact, particularly marginalized and poor communities, which tend to have lower digital literacy. Asked how he, as a writer, approaches AI tools for content creation, Anastasios sees a big opportunity: AI tools are a wonderful assistant for generating content ideas and improving written content, particularly for non-native speakers crafting documents in English.

What are the challenges? Anastasios worries about the normalization of disinformation and reminds us that organizations should always include fact-checking and plagiarism checks in their workflows, no matter how their teams use AI for marketing and content.

Konstantinos Kakavoulis

Founding Partner at Digital Law Experts (DLE), Co-Founder at Homo Digitalis

LinkedIn | Twitter

Konstantinos Kakavoulis believes we shouldn’t be too quick to pass judgment on the impact of tools like ChatGPT on life and society. He acknowledges, though, that we’ve already seen some tremendous output from these tools in the arts, education, and software code.

As a partner at a digital law firm, Konstantinos pointed out that there are countries and legal entities that have chosen to ban or restrict the usage of AI tools, meaning acceptance is not universal. He supports this approach: after all, no one knows what will come, and we must be cautious. Konstantinos supports AI regulation in ways that address these tools’ structural, societal, political, and economic impacts while protecting fundamental rights and democratic values.

Konstantinos uses AI in both his personal and professional life. AI tech is everywhere, even when we haven’t been told that the applications we use are enabled by artificial intelligence. It’s up to users and legislators alike to regulate the use of AI wisely.

Alison Cameron

Cybersecurity Copywriter

LinkedIn

As a seasoned cybersecurity copywriter, Alison Cameron has seen some remarkable ways AI has impacted her clients for the better. AI tools can support content creation, including attention to SEO directives, and she adds a human touch by fact-checking, copyediting, and introducing her voice into the piece.

Alison is concerned, however, about companies taking things a step further and firing their writers altogether. These organizations take a shortsighted approach, assuming that AI tools can do the work at a much lower cost. The teams that will thrive, she says, are those who use AI tools to enable human writers to be faster and more efficient.

When we asked if Alison uses AI tools herself, she spoke highly of using platforms like ChatGPT to overcome creative blocks, sense-check her articles, and ensure she’s effectively covering her topic. For her, the coming year will be about finding alignment in how to use AI tools as allies in the content space.

Stuart Coulson

InfoSec Consultant

LinkedIn | Twitter

We asked Stuart Coulson about his general thoughts on AI based on his experience as a consultant. Stuart reminds us that it’s far too early to make assessments despite the astronomical hype. Instead, he recommends paying attention, considering your business needs, and taking the buzz lightly. As with any new technology, it will take time for these tools to find their true place within the business landscape.

Stuart advises businesses to consider their use cases and only use these tools where they truly add value. Don’t let the siren song of big promises lure you into investing in a tool that is a distraction or a poor fit for your specific needs. Adding to that, he says: remember that human creativity will outstrip AI.

Stuart has tried AI tools for content creation but has had challenges with accuracy, so he uses platforms like ChatGPT for ideas rather than finished content. He also applies it to human-generated writing, checking it for speaker bias or reframing passages to emphasize a point. Regarding cybersecurity, Stuart wants businesses to remember to be wary of accepting material provided by AI tools without critical thinking.

Ross Moore

InfoSec Analyst

LinkedIn | Twitter

Ross Moore acknowledges that ChatGPT and similar platforms have caused trepidation across industries and roles, primarily because communication is a thread that weaves through all aspects of a business. He also points out that discussions about AI are often actually about Machine Learning (ML) or Generative AI (GenAI), leading to confusion, misinformation, and concern in the conversation.

Ross sees potential for AI in cybersecurity roles, as the available tools can help employees do their jobs more effectively and in less time. AI can be used in endpoint protection to identify attack patterns and improve prevention, and GenAI can help security professionals communicate clearly with the people they need to inform, a use case Ross himself leverages.
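
As a concrete illustration of the pattern-identification idea, here is a minimal sketch of anomaly-based endpoint detection, assuming Python with scikit-learn and an entirely hypothetical telemetry feature set; it illustrates the general technique, not any vendor’s product:

```python
# Minimal sketch: flag endpoint processes whose behavior deviates from a
# learned baseline. Feature set and numbers are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-process telemetry over a time window:
# [child processes spawned, files written, outbound connections, registry writes]
baseline = np.array([
    [2, 10, 1, 0],
    [1, 8, 2, 1],
    [3, 12, 1, 0],
    [2, 9, 2, 1],
])

# Train on known-benign behavior so statistical outliers can be flagged.
model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# A ransomware-like burst of file writes and connections stands out.
observed = np.array([[40, 300, 25, 60]])
print(model.predict(observed))  # [-1] => anomalous; route to an analyst
```

The point of the sketch is prevention by pattern rather than by known signature: the model flags behavior it has never seen as normal, which is where Ross sees AI saving defenders time.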

He does see some challenges, however, with AI/ML tools. To start, Ross advises us to pay attention to ethical use and be clear about our personal and professional ethos when using these technologies. It’s also important to understand the regulatory and compliance impacts on individual organizations. He also highlights “AI washing,” in which organizations misrepresent the extent of AI usage in their products; misleading customers and partners can have serious consequences. Ultimately, Ross believes AI should be used to supplement, not replace, current workflows as these technologies evolve.

Anthony M. Freed

Strategic Communications Leader

LinkedIn

To start, Anthony M. Freed tells us about one of his pain points: AI, he says, is not intelligent. Instead, it’s efficient machine learning. Until these tools can reason over their existing knowledge and take steps to synthesize new information, these models will remain focused on organizing what they know and returning results based on that.

Anthony sees tools like ChatGPT as an opportunity to save time creating outlines and structure, but their capacity stops short of generating quality content. He says writers shouldn’t feel threatened at this stage, as AI tools are not capable of deeper insights or thought leadership pieces.

Given that current AI models take information and repurpose it, Anthony says, there is a risk for the originators of the content being fed into them. These creators are not given credit for the work they put in to synthesize new ideas, and AI tools run the risk of plagiarism. Those using AI for assistance need to be aware of how these tools work and where they get their information, and should use them as helpers rather than an outsourcing opportunity.

Ian Thornton-Trump CD

CISO Cyjax

LinkedIn | Twitter

As a CISO, Ian Thornton-Trump CD has the security implications of AI top of mind. He sees a great opportunity to improve the speed and efficiency of business processes, as long as we strike a balance between AI creation and human checks and balances. The inaccuracies in large language models (LLMs) make decision-making based on the information they provide a risky choice.

Ian points to incidents at organizations (such as Samsung) that have had classified or private information leaked through AI models, highlighting the challenge in the training and usage of AI tools. Security vendors must ensure they vet AI tools and understand how they work before allowing them within their environments.

At Cyjax, Ian uses AI algorithms to assess correlation and corroboration across datasets before surfacing results to analysts for further value-adds. This allows his organization to implement checks and balances in risk management workflows while increasing efficiency. The challenge, he says, is understanding what these tools can and cannot do, and taking adequate time to implement AI in workflows. Ian is taking a “wait and see” approach, though he’s excited about the potential.
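
In its simplest form, corroboration across datasets can be thought of as counting independent sources for each indicator before it reaches an analyst. The Python sketch below uses hypothetical feeds and an arbitrary two-source threshold; it is an illustration of the concept, not a description of Cyjax’s actual pipeline:

```python
# Toy corroboration check: escalate only indicators reported by multiple,
# independent sources. Feeds and threshold are hypothetical.
from collections import Counter

feed_a = {"203.0.113.7", "198.51.100.23", "192.0.2.55"}
feed_b = {"203.0.113.7", "192.0.2.55"}
internal_logs = {"203.0.113.7"}

# Count how many sources independently report each indicator.
corroboration = Counter()
for source in (feed_a, feed_b, internal_logs):
    corroboration.update(source)

# Indicators corroborated by two or more sources go to analysts first.
for indicator, hits in corroboration.most_common():
    if hits >= 2:
        print(f"escalate {indicator}: seen in {hits} sources")
```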

Steven Prentice

Cybersecurity Podcast Host

LinkedIn | Twitter

“A robot is only a robot until it becomes an appliance,” says Steven Prentice, discussing the ways that AI can give new life to day-to-day activities, including smart parking solutions and intelligent vacuum cleaners. Fear and hesitation are natural reactions we’ve seen with past technological advancements, and it will take time for people to become more comfortable with these tools.

Steven uses AI to break through writer’s block, and he finds it tremendously helpful. He stops short of using GPT tools to generate copy because of accuracy issues and their limitations in matching his style and addressing his clients’ needs.

The challenges Steven identified are risks to stage, film, and voice actors whose likenesses could be used in synthetic content or deepfakes. This isn’t a new issue, he reminds us, but it will require vigilance and adjustment within the industry to navigate a new approach to an old problem, particularly given the speed at which content can be created and disseminated.

Mosopefoluwa Amao

Cybersecurity Student

LinkedIn

GPT is here to stay, student Mosopefoluwa Amao says, and she’s excited to see where these tools develop. The inaccuracy of tools like ChatGPT is cause for concern, and cybersecurity professionals need to be sure they use these tools as a support system and do not rely too heavily on them without proper checks and balances.

Mosopefoluwa uses AI to schedule tasks, plan her days, paraphrase content, perform initial research, and handle personal tasks like meal planning. AI has made her more efficient and helps ensure she covers all the details by alerting her if anything is out of place, particularly in content creation.

Martina Dove

Senior User Experience Researcher

LinkedIn | Twitter

There’s so much that AI can do for us, and it’s only getting better, Martina Dove tells us. As a user experience researcher, Martina is concerned about organizations jumping on the AI bandwagon and potentially creating products or services that disadvantage or harm groups of people. Of paramount concern for her is ensuring that we don’t implicitly trust these tools to be impartial in their assessments.

Martina is sensitive to the nuance and complexity of being human and wonders where AI will meet this facet of creation and communication. Analyzing sentiment in people’s statements using AI has been hit or miss because of the layers of nuance, particularly in conversations where statements may seem contradictory on the surface.

Martina worries about the impact of AI on cybersecurity, particularly related to social engineering attacks. Generative AI will grow more convincing, producing sophisticated, high-quality phishing emails that can deceive end users. Beyond written content, these deceptive AI tools can also perform voice cloning, which is particularly worrisome.

Kaitlin Harvey

Digital Content Manager

LinkedIn

Kaitlin Harvey is cautiously excited about the rise of AI tools, particularly as we’re only at the beginning of seeing the potential of AI coupled with human creativity. Kaitlin is curious what the AI landscape will look like a year from now, as there’s no true replacement for off-the-cuff human creativity. For now, she’s found AI to be a helpful tool for organizing thoughts, providing topics and starter content, and helping to get something on the page.

AI saves Kaitlin time in her work as a cybersecurity content writer; she uses AI tools for brainstorming headline ideas, simplifying and condensing complex passages, summarizing articles, creating outlines, and drafting starter content. She continues to explore and experiment with AI tools to see how they fit into her life, and acknowledges that AI-generated content is never ready for delivery without editing, but it’s a great start. AI helps her save time and reserve her energy for the aspects of life she enjoys most.

The content marketing realm must make a choice with AI: upskill and learn to tap into these tools, or risk falling behind.

David Corlette

Vice President Of Product Management

LinkedIn | Twitter

David Corlette looks at AI in a practical way and wants us to keep in mind that these tools don’t create; they regurgitate what has been fed into the system in the past. That, he notes, is why AI tools provide incorrect answers: they’ve been fed incorrect information. In his line of work, David has seen promise in using AI for probabilistic matching to detect malware, but less success with binary, yes/no signature-based matching. In his experience, that makes AI most useful for detecting previously unknown malware, which has no signature to match.
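
A toy sketch can make David’s distinction concrete. The hashes, features, and weights below are hypothetical, and this is not any real detection engine, but it shows why a yes/no lookup misses novel samples while a probabilistic score can still flag them:

```python
# Toy contrast: signature lookup vs. probabilistic feature scoring.
# All hashes, features, and weights are hypothetical.
import hashlib

KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # stand-in signature DB

def signature_match(payload: bytes) -> bool:
    """Yes/no lookup: only catches malware that has been seen and hashed before."""
    return hashlib.md5(payload).hexdigest() in KNOWN_BAD_HASHES

def probabilistic_score(entropy: float, suspicious_imports: int, packed: bool) -> float:
    """Weighted score over structural features: can flag samples never seen before."""
    score = 0.4 * min(entropy / 8.0, 1.0)             # high entropy suggests packing/encryption
    score += 0.4 * min(suspicious_imports / 10, 1.0)  # e.g., process-injection APIs
    if packed:
        score += 0.2
    return min(score, 1.0)

novel_sample = b"never-seen-before malware variant"
print(signature_match(novel_sample))                   # False: no signature exists yet
print(probabilistic_score(7.8, 6, packed=True) > 0.7)  # True: flagged on features alone
```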

AI is just a method, he reminds us, and it’s up to individuals and the collective to define how it is implemented. “AI isn’t useful unless it’s well-designed to incorporate data on the right signals, and unless it’s well-trained,” David says. For now, unfortunately, consumers aren’t sure how to evaluate a vendor’s claims or rate the accuracy of a given AI engine. Hopefully, time will improve not only the tools but also our understanding of their opportunities and pitfalls.

If you enjoyed this blog summarizing our eBook, click here to download the full eBook.
