By: Travis Hutton
Kunal Dilip Dhanak is a leading cybersecurity strategist and AI ethics specialist based in Toronto, Ontario. Drawing on a deep background in computer science, he has built a career around creating robust cybersecurity frameworks that safeguard sensitive data while emphasizing the ethical use of AI technologies. His expertise lies at the intersection of cybersecurity and AI ethics, where he advocates for transparency, accountability, and fairness in emerging technologies. Kunal's work is rooted in his belief that technology should always serve the greater good, and he is passionate about educating the next generation in cybersecurity practices, particularly in underserved communities.
What sparked your interest in cybersecurity and AI ethics, and how have you seen the field evolve?
My interest in cybersecurity began with a fascination with how systems work, but it deepened when I realized the societal impact technology can have. Early in my career, I saw how breaches or unethical uses of technology could disrupt lives on both a personal and a global scale. This shifted my focus from merely protecting systems to considering the ethical implications of the tools we create.
As for AI ethics, it became clear to me that the more sophisticated our technologies become, the more nuanced the challenges are. AI can be incredibly powerful, but it can also perpetuate biases, invade privacy, and cause unintended harm if left unchecked. Over the years, I've seen AI go from a buzzword to a fundamental part of how businesses and governments operate. However, I've also seen the risks increase, especially as regulation struggles to keep pace with innovation. Today, the field is moving toward greater awareness, but there's still a long way to go in ensuring that AI serves everyone equally and ethically.
You talk a lot about the intersection of AI and ethics. How do you ensure that AI is used responsibly in cybersecurity?
In cybersecurity, we’re always dealing with high stakes—personal data, national security, and even human rights are on the line. When integrating AI into cybersecurity solutions, the priority is to ensure that the AI system is transparent and accountable. The goal is to create systems that defend without overstepping into privacy violations or bias.
To ensure responsible use, I advocate for a few things. First, we need to build AI systems that are explainable: if an AI model is identifying threats, we should be able to trace how it makes its decisions. Second, collaboration is key; diverse teams lead to better AI outcomes because they consider a range of perspectives that prevent one-sided solutions. I also stress the importance of continuous oversight. Just because an AI system works today doesn't mean it will tomorrow, especially as new vulnerabilities arise. This is why I push for regular audits and ethical reviews of all AI-driven cybersecurity solutions.
What’s the biggest misconception people have about AI and cybersecurity?
One of the biggest misconceptions is that AI will replace humans in cybersecurity. While AI is an incredibly powerful tool, it’s not a magic bullet. AI can automate processes, analyze data at speeds far beyond human capacity, and predict certain threats, but it still lacks the nuance and context that human judgment provides. AI can detect anomalies, but it’s humans who interpret those anomalies and decide on the right course of action.
Another misconception is that AI is inherently unbiased. It isn't: AI systems are only as unbiased as the data they're trained on, and if that data contains biases, the AI will replicate them. This is particularly dangerous in cybersecurity, where an AI system might unfairly flag certain groups or behaviors based on flawed data. That's why I emphasize the need for diversity in AI development and strict ethical oversight.
What does the future of cybersecurity look like with AI at the forefront?
The future of cybersecurity with AI is incredibly promising but also complex. AI will undoubtedly play a crucial role in automating threat detection and response, which is essential given the increasing scale and sophistication of cyberattacks. AI will help us predict potential vulnerabilities, allowing organizations to be more proactive rather than reactive.
However, with AI comes a whole new wave of challenges. Cybercriminals will also use AI to develop more advanced, adaptive attacks. This means we’re not just building better defenses—we’re also fighting smarter adversaries. The key to the future will be collaboration between AI and human intelligence. AI can handle the bulk of data processing, but humans will still be critical in strategic thinking, ethical decision-making, and adjusting AI models to ensure they stay fair and effective.
What personal philosophy guides your work in AI and cybersecurity?
My personal philosophy is rooted in the belief that technology should be a force for good. We often get caught up in the race to innovate, but I always ask: “At what cost?” For me, it’s essential to ensure that the technology we develop, particularly in fields like AI and cybersecurity, aligns with human values. That means prioritizing privacy, fairness, and the greater societal impact over simply building faster, more powerful systems.
I also believe in continuous learning and adaptation. The landscape of cybersecurity and AI is constantly shifting, and if you’re not learning, you’re falling behind. I encourage my teams to stay curious, stay critical, and always think about how what we’re building today will shape the world tomorrow.
How do you unwind after dealing with the complexities of cybersecurity and AI ethics?
Honestly, after a day spent navigating some of the toughest challenges in cybersecurity, I need to step away from screens and technology. I enjoy simple things like yoga or a run through Toronto's parks; they help me reset and clear my mind. It might sound cliché, but being in nature helps me regain perspective. It reminds me that while technology is a big part of our world, it's not the only part.
I also enjoy spending time reading—often about topics completely unrelated to tech. Philosophy, for example, offers an entirely different way of thinking, and I’ve found that studying human nature and ethics deeply influences how I approach the challenges in AI. It’s about keeping a balanced perspective and not getting too caught up in the whirlwind of innovation.
What advice would you give to the next generation of cybersecurity professionals?
The best advice I can give is to stay curious and don’t be afraid to ask difficult questions. Cybersecurity is a constantly evolving field, and the threats we face today may not be the ones we face tomorrow. That means you have to be adaptable, and the only way to do that is to continuously learn and challenge the status quo.
I’d also advise them to think about the ethical implications of their work. Cybersecurity isn’t just about stopping attacks; it’s about protecting people—real people—with lives that can be dramatically affected by data breaches, identity theft, or surveillance. Keeping that human aspect at the forefront of your work will make you a better cybersecurity professional and a more responsible innovator.
Published by: Martin De Juan