Cybersecurity

Rethinking threat detection with AI: A conversation with Microsoft India’s cybersecurity head Anand Jethalia on safeguarding India’s digital landscape


With AI the flavour of the season, staying ahead of threats is paramount. As cybercriminals continuously evolve their tactics, traditional methods of defence often fall short. Enter Anand Jethalia, Country Head of Cybersecurity at Microsoft India & South Asia, and his team’s innovation: Microsoft Security Copilot.

Recently announced for global availability, Microsoft Security Copilot represents a quantum leap in AI-driven security solutions. Harnessing the power of generative AI, Copilot helps security professionals catch what others miss, enabling faster responses and bolstering team expertise. Drawing from a vast pool of data and threat intelligence, including a staggering 78 trillion security signals processed daily by Microsoft, Copilot delivers tailored insights to guide the next steps in fortifying digital defences.

Early adopters of Microsoft Security Copilot have reported significant gains in efficiency, with up to 40% time saved on foundational tasks and over 60% on routine operations. Moreover, accuracy has seen a remarkable 44% improvement, while response times have been slashed by 26%, showcasing the tangible impact of AI augmentation in cybersecurity.

Collaborating closely with Indian government bodies, Microsoft is actively enhancing the nation’s cyber resilience. From partnering with the Directorate General of Training to educate thousands in digital and cybersecurity skills to democratising AI through nationwide training initiatives, Microsoft is empowering the next generation of cybersecurity professionals. Here’s an interaction with Anand Jethalia, Country Head – Cybersecurity, Microsoft India & South Asia.

PD: Can you outline specific instances where Microsoft Security Copilot has detected and addressed cybersecurity threats in India, particularly those that traditional methods may have overlooked, amidst the evolving strategies of cybercriminals?

Anand Jethalia: We recently announced that Microsoft Copilot for Security will be generally available worldwide on April 1, 2024. The industry’s first generative AI solution will help security and IT professionals catch what others miss, move faster, and strengthen team expertise. Copilot is informed by large-scale data and threat intelligence, including more than 78 trillion security signals processed by Microsoft each day, and coupled with large language models to deliver tailored insights and guide the next steps.

In our preview of Microsoft Security Copilot, customers reported saving up to 40 percent of their security analysts’ time on foundational tasks like investigation and response, threat hunting, and threat intelligence assessments. On more mundane tasks like preparing reports or troubleshooting minor issues, Security Copilot delivered efficiency gains of 60 percent or more. Early users of Copilot for Security also showed a 44 percent increase in accuracy and responded 26 percent faster across all tasks, emphasising its effectiveness in assisting security analysts. The most promising data coming out of our early research, however, is not the numbers, but what customers can do with these gains in efficiency and time saved.

PD: How does Microsoft collaborate with Indian government bodies to enhance the security of critical infrastructure, and to what extent does AI-driven cybersecurity play a role in this collaboration?

Anand Jethalia: Today there’s a strong need for an end-to-end cybersecurity approach to protect governments, businesses, and individuals. Recognising security as a collaborative effort, Microsoft actively partners with Indian governments and organisations to enhance the nation’s cyber resilience. A notable collaboration involved a Memorandum of Understanding (MoU) with the Directorate General of Training (DGT) to educate 6,000 students and 200 educators in digital and cybersecurity skills, covering AI, cloud computing, and web development.

Through the ADVANTA(I)GE INDIA program, Microsoft is democratising AI skills across the country, aiming to train 500,000 individuals in AI in collaboration with the Ministry of Skill Development and Entrepreneurship and ten state governments. This initiative builds on efforts to empower the youth with essential digital competencies.

Microsoft’s establishment of eight Cybersecurity Engagement Centers and its founding partnership in the Cyber Surakshit Bharat initiative underscores its commitment to enhancing cybersecurity skills among government security leaders. The launch of a comprehensive cybersecurity skilling program highlights Microsoft’s dedication to preparing India’s workforce for future challenges in cybersecurity.

By forging strategic partnerships and launching targeted training programs, Microsoft is playing a pivotal role in shaping a more secure digital future for India.

PD: How does Microsoft Security Copilot distinguish between emerging, complex threats and established attack patterns, and what measures are in place to ensure the continuous updating of threat intelligence?

Anand Jethalia: Microsoft Security Copilot revolutionises threat detection by distinguishing emerging threats from known patterns, addressing the limitations of traditional security tools in the face of rapidly evolving cyberattacks. It leverages an extensive data network, analysing 78 trillion signals daily and monitoring over 300 cyberthreat groups, offering security teams an unparalleled understanding of potential cyberattacks.

Enhanced with generative AI, Security Copilot has shown to improve response accuracy by 44% and speed task completion by 26%, demonstrating significant productivity gains. It integrates into a unified security operations platform alongside Microsoft Sentinel and Defender XDR, streamlining incident management and providing comprehensive threat visibility.

Security Copilot acts as a force multiplier amid the global shortage of cybersecurity talent, offering step-by-step guidance and automating incident resolution across Microsoft’s security ecosystem. It also ensures responsible generative AI use by integrating with Microsoft Defender for Cloud Apps to monitor and control generative AI applications, highlighting Microsoft’s dedication to advancing security measures in response to the ever-changing threat landscape.

PD: Could you elaborate on how Microsoft ensures ongoing validation of device trustworthiness in a dynamic environment, and how AI technologies contribute to real-time trust evaluations?

Anand Jethalia: At Microsoft, we employ our Zero Trust security model to continuously validate device trustworthiness, crucial in safeguarding against unauthorised access in a dynamic digital environment. This model underpins our approach to ensuring that all devices accessing company resources are verified for security, emphasising the importance of managing device health to prevent breaches.

Central to our strategy is the requirement for devices to be registered with management systems before accessing corporate resources, reflecting our commitment to a security-centric, human-focused approach. Our Zero Trust framework incorporates advanced authentication methods, including passwordless access and Temporary Access Pass via Azure Active Directory, enhancing user experience while maintaining rigorous access control.
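The continuous-validation idea behind Zero Trust, where a device must be registered, compliant, and recently re-evaluated before each access, can be illustrated with a toy check. The `Device` type, its fields, and the staleness threshold below are invented for illustration and do not reflect Azure Active Directory's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    registered: bool      # enrolled with the management system (assumed flag)
    compliant: bool       # passes the current health/compliance policy (assumed flag)
    last_check_min: int   # minutes since trust was last re-evaluated

def grant_access(device: Device, max_age_min: int = 60) -> bool:
    """Zero Trust style check: never trust by default, re-validate continuously."""
    if not device.registered:
        return False
    if not device.compliant:
        return False
    # Trust decays: a stale evaluation forces re-verification before access.
    return device.last_check_min <= max_age_min

print(grant_access(Device("d1", True, True, 10)))    # True
print(grant_access(Device("d2", True, True, 180)))   # False: evaluation too stale
print(grant_access(Device("d3", False, True, 5)))    # False: not registered
```

The key design point is that access is decided per request, not granted once at login; even a previously trusted device is denied if its health evaluation has gone stale.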

Our integrated security solutions, such as Microsoft Sentinel, Microsoft 365 Defender, and Defender for Cloud, offer proactive protection against threats like ransomware, supported by Microsoft Purview for comprehensive data governance and insider risk mitigation. We prioritise security from the onset in our application and service development, ensuring a high-security baseline across our offerings. This commitment extends to the integration of security and privacy in product design, providing transparency and control over AI interactions and personal data management.

Our approach to AI is grounded in trustworthiness and responsibility, focusing on data privacy, reducing algorithmic bias, and ensuring transparency. We strive for cyber resilience by combining the right mix of technology, processes, and people, encouraging organisations to adopt a zero-trust philosophy for enhanced security.

PD: What are the strategies employed by Microsoft to collaborate with Indian government agencies in tailoring cybersecurity solutions to their specific needs, and how does AI aid in customising security measures for governmental entities?

Anand Jethalia: The increasing speed, scale, and sophistication of recent cyberattacks demand a new approach to security. In just two years, the number of password attacks detected by Microsoft has risen from 579 per second to more than 4,000 per second. Security teams face an asymmetric challenge: they must protect everything, while cyberattackers only need to find one weak point. Moreover, security teams must do this while facing regulatory complexity, a global talent shortage, and rampant fragmentation.

To address these challenges, Microsoft is leveraging AI innovations to enhance public sector security. Our commitment to responsible AI deployment emphasises fairness, privacy, security, reliability, inclusiveness, and accountability. As AI reshapes societal operations, it’s imperative that the public sector adopts these advancements to offer more efficient services and equip its workforce with the tools needed for its mission.

Our introduction of Microsoft Security Copilot, built on the Zero Trust security model and leveraging our extensive data resources, aims to rebalance the digital threat landscape in favour of security teams. By integrating Security Copilot across our security suite, including Defender, Sentinel, Intune, Entra, and Purview, we’re streamlining operations and fostering collaboration across security and IT roles. This unified approach enhances our ability to detect and mitigate cyberthreats swiftly, ensuring a more secure and efficient digital environment.

PD: How does Microsoft ensure the transparency and explainability of its AI-driven cybersecurity solutions, particularly in the context of a diverse threat landscape, enabling cybersecurity professionals to comprehend the decision-making processes behind threat detection and response?

Anand Jethalia: Microsoft outlines six key principles for responsible AI: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security. These principles are essential to creating responsible and trustworthy AI as it moves into mainstream products and services.

In August 2023, Microsoft released a whitepaper titled “Governing AI: A Blueprint for India.” The report details five ways India could consider policies, laws, and regulations around AI and highlights Microsoft’s internal commitment to ethical AI, showing how the company is both operationalising and building a culture of responsible AI. There is a rich and active global dialogue about how to create principled and actionable norms to ensure organisations develop and deploy AI responsibly. We have benefited from this discussion and will continue to contribute to it.

We are committed to building trust in technology and its use, spanning data privacy, cybersecurity, responsible AI, and digital safety.

PD: Can you provide examples illustrating how AI enhances operational efficiency in cybersecurity processes, such as incident response and threat analysis, aligning with Microsoft’s goal of empowering Indian organisations while ensuring data security?

Anand Jethalia: Organisations globally are leveraging AI to unlock significant business benefits, from extracting deep insights and enhancing human expertise to boosting operational efficiency and transforming customer service. In the realm of cybersecurity, AI analyses vast amounts of data from various sources to provide actionable intelligence for security professionals, facilitating investigation, response, and reporting. It can also automate responses to cyberattacks based on predefined criteria, isolating compromised assets. Generative AI extends these capabilities further, generating original text, images, and content from existing data patterns.
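Automating responses "based on predefined criteria" as described above can be sketched in a few lines. Everything here, the `Alert` fields, the severity and confidence thresholds, and the host names, is hypothetical and not part of any Microsoft API:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    severity: int        # 1 (low) .. 5 (critical) -- assumed scale
    confidence: float    # detection confidence, 0.0 .. 1.0

def should_isolate(alert: Alert, min_severity: int = 4, min_confidence: float = 0.9) -> bool:
    """Predefined criteria: auto-isolate only high-severity, high-confidence detections."""
    return alert.severity >= min_severity and alert.confidence >= min_confidence

def respond(alerts: list[Alert]) -> list[str]:
    """Return the hosts an automated playbook would isolate; the rest go to analysts."""
    return [a.host for a in alerts if should_isolate(a)]

alerts = [
    Alert("ws-042", severity=5, confidence=0.97),   # high severity, high confidence: isolate
    Alert("ws-108", severity=2, confidence=0.99),   # low severity: human review only
    Alert("srv-db1", severity=5, confidence=0.55),  # low confidence: human review only
]
print(respond(alerts))  # ['ws-042']
```

The thresholds encode the trade-off the paragraph implies: automation handles the unambiguous cases at machine speed, while lower-confidence or lower-severity alerts are left for human analysts.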

In India, Microsoft collaborated with the industry body NASSCOM to develop the Responsible AI toolkit, offering businesses sector-agnostic tools and guidance for confident AI deployment that prioritises user trust and safety. We also contributed to NASSCOM’s Responsible AI Guidelines for Generative AI, setting out principles for its ethical use and addressing potential harms to build trust in generative AI technologies across sectors. Microsoft remains committed to supporting such initiatives, fostering the development of frameworks and standards for the responsible use of AI.


