Cybersecurity

Securing The Future Today

By: Stephen Douglas


How to address three emerging cybersecurity threats poised to wreak network havoc, with a focus on AI, ZTNA, and PQC.

The urgent pace of technological change is sending tsunami-sized ripples through every aspect of how networks are built and managed.

This is especially true when it comes to threat landscapes.

Cyber adversaries are growing more sophisticated by the day, aided by emerging tools like artificial intelligence (AI), and soon, quantum computing.

Today, securing networks means not only addressing current risks but also preparing for those just around the corner. We have identified three transformative security trends to address in the near term as strategies evolve for building a proactive security posture: AI-driven security, Zero Trust Network Access (ZTNA), and post-quantum cryptography (PQC).


These technologies are powerful and hold great promise, but they also introduce challenges that must be considered for successful adoption.

This blog discusses each of these important developments, key considerations, and actionable recommendations ahead of rollouts.

AI’s dual role in cybersecurity’s next act

For all of the bright opportunities associated with AI, it is understood that a dark side also lurks. When it comes to this particular threat, we recognize an imperative to fight fire with fire. Already, stakeholders like firewall and gateway vendors are building AI into solutions to mitigate threats.

AI assistants and co-pilots are being integrated into security solutions like next-generation firewalls (NGFWs) and Security Service Edge (SSE) offerings. The goal is to simplify firewall policy and configuration management to reduce security management complexity. We are already seeing machine learning (ML) and AI successfully predict and block AI-based attacks before they escalate. Data collected from previous security events is fed back into AI systems to analyze threat behavior, identify its presence on networks, and recommend actions to prevent future attacks.
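To make that feedback loop concrete, here is a minimal sketch, assuming Python with scikit-learn and a simplified, hypothetical feature set drawn from firewall logs, of training an anomaly detector on historical events and flagging new outliers for review. Production NGFW and SSE engines use far richer telemetry and models.

```python
# Minimal sketch: learn "normal" behavior from past firewall events and
# flag anomalous new connections for review. Assumes scikit-learn is
# installed; the feature set and log format are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each historical event: [bytes_sent, bytes_received, duration_s, dest_port]
past_events = np.array([
    [1_200, 3_400, 0.8, 443],
    [900,   2_100, 0.5, 443],
    [1_500, 4_000, 1.1,  80],
    [1_100, 2_800, 0.7, 443],
    # ... thousands more records in practice
])

# Train an unsupervised anomaly detector on the historical baseline
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(past_events)

# Score new traffic; a label of -1 marks an outlier worth escalating
new_events = np.array([
    [1_000,      2_500, 0.6,    443],   # looks like normal web traffic
    [50_000_000,   900, 3600.0, 4444],  # large, long-lived flow to an odd port
])
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY - escalate" if label == -1 else "normal"
    print(event, status)
```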

Validating AI’s efficacy as a defense system and its impact on network performance introduces new challenges. Does the AI truly predict threats and respond correctly? Is it creating traffic congestion or service degradation?

New platforms must be put to the test to answer these crucial questions.

There is also a concern about whether AI can stay fit for purpose rather than growing lazy and prone to hallucinations over time. Because these solutions are only as powerful as the confidence stakeholders can place in them, users must be continuously reassured that the models are making the right decisions based on the right training data in a constantly changing environment.

Then there is the risk of human error.

Firewall misconfigurations are common due to overlapping rules and policies, with Cisco citing that 99% of breaches are attributable to this problem. AI can weed out issues like these and simplify policy and configuration management, but it can also introduce new misconfigurations if not properly validated.

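To illustrate the kind of check involved, whether performed by an AI assistant or by an automated validation pipeline, the sketch below uses a hypothetical rule format and plain Python to flag a rule that is shadowed by an earlier, broader rule and therefore can never take effect.

```python
# Minimal sketch: detect firewall rules that are "shadowed" by an earlier,
# broader rule with a different action. The rule format is hypothetical.
from ipaddress import ip_network

rules = [
    {"id": 1, "src": "10.0.0.0/8",     "dst_port": None, "action": "allow"},
    {"id": 2, "src": "10.1.2.0/24",    "dst_port": 22,   "action": "deny"},   # never matches
    {"id": 3, "src": "192.168.0.0/16", "dst_port": 443,  "action": "allow"},
]

def shadows(earlier, later):
    """True if 'earlier' matches every packet 'later' would match
    but applies a different action, making 'later' dead policy."""
    src_covers = ip_network(later["src"]).subnet_of(ip_network(earlier["src"]))
    port_covers = earlier["dst_port"] is None or earlier["dst_port"] == later["dst_port"]
    return src_covers and port_covers and earlier["action"] != later["action"]

for i, later in enumerate(rules):
    for earlier in rules[:i]:
        if shadows(earlier, later):
            print(f"Rule {later['id']} is shadowed by rule {earlier['id']}")
```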

Risks can be mitigated by:

  • Validating AI optimizations. Automated testing can validate that AI-driven optimizations and changes are correctly implemented to reduce the risk of introducing new vulnerabilities.

  • Testing for performance. Performance impacts should be tested on an ongoing basis to ensure AI does not degrade network traffic and other services.

  • Continually testing efficacy. Continuous testing is the only way to verify that AI predictions and blocking mechanisms are effective, that poor decisions are not being made, and that hallucinations and laziness do not creep in.

In the end, it is about balancing innovation with risk management via rigorous testing and validation.

Using Zero Trust principles to eliminate inherent trust within networks

Zero Trust Network Access (ZTNA), which is built on “never trust, always verify” principles, overcomes traditional perimeter-focused security approaches by requiring continuous authentication and authorization of a user before granting access to network resources. It encompasses least-privilege access, which restricts users to only what is necessary to perform a given task, and contextual decision making, which bases access decisions on factors like location, device, and user identity.
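A minimal, hypothetical sketch of a ZTNA policy decision point illustrates these principles; the field and function names are illustrative and do not reflect any particular vendor’s API.

```python
# Minimal, hypothetical sketch of a Zero Trust policy decision point:
# every request is re-evaluated; nothing is trusted by default.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str               # verified identity attribute (e.g., from the IdP)
    device_compliant: bool  # posture check: patched, encrypted, managed
    country: str
    resource: str

# Least privilege: each role maps to the minimum set of resources it needs
ROLE_RESOURCES = {
    "finance-analyst": {"erp-reports"},
    "support-engineer": {"ticketing", "kb"},
}
ALLOWED_COUNTRIES = {"US", "GB", "JP"}

def decide(req: AccessRequest) -> str:
    # Contextual checks run on every request, not once per session
    if not req.device_compliant:
        return "deny: device out of compliance"
    if req.country not in ALLOWED_COUNTRIES:
        return "deny: unexpected location, step-up authentication required"
    if req.resource not in ROLE_RESOURCES.get(req.role, set()):
        return "deny: resource not permitted for this role"
    return "allow: access granted to the single requested resource"

print(decide(AccessRequest("alice", "finance-analyst", True, "US", "erp-reports")))
print(decide(AccessRequest("alice", "finance-analyst", True, "US", "ticketing")))
```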

ZTNA is only effective with continuous authentication and authorization of users, rather than blanket access to an entire network or a wide array of resources.

These constant challenges to user access requests have the potential to strain network resources and impact performance and user quality of experience (QoE), especially if ZTNA architectures are not configured or scaled properly. Implementation policies, therefore, must be precise and well-defined with interoperability validated across multiple types of systems, including identity, policy, and security.

Effective ZTNA implementations require:

  • Realistic and repeatable testing. Emulating user authentication workflows and validating performance and scalability ensure networks can handle actual usage demands without degrading the user experience (see the sketch after this list).

  • Interoperability and policy testing. Seamless integration testing and analysis of end user QoE impacts ensures all vendor architecture components work together as intended and do not disrupt user workflows.
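The sketch below shows one way such testing can start, assuming Python with the requests library and a hypothetical broker endpoint: emulate many concurrent authentication requests and report latency percentiles. Purpose-built tools such as CyberFlood do this at far greater scale and with full protocol realism.

```python
# Minimal sketch: emulate many concurrent ZTNA authentication requests and
# report latency percentiles. The endpoint and payload are hypothetical.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

AUTH_URL = "https://ztna-broker.example.com/api/v1/authenticate"  # hypothetical

def authenticate(user_id: int) -> float:
    """Send one authentication request and return its latency in seconds."""
    start = time.perf_counter()
    resp = requests.post(
        AUTH_URL,
        json={"user": f"user{user_id}", "otp": "123456"},
        timeout=10,
    )
    resp.raise_for_status()
    return time.perf_counter() - start

# Drive 500 authentication workflows with 50 concurrent workers
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(authenticate, range(500)))

cuts = statistics.quantiles(latencies, n=100)
print(f"p50={cuts[49]:.3f}s  p95={cuts[94]:.3f}s  p99={cuts[98]:.3f}s")
```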

Any ZTNA approach involves careful planning, continuous validation, and seamless integration to effectively support users and reduce threats without introducing new challenges.

Quantum-safe cryptography ready to address outsized risks already within view

Quantum computing methods may need only an hour to break cryptographic algorithms like RSA-2048 that were designed to withstand a billion years of hacking effort. An era when existing encryption methods become ineffective could arrive within the next decade, and the “harvest now, decrypt later” strategies already employed by cybercriminals mean data stolen today can be decrypted once that era arrives.

Employing post-quantum cryptography (PQC) algorithms now is an effective strategy for defending against attacks launched today whose destructive impact might not be felt for years. Governments and mission-critical industries like finance are already driving early adoption to mitigate future risks, with NIST recently releasing its first PQC standards with specified algorithms.

But it is still early days.

PQC algorithms like CRYSTALS-Kyber, CRYSTALS-Dilithium, Falcon, and SPHINCS+ are just now coming to market and have not yet undergone extensive testing and validation. Many organizations remain unaware that they exist, leaving parts of the network vulnerable.
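For teams beginning hands-on evaluation, the following minimal sketch exercises a full key-encapsulation round trip. It assumes the open-source liboqs-python bindings (the oqs module) are installed; the exact mechanism name depends on the installed liboqs version (for example, "Kyber768" in older releases versus the standardized "ML-KEM-768").

```python
# Minimal sketch of a post-quantum key encapsulation (KEM) round trip,
# assuming the liboqs-python bindings and the liboqs C library are installed.
# Mechanism names depend on the installed liboqs version.
import oqs

KEM_ALG = "Kyber768"  # may be "ML-KEM-768" in newer liboqs releases

with oqs.KeyEncapsulation(KEM_ALG) as client, oqs.KeyEncapsulation(KEM_ALG) as server:
    # Client generates a key pair and shares the public key
    public_key = client.generate_keypair()

    # Server encapsulates a shared secret against the client's public key
    ciphertext, shared_secret_server = server.encap_secret(public_key)

    # Client decapsulates the ciphertext to recover the same secret
    shared_secret_client = client.decap_secret(ciphertext)

    assert shared_secret_client == shared_secret_server
    print(f"{KEM_ALG}: shared secret established "
          f"({len(shared_secret_client)} bytes, ciphertext {len(ciphertext)} bytes)")
```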


As quantum computing becomes more prevalent, these organizations are at risk of being caught flat-footed. Given that quantum-safe cryptography requires careful planning, testing, and integration, migration can take years. The time to begin preparing for this certain reality is now.

Organizations can take initial implementation steps with:

  • Independent risk assessments. Engage experts like Spirent to identify which cryptographic algorithms and protocols are vulnerable to quantum attacks, and assess the timeline and impact of potential quantum threats on the network.

  • Early quantum-safe algorithm testing. Prioritize evaluation and validation of new quantum-safe cryptographic methods to ensure they can be implemented effectively on existing security infrastructure.
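Building on the second recommendation, a simple way to begin gauging performance impact is to time the basic KEM operations. The sketch below again assumes the liboqs-python bindings; absolute numbers depend entirely on hardware and library build, so treat results as indicative only.

```python
# Minimal sketch: time the basic operations of a post-quantum KEM to gauge
# relative cost. Assumes liboqs-python; results are indicative only.
import time

import oqs

KEM_ALG = "Kyber768"   # or "ML-KEM-768" in newer liboqs releases
ITERATIONS = 1000

def timed(label, fn):
    """Run fn() ITERATIONS times and print the average latency in microseconds."""
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        fn()
    avg_us = (time.perf_counter() - start) / ITERATIONS * 1e6
    print(f"{label:<12} avg {avg_us:8.1f} us over {ITERATIONS} runs")

with oqs.KeyEncapsulation(KEM_ALG) as client, oqs.KeyEncapsulation(KEM_ALG) as server:
    # Fixed key pair and ciphertext reused for the encap/decap timings
    public_key = client.generate_keypair()
    ciphertext, _ = server.encap_secret(public_key)

    timed("encapsulate", lambda: server.encap_secret(public_key))
    timed("decapsulate", lambda: client.decap_secret(ciphertext))
    timed("keygen", client.generate_keypair)  # timed last: it replaces the key pair
```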

AI, ZTNA, and PQC are positioned for outsized roles in the future of effective cybersecurity. Validation, scalability, interoperability, and performance challenges must all be considered today. Every organization should have plans underway to develop implementation roadmaps for these technologies.

With the right awareness, tools, and plans, organizations can begin preparing right now for the next era of cybersecurity threats already within view.

Spirent’s CyberFlood is already being used by early adopters for testing of AI-driven security optimizations and ZTNA architectures to ensure they meet performance demands without compromising security. CyberFlood also supports the emulation of PQC ciphers to assess the performance impact of these algorithms on evolving security infrastructure. Spirent SecurityLabs supports independent risk assessments and early PQC algorithm testing.

Learn more about testing for the future: read the white paper Security and Performance Testing for SASE and Zero Trust and watch the webinar Overcoming Security Challenges in Cloud-Native and Edge Environments.


Stephen Douglas

Head of Market Strategy

Spirent is a global leader in automated test and assurance for the ICT industry. Stephen heads Spirent’s market strategy organization, developing Spirent’s strategy and helping to define market positioning, future growth opportunities, and new innovative solutions. Stephen also leads Spirent’s strategic initiatives for 5G and future networks and represents Spirent on a number of industry and government advisory boards. With over 25 years’ experience in telecommunications, Stephen has been at the cutting edge of next-generation technologies and has worked across the industry with service providers, network equipment manufacturers, and start-ups, helping them drive innovation and transformation.