Emerging Trends in Artificial Intelligence and What They Mean for Risk Management

Artificial intelligence (AI) is a valuable risk management tool, but it also introduces risks of its own. As AI becomes more prevalent, it opens new possibilities while simultaneously raising new concerns.

Federal agencies and contractors have a responsibility to closely monitor developments in the scope and capacity of AI. In this article, we’ll explore some of the top emerging trends in AI, and we’ll explain their impact on risk management strategies for Federal agencies and contractors.

What Are the Emerging Trends in Artificial Intelligence?

With its enormous capacity for pattern recognition, prediction and analytics, AI can be instrumental in identifying risk and driving solutions. Here are some of the most promising new AI applications for risk management.

Predictive Analytics

Predictive AI is widely used in applications like network surveillance, fraud detection and supply chain management. Here’s how it works.

Machine learning (ML) tools, a subset of AI, rapidly “read” and analyze reams of historical data to find patterns. Historical data can mean anything from network traffic patterns to consumer behavior. Since machine learning tools can analyze vast datasets, they find subtle patterns that might not be evident to a human analyst working their way slowly through the same data. This kind of predictive analysis helps organizations identify risks before they escalate.

Once ML identifies the patterns, it can use them to make highly specific and accurate predictions. That can mean, for example, predicting website traffic and preventing unexpected outages due to increased usage. It can also mean spotting the warning signs of new computer viruses or identifying phishing emails.
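
As a minimal sketch of the idea, the example below trains a simple regression model on lagged hourly traffic counts and flags hours where the predicted load exceeds capacity. The synthetic data, three-hour window and capacity threshold are illustrative assumptions, not a production forecasting setup.

  # Minimal sketch: predicting next-hour website traffic from recent history.
  # Synthetic data, window size and capacity threshold are illustrative assumptions.
  import numpy as np
  from sklearn.linear_model import LinearRegression

  rng = np.random.default_rng(0)
  hours = np.arange(500)
  traffic = 1000 + 200 * np.sin(hours / 24) + rng.normal(0, 50, 500)

  # Build lagged features: the previous 3 hours predict the next hour.
  window = 3
  X = np.array([traffic[i:i + window] for i in range(len(traffic) - window)])
  y = traffic[window:]

  model = LinearRegression().fit(X[:-24], y[:-24])  # hold out the last day
  predicted = model.predict(X[-24:])

  CAPACITY = 1300  # illustrative capacity in requests per hour
  for hour, load in enumerate(predicted):
      if load > CAPACITY:
          print(f"Hour {hour}: predicted load {load:.0f} exceeds capacity")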

Generative AI

Generative AI (GenAI) is often discussed in terms of its content creation capabilities, but the technology also has enormous potential for risk management.

GenAI can rapidly synthesize data from a wide range of inputs and use it to create a coherent analysis. For example, GenAI can make predictions about supply chain disruptions, based on weather patterns, geopolitical issues and market demand. Many generative systems use natural language processing to interpret context, summarize information and support more accurate decisions.

GenAI can also come up with solutions to the problems it identifies. The technology excels at breaking down silos and drawing connections between different sources of information. For example, the technology can suggest alternative shipping routes or suppliers in the event of a supply chain disruption.
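
As a rough illustration of how such a pipeline might be wired together, the sketch below merges several data feeds into a single prompt for a language model. The query_llm function and the feed contents are hypothetical placeholders, not any specific product’s API.

  # Illustrative only: combining heterogeneous inputs into one GenAI prompt.
  # query_llm() is a hypothetical stand-in for whatever model client is in use.

  def query_llm(prompt):
      # Stub so the sketch runs; replace with a real model call.
      return "Draft analysis (placeholder output; verify before acting)."

  def build_disruption_prompt(weather_report, geopolitical_notes, demand_forecast):
      # Merge separate data feeds into a single analysis request.
      return (
          "You are a supply chain risk analyst.\n"
          f"Weather outlook: {weather_report}\n"
          f"Geopolitical notes: {geopolitical_notes}\n"
          f"Demand forecast: {demand_forecast}\n"
          "Identify likely disruptions in the next 30 days and suggest "
          "alternative shipping routes or suppliers for each."
      )

  prompt = build_disruption_prompt(
      weather_report="Typhoon expected near major container ports this week.",
      geopolitical_notes="New export controls announced on key components.",
      demand_forecast="Q3 demand projected 18% above current capacity.",
  )
  print(query_llm(prompt))  # output is a starting point, not a final answer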

It’s worth noting that, like any other AI tool, generative AI does best with human oversight. GenAI analysis should never be accepted at face value. Rather, employees can use it as an inspiration or a jumping-off point for further planning. Human expertise should always play a key role in the planning process, since GenAI isn’t always accurate.

Adaptive Risk Modeling

AI tools are capable of continuous learning and real-time analysis. Those capabilities lay the groundwork for adaptive risk modeling.

Adaptive risk modeling allows for a dynamic understanding of risk factors, instead of the traditional static approach. The old way of calculating risk relied on identifying patterns in historical data and using a linear model with a simple cause-and-effect analysis.

In contrast, adaptive risk modeling uses machine learning and deep learning to continually scan data sets for changes or new patterns. Instead of a static, linear model, AI risk modeling can build a dynamic model and continually update it.
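
One way to approximate this behavior is incremental (online) learning, where the model is updated with each new batch of data rather than retrained from scratch. The sketch below uses scikit-learn’s partial_fit to that end; the feature layout, labels and drifting pattern are illustrative assumptions.

  # Sketch of adaptive risk modeling via incremental (online) learning.
  # Feature layout, labels and the drifting pattern are illustrative assumptions.
  import numpy as np
  from sklearn.linear_model import SGDClassifier

  model = SGDClassifier(loss="log_loss")  # logistic regression, updated online
  classes = np.array([0, 1])              # 0 = normal, 1 = risky

  rng = np.random.default_rng(1)
  for batch in range(10):                 # e.g., one batch per monitoring interval
      X = rng.normal(size=(200, 5))       # stand-in for live risk indicators
      y = (X[:, 0] + 0.1 * batch * X[:, 1] > 0).astype(int)  # shifting pattern
      model.partial_fit(X, y, classes=classes)
      # The decision boundary adapts as the underlying pattern drifts.
      print(f"batch {batch}: training accuracy {model.score(X, y):.2f}")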

Use Cases for AI Risk Management Tools

AI is widely used in the Public and Private Sectors to predict and manage risk, including risks that involve third parties. Here are some of the common use cases.

Federal Government Use Cases

A growing number of Federal agencies use AI tools to increase efficiency in their work. Some are beginning to pilot AI-powered agents to automate routine tasks and provide real-time recommendations for employees.

  • The Department of Labor leverages AI chatbots to answer inquiries about procurement and contracts.
  • The Patent and Trademark Office uses AI to rapidly surface important documents.
  • The Centers for Disease Control and Prevention uses AI tools to track the spread of foodborne illnesses.

Financial Sector

Lenders increasingly use AI tools to assess the risk of issuing loans. Because AI can collect and analyze large data sets, the technology provides a comprehensive way to assess creditworthiness.

Financial institutions also use AI for fraud detection. AI tools can spot patterns in typical customer behavior and identify anomalies that could indicate fraud.
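
A common way to implement this kind of anomaly spotting is an unsupervised detector trained only on typical behavior. The sketch below applies scikit-learn’s IsolationForest to synthetic transaction amounts and times; the features and contamination rate are illustrative assumptions, not any institution’s actual model.

  # Sketch: flagging anomalous transactions with an unsupervised detector.
  # Features and contamination rate are illustrative assumptions.
  import numpy as np
  from sklearn.ensemble import IsolationForest

  rng = np.random.default_rng(2)
  # Typical behavior: modest amounts, daytime hours.
  normal = np.column_stack([rng.normal(80, 25, 1000),   # amount in dollars
                            rng.normal(14, 3, 1000)])   # hour of day
  # A few suspicious transactions: large amounts at unusual hours.
  suspicious = np.array([[2500, 3], [1800, 2], [3200, 4]])

  detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
  labels = detector.predict(suspicious)   # -1 = anomaly, 1 = normal
  print(labels)                           # these outliers should score -1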

Insurance Industry

Insurance companies frequently use AI for underwriting, including risk assessment and risk mitigation. AI is also a useful tool for processing claims and searching for fraud.

Generative AI is also often used to provide frontline services to customers. For example, chatbots answer straightforward questions, provide triage and refer more complex questions to human operators.

Risks Associated with AI Technologies

AI is a valuable tool in mitigating risk, but it’s important to be aware of the risks the tools themselves present.

Chief among those risks is the problem of algorithmic bias. AI and ML excel at identifying patterns and codifying them. However, this means that AI is only as good as the data that feeds it. If AI/ML tools are trained on biased data, the tools will codify the biases embedded in that data. AI/ML takes the unspoken prejudices in datasets and turns them into hard and fast rules, which inform every decision going forward.

If human operators do not question the algorithm’s output, there’s a real risk that bias will become deeply ingrained, causing lasting harm to individuals and organizations and even creating regulatory compliance issues. Agencies must also consider data privacy implications when AI tools process sensitive or regulated data.

Addressing AI Bias

Federal agencies and contractors must understand exactly how AI tools are being deployed. Operators should frequently look “under the hood” of the AI algorithms, asking questions about how the outputs are generated. Opening the “black box” allows organizations to check for bias and prevent it from being codified. Strong data ethics practices ensure that AI systems are trained on fair, transparent and accountable data sources.
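
One concrete “under the hood” check is to compare a model’s approval rates across demographic groups. The sketch below computes a disparate impact ratio against the widely cited four-fifths rule of thumb; the sample decisions and the 0.8 threshold are illustrative assumptions about the review policy.

  # Sketch of a first-pass bias audit: compare approval rates across groups.
  # Sample decisions and the four-fifths threshold are illustrative assumptions.
  from collections import defaultdict

  decisions = [  # (group, model_approved) pairs from an audit sample
      ("A", True), ("A", True), ("A", False), ("A", True),
      ("B", True), ("B", False), ("B", False), ("B", False),
  ]

  totals, approvals = defaultdict(int), defaultdict(int)
  for group, approved in decisions:
      totals[group] += 1
      approvals[group] += approved

  rates = {g: approvals[g] / totals[g] for g in totals}
  ratio = min(rates.values()) / max(rates.values())
  print(rates, f"disparate impact ratio = {ratio:.2f}")
  if ratio < 0.8:  # four-fifths rule of thumb
      print("Potential bias: investigate training data and features.")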

It’s best practice to implement a cross-functional AI governance council or team to oversee artificial intelligence. It’s also important to work closely with a trusted partner who has experience integrating AI into a governance, risk and compliance (GRC) platform. The best AI tools help humans manage a Federal agency efficiently. The question is how to make the most of the available technology while mitigating the associated risk.

The Switch to Proactive Network Resilience Management to Maintain Operational Continuity

Critical infrastructure organizations face unprecedented challenges from sophisticated adversaries, including modern ransomware gangs and Advanced Persistent Threat (APT) groups. These gangs and APT groups, such as Volt Typhoon and Salt Typhoon, seek to compromise and disrupt the operations of critical national infrastructure (CNI) for financial gain or to cause economic and societal harm. Fortunately, organizations can combat these attacks by shifting from traditional defensive approaches to a comprehensive network resilience strategy that ensures operational continuity through proactive management.

The Critical Shift from Defense to Resilience

With mission-critical systems increasingly dependent on network availability, cybersecurity is a top priority. Traditional security approaches have primarily focused on hardening defenses against external threats. However, this strategy has proven insufficient as sophisticated attackers continue to infiltrate networks and are increasingly exploiting weakly configured or vulnerable network devices to carry out their attacks. The consequences of such breaches extend beyond security concerns to operational, financial and reputational damage that can undermine an organization’s core mission.

Network devices are particularly attractive targets because they serve as the connective tissue for all organizational IT operations. When compromised, these devices provide attackers with persistence, lateral movement capabilities and access to sensitive data flows. Additionally, misconfigurations and unplanned changes to these devices—whether malicious or accidental—can result in disruptive outages at precisely the wrong moment.

To address these challenges, organizations need a tailored network resilience strategy built on the four pillars of operational resilience:

  1. Business Continuity: Maintaining critical business functions and mitigating interruptions to mission-critical services
  2. Risk Management: Assessing proactively to identify and address potential failure points before they impact operations
  3. Cybersecurity: Utilizing trusted hardening guides and security frameworks, such as those provided by the US National Institute of Standards and Technology (NIST) and the UK National Cyber Security Centre (NCSC), to monitor, detect and respond to cyber attacks and insider threats
  4. Disaster Recovery: Regaining access to and use of critical systems and restoring services as soon as possible following an outage

This approach recognizes that network security must be redefined as the proactive protection and assurance of business services, applications and data. This strategy shifts the goal from merely defending the perimeter to ensuring systems remain available and recoverable, and therefore trustworthy.

Implementing Continuous Network Resilience Management

Organizations must switch to viewing their network security as something that must be continuously and proactively protected. By focusing on network readiness, resilience and recoverability, organizations can quickly detect problems within their network and reduce risk to their business, all of which aligns with the latest compliance and security mandates. While shifting to continuous network resilience may seem daunting, Titania, a world leader in network configuration analysis for routers, switches and firewalls, can help.

Here are five ways that Titania enables organizations to shift from risk-based vulnerability management to continuous network resilience management:

  1. Offers full network visibility, equipping organizations to swiftly identify anomalies. Titania’s platform establishes a configuration baseline that identifies all changes, differentiating between planned and unauthorized ones, enabling teams to automatically identify anomalies and potential indicators of compromise (IOCs). This includes identifying macro-segmentation violations, such as changes to or the presence of unauthorized internet protocol (IP) addresses, ports and users that could signal an active threat (a minimal diff-based sketch of baseline comparison follows this list).
  2. Assesses network segmentation to contain breaches. Network segmentation prevents or delays bad actors from moving laterally within a business, which would allow them to access more of the network than otherwise possible. By hardening and effectively segmenting all routers, switches and firewalls, Titania helps reduce risk to a business’ mission-critical objectives.
  3. Analyzes and remediates network exposure. Titania helps organizations assess misconfigurations and software vulnerabilities based on the specific tactics, techniques and procedures (TTPs) that threat actors use. To minimize exposure to APTs and ransomware, Titania automatically prioritizes remediation workflows to address the most critical and likely TTP risks.
  4. Maintains an accurate configuration management database (CMDB) to aid business continuity and disaster recovery. By tracking all configuration changes, whether planned or unauthorized, Titania enables businesses to swiftly recover from any potential breaches. Titania also enables network operations center (NOC) teams to manage configurations-as-code, ensuring potential disruptions are identified and addressed during pre-deployment configuration testing.
  5. Assures networks comply with both internal and external mandates. Titania cross-checks network configurations to determine adherence to mandated requirements, automatically reporting pass/fail compliance with US, EU and international hardening standards and risk management frameworks (RMFs).
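
To make the baseline-comparison idea in point 1 concrete, here is a minimal sketch that diffs a running device configuration against its approved baseline and flags unauthorized additions. It uses Python’s standard difflib and invented configuration snippets; it is not Titania’s implementation.

  # Minimal sketch of configuration-drift detection against a baseline.
  # Config snippets are invented; this is not a vendor implementation.
  import difflib

  baseline = """\
  hostname core-fw-01
  snmp-server community readonly RO
  access-list 10 permit 10.0.0.0 0.255.255.255
  """.splitlines()

  running = """\
  hostname core-fw-01
  snmp-server community readonly RO
  access-list 10 permit 10.0.0.0 0.255.255.255
  access-list 10 permit any
  username backdoor privilege 15
  """.splitlines()

  for line in difflib.unified_diff(baseline, running, lineterm=""):
      if line.startswith("+") and not line.startswith("+++"):
          print("UNAUTHORIZED CHANGE:", line[1:].strip())

A production platform would also distinguish planned changes from unauthorized ones, but the core operation is the same comparison against a known-good baseline.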

As threats continue to evolve and mission objectives become intertwined with network infrastructure, the ability to ensure operational continuity through comprehensive network resilience management will become a defining characteristic of successful cybersecurity programs. By implementing solutions that address the full spectrum of network security challenges, Government agencies and commercial organizations can protect their mission-critical services and maintain the trust of those who depend on them.

To learn more about implementing a comprehensive network resilience strategy for your organization, visit Titania’s Nipper Resilience product page.

Third-Party Risk Management: Moving from Reactive to Proactive

In today’s interconnected world, cyber threats are more sophisticated, with 83% of cyberattacks originating externally, according to the 2023 Verizon Data Breach Investigations Report (DBIR). This has prompted organizations to rethink third-party risk management. The 2023 Gartner Reimagining Third Party Cybersecurity Risk Management Survey found that 65% of security leaders increased their budgets, 76% invested more time and resources and 66% enhanced automation tools to combat third-party risks. Despite these efforts, 45% still reported increased disruptions from supply chain vulnerabilities, highlighting the need for more effective strategies.

Information vs. Actionable Alerts

The constant evolution and splintering of illicit actors pose a challenge for organizations. Many threat groups have short lifespans or re-form due to law enforcement takedowns, infighting and shifts in ransomware-as-a-service networks, making it difficult for organizations to keep pace. A countermeasure against one attack may quickly become outdated as these threats evolve, requiring constant adaptation to new variations.

In cybersecurity, information is abundant, but decision-makers must distinguish between information and actionable alerts. Information provides awareness but does not always drive immediate action, whereas alerts deliver real-time insights, enabling quick threat identification and response. Public data and real-time alerts help detect threats not visible in existing systems, allowing organizations to make proactive defense adjustments.

Strategies for Managing Third-Party Risk

Managing third-party risk has become a critical challenge. The NIST Cybersecurity Framework (CSF) 2.0 emphasizes that governance must be approached holistically and highlights the importance of comprehensive third-party risk management. Many organizations rely on vendor surveys, attestations and security ratings, but these provide merely a snapshot in time and are often revisited only during contract negotiations. The NIST CSF 2.0 calls for continuous monitoring—a practice many organizations follow, though it is often limited to identifying trends and anomalies in internal telemetry data, rather than extending to third-party systems where potential risks may go unnoticed. Failing to consistently assess changes in third-party risks leaves organizations vulnerable to attack.

Many contracts require self-reporting, but this relies on the vendor detecting breaches, and there is no direct visibility into third-party systems like there is with internal systems. Understanding where data is stored, how it is handled and whether it is compromised is critical, but organizations often struggle to continuously monitor these systems. Government organizations, in particular, must manage their operations with limited budgets, making it difficult to scale with the growing number of vendors and service providers they need to oversee. Threat actors exploit this by targeting smaller vendors to access larger organizations.

Current strategies rely too heavily on initial vetting and lack sufficient post-contract monitoring. Continuous monitoring is no longer optional—it is essential. Organizations need to assess third-party risks not only at the start of a relationship but also as they evolve over time. This proactive approach is crucial in defending against the ever-changing threat landscape.

Proactively Identifying Risk

Proactively identifying and mitigating risks is essential for Government organizations, particularly as threat actors increasingly leverage publicly available data to plan their attacks. Transparency programs, such as USAspending.gov and city-level open checkbook platforms, while necessary for showing how public funds are used, can inadvertently provide a playbook for illicit actors to target vendors and suppliers involved in Government projects. Public data often becomes the first indicator of an impending breach, giving organizations a narrow window—sometimes just 24 hours—to understand threat actors’ operations and take proactive action.

To shift from reactive to proactive, organizations must enhance capabilities in three critical areas:

  1. Speed is vital for detecting threats in real time. Using AI to examine open source and threat intelligence data helps organizations avoid delays caused by time-consuming searches.
  2. The scope of monitoring must extend beyond traditional sources to deep web forums and dark web sites, evaluating text, images and indicators that mimic official branding.
  3. While real-time information is essential, excessive data can lead to alert fatigue. AI models that filter and tag relevant information enable security teams to focus on the most significant risks, as the simplified sketch after this list illustrates.
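
As a simplified illustration of the filtering step in point 3, the sketch below scores raw alerts by severity and vendor relevance so that only the highest-priority items surface. The vendor list, weights and threshold are illustrative assumptions, not a description of Dataminr’s models.

  # Simplified sketch of alert filtering and tagging to reduce alert fatigue.
  # Vendor list, weights and threshold are illustrative assumptions.
  RELEVANT_VENDORS = {"acme-logistics", "globex-cloud"}
  SEVERITY_WEIGHT = {"low": 1, "medium": 2, "high": 3, "critical": 5}

  def score(alert):
      # Combine severity with whether the alert touches a monitored vendor.
      base = SEVERITY_WEIGHT.get(alert["severity"], 0)
      vendor_hit = 2 if alert["vendor"] in RELEVANT_VENDORS else 0
      return base + vendor_hit

  alerts = [
      {"vendor": "acme-logistics", "severity": "critical",
       "text": "ransomware claim posted on leak site"},
      {"vendor": "unrelated-co", "severity": "low", "text": "minor outage"},
  ]

  THRESHOLD = 4  # only surface alerts worth an analyst's attention
  for alert in alerts:
      if score(alert) >= THRESHOLD:
          print("ACTIONABLE:", alert["vendor"], "-", alert["text"])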

Proactively addressing third-party risks requires organizations to stay prepared for immediate threats. By leveraging public data, they can strengthen defenses and act before vulnerabilities are exploited.

While self-reporting and AI tools are valuable, organizations must take ownership of their risk management by conducting their own due diligence. The ability to continuously monitor, identify and mitigate risks presents not just a challenge but an opportunity for growth and improvement. Ultimately, it is the organization’s reputation and security at stake, making proactive risk management key to staying ahead of today’s evolving threats.

To learn more about proactive third-party risk management strategies, watch Dataminr’s webinar “A New Paradigm for Managing Third-Party Risk with OSINT and AI.”

Rethinking and Modernizing the ATO Approval Process

The path to securing Authorization to Operate (ATO) approval presents myriad challenges, such as complex regulations, the potential for human error and the constant threat of cyberattacks. The role of an Authorizing Official (AO) demands both speed and thoroughness to minimize an organization’s risk while safeguarding sensitive information. Traditional manual, point-in-time assessments are proving insufficient, resulting in significant security risks. As digital transformation accelerates in both the Government and Private Sector, regulatory compliance requirements have also increased, yet the tools and processes used to meet these standards lag behind. This disconnect poses a challenge for AOs, underscoring the urgent need for innovation in the ATO approval journey.

Preventing Compliance Drift

To stay ahead of the threats against the nation while reducing friction in the compliance process, a proactive approach of implementing necessary measures and safeguards before they are mandated by regulatory requirements is essential. As Brandt Keller, Software Engineer at Defense Unicorns, stated during a recent webinar discussing the ATO approval process, “New technologies are coming, and we need to implement them and understand what they do, how they do it and what controls they do or do not satisfy.” The role of compliance within the DevSecOps process is pivotal, especially when switching from one technology to another. This decision must consider how the change impacts compliance, as the environment shift can alter the ATO posture. Such changes may result in drift or even expose the system to malicious actors seeking to escalate privileges or perform unauthorized actions. While compliance and security are often viewed as separate processes, they can and should be integrated to provide an additional layer of defense.

Preventing drift in IT systems is a crucial aspect of maintaining continuous compliance. AOs must actively collect and report data to accurately reflect the current state of their systems. Leveraging open standards on a platform is essential for effectively utilizing data. To achieve this, AOs need reliable methods for producing and regularly assessing data. Building a system from the ground up with compliance in mind involves meticulously implementing and automating controls that can be rerun consistently. The process must be both repeatable—able to redo tasks—and reproducible—able to collect evidence and achieve the same results. Any deviation indicates a potential issue, a change or an environmental modification that has made it less compliant. This approach allows AOs to confidently attest that their ATO meets all required controls and prevents any drift.
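
One way to make a control check repeatable and reproducible in the sense described above is to script it and record a hash of the evidence it collects, so that a different hash on a later run signals drift. The sketch below illustrates the pattern with an invented password-policy control; it is not any specific platform’s implementation.

  # Sketch: a rerunnable control check whose evidence is hashed, so a
  # changed hash on a later run signals compliance drift.
  # The control and config values are invented for illustration.
  import hashlib, json

  def check_password_policy(config):
      # Invented control: minimum password length must be at least 14.
      return config.get("min_password_length", 0) >= 14

  config = {"min_password_length": 14, "mfa_required": True}

  passed = check_password_policy(config)
  evidence = json.dumps(config, sort_keys=True).encode()
  fingerprint = hashlib.sha256(evidence).hexdigest()

  print(f"control passed: {passed}, evidence sha256: {fingerprint[:16]}...")
  # Store the fingerprint; if a rerun yields a different one, the
  # environment changed and the control must be reassessed.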

Implementing Automation

Automating processes within DevSecOps pipelines has emerged as a pivotal strategy, particularly streamlining compliance checks before system deployment. This approach allows decision-makers to assess risk before a system is even deployed. Moreover, the ability to continuously evaluate and update data in real time enhances accuracy and ensures timely access to critical information. However, accessibility of data remains a challenge due to the number of disconnected environments in existence. Open standards such as the Open Security Controls Assessment Language (OSCAL) solve this problem by providing a unified framework for continuous data integration. By adopting platforms that adhere to open standards, organizations can foster innovation and empower AOs with data in a familiar and actionable format, thereby optimizing efficiency and bolstering security measures.

ATO Risk Management Framework (RMF) artifacts represented in OSCAL machine-readable formats break down information silos, achieving effective communication across teams and facilitating seamless data handoffs. Automation is pivotal in expediting the decision-making process, alleviating the burden on the human workforce, enabling AOs to access better-quality data and making risk-based decisions more efficiently. While the potential for error is still present, automation significantly mitigates human error in data handoffs across all controls and systems. It also helps security professionals focus on managing risk rather than completing rudimentary compliance tasks.
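
To make the machine-readable idea concrete, here is a heavily abridged fragment shaped like an OSCAL assessment result, built as a Python dictionary. The field selection is simplified and not schema-complete; NIST’s OSCAL documentation defines the authoritative models.

  # Heavily abridged, illustrative fragment shaped like an OSCAL
  # assessment result. Field selection is simplified, not schema-complete.
  import json, uuid

  result = {
      "assessment-results": {
          "uuid": str(uuid.uuid4()),
          "metadata": {
              "title": "Quarterly automated assessment",
              "version": "1.0",
              "oscal-version": "1.1.2",
          },
          "results": [{
              "uuid": str(uuid.uuid4()),
              "title": "AC-2 account management check",
              "description": "Automated evidence collected from the pipeline.",
          }],
      }
  }

  # Any OSCAL-aware tool or team can consume the same artifact, which is
  # what breaks down the information silos described above.
  print(json.dumps(result, indent=2))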

Automating technical and administrative controls is not the same. While traditional approaches rely on application programming interface (API) data, nontraditional methods such as infrastructure as code (IaC)—managing computing infrastructure through provisioning scripts—or compliance as code—managing regulatory requirements by encoding them into automated scripts or code—offer alternative paths. These approaches allow organizations to establish rules and apply validations programmatically, mirroring the precision and speed of technical controls. However, not all controls are created equal; some function as checkboxes without mitigating risks. The critical controls that significantly impact an environment’s security posture should be the priority for automation. As emphasized by Travis Howerton, Co-founder and CEO at RegScale, “it is less important what percent of total controls are covered than what percentage of your total risk you are mitigating with automation.”
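
As a small sketch of compliance as code, the example below encodes a few requirements as rules, validates an infrastructure-as-code resource definition against them and reports findings in risk order, echoing the point about prioritizing the controls that mitigate the most risk. The rules, weights and resource fields are illustrative assumptions.

  # Sketch of compliance as code: requirements encoded as rules and
  # applied programmatically to an IaC resource definition.
  # Rules, risk weights and fields are illustrative assumptions.
  RULES = [
      # (description, risk_weight, predicate)
      ("Storage must be encrypted at rest", 9, lambda r: r.get("encrypted") is True),
      ("Public access must be disabled", 8, lambda r: r.get("public_access") is False),
      ("Resource must carry an owner tag", 3, lambda r: "owner" in r.get("tags", {})),
  ]

  resource = {"name": "audit-logs-bucket", "encrypted": True,
              "public_access": True, "tags": {}}

  findings = [(desc, weight) for desc, weight, check in RULES if not check(resource)]
  # Highest-risk findings first: mitigate the most risk, not the most checkboxes.
  for desc, weight in sorted(findings, key=lambda f: -f[1]):
      print(f"[risk {weight}] {resource['name']}: {desc}")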

The cadence mismatch between cyber threats that move at light speed and heavily manual compliance processes must be fixed. “The big part of what has to modernize,” according to Howerton, “is taking more automated approaches, leveraging advances in technology and thought leaders in this space to figure out how we can do things in a more automated manner to bring the principles of DevSecOps to compliance.” This strategic focus will ensure thorough and repeatable processes and prepare AOs for a future where compliance and security are dynamically intertwined, ultimately supporting better risk-based decisions and unlocking the full potential of digital transformation. By accepting early that ATOs should be more real-time and continuous, AOs can better position themselves for the future.

Watch RegScale and Carahsoft’s webinar, AO Perspectives: Managing Risks and Streamlining ATO Decision-Making, to learn more about modernizing the ATO approval process.