Strengthening Cybersecurity in the Age of Low-Code and AI: Addressing Emerging Risks

As new technologies like low-code/no-code development and generative AI (GenAI) revolutionize how we build and interact with software, they also introduce new security challenges, particularly for the public sector. Protecting sensitive information and online accounts is more critical than ever, as cybercriminals look to exploit gaps in these emerging systems. Robust security and threat visibility are now essential for safeguarding against the risks of these advancements, especially as traditional safeguards become less effective against evolving threats.


Low-Code Development Exposes New Risks

One of the unintended consequences of our shift to a low-code/no-code development paradigm is the delegation of complex development tasks to Large Language Models (LLMs) and GenAI systems, often bypassing seasoned developers and architects. This opens new opportunities for cybercriminals. These systems excel at functional requirements—‘Build me a website that accepts customer checkout requests’—but they rarely infer non-functional needs, like security, unless explicitly instructed.

In traditional software development, security considerations are often implicit, stemming from the experience of developers and architects who’ve spent years learning from real-world failures. GenAI, however, lacks this depth of experience and focuses narrowly on the task at hand. The result? Incomplete or inadequate security measures in software developed through these systems. As organizations lean more heavily on GenAI, we risk creating an insecure software ecosystem ripe for exploitation by threat actors.
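To make the gap concrete, here is a minimal, hypothetical sketch (not drawn from any specific GenAI output) contrasting a purely functional database lookup, the kind a naive prompt might yield, with the same task hardened by two non-functional controls an experienced developer would add implicitly: input validation and a parameterized query. The table and function names are illustrative only.

```python
import sqlite3

# Hypothetical "functional requirements only" version: it works, but
# concatenating user input into SQL invites injection.
def lookup_order_unsafe(conn, order_id):
    return conn.execute(
        "SELECT item, total FROM orders WHERE id = " + order_id  # vulnerable
    ).fetchall()

# The same task with the non-functional security controls made explicit:
# validate the input, then bind it as a parameter instead of splicing it in.
def lookup_order_safe(conn, order_id):
    if not order_id.isdigit():
        raise ValueError("order_id must be numeric")
    return conn.execute(
        "SELECT item, total FROM orders WHERE id = ?", (order_id,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 'widget', 9.99)")

print(lookup_order_safe(conn, "1"))           # [('widget', 9.99)]
# A classic injection string sails straight through the unsafe version:
print(lookup_order_unsafe(conn, "1 OR 1=1"))  # returns every row in the table
```

Both functions satisfy the functional requirement equally well, which is exactly why a system optimizing only for the stated task has no reason to prefer the safe one.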


The Proliferation of Knowledge-Based Verification Attacks

We’re on the brink of a surge in automated attacks exploiting vulnerabilities in Knowledge-Based Verification (KBV) systems. Large-scale data breaches, like the one that exposed millions of Social Security numbers last year, are eroding the effectiveness of KBV for confirming identity during account creation or password resets. These processes rely on answers to personal-history questions, such as your mother’s maiden name or the street you grew up on, but this information is increasingly accessible to malicious actors.


As these personal details become more widely available through data breaches and online marketplaces, attackers can easily bypass KBV systems. Worse yet, threat actors can now leverage LLMs to develop sophisticated tools to mine personal data at scale and orchestrate automated attacks against these KBV systems. Organizations face an urgent challenge: how to protect accounts in a world where traditional KBV methods are no longer secure or reliable while still offering users a legitimate path to create an account or regain access when needed.
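One partial mitigation implied above is raising the cost of automated guessing even when the underlying answers have leaked. The sketch below (illustrative only, not a description of any vendor's product; the limits are assumed policy values) throttles KBV attempts per account with a sliding window, forcing repeated failures onto a slower, higher-assurance recovery path:

```python
import time
from collections import defaultdict, deque

# Assumed policy values for illustration.
MAX_ATTEMPTS = 3      # KBV attempts allowed per window
WINDOW_SECONDS = 900  # 15-minute sliding window

_attempts = defaultdict(deque)  # account_id -> timestamps of recent tries

def kbv_attempt_allowed(account_id, now=None):
    """Return True if this KBV attempt may proceed, False if throttled."""
    now = time.time() if now is None else now
    recent = _attempts[account_id]
    # Drop attempts that have aged out of the sliding window.
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= MAX_ATTEMPTS:
        return False  # route to a slower, higher-assurance recovery path
    recent.append(now)
    return True

# A bot hammering one account is cut off after three tries:
results = [kbv_attempt_allowed("acct-42", now=1000.0 + i) for i in range(5)]
print(results)  # [True, True, True, False, False]
```

Throttling alone does not fix KBV's core weakness (the answers are knowable), but it blunts the scale advantage that LLM-assisted automation gives attackers.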


LLM Safeguards Can Be Overridden or Bypassed by Running Models Locally

With the proliferation of local LLM instances and tools like Ollama, we’ll see safeguards embedded in commercial LLMs eroded or bypassed entirely. Running models locally can allow threat actors to fine-tune them, removing restrictions on malicious activity and enabling custom models optimized for cybercrime. This creates a new frontier for scaled attacks that are faster, more targeted, and harder to detect until it’s too late.

Imagine a threat actor fine-tuning a model to craft phishing campaigns, identify vulnerabilities in software, or automate account takeovers. The ability to localize and modify these models fundamentally shifts the balance, empowering attackers with tools tailored to their malicious intent. The guardrails built into commercial LLMs are no match for this growing trend, amplifying the need for robust detection and defense strategies at every level.

As the public sector continues to adopt innovative technologies, staying ahead of emerging cyber threats is crucial. The increasing sophistication of attacks, such as those targeting KBV systems and leveraging GenAI, highlights the need for stronger protections. By prioritizing comprehensive security measures and threat detection, organizations can mitigate the risks of these evolving vulnerabilities and safeguard their sensitive data and online accounts against malicious actors. It is essential to build and maintain resilient security strategies to ensure the integrity of digital infrastructures in this rapidly changing environment.


To learn more about how HUMAN Security helps the public sector protect citizen accounts, sensitive information, and critical infrastructure, click here.


Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including HUMAN Security, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Protecting Government Benefit Programs from Automated Fraud

Nation-states, ransomware gangs, and cybercriminals have a new weapon of choice: AI-powered bots. These systems, which mimic human behavior to automate tasks, have already helped fraudsters siphon hundreds of billions of dollars from federal programs. If left unchecked, this problem will cause taxpayers severe financial harm. The incoming administration will need to move quickly to guard against this rapidly growing threat.

The need to better defend the nation’s technology infrastructure against AI-powered attacks is not a partisan issue, and our new cyber leaders can likely build upon actions taken by the last administration, including the final cybersecurity EO issued in January 2025, which highlighted the role that stolen or synthetic identities play in defrauding government programs. While the focus on instituting modernized digital identity methods may be appropriate, we’d like to offer a few additional considerations for our incoming cyber leaders on how to attack this problem.


The Bots Are Here


Bots are increasingly being used by malicious actors to hack into systems, scrape personal data, or submit fake claims for benefits. At their simplest, bots use credentials and identification information purchased or stolen on the dark web to perpetrate fraud against benefit websites. From overwhelming public benefit portals with credential stuffing attacks to manipulating identity verification systems with precision-targeted scams, bots exploit gaps in digital identity systems with a speed, precision, and scale that are incredibly hard to defend against. And with advancements in AI, they can increasingly mimic legitimate users and bypass security measures faster than most institutions can adapt.

In fact, in 2021, the Department of Labor found that at least $87 billion of the nearly $900 billion in unemployment insurance awarded under the CARES Act in the aftermath of the COVID pandemic was paid improperly, with a significant but indeterminable portion attributable to fraud. And in 2023 alone, bots were responsible for 352 billion attacks targeting login portals, credential verification systems, and transaction flows across industries, according to HUMAN’s Quadrillion report.

With 20 percent of login attempts across observed systems linked to account takeover attacks, and 150 million new compromised credential pairs discovered last year, bots are evolving into the ultimate enablers of fraud. If left unchecked, they could amplify the scale of fraud exponentially.


How do we prevent this problem from evolving from merely headline-grabbing to system-crippling?

Our incoming cyber leaders must recognize bots as the major root cause of the fraud problem and refocus attention on deploying cutting-edge tools across U.S. federal systems to defend the thousands of .gov websites the government administers. This includes deploying applications that can help protect against automated credential stuffing and brute-force attempts, block bots from manipulating web applications, prevent data contamination in which bots disseminate fake information to skew metrics, and prevent unauthorized data harvesting from public websites.
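As a hedged illustration of the first item on that list, one simple heuristic behind credential-stuffing detection is that a stuffing source tends to cycle through many distinct usernames, while a legitimate user retries a single account. The sketch below is a toy heuristic under assumed tuning values, not a production control:

```python
from collections import defaultdict

# Assumed tuning value for illustration: flag a source once it has
# failed logins against this many distinct accounts.
DISTINCT_USER_THRESHOLD = 5

def flag_stuffing_sources(failed_logins):
    """failed_logins: iterable of (source_ip, username) failed attempts.

    Returns the set of source IPs whose failures span enough distinct
    usernames to look like credential stuffing rather than a forgotten
    password.
    """
    users_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        users_per_ip[ip].add(user)
    return {ip for ip, users in users_per_ip.items()
            if len(users) >= DISTINCT_USER_THRESHOLD}

events = [("10.0.0.9", f"user{i}") for i in range(8)]  # bot-like pattern
events += [("192.0.2.7", "alice")] * 4                 # human retrying
print(flag_stuffing_sources(events))  # {'10.0.0.9'}
```

Real counter-bot systems layer many stronger signals (behavioral, device, and network telemetry) on top of volume heuristics like this one, which is precisely why purpose-built tooling is needed at .gov scale.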

The government must also take the lead in helping private sector entities adopt these tools. The federal government can serve as a catalyst, pushing hold-out organizations to invest in their own fraud defenses. Private businesses are looking for guidance on this issue. Bot detection and counter-bot solutions deserve the same level of attention as endpoint detection, patch management, and other fundamental security controls. Proactively embedding bot mitigation into NIST frameworks, for example, will ensure government systems are prepared to defend against automated fraud at scale. Building on this, government guidance on how agencies establish Zero Trust architectures should also incorporate bot detection and mitigation.

Finally, we must foster stronger public-private collaboration to advance bot mitigation. Existing bodies for public-private cooperation on cybersecurity must more deliberately include bot intelligence and insight-sharing. We must evolve outdated conceptions of what constitutes cyber threat intelligence (CTI), and endeavor to collect, analyze, and report bot intelligence as its own distinct but highly important category of CTI.

As our incoming cyber leaders in the new administration plan their agenda, it is critical they understand that the root cause of large-scale fraud is not just weak digital identity management methods but AI-powered bots. Bots that undermine the delivery of services and benefits to millions. Combating fraud perpetrated by and with them is a national priority.


To learn more about how HUMAN Security and NightDragon work better together to support Government agencies in their mission to defend against bots, visit our website!

