3 Ways Federal Agencies Can Make the Most of Microservices

To keep up with the current state of IT modernization, more and more federal agencies are turning to microservices to help them innovate faster and create a better end-user experience. Microservices are an architectural approach that breaks applications down into their core functions, or services, allowing each to be deployed independently. Microservices have traditionally been used alongside DevOps to enable more parallel development and faster, more efficient deployment of upgrades and new features.

By utilizing microservices, agencies can update code more easily without impacting other applications. And while many agencies continue to turn to microservices for these efficiencies, perhaps the most significant shift in the development process is a cultural one. This is where DevOps comes into play, with its culture of openness and collaboration. To reap the full benefits of microservices, federal IT pros should follow these best practices:

  • Shift from a traditional monitoring mindset to one of observability. The high number of moving pieces and additional services created by microservices can add significant complexity to an agency’s IT environment and across teams. While continuous monitoring throughout the application development process is still essential, agencies cannot gain precise insights into their infrastructure without an observability solution. Observability solutions combine network, cloud, system, application, service, and database metrics into a single source of truth, so users can investigate performance analytics at a deeper level and uncover where issues may be occurring.
  • Consider a mesh architecture, and ensure it aligns with your container infrastructure. Once you start running more than ten microservices at a time, a service mesh architecture is recommended, as it provides policy-based networking and describes the behavior of the network in the face of constantly changing conditions. Service mesh architecture uses sidecar proxies to help IT security and observability efforts keep up with the complex connections between distributed apps. With a mesh architecture, DevOps teams can get quick metrics, logs, and tracing without making application code changes (see the sidecar sketch after this list).
    • When deciding which service mesh to leverage, look to your agency’s infrastructure and deployment use cases to help inform your choice. For example, some infrastructures work well with Kubernetes® or Docker, but specific use cases may require a highly capable service mesh like Istio or more straightforward tooling like Linkerd 2.0.


  • Build an understanding of the network’s criticality. Understanding the criticality of each workload in your agency’s IT portfolio is the first step toward establishing mutual commitments to cloud management. Some applications are mission-critical and must not fail. Others may go months without being used. While poor performance or outages for those lesser-used workloads is not desirable, the impact is isolated and limited with microservices. Federal IT pros should create scales for each application to determine the effort required to meet certain levels of criticality. Starting development with the proper observability framework and using a cloud-native approach to design scalable, independently delivered microservices can be hugely beneficial, especially when considering mission-critical activities.
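As a concrete illustration of the sidecar pattern mentioned above, here is a minimal sketch (in Python, and not any particular mesh’s implementation) of a proxy that fronts a co-located service, forwards each request, and records latency without any change to the application itself. The port numbers and upstream address are assumptions for the example.

```python
# Minimal sidecar-style proxy: forwards each request to the co-located
# application and records latency, with no application code changes.
# The ports and upstream address are illustrative assumptions.
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:8080"  # the application this sidecar fronts

class SidecarProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        start = time.monotonic()
        with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        elapsed_ms = (time.monotonic() - start) * 1000
        # A real mesh exports these metrics to its control plane;
        # logging them is enough to illustrate the idea.
        print(f"{self.path} -> {status} in {elapsed_ms:.1f} ms")
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9090), SidecarProxy).serve_forever()
```

Production sidecar proxies such as Envoy (used by Istio) also handle mutual TLS, retries, and policy enforcement; the point here is simply that telemetry can live beside the application rather than inside it.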

Microservices can provide many benefits to agencies, particularly when applied alongside DevOps. But to reap the benefits, agencies need to take an intelligent approach to implementation and ensure they have taken steps to incorporate observability to monitor and secure their IT environments effectively.

 

For a deeper dive into the SolarWinds® Hybrid Cloud Observability offering and how it can help agencies gain end-to-end IT operations visibility, click here.

The Best of What’s New in Cybersecurity

In November 2021, federal lawmakers approved dedicated funding for state and local government cybersecurity efforts. The new State and Local Cybersecurity Grant Program — included in the massive Infrastructure Investment and Jobs Act — provides $1 billion for cybersecurity improvements over four years. Then, in March of this year, President Biden signed into law the Cyber Incident Reporting for Critical Infrastructure Act of 2022 as part of the Consolidated Appropriations Act of 2022. Taken together, these laws point toward significant changes in the nation’s historically decentralized approach to cybersecurity. New cybersecurity legislation is being driven by a threat environment that seemingly grows more menacing by the day. It’s likely that state and local agencies will receive additional federal cybersecurity support going forward, along with greater federal oversight. Learn how your agency or municipality can take full advantage of the increased funding to protect against increasing challenges in Carahsoft’s Innovation in Government® report.

 

Navigating Security in a Fast-Changing Environment

“Threat actors are constantly devising new attacks and methodologies, so organizations must stay on top of trends and constantly evolve how they build and secure their software supply chain. It isn’t a ‘set it once and you’re good’ kind of thing. President Biden’s executive order on improving the nation’s cybersecurity and some bills going through Congress will help address some of the issues. Among many things, the executive order mandates service providers disclose security incidents or attacks. It’s also important to establish a community where security professionals across the nation can exchange security and threat information. You don’t want to solve these things in a vacuum. We’re stronger as a community than as individual organizations.”

Read more insights from SolarWinds’ Group Vice President of Product, Brandon Shopp.

 

User Identities in a Zero-Trust World

“State and local governments — which have become top targets of phishing, data breaches and ransomware attacks — must be able to prevent cybercriminals from accessing all endpoints, including those associated with a distributed workforce. Prior to the pandemic, employees primarily accessed databases, applications and constituent data from within the secured network perimeter of an office. Now users are connecting from their home networks or unknown networks — even cafes — that don’t have the security protections that exist within a physical office. That heightens the need for Zero Trust, which has ‘never trust, always verify’ as a main tenet.”

Read more insights from Keeper Security’s Director of Public Sector Marketing, Hanna Wong.

 

Secure Collaboration for the Work-from-Anywhere Future

“The first step is to look at your content governance model. What does that content life cycle look like from ingestion or creation to consumption and archive? Compliance must be part of that entire process. Then, it comes down to your platform and tools. Are you selecting a platform like Box, where your entire content repository is unified and ensures compliance from the point of entry to the point of disposition — all while offering a seamless user experience? Or are you signing up for a disparate and disconnected strategy where you are now responsible for tracking and making sure that different data sources are compliant? Content fragmentation, even in the cloud, can introduce unnecessary exposure and a compliance risk.”

Read more insights from Box’s Managing Director for State and Local Government, Murtaza Masood.

 

What High-Performing Security Organizations Do Differently

“State and local governments are still trying to get a handle on remote access. At the beginning of COVID, most agencies didn’t have a 1:1 ratio of devices to send home with people, so they were forced overnight into a bring-your-own-device support model and virtual desktop infrastructure (VDI) implementation. In many cases, the VDI implementation wasn’t very secure, nor was it optimal. Now agencies are asking how secure their setup is, and they have to go backward to address that, which can cause some real challenges.”

Read more insights from HPE’s Master Technologist in the Company’s Office of North America CTO, Joe Vidal, and Server Security and Management Solutions Business Manager, Allen Whipple.

 

Download the full Innovation in Government® report for more insights from these cybersecurity thought leaders and additional industry research from GovTech.

Five Benefits Federal Agencies Gain From Observability

More than ever before, federal agencies are responsible for managing increasingly complex, diverse, and distributed IT infrastructures. While traditional monitoring methods allow for insights into specific network activities, IT teams still face challenges viewing all the interdependencies between the various network, cloud, and IT functions. The overload of alerts and the disjointed analytics from disparate tools make it challenging to produce the actionable insights necessary to rapidly identify and resolve issues in mission-critical activities. And multiple tools can become cost-prohibitive to maintain and scale, creating operational risks. So, what’s the solution?

More agencies are implementing observability solutions, taking traditional monitoring an important step further. By using data and insights from monitoring, observability provides a holistic understanding of your infrastructure, including health and performance. With layers of data and immediately synthesized analysis, IT pros can spot inconsistencies before they become issues. These functions give IT teams single-pane-of-glass visibility with actionable intelligence to expedite problem resolution and enable proactive management. Observability provides much more, of course; below are the top five benefits federal agencies can gain from it.

Holistically observe end-to-end service health, security quality, and availability. Observability gives agencies deep visibility into their IT infrastructure and services so they can focus on critical issues without a flood of telemetry data to sift through. It allows agencies to make better decisions and do more, creating efficiencies and freeing time to focus on mission-critical activities.

Predict and prevent user experience degradation and service outages. Observability provides unified on-premises, hybrid, and multi-cloud visibility, giving agencies the insights needed to reduce outages, improve recovery time, and ensure service levels are where they should be across complex and distributed infrastructures. IT teams can quickly pinpoint component changes degrading service performance and more accurately predict and plan resource capacity to prevent issues and unplanned downtime.

Identify and resolve anomalies, issues, and incidents. Observability provides actionable intelligence about complex environments by visualizing data in an easy-to-understand format. With observability, agencies can identify and diagnose compliance issues and potential security threats, streamline data sets through aggregation, and bring cross-team collaboration to a single source of truth (SSOT).

Reduce compliance, threat, and data breach risks. Observability offers a comprehensive and cost-effective solution to consolidate toolsets, break down information silos, and reduce remediation time across multi-cloud, on-premises, and hybrid environments.

Offer deep service-level actionable insight to determine which components can best scale performance and capacity. Observability helps federal IT pros understand interdependencies in network infrastructure and apps with full-stack data correlation. It also allows agencies oversight into on-premises and cloud costs in one solution to help simplify cloud migration efforts. Finally, network bandwidth analysis and performance monitoring give insights into where there may be opportunities to scale performance and capacity.

 

In short, observability solutions can help fortify the mission-critical services relied on by federal agencies. For more information on how observability can help your agency achieve optimum IT service performance, compliance, and resilience, visit https://www.solarwinds.com/solutions/hybrid-cloud-observability.

Securing Containerized Applications in Government Agencies

Government organizations, like their private-sector counterparts, are adopting containerized environments at a rapid pace. Across industries, 50% of organizations using the cloud will deploy containers by 2022, says Forrester, and agencies from the U.S. Department of Defense to the National Institutes of Health and the U.S. Department of Agriculture have already embraced containers.

There are good reasons for this shift in application development and operations. For development, containers offer advantages over “waterfall” approaches. Waterfall methodologies organize development projects in distinct linear phases. Containers support agile and DevOps processes, which emphasize automation and collaboration to build applications more iteratively and rapidly.

For operations, containers let you quickly spin up resources to scale compute power to meet new demand. And because containerized applications are built on microservices, if you encounter an issue with an application component, you don’t need to shut down the entire application to resolve it. Instead, you can fix the component while the rest of the application remains functional.

But while containers simplify some aspects of IT, they can complicate others. In particular, containers introduce new cybersecurity challenges. Understanding the unique cyber-risks of containers, along with the tools and strategies for mitigating them, can help you take advantage of the benefits of containers while also keeping them secure.

Containers Are Just One Piece of the Cyber Puzzle

Containers present old and new cyber issues. For starters, container images can contain vulnerabilities. More problematically, cybercriminals can design a malicious image to look like a legitimate one. They can then upload the image to a public registry such as Docker Hub to trick admins into deploying the malicious version.
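One common mitigation, sketched below, is to pin deployments to known-good image digests rather than mutable tags, so a look-alike image can’t slip through. This is an illustrative sketch only: it assumes the Docker CLI is on the PATH, and the allowlist entries and image name are placeholders rather than real digests.

```python
# Illustrative digest check: refuse to deploy an image whose digest is
# not on a locally maintained allowlist. Requires the Docker CLI; the
# allowlist contents below are placeholders, not real digests.
import json
import subprocess
import sys

ALLOWED_DIGESTS = {
    "nginx": "sha256:0123abcd...",  # placeholder published by your registry team
}

def image_digest(image: str) -> str:
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{json .RepoDigests}}", image],
        capture_output=True, text=True, check=True,
    ).stdout
    # RepoDigests entries look like "nginx@sha256:..."; keep the digest part.
    return json.loads(out)[0].split("@", 1)[1]

if __name__ == "__main__":
    image = sys.argv[1]
    digest = image_digest(image)
    if ALLOWED_DIGESTS.get(image) != digest:
        sys.exit(f"refusing to deploy {image}: digest {digest} is not allowlisted")
    print(f"{image} digest verified")
```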

Microservices also introduce cyber-risk, because the more microservices you use, the more components communicate with one another. Your agency might run microservices across both on-premises and multi-cloud environments—placing compute in Microsoft® Azure, say, and storage in AWS—and those pieces need to be tied together in a secure fashion. And if you fix a problem component and redeploy it, the redeployment needs to be based on a secure, up-to-date snapshot.

With containers, infrastructure monitoring becomes more challenging. Containers call for specialized monitoring tools that provide insights into more than the containers themselves. You also need to monitor the rest of your system and network components in the context of those containers. For example, if an application stops working, you need a way to identify the source of the problem quickly and easily, whether it’s an application component, the container, or the server or network.

You can address some of these issues with tried-and-true approaches such as vulnerability scans. A security information management (SIM) system can also collect relevant data, such as log files, in a central repository for analysis.

You’ll need additional security for your containers if your agency is implementing a zero-trust framework, and technology providers are beginning to respond. Container platforms like Docker and Kubernetes offer greater visibility, further enhancing security. And third-party providers use security analysis tools to proactively look for vulnerabilities in containers as they’re being deployed.

Technologies around service mesh, which control how application components share data with one another, are also gaining maturity. Software-defined wide area networks (SD-WANs) enable encrypted communications across environments. They let you specify, for example, that certain containers can talk only to certain other containers, or that communication can be one-way but not two-way.

An infrastructure monitoring and management platform can help you administer and secure your containers. A single pane of glass for managing both on-prem and multi-cloud environments can simplify the security complexity inherent to containers. An effective platform should enable you to:

  • Track details such as hosts, host clusters, environment dependencies, and deployments
  • Review metrics for containers, hosts, and other infrastructure elements
  • Analyze container activity in an application-stack management tool
  • Organize containers in a mapping tool for managing the physical and logical relationships among infrastructure entities
  • Display detailed data about individual containers on a single screen

Technology is Just One Piece of the Cyber Solution

But technology is only part of the solution—people are the rest.

IT functions are often structured with teams for DevOps, which handles application development and operations, and SecOps, which handles cybersecurity and operations. Part of the goal behind DevSecOps (development, security, and operations) is to bring together the brainpower of both teams. Over time, your agency should develop a technical talent pool with diverse expertise and experience to help cover all your cybersecurity bases.

In March 2021, FedRAMP released vulnerability-scanning requirements for containers. A key intention of the requirements is to promote knowledge of best practices for the safe use of clouds. Training in best security practices around containers is as essential for your developers and software engineers as it is for your security pros.

Also, look to your IT providers for their input and expertise. They should be happy to share their knowledge and experience to ensure you get the most from their cloud- and container-focused technologies and know how to implement them securely.

Your organization should meet the FedRAMP requirements for container security, but keep in mind the guidance doesn’t cover every detail necessary to ensure strong security for your containerized environments. After all, cyber vulnerabilities, cyber threats, and your unique cyber risks will constantly evolve. You need continuous monitoring, ongoing analysis, and continuing education for your IT team. And it’s just as important to document processes for extending a cyber-safe culture throughout your organization as you deploy more containers.

 

Visit our website for information on the SolarWinds® Server & Application Monitor solution and how it can help you monitor your containerized applications.

Why Artificial Intelligence Is Key to a High-Performing, Data-Centric Joint All-Domain Command and Control (JADC2) Strategy

Few entities collect as much data as the Department of Defense (DoD). From the Air Force to the Navy, each branch of the military draws on sensor and other intelligence data to gain actionable information in near real time.

Yet even as these branches collaborate strategically and tactically in training and theater, a lack of system and data interoperability limits intelligence-sharing efforts such as those led by the Joint All-Domain Command and Control (JADC2) program.

JADC2 is the DoD’s concept to connect sensors from all the military services—Air Force, Army, Marine Corps, Navy, and Space Force—into a single network based on a zero-trust architecture, machine learning (ML), and artificial intelligence (AI). Envisioned as a single cloud-like environment for the Joint Forces to share intelligence, surveillance, and reconnaissance data for faster decision-making, the effort has been anything but smooth sailing.

Speaking in late 2021, Army Col. Corey L. Brumsey, a member of the JADC2 cross-functional team, cited interoperability—or a lack thereof—as the biggest challenge to the team.

It’s not surprising. Given the stove-piped and complex nature of DoD systems, getting systems to work together to facilitate data transmission quickly and seamlessly is no small feat. To tackle the challenge, the team is launching a new effort—dubbed the “Interoperability 2.0 Challenge”—that may also incorporate the Five Eyes allies, namely the U.K., Canada, Australia, and New Zealand.

But even as these partners find ways to fix JADC2’s system interoperability issues and larger allied coordination efforts, challenges remain. For warfighters to detect potential threats and improve decision-making, the Joint Forces staff’s cross-functional team must also ensure rapid data processing and analysis and high-performance connectivity across JADC2.

AI and ML technologies will power this effort in two critical ways:

  1. Cutting Through the Noise With AI and ML

The JADC2 strategy is designed to create a combined command and control system for all DoD (and potentially Five Eyes) data sources. But with more devices connected to the system, the amount of data requiring processing and analysis is unfathomable. Indeed, cutting through the noise to understand and clarify critical threats in cross-domain warfare—in the shortest possible time frame—is beyond the scope of any group of humans. Hence the JADC2 push to adopt AI-powered systems.

Though some analysts raise questions about whether it’s appropriate to reduce the amount of human involvement in military-related decisions, others see AI as “absolutely essential” to JADC2. For example, Brig. Gen. John Olson of the Space Force told a panel at the 2021 American Institute of Aeronautics and Astronautics ASCEND space conference that for JADC2 to work, AI and ML are enablers that “…make us able to react, and respond, and…make sense of the information, then act upon it.” Others comment that AI and ML software is a must “to help prune the data” for users.

Though humans will still have the final say, AI and ML can automate and normalize multi-domain data from disparate sources and branches of the DoD, stitch those data points together, and ask complex questions of the data—all in near real time. JADC2 also intends to use AI and ML to identify targets and recommend the optimal weapon (both traditional and cyber) to engage the target.

With these insights, analysts and operators gain a more complete and near real-time common operating picture. At the same time, commanders get the actionable intelligence they need for more informed decision-making (both tactical and strategic) about simultaneous and sequential operations across all domains.

  2. AIOps—Ensuring Observability of Data in Transit

Data is mission-critical, and JADC2 can’t lose access to it. But as information moves across a single discrete system combining domain networks, the cloud, and legacy systems—each architected through a different lens—network and application performance issues are inevitable.

To overcome this problem, JADC2 network professionals will require continuous visibility to properly understand data movement; observe the connections between devices, applications, and services; and automatically identify bottlenecks before data availability is impacted.

AI and ML are key enablers of this. Using AIOps, which deploys AI and ML to digest and analyze large volumes of data from across the IT environment, network administrators can automate critical network monitoring and management tasks, a necessity in hybrid infrastructures and environments built to interlace data sources.

Furthermore, with AIOps-powered observability, teams can predict the unpredictable. They can anticipate network issues or security threats before they arise, detect anomalies, gain the context they need to remediate, and act ahead of performance impacts. And because AIOps relies on ML, it will continuously improve over time, learning about the JADC2 environment, providing more insight into the probable root cause of issues, and even triggering mitigation workflows so network teams can focus on continuous network optimization.
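As a toy illustration of the baseline-and-deviation logic AIOps platforms automate at far greater scale, the sketch below flags metric samples that fall well outside recent behavior. The window size, threshold, and latency values are arbitrary assumptions.

```python
# Toy anomaly detector: learn a rolling baseline for a metric and flag
# samples that deviate sharply from it. Window and threshold are
# arbitrary choices for illustration.
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if value deviates sharply from the recent baseline."""
        anomalous = False
        if len(self.samples) >= 2:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

detector = AnomalyDetector()
for latency_ms in [12, 14, 13, 12, 15, 13, 250]:  # sudden spike at the end
    if detector.observe(latency_ms):
        print(f"anomaly detected: {latency_ms} ms")
```

Production AIOps systems replace this simple statistical test with trained models and correlate across thousands of metrics, but the workflow is the same: learn normal, flag deviation, trigger remediation.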

JADC2 Is Just the Beginning of AI-Powered Multi-Domain Operations

JADC2 is only the beginning of the DoD’s mission to provide secure information sharing across multiple domains. The U.S. Central Command (CENTCOM) is also working to establish a data-centric architecture to share information with more than 50 nations, leveraging AI and ML. With both efforts evolving from planning and exercise to reality, these technologies will loom large as the Joint Forces staff look to turn disparate data sets into highly available and actionable intelligence.

 

Visit our website for more information on enhancing connectivity and interoperability across systems.

Key Factors for Enhanced Database Performance

Data is the most valuable resource a federal agency can own. It’s currently being generated at an astonishing rate of 1.145 trillion MB per day, and market intelligence firm IDC predicts we’ll generate as much as 463 exabytes of data each day by 2025. This massive rise in data collection introduces new challenges in how agencies design, manage, and monitor databases to ensure they run at peak performance and can meet the mission-critical needs of end users.

Simultaneously, the technology landscape is also getting more complex. Technology professionals must navigate distributed workloads in a hybrid IT environment, manage database migrations as workloads move to the cloud, and understand database specialization spanning relational and NoSQL platforms—all on top of their day-to-day responsibilities.

Though the stakes and complexity of the IT professional’s role are increasing, the solution doesn’t have to be difficult or complex. Agencies can realize a dramatic difference through enhanced database management—a concept encompassing both database performance monitoring and DataOps.

In fact, there are several things agencies can do to enhance database management, performance, and speed, even in light of a dramatically increasing data load. One of the most important aspects is database performance tuning.

Optimizing Your Database Performance

How can you do this? There are several database performance tuning opportunities that can have a dramatically positive impact.

Let’s start with response time analysis—the database optimization piece of the equation. Response time analysis helps database administrators (DBAs) identify and measure an end-to-end process, starting with a query request from the end user, ending with a query response, and including the time spent at each discrete step in between. This helps identify bottlenecks, pinpoint root causes, and prioritize actions based on the impact poor database performance has on end users.

Response time analysis is a pragmatic approach to tuning and optimizing database performance, allowing the database team to more easily identify issues and deliver measurable results.
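As a self-contained illustration of the idea, the sketch below times each discrete step of a query round trip so slow spots can be attributed rather than guessed. SQLite and the schema are stand-ins for whatever your database stack actually uses.

```python
# Illustrative response time analysis: time each step of a query round
# trip (connect/load, execute, fetch) and report where time is spent.
# SQLite and the schema are stand-ins for a real environment.
import sqlite3
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def step(name: str):
    start = time.perf_counter()
    yield
    timings[name] = (time.perf_counter() - start) * 1000  # milliseconds

with step("connect and load"):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [(f"user{i}",) for i in range(10_000)])

with step("execute"):
    cursor = conn.execute("SELECT * FROM users WHERE name LIKE 'user99%'")

with step("fetch"):
    rows = cursor.fetchall()

for name, ms in timings.items():
    print(f"{name:>16}: {ms:7.2f} ms")
```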

Once the team has committed to implementing response time analysis, next up is ensuring indexes implement a logical data structure to make the data retrieval process more efficient—a key component of database performance monitoring. An indexing strategy is one of the toughest problems a DBA can face. The tendency is to index an object instead of indexing for how it’s going to be queried. This often leads to too many indexes on the underlying objects, which can cause performance regressions. A database performance management solution can help you identify missing and duplicate indexes faster so you can improve database performance.
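The sketch below illustrates the principle of indexing for how data is queried: SQLite’s EXPLAIN QUERY PLAN output shows whether a query scans the whole table or uses an index, before and after adding an index shaped around the query’s predicate and sort order. The table and index names are hypothetical.

```python
# Illustrative check that an index matches how the data is queried:
# EXPLAIN QUERY PLAN reveals whether a query scans or uses an index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, placed_at TEXT)"
)

def plan(sql: str) -> str:
    # Each plan row's last column is a human-readable detail string.
    return " | ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 42 ORDER BY placed_at"
print("before:", plan(query))  # full table scan

# Index shaped around the query's predicate and sort order,
# not just around the table as an object.
conn.execute("CREATE INDEX idx_orders_customer_date ON orders (customer_id, placed_at)")
print("after: ", plan(query))  # search using idx_orders_customer_date
```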

Other tasks the IT team can perform to help enhance database performance include the following:

  • Reallocate the computing system’s memory reserves. When there’s not enough memory available, databases are often hit hardest.
  • Ensure the team is using the latest version of MySQL or Oracle. Sometimes, keeping the database up-to-date is all it takes to improve database performance.
  • Avoid common SQL pitfalls like coding loops and correlated SQL sub-queries (see the sketch after this list).
  • Defragment data, which might help speed up the database, and make sure there’s enough disk space.
  • When preparing for cloud migrations, understand how your agency’s data environment is visually structured. A database solution can help the team compare, synchronize, script, and navigate data and schemas to drive efficiency and productivity.
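As promised in the list above, here is a small, self-contained illustration of the correlated sub-query pitfall. Both queries return each customer’s most recent order date, but the first re-executes its inner SELECT for every customer row, while the rewrite aggregates in a single pass. The tables and data are hypothetical.

```python
# Correlated sub-query vs. a join-plus-aggregate rewrite.
# Both return each customer's latest order date; the rewrite avoids
# re-running the inner SELECT once per customer row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, placed_at TEXT);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders (customer_id, placed_at) VALUES
        (1, '2022-01-05'), (1, '2022-03-09'), (2, '2022-02-11');
""")

correlated = """
    SELECT c.name,
           (SELECT MAX(o.placed_at) FROM orders o WHERE o.customer_id = c.id)
    FROM customers c
"""
rewritten = """
    SELECT c.name, MAX(o.placed_at)
    FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
"""

assert sorted(conn.execute(correlated)) == sorted(conn.execute(rewritten))
print(sorted(conn.execute(rewritten)))
```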

Database performance tuning helps re-optimize a database system from top to bottom—and from software to hardware—to improve overall performance. The process can involve reconfiguring operating systems according to how they’re best used, deploying clusters, and working toward optimal database performance to support system function and end-user experience. No matter how many of these tasks your team implements, every little bit will help as our data demands continue to increase exponentially.

 

Try a free trial of the SolarWinds® Database Performance Analyzer to learn more about optimizing your database performance.

IT Modernization for Campus Re-entry

Many colleges and universities are poised to emerge from the pandemic stronger than they went in. In large part, they have used the last year to accelerate their adoption of online education where it makes sense, keeping physical classroom time dedicated to experiential forms of learning. A theme among these institutions is the need to understand what the IT infrastructure can support and how well it’s holding up as institutional demands ebb and flow. A Campus Technology “pulse survey” among IT leaders and professionals found that while the impact of remote learning and work made their jobs harder rather than easier (by 11 percentage points), the outcomes have been worth the effort. Four times as many participants agreed as disagreed that their organization’s response to the pandemic was improving the way they deliver services to students, faculty and staff. Learn how your institution can continue to adapt IT infrastructure in Carahsoft’s Innovation in Education report.

 

Mastering the Art and Design of Remote Work

“On a traditional physical computing device like a workstation, PC or laptop, a GPU typically performs all the capture, encode and rendering to power complex tasks, such as 3D apps and video. NVIDIA virtual GPU technology virtualizes GPUs installed in the data center to be shared across multiple virtual machines or users. The rendering and encoding are done on the virtual machines’ host server rather than on a physical endpoint device. The basic idea is to share the GPU functionality with multiple users and give them the same experience as they’d have if they were running applications on dedicated workstations. The advantage is this: Instead of having a one-to-one connection — one GPU per computer — you get one-to-many. The physical GPU runs in a server and the vGPU software dynamically slices it up to allow multiple users to access its power (up to as many as 64 users per GPU).”

Read more insights from NVIDIA’s Senior Product Specialist, Ismet Nesicolaci.

 

Easier Identity and Access Management

“Single sign-on (SSO) has long been a boon for making the authentication process more efficient. Yet, because of their distributed structures, most institutions haven’t gone all the way with SSO. It may be that program control for the identity and access management (IAM) layer is maintained for some applications by central IT and for others by a given college or department. IT may lack the staff to keep up with the programming requirements and/or the sudden influx of new demand. Or the college or university may be working with other institutions, each operating autonomously even as they need to share people, programs and research data. Then there are the security aspects. While SSO makes for a centralized approach to application access, that access also poses a big risk: If a cybercriminal gets unauthorized access through the SSO, they will be able to access all of the associated applications. Embedding multi-factor authentication (MFA) into the login process adds a needed level of protection to authentication processes to keep accounts truly secure. But students are still stuck with multiple logins, and institutions have to try to keep up with a sprawling and complicated IAM system.”

Read more insights from Okta’s Senior CIAM Developer Specialist, Ryan Schaller.

 

Evolving with IT to Support Research

“While institutions have expressed continuing concern about wobbling tuition and ancillary dollars, one source of revenue remains healthy for higher education: COVID-19 research funded by federal and state programs. Institutions of every type, from community colleges to Research 1s, are at the forefront of projects to develop vaccines; uncover the sources of coronavirus and its evolving replication patterns; create new initiatives for public health response; understand the impact of the virus on various populations; study the physical and mental health and learning effects of prolonged quarantine; and explore numerous other facets. However, the heightened attention on campus research comes with a continuing challenge: how to keep up with IT infrastructure needs, typically assembled once the grant funding arrives. Since many of these recent grants are short-term, turnaround time can be tight. In many cases, research teams are going from near-zero infrastructure to running as quickly as possible — and not just serving applications to users, but storing, processing and sharing astronomical amounts of data.”

Read more insights from Red Hat’s Chief Architect for Higher Education for the North America Public Sector, Damien Eversmann.

 

Your Starting Point for IT Optimization

“The university IT shop doesn’t typically head to Best Buy when it’s time to update infrastructure. Acquisitions have to go through internal planning and approval, budgeting and ordering — and it all takes time. Having visibility into usage trends enables the IT department to better plan, thereby preventing gaps in performance and operations and opening up ample time to line up the funding needed. Best-of-breed monitoring takes that a step further, pulling in information from outside sources, so the IT crew doesn’t have to wonder. SolarWinds Network Configuration Manager, for example, links up with the relevant hardware and software to notify you when a vendor has put an end-of-support notice out. If Cisco has issued an end-of-life message for a given switch, it serves as an early indicator for you to help plan timing of replacement.”

Read more insights from SolarWinds’ Vice President of Product Strategy for Security, Compliance and Tools, Brandon Shopp.

 

Building the Virtualized Student Union

“The IT organization has been at the heart of successful pivoting as remote teaching and learning have dominated. As a result, now that campuses are starting to return to normalcy, administration will rely on IT to continue enabling the work of enhancing the student experience. That’s especially true if, as many experts predict, hybrid or blended learning will forevermore be part of the modernized college experience. Integration is a big part of the solution. Forget about forcing students to figure out the dozens of different apps and websites they need to fully partake of college. IT needs to integrate the learning management platform, digital content, student support services, health and wellness, esports, collaboration, campus calendar and student information — enfolding them into a virtual student union. This idea goes beyond the student portal, which has been around for a long time. What’s new is the idea of marrying systems that may be PC-based, on-premise-based and cloud-based into a single hub and then wrapping that in a blanket of security that’s transparent to the user. That becomes a game-changer for the student experience.”

Read more insights from VMware’s SLED Strategist, Herb Thompson; VP of State, Local, and Education, Doug Harvey; and Senior National Director for SLED Business Development, John Punzak.

 

Accelerating Student Success with AI

“As growth in undergraduate credential earning has come to a standstill over the last year, colleges and universities are seeking new ways to draw in the right candidates while also holding onto the students they have by bolstering student success efforts. Numerous institutions of higher education are finding success in strategic aspects of the academic lifecycle by embedding the use of artificial intelligence and machine learning. There are several areas where Google sees the potential for “quick wins” in student success initiatives: optimized enrollment and admission, such as automating the activities of credit transfer analysis, document analysis and personalized course planning; virtual assistance, for delivering 24/7 online tutoring and support in multiple languages answering common questions about required courses, financial aid and other topical subjects; and student engagement, like tracking engagement and predicting which students are at risk, to maximize retention.”

Read more insights from Google Cloud’s Cloud Strategic Business Executive for Higher Education and Research, Jesus Trujillo Gomez.

 

A Conversation with Jen Leasure

“As everything went online and was done with technology, institutions needed to invest in new solutions to support their researchers, their faculty, their students, their administration, in conducting their business — and with limited budgets. We know that everyone’s been having particular budget constraints, and they’re looking to maximize the benefits of these types of programs and their discounts. This type of program has been especially important during COVID. And remote and hybrid learning isn’t going away, as we know. It’s difficult to foresee a world where hybrid becomes an option instead of a requirement. Folks don’t like options taken away once they’re there. And so, the investment in these types of solutions is going to continue to support future directions. Cloud access especially has become important for institutions to support their students. That’s one area where we have seen a lot of growth in the last year.”

Read more insights from The Quilt’s President and CEO, Jen Leasure.

 

Download the full Innovation in Education report for more insights from these thought leaders and additional industry research from Campus Technology.

Gaining Insight: Data Use for Campus Success

A Campus Technology survey among readers found that while almost every college and university considered the use of data critical to institutional survival (84%), a minority of respondents believe their schools are very mature in applying data for practical uses. For example, while half of colleges (50%) have identified indicators of student success and use them regularly for decision-making, less than a third report that users can quickly and easily get the information they need (28%); have robust, secure or user-friendly tools for supporting data collection (29%); or have data experts available to guide users through their data needs (28%). In spite of the decades-long emphasis on adopting data to make better decisions, few institutions have exhibited progress towards their goals. What schools need is to have a better grasp of user experiences, which takes many forms. The practice of “data diving” on campus can have a lot of amazing outcomes. More students will show up and stick around; users’ experiences will be memorable in positive ways; employees will feel more job satisfaction, giving them pause when other opportunities arise; and innovation won’t be rushed by external forces (a.k.a. COVID-19) but introduced regularly as the normal order of operations, in response to what data is telling you. Learn how your institution can address these issues in Carahsoft’s Innovation in Education report.

The Absolutely Essential Higher Ed Superpower

“Never has education been more reliant on technology and the IT organization. As a result, colleges and universities are much more at risk from cybersecurity vulnerabilities today than ever before. At the same time, as technology dependence has grown, staffing and budget haven’t, which means IT solutions for educational institutions truly need to do more with the same or, in some cases, less than they’ve historically had. The pressure is immense. If a student can’t access an application or a resource, if a faculty member can’t get onto web conferencing, if a staffer can’t send e-mail, the institution will fail in its missions: educating students, making research discoveries, and doing everything in its power to secure the future of the world. With so much at stake, the one superpower IT teams in the education sector need to develop above all others is X-ray vision. Gaining visibility into what goes on inside your systems lets you become proactive, allowing you to see exactly where to target your time and attention and quickly troubleshoot problems for speedier response. Unless you were born on Krypton, the best way to achieve this level of visibility is to capitalize on tools that deliver the same capabilities.”

Read more insights from SolarWinds’ Group Vice President of Product Management, Brandon Shopp.

Why the Student Experience Matters (and What You Can Do About It)

“If a project is served by a point product, a program needs a platform. And I consider a platform to manage the student experience to be as vital to the higher ed technology stack as the SIS, the LMS, and the CRM. This is the missing link that will drive the metrics you care most about. Getting rigorous at a student-specific level about the experiences each is having is the only way to take actions to make them better — at both a campus level and at an individual level. While the three other systems provide some insight into the student experience via the operational data they generate, they mostly offer lagging indicators. They can tell you that someone hasn’t been in class for three consecutive sessions, isn’t completing assignments, or is in danger of being put on academic probation. But they won’t tell you how the student is feeling. If they’re actually engaged in teaching and learning. If your school doesn’t understand why a student is acting a certain way, it’s not addressing the root problem.”

Read more insights from Qualtrics’ Global Industry Leader in Education, Omar Garriott.

It’s Time to Re-imagine Your Front Door

“Gen Z and Gen Alpha users go online expecting constantly updated content. This doesn’t mean you need to cater to a TikTok or Instagram audience; but it’s worth asking, what if this campus website were TikTok? How would that change the way you think about site and content design? Maybe you employ more video because it can be super engaging. Maybe you share more of the experience on campus — classrooms, mock lectures, the humor found in everyday activities and interactions among students. Universities could consider emphasizing all of the different programs and services on offer. Take a page from Netflix, Hulu, Amazon Prime and others that surface multiple suggested offerings they have for you to watch that match to your interest. What would the school’s website be like if you used Netflix as the model? How would you organize the content and would it be on the homepage or a couple of layers down? How would you steer people through?”

Read more Acquia insights from Mediacurrent’s Creative Strategy Director for Product, Elliott Mower.

Seeking a Modern Search Experience

“Why shouldn’t the same search power let the IT organization gain visibility into the operations of its infrastructure? The idea is to observe the entire ecosystem by peering into logs, metrics, traces and more. That would enable the IT staffer to identify what’s running well or poorly, whether server or workstation, application or website. When something goes awry, he or she would be better positioned to resolve issues more quickly and proactively, thereby ensuring better digital experiences for users. Security information and event management (SIEM) has become a valued tool in security operation centers. The idea is to gain insights into the security state of the institution by monitoring data traffic, identifying anomalies and alerting IT for corrective action. What’s needed is a search technology outfitted with machine learning-driven detection rules for threat hunting and security analytics that are aligned to standards, such as a MITRE ATT&CK framework. Then IT can look specifically at what’s happening from a security perspective: Is it a lateral movement? Is it data exfiltration? Is it related to command and control? The faster the visibility, the faster the remediation.”

Read more insights from Elastic’s Senior Lead Solutions Architect, Jared Pane.

When Live Virtual Learning Really Works

“As you assess the caliber of the virtual learning tools your instructors are armed with, make sure they provide the functionality that facilitates a more memorable learning experience. That’s how you can play a role in helping students get and retain more from their courses.

For example, make sure there’s a level of content consistency across sections being taught by different people. You do that by using a platform where the entire presentation with all interactive tools (slides, video, audio, chat threads and exercises) can be stored in a shared system with assigned editing privileges. Also, give your instructors “backstage” controls that will help them monitor the presentation as it unfolds, so that they can understand what the students are viewing. Choose a platform that includes an engagement dashboard, to allow instructors to shift session operations in real time if engagement begins to lag. Essential tools would also include a speaker notes area and a chat, specifically to permit behind-the-scenes collaboration among presenters and moderators. Of course, integration with existing learning management systems and authoring programs is essential. So is security compliance that ensures the data generated before and during class remains private and encrypted and the sessions themselves can’t be breached by unauthorized people.”

Read more insights from Adobe Connect’s Senior Manager of Product Marketing, Vaishali Sangtani.

The Long Wait: Why It’s Time for Higher Ed to Embrace Automation

“What’s unique about education from any other kind of organization is that the typical institution has central IT, of course, but also instructional IT and research IT. If I’m in central IT and I’m standing up an HR application, my goal is to put that system in place with the expectation that it will run forevermore. IT’s job is to keep it running. But in instructional IT, I may be standing up a classroom environment that is only going to last a semester or a lab that’s going to last a couple of weeks. Then I need to tear it down and stand up a brand new one the next time that class or lab is offered. In research IT, I’ll need to spin up hundreds or thousands of nodes to process data for astronomical photography, chemical analysis or whatever the research problem is. When the processing is done and the results are generated, I stop it and scale it all back down again. There’s a temporary nature to so much of what education encompasses and the many systems it relies on. And that’s where automation can really make a big difference.”

Read more insights from Red Hat’s Chief Architect for Education in the North America Public Sector, Damien Eversmann.

Partnering for Smarter and More Efficient Purchasing

“On the IT front, we’re getting more calls from procurement offices for solutions to support virtual learning in general and specifically, cloud storage and cybersecurity. Air filtration, another category where a pandemic uptick makes sense, isn’t traditional HVAC. These days, facilities operations are investing in more sophisticated “smart” systems that provide remote monitoring and operations, essential for settings where staff are squeezed for time and remote work is just as probable as on-campus work. Finally, there’s furniture. Because of how students will be interacting with one another, institutions are looking for innovative ways to position learners with physical distancing in mind within the classroom and in common areas. They want furniture that can easily be moved and assembled. They also want pieces with accessibility to power, for those environments where there may not be an electrical outlet on the floor or the wall. Vendors are coming up with creative applications for batteries associated with furniture and workstations.”

Read more insights from OMNIA Partners’ Vice President of Education, Alton Campbell.

 

Download the full Innovation in Education report for more insights from these data thought leaders and additional higher education industry research from Campus Tech.

Turning Vision into Reality: How Agencies Can Forever Improve

 

In the past two years, agencies have taken a hard look in the mirror. Often on short deadlines, they had to stand up new IT systems, design innovative customer experiences, collect and manage troves of data, provide tools for a newly remote workforce, and evaluate funding and other resources. Some agencies managed with what they had; others were exceptionally ill-prepared. The immediate challenge was a health care crisis that had overwhelmed much of society. But now that we’ve entered what’s known as the post-peak phase of the pandemic, it’s time for agencies to consider, “What next?” The purpose of this guide is not to help organizations prepare for the next disaster. The purpose is to go beyond that — to explore how agencies can take a broader, more overarching and continuous approach to self-improvement. Download the guide to read more about how to institute continuous modernization to exceed your goals.

 

Digital Transformation Starts with Strategy

“For many people, the first and only interactions they have with a government agency are through its website, and good first impressions can go a long way. It’s not just having an exciting color palette, cool graphics and boxes that flip over when you hover your cursor on them. It’s about building a site, a platform, that appeals to and serves the public and is intuitive, quick and secure. It needs to highlight the work an agency does, the services it offers consumers and the resources it makes available.”

Read more insights from Mobomo’s Chief Executive Officer, Brian Lacey.

 

Videoconferencing: Modernizing How Employees Connect and Collaborate

“At the intersection of all the types of reforms we cover in this guide — people, technology, innovations and budgets — lies one that has reimagined what it means to communicate: videoconferencing. Indeed, when agency offices temporarily closed nearly two years ago, employees who knew little about their laptop cameras suddenly became webinar aficionados. They scheduled video meetings, learned to read body language from the chest up, and got a peek into coworkers’ home lives. And many agencies discovered that video technology not only made remote work a viable long-term option, but it allowed organizations to expand their customer services in a forward-looking, energized way — akin to what the private sector often provides.”

Read more insights from Zoom’s SLG Industry Marketing Manager, Elijo “Leo” Martinez.

 

How to Cross the Analytic Divide and Democratize Data

“In one of America’s largest counties, a public health agency struggled with collecting and interpreting COVID-19 test results quickly and accurately because of data quality issues requiring hours of manual review. Analytic automation made a difference. This technology unified processes across analytics workflows by analyzing data quality and format before notifying relevant parties about potential compliance issues. Ultimately, analytic automation saved the agency five full-time equivalent employees manually reviewing data quality and notifying reporting labs about errors in this information. Reducing the amount of manual labor also accelerated the time needed to map COVID-19’s spread and address related public health challenges.”

Read more insights from Alteryx’s Director of Solutions Marketing, Public Sector, Andy MacIsaac.

 

Are People at the Center of Your Modernization Efforts?

“Agencies have to be mindful of the narrative that people believe about the nature of government work. They must be skilled at cutting through the noise and using language that speaks to the heart of what government does and why that work is critical. ‘Government matters, and we have seen that very dramatically for the past two years,’ Heimbrock said. ‘Not only is government’s ability to respond to crises the difference between people living and dying, but our government institutions are under attack.’ Agencies can’t afford to be stymied by bureaucratic entanglements and dated technologies, which are steep prices of not paying attention to modernization.”

Read more insights from Qualtrics’ Chief Industry Advisor for Government, Sydney Heimbrock, Ph.D.

 

Making a Case for Continuous Improvement

“Home improvement shows are something of a metaphor for government modernization. You can superficially update an old home for quick sale and profit, or you can do more intensive and long-term improvements that require additional time, talent and, of course, money. And as outdated as the home may look, it’s worth remembering it probably was impressive in its day — kind of like the bygone technology that still supports many government agencies. That’s the parallel Brandon Shopp with SolarWinds drew when asked about the need for continuous agency modernization. ‘Technology is evolving constantly,’ he said, ‘and so unless you want to end up with something like a house that looks very dated and old, you need to stay on top of things.’”

Read more insights from SolarWinds’ Group Vice President of Product, Brandon Shopp.

 

USAID Learns New Tricks of Training Trade

“Officials at the U.S. Agency for International Development (USAID) were on a path to harmonizing numerous data-related training when COVID-19 made virtual work a necessity. For USAID, this proved the perfect opportunity to roll out a training curriculum that worked for employees who were working remotely. Before the pandemic, USAID leaned heavily on classroom-based instruction. In exploring options for virtual training, it recognized an opportunity to rethink instructional design, said Julie Warner Packett, a Data Scientist at USAID who helps lead training on data use and governance.”

Read more insights from USAID’s Data Scientist, Julie Warner Packett.

 

A Federal Vision for Enterprisewide IT

“The state of Connecticut has launched a new “Information Technology Optimization Process” to replace the state’s disparate approach to agency IT. The yearlong initiative aims to deliver coordinated, modern solutions for agencies and the public alike — and recognizes that nearly 50% of the state’s IT workforce is older than 55. The new strategy has three overarching goals to improve state operations now and into the future. First, the plan aims to optimize existing technology by completely rethinking the structure of Connecticut’s IT delivery system. Second, the plan will accelerate efforts to deliver more digital government services. Using enterprise technology, officials aim to hide the “seams” between agency operations and user interactions. And third, the state will enhance its cybersecurity protections.”

Read more insights from OPM’s Chief Information Officer, Guy Cavallo.

 

Empowering Frontline Employees to Lead a Culture of Innovation

“Within the Veterans Affairs Department (VA), the Veterans Health Administration’s Innovators Network (iNET) stands out as a leader for several reasons. High on that list is the reality that innovation is just as much a mindset as it is concrete actions, and Allison Amrhein, Director of Operations for iNET, has the kind of growth mindset that’s needed to sustain and expand new ways of working. The program launched in 2015 in response to VA’s annual employee survey, which found that some employees did not feel encouraged to try new things at work. Today, the program operates across 34 VHA sites, but all sites may participate.”

Read more insights from iNET’s Director of Operations, Allison Amrhein.

 

Wayne County Is Making Funding Last

“After Superstorm Sandy in 2012, New York City received Community Development Block Grant funding from the federal government to help rebuild storm-ravaged neighborhoods. Nearly a decade later, many of those projects — and the contracts that support them — are still going strong, said Rachel Laiserin, Chief Financial Officer of the city’s Department of Design and Construction. The key to those projects’ success has been a commitment to including contracting officers, procurement staff, legal teams and finance team members early in the process and maintaining a long-term perspective.”

Read more insights from Wayne County Michigan’s Chief Financial Officer, Hughey Newsome.

 

Download the full GovLoop Guide for more insights from these modernization thought leaders and additional government interviews, historical perspectives and industry research on the future of modernization.

How Automation Is Securing the Largest DoD Closed Network

 

For the Army National Guard (ANG), getting information in near-real time is imperative. Each Army National Guard soldier must be able to securely access data and other IT services wherever their duty takes them. To make this happen at scale is a significant undertaking, so the ANG has built a formidable network—the DoDIN-A(NG)—that connects its user base of 450,000 people spanning 11 time zones. The network, previously known as GuardNet, is now one of the largest closed networks in the world.

Securing and ensuring the uptime of the network, while maintaining compliance, is a massive challenge. But thanks to the power of automation, it’s a challenge IT leaders have met head on. Let’s look at three best practices the Army National Guard is employing to secure, manage, and monitor its unique and dynamic network environment.

Ensuring compliance on a large scale

A key aspect of managing risk in Department of Defense (DoD) environments is compliance with Security Technical Implementation Guides or STIGs. Each STIG contains rules on security hardening and maintenance processes for a myriad of networks and IT systems with which all DoD IT assets must comply. Monitoring network configurations against these compliance policies across the massive DoDIN-A(NG) infrastructure is a painstaking process. This isn’t just a compliance issue. Any configuration changes in the network can lead to security breaches, outages, and slowdowns.

To mitigate this risk and ensure compliance, ANG depends on automation.

Configuration drift is inevitable, but ANG has deployed a monitoring best practice to automatically detect any deviation from a baseline configuration and proactively notify network administrators in near-real time. They can then drill deeper for more information such as who made the configuration change, what changed, and any related performance impact.

Automation also streamlines the process of configuration updates across the entire infrastructure. Instead of pushing updates to one device at a time, administrators can roll out global configuration updates to selected devices in the DoDIN-A(NG) environment—a huge time saver.
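The snippet below is a minimal sketch of the drift-detection practice described above, assuming device configurations are pulled to disk on a schedule. The directory layout and file naming are stand-ins for however your tooling actually stores baseline and running configs; purpose-built network configuration managers do this continuously and at far greater scale.

```python
# Minimal configuration drift check: compare each device's running
# config against its stored baseline and print a diff on deviation.
# Directory layout and naming are illustrative assumptions.
import difflib
import hashlib
from pathlib import Path

BASELINE_DIR = Path("baselines")  # one golden config file per device
RUNNING_DIR = Path("running")     # most recently pulled configs

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

for baseline_file in BASELINE_DIR.glob("*.cfg"):
    device = baseline_file.stem
    baseline = baseline_file.read_text()
    running = (RUNNING_DIR / baseline_file.name).read_text()
    if fingerprint(baseline) != fingerprint(running):
        diff = difflib.unified_diff(
            baseline.splitlines(), running.splitlines(),
            fromfile=f"{device} baseline", tofile=f"{device} running",
            lineterm="",
        )
        print("\n".join(diff))  # in practice, this would notify an administrator
```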

Achieving true continuous monitoring

Continuous network monitoring is an integral part of NIST’s Risk Management Framework for federal information systems and is intended to move security monitoring and auditing away from a point-in-time “one and done” mentality.

Because threat actors are constantly probing networks for vulnerabilities, ANG employs continuous monitoring across the DoDIN-A(NG) network to automatically identify and remediate areas of risk such as policy changes on devices, non-compliant patches, FISMA compliance violations, and more—all in near-real time. If anything strays from the norm, automated alerts ensure no vulnerability goes unchecked.

Because Command Cyber Readiness Inspections (CCRI) and STIG auditors want documented evidence of continued compliance, ANG’s monitoring capabilities also ensure data is collected and stored from across the network, making it easy to generate compliance reports.

Unparalleled global network visibility

Knowing what’s going on with all the network devices on DoDIN-A(NG) involves staying on top of millions of moving parts across geographically dispersed environments. To help network administrators know what’s up, what’s down, and what’s not performing as expected, the ANG has adopted a holistic, single-pane-of-glass view of the entire network—known as OCULUS.

Easily customizable to meet the needs of service owners, OCULUS’ intuitive, consolidated map-based views allow the ANG’s Network and Security Operations Center to visualize network health, identify rogue devices, and troubleshoot performance issues across the entire tech stack.

This unique approach to network monitoring proved an important enabler of the ANG’s shift to a work-from-home policy during the pandemic. Using OCULUS, administrators can display and monitor the performance of ANG’s VPN remote access services across the globe. OCULUS provides automatic visibility down to the customer level, including the names of who’s connected, the length of the connection, data transmitted, and more—while being able to see the health of the domain and troubleshoot possible issues.

The striking visual impact of the system also provides a persuasive display of performance to senior management and aids in advocacy for funding.

Applying lessons learned across the DoD

At the end of the day, saving time, realizing efficiencies, eliminating human error, and simplifying compliance are the goals of any IT leader within the DoD. As unique as the ANG network is, by leveraging these same best practices—notably automation—other defense organizations will be better equipped to manage and secure the complex networks needed to execute their missions, without burdening their finite resources.

 

Visit our website for more information on one of the largest closed networks in the world and how the DoD is using automation to support the security of data and other sensitive information.