Insights from SOF Week 2023

Through sustained collaboration and innovation, the U.S. strengthens the quality of its defense. This year’s SOF Week conference was held May 8-11 in Tampa, Florida. Organized by the Global Special Operations Forces Foundation (GSOF) and the United States Special Operations Command (USSOCOM), the event offered attendees an exhibition hall along with extensive networking and educational programming to discuss advanced physical and digital security measures within defense operations.

The Importance of People

The Marine Forces Special Operations Command is launching a new program called Cognitive Raider. The initiative’s goal is to operate in parallel with the broader Marine Corps, making a difference on the battlefield through a robust workforce. The Cognitive Raider initiative looks for several traits in applicants: individuals must be prepared to secure assets against adversaries and be able to operate not only alone but also as part of a team. Other vital traits are professionalism, dependability and humility about their achievements. The Marine Forces deliberately select candidates who display character and are prepared to learn the special skills that set the organization up for success.

As the military aims to advance alongside the dynamic evolution of technology, it must prepare for significant and unpredictable changes. Agencies may need to repurpose existing technology and investments to gain results in areas previously considered low-priority.

Artificial Intelligence Driving Innovation

In the digital age, and in the U.S. specifically, the economic ecosystem is digitally connected, making cybersecurity vital to every part of daily life. Bad actors can use AI to exploit software before defensive tools are in place; however, there are ways to mitigate these challenges.

AI technology drives efficient capability by improving agency understanding of technology and by accelerating decision-making. While humans can only make a few decisions a minute, AI can make hundreds of thousands of precise calculations and execute accordingly. This makes AI helpful in performing penetration tests to identify security weaknesses for offensive cyber operations. By finding these weaknesses, agencies can get ahead in the cybersecurity battle against threats.

Innovation in U.S. Central Command

Innovation is a vital part of the national defense sphere, and emerging technology can be leveraged to drive agency growth. This means employees must be properly prepared to use new software. To achieve this, agencies need to implement mechanisms and processes that encourage employees to enact change.

Team collaboration can help agencies reach grounded conclusions. Having tech partners is vital, as agencies can swap information on their respective expertise to help each other accomplish their goals and optimize processes. Schuyler Moore, the Chief Technology Officer for U.S. Central Command said she collaborates with other team members “…consistently to scan and ask folks about what processes are working, and what good ideas [they] have that might improve on how we do things.”

To best support timely tech updates and modernization, agencies should begin by shifting their organizational structure to create new pipelines and entities that sustain long-term innovation. In addition, agencies should prioritize projects in line with shifting agency needs. By utilizing recurring exercises and group conversations, organizations can coordinate employee efforts and set expectations on priorities and goals.

Collaboration around new technology drives important innovation for national security. By facilitating the sharing of these ideas, SOF Week has spurred new defense developments and knowledge sharing.

 

To learn more about the topics discussed at SOF Week, view Francis Rose’s full Fed Gov Today episode co-sponsored by Carahsoft.

*The information contained in this blog has been written based off the thought-leadership discussions presented by speakers at SOF Week 2023.*

Palantir Announces Availability of Foundry on Microsoft Azure

Amid global economic uncertainty, access to integrated, protected, and trusted data and analytics is more vital than ever when it comes to creating business value. To further enable transformative outcomes, Palantir is pleased to partner with Microsoft in making Palantir Foundry available on Microsoft Azure, empowering existing and new customers to more effectively apply data and analytics in their operational decision-making.

Through this new collaboration, organizations will be able to quickly deploy Palantir Foundry — our ontology-powered operating system for the modern enterprise — and unlock further value in Azure Data Services with Microsoft’s cloud-scale analytics and AI solutions.

As part of this relationship, our Foundry platform is available on Azure, enabling customers to deploy our software at speed, while benefiting from Azure’s trusted and secure infrastructure, as well as its global commercial footprint.

Availability on the Azure Marketplace will enable seamless purchasing and invoicing, with customers able to use their existing Microsoft Azure Consumption Commitment (MACC) to purchase a Foundry license and infrastructure costs.

Foundry’s single-view ontology can layer on top of Azure Data Services, allowing customers to leverage existing investments for faster time to value by unlocking insights and by predicting and simulating outcomes for more data-driven decision-making.


The platform will also integrate with native Azure Data Services for enterprise data management on Microsoft Azure, such as Azure Data Lake, Azure Synapse Analytics, Microsoft Power BI, Microsoft Dynamics 365, Microsoft Teams, and Microsoft Industry Clouds. This means customers will be able to further build on their existing IT investments in Azure Data Services through Palantir’s software-defined data integration (SDDI) to products like Azure Synapse Analytics, Azure Data Lake Storage, Azure AI and Azure Machine Learning, alongside others.

“We’re pleased to partner with Palantir to bring Foundry to Microsoft Azure. Organizations around the world will be able to make their data more actionable by using Palantir’s platform for data-driven operations and decision making, powered by Azure’s cloud-scale analytics and comprehensive AI services.” — Deb Cupp, President, Microsoft North America

Better Together with Palantir Foundry and Azure Data Services

Our new relationship with Microsoft will also see us go to market together in joint opportunities across industries like energy and renewables, retail and CPG, as well as other cross-industry sustainability and ESG efforts, where Microsoft customers can enhance their existing digital transformation efforts in Azure Data Services:

  • Energy and Renewables: Foundry enables customers to integrate data at speed and scale from remote sensors and Azure IoT Hub, and to apply this data to drive up the efficiency of assets, from offshore oil rigs to onshore wind farms.
  • Retail and CPG: The platform enables organizations to bring near-instant visibility into demand and the ability to adapt their promotions, inventory, and operations in real time.
  • Sustainability and ESG: We’re helping organizations in their net zero transition by creating a common carbon ontology to empower front line decision makers to adjust their work to meet emissions targets.
  • Healthcare and Life Sciences: Foundry is used across the healthcare and life sciences value chain, from drug discovery and development through to manufacturing, marketing, and sales. It integrates with Azure Health Data Services to manage protected health information.

We are also working together to accelerate time to value for customers in these industries and many more by consolidating SAP and other ERPs using Palantir HyperAuto, helping them create a more integrated data landscape. Palantir HyperAuto can help customers accelerate their journey to SAP on Azure and quickly surface insights in just hours.

Partnership in Action

Additional Palantir Foundry capabilities that can be deployed at speed via Azure include those from customers like the connected vehicle company Wejo. Wejo is a proud Palantir partner, optimizing Foundry’s capabilities, and a global leader in Smart Mobility for Good™ cloud and software solutions for connected, electric, and autonomous vehicle data.

Wejo’s data comes from over 92 billion vehicle journeys and consists of more than 19.5 trillion data points, giving businesses and organizations across a variety of industries the power to innovate, drive growth, transform communities, and save lives.

“We want to help reduce the 1.3 million deaths that happen each year on the road and the additional 8 million due to emissions with smart mobility for good products and services. As part of the Foundry platform, we are excited that Palantir customers with Azure will be able to more rapidly drive integrated, protected, and trusted data and analytics from Wejo for smart mobility initiatives and business value.” — Sarah Larner, Executive Vice President of Strategy and Innovation at Wejo

We look forward to working with Microsoft to broaden Foundry’s availability, enabling clients across industries to better leverage their existing investments for improved operational outcomes.

Those interested in learning more about Palantir and Microsoft’s relationship can visit the Palantir website or get started today via the Azure Marketplace.

This post contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, expectations regarding the terms of the partnership and the expected benefits of the software platform and solutions. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond Palantir’s control. These risks and uncertainties include Palantir’s ability to meet the unique needs of its customers; the failure of its platforms and solutions to satisfy its customers or perform as desired; the frequency or severity of any software and implementation errors; its platforms’ reliability; and the ability to modify or terminate the partnership. Additional information regarding these and other risks and uncertainties is included in the filings Palantir makes with the Securities and Exchange Commission from time to time. Except as required by law, Palantir does not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

This post originally appeared on Palantir.com and is re-published with permission.

Download our Resource, “Impact Study: Accelerating Interoperability with Palantir Foundry” to learn more about how Palantir Technologies can support your organization.

Updates from Palantir Edge AI in Space

In April 2022, Palantir launched its Edge AI solution into space onboard Satellogic’s NewSat-27 as part of the SpaceX Transporter-4 mission. We’re excited to provide an update on our on-orbit imagery processing efforts. Between April and July, we performed various hardware and software tests in orbit, and over the past few months we have been receiving some exciting results from our direct tasking and on-orbit processing pipelines onboard NewSat-27.

Where We Stand

As of November 2022, we have successfully demonstrated the capability for customers to task the satellite with multiple captures, resulting in over 100 images from NewSat-27’s multispectral camera.

We had our most recent live image capture and onboard processing test on October 30th over Tartus, Syria. Let’s run through how we handled these images starting from the raw capture in-orbit all the way to results on the ground, utilizing Edge AI in space:

Raw images captured by the satellite consist of a single channel comprising four different ‘bands’ of information, each representing a specific wavelength of light. Palantir Edge AI then orchestrated our onboard imagery preprocessing services to convert batches of raw images into standard, three-channel RGB images. By processing images into a standardized format that our models expect, we can improve accuracy and create more confident results for our users. As part of this specific capture, we received 44 images that we processed into six RGB images.
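As an illustration of what this kind of preprocessing involves, the sketch below de-interleaves a single-channel frame into four bands and normalizes the red, green, and blue bands into a standard 8-bit RGB image. The band layout, ordering, and normalization here are assumptions for illustration; the actual onboard pipeline has not been published.

```python
import numpy as np

# Assumed band layout: the single-channel raw frame interleaves four
# spectral bands row by row. Order and names are illustrative only.
BAND_ORDER = ("blue", "green", "red", "nir")

def split_bands(raw: np.ndarray) -> dict:
    """De-interleave a single-channel raw frame into per-band arrays."""
    return {name: raw[i::4, :] for i, name in enumerate(BAND_ORDER)}

def to_rgb(raw: np.ndarray) -> np.ndarray:
    """Convert a raw multispectral frame into the standard 8-bit RGB
    format the downstream models expect."""
    bands = split_bands(raw.astype(np.float32))
    rgb = np.stack([bands["red"], bands["green"], bands["blue"]], axis=-1)
    lo, hi = rgb.min(), rgb.max()
    # Stretch the pixel values to the full 0-255 range.
    return ((rgb - lo) / max(hi - lo, 1e-6) * 255).astype(np.uint8)
```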


After pre-processing was completed, we then ran AI models onboard the satellite. For this particular capture, Edge AI ran our in-house Palantir Omni model to identify buildings in the images. We received 210 building detections, or ‘inferences’, from the model. For each inference, our post-processing services created PNG thumbnails and computed geodetic coordinates by using the satellite telemetry and the onboard global elevation datasets. The outputs were then bundled and secured using various onboard cryptographic mechanisms, so we could validate the data once it was received on the ground.
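A heavily simplified sketch of that post-processing step might look like the following. The affine geotransform stands in for the real telemetry-derived camera model, and the HMAC stands in for the onboard cryptographic mechanisms; all names and structures are illustrative assumptions, not Palantir’s implementation.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass

@dataclass
class Detection:
    row: int          # pixel row of the detection centroid
    col: int          # pixel column of the detection centroid
    confidence: float

def pixel_to_geodetic(det, geotransform, elevation_lookup):
    """Map a pixel detection to (lat, lon, alt). `geotransform` is an
    affine (lon0, a, b, lat0, c, d) placeholder for the telemetry-based
    camera model; `elevation_lookup` queries the onboard elevation data."""
    lon0, a, b, lat0, c, d = geotransform
    lon = lon0 + a * det.col + b * det.row
    lat = lat0 + c * det.col + d * det.row
    return lat, lon, elevation_lookup(lat, lon)

def seal_bundle(inferences: list, key: bytes) -> dict:
    """Bundle inference records with an HMAC so the ground segment can
    verify integrity once the downlink is received."""
    payload = json.dumps(inferences, sort_keys=True).encode()
    return {"payload": payload.decode(),
            "mac": hmac.new(key, payload, hashlib.sha256).hexdigest()}
```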

In our initial on-orbit tests, we discovered an edge-case bug in our pre-processing algorithm. To remedy the issue, we uplinked a small software patch to the satellite that modified how we converted these individual images into RGB images. Once our patch was uplinked, we were able to update our software onboard to account for this new case within seven minutes. With the upgrade infrastructure in-place, we can continuously refine and augment our in-orbit software and algorithms.

Notably, in this live capture instance, we were able to demonstrate the software’s capacity for customers to process all 44 frames within seven minutes. In our previous post, we discussed how we had strict time constraints for each individual processing run of Edge AI. Even when we accounted for the update, our end-to-end processing time was comfortably within the thresholds that we had initially targeted. For even larger captures, our software features a built-in checkpointing system for resuming processing in the event that we have to halt it.
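Conceptually, such a checkpointing system can be as simple as persisting the index of the next unprocessed frame after each step, so a halted run resumes where it left off rather than starting over. The sketch below illustrates the idea; the file path and format are assumptions, not the actual onboard design.

```python
import json
from pathlib import Path

CHECKPOINT = Path("/var/edge-ai/checkpoint.json")  # hypothetical location

def process_frames(frames, run_model, checkpoint=CHECKPOINT):
    """Process frames in order, recording progress after each frame so an
    interrupted run can resume at the next unprocessed frame."""
    start = 0
    if checkpoint.exists():
        start = json.loads(checkpoint.read_text())["next"]
    for i in range(start, len(frames)):
        run_model(frames[i])
        checkpoint.write_text(json.dumps({"next": i + 1}))
    checkpoint.unlink(missing_ok=True)  # clean slate after a full pass
```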

What’s Next?

While this previous version of our Omni model was geared towards identifying buildings of interest and focused on the onboard integration with the satellite, our next generation of in-house models can identify more specialized object classes, such as ships. These models are already running on the ground as we test their performance. We ran this same capture through one of our newer models and were able to identify various ships near the port of Tartus in Syria with high confidence. We will be sending this new model up to the satellite in our next upgrade cycle. This will allow us to demonstrate Edge AI’s ability to continuously update and manage models while in flight, in order to optimize inference results based on areas of interest.

Figure 1: Ships off the coast of Tartus, Syria. Detections come from Palantir’s new in-house ML models on imagery collected as part of our Tartus capture.

We have also integrated our Edge AI outputs with Palantir MetaConstellation. MetaConstellation provides end-to-end software around satellite imaging, including an operational UI for image analysis. It allows users to annotate imagery with features and easily compare multiple images from different vendors and sensors over a given area of interest.

Our outputs from the AIP Satellite — either the combined image with detections, or just the PNG thumbnails — can be viewed directly within MetaConstellation. This means that in future deployments we would be able to directly downlink from an Edge AI-equipped satellite to a tactical instance of MetaConstellation in the field, allowing detections and imagery to be sent to operational users within minutes.


Figure 2: Palantir MetaConstellation makes imagery analysis readily accessible to users. Here, we compare imagery from our Tartus capture on October 30, 2022 with images that we had previously collected on September 17, 2022.

Our Ongoing Commitment

We are continuing to invest in our on-orbit capabilities and are currently focused on hardware-backed security mechanisms, upgraded model capabilities, and our in-house georegistration algorithm, which should dramatically increase the accuracy of our model inferences. We are also planning to introduce new communication options to facilitate direct downlink for data, which will allow Palantir to get inferences into the hands of our customers faster than ever before.

This post contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, expectations regarding the expected benefits and uses of our software platforms. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond Palantir’s control. These risks and uncertainties include Palantir’s ability to meet the unique needs of its customers; the failure of its platforms and solutions to satisfy its customers or perform as desired; the frequency or severity of any software and implementation errors; its platforms’ reliability; and the ability to modify or terminate the partnership. Additional information regarding these and other risks and uncertainties is included in the filings Palantir makes with the Securities and Exchange Commission from time to time. Except as required by law, Palantir does not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

This post originally appeared on Palantir.com and is re-published with permission.

Download our Resource, “Resilient and Effective Space Capabilities” to learn more about how Palantir Technologies can support your organization.

Enabling Responsible AI in Palantir Foundry

Editor’s Note: The following is a collaboration between authors from Palantir’s Product Development and Privacy & Civil Liberties (PCL) teams. It outlines how our latest model management capabilities incorporate the principles of responsible artificial intelligence so that Palantir Foundry users can effectively solve their most challenging problems.

At Palantir, we’re proud to build mission-critical software for Artificial Intelligence (AI) and Machine Learning (ML). Foundry — our operating system for the modern organization — provides the infrastructure for users to develop, evaluate, deploy, and maintain AI/ML models to achieve their desired organizational outcomes.

From stabilizing consumer goods supply chains to optimizing airplane manufacturing processes and monitoring public health outbreaks across the globe, Foundry’s interoperable and extensible architecture has enabled data science teams worldwide to readily collaborate with their business and operational teams, enabling all stakeholders to create data-driven impact.


As we discussed in a previous data science blog post, using AI/ML for these important use cases demands software that spans the entire model lifecycle. Foundry’s first-class security and data quality tools enable users to develop AI/ML models, and by establishing a trustworthy data foundation, our software offers the connectivity and dynamic feedback loops that these teams need in order to sustain the effective use of models in practice.

Further to this, developing capabilities that facilitate the responsible use of artificial intelligence is an indispensable part of building industry-leading AI/ML capabilities. Here, we’ll share more about what responsible AI means at Palantir, and how Foundry’s latest model management and ModelOps capabilities enable organizations to address their most challenging problems.

Responsible AI at Palantir

At its core, our AI/ML product strategy centers around developing software that enables responsible AI use in both collaborative and operational settings. We believe that the term has many dimensions and includes considerations around AI safety, reliability, explainability, and governance. We’ve publicly advocated for a focused, problem-driven approach as well as the importance of robust data governance to AI/ML in multiple forums.

We believe that the tenets of responsible AI are not just limited to model development and use but have considerations throughout the entire model lifecycle. For example, developing reliable AI/ML solutions requires tools for the management and curation of high-quality data. These considerations extend beyond model deployment alone and include how end-users interact with their AI outputs and how they can use feedback loops for iteration, monitoring, and long-term maintenance.

Incorporating responsible AI principles in our software is also a core part of our commitment to privacy and civil liberties. Building this kind of software means recognizing that AI is not the solution to every problem and that a model for one problem will not always be a solution to others. A model’s intended use should be clearly and transparently scoped to specific business or operational problems.

Moreover, the challenges of using AI for mission-critical problems span a variety of domains and require expertise from a diverse breadth of disciplines. Building AI solutions should therefore be an interdisciplinary process where engineers, domain-experts, data scientists, compliance teams, and other relevant stakeholders work together to ensure the solution represents the specialized demands and requirements of the intended field of application. The values of responsible AI shape how we build our software, and in turn, they enable our customers to use AI/ML solutions in Foundry for their most critical problems.

Model Management in Foundry

Building on the platform’s robust security and data governance tools, Foundry’s model management capabilities are designed to encourage users to incorporate responsible AI principles throughout a model’s lifecycle. We have recently released product capabilities that improve the testing and evaluation ecosystem through no-code and low-code interfaces. We encourage you to read more about these here.

Problem-first modeling

In Foundry, orienting around the “operational problem” that models are trying to solve is at the heart of this new model management infrastructure. Foundry offers many tools for a data-first and exploratory approach to model experimentation, but for mission-critical use-cases, AI/ML applications need to be scoped to a specific problem. We have deliberately built modeling objectives to focus model development, evaluation, and deployment around well-defined problems.

The Modeling Objectives application enables users to define a problem, develop candidate models as solutions to these challenges, perform large-scale testing and evaluation, deploy models in many modalities to both staging and production applications, and then monitor them to enable faster iteration.

Specifying the modeling problem from the outset enables collaborators to better understand — and test for — the application and context for which the models are intended. This also provides greater insight into inadvertent reuse or repurposing of models. Modeling objectives provide a flexible yet structured framework that presents an opportunity to streamline model development and deployment by collecting key datasets, identifying stakeholders, and creating a testing and evaluation plan before their development begins.

These objectives also transparently communicate the state of a particular AI/ML solution — from model development to testing, to deployment and further post-deployment actions like monitoring and upgrades. This enables users to be more intentional, responsible, and effective in how they use AI to address their organization’s operational challenges.

Deep integrations for security and governance

Data protection, governance, and security are core components of Palantir Foundry and are especially important for AI/ML. AI solutions must be traceable, auditable, and governable in order to be used effectively and responsibly. To facilitate this, Foundry’s model management infrastructure integrates deeply with the platform’s robust capabilities for versioning, branching, lineage, and access control.

Users can submit a model version to an objective and propose that model as a candidate solution for the problem defined in that objective. When submitting a model, users are encouraged to fill out metadata about the submission which becomes part of its permanent record. Project stakeholders and collaborators can use this to better understand the details of each submission and create a system of record that catalogs all future models for a particular modeling problem. With Data Lineage, they can also quickly see the provenance of every model that is submitted to an objective, revealing not only the models themselves, but also their training and testing data and what sources those datasets originally came from.
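To give a feel for what such a submission record might capture, here is a minimal sketch; the field names are hypothetical and do not reflect Foundry’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ModelSubmission:
    """Illustrative metadata for a model proposed against an objective."""
    model_name: str
    version: str
    objective_id: str          # the modeling problem this model targets
    training_datasets: list    # lineage: upstream training data references
    evaluation_datasets: list  # lineage: data used for testing and evaluation
    intended_use: str          # the scoped operational problem
    submitted_by: str
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```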

Foundry’s model management infrastructure natively integrates with the platform’s security primitives for access controls. This enables multiple model developers, evaluators, and other stakeholders to work together on the same modeling problem, while maintaining strict security and governance controls.

Robust testing and evaluation capabilities

Testing and evaluation (T&E) is one of the most critical steps in any model’s lifecycle. During T&E, subject matter experts, data scientists, and other business stakeholders determine whether a model is both effective and efficient for any given modeling problem. For example, models may need to be evaluated quantitatively and qualitatively, assessed for bias and fairness concerns, and checked against organizational requirements before they can be deployed to applications in production environments. That’s why we have released a new suite of capabilities to facilitate more effective and thorough T&E in Foundry.

Foundry now offers evaluation libraries for common AI/ML problems as a part of the Modeling Objectives application. The availability and native integration of these libraries within Foundry’s model management infrastructure enable users to quickly produce well-known, quantitative metrics in a point-and-click fashion for common modeling problems, all without having to dive into any technical implementation.

We’ve also included a framework for users to write their own custom evaluation libraries. Libraries authored in this framework benefit from the same UI-driven workflow and integration with modeling objectives. This extends the power of the integrated evaluation framework to more advanced modeling problems or context-specific use cases.

Building on the evaluation library integrations, we’ve also added the ability to easily evaluate models across subsets of data. This lets users quickly and exhaustively compute metrics to identify areas of model weakness that might otherwise go undetected if only computing aggregate metrics. Evaluating models on subsets can more easily surface bias or fairness concerns that affect only a portion of the model’s expected data distribution. Users can also configure their T&E workflows to run automatically on all candidate models proposed for a problem in order to build a T&E procedure that is both systematic and consistent.
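As a rough code-level illustration of subset evaluation (Foundry’s own workflow is point-and-click rather than code), the sketch below groups predictions by a subset column and computes a per-subset metric so that segments with weak performance stand out. The column names and choice of metric are assumptions.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def evaluate_by_subset(df: pd.DataFrame, subset_col: str) -> pd.DataFrame:
    """Compute accuracy per subset; expects 'y_true' and 'y_pred' columns.
    Aggregate metrics can hide a subset where the model underperforms."""
    rows = [{subset_col: value,
             "n": len(group),
             "accuracy": accuracy_score(group["y_true"], group["y_pred"])}
            for value, group in df.groupby(subset_col)]
    return pd.DataFrame(rows).sort_values("accuracy")
```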

We also recognize that not all T&E procedures are quantitative. Therefore, checks in modeling objectives help keep track of certain pre-release tasks that might need to get done as part of the T&E process before a model can be released.

Looking ahead

Modeling objectives and the T&E suite are just some of the latest capabilities to encourage responsible AI in Foundry, and we continue to invest in new capabilities for effective model management. From the tools that facilitate robust model evaluation across domains, to mechanisms for seamless model release and rollback in production settings, our model management offering will always focus on empowering our customers to use their AI/ML solutions effectively, easily, and responsibly for their organization’s most challenging problems.

This post originally appeared on Palantir.com and is re-published with permission.

Download our Resource, “Palantir Named a Leader in AI/ML Platforms” to learn more about how Palantir Technologies can support your organization.

Supporting the Student Journey Through Digital Transformation

What Does it Mean to ‘Go Digital’?

Digital transformation is a critical topic for higher education institutions globally, helping them become more innovative, agile and resilient in supporting their students. Keys to going digital can be categorized into four areas—pandemic, prediction, personalization and performance. The pandemic proved the need for reliable digital resilience so that schools can quickly pivot to online learning, which requires more flexibility, scalability and agility. Anticipating touch points along the general student journey, from application to graduation and alumni status, allows institutions to better predict unique education tracks and, through data collection, create personalized experiences for students and faculty. With the right tools in place, both students and staff can benefit from automated task management and strong digital performance throughout their higher education experience.

Delivering a Seamless Digital Experience

These capabilities and more are aspects of the education experience students have come to know and expect from their campuses. With the understanding of why digital transformation is important, here are three takeaways institutions can explore to deliver improved experiences and increase the overall quality of student engagement.

  • Adopt Cloud-Based Solutions: The pandemic necessitated change across the entire education system to remote and hybrid learning environments. Moving to the cloud allows organizations to become more scalable and agile, ensuring students can access everything they need to be successful within one engagement system.
  • Utilize Artificial Intelligence Chat and SMS Bots: Whether through a website or mobile app, predictive technology like chatbots can assist students in completing specific touch points of their student journey. By anticipating what students are currently aiming to accomplish, providing helpful information at the click of a button and giving quick and easy direction to what is most relevant for them, an AI chat or SMS function can track and engage each of those touch points so institutions can best support their students daily.
  • Prioritize Student Digital Security: Before students arrive on campus, they often must create an account to submit their college applications. Once they are immersed in the university’s various online learning tools and processes, they typically end up with multiple accounts and numerous different passwords. Implementing security measures such as multi-factor authentication and other two-step methods ensures only the right student is accessing their personal information and data (a minimal two-step verification sketch follows this list).
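To make the two-step idea concrete, here is a minimal sketch of time-based one-time password (TOTP) verification, the mechanism behind most authenticator apps, using only Python’s standard library with RFC 6238 defaults. It is an illustration, not a reference to any particular campus system.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(submitted: str, secret_b32: str) -> bool:
    """Accept the code for the current window only; a production service
    would also check adjacent windows and rate-limit attempts."""
    return hmac.compare_digest(submitted, totp(secret_b32))
```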

Integrating and Examining Data to Enhance Student Engagement

Implementing new strategies and technologies often comes with a significant amount of transition for any campus community, but starting with small integrations and building upon each success can ease the pressures of digital transformation. An institution that understands the capabilities and goals of each of its departments can create more successful implementation plans for new solutions. Change management, like valuable training and guidance for staff, plays an integral role in ensuring efficient progression of solution integrations into those individual departments. In addition, institutions must remain engaged with staff after new changes are incorporated to understand their pain points and strategize opportunities for fine-tuning.

No matter what stage students are at in their journey through higher education, securely and efficiently integrating their data into new technologies across campus empowers institutions to better understand individual learning tracks. Institutions should examine a student’s qualities and data from a holistic point of view to best engage with and support them, instead of attempting to piece each department’s information together for a less comprehensive perspective.

Analyzing student data and activity also motivates institutions to revisit their digital operations and presence to find areas for improvement. It is imperative that websites, learning tools and access points function quickly and reliably to best serve the students utilizing them. For example, an institution may find that lower application rates stem from students abandoning the submission process after an unsuccessful login, an inability to create an account, errors when submitting, long wait times for tech support and similar obstacles. Understanding these barriers enables institutions to promptly address them and streamline the process for any new applicant.

Empowering Higher Education for Success

Increasing student engagement with a multitude of efficiently integrated solutions gives institutions the opportunity to better understand what their students need to be successful through their educational journey. Though there is much more to digital transformation, these key takeaways allow higher education professionals to strategically plan technology and solution implementations to improve their students’ experiences.

 

Together, Genesys and GTS are hosting a series of webinars to educate attendees on the most reliable and efficient solutions for their student experience and engagement challenges. Join these cloud, digital and AI technology experts for part 3 of their webinar series and learn how your organization can support the student journey.

DoDIIS Takeaways: Future DoD and IC Initiatives for AI, ML and the Cloud

This blog series focuses on the Department of Defense (DoD) and Intelligence Community (IC) initiatives for 2023 and beyond. Part One covered future plans regarding IT workforce development and retention, partnerships, interoperability and data management. Part Two continues the discussion of the intertwining initiatives and technologies in AI, ML and cloud computing to provide a more complete picture of the current DoD and IC landscape in connection with their vision for the future.

While data is the lifeblood of digital transformation, artificial intelligence (AI) and machine learning (ML) are what make digesting the information possible. The cloud allows this data to be hyperscaled and makes operations more agile and efficient. All of these elements and technologies work together to propel the DoD and IC to the next level and achieve mission goals.

AI and ML

To properly understand AI and ML’s role in the future of the DoD and IC, some standard definitions must be established. While the private sector mostly utilizes AI for emergency response, healthcare, finance, agriculture and human resources, the military’s most common uses include cyber defense, swarming, vulnerability scanning and data filtration. This creates a stark difference in understanding and terminology. For the purposes of this blog, the terms AI and ML reflect their usage during the live DoDIIS speeches and discussions.

With AI and ML, one of the biggest hurdles for the IC is explainability. Before new data can be incorporated from other sources, existing data must be processed. CTOs and directors from the CIA, DIA, National Media Exploitation Center (NMEC) and Virtualitics explained that if current data holdings are not sorted and understood during the data cataloging process, it will be difficult to utilize AI and interpret the results later. Data governance and data strategy are foundational to this effort. All parties involved also need to understand the ethical implications of AI and have a strong grasp of data analysis and machine learning to harness these technologies’ true power.

Other safeguards must be put in place to properly introduce AI and ML within their intended contexts. AI testing and evaluation (T&E) differs from T&E for other technologies, since AI capabilities should not be fielded and left without monitoring or a way to update a model in the field. Instead, models should continue to be supervised over time by system creators and end users across academia, industry and government to preserve accuracy and high precision. The baseline within the hierarchy of needs is ensuring quality data results, which requires a clear understanding of the algorithmic approaches employed in the models.

Vendor technology that provides clear AI explainability is particularly sought after in the DoD and IC, since it can be used to back tactical life-or-death decisions. One solution the DoD is pursuing to address this challenge is the machine-as-a-teammate (MaaT) capability, which automates data transformation to significantly increase velocity and precision while remaining explainable.

The DoD has begun focusing heavily on ethical AI frameworks, including starter toolkits to assess pipeline or model bias and a Responsible AI (RAI) foundation to ensure responsible, equitable, traceable, reliable and governed use of data. The DoD hopes industry will continue to adopt RAI principles ahead of future requirements and expand on practical ways to attain these best practices. In addition, the DoD established an AI Council to discuss aligning its RAI framework with AI regulations in European countries as it seeks to integrate systems and open the door for efficient data sharing.

Through initial use of AI and ML, the DoD and IC have already discovered several benefits. AI has offered enhanced workflows and reduced burden on analysts, advanced filtering techniques for large data sets, open-source scanning for improved product reports and optimized data rates for information transfer. DoD ML pilots achieved a 100x increase in quality review and a 10x increase in pre-decision error and anomaly detection, among other successes. DoD and IC leaders look to AI as a gateway to better identify vulnerabilities in military systems, improve the identification of targets and locations and increase the accuracy and speed of battle damage assessments. While the technology exists to perform these tasks, the policies and permissions needed to fully implement AI and ML are not yet in place.

Handling the massive quantities of data is a huge undertaking; however, processing the information through AI and ML has proven the worth of the endeavor tenfold and delivered clear mission impact. By focusing on the infrastructure first, the DoD and IC can leverage AI and ML for maximum impact to let machines and humans each do what they do best and then team up to solve the problems in between.

While there are some risks to fully implementing AI, such as data set accuracy, vulnerability to adversarial influence, legal ramifications and expectations around data use, DoD and IC officials confidently endorse the transition toward incorporating more AI. They recommend several key steps, such as creating a common international policy that addresses ethical concerns, technological advancement and dual use; defining AI for policy purposes given the dynamic and changing nature of the technology; and identifying definitions and strategies around non-lethal options, hardening systems and mission enhancement. The DIA’s AI strategy aims to achieve AI readiness in the near term, AI competitiveness in the mid-term and AI dominance in the long term.

The Cloud

According to Dr. Raj G. Iyer, former CIO for Information Technology Reform, Office of the Secretary of the Army, the cloud is an absolute necessity for moving large amounts of data across the globe. The shift of data-centricity within the Army from theory to doctrine has precipitated other essential changes, including the migration to the cloud. Dr. Iyer stated that the new data goals are no longer owned by just “tech folks”, but by every warfighter, which places a new level of priority on technologies like the cloud. The new Army initiative includes achieving a distributed command and control (C2) structure to provide more mobility and less centralization, both in C2 and in the data. This will be attained through the adoption of its Hybrid Cloud of the Future, which hides data “in plain sight” and avoids systems that are uniquely military in nature. When the military leverages a commercial platform, it can process data in a way that prevents adversaries from differentiating sensitive information from other commercial processes.

Across the rest of the DoD and IC, agencies vary in their level of cloud migration. At the NGA, business applications and analytics are already in the cloud; the next step is to move to a hybrid multicloud, with some resources remaining on hardware available at Joint Regional Edge Nodes. The NSA hopes to avoid a lift-and-shift approach and instead be precise with its cloud investments through initiatives such as Hybrid Cloud Compute, Eagle Crossing and a Human Capital Management System. DISA has brought cloud programs together for the DoD under its Host and Compute Center (HACC) through the Joint Warfighting Cloud Capability (JWCC) contract.

For agencies that have not migrated, the DoD and IC recommend preparing for cloud deployment and using the time before the switch to eliminate bad practices that exist on-premises, focusing on relevance, resourcing and complete system readiness. As other technologies and strategies take effect, DoD and IC officials stressed the importance of prioritizing cloud-first and cloud-native approaches, with Zero Trust baked in throughout, regardless of cloud migration stage.

Some challenges DoD and IC officials presented to industry were how to maintain service if an outage occurs in regional data centers from a classified perspective, and how to maintain and optimize the network from a unified communications perspective given its sensitivity to latency. Overall, leaders asked how to preserve reliability and redundancy to overcome potential distrust of the cloud. As the DoD and IC collaborate with industry to innovate and resolve these issues, they continue to unlock new doors of potential. Dr. Iyer stated that the network is no longer just an enabling function; these digital technologies are now changing how the DoD and IC fundamentally view warfighting.

As the DoD and IC seek to accomplish these IT goals and prepare the way for future modernization, industry, academia and other government agencies must come together to solve current challenges, innovate new solutions and support mission initiatives. Government leaders noted the importance of these modernization efforts and that the technologies and strategies developed in the next 5-10 years will be the foundation of operations for the next generation.

 

Check out our Fast Facts and Future Initiatives of the DoD and IC Resource for more information and key insights for the IT industry.

*The information contained in this blog has been written based off the thought-leadership discussions presented by speakers at DoDIIS 2022.*

3 Ways DoD Can Strengthen Network Security and Resilience

In October 2022, CISA (the Cybersecurity and Infrastructure Security Agency) revealed that multiple hackers had compromised a defense industrial base organization, gaining long-term access to the environment and exfiltrating sensitive data. And those threats are increasing: since 2015, the DoD has experienced over 12,000 cyber incidents.

Strong, resilient next-generation networks that protect sensitive data and DoD missions and functions have never been more critical. But with a complex, interconnected information environment, how can federal IT teams strengthen cybersecurity and become proactive instead of reactive? Army leaders have spent much time discussing resilient next-generation networking, but action needs to be taken soon.

To achieve greater network resilience, here are three steps federal IT leaders can take to prepare for an unpredictable future and safeguard DoD networks – and those of its contractors – from malicious cyber activity.

  1. Progress the DoD’s “defend forward” strategy

The DoD’s “defend forward” strategy is nothing new. First outlined in the 2018 DoD Cyber Strategy, the initiative is designed to “disrupt malicious cyber activity at its source.” This refers to any device, network, organization, or adversary nation that poses a threat to U.S. networks and institutions or is actively attacking them.

Notably, the strategy shifts DoD and U.S. Cyber Command’s cybersecurity program from reactive to proactive. Rather than detect and remediate threats as they arise, defend forward actively seeks out threats and eliminates them.

U.S. Cyber Command restated its pledge to “defend forward” in October 2022, but its principles and standards must be extended across the defense industrial base – the networks and systems that contribute to U.S. military advantages.

Government contractors are held accountable for their cybersecurity practices and choices, but for true resilience, DoD security leaders must establish new standards for information sharing with their private sector counterparts.

In addition to standing by DoD’s pledge to share indications and warnings of malicious cyber activity, DoD must continue to move beyond transactional vendor relationships. Toll-free numbers are not enough for federal CISOs – they need a dedicated, trusted point of contact within each defense contractor: someone with whom they can have frequent and honest conversations, conduct deliberate planning, and oversee collaborative training that enables mutually supporting cyber activities.

  2. Embrace AIOps: The next big thing in networking

Powered by artificial intelligence (AI) and machine learning, AIOps is a relatively new approach to network monitoring that boosts resilience by reducing the time it takes to discover issues and detect anomalies, and by giving network engineers the context they need to remediate – before a threat materializes.

AIOps-powered observability works by automating the complex task of collecting and analyzing network data across the vast DoD network infrastructure and turning that data into actionable intelligence. With this insight, teams can proactively address network or cyber issues and even predict certain situations – such as signs of network intrusion. A key advantage of AIOps is that it observes remedial action taken and uses these observations to automatically respond to future problems without the need for IT’s involvement – thereby ensuring a more resilient, autonomous network.
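Stripped to its essentials, the baseline-and-deviation idea behind this kind of anomaly detection fits in a few lines: learn a rolling baseline from recent samples and flag readings that deviate sharply from it. This toy sketch omits the cross-signal correlation and automated remediation that real AIOps platforms layer on top.

```python
import statistics
from collections import deque

class AnomalyDetector:
    """Flag metric readings that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 120, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous against recent history."""
        anomalous = False
        if len(self.samples) >= 30:  # wait for enough history
            mean = statistics.fmean(self.samples)
            stdev = statistics.stdev(self.samples) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.samples.append(value)
        return anomalous
```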

  3. Layer in multipath monitoring

Enterprise networks have traditionally been comprised of multiple hub and spoke topologies with linear routing paths and clearly defined traffic flows. But hybrid IT, hyperconverged infrastructure, and modern networking have created complex multipath network environments – any given packet can take any number of different routes, all of which are changing at any moment.

Unfortunately, these multipath topologies can’t easily be visualized using traditional network monitoring tools. There’s simply not enough time in the day to diagram the network, let alone proactively monitor the application traffic and hardware links that comprise it.

The answer lies in finding a network performance monitoring tool that combines multipath monitoring with traditional infrastructure monitoring for greater visibility into network security. Having this insight will allow federal network pros to proactively manage multiple networks, identify issues, and fix them before they get out of hand.
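The bookkeeping at the heart of multipath visibility is tracking which route each flow currently takes and noticing when it changes. The sketch below illustrates that idea under the assumption that hop lists arrive from a traceroute-style probe; a real monitoring tool would correlate these changes with application traffic and hardware link telemetry.

```python
import hashlib

def path_fingerprint(hops: list) -> str:
    """Reduce an ordered list of hop IPs to a short, stable fingerprint."""
    return hashlib.sha256("|".join(hops).encode()).hexdigest()[:16]

def detect_path_change(history: dict, flow: str, hops: list) -> bool:
    """Record the last-seen path per flow and report when it changes."""
    fp = path_fingerprint(hops)
    changed = history.get(flow) not in (None, fp)
    history[flow] = fp
    return changed
```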

A smarter and more collaborative defense

Network resiliency can be achieved at scale, but it will take a concerted effort. Through greater collaboration between the DoD and the private sector, as well as the adoption of AIOps-powered observability, the DoD will be better prepared to manage and secure increasingly complex, dynamic military network environments.

 

To learn more about SolarWinds’ AIOps-powered Hybrid Cloud Observability Solution, click here.