Securing the Digital Workplace: Microsoft 365 Identity Management for Public Sector Leaders

Zero Trust is a critical focus for public sector organizations as they navigate today’s evolving digital workplace and cybersecurity landscape. But one issue is emerging as increasingly troublesome: insider threats.

The 2022 Cost of Insider Threats Global Report found incidents involving insider threats surged 44% over the past two years. While some of these threats come from malicious insiders seeking to misuse their authorized access for personal gain or harm, many are the result of cybercriminals exploiting vulnerabilities in identities to enter your environment. These criminals use tactics like compromised credentials – the leading cause of data breaches – as well as phishing scams and social engineering to impersonate identities and gain unauthorized access.

To effectively counter these increasingly sophisticated threats, organizations must strengthen identity management. When executed properly, identity management not only enhances the security of your digital workplace but enables a Zero Trust strategy.

Let’s discuss what identity management is, how to build a comprehensive strategy in Microsoft 365, and how it can fortify your Zero Trust deployment.

What is Identity Management?


Identity management establishes and manages the digital identities of anyone entering your environment – from employees and contractors to guest users. These identities usually belong to people, but they can also represent services or devices.

Identity management enables organizations to implement robust access controls, granting privileges based on roles – which is why identity management is an integral piece of Zero Trust. Without it, you will have no way to verify users and devices are who they say they are, let alone establish proper privileges and access, which are key Zero Trust principles.

When done effectively, identity management provides the right access to the right individuals at the right time for the right reason. This process not only improves your security posture, but can streamline user access, reduce administrative overhead, and help you better meet your compliance obligations.

Building Identity Management in Microsoft 365

When building your identity management strategy in Microsoft 365, remember these three basic elements: identify, authenticate, and authorize.

Here’s how to get started:

  • Identify: The backbone of identity management in Microsoft 365 is Azure Active Directory (Azure AD). Azure AD provides a cloud identity for users, groups, and resources. It is where you build out your users’ identities and control access to internal and external resources – like your intranet or even Microsoft Teams. The solution recognizes users (based on Microsoft’s machine learning models of typical user and tenant behavior) and flags activity that falls outside of normal behavior, triggering the next steps of the process.
  • Authenticate: Multi-factor authentication (MFA) is today’s gold standard for authenticating identities. There are a variety of ways to do this, from smart cards to one-time passwords, that add layers of protection to your security. Microsoft’s Authenticator App helps implement MFA across your applications in a convenient and easy way for users, allowing them to verify their and their devices’ identities from their phones.
  • Authorize: It’s critical to grant access privileges based on the conditions specific to your organization. Conditional Access policies take a two-phased approach: first, the policy collects signals about the person (their device, IP address, etc.), and then it enforces any policies you have in place. If it detects a new device, for example, it may enforce MFA or require the user to sign in again. It could also block access under certain conditions, such as when a user attempts access from a mobile device. These policies provide granular control over access while reducing the risk of unauthorized access.

By following this framework, you can easily begin using the powerful tools Microsoft offers to build your identity management strategy, ensuring only authorized individuals have access to critical systems.
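To make the identify/authenticate/authorize flow concrete, here is a minimal sketch of the kind of payload a Conditional Access policy amounts to when created through the Microsoft Graph API. The field names follow the Graph conditionalAccessPolicy schema, but the group ID and specific values shown are placeholders for illustration, not a recommended production policy:

```python
# Illustrative Conditional Access policy payload, as it might be submitted to
# Microsoft Graph (POST /identity/conditionalAccessPolicies). Values such as
# the group ID are placeholders.

def build_mfa_policy(group_id: str) -> dict:
    """Require MFA for a pilot group on all cloud apps, excluding trusted locations."""
    return {
        "displayName": "Require MFA - pilot group",
        "state": "enabledForReportingButNotEnforced",  # report-only mode for testing
        "conditions": {
            "users": {"includeGroups": [group_id]},
            "applications": {"includeApplications": ["All"]},
            "locations": {
                "includeLocations": ["All"],
                "excludeLocations": ["AllTrusted"],  # skip trusted named locations
            },
        },
        "grantControls": {
            "operator": "OR",
            "builtInControls": ["mfa"],  # require multi-factor authentication
        },
    }

policy = build_mfa_policy("00000000-0000-0000-0000-000000000000")
print(policy["grantControls"]["builtInControls"])  # ['mfa']
```

Starting a policy in report-only mode, as above, lets you observe its impact on sign-ins before enforcing it tenant-wide.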

Three Ways to Take a More Proactive Approach to Identity Management

Once you’ve taken the initial steps to start building your identity management approach, take it to the next level to enhance your security:

  • Right-size your policies: Strict, one-size-fits-all rules can hinder productivity; if security is in the way of getting the job done, users will find a way around it. Customizing your policies to specific users, workspaces, or even content creates a more tailored approach to access control, striking a balance between security and productivity.
  • Implement lifecycles: Identities should not exist in your environment permanently. People switch jobs or upgrade their devices. Establish a process to evaluate and recertify identities – whether users (both external and internal) or devices – to ensure they still require access to your content and workspaces.
  • Monitor your environment: Even with the best-laid security plans, things can still fall through the cracks. That’s why it’s critical to monitor your environment – including users, devices, locations, and behavior – to identify any anomalies or suspicious activities that should be addressed.

These strategies can help you build a more proactive identity management approach that actively reduces risks and attack surfaces, allowing you to go beyond verifying identity to create a secure and efficient digital workplace.

Build a Secure Digital Workplace with Zero Trust

While identity management is an important aspect of building your secure digital workplace, ensuring only authorized individuals have access to your systems, it is not enough to protect your data or the workspaces where it lives in today’s ever-evolving cyber threat landscape.

Public sector organizations must embrace a comprehensive Zero Trust security framework to effectively build a secure digital workplace. To do so, you must combine identity management best practices with other robust security measures, like role-based access controls, workspace governance policies, lifecycle management processes, and risk assessments. Together, these strategies can enhance the protection of your digital environment and minimize your risk of data breach or unauthorized access.

Download the free AvePoint guide, “How to Achieve Zero Trust Standards Without Limiting Collaboration in Microsoft 365,” for more information about protecting your digital collaboration workspaces with a Zero Trust framework.

FedRAMP Rev. 5 Baselines are Here, Now What?

The FedRAMP Joint Authorization Board (JAB) has approved the update to FedRAMP Rev. 5. With this revision, FedRAMP baselines are now aligned with the National Institute of Standards and Technology’s (NIST) SP 800-53 Rev. 5 Catalog of Security and Privacy Controls for Information Systems and Organizations and SP 800-53B Control Baselines for Information Systems and Organizations. This transition brings opportunities and challenges for all stakeholders involved, including Cloud Service Providers (CSPs), Third Party Assessment Organizations (3PAOs), and federal agencies. But worry not – with RegScale, we have your back! Let’s dive in and understand the impact and how to prepare for the coming changes.

Decoding the Transition

The transition has been in the works for a long time, and FedRAMP has updated many of its controls to reflect changes in technology since Rev. 4 was published in 2015. FedRAMP Rev. 5 brings significant updates to the security controls to meet emerging threats, including new families such as supply chain risk management, and places a greater emphasis on privacy controls. FedRAMP continues to strongly encourage package submission in NIST Open Security Controls Assessment Language (OSCAL) format to accelerate review and approval processes. To aid comprehension of the updates, FedRAMP has also released a Rev. 4 to Rev. 5 Baseline Comparison Summary. There are more than 250 controls with significant changes, including several whole new families of controls.

In the coming weeks, FedRAMP plans to release a series of updated OSCAL baseline profiles, resolved profile catalogs, System Security Plan (SSP), Security Assessment Plan (SAP), Security Assessment Report (SAR), and Plans of Action and Milestones (POA&M) templates, as well as supporting guides for each of these.

What is OSCAL, You Ask?


OSCAL is a set of standards for digitizing the authorization package through common machine-readable formats, developed by NIST in conjunction with the FedRAMP PMO and industry. NIST defines it as a “set of hierarchical, formatted, XML-, JSON-, and YAML-based formats that provide a standardized representation for different categories of security information pertaining to the publication, implementation, and assessment of security controls.” OSCAL makes it easier to validate the quality of your FedRAMP packages and expedites the review of those packages.
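To give a feel for what “machine-readable” means in practice, here is a heavily trimmed, illustrative catalog fragment in the OSCAL JSON shape, built as a Python dict. The structure loosely mirrors the OSCAL catalog model; the UUID, control IDs, and prose here are placeholders, not real FedRAMP content:

```python
# Illustrative OSCAL-style catalog fragment expressed as a Python dict.
# Key names approximate the OSCAL JSON catalog model; identifiers and prose
# are placeholders for illustration only.
import json

catalog = {
    "catalog": {
        "uuid": "11111111-2222-3333-4444-555555555555",
        "metadata": {
            "title": "Example Security Controls Catalog",
            "version": "5.0",
            "oscal-version": "1.0.4",
        },
        "groups": [
            {
                "id": "ac",
                "title": "Access Control",
                "controls": [
                    {
                        "id": "ac-2",
                        "title": "Account Management",
                        "parts": [
                            {
                                "name": "statement",
                                "prose": "Define and document the types of accounts allowed.",
                            }
                        ],
                    }
                ],
            }
        ],
    }
}

# Structured content like this can be validated, diffed, and queried by tooling,
# which is what makes OSCAL packages faster to review than Word documents.
print(json.dumps(catalog, indent=2)[:40])
```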

The Impact on CSPs

FedRAMP has published the CSP Transition Plan, providing a comprehensive roadmap and tool for CSPs to identify the scope of the Rev. 5 controls that require testing and offering support for everyone based on their stage in the FedRAMP authorization process. Timelines for the full transition range from immediate to 12-18 months. You should find a technology partner to assist you regardless of your FedRAMP stage so that you can quickly and completely adapt from Rev. 4 to Rev. 5 baselines as well as update, review, and submit your packages in both human-readable (Word, Excel) and machine-readable (OSCAL) formats.

If you are a CSP just getting started with your FedRAMP journey…

As of May 30, 2023, CSPs in the “planning” stage of FedRAMP authorization must adopt the new Rev. 5 baseline in their controls documentation and testing and submit their packages in the updated FedRAMP templates as they become available. You are in the planning phase if you:

  • Are applying for FedRAMP or are in the readiness review process
  • Have not partnered with a federal agency prior to May 30, 2023
  • Have not contracted with a 3PAO for a Rev. 4 assessment prior to May 30, 2023
  • Have a JAB prioritization but have not begun an assessment after the release of the Rev. 5 baselines and templates

If you are a CSP in the “Initiation” phase

CSPs in the initiation phase will complete an Authority to Operate (ATO) using the Rev. 4 baseline and templates. By the issuance of your ATO or September 1, 2023, whichever is later, you will identify the delta between your Rev. 4 implementation and the Rev. 5 requirements, develop plans to address the differences, and document those plans in the SSP and POA&M. You are in the initiation phase if any of the following apply prior to May 30, 2023:

  • Prioritized for the JAB and are under contract with a 3PAO or in 3PAO assessment
  • Have been assessed and are working toward P-ATO package submission
  • Kicked off the JAB P-ATO review process
  • Partnered with a federal agency and are:
    • Currently under contract with a 3PAO
    • Undergoing a 3PAO assessment
    • Have been assessed and have submitted the package for Agency ATO review

If you are a Fully Authorized CSP

You are in the “continuous monitoring” phase if you are a CSP with a current FedRAMP authorization. By September 1, 2023, you need to identify the delta between your current Rev. 4 implementation and the Rev. 5 requirements, develop plans to address the differences, and document those plans in the SSP and POA&M. By October 2, 2023, you should update plans based on any shared controls.

If your latest assessment was completed between January 2 and July 3, 2023, you have a maximum of one year from the date of the last assessment to complete all implementation and testing activities for Rev. 5. If your annual assessment is scheduled between July 3 and December 15, 2023, you will need to complete all implementation and testing activities no later than your next scheduled annual assessment in 2023/2024.
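For planning purposes, the deadline rules above can be sketched in a few lines of Python. This is a rough model of the prose (treating “one year” as 365 days), not official guidance – confirm your actual dates against the FedRAMP CSP Transition Plan:

```python
# Rough model of the Rev. 5 transition deadlines described above.
# "One year" is approximated as 365 days; verify against official guidance.
from datetime import date, timedelta

def rev5_deadline(last_assessment: date, next_scheduled: date) -> date:
    """Estimate the latest date to finish Rev. 5 implementation and testing."""
    window_start, window_end = date(2023, 1, 2), date(2023, 7, 3)
    if window_start <= last_assessment <= window_end:
        # Assessed in the first window: one year from the last assessment.
        return last_assessment + timedelta(days=365)
    # Otherwise: complete by the next scheduled annual assessment.
    return next_scheduled

deadline = rev5_deadline(date(2023, 3, 1), date(2024, 3, 1))
```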

A Complete Technology and Transition Partner

The transition to FedRAMP Rev. 5 is not just about meeting the new requirements but doing so in the most efficient and seamless manner. You should focus on your core business while technology like RegScale handles the intricacies of the compliance transition.

Beyond compliance documentation, RegScale serves as a comprehensive FedRAMP compliance technology and transition partner. Our platform assists with mapping your security controls against FedRAMP and NIST SP 800-53 baselines for Rev. 4 and Rev. 5, supports gap analysis, provides remediation support, and enables continuous monitoring and improvement. The platform currently includes FedRAMP support and tools to develop human-readable and OSCAL-formatted content for Catalogs, Profiles, SSPs, Components, SAPs, SARs, POA&Ms, and Asset Inventory. To help eliminate the friction and confusion of where to begin with OSCAL, RegScale provides an intuitive Graphical User Interface (GUI) to build artifacts using our wizards and then easily export them as valid OSCAL. By automating the creation of audit-ready documentation and allowing direct submission to the FedRAMP Project Management Office (PMO) through OSCAL and/or Word/Excel templates, RegScale provides a seamless transition experience to Rev. 5, reducing complexities and saving you valuable time and resources.

In closing, it is crucial for all CSPs and stakeholders to review the new mandates and the CSP Transition Plan and begin planning to address the updated templates. Let RegScale help make the shift to FedRAMP Rev. 5 a streamlined, efficient, and effective process with minimum costs and business disruptions.

This post originally appeared on Regscale.com and is re-published with permission.

View our webinar to learn more about the low-cost approaches for handling the transition to Rev 5.

Four Lessons I Learned from My Company’s Response to the SUNBURST Attack

Saturday, December 12, 2020, is a day I’ll never forget. That was the day I learned nation-state threat actors had exploited our software in what would later be known as SUNBURST. Because it’s been written about thousands of times before, I won’t rehash the particulars of the event itself here. Instead, I’d like to share four lessons I learned about how to respond to a large-scale cyberattack.

1. The first days: Preparation helps control the chaos

I often refer to the days immediately following December 12, 2020, as “controlled chaos.” The chaos portion is self-explanatory, but what about the “controlled” part?

Simply put, we were in control the entire time, no matter how chaotic things seemed, because we’d prepared for such an incident. We ran tabletop exercises, planned for different scenarios, mapped out hypothetical intrusions, tested our response methods, and looked for and plugged potential security holes. We also built an incident response team composed of representatives from across the company. It included members from our security, legal, marketing, IT, and engineering teams, and our board of directors.

As you plan your threat response, consider the following:

  • Do you have a cybersecurity incident response playbook?
  • Have you performed tabletop exercises and run various attack scenarios?
  • Do you have the right people on the incident response team—a good mix of strategic and tactical expertise?
  • Do you have ways to contact people, even on the weekend (or during a pandemic)?
  • Do you have a list of backup contacts in case someone isn’t available?
  • Do you have alternative communication methods established in case you cannot trust your existing ones?

2. The initial weeks: Separating teams creates an agile and efficient response


We quickly learned we needed to split our team into different groups for an agile and efficient response. Thus, one big team became multiple smaller teams, each overseen by leaders within their respective organizations (e.g., the legal team was led by our general counsel, the engineering team by our head of engineering, and so forth). These teams would work independently, then reconvene each evening to share what they learned, discuss solutions and ideas, and so on.

Having different teams allowed individuals to focus on each facet of the response. For example, engineering could focus on how the attack affected our build while IT investigated how the attackers got in. The communications team created responses for customers, partners, and the press, and what ultimately became the government affairs team devised a plan to contact various government agencies.

We also learned organizing these teams was impossible without a third-party “quarterback.” So, we brought in an external organization to coordinate our teams’ work. They set up meetings and ensured everyone was on the same page and information was being shared.

As you coordinate your teams, ask:

  • Do we have a plan in place to get teams together?
  • Do we have a third-party “security helper” on call or retainer? (This is often a good insurance policy)
  • Do we have enough teams to cover every aspect of our business?

3. The following weeks and months: Unbiased partners help amplify the truth

At the time, there was a lot of misinformation floating around. We were being outnumbered, out-marketed, and out-communicated. And unfortunately, social media made the misinformation spread like wildfire – and made it equally hard to extinguish.

To help, we partnered with reputable and experienced organizations like the Cybersecurity and Infrastructure Security Agency (CISA), Krebs Stamos Group, and others. These organizations performed forensics while amplifying the truth about the attack, helping people understand this was not just an isolated incident.

Amplifying the truth was the only agenda our partners had. Sadly, that’s not the norm. I discovered many organizations out there want to promote their brand or have ulterior motives. Fortunately, the organizations we worked with had no such baggage.

Indeed, they allowed us to focus on ensuring our customers were in the right state. We wanted to be there to answer their questions, assure them, and, most of all, make sure they were secure and protected. Our partners helped us block out the noise so we could focus on helping our customers.

To summarize:

  • Bring in the correct partners and add new partners as necessary
  • Watch out for hidden agendas
  • Prioritize what’s most important to you (For us, our customers were our top priority)
  • Don’t spend time responding to every inaccuracy; it will only distract you from your priorities
  • Stay focused

4. The final months: Going above and beyond leads to an exemplary outcome

As the months wore on, I remember a colleague telling me, “If you’re going to come out of this, you have to be special. It won’t be enough just to fix the issue. You need to really go above and beyond.”

As it turns out, we fixed the issue—but did much more than that. We found the source for SUNBURST and made it publicly available. We testified before the U.S. House and Senate. We implemented assistance programs to help our customers. We held briefings with the FBI and other global law enforcement agencies.

We ensured the world knew what we were doing and why we were doing it. In being transparent, we were helping others understand what we went through so they could better protect themselves. It’s not enough to be transparent, of course. To get through it and come out stronger, we needed to have products and services people love and enjoy using, which leads me to three final recommendations:

  • Be open and honest throughout the entire process
  • Communicate early and often—not just to your customers, partners, and employees but to the world
  • Make the type of products you would want to use yourself, and make them Secure by Design

The months have turned into years. The tenets of transparency and humility have served us well. The SUNBURST incident has turned into a catalyst for good. Supply chain security is now front of mind for many. Executive orders and cybersecurity strategies are leading us toward attestation for software security. Executive and boardroom conversations now include security as a necessary topic, and the security defenders of the world are being looked to for guidance in managing cyber risk.

The investigation into SUNBURST formally concluded in May 2021—six months after the attack was first uncovered. But I like to think our response to the attack will live on for much longer. Because what started as a dark day in December 2020 made us a stronger, more resilient, and better company. I hope the lessons I learned can help you do the same.

Contact our team today to learn more about how SolarWinds can support your organization’s software and cybersecurity mission.

Ransomware Protection for Kubernetes Data in the Public Sector

Kubernetes is a powerful platform for deploying and managing containerized applications in the cloud. It offers many benefits such as scalability, portability, resilience and automation. However, Kubernetes also poses some challenges when it comes to data protection and security, especially in the public sector where sensitive data and compliance regulations are involved. That’s why we are excited to continue our strategic partnership with Carahsoft Technology Corp., the leading government IT solutions provider, to deliver Kasten K10 by Veeam, the market-leading Kubernetes data protection solution, to public sector customers across the U.S.

In this blog post, we will explore some of the common issues that public sector organizations face when using Kubernetes, and how Kasten K10 by Veeam can help them overcome these challenges with a simple, secure and scalable solution for Kubernetes data protection.

The challenges of Kubernetes Data Protection in the Public Sector

One of the main challenges of Kubernetes data protection in the public sector is the complexity and diversity of the Kubernetes environment. Kubernetes clusters can span multiple clouds, regions and zones, and contain hundreds or thousands of applications and microservices. Each application may have its own data sources, dependencies and configurations, which need to be backed up and restored consistently and reliably.


Another challenge is the security and compliance of Kubernetes data. Public sector organizations often deal with sensitive data such as personal information, health records, financial transactions, or national security secrets. This data needs to be protected from unauthorized access, modification, or deletion, as well as from external threats such as ransomware attacks. Moreover, public sector organizations need to comply with various regulations and operate in secure environments, which requires cluster deployments in compliant hybrid environments such as AWS GovCloud and Red Hat OpenShift.

A third challenge is the scalability and performance of the Kubernetes data protection solution. As Kubernetes clusters grow in size and complexity, so does the amount of data that needs to be backed up and restored. Public sector organizations need a solution that can handle large volumes of data without compromising the availability or performance of the Kubernetes applications. They also need a solution that can scale up or down as needed, without requiring manual intervention or complex configuration changes.

The Solution: Kasten K10 by Veeam

Kasten K10 by Veeam is a purpose-built solution for Kubernetes data protection that addresses all these challenges and more. Kasten K10 is designed to simplify and automate the backup and recovery of Kubernetes applications and their data across any environment. It offers the following features and benefits for public sector organizations:

  • Application-centric approach: Kasten K10 treats each Kubernetes application as a unit of backup and recovery, rather than individual containers or volumes. This ensures that the application state and dependencies are preserved across backups and restores, regardless of where they are running or how they are configured.
  • Policy-driven automation: Kasten K10 allows public sector organizations to define backup policies based on application metadata such as labels, annotations, namespaces or clusters. These policies can specify the frequency, retention, location, encryption and compression of the backups, as well as any custom actions or hooks that need to be executed before or after the backup. Kasten K10 then automatically applies these policies to the matching applications, eliminating the need for manual backups or scripts.
  • Secure and compliant data protection: Kasten K10 encrypts all backup data at rest and in transit using AES-256 encryption keys that are stored in a secure key management system. Kasten K10 also supports role-based access control (RBAC) and audit logging to ensure that only authorized users can access or modify the backup data. Additionally, Kasten K10 provides ransomware protection by creating immutable backups that cannot be overwritten or deleted by malicious actors.
  • Scalable and performant architecture: Kasten K10 leverages a distributed architecture that scales with the Kubernetes cluster. It uses parallelism and deduplication to optimize backup and restore performance and reduce the storage footprint. It also supports incremental backups and restores to minimize network bandwidth and application downtime.
  • Application portability: Kasten K10 enables public sector organizations to ensure application portability across diverse Kubernetes environments by using Transform Sets. Transform Sets are a set of rules that can modify the application configuration during backup or restore, such as changing namespaces, labels, annotations, storage classes, or secrets. This allows public sector organizations to migrate their applications from one cluster to another, or from one cloud to another, without breaking their functionality or security.
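As a rough illustration of what policy-driven automation looks like in practice, the sketch below expresses a backup policy as a Kubernetes-style custom resource in Python dict form. The field names illustrate the pattern (label selector, frequency, retention) rather than the exact schema; consult the Kasten K10 documentation for the real Policy CRD:

```python
# Rough sketch of a policy-driven Kubernetes backup definition in the shape of
# a Kubernetes custom resource. Field names are illustrative of the K10 pattern;
# the exact Policy CRD schema may differ.

backup_policy = {
    "apiVersion": "config.kio.kasten.io/v1alpha1",  # assumed API group
    "kind": "Policy",
    "metadata": {"name": "nightly-backup", "namespace": "kasten-io"},
    "spec": {
        "frequency": "@daily",                       # run once per day
        "retention": {"daily": 7, "weekly": 4, "monthly": 12},
        "selector": {                                # match apps by label
            "matchLabels": {"k10.kasten.io/appNamespace": "payroll"}
        },
        "actions": [{"action": "backup"}],
    },
}

# A policy like this is applied once; the platform then backs up every matching
# application automatically, instead of teams maintaining per-app cron scripts.
assert backup_policy["spec"]["retention"]["daily"] == 7
```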

Next Steps

We hope this blog post provided valuable insights into how Kasten K10 by Veeam can help you protect your Kubernetes data in the public sector. If you want to learn more, here are some next steps you can take:

Watch this video to see Kasten K10 in action and learn how it can simplify and automate your Kubernetes data protection workflows: https://youtu.be/gu3J6ZeWwK8

Try the full-featured and FREE edition of Kasten K10 today with this super-quick installation in less than 10 minutes: https://www.kasten.io/free-kubernetes

Don’t miss this opportunity to take your Kubernetes data protection to the next level with Kasten K10 by Veeam and Carahsoft. We look forward to hearing from you soon! Download our full Gorilla Guide to Securing Cloud Native Applications on Kubernetes.

Empowering Public Sector Technical Teams With Generative AI in a Secure Collaboration Platform

Recent advances in generative artificial intelligence (AI) – with its seemingly limitless potential use cases – have captured the public imagination. And they’re just as compelling to government agencies and the military. Organizations across the public and private sectors are racing to identify the most effective applications of the technology and to implement robust and secure solutions enabled by generative AI.

For instance, generative AI can be a powerful assistant to technical and operational teams such as those involved in application development and incident response. The technology can help teams gain real-time insights, bring to light solutions to unexpected problems, and help make fast, data-driven decisions.

It’s with those advantages in mind that Mattermost partnered with Ask Sage to integrate the Ask Sage GPT solution with the Mattermost secure collaboration platform. The result is secure, AI-enhanced collaboration for technical teams in the U.S. public sector.

Real-time Insights, Natural-language Format

Mattermost is a secure, workflow-centric collaboration platform for technical and operational teams that need to meet nation-state-level security and trust requirements. Available self-hosted or in the cloud, Mattermost integrates team messaging, audio and screen share, technical tools, workflow automation, and project management in an open-source solution.


Ask Sage is a GPT-powered platform provider that specializes in enabling secure access to generative AI capabilities for both government and commercial teams. With a wide range of use cases – including summarization; coding, code review, and code improvement; RFP writing, response, and evaluation; and report writing – Ask Sage is built on cutting-edge AI technologies such as Azure OpenAI GPT, Cohere, Google Bard, and various open-source LLMs. The solution can ingest custom datasets, tap into APIs, and connect to data lakes for real-time data and insights in a natural-language format.

Ask Sage can quickly and automatically process large amounts of structured and unstructured data – including government-related data such as laws, Federal Acquisition Regulation (FAR), Defense Federal Acquisition Regulation Supplement (DFARS), DoD Controlled Unclassified Information (CUI), and DoD policy and governance content. Outputs include summaries, translations, sentiment analysis, deep insights, and coding.

Integration of Ask Sage with Mattermost provides technical teams with secure, real-time access to generative AI to enhance collaboration, operational productivity, and decision quality. Government and contractor teams can now securely leverage the power of OpenAI and collaborate within a single, seamless interface.

Accelerating Processes and Improving Outcomes

With this strategic integration, Mattermost equips technical teams to leverage generative AI to accelerate processes, increase output, and improve outcomes. It’s ideal for government teams that write code, manage RFPs, analyze large data sets, or develop and translate intelligence reports.

Ask Sage offers rapid data analysis and summarization to help teams gain new insights as circumstances evolve. Team members spend less time and effort on manual research and analysis, giving them more time to focus on higher-priority decision-making and strategic tasks.

Users can improve the accuracy and depth of Ask Sage results by uploading relevant data – which is labeled by classification level, encrypted, and separated from the OpenAI models. Once uploaded, the data can be accessed only by authorized users through granular access controls within Mattermost.

Collaboration Purpose-built for Public Sector

Mattermost is well-suited to technical public sector teams, because it’s available as an on-prem, self-hosted deployment. That means teams can collaborate securely with lower risk of compromise. It’s also an open-source solution, so organizations can tailor security settings to protect information at impact levels up to IL6 for DoD Secret data. That’s protection that general-purpose, cloud-based productivity and instant-message tools can’t match.

The platform allows teams to create as many topic- or project-specific communication channels as they need. These channels allow users to centralize conversations, data, and tools – including Ask Sage – in the right context. That keeps team members focused and productive, without the need to continually context-switch.

Another useful Mattermost feature is built-in, customizable playbooks – essentially digital checklists – that help team members consistently take the right actions at the right times. Mattermost playbooks can now include generative AI to further automate and accelerate project workflows and incident response.

Leveraging Mattermost’s secure collaboration platform combined with Ask Sage’s generative AI capabilities can revolutionize the way government teams work together, manage technical projects, and respond to mission-critical situations. As interest in OpenAI GPT and similar platforms grows, this strategic integration is a game-changer in enabling U.S. government and military organizations to securely benefit from generative AI.

Speak with a member of our team today and learn more about Mattermost at www.mattermost.com.

Speed Your Agency’s Software Deployments in 6 Easy Steps

Slow, bottlenecked, and often archaic release methods challenge most government agency software delivery teams. But enterprise feature management can help your agency achieve faster releases with less risk.

Enterprise feature management provides teams with total control over application features, fine-grained release targeting, and detailed audit logs. It starts with feature flags, a powerful tool that allows your development teams to turn features on or off without requiring a code change or deployment. Feature flags are a modern replacement for the traditional hard-coded boolean flags custom-built for each app. With an enterprise feature management platform, you can use a pre-built feature flag framework to define and operate flags through a simple, seamless experience. This delivers a host of benefits that, among others, dramatically streamline and accelerate software delivery. It also empowers teams to roll out new functionality gradually and selectively rather than all at once. And your agency can “dark launch” a feature in production, reducing dependencies on expensive and custom staging environments.
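
The idea is easy to see in miniature. The following is a conceptual sketch of the feature-flag pattern, not LaunchDarkly’s actual API: a runtime flag store replaces the hard-coded boolean, so releasing a feature becomes a data change rather than a deployment:

```python
# Hypothetical illustration of the feature-flag pattern described above.
# This is NOT the LaunchDarkly API -- just a minimal sketch of the idea.

class FlagStore:
    """Central store of flag states, updated at runtime without redeploying."""
    def __init__(self):
        self._flags = {}

    def set_flag(self, key, enabled):
        # In a real platform this change arrives from the management service.
        self._flags[key] = enabled

    def is_enabled(self, key, default=False):
        return self._flags.get(key, default)

flags = FlagStore()

def render_dashboard():
    # Old approach: SHOW_NEW_DASHBOARD = True, hard-coded and requiring a deploy.
    # New approach: consult the flag store at request time.
    if flags.is_enabled("new-dashboard"):
        return "new dashboard"
    return "legacy dashboard"

flags.set_flag("new-dashboard", False)   # dark-launched: code deployed, feature off
assert render_dashboard() == "legacy dashboard"

flags.set_flag("new-dashboard", True)    # released with a flag flip, no deploy
assert render_dashboard() == "new dashboard"
```

The same lookup is also the kill switch: flipping the flag back off in production disables the feature without touching the rest of the application.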

Here are six steps that government agencies can take to get started with LaunchDarkly Federal, the only FedRAMP-authorized feature management platform. These steps will help you understand how to use feature management for high-speed, low-risk software releases of legacy and new applications:

1. Install the LaunchDarkly SDK to enable feature flagging

LaunchDarkly’s Software Development Kits (SDKs) allow your developers to implement and share feature flags quickly and easily across software applications. They provide an easy way to connect new and existing applications to the LaunchDarkly SaaS platform. To get started, simply include the LaunchDarkly SDK for your programming language in your application. The SDK initializes to a specific environment, manages default values and targeting contexts, handles any connectivity issues, and listens for feature status and rule changes. SDKs support real-time application updates without the need to deploy new code.

2. Identify your environment(s)

In traditional release motions, government agencies identify and set up numerous development, testing, and production environments. Not only is each environment often expensive, but running a release through so many gates can be a significant challenge for resource-strapped teams. It is almost impossible to simulate a production-level environment in staging, so when you release to production, you are testing in production anyway. Why not do it safely, with granular targeting to reduce risk? With an enterprise feature management solution, you can reduce the number of environments and focus more on safely and securely testing in production.

3. Target, or even micro-target, your release

The next step is determining exactly where you will release individual features, and when. With feature flags, your development teams can release features in a highly customized way. By creating targeting rules, teams can easily target individual releases to a subset of users, resources, or even infrastructure, before making them widely available to all end-users. It’s possible to even micro-target a single user.

Targeting makes it simple to progressively release a new feature to a QA team or to project sponsors for feedback. The granular feature and release targeting that LaunchDarkly Federal provides enables more control than traditional blue/green deployments alone.
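
A rough sketch of how targeting rules like these can work (the rule format, attribute names, and bucketing scheme below are illustrative, not LaunchDarkly’s actual rule engine):

```python
# Hypothetical sketch of release targeting: ordered rules plus a deterministic
# percentage rollout, so a user's experience stays stable across requests.
import hashlib

def bucket(user_key, flag_key):
    """Hash the user and flag into a stable 0-99 bucket for gradual rollout."""
    digest = hashlib.sha256(f"{flag_key}:{user_key}".encode()).hexdigest()
    return int(digest, 16) % 100

def evaluate(flag_key, user, rules, rollout_percent=0):
    # Rules are checked in order; the first matching rule decides the outcome.
    for rule in rules:
        if user.get(rule["attr"]) in rule["values"]:
            return rule["serve"]
    # No rule matched: fall back to a percentage rollout.
    return bucket(user["key"], flag_key) < rollout_percent

rules = [
    {"attr": "team",  "values": {"qa"},                 "serve": True},  # QA sees it first
    {"attr": "email", "values": {"sponsor@agency.gov"}, "serve": True},  # micro-target one user
]

assert evaluate("new-report", {"key": "u1", "team": "qa"}, rules) is True
assert evaluate("new-report", {"key": "u2", "team": "ops"}, rules, rollout_percent=0) is False
```

Raising `rollout_percent` step by step turns the same flag into a progressive rollout to the wider user base.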

4. Flip a switch, and release whenever you want

With enterprise feature management, your development teams can separate deployment and release processes. Engineering teams can deploy code, and non-engineering teams can trigger the release with a simple flip of the switch. Decoupling these processes reduces the risk of failure and allows teams to release new features quickly and efficiently. Your development teams can keep progressing on their software development projects and release new features at the best time for their program or department. Enterprise feature management also allows your project and program teams to develop, test, and deploy features using custom workflows with enterprise-level management capabilities.

By pairing low-risk continuous integration/continuous delivery (CI/CD) processes with flag changes that take effect in under 200ms, teams can improve developer productivity and reduce the time it takes to release new features to production.

5. Quickly disable features if issues or errors occur

In the event of an issue or error, teams need to be able to quickly disable features to keep them from affecting the application in production. Issues can range from something major, such as a security vulnerability, to minor usability and cosmetic problems. With traditional processes, a team would have to roll back to a previous release, losing everything they just deployed, or take down an entire application to address issues or errors. With enterprise feature management solutions, however, teams can quickly disable the individual problematic feature, leaving the rest of the application unchanged. Instead of lengthy and cumbersome rollback and redeployment processes, this limits the impact to the application with zero downtime. DevSecOps teams can then perform a “patch forward” for the fix.

6. Track the release with detailed analytics

Using analytics, monitoring tools, and processes helps guarantee that your software meets government guidelines and agency policies. Using enterprise feature management, your agency can gather detailed audit logs and analytics to inform your decision-making and improve software delivery processes across your mission-critical programs.

Following these six simple steps can help you shrink your agency’s release time from months and years to days and hours, just as it did for the Centers for Medicare & Medicaid Services (CMS). Using LaunchDarkly and the six steps above, CMS went from one launch per quarter to completing six launches within a single day to support a global rollout.

Feature management is a powerful DevSecOps tool that can truly accelerate the delivery of transformative software. With fine-grained control over features, release targeting, and detailed audit logs, your agency can reduce risk and deliver software at the speed of the commercial world.

Download our eBook to learn more about LaunchDarkly, and view our public sector webinar to learn more about DevSecOps best practices.

4 Steps to Applying Zero Trust to Content Security

As organizations adopt zero trust architectures, there’s one key area that is often overlooked: the content layer. And yet, security vulnerabilities at this layer pose significant and extremely common threats. In fact, research reveals that a large portion of companies share sensitive content with over 2,500 third parties and use multiple tools for content communications.

Given the vulnerable nature of content exchange, it’s important to extend zero trust principles right down to the emails, documents, and files that we all share every day. But there are reasons why organizations do not do this regularly. For example, enforcing access rights can be tricky, especially in large organizations or companies with significant turnover. Tracking and monitoring every file type is impossible, as is adequately classifying every type of content.


Forcepoint’s new partnership with Kiteworks, a leader in data privacy and compliance for sensitive content communications, changes everything. Together, we’ve developed the industry’s most powerful solution for true zero trust security at the content layer. It combines Forcepoint’s Content Disarm & Reconstruction (CDR) and Data Loss Prevention (DLP) solutions with Kiteworks’ Private Content Network (PCN).

This combination allows organizations to take a highly effective four-step approach to zero trust content security by:

  1. Making all content untrusted by default – Applying zero trust at the content layer entails assuming that all data is malicious until proven otherwise. Ensuring content is secure and delivered safely requires deconstructing—and reconstructing—the information that’s being sent. Forcepoint’s Zero Trust CDR extracts information from files, verifies that the information is secure, and builds new, functional files to carry the information to its ultimate destination.
  2. Enforcing least-privilege content access – Least-privilege access management is a core tenet of zero trust security; our solution extends this practice to the content layer. It applies application-level access control to all content assets and allows organizations to assess who is sending, sharing, receiving, viewing, altering, or saving content. Companies can also monitor where content is being sent from and to whom.
  3. Monitoring content for potential vulnerabilities – Most organizations employ some form of network monitoring and have done so for years. Effective content monitoring employs the same principles of complete, real-time visibility and unified control. Our joint solution consolidates content communication channels for easy management and closely monitors each asset to ensure content is free of vulnerabilities.
  4. Integrating policy management tracking and controls for data loss prevention – Tracking and monitoring content collaboration and communications is essential to prevent sensitive content from falling into the wrong hands. Our solution allows organizations to discover, classify, monitor, and protect data, track and control sensitive content, and audit user behavior—mitigating data loss.
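
To make step 1 concrete, here is a toy sketch of the deconstruct-and-rebuild principle behind CDR. Real CDR, such as Forcepoint’s Zero Trust CDR, operates on binary file formats; the element types and field names below are purely illustrative:

```python
# Highly simplified sketch of "deconstruct and rebuild": extract only known-good
# information and construct a brand-new document from it. Anything not on the
# allow-list (macros, scripts, embedded objects) is never carried over --
# untrusted by default.

SAFE_ELEMENTS = {"text", "image"}  # allow-list of element types we know how to rebuild

def disarm_and_reconstruct(document):
    rebuilt = []
    for element in document:
        if element["type"] in SAFE_ELEMENTS:
            # Copy only the fields we understand; drop everything else.
            rebuilt.append({"type": element["type"], "content": element["content"]})
    return rebuilt

incoming = [
    {"type": "text", "content": "Quarterly report"},
    {"type": "macro", "content": "AutoOpen() ..."},                 # potentially malicious
    {"type": "image", "content": "<png bytes>", "exploit": "..."},  # unknown extra payload
]

clean = disarm_and_reconstruct(incoming)
assert [e["type"] for e in clean] == ["text", "image"]
assert "exploit" not in clean[1]   # unknown fields are not carried over
```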

This “trust no content” approach addresses all content security gaps. It provides organizations with assurances that the content their users are reading, sharing, and using is well-protected and free of malware.

Moreover, it makes implementing and managing zero trust content security an easy, frictionless experience for administrators and users alike. Admins have everything they need to manage content security from a central location, and users will not experience any delays or inhibitions in their ability to collaborate or communicate.

Contact a member of our team today to learn more about Forcepoint’s and Kiteworks’ new solution and schedule a demo to start taking the steps necessary to bring zero trust security to your content.

How Palantir Meets IL6 Security Requirements with Apollo

Building secure software requires robust delivery and management processes, with the ability to quickly detect and fix issues, discover new vulnerabilities, and deploy patches. This is especially difficult when services are run in restricted, air-gapped environments or remote locations, and was the main reason we built Palantir Apollo.

With Apollo, we are able to patch, update, or make changes to a service in 3.5 minutes on average and have significantly reduced the time required to remediate production issues, from hours to under 5 minutes.

For 20 years, Palantir has worked alongside partners in the defense and intelligence spaces. We have encoded our learnings for managing software in national security contexts. In October 2022, Palantir received an Impact Level 6 (IL6) provisional authorization (PA) from the Defense Information Systems Agency (DISA) for our federal cloud service offering.

IL6 accreditation is a powerful endorsement, recognizing that Palantir has met DISA’s rigorous security and compliance standards and making it easier for U.S. Government entities to use Palantir products for some of their most sensitive work.

The road to IL6 accreditation can be challenging and costly. In this blog post, we share how we designed a consistent, cross-network deployment model using Palantir Apollo’s built-in features and controls in order to satisfy the requirements for operating in IL6 environments.

What are FedRAMP, IL5, and IL6?

With the rise of cloud computing in the government, DISA defined the operating standards for software providers seeking to offer their services in government cloud environments. These standards are meant to ensure that providers demonstrate best practices when securing the sensitive work happening in their products.

DISA’s standards are based on a framework that measures risk in a provider’s holistic cloud offering. Providers must demonstrate both their products and their operating strategy are deployed with safety controls aligned to various levels of data sensitivity. In general, more controls mean less risk in a provider’s offering, making it eligible to handle data at higher sensitivity levels.


Impact Levels (ILs) are defined in DISA’s Cloud Computing SRG as Department of Defense (DoD)-developed categories for leveraging cloud computing based on the “potential impact should the confidentiality or the integrity of the information be compromised.” There are currently four defined ILs (2, 4, 5, and 6), with IL6 being the highest and the only IL covering potentially classified data that “could be expected to have a serious adverse effect on organizational operations.”

Defining these standards allows DISA to enable a “Do Once, Use Many” approach to software accreditation that was pioneered with the FedRAMP program. For commercial providers, IL6 authorization means government agencies can fast-track use of their services instead of having to run lengthy, bespoke audit and accreditation processes. The DoD maintains a Cloud Service Catalog that lists offerings that have already been granted PAs, making it easy for potential user groups to pick vetted products.

NIST and the Risk Management Framework

The DoD bases its security evaluations on the National Institute of Standards and Technology’s (NIST) Risk Management Framework (RMF), which outlines a generic process used widely across the U.S. Government to evaluate IT systems.

The RMF provides guidance for identifying which security controls exist in a system so that the RMF user can assess the system and determine if it meets the users’ needs, like the set of requirements DISA established for IL6.

Controls are descriptive and focus on whole system characteristics, including those of the organization that created and operates the system. For example, the Remote Access (AC-17) control is defined as:

The organization:

  • Establishes and documents usage restrictions, configuration/connection requirements, and implementation guidance for each type of remote access allowed;
  • Authorizes remote access to the information system prior to allowing such connections.

Because of how controls are defined, a primary aspect of the IL6 authorization process is demonstrating how a system behaves to match control descriptions.

Demonstrating NIST Controls with Apollo

Apollo was designed with many of the NIST controls in mind, which made it easier for us to assemble and demonstrate an IL6-eligible offering using Apollo’s out-of-the-box features.

Below we share how Apollo allows us to address six of the twenty NIST Control Families (categories of risk management controls) that are major themes in the hundreds of controls adopted as IL6 requirements.

System and Services Acquisition (SA) and Supply Chain Risk Management (SR)

The System and Services Acquisition (SA) family and related Supply Chain Risk Management (SR) family (created in Revision 5 of the RMF guidelines) cover the controls and processes that verify the integrity of the components of a system. These measures ensure that component parts have been vetted and evaluated, and that the system has safeguards in place as it inevitably evolves, including if a new component is added or a version is upgraded.

In a software context, modern applications are now composed of hundreds of individual software libraries, many of which come from the open source community. Securing a system’s software supply chain requires knowing when new vulnerabilities are found in code that’s running in the system, which happens nearly every day.

Apollo helped us address SA and SR controls because it has container vulnerability scanning built directly into it.

Figure 1: The security scan status appears for each Release on the Product page for an open-source distribution of Redis

When a new Product Release becomes available, Apollo automatically scans the Release to see if it’s subject to any of the vulnerabilities in public security catalogs, like MITRE’s Common Vulnerabilities and Exposures (CVE) List.

If Apollo finds that a Release has known vulnerabilities, it alerts the team at Palantir responsible for developing the Product in order to make sure a team member updates the code to patch the issue. Additionally, our information security teams use vulnerability severity to define criteria for what can be deployed while still keeping our system within IL6 requirements.

Figure 2: An Apollo scan of an open-source distribution of Redis shows active CVEs

Scanning for these weak spots in our system is now an automatic part of Apollo and a crucial element in making sure our IL6 services remain secure. Without it, mapping newly discovered security findings to where they’re used in a software platform is an arduous, manual process that’s intractable as the complexity of a platform grows, and would make it difficult or impossible to accurately estimate the security of a system’s components.

Configuration Management (CM)

The Configuration Management (CM) group covers the safety controls that exist in the system for validating and applying changes to production environments.

CM controls include the existence of review and approval steps when changing configuration, as well as the ability within the system for administrators to assign approval authority to different users based on what kind of change is proposed.

Apollo maintains a YAML-based configuration file for each individual microservice within its configuration management service. Any proposed configuration change creates a Change Request (CR), which then has to be reviewed by the owner of the product or environment.

Changes within our IL6 environments are sent to Palantir’s centralized team of operations personnel, Baseline, which verifies that the Change won’t cause disruptions and approves the new configuration to be applied by Apollo. In development and testing environments, Product teams are responsible for approving changes. Because each service has its own configuration, it’s possible to fine-tune an approval flow for whatever’s most appropriate for an individual product or environment.
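
The review-and-approve flow described above can be sketched as a simple state machine (a hypothetical illustration of the pattern, not Apollo’s implementation):

```python
# Illustrative change-request flow: a proposed configuration change must be
# reviewed and approved before it can be applied, and the approval is recorded
# for the audit trail (which also speaks to AU controls).

class ChangeRequest:
    def __init__(self, service, proposed_config, author):
        self.service = service
        self.proposed_config = proposed_config
        self.author = author
        self.state = "pending"
        self.approved_by = None

    def approve(self, reviewer):
        if reviewer == self.author:
            raise PermissionError("author cannot approve their own change")
        self.state = "approved"
        self.approved_by = reviewer   # who approved, and implicitly when, is auditable

    def apply(self, live_configs):
        if self.state != "approved":
            raise RuntimeError("cannot apply an unapproved change")
        live_configs[self.service] = self.proposed_config

live = {"search-service": {"replicas": 2}}
cr = ChangeRequest("search-service", {"replicas": 4}, author="dev-team")
cr.approve("baseline-ops")          # centralized ops team signs off
cr.apply(live)
assert live["search-service"] == {"replicas": 4}
```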

Figure 3: An example Change Request to remove a Product from an Environment

A history of changes is saved and made available for each service, where you can see who approved a CR and when, which also addresses Audit and Accountability (AU) controls.

When a change is made, Apollo first validates it and then applies it during configured maintenance windows, which helps to avoid the human error that’s common in managing service configuration, like introducing an untested typo that interrupts production services. This added stability has made our systems easier to manage and, consequently, easier to keep secure.

Incident Response (IR)

The Incident Response (IR) control family pertains to how effectively an organization can respond to incidents in their software, including when its system comes under attack from bad actors.

A crucial aspect to meeting IR goals is being able to quickly patch a system, quarantine only the affected parts of the system, and restore services as quickly as is safely possible.

A major feature that Apollo brings to our response process is the ability to quickly ship code updates across network lines. If a product owner needs to patch a service, they simply need to make a code change. From there, a release is generated, and Apollo prepares an export for IL6 that is applied automatically once it’s transferred by our Network Operations Center (NOC) team according to IL6 security protocols. Apollo performs the upgrade without intervention, which removes expensive coordination steps between the product owner and the NOC.

Figure 4: How Apollo works across network lines to an air-gapped deployment

Additionally, Apollo allows us to save Templates of our Environments that contain configuration that is separate from the infrastructure itself. This has made it easy for us to take a “cattle, not pets” approach to underlying infrastructure. With secrets and other configuration decoupled from the Kubernetes cluster or VMs that run the services, we can easily reapply them onto new infrastructure should an incident ever pop up, making it simple to isolate and replace nodes of a service.

Figure 5: Templates make it easy to manage Environments that all use the same baseline

Contingency Planning (CP)

Contingency Planning (CP) controls demonstrate preparedness should service instability arise that would otherwise interrupt services. This includes the human component of training personnel to respond appropriately, as well as automatic controls that kick in when problems are detected.

We address the CP family by using Apollo’s in-platform monitoring and alerting, which allows product or environment owners to define alerting thresholds based on open-standard metric types, including Prometheus’s metrics format.
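
As a minimal illustration of threshold-based alerting over Prometheus-format metrics (the metric names and thresholds here are hypothetical, and real monitors handle labels, metric types, and much more):

```python
# Toy threshold check over Prometheus exposition-format text: parse each
# unlabeled sample line, compare it to a configured limit, and emit alerts.

def parse_prometheus_line(line):
    """Parse a simple exposition line like 'http_errors_total 42'."""
    name, value = line.rsplit(" ", 1)
    return name, float(value)

def check_thresholds(metrics_text, thresholds):
    alerts = []
    for line in metrics_text.strip().splitlines():
        if line.startswith("#"):       # skip HELP/TYPE comment lines
            continue
        name, value = parse_prometheus_line(line)
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(f"{name}={value} exceeds threshold {limit}")
    return alerts

scrape = """\
# TYPE http_errors_total counter
http_errors_total 57
heap_used_bytes 1048576
"""

alerts = check_thresholds(scrape, {"http_errors_total": 50})
assert alerts == ["http_errors_total=57.0 exceeds threshold 50"]
```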

Figure 6: Monitors configured for all of the Products in an Environment make it easy to track the health of software components

Apollo monitors our IL6 services and routes alerts to members of our NOC team through an embedded alert inbox. Alerts are automatically linked to relevant service logging and any associated Apollo activity, which has drastically sped up the remediation process when services or infrastructure experience unexpected issues. The NOC is able to address alerts by following runbooks prepared for and linked to within alerts. When needed, alerts are triaged to teams that own the product for more input.

Because we’ve standardized our monitors in Apollo, we’ve been able to create straightforward protocols and processes for responding to incidents, which means we can act on contingency plans more quickly and ensure our systems remain secure.

Access Control (AC)

The Access Control (AC) control family describes the measures in a system for managing accounts and ensuring accounts are only given the appropriate levels of permissions to perform actions in the system.

Robustly addressing AC controls includes having a flexible system where individual actions can be granted based on what a user needs to be able to do within a specific context.

In Apollo, every action and API has an associated role, which can be assigned to individual users or Apollo Teams, which are managed within Apollo and can be mirrored from an SSO provider.

Roles necessary to operating environments (e.g. approving the installation of a new component) are granted to our Baseline team, and are restricted as needed to a smaller group of environment owners based on an environment’s compliance requirements. Team management is reserved for administrators, and roles that include product lifecycle actions (e.g. recalling a product release) are given to development teams.

Figure 7: Products and Environments have configurable ownership that ensures the right team is monitoring their resources

Having a single system to divide responsibilities by functional areas means that our access control system is consistent and easy to understand. Further, being able to granularly assign roles for different actions makes it possible to meet the principle of least-privilege system access that underpins AC controls.
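
The role model described above boils down to a least-privilege lookup. A minimal sketch, with hypothetical role and action names rather than Apollo’s actual roles:

```python
# Role-based access check: an action is allowed only if one of the user's
# roles explicitly grants it -- everything else is denied by default.

ROLE_GRANTS = {
    "environment-owner": {"approve-install", "edit-environment"},
    "team-admin":        {"manage-teams"},
    "developer":         {"recall-release", "create-release"},
}

USER_ROLES = {
    "baseline-ops": {"environment-owner"},
    "alice":        {"developer"},
}

def can(user, action):
    """Least privilege: deny unless some role held by the user grants the action."""
    return any(action in ROLE_GRANTS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert can("baseline-ops", "approve-install")
assert not can("alice", "approve-install")   # developers cannot approve installs
assert can("alice", "recall-release")
```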

Conclusion

The bar to operate with IL6 information is rightfully a high one. We know obtaining IL6 authorization can feel like a long process — however, we believe this should not prevent the best technology from being available to the U.S. Government. It’s with that belief that we built Apollo, which became the foundation for how we deploy to all of our highly secure and regulated environments, including FedRAMP, IL5, and IL6.

Additionally, we recently started a new program, FedStart, where we partner with organizations just starting their accreditation journey to bring their technology to these environments. If you’re interested in working together, reach out to us at fedstart@palantir.com for more information.

Get in touch if you want to learn more about how Apollo can help you deploy to any kind of air-gapped environment, and check out the Apollo Content Hub for white papers and other case studies.

This post originally appeared on Palantir.com and is re-published with permission.

Download our Resource, “Solution Overview: Palantir—Apollo” to learn more about how Palantir Technologies can support your organization.

Palantir Announces Availability of Foundry on Microsoft Azure

Amid global economic uncertainty, access to integrated, protected, and trusted data and analytics is more vital than ever when it comes to creating business value. To further enable transformative outcomes, Palantir is pleased to partner with Microsoft in making Palantir Foundry available on Microsoft Azure, empowering existing and new customers to more effectively apply data and analytics in their operational decision-making.

Through this new collaboration, organizations will be able to quickly deploy Palantir Foundry — our ontology-powered operating system for the modern enterprise — as well as unlock further value in Azure Data Services with Microsoft’s cloud-scale analytics and AI solutions.

As part of this relationship, our Foundry platform is available on Azure, enabling customers to deploy our software at speed, while benefiting from Azure’s trusted and secure infrastructure, as well as its global commercial footprint.

Availability on the Azure Marketplace will enable seamless purchasing and invoicing, with customers able to use their existing Microsoft Azure Consumption Commitment (MACC) to purchase a Foundry license and infrastructure costs.

Foundry’s single-view ontology can layer on top of Azure Data Services, allowing customers to leverage their existing investments for faster time to value by better unlocking insights and by predicting and simulating outcomes for more data-driven decision-making.


The platform will also integrate with native Azure Data Services for enterprise data management on Microsoft Azure, such as Azure Data Lake, Azure Synapse Analytics, Microsoft Power BI, Microsoft Dynamics 365, Microsoft Teams, and Microsoft Industry Clouds. This means customers will be able to further build on their existing IT investments in Azure Data Services through Palantir’s software-defined data integration (SDDI) to products like Azure Synapse Analytics, Azure Data Lake Storage, Azure AI and Azure Machine Learning, alongside others.

“We’re pleased to partner with Palantir to bring Foundry to Microsoft Azure. Organizations around the world will be able to make their data more actionable by using Palantir’s platform for data-driven operations and decision making, powered by Azure’s cloud-scale analytics and comprehensive AI services.” — Deb Cupp, President, Microsoft North America

Better Together with Palantir Foundry and Azure Data Services

Our new relationship with Microsoft will also see us go to market together in joint opportunities across industries like energy and renewables, retail and CPG, as well as other cross-industry sustainability and ESG efforts, where Microsoft customers can enhance their existing digital transformation efforts in Azure Data Services:

  • Energy and Renewables: Foundry enables customers to integrate data at speed and scale from remote sensors and Azure IoT Hub, and to apply this data to drive up the efficiency of assets, from offshore oil to onshore wind.
  • Retail and CPG: The platform enables organizations to bring near-instant visibility into demand and the ability to adapt their promotions, inventory, and operations in real time.
  • Sustainability and ESG: We’re helping organizations in their net zero transition by creating a common carbon ontology to empower front line decision makers to adjust their work to meet emissions targets.
  • Healthcare and Life Sciences: Foundry is used across the healthcare and life sciences value chain, from drug discovery and development, through to manufacturing, marketing, and sales. Integrate with Azure Health Data Services to manage protected health information.

We are also working together to accelerate time to value for customers in these industries and many more by consolidating SAP and other ERPs using Palantir HyperAuto, helping them to create a more integrated data landscape. Palantir HyperAuto can help customers accelerate their journey to SAP on Azure and quickly surface insights in just hours.

Partnership in Action

Additional Palantir Foundry capabilities that can be deployed at speed via Azure include those from customers like the connected vehicle company Wejo. Wejo is a proud Palantir partner leveraging Foundry’s capabilities, and a global leader in Smart Mobility for Good™ cloud and software solutions for connected, electric, and autonomous vehicle data.

Their data comes from over 92 billion vehicle journeys and consists of more than 19.5 trillion data points, providing businesses and organizations across a variety of industries the power to innovate, drive growth, transform communities, and save lives.

“We want to help reduce the 1.3 million deaths that happen each year on the road and the additional 8 million due to emissions with smart mobility for good products and services. As part of the Foundry platform, we are excited that Palantir customers with Azure will be able to more rapidly drive integrated, protected, and trusted data and analytics from Wejo for smart mobility initiatives and business value.” — Sarah Larner, Executive Vice President of Strategy and Innovation at Wejo

We look forward to working with Microsoft to broaden Foundry’s availability, enabling clients across industries to better leverage their existing investments for improved operational outcomes.

Those interested in learning more about Palantir and Microsoft’s relationship can visit the Palantir website or get started today via the Azure Marketplace.

This post contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, expectations regarding the terms of the partnership and the expected benefits of the software platform and solutions. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond Palantir’s control. These risks and uncertainties include Palantir’s ability to meet the unique needs of its customers; the failure of its platforms and solutions to satisfy its customers or perform as desired; the frequency or severity of any software and implementation errors; its platforms’ reliability; and the ability to modify or terminate the partnership. Additional information regarding these and other risks and uncertainties is included in the filings Palantir makes with the Securities and Exchange Commission from time to time. Except as required by law, Palantir does not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

This post originally appeared on Palantir.com and is re-published with permission.

Download our Resource, “Impact Study: Accelerating Interoperability with Palantir Foundry” to learn more about how Palantir Technologies can support your organization.

Updates from Palantir Edge AI in Space

In April 2022, Palantir launched its Edge AI solution into space onboard Satellogic’s NewSat-27 as part of the SpaceX Transporter-4 mission. We’re excited to provide an update on our on-orbit imagery processing efforts. Between April and July, we performed various hardware and software tests in orbit, and over the past few months we have been receiving exciting results from our direct tasking and on-orbit processing pipelines onboard NewSat-27.

Where We Stand

As of November 2022, we have successfully demonstrated the capability for customers to task the satellite with multiple captures, resulting in over 100 images from NewSat-27’s multispectral camera.

We had our most recent live image capture and onboard processing test on October 30th over Tartus, Syria. Let’s run through how we handled these images, from the raw capture in orbit all the way to results on the ground, using Edge AI in space:

Raw images captured by the satellite consist of a single channel comprising four different ‘bands’ of information, each representing a specific wavelength of light. Palantir Edge AI then orchestrated our onboard imagery preprocessing services to convert batches of raw images into standard, three-channel RGB images. By processing images into the standardized format our models expect, we improve accuracy and produce more confident results for our users. As part of this specific capture, we received 44 images that we processed into six RGB images.
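As an illustrative sketch only (the actual band layout and calibration pipeline on NewSat-27 are not public), a band-to-RGB conversion of this kind might look like the following, assuming the first three bands map to red, green, and blue:

```python
import numpy as np

def bands_to_rgb(raw: np.ndarray) -> np.ndarray:
    """Convert a raw 4-band capture of shape (H, W, 4) into an 8-bit RGB image.

    The band ordering and the simple min-max stretch below are assumptions
    for illustration, not the sensor's actual calibration.
    """
    # Assume bands 0..2 correspond to red, green, blue; band 3 (e.g. NIR) is dropped.
    rgb = raw[..., :3].astype(np.float64)
    # Stretch each channel independently to the full 0-255 range.
    for c in range(3):
        channel = rgb[..., c]
        lo, hi = channel.min(), channel.max()
        rgb[..., c] = 0 if hi == lo else (channel - lo) / (hi - lo) * 255
    return rgb.astype(np.uint8)
```

In practice, a real pipeline would also apply radiometric calibration and sensor-specific corrections before any model sees the image.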


After pre-processing was completed, we ran AI models onboard the satellite. For this particular capture, Edge AI ran our in-house Palantir Omni model to identify buildings in the images. We received 210 building detections, or ‘inferences’, from the model. For each inference, our post-processing services created PNG thumbnails and computed geodetic coordinates using the satellite telemetry and the onboard global elevation datasets. The outputs were then bundled and secured using various onboard cryptographic mechanisms, so we could validate the data once it was received on the ground.
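The bundle-and-validate step can be sketched in ordinary software terms. Here a software HMAC over a digest manifest stands in for whatever onboard (possibly hardware-backed) mechanisms Palantir actually uses; the function names and bundle format are hypothetical:

```python
import hashlib
import hmac
import json

def bundle_outputs(artifacts: dict, key: bytes) -> dict:
    """Bundle named output artifacts (name -> bytes) with per-file SHA-256
    digests and a single HMAC tag over the manifest."""
    manifest = {name: hashlib.sha256(data).hexdigest()
                for name, data in sorted(artifacts.items())}
    # Deterministic serialization so the ground can recompute the same tag.
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest,
            "tag": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify_bundle(bundle: dict, artifacts: dict, key: bytes) -> bool:
    """Ground-side check: recompute digests and the HMAC tag, then compare."""
    expected = bundle_outputs(artifacts, key)
    return hmac.compare_digest(expected["tag"], bundle["tag"])
```

Any tampering with a thumbnail or coordinate file changes its digest, which changes the manifest and therefore invalidates the tag.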

In our initial on-orbit tests, we discovered an edge-case bug in our pre-processing algorithm. To remedy the issue, we uplinked a small software patch to the satellite that modified how we convert these individual images into RGB images. Once the patch was uplinked, we were able to update our onboard software to account for this new case within seven minutes. With this upgrade infrastructure in place, we can continuously refine and augment our in-orbit software and algorithms.

Notably, in this live capture we were able to demonstrate the software’s capacity to process all 44 frames for customers within seven minutes. In our previous post, we discussed the strict time constraints on each individual Edge AI processing run. Even when we accounted for the update, our end-to-end processing time was comfortably within the thresholds we had initially targeted. For even larger captures, our software features a built-in checkpointing system for resuming work in the event that processing must be halted.
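A minimal sketch of how such a checkpointing scheme could work, assuming a simple persisted index of the next unprocessed frame (the onboard format and mechanics are not public, so everything here is illustrative):

```python
import json
from pathlib import Path

def process_frames(frames: list, checkpoint: Path, step) -> list:
    """Process frames in order with resume support.

    After each frame, the index of the next frame is persisted, so a run
    that is halted mid-capture restarts where it left off rather than
    reprocessing from frame 0.
    """
    start = json.loads(checkpoint.read_text())["next"] if checkpoint.exists() else 0
    results = []
    for i in range(start, len(frames)):
        results.append(step(frames[i]))  # run one processing step on this frame
        checkpoint.write_text(json.dumps({"next": i + 1}))
    return results
```

On a second invocation after a completed run, the checkpoint reports all frames done and nothing is reprocessed.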

What’s Next?

While this previous version of our Omni model was geared towards identifying buildings of interest and focused on the onboard integration with the satellite, our next generation of in-house models can identify more specialized object classes, such as ships. These models are already running on the ground as we test their performance. We ran this same capture through one of our newer models and were able to identify various ships near the port of Tartus in Syria with high confidence. We will be sending this new model up to the satellite in our next upgrade cycle. This will allow us to demonstrate Edge AI’s ability to continuously update and manage models while in flight, in order to optimize inference results based on areas of interest.

Figure 1: Ships off the coast of Tartus, Syria. Detections come from Palantir’s new in-house ML models on imagery collected as part of our Tartus capture.

We have also integrated our Edge AI outputs with Palantir MetaConstellation. MetaConstellation provides end-to-end software around satellite imaging, including an operational UI for image analysis. It allows users to annotate imagery with features and easily compare multiple images from different vendors and sensors over a given area of interest.

Our outputs from the AIP Satellite — either the combined image with detections, or just the PNG thumbnails — can be viewed directly within MetaConstellation. This means that in future deployments we could downlink directly from an Edge AI-equipped satellite to a tactical instance of MetaConstellation in the field, allowing detections and imagery to be delivered to operational users within minutes.


Figure 2: Palantir MetaConstellation makes imagery analysis readily accessible to users. Here, we compare imagery from our Tartus capture on October 30, 2022 with images that we had previously collected on September 17, 2022.

Our Ongoing Commitment

We are continuing to invest in our on-orbit capabilities and are currently focused on hardware-backed security mechanisms, upgraded model capabilities, and our in-house georegistration algorithm, which should dramatically increase the accuracy of our model inferences. We are also planning to introduce new communication options to facilitate direct downlink for data, which will allow Palantir to get inferences into the hands of our customers faster than ever before.

This post contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, expectations regarding the expected benefits and uses of our software platforms. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond Palantir’s control. These risks and uncertainties include Palantir’s ability to meet the unique needs of its customers; the failure of its platforms and solutions to satisfy its customers or perform as desired; the frequency or severity of any software and implementation errors; its platforms’ reliability; and the ability to modify or terminate the partnership. Additional information regarding these and other risks and uncertainties is included in the filings Palantir makes with the Securities and Exchange Commission from time to time. Except as required by law, Palantir does not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

This post originally appeared on Palantir.com and is re-published with permission.

Download our Resource, “Resilient and Effective Space Capabilities” to learn more about how Palantir Technologies can support your organization.