Palantir Announces Availability of Foundry on Microsoft Azure

Amid global economic uncertainty, access to integrated, protected, and trusted data and analytics is more vital than ever when it comes to creating business value. To further enable transformative outcomes, Palantir is pleased to partner with Microsoft in making Palantir Foundry available on Microsoft Azure, empowering existing and new customers to more effectively apply data and analytics in their operational decision-making.

Through this new collaboration, organizations will be able to quickly deploy Palantir Foundry — our ontology-powered operating system for the modern enterprise — and unlock further value from Azure Data Services with Microsoft’s cloud-scale analytics and AI solutions.

As part of this relationship, our Foundry platform is available on Azure, enabling customers to deploy our software at speed, while benefiting from Azure’s trusted and secure infrastructure, as well as its global commercial footprint.

Availability on the Azure Marketplace will enable seamless purchasing and invoicing, and customers will be able to apply their existing Microsoft Azure Consumption Commitment (MACC) toward Foundry license and infrastructure costs.

Foundry’s single-view ontology can layer on top of Azure Data Services, allowing customers to build on those investments for faster time to value: unlocking insights, and predicting and simulating outcomes, for more data-driven decision-making.


The platform will also integrate with native Azure Data Services for enterprise data management on Microsoft Azure, such as Azure Data Lake, Azure Synapse Analytics, Microsoft Power BI, Microsoft Dynamics 365, Microsoft Teams, and Microsoft Industry Clouds. This means customers will be able to further build on their existing IT investments in Azure Data Services through Palantir’s software-defined data integration (SDDI) to products like Azure Synapse Analytics, Azure Data Lake Storage, Azure AI and Azure Machine Learning, alongside others.

“We’re pleased to partner with Palantir to bring Foundry to Microsoft Azure. Organizations around the world will be able to make their data more actionable by using Palantir’s platform for data-driven operations and decision making, powered by Azure’s cloud-scale analytics and comprehensive AI services.” — Deb Cupp, President, Microsoft North America

Better Together with Palantir Foundry and Azure Data Services

Our new relationship with Microsoft will also see us go to market together in joint opportunities across industries like energy and renewables, retail and CPG, as well as other cross-industry sustainability and ESG efforts, where Microsoft customers can enhance their existing digital transformation efforts in Azure Data Services:

  • Energy and Renewables: Foundry enables customers to integrate data at speed and scale from remote sensors and Azure IoT Hub, and to apply this data to improve the efficiency of assets, from offshore oil to onshore wind.
  • Retail and CPG: The platform enables organizations to bring near-instant visibility into demand and the ability to adapt their promotions, inventory, and operations in real time.
  • Sustainability and ESG: We’re helping organizations in their net zero transition by creating a common carbon ontology to empower front line decision makers to adjust their work to meet emissions targets.
  • Healthcare and Life Sciences: Foundry is used across the healthcare and life sciences value chain, from drug discovery and development through to manufacturing, marketing, and sales, and integrates with Azure Health Data Services to manage protected health information.

We are also working together to accelerate time to value for customers in these industries and many more by consolidating SAP and other ERPs using Palantir HyperAuto, helping them create a more integrated data landscape. Palantir HyperAuto can help customers accelerate their journey to SAP on Azure and surface insights in just hours.

Partnership in Action

Additional Palantir Foundry capabilities that can be deployed at speed via Azure include those from customers like the connected vehicle company Wejo. Wejo is a proud Palantir partner, optimizing Foundry’s capabilities, and a global leader in Smart Mobility for Good™ cloud and software solutions for connected, electric, and autonomous vehicle data.

Wejo’s data comes from over 92 billion vehicle journeys and consists of more than 19.5 trillion data points, giving businesses and organizations across a variety of industries the power to innovate, drive growth, transform communities, and save lives.

“We want to help reduce the 1.3 million deaths that happen each year on the road and the additional 8 million due to emissions with smart mobility for good products and services. As part of the Foundry platform, we are excited that Palantir customers with Azure will be able to more rapidly drive integrated, protected, and trusted data and analytics from Wejo for smart mobility initiatives and business value.” — Sarah Larner, Executive Vice President of Strategy and Innovation at Wejo

We look forward to working with Microsoft to broaden Foundry’s availability, enabling clients across industries to better leverage their existing investments for improved operational outcomes.

Those interested in learning more about Palantir and Microsoft’s relationship can visit the Palantir website or get started today via the Azure Marketplace.

This post contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, expectations regarding the terms of the partnership and the expected benefits of the software platform and solutions. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond Palantir’s control. These risks and uncertainties include Palantir’s ability to meet the unique needs of its customers; the failure of its platforms and solutions to satisfy its customers or perform as desired; the frequency or severity of any software and implementation errors; its platforms’ reliability; and the ability to modify or terminate the partnership. Additional information regarding these and other risks and uncertainties is included in the filings Palantir makes with the Securities and Exchange Commission from time to time. Except as required by law, Palantir does not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

This post originally appeared on Palantir.com and is re-published with permission.

Download our Resource, “Impact Study: Accelerating Interoperability with Palantir Foundry” to learn more about how Palantir Technologies can support your organization.

Enabling Responsible AI in Palantir Foundry

Editor’s Notes: The following is a collaboration between authors from Palantir’s Product Development and Privacy & Civil Liberties (PCL) teams. It outlines how our latest model management capabilities incorporate the principles of responsible artificial intelligence so that Palantir Foundry users can effectively solve their most challenging problems.

At Palantir, we’re proud to build mission-critical software for Artificial Intelligence (AI) and Machine Learning (ML). Foundry — our operating system for the modern organization — provides the infrastructure for users to develop, evaluate, deploy, and maintain AI/ML models to achieve their desired organizational outcomes.

From stabilizing consumer goods supply chains to optimizing airplane manufacturing processes and monitoring public health outbreaks across the globe, Foundry’s interoperable and extensible architecture has enabled data science teams worldwide to readily collaborate with their business and operational teams, allowing all stakeholders to create data-driven impact.


As we discussed in a previous data science blog post, using AI/ML for these important use cases demands software that spans the entire model lifecycle. Foundry’s first-class security and data quality tools enable users to develop AI/ML models, and by establishing a trustworthy data foundation, our software offers the connectivity and dynamic feedback loops that these teams need in order to sustain the effective use of models in practice.

Further to this, developing capabilities that facilitate the responsible use of artificial intelligence is an indispensable part of building industry-leading AI/ML capabilities. Here, we’ll share more about what responsible AI means at Palantir, and how Foundry’s latest model management and ModelOps capabilities enable organizations to address their most challenging problems.

Responsible AI at Palantir

At its core, our AI/ML product strategy centers around developing software that enables responsible AI use in both collaborative and operational settings. We believe that the term has many dimensions and includes considerations around AI safety, reliability, explainability, and governance. We’ve publicly advocated for a focused, problem-driven approach as well as the importance of robust data governance to AI/ML in multiple forums.

We believe that the tenets of responsible AI are not just limited to model development and use but have considerations throughout the entire model lifecycle. For example, developing reliable AI/ML solutions requires tools for the management and curation of high-quality data. These considerations extend beyond model deployment alone and include how end-users interact with their AI outputs and how they can use feedback loops for iteration, monitoring, and long-term maintenance.

Incorporating responsible AI principles in our software is also a core part of our commitment to privacy and civil liberties. Building this kind of software means recognizing that AI is not the solution to every problem and that a model for one problem will not always be a solution to others. A model’s intended use should be clearly and transparently scoped to specific business or operational problems.

Moreover, the challenges of using AI for mission-critical problems span a variety of domains and require expertise from a diverse breadth of disciplines. Building AI solutions should therefore be an interdisciplinary process where engineers, domain-experts, data scientists, compliance teams, and other relevant stakeholders work together to ensure the solution represents the specialized demands and requirements of the intended field of application. The values of responsible AI shape how we build our software, and in turn, they enable our customers to use AI/ML solutions in Foundry for their most critical problems.

Model Management in Foundry

Building on the platform’s robust security and data governance tools, Foundry’s model management capabilities are designed to encourage users to incorporate responsible AI principles throughout a model’s lifecycle. We have recently released product capabilities that improve the testing and evaluation ecosystem through no-code and low-code interfaces. We encourage you to read more about these here.

Problem-first modeling

In Foundry, orienting around the “operational problem” that models are trying to solve is at the heart of this new model management infrastructure. Foundry offers many tools for a data-first and exploratory approach to model experimentation, but for mission-critical use-cases, AI/ML applications need to be scoped to a specific problem. We have deliberately built modeling objectives to focus model development, evaluation, and deployment around well-defined problems.

The Modeling Objectives application enables users to define a problem, develop candidate models as solutions to these challenges, perform large-scale testing and evaluation, deploy models in many modalities to both staging and production applications, and then monitor them to enable faster iteration.

Specifying the modeling problem from the outset enables collaborators to better understand — and test for — the application and context for which the models are intended. This also provides greater insight into inadvertent reuse or repurposing of models. Modeling objectives provide a flexible yet structured framework that presents an opportunity to streamline model development and deployment by collecting key datasets, identifying stakeholders, and creating a testing and evaluation plan before their development begins.

These objectives also transparently communicate state about a particular AI/ML solution — from model development to testing, to deployment and further post-deployment actions like monitoring and upgrades. This enables users to be more intentional, responsible, and effective in how they use AI to address their organization’s operational challenges.

Deep integrations for security and governance

Data protection, governance, and security are core components of Palantir Foundry and are especially important for AI/ML. AI solutions must be traceable, auditable, and governable in order to be used effectively and responsibly. To facilitate this, Foundry’s model management infrastructure integrates deeply with the platform’s robust capabilities for versioning, branching, lineage, and access control.

Users can submit a model version to an objective and propose that model as a candidate solution for the problem defined in that objective. When submitting a model, users are encouraged to fill out metadata about the submission which becomes part of its permanent record. Project stakeholders and collaborators can use this to better understand the details of each submission and create a system of record that catalogs all future models for a particular modeling problem. With Data Lineage, they can also quickly see the provenance of every model that is submitted to an objective, revealing not only the models themselves, but also their training and testing data and what sources those datasets originally came from.

Foundry’s model management infrastructure natively integrates with the platform’s security primitives for access controls. This enables multiple model developers, evaluators, and other stakeholders to work together on the same modeling problem, while maintaining strict security and governance controls.

Robust testing and evaluation capabilities

Testing and evaluation (T&E) is one of the most critical steps in any model’s lifecycle. During T&E, subject matter experts, data scientists, and other business stakeholders determine whether a model is both effective and efficient for any given modeling problem. For example, models may need to be evaluated quantitatively and qualitatively, assessed for bias and fairness concerns, and checked against organizational requirements before they can be deployed to applications in production environments. That’s why we have released a new suite of capabilities to facilitate more effective and thorough T&E in Foundry.

Foundry now offers evaluation libraries for common AI/ML problems as a part of the Modeling Objectives application. The availability and native integration of these libraries within Foundry’s model management infrastructure enable users to quickly produce well-known, quantitative metrics in a point-and-click fashion for common modeling problems, all without having to dive into any technical implementation.

We’ve also included a framework for users to write their own custom evaluation libraries. Libraries authored in this framework benefit from the same UI-driven workflow and integration with modeling objectives. This extends the power of the integrated evaluation framework to more advanced modeling problems or context-specific use cases.

Building on the evaluation library integrations, we’ve also added the ability to easily evaluate models across subsets of data. This lets users quickly and exhaustively compute metrics to identify areas of model weakness that might otherwise go undetected if only computing aggregate metrics. Evaluating models on subsets can more easily surface bias or fairness concerns that affect only a portion of the model’s expected data distribution. Users can also configure their T&E workflows to run automatically on all candidate models proposed for a problem in order to build a T&E procedure that is both systematic and consistent.
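
Foundry surfaces these capabilities through its own point-and-click applications, but the underlying idea of a user-defined metric computed per subset is easy to illustrate. The following is a minimal, hypothetical sketch in plain Python (not Foundry’s API): a custom metric is applied over the full dataset and again over each subset, so localized weaknesses are not hidden behind an aggregate number.

```python
from collections import defaultdict

def accuracy(examples):
    """Fraction of examples where the prediction matches the label."""
    correct = sum(1 for e in examples if e["prediction"] == e["label"])
    return correct / len(examples)

def evaluate_by_subset(examples, subset_key, metric=accuracy):
    """Compute a metric over the whole dataset and over each subset.

    `examples` is a list of dicts with 'prediction', 'label', and the
    field named by `subset_key` (e.g., a region or demographic bucket).
    """
    groups = defaultdict(list)
    for example in examples:
        groups[example[subset_key]].append(example)

    report = {"overall": metric(examples)}
    for name, group in sorted(groups.items()):
        report[name] = metric(group)
    return report

# Hypothetical toy data: aggregate accuracy looks fine, but one subset lags.
data = [
    {"region": "north", "label": 1, "prediction": 1},
    {"region": "north", "label": 0, "prediction": 0},
    {"region": "south", "label": 1, "prediction": 0},
    {"region": "south", "label": 0, "prediction": 0},
]

print(evaluate_by_subset(data, "region"))
# {'overall': 0.75, 'north': 1.0, 'south': 0.5}
```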

We also recognize that not all T&E procedures are quantitative. Therefore, checks in modeling objectives help track the pre-release tasks that must be completed as part of the T&E process before a model can be released.

Looking ahead

Modeling objectives and the T&E suite are just some of the latest capabilities to encourage responsible AI in Foundry, and we continue to invest in new capabilities for effective model management. From the tools that facilitate robust model evaluation across domains, to mechanisms for seamless model release and rollback in production settings, our model management offering will always focus on empowering our customers to use their AI/ML solutions effectively, easily, and responsibly for their organization’s most challenging problems.

This post originally appeared on Palantir.com and is re-published with permission.

Download our Resource, “Palantir Named a Leader in AI/ML Platforms” to learn more about how Palantir Technologies can support your organization.

Safeguarding Mission-Critical Data: Veeam’s Unwavering Commitment to Data Protection and Secure Products for Government Customers

Protecting customer data

In today’s digital landscape, data security is of utmost importance. At Veeam Software (Veeam), we recognize the significance of safeguarding our customers’ sensitive information. As part of our ongoing commitment to security, we are actively pursuing Common Criteria and Department of Defense Information Network Approved Product List (DoDIN APL) certifications. In addition, we are fully compliant with Cybersecurity Maturity Model Certification (CMMC) v2 Level 1 (awaiting validation) and engage in Independent Verification & Validation (IV&V). We have also successfully completed FIPS 140-2, SOC 2 Type 1 and ISO 27001 certifications and are implementing the Secure Software Development Framework (SSDF) to further fortify our security measures. This update provides an in-depth understanding of these certifications and our dedication to maintaining the highest data protection standards.

Common Criteria certification and DoDIN APL

Common Criteria is an internationally recognized standard for evaluating the security of information technology products. It involves a comprehensive evaluation process, testing our software against rigorous security requirements. By pursuing Common Criteria certification, our goal is to provide our customers assurance that our products adhere to the highest security standards acknowledged by over 30 countries worldwide.

In parallel, we are also pursuing the DoDIN APL certification, which is specifically relevant for our customers operating within the Department of Defense (DoD) ecosystem. This certification ensures that our products meet the stringent security requirements set by the Defense Information Systems Agency (DISA), thereby enhancing the protection of data within the DoDIN framework.

CMMC v2 Compliance


The Cybersecurity Maturity Model Certification (CMMC) is an integral part of our commitment to ensuring the security of our customers’ data. CMMC v2 is the latest version of this unified standard designed to assess the cybersecurity posture of the defense industrial base (DIB). Compliance with CMMC v2 signifies that our security practices align with the stringent requirements defined by the Department of Defense (DoD). By adhering to these standards, we assure our customers within the defense sector that their data is safeguarded with the utmost care and resilience.

Independent Verification & Validation (IV&V)

To reinforce our security measures, we have engaged in Independent Verification & Validation (IV&V). This process involves a third-party organization conducting thorough testing and evaluation of our software. The independent nature of IV&V ensures an unbiased assessment of our security controls, offering an additional layer of confidence in our commitment to protecting valuable customer data.

FIPS 140-2, SOC 2 Type 1 (and soon Type 2) and ISO 27001 certifications

Veeam has successfully completed several vital certifications that further fortify our security posture. FIPS 140-2 is a U.S. government standard that verifies the security requirements of cryptographic modules. This certification ensures that our encryption methods meet the highest standards and provide robust data protection.

SOC 2 Type 1 certification demonstrates our dedication to maintaining the security, availability, processing integrity, confidentiality and privacy of data. We are actively working toward SOC 2 Type 2 certification, which will enable us to demonstrate even greater control efficacy and maturity across our systems and processes.

Additionally, Veeam’s compliance with the ISO 27001 standard underscores our commitment to establishing and maintaining a comprehensive information security management system (ISMS). This certification validates that our security practices align with globally recognized best practices, ensuring customer data remains safe and secure.

Implementation of the Secure Software Development Framework (SSDF)

As part of our continuous improvement efforts, Veeam is in the process of implementing the Secure Software Development Framework (SSDF). This framework provides guidance on designing, developing and testing software to ensure adherence to specific security standards. The SSDF allows us to integrate robust security practices into our software development lifecycle, ensuring we proactively address security concerns at every stage of the development process and build products with security in mind from the ground up. By incorporating the SSDF into our development processes, we enhance the security of our software and reinforce our commitment to delivering robust and secure solutions.

At Veeam, our customers’ data security is our top priority, and we are committed to maintaining the highest levels of protection for mission-critical data. Pursuing Common Criteria and DoDIN APL certifications, complying with CMMC v2, engaging in Independent Verification & Validation, completing FIPS 140-2, SOC 2 Type 1 (and soon Type 2) and ISO 27001 certifications, and implementing the Secure Software Development Framework (SSDF) all demonstrate our unwavering dedication to data security.

By undergoing these certifications and implementing industry-leading security measures, we ensure that customer data remains secure, regardless of the sector. We will continue to evolve and improve our security practices to stay ahead of emerging threats and provide customers the peace of mind they deserve.

When customers choose Veeam and the Veeam Data Platform, they can rest assured they have selected a trusted partner committed to securing their data and the data of their customers, end-users and partners. We value the trust we have built with our government customers and will continue to deliver the highest level of data protection possible to ensure mission continuity.

Contact a member of our team today and learn more about how Veeam can support your mission-critical data initiatives.

7 Key Takeaways from HIMSS23

In April, over 40,000 global health professionals converged in Chicago for the highly anticipated HIMSS23 Global Health Conference & Exhibition. Over the course of five days, healthcare, government and technology leaders discussed everything from wearable medical devices and artificial intelligence (AI) to cybersecurity and compliance. Here are some highlights and key themes from the conference.

  1. Change is happening quickly: The buzz around ChatGPT offers a perfect illustration of just how quickly AI has become part of our everyday lives. There are many applications for AI in the healthcare space as well. In procedure rooms, cameras with AI can ensure processes are being followed, thereby helping to avoid malpractice. One key question circulating at the conference was: how can regulations be put in place to protect patients’ and practitioners’ privacy as this new technology starts to be implemented?

 

  2. The cloud is here to stay: Underpinning many new technologies is the cloud. As more healthcare organizations use hybrid and multi-cloud environments, compliance becomes increasingly complicated and important. This is particularly true considering regulations and data protection laws are constantly changing. One benefit is that there is a lot of overlap between compliance requirements. Looking for these common requirements (e.g., encrypting sensitive data) can help organizations navigate the seemingly complex world of compliance.

 

  3. Data presents a paradox: Data holds tremendous potential to transform healthcare operations, but the promise of data-informed decision-making must be balanced with both the data overload felt by those on the front lines, and the preservation of patient privacy. Electronic health records (EHRs) have made the lives of doctors and nurses easier in many ways, but they have also required workers to document much more granular information to meet regulation and reimbursement requirements. As such, many workers are skeptical of health IT’s ability to alleviate burnout. Integrating data into the culture of the organization is the best way to ensure everyone is capturing the proper data and maximizing new technology investments.

 

  4. Pursue interoperability: Not just having the data, but sharing that information is also crucial. By improving access to clinical data across institutions, we can discover new therapies, lower medical costs and improve patient care; however, interoperability also requires compliance and due diligence. At HIMSS23, panelists from the National Institute of Standards and Technology (NIST) described how next-generation database access control can facilitate data-sharing without moving large volumes of data. This promotes interoperability while preserving local protection policies. Additionally, panelists from the Centers for Medicare and Medicaid Services (CMS) emphasized the importance of Fast Healthcare Interoperability Resources (FHIR) standards.

 

  5. Care is expanding beyond hospital walls: Increasingly, wearable technology is becoming a staple of healthcare, as it can help with monitoring everything from glucose levels to physical activity, in addition to supporting weight control and disease prevention. More than anything, wearables offer the opportunity to continue patient care outside the walls of the hospital, which reduces the cost of care. The data collected by wearable technology holds tremendous potential for analysis at both a patient level and the population level.

 

  6. Cybersecurity must be top-of-mind: While wearables have many benefits, they must be used with cybersecurity in mind. A continuous glucose monitor that connects to the internet and patient portal, for example, could put all patient data at risk if the device is compromised. That’s why an Institute of Electrical and Electronics Engineers (IEEE) working group has developed a framework built on Trust, Identity, Privacy, Protection, Safety and Security (TIPPSS) principles for keeping devices with sensors safe. The goal is to make TIPPSS the standard for clinical Internet of Things (IoT) devices first, then for other solutions.

 

  7. Privacy: Patient privacy was also a leading theme at HIMSS23. When working with AI, algorithms must be trained on large volumes of data. At the conference, panelists discussed how healthcare providers and tech companies can balance using this protected health information (PHI) to improve AI while still adhering to privacy laws like HIPAA. Data de-identification is one approach to get the most out of large volumes of data while maintaining patient privacy.

Overall, a common thread throughout HIMSS23 was balance. Healthcare providers and tech companies must balance the promises of technology with due diligence, while working in partnership to develop innovative solutions. From data standards to data privacy, it is crucial to collaborate with the government to lay the right foundation for using these cutting-edge technologies.

 

Visit our Healthcare Solutions Portfolio to learn more about HIMSS 2023 and how Carahsoft can support your organization’s healthcare technology goals and initiatives.

*The information contained in this blog has been written based off the thought-leadership discussions presented by speakers at HIMSS 2023.*

AvePoint Adds Governance, Management, Data Protection and Migration Support for Microsoft Power Platform

Carahsoft partner AvePoint Public Sector recently announced its support for the governance, management, migration and data protection of Microsoft Power Platform environments. As more organizations adopt Power Platform to automate processes, build digital solutions, analyze data and create virtual agents, IT leaders need strategies that support their unique governance, security and compliance requirements.

AvePoint’s support for Power Platform helps organizations:

  • Provide scalable management and governance: Access management and risk assessments allow organizations to quickly drive impactful collaboration and sustainable Power Platform adoption. Best practices and productivity can be achieved through automated governance and policies, enforcing proper control of data access and functionality.
  • Protect critical workspaces, apps and flows: AvePoint’s automated backup for Power BI workspaces, Power Apps and flows provides seamless protection against accidental data deletion, user error and ransomware. This way, organizations can ensure they’re protected, compliant and prepared for business continuity when using Power Platform.
  • Seamlessly migrate data: Building on AvePoint’s award-winning migration capabilities, organizations can now migrate apps from an environment within the same tenant or between tenants – giving organizations more opportunities to successfully use Power Platform.

Some organizations are already taking advantage of AvePoint’s Power Platform support. “AvePoint’s support for Power Platform has helped us empower employees to safely build solutions that will enhance their work,” Mike Fettner, Principal Office 365 Engineering at Regeneron, said. “As an organization, this allows us to continue taking smart risks because we know robust governance solutions will put the right guardrails in place, and data protection will ensure none of our data or workflows are lost.”

Register today to join AvePoint and Microsoft for Power Platform Workshop: A Framework to Manage and Govern Power Platform at Scale, coming to a city near you later this Spring.

Connecting Customers with AvePoint and Industry Solutions

It has never been easier to count on Carahsoft and AvePoint. We can help your agency with:

  • Quick quote turnaround and smart spending
  • Industry-expert cloud computing product recommendations
  • 24/7 live assistance to get you up and running faster

Contact a member of the Carahsoft and AvePoint Public Sector team today and discover how we can support your organization.

The Open Source Revolution in Government

Open source technology accounts for a significant portion of most modern applications, with some estimates going as high as 90%, and it is the foundation of many mainstream technologies. Its strength lies in the fact that a vibrant ecosystem of developers contribute to and continually improve the underlying code, which keeps the software dynamic and responsive to changing needs. Enterprise open source software further augments these community-driven projects by providing enterprise-grade support and scalability, while retaining the innovation and flexibility driven by the open source development model. By providing the best of both worlds, such solutions represent a powerful arsenal of tools for addressing government’s most pressing challenges.

In a recent pulse survey of FCW readers, 93% of respondents said they were using open source technology. And more than half of respondents to FCW’s survey see open source as an integral resource for strengthening cybersecurity. That number reflects a positive trend toward a better understanding of open source software’s intrinsic approach to security.

The power of enterprise open source technologies lies in a combination of collaboration, transparency and industry expertise. As agencies expand their use of such technologies, they maximize their ability to achieve mission success in the most secure, agile and innovative way possible. Learn how the combined power of community-driven innovation and industry-leading technical support is expanding the government’s capacity for transformation in Carahsoft’s Innovation in Government® report.

 

Why Open Source is a Mission-Critical Foundation  

“Open source transforms the way agencies manage hybrid and multi-cloud environments. The most critical technology in the cloud, across all providers, is Linux. Everything is built on top of that foundation — both the infrastructure of the cloud and cloud offerings. Given the right partner, the promise of Linux is that it provides a consistent technology layer for agencies across all footprints, including multiple cloud providers, on-premises data centers and edge environments. From that foundation, agencies and their partners can build portable architectures that leverage other open source technologies. Portability gives organizations the ability to use the same architectures, underlying technologies, monitoring and security solutions, and human skills to manage mission-critical capabilities across all footprints.”

Read more insights from Christopher Smith, Vice President and General Manager of the North America Public Sector at Red Hat.

 

How Open Source is Expanding its Mission Reach

“The real power of open source technologies was revealed when they cracked the code on being highly powered, mission-specific, distributed systems. That’s how we are able to get insights out of data by being able to hold it and query it. Today, open source innovation is being accelerated by the cloud, and the conversation is still changing, with people now demanding that their open source companies be cloud-first platforms. Along the way, the open source technologies that start in the community and then receive a boost of commercial innovation have matured. The most powerful ones are expanding their ability to address more of the government’s mission needs. They are staying interoperable and keeping the data interchange non-proprietary, which is important for government agencies.”

Read more insights from David Erickson, Senior Director of Solutions Architecture at Elastic.

 

The Open Source Community’s Commitment to Security  

“A central tenet of software development is visibility and traceability from start to finish so that a developer can follow the code through development, testing, building and security compliance, and then into the final production environment. Along the way, there are some key activities that boost collaboration and positive outcomes, starting with early code previews, where developers can spin up an application for stakeholders to review. Other activities include documented code reviews by peers to ensure the code is well written and efficient. In addition, DevOps components such as open source, infrastructure as code, Kubernetes as a deployment mechanism, automated testing, and better platforms and capabilities have helped developers move away from building ecosystems and instead focus on innovation.”

Read more insights from Joel Krooswyk, Federal CTO at GitLab.

 

The Limitless Potential of an Open Source Database

“One of the most important elements of any database migration is ensuring that proper planning and due diligence have been performed to ensure a smooth and successful deployment. In addition, there are some key considerations agencies should keep in mind when moving to open source databases. It is essential to start with a clear understanding of the business case and objectives for adopting an open source approach. Agencies also need to decide how the database should function and what it should do to support their digital transformation. Then they must choose the optimal method to deploy the database.”

Read more insights from Jeremy A. Wilson, CTO of the North America Public Sector at EDB.

 

Modernizing Digital Services with Open Source

“A composable, open source digital experience platform (DXP) enables agencies to overcome those challenges. Open source technology is continuously contributed to by a community of developers to reflect a wide array of needs across organizations in varying industries and of varying sizes. A composable approach allows agencies to assemble a number of solutions for a fast, efficient system that is tailored to their needs. When agencies combine a composable DXP with open source technology, they have access to best-of-breed software and the ability to customize the assembly to suit their requirements. An enterprise DXP will enable agencies to achieve a 360-degree view of how constituents are engaging with their digital services and gain valuable data to understand how to enhance their experience. Finally, a composable, open source DXP provides a proactive approach to protecting against security and compliance vulnerabilities.”

Read more insights from Tami Pearlstein, Senior Product Marketing Manager at Acquia.

 

Creating Secure Open Source Repositories

“Protecting the software supply chain requires looking at every single thing that might come into an agency’s environment. To understand that level of visibility, I like to use the analogy of a refrigerator. All the ingredients necessary to make a cake or pie are in the refrigerator. We know they are of good quality, and other teams can use them instead of having to find their own. At Sonatype, our software equivalent of a refrigerator is the Nexus Repository Manager. A second aspect of our offering, called Lifecycle, allows us to evaluate the open source components in repositories at every stage of the software development life cycle. One piece of software can download a thousand other components. How do we know if one of those components is malicious?”

Read more insights from Maury Cupitt, Regional Vice President of Sales Engineering at Sonatype.

 

Better Data Flows for a Better Customer Experience

“A more responsive and personalized customer experience isn’t much different from the initial problem set that gave birth to Apache Kafka. When people interact with agencies, they want those agencies to know who they are and how they’ve interacted in the past. They don’t want to be asked for their Social Security number three times on the same phone call. They also expect that the information or service they receive will be the same whether they are accessing it over the phone, via a mobile app and on a website. To elevate the quality of their service, agencies must be able to stream information in a low-friction way so different systems are consistent with one another and up-to-date at all times, regardless of the communication channel an individual uses. President Joe Biden’s executive order about transforming the federal customer experience is based on this capability. The most successful companies across industries have figured out how to do it, and for the most part, they’ve done it with open source software.”

Read more insights from Jason Schick, General Manager of Confluent US Public Sector.

 

An Open Source Approach to Data Analytics

“For the past 40 years, agencies have used data warehouses to collect and analyze their data. Although those warehouses worked well, they were limited in what they could do. For instance, they could only handle structured data, but by some estimates, 90% of agencies’ data is unstructured and in the form of text, images, audio, video and the like. Furthermore, proprietary data warehouses can show agencies what has happened in the past but can’t predict what might happen in the future. To achieve the government’s goal of evidence-based decision-making, agencies need to be able to tap into all their data and predict what might come next.”

Read more insights from Howard Levenson, Regional Vice President at Databricks.

 

Download the full Innovation in Government® report for more insights from these open source thought leaders and additional industry research from FCW.

Overcoming Data Challenges With Virtualization

Despite the variation in their individual mandates, all regulatory agencies have one main objective: to protect the public. However, there are hurdles to this goal. There are heavy costs associated with data warehousing, as large projects require extensive telecommunication and server space. This can be both expensive and time-consuming. Luckily, by implementing data virtualization tools, agencies can overcome these constraints and provide more effective services.

What is Data Virtualization?

Data virtualization is an approach to data management that helps organizations accelerate the turnaround time for converting data into digestible information. The underlying data sources can come from a variety of locations, including distributions and data stores, as well as any documents, emails or spreadsheets an agency has. With such a wide array of data, accessing and understanding all vital information can be both time-consuming and overwhelming. Data virtualization streamlines access to the answers and information agencies and users require.

How It Works

Data virtualization software begins by creating a layer over or around all existing data sources in an organization. Through its complementary interface, the software outputs the needed information. This process saves an abundance of time that is otherwise spent reading labels and searching for a single piece of information.

Another major benefit is that data virtualization software creates a layer of abstraction between the data source and what the user ultimately sees. The software arranges heterogeneous data from all the different sources across an organization, and then quickly presents it to the user. By properly interacting with the data sources, data virtualization software ensures that all data sources are correctly represented. This way, users can receive sufficient context behind the information they are accessing.
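
As a rough illustration of that abstraction layer (a generic sketch, not Thentia’s implementation, with made-up field names), the example below wraps two differently shaped “sources” behind small adapters and a single query function, so a user asks one question without ever touching the underlying formats.

```python
# Minimal sketch of a data virtualization layer: each adapter normalizes
# one source into a common record shape, and a single query spans them all.

licenses_db = [  # e.g., rows from a licensing database
    {"license_id": "A-100", "holder": "Jane Doe", "status": "active"},
]

complaints_csv = [  # e.g., rows parsed from a spreadsheet export
    {"Complaint #": "C-7", "Licensee Name": "Jane Doe", "Open": "Y"},
]

def adapt_license(row):
    return {"source": "licenses", "name": row["holder"], "record": row["license_id"]}

def adapt_complaint(row):
    return {"source": "complaints", "name": row["Licensee Name"], "record": row["Complaint #"]}

ADAPTERS = [
    (licenses_db, adapt_license),
    (complaints_csv, adapt_complaint),
]

def query_by_name(name):
    """Return every record about `name`, regardless of where it lives."""
    results = []
    for rows, adapt in ADAPTERS:
        results.extend(r for r in (adapt(row) for row in rows) if r["name"] == name)
    return results

print(query_by_name("Jane Doe"))
```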

Benefits of Virtualization Servers

Typically, data virtualization exists between the user and their vast array of data sources. Virtualizing tools have several benefits. They:

  • Reduce the processing time and cost
  • Provide the same opportunity to accomplish a variety of goals and objectives
  • Reduce expenses associated with data integration

In addition to these advantages, virtualization servers have the same security benefits as any other IT system. Data servers exist on a single network and are isolated from potential threats, and network isolation and segmentation prevent unnecessary crossover of information. With granular access control, users can implement micro-segmentation to strengthen this further. Lastly, by applying updates and new security patches, virtualization servers stay current with the latest cybersecurity practices. A professional licensing agency should always take steps to secure its software, but no additional steps are needed to protect virtualization servers.

Choosing the right data virtualization software

The process of implementing data virtualization can be daunting at first. As each organization differs in the types of information it collects and how that information is categorized, data virtualization will also differ. However, there are a few elements that regulatory agencies should consider. First, regulators should determine the setup/layout of their existing organization structure. Questions to consider include:

  • What existing technology is owned?
  • What systems are being worked with?
  • What are the agency’s needs?
  • What are the agency’s top priorities?

All these factors contribute to how data virtualization is implemented. Once the respective regulator reaches a higher end of technological maturity, it should begin looking into fully implementing data virtualization. With the proper virtualization software, regulators can swiftly sift through information.

Data virtualization servers reduce time, resources and cost for regulators

For a variety of agencies, data virtualization can greatly streamline and improve their access to information. By transforming manual systems into a digital, accessible process, virtualization servers reduce time, resources and cost for regulators in their ongoing work to best utilize data to aid the public.

To learn more about Thentia’s data virtualization solutions, visit our website.

EdTech Talks: Putting the Student Experience First

Co-contributors:
Michael Mast, Business Development Manager, Technology, E&I
Ken Chapman, VP of Learning Innovation Advocacy, D2L
Lisa Neu, Enterprise Account Executive for Higher Education, Talend
Bruce Ottomano, Director of Business Development, Passerelle
Rob Curtin, Director, Higher Education, Microsoft
Jesus Trujillo Gomez, Senior Strategic Business Executive, Google
Kate Parker, Vice President, Higher Ed Content Services, LearningMate
Angela Vann, Learning Design Strategist, LearningMate
Trevor Kelly, Principal Solution Consultant, Genesys
Khalil Yazdi, Resident EdTech CIO, Carahsoft

 

Today, educators and institutions have the unique challenge of leveraging modern digital technology to enhance and personalize the student experience. To build a student-centric relationship between an institution and its student body, educators must take the student’s perspective into account and question whether students are gaining a cohesive yet diverse experience engaging with their school. At EdTech Talks 2022, speakers provided recommendations to improve the student experience by mitigating these challenges and realizing the importance of integrating, analyzing and using data effectively and responsibly.

What’s Possible in the “New Now”: Why the Student Experience Matters

The 2022 Top 10 IT Issues stated, “Successfully moving along the path from vision to sustainability involves recognizing that no institution can be successful and sustainable without placing students’ success at the center, which includes understanding how and why to equitably incorporate technology into learning and the student experience.” Panelists articulated what this means to them and the role of technology in the student experience through various insights:

  • A one size fits all experience for students does not work. Through technology and data, institutions can shift the approach to education design and delivery at scale to extend the abilities of educators to reach and inspire students.
  • Institutions should have a platform that facilitates identifying a holistic view of the student body to allow staff to create personalized pathways for success with their students. Additionally, utilizing data analytics in a comprehensive view can distill valuable recommendations for students to remain on track.
  • With a plethora of student data collected, the challenge becomes unlocking the value of that data to pinpoint those personalized experiences and increase retention rates. In the higher education space, there is a pattern to collected data that a university can use to improve a student’s education journey and network relationship after graduation.
  • Students currently have higher expectations of their institutions’ technology, lifestyle and learning goals and brand experience. Schools need to continuously let their data guide changes for improved engagement and growth of their community.

Utilizing Core IT Systems to Build a Better Student Experience

Infrastructure platforms such as learning management systems (LMS) should enable students and instructors to have an efficient, organized and interoperable experience. By adopting a single hub space to support connectivity among community members, administrators can ease accessibility issues, learn from instructor best practices, understand the most successful methods for creating consistency and give students the ability to personalize their experience on their pathway to successful learning.

These data observations within educational IT systems promote a better comprehension of how to drive diverse learner engagement. With most assignments, content and homework now migrated to cloud environments and digital workspaces, instructors have practical insights into student engagement and work patterns. For example, teachers and professors can see who students collaborate with, how often they work on a particular document, how many edits are made before final review, etc. Core system data analysis can allow institutions to empower students in their experiences whether in the classroom or a remote learning environment.

Maximizing Quality Data

Finding and comprehending intelligent data regarding all students across a campus is essential to creating a rich learning environment, but this process is also taxing. Institutions should plan to implement these four steps to aid in data progress:

  • Effectively assemble student information and other systems data
  • Master data to ensure trusted results
  • Utilize a third party to enrich the quality of data and gain a broader view of students or alumni
  • Analyze relevant findings and implement actionable steps across campus

When addressing these elements, data overload can also be an issue. Data governance is critical for finding, securing and applying the right data to discover important insights about schools’ communities. Institutions should not allow everyone access to raw data because the sheer quantity of information can create confusion and only knowledgeable users will be able to fully discern relevant data. Instead, data stewards and analysts should present specialized views of the appropriate data to individual audiences, giving that data new quality.

When developing a holistic view of students, data isn’t the only solution. Sometimes the most effective technique to understanding each individual in a data-driven culture is to create spaces to audibly listen to their views, note their academic and personal reflections, create multiple points of calibrating their reactions and feedback, etc. Not only do students feel more heard, but also the educational changes that derive from these interactions create a strong positive outcome.

Addressing Learning Loss, Accessibility and Student Success

Fueled by the pandemic, the U.S. has seen a dramatic increase in learning loss. Math and reading scores have plummeted, and students have been severely impacted by the challenges that come with the hybrid learning environment. Additionally, the significant shortage of teaching staff in the K-12 domain, and therefore of consistent instructional content and operations, has negatively impacted engagement and learning success.

Technology solutions should meet the needs of students rather than hinder their ability to learn. Institutions can benefit from an integrated digital platform through which its instructors can proactively analyze data about their students. This way, schools support various types of learners to maximize their achievement in the classroom and help reduce the variability of remote learning. Online learning is here to stay, and it is the responsibility of schools to provide the updated digital infrastructure and accessibility, as well as first-rate IT support and communication needed for this new era of education.

 

Contributing experts from E&I Cooperative Services, Talend, D2L, Passerelle, Microsoft, Google, LearningMate and Genesys can support your student experience initiatives by breaking down barriers to student success and steering digital transformation efforts in the right direction. Visit Carahsoft’s EdTech Talks 2022 resource center to view their on-demand recordings and learn more about the featured education technology providers.

*The information contained in this blog has been written based off the thought-leadership discussions presented by speakers at Carahsoft’s EdTech Talk Series 2022.*

Data Integration with Siren

Siren is an intelligent investigative platform that quickly and efficiently finds connections between disparate data sources. These rapid connections are facilitated through an associative data model that defines how the data within your ecosystem are interconnected.


In today’s investigations landscape, it is often difficult to bring together the data that investigators and crime analysts use. Law enforcement entities often have a large number of systems including on-site data sources such as records management systems, computer-aided dispatch systems and jail management systems. Some of their sources are web-based data like open-source data providers. Additional data needed in an investigation may come from ad-hoc subpoena data (e.g. Call Detail Records) and case-specific forensic data from mobile devices (e.g., Cellebrite). At times, they need data from external data providers (e.g., CLEAR, Accurint), body-worn cameras (e.g., Axon), and license plate readers (e.g., Flock, Vigilant) to get sufficient context for their investigation. Siren effectively brings all these data sources together by connecting directly to the on-site data sources, connecting to any number of external data sources using web services and loading data through the user interface.

Below is a high-level depiction of how this data all relates in a law enforcement data model:

The Siren Platform seamlessly relates a variety of data sources and provides a single view of the consolidated information. In an investigation where the only known information is a person’s name, Siren can relate that person to all information accessible to the platform, such as their cell phone and vehicles, and can verify the person’s identity using a web integration to a service such as Thomson Reuters CLEAR.

Below is a depiction of a screen showing internal records combined with a verification of the identity through Thomson Reuters CLEAR:

The ability to bring together key information for an investigation in a central view eliminates the context switching and multiple logins needed to move from system to system, as well as the effort of manually correlating records on paper or in Word documents and Excel spreadsheets.
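
To make the associative model concrete, here is a minimal, hypothetical sketch (not Siren’s actual data model or API) in which typed entities such as a person, a phone and a vehicle are connected by named relationships, and a single lookup walks those links from one starting point.

```python
# Hypothetical mini "associative model": entities plus typed links between them.
entities = {
    "person:1": {"type": "person", "name": "John Smith"},
    "phone:1": {"type": "phone", "number": "555-0100"},
    "vehicle:1": {"type": "vehicle", "plate": "ABC-1234"},
}

links = [
    ("person:1", "uses", "phone:1"),
    ("person:1", "registered_owner_of", "vehicle:1"),
]

def related(entity_id):
    """Everything directly connected to an entity, with the relationship type."""
    out = []
    for src, rel, dst in links:
        if src == entity_id:
            out.append((rel, entities[dst]))
        elif dst == entity_id:
            out.append((rel, entities[src]))
    return out

# Starting from only a person, walk the links to their phone and vehicle.
for rel, entity in related("person:1"):
    print(rel, "->", entity)
```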

Download Siren’s Law Enforcement Datasheet and contact us today to request a demo.

3 Strategies for Effectively Enforcing the Principle of Least Privilege

The days of “trust but verify” are long gone. In a recent survey, respondents displayed heightened cybersecurity concerns, exacerbated by an expanded attack surface created in large part by remote work.

As such, many have moved on from trust but verify to a zero-trust approach highlighted by adherence to the principle of least privilege (PoLP). With PoLP, users are granted access to the tools, technologies, and data they need to do their jobs and no more. Seventy percent of survey respondents indicated they’re already implementing PoLP or will implement it within the next year.

The question then becomes how to effectively enforce it.

Maintaining appropriate levels of access and control isn’t an easy task and can’t be accomplished manually. After all, people’s jobs change regularly, new employees enter the workforce, and security policies are continually being updated.

Plus, the sheer number of remote workers is helping to increase the potential attack surface. This includes the federal government, where the Government Accountability Office estimates 80% of work has been done remotely over the past couple of years.

To better manage the situation, administrators should consider employing three strategies:

Automatically Monitor and Control User Access Rights

Tracking who has access to what data, who’s attempted to access certain files, or when said files were accessed is a full-time job. A better approach is to control access via an access rights management (ARM) system allowing for automated user account creation, modification, or deletion and designed to assign access rights based on users’ roles.

An ARM system can also automatically notify administrators when an individual attempts to access information they’re not privy to. This helps prevent unauthorized access from the inside and helps detect malicious accounts that aren’t part of the access rights list.
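
The core logic behind an ARM system, granting rights by role and alerting on anything outside those rights, can be sketched in a few lines. The following is a simplified, hypothetical example rather than any specific SolarWinds interface:

```python
# Simplified sketch of role-based access checks with an alert on violations.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}

users = {"alice": "analyst", "bob": "admin"}

def notify_admins(message):
    # In practice this would page or email an administrator.
    print("ALERT:", message)

def check_access(user, permission, audit_log):
    """Allow or deny a request based on the user's role, and record it."""
    role = users.get(user)
    allowed = role is not None and permission in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({"user": user, "permission": permission, "allowed": allowed})
    if not allowed:
        notify_admins(f"Denied: {user!r} attempted {permission!r}")
    return allowed

log = []
check_access("alice", "read:reports", log)    # allowed
check_access("alice", "manage:users", log)    # denied, raises an alert
check_access("mallory", "read:reports", log)  # unknown account, denied and flagged
```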

Monitor and Audit Administrative Changes

Permission rights changes aren’t always authorized, so it’s important for administrators to continually monitor and audit all administrative changes based on a set of security policies.

For example, a team might establish policies around who’s authorized to change files or permissions or when those changes can occur. These correlation rules are benchmarks and tell a system when something is amiss.

If a system is equipped with automated file integrity monitoring (FIM), it can compare network activity to those benchmarks. Anomalous or inappropriate activity can be flagged, leading to the system automatically blocking access and issuing an alert to allow administrators to respond to suspicious activity quickly and appropriately.
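
Conceptually, file integrity monitoring reduces to comparing the current state of monitored files against a trusted baseline and flagging any drift. The sketch below illustrates only that core idea; real FIM tools also watch permissions, registry keys and live activity against correlation rules:

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 hash for each monitored file (the trusted baseline)."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def detect_changes(baseline):
    """Compare current file contents against the baseline and report drift."""
    findings = []
    for path, expected in baseline.items():
        p = Path(path)
        if not p.exists():
            findings.append((path, "missing"))
        elif hashlib.sha256(p.read_bytes()).hexdigest() != expected:
            findings.append((path, "modified"))
    return findings

# Usage sketch: take a baseline after a known-good change window,
# then re-check on a schedule and alert on anything unexpected.
# baseline = snapshot(["/etc/passwd", "/etc/sudoers"])
# for path, issue in detect_changes(baseline):
#     print(f"ALERT: {path} {issue}")
```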

Administrators should also routinely audit privileged account log data. Running a report post-event helps forensically decipher what happened, and running reports every few weeks can help ensure users’ privileges are correct and up-to-date.

Continually Evaluate User Privileges

People’s jobs change all the time. Employees leave organizations, new employees are onboarded, and many people shift roles or get promoted. In each case, access privileges must be adjusted if an agency is to maintain a strong security posture.

Consider what could happen if a person leaves their position at an agency, but their user credentials remain active long after their departure. They could share those files with others, perhaps individuals willing to pay enormous sums of money for classified information. Or, if they’re disgruntled, they could create havoc by simply manipulating or deleting information.

Whatever the case, it’s prudent for administrators to regularly evaluate who has access to what. They must remove users who no longer need access to data and adjust permissions so those who have been promoted have access to information and can do their jobs effectively. Doing so allows for better security and unimpeded productivity—a winning combination in the post “trust but verify” world.
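
This kind of review lends itself to automation. Below is a hedged sketch of the idea, using hypothetical directory fields, that flags accounts whose owners have departed or that have sat idle past a threshold:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)

accounts = [  # hypothetical export from a directory or HR system
    {"user": "jdoe", "employed": False, "last_login": date(2022, 1, 4)},
    {"user": "asmith", "employed": True, "last_login": date(2022, 9, 30)},
]

def review(accounts, today):
    """Return accounts that should be disabled or have privileges re-verified."""
    flagged = []
    for acct in accounts:
        if not acct["employed"]:
            flagged.append((acct["user"], "owner departed: disable account"))
        elif today - acct["last_login"] > STALE_AFTER:
            flagged.append((acct["user"], "inactive: re-verify required privileges"))
    return flagged

for user, reason in review(accounts, today=date(2022, 10, 15)):
    print(user, "->", reason)
```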

 

See how the privileged account management tool SolarWinds offers can help you enforce user access management, and sign up for a free demo.