Planning for Success: Applying Joint Cost and Schedule Risk Analysis in the Acquisition Process as PPM

It’s Important to Have a Good Plan

Planning is at the core of being able to execute a project, program, or portfolio of programs. Within defense planning systems, the process that defines and determines the resources needed for these efforts is known by the acronym PPBE.

PPBE stands for Planning, Programming, Budgeting, and Execution. It is the annual process that the Department of Defense (DoD) uses to allocate resources and manage its budget.

The various phases of PPBE are detailed and specific.

During the planning phase, the top leaders and various agencies of the military and civilian portions of the government identify the threats that we need to face. This includes the military strategy of the United States, which originates within the National Command Structure.

The programming phase includes building a five-year plan, called the Program Objective Memorandum (the POM), that distills the general objectives of the national military strategy into specifics. There are three layers in this process.

The ‘translation layer’ takes the high-level strategy and turns this input into specific requirements, like the type and quantity of various aircraft and the equipment and fuels they depend on. It also summarizes capabilities that need to be delivered, which cross-references systems that must be interoperable or operate together in order to achieve objectives within a theatre of operations. Force multipliers in joint and combined operations are important considerations in this layer.

The ‘Five-Year Look’ layer, also known as the Future-Years Defense Program, or FYDP, was introduced in the 1960s to cover the entire development cycle of the new system(s), like a ship, aircraft, or armored vehicle. Today, it often doesn’t. It differs from the budget because it is focused on the lifecycle of the investment, not simply the next year.

During the budgeting phase, the Services and Agencies determine how to fund the requirements within current or projected budget constraints. Programs are assessed by their ability to deliver capability and execute the programs on time. Project and program managers must show that the monies allocated to their efforts will achieve the project objectives.

Finally, the execution phase occurs once monies are received through the Congressional appropriations process. DoD evaluates whether projects and programs are meeting their baseline requirements.

The translation from the overall planning into the programming phase of PPBE is of utmost importance. This is borne out by multiple studies, including the CSIS Root Cause Analysis, the RAND Major Defense Program study, and the final report of the Commission on PPBE Reform, among others. Taken as a whole, these studies show that 40% of cost overruns are caused by inaccurate cost estimates rooted in optimism bias, estimates that often lack insight because they do not incorporate newer technologies that allow the confidence of a plan to be analyzed.

Presently, the PPBE process does not capture the confidence level used to set the budget. Policy within some of the Services is to set budget numbers at 55% confidence, but there is no discussion of confidence in the related schedule that drives cost. This condition creates an environment where arbitrary adjustments are made once it is clear that there will be cost and schedule overruns against an optimistic baseline. For example, while some range is provided by setting threshold and objective baselines for cost, schedule, and performance, absent risk-informed data these are usually adjusted by adding 10% for cost and six months for schedule, ignoring what the estimating models are predicting.

Beginning in 2025, the Department of Defense expanded its focus on PPBE. In furtherance of this initial guidance, Secretary Hegseth issued further acquisition guidance on November 7th, 2025, including the guidance on “Transforming the Defense Acquisition System into the Warfighting Acquisition System to Accelerate Fielding of Urgently Needed Capabilities to Our Warriors.”

This guidance, as well as the revised DoD PPBE Reform Implementation Plan issued by the Department in January 2025, clearly states the need for a data-driven approach to improving confidence in cost and schedule estimates.

Putting the “P” in Joint Cost and Schedule Estimates

In late 2025, as one means of aligning with the new DoD policies promulgated under acquisition reform, the Assistant Secretary of the Navy for Research, Development and Acquisition (ASN(RDA)) directed all project managers to prioritize program schedule achievement when developing cost estimates. As such, this guidance requires that schedule confidence levels be set at 65% or higher. The process to achieve these confidence levels is known as the Joint Cost and Schedule Confidence Level or JCL.

A JCL goes far beyond a simple Schedule Risk Analysis (SRA). The purpose of the former is to combine cost, schedule, and risk to determine the mathematical probability that a project’s cost will be equal to or less than the targeted cost, and that its schedule will be equal to or less than the targeted finish date. The latter, while a valuable process for tightening schedule confidence, treats the schedule as an independent variable without recognizing the obvious linkage between cost and schedule.

The old adage “time is money” applies. A project could have an SRA confidence level of 80%, but even with a similar confidence level for cost calculated independently, the combined confidence will be lower once the risk factors of both elements are assessed jointly.
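To make the arithmetic concrete, here is a minimal Monte Carlo sketch of a joint cost and schedule confidence calculation. It is an illustration, not any tool's actual engine: all targets and distributions are hypothetical, and cost is deliberately modeled as burn rate times duration so that the two dimensions share risk, which is exactly why the joint confidence falls below either marginal confidence.

```python
import random

random.seed(42)

def simulate_jcl(n=100_000, cost_target=110.0, finish_target=36.0):
    """Illustrative JCL: the joint confidence is the fraction of Monte Carlo
    trials in which BOTH the cost and the schedule targets are met.
    All distributions and targets here are hypothetical."""
    joint = cost_ok = sched_ok = 0
    for _ in range(n):
        months = random.triangular(30.0, 48.0, 34.0)  # schedule uncertainty
        burn = random.triangular(2.5, 3.6, 2.9)       # $M per month; cost follows time
        cost = months * burn
        if cost <= cost_target:
            cost_ok += 1
        if months <= finish_target:
            sched_ok += 1
        if cost <= cost_target and months <= finish_target:
            joint += 1
    return cost_ok / n, sched_ok / n, joint / n

p_cost, p_sched, p_joint = simulate_jcl()
print(f"cost confidence    : {p_cost:.0%}")
print(f"schedule confidence: {p_sched:.0%}")
print(f"joint (JCL)        : {p_joint:.0%}")
```

Because the joint event is a subset of each marginal event, the JCL can never exceed the lower of the two independent confidence levels, and in practice it falls well below both.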

The Navy is not alone in integrating cost and schedule confidence levels. NASA has a long history of applying JCL at the 70% confidence level to ensure that its projects are executable. For example, the guidance includes:

  • Setting the Agency Baseline Commitment (ABC) at the 70% joint confidence level.
  • Making JCL mandatory for all projects with a life-cycle cost of greater than $250M.
  • While budgeting to the 70th percentile, NASA allows flexibility to budget at lower confidence levels, relying on the “portfolio effect” to manage risk across multiple missions.
  • Any JCL confidence levels set below 70% must be fully documented.

The Government Accountability Office (GAO) provides the overarching methodology that many federal agencies use as best practice when validating their JCL results.

JCL is not a One-Trick Pony: PPM and Execution

During the early stages of planning, establishing a confidence level instills discipline and improves the processes of POM development and baseline establishment. Even when there are only high-level schedules or an integrated master plan (IMP), it is still possible to establish the interrelationships of the high-level planning timelines and the associated estimated costs. Eventually, as the process proceeds, projects will be definitized and produce critical path method (CPM) integrated master schedules.

This is the constant: things will change.

In the past, the JCL process was not fully embraced for two reasons: first, it relied on manual efforts that required a great deal of time and labor; second, JCL, though a government process, was siloed and provided through a limited number of consulting companies.

In the first case, because technology was limited in dealing with what used to be called “Big Data” (which is not so big anymore), some art was applied to taking the Integrated Master Schedule (IMS) and summarizing it. This not only reduced the amount of data utilized, it also undermined the validity of the process, given its opaqueness. It also took weeks to conduct. Improvements were defeated by the exclusive reliance on a few selected companies and their “expertise” in summarizing schedules.

In 2023, SNA Software LLC (SNA), which produces its PPM Power Platform, and Intaver Institute (Intaver), which is the manufacturer of RiskyProject, teamed to automate the JCL process for NASA. The keys to automating this process are as follows:

  1. The capability found in SNA’s COTS data transformer applications, which transform third-party files into a non-proprietary open data format and reflect that openness in linked open data tables. This includes data from Microsoft Project, Deltek Open Plan Professional, and Oracle P6, as well as cost applications that provide the time-phased project baseline. As with all modern solutions offering multiple means of visualization, curated and validated data is the first essential key.
  2. Rapid set-up to ensure that cost and schedule data are properly aligned and integrated in the IMP or IMS. After initial set-up, the process is optimized to be iterative, with the IMS process collecting changes as they occur.
  3. Robust COTS analytical functionality within a risk analysis solution that reflects deep knowledge in JCL leveraging the latest software technologies. This includes the ability to apply uncertainties, risk drivers, and other factors to the schedule.
  4. A COTS modular open-systems low-code/no-code power platform that integrates JCL data with other PPM indicators to inform all aspects of projects and portfolios with risk.

The end result is an automated JCL process that, once initially set up, can be run as many times as necessary into and through the execution phase of the PPBE. The following advantages are realized:

  1. The automated JCL is executed against the entire network schedule, and the outputs include scatter plots used to determine whether the project is executable. These plots can then be further analyzed using a combination of linear regression and 3D plots.
  2. Any level of confidence can be selected for the cost and schedule ranges.
  3. The process provides independent estimates at completion that go beyond earned-value-focused performance measures, since it is based on an integrated probability matrix.
  4. After initial setup, the timeline from inputs to results is measured in hours rather than weeks of direct labor.
  5. Standardization of unified risk language. The JCL merges “silos” into an integrated metric and look, allowing different organizational entities to communicate program health in a standardized format.
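The scatter-plot output in item 1 can be queried directly: each Monte Carlo trial yields one (finish, cost) point, the JCL at any target pair is the fraction of points inside the lower-left box bounded by both targets, and an iso-confidence frontier can be read off by sorting. The sketch below uses hypothetical independent Gaussian distributions as stand-ins for real risk-analysis output.

```python
import random

random.seed(7)

# One (finish_month, cost_$M) pair per trial, as would be produced by
# running risk analysis over the full network schedule. Hypothetical.
trials = [(random.gauss(40, 4), random.gauss(120, 10)) for _ in range(50_000)]

def jcl(trials, finish_target, cost_target):
    """Fraction of trials landing inside the box bounded by both targets,
    i.e. the lower-left region of the JCL scatter plot."""
    hits = sum(1 for m, c in trials if m <= finish_target and c <= cost_target)
    return hits / len(trials)

def frontier_cost(trials, finish_target, level=0.65):
    """Smallest budget achieving the requested joint confidence at the given
    finish date: one point on the iso-confidence frontier."""
    feasible = sorted(c for m, c in trials if m <= finish_target)
    k = int(level * len(trials))
    if k >= len(feasible):
        return None  # the date alone caps confidence below `level`
    return feasible[k]

print(f"JCL at (44 mo, $130M): {jcl(trials, 44, 130):.0%}")
print(f"Budget for JCL-65 at 44 mo: ${frontier_cost(trials, 44, 0.65):.0f}M")
```

The same scatter cloud answers both questions decision-makers ask: what confidence does a given budget and date buy, and what budget does a required confidence level demand.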

Putting the “P” in Portfolio Management

With the establishment of Portfolio Acquisition Executives (PAEs) within the DoD/Services, JCL becomes a necessary capability across project management. These organizations, responsible for portfolio outcomes, have authority over cost, schedule, and performance (read technical performance) trade-offs to prioritize time-to-field and mission outcomes.

In support of this new model, JCL delivers the ability for decision-makers to apply the following:

  • Dynamic Resource Allocation. Provides a statistical basis for moving funds between programs. For example, if a high-priority program has a JCL-85 (indicating it is likely over-resourced), the PAE may choose to reallocate budget to buy down risk on a program operating with a lower JCL confidence score.
  • PPBE alignment of budgets with actual risk profiles of capability portfolios.
  • Risk-based Portfolio Balancing. Managing the portfolio effects will allow management to ensure that aggregated risk doesn’t exceed the organization’s tolerance. For example, one portfolio should not have all high-risk, low-confidence programs under one management.
  • Aggregated Confidence. Allows managers to calculate the probability of the entire portfolio succeeding within total budget, which is more efficient than selecting an arbitrary confidence level for all programs.
  • Cross-Program Dependency Management. Modern defense portfolios involve “systems of systems,” where one program’s output is another program’s input. For example, during the early planning process, assuming there is an Integrated Master Plan (IMP), a delay in one program might create a high risk of failure across an entire aircraft or ship portfolio.
  • Strategic Decision Support. The JCL provides a useful capability in determining divestment triggers, such as when mitigation efforts are unsuccessful and quantitative evidence exists that suggests program cancellation.
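The “portfolio effect” behind aggregated confidence can be demonstrated in a few lines: budgeting three hypothetical, independent programs at their individual 70th percentiles yields a portfolio-level confidence above 70%, because overruns and underruns partially offset one another. All distributions and numbers below are illustrative.

```python
import random

random.seed(1)

# Hypothetical per-trial costs ($M) for three independent programs.
def draw_costs():
    return [random.lognormvariate(4.6, 0.25),   # program A
            random.lognormvariate(4.2, 0.30),   # program B
            random.lognormvariate(3.9, 0.40)]   # program C

N = 50_000
samples = [draw_costs() for _ in range(N)]

def percentile(xs, p):
    xs = sorted(xs)
    return xs[int(p * len(xs))]

# Budget each program at its own 70th percentile...
budgets = [percentile([s[i] for s in samples], 0.70) for i in range(3)]
# ...then measure the confidence that the WHOLE portfolio fits the total.
total_budget = sum(budgets)
portfolio_conf = sum(1 for s in samples if sum(s) <= total_budget) / N

print(f"per-program budgets ($M): {[round(b) for b in budgets]}")
print(f"portfolio confidence at the summed budget: {portfolio_conf:.0%}")
```

This is the quantitative basis for NASA's practice of allowing individual missions below 70% while managing risk at the portfolio level: the aggregate diversifies in a way that no single program can.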

Mission Threads and Capability Assessments

Strategic mission support includes rolling these assessments further up the chain to strategic decision support that encompasses mission threads and capabilities assessments.

A mission thread is a sequence of end-to-end activities required to achieve an objective. When assessing an operational theatre’s objectives, the JCL rolls up concurrency risk across systems within the threads.

The roll-up of concurrency risk allows for a realistic determination of synchronized fielding. For example, if a mission thread requires three separate programs (a satellite, a ground station, and an aircraft package) to be operational by 2030, but the JCL for the satellite shows a 50% probability of a two-year delay, the entire mission thread is invalidated.
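The satellite example reduces to simple probability arithmetic, which a short simulation makes explicit. The satellite's 50% two-year slip is taken from the example above; the slip parameters for the other two programs are invented for illustration. Note how the thread confidence drops to roughly one in three even though two of the three programs look healthy on their own.

```python
import random

random.seed(3)

def ready_year(base, p_slip, slip_years):
    """Hypothetical in-service date: base year plus a discrete slip risk,
    firing with probability p_slip."""
    return base + (slip_years if random.random() < p_slip else 0.0)

N = 100_000
thread_ok = 0
for _ in range(N):
    satellite = ready_year(2029, 0.50, 2.0)   # 50% chance of a two-year delay
    ground    = ready_year(2028, 0.20, 1.0)   # invented parameters
    aircraft  = ready_year(2029, 0.30, 1.5)   # invented parameters
    # The mission thread is fielded only when ALL THREE are operational.
    if max(satellite, ground, aircraft) <= 2030:
        thread_ok += 1

thread_confidence = thread_ok / N
print(f"P(mission thread fielded by 2030): {thread_confidence:.0%}")
```

Concurrency risk compounds multiplicatively across the systems in a thread, which is why rolling it up, rather than inspecting programs one at a time, is essential for realistic synchronized-fielding decisions.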

In addition, timelines are stress-tested. The JCL allows planners to see where optimistic schedules in one component create a high probability of failure for joint capabilities.

A capabilities-based assessment (CBA) identifies gaps in current capabilities. The JCL contributes by providing a reality check on proposed solutions. Its handling of risk can determine whether a capability gap will persist, and can quantify the effects of technical or funding volatility. Where materiel gaps exist, interim strategies, such as pivoting to training or doctrine updates, can bridge the gap.

Conclusions and Observations

Risk-informed decision-making is essential when dealing with large systems, since single-point estimates and siloed information streams have proven to be ineffective and inefficient. Furthermore, for systems acquisition, the locus of information begins with the project plan and is then reflected, in detail, in the schedule. When properly created, the schedule is the binding, virtual, time-phased system that links systems engineering performance with resource allocation and management, and provides the project structure to deliver realistic cost performance and measurement.

By combining and integrating PPM, systems engineering, and Integrated Project Management (IPM) data flows, the JCL provides the basis to inform each level of the acquisition ecosystem: from the project level through program management and portfolio decision-making, and finally feeding into strategic acquisition analysis.

At the highest level of mission and capabilities assessments, this model provides a complete picture of the acquisition system’s delivery to the warfighter. Being aware of the risks and establishing the common threads across and through the acquisition ecosystem delivers economy of force to the warfighter.

For more information on how software is delivering commercial-off-the-shelf (COTS) technology to specialized national security acquisition requirements using the JCL, visit https://sna-software.com and https://intaver.com.

Solid Like a Rock: The Modern Power Platform, Modular Open Systems, and PPM – A Use Case

My team and I were recently approached by an organization asking about our experience in systems integration, with Critical Path Method (CPM) scheduling at the center. Such integration is a foundational part of PPM, but many practitioners miss the subtleties of achieving it in a manner that properly establishes interrelationships across relevant cross-domain datasets and creates valid, actionable intelligence within this domain.

The core of success involves applying a coherent and comprehensive automated solution to the set of processes and practices to prioritize, plan, execute, monitor, and govern multiple projects and programs and their associated data. This is known as project and portfolio management (PPM).

When constructing a large, complex project or group of projects, we begin with the project concept, project objectives, framing assumptions, stakeholder identification and read-in, and the identification of risks. This progression then extends to produce success criteria (within the context of key performance indicators or KPIs), the integrated master plan (IMP), the work breakdown structure (WBS), identification of resources within the plan, and finally the integrated master schedule (IMS). Earned value management (EVM), which may or may not apply, will then follow as an assessment of the value of the work being accomplished based on the performance measurement baseline (PMB).

Among these artifacts, the single most important is the IMP: it captures the entire contractual and project scope, identifies program events, accomplishments, and accomplishment criteria, and provides the opportunity for insight into proper integration across elements.

This is especially true in projects in which technical risk and performance are identified as key factors in the project success criteria. The IMP is the necessary step for capturing factors of technical risk and performance, which can then be reflected in the detailed task performance of the IMS.

In the marketplace, there are few choices of CPM scheduling applications powerful enough to support complex projects. Among these are Microsoft Project, Oracle P6, and Open Plan Professional. There are some other entries that claim to use AI or other “modern” methods to analyze sequences of events, but the three listed provide reliable and understandable results that allow for effective management of the schedule activities and the underlying tasks. In the most sophisticated implementations, schedules will be resource-loaded.
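All three tools implement the same underlying critical path method: a forward pass computes early dates, a backward pass computes late dates, and tasks with zero total float form the critical path. A toy network makes the computation explicit; the activities, durations, and logic below are invented for illustration.

```python
# Minimal critical-path computation over a toy activity network.
durations = {"A": 3, "B": 5, "C": 2, "D": 4, "E": 1}   # days
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

def early_finish(task, memo={}):
    """Forward pass: earliest finish = earliest start + duration."""
    if task not in memo:
        es = max((early_finish(p) for p in preds[task]), default=0)
        memo[task] = es + durations[task]
    return memo[task]

project_finish = max(early_finish(t) for t in durations)

# Backward pass: latest finish constrained by successors. Processing tasks
# in descending early-finish order guarantees successors are done first.
succs = {t: [s for s in preds if t in preds[s]] for t in durations}
late_finish = {}
for t in sorted(durations, key=early_finish, reverse=True):
    late_finish[t] = min((late_finish[s] - durations[s] for s in succs[t]),
                         default=project_finish)

# Zero total float (early finish == late finish) marks the critical path.
critical = [t for t in durations if early_finish(t) == late_finish[t]]

print(f"project duration: {project_finish} days")
print(f"critical path: {sorted(critical)}")
```

In this network the path A-B-D-E drives the 13-day finish, while C carries float; resource loading and risk analysis are layered on top of exactly this kind of date calculation.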

To achieve full integration of PPM elements across subdomains requires extending the core features of the CPM scheduling applications to realize their full informative and business value. This includes integrating risk identification and management capabilities that include, but go beyond, simple Monte Carlo schedule risk analysis. It also includes automated cost and schedule analysis of the alignment between schedule activities and resource execution and distribution, as well as deploying strategies and measures for risk handling.

Additional extensions include analytical queries that determine weaknesses in the schedule and whether foundational elements are properly tick-and-tied (schedule health). The ability to trace schedule tasks to specific work and technical performance measures provides the means to rapidly identify areas that require immediate action.
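Schedule-health queries of this kind are straightforward to automate. The sketch below flags dangling logic, hard constraints, and overlong tasks over a toy network, loosely in the spirit of the DCMA 14-point assessment; the task fields, thresholds, and network are all illustrative.

```python
# Toy schedule: each task lists its logic ties, duration (days), and any
# hard date constraint. All data is hypothetical.
tasks = {
    "A": {"preds": [],    "succs": ["B", "C"], "duration": 10, "constraint": None},
    "B": {"preds": ["A"], "succs": ["D"],      "duration": 55, "constraint": None},
    "C": {"preds": ["A"], "succs": [],         "duration": 5,  "constraint": "MUST_FINISH_ON"},
    "D": {"preds": ["B"], "succs": [],         "duration": 8,  "constraint": None},
}

def health_checks(tasks, start="A", finish="D", max_duration=44):
    """Flag common schedule-quality issues: dangling logic (tasks missing
    predecessors or successors), hard constraints, and overlong tasks."""
    findings = []
    for name, t in tasks.items():
        if name != start and not t["preds"]:
            findings.append((name, "missing predecessor"))
        if name != finish and not t["succs"]:
            findings.append((name, "missing successor"))
        if t["constraint"]:
            findings.append((name, f"hard constraint: {t['constraint']}"))
        if t["duration"] > max_duration:
            findings.append((name, f"duration {t['duration']}d exceeds {max_duration}d"))
    return findings

for task, issue in health_checks(tasks):
    print(f"{task}: {issue}")
```

Running such checks automatically on every schedule refresh, rather than as a periodic manual audit, is what makes the downstream risk analysis trustworthy.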

Capturing the data elements from across different CPM scheduling applications, or even within a uniform scheduling environment, enables comprehensive reporting along with Gantt and visual analytical charting of key factors and elements, including EVM, systems engineering, contract compliance, and other relevant elements within the PPM ecosystem.

Applying Modern Systems Design to Integration in PPM

The most effective way to achieve integration across the PPM ecosystem is through the deployment of a modern power platform.

Key capabilities and components of power platform technology include:

  • Low-code/no-code app deployment: visual configuration designers to create apps without heavy hand-coding.
  • Integration layer: prebuilt schemas, connectors and tools to connect SaaS, on-prem systems, databases, and custom APIs.
  • Data platform and modeling: a common data model based on open data principles that honor data sovereignty, metadata-driven storage, and low-code data manipulation.
  • Analytics and dashboards: embedded BI/reporting to turn app data into actionable insights.
  • Workflow automation: event- and trigger-driven automation (including RPA for UI automation)
  • Governance, security, and lifecycle: role-based access, environment separation (dev/test/prod), ALM, monitoring, and audit.
  • Extensibility: custom code extensions, SDKs, plug-ins, and support for CI/CD and developer tooling.
  • Marketplace/connectors: pre-configured COTS functionality, reusable components, templates, and third-party integrations.

When we combine this technology with a modular open-systems approach in application design (a MOSA) and open data governance, we are able to realize the full intrinsic and business value of data while also achieving maximum flexibility.

These principles first evolved within the systems engineering and model-based engineering communities. But the same benefits identified in this model for physical components within systems also apply to computer systems that control and analyze human systems, such as in PPM. Taking this approach also allows for greater integration between technical performance in systems research and development with the various other subdomains.

The benefits are significant. They include:

  • Faster innovation: Modular components and open data enable parallel development, third‑party extensions, and rapid replacement of parts without system-wide redesign.
  • Reduced vendor lock‑in: Standardized interfaces and governance let organizations mix vendors and swap modules, lowering dependence on single suppliers.
  • Lower total cost of ownership: Reuse, incremental upgrades, and competitive procurement reduce lifecycle costs.
  • Improved resilience and reliability: Fault isolation via modularity and the ability to hot‑swap components or roll back to previous modules improves uptime.
  • Scalability and flexibility: Easily scale capacity or add capabilities (e.g., new energy sources, telemetry modules) by plugging in compatible modules.
  • Interoperability and integration: Standard interfaces and open data models simplify integrating third‑party analytics, grid services, and partner systems.
  • Faster regulatory and market response: Modular upgrades and open data make it easier to meet new compliance requirements or enable new services (demand response, V2G).
  • Better analytics and optimization: Open, governed data enables advanced ML/AI, cross‑system optimization (load forecasting, predictive maintenance), and transparent KPIs.
  • Enhanced security posture: Clear module boundaries and standardized interfaces simplify security reviews; data governance enforces access controls, provenance, and auditability.
  • Ecosystem and marketplace development: Standards + open data foster third‑party marketplaces for modules, apps, and services, driving innovation and value capture.
  • Sustainability and resource efficiency: Easier integration of renewables, storage, and efficiency modules supports decarbonization and circular‑economy practices (component reuse, upgrades).

A Practical Use Case: Microsoft Project

The discussion mentioned in the first paragraph of this post presents an ideal use case for this approach. Microsoft has announced that it plans to retire Microsoft Project Online on September 30, 2026.

What this means is that organizations that had invested in this CPM scheduling application will need to make a decision: they can stay within the Microsoft Project environment, or look at the other non-Microsoft CPM applications mentioned earlier. For public project management organizations, further complexity is added by the source of the data related to the schedule: whether it be organic, from suppliers, or some hybrid approach that requires both organic and contracted work.

As I have stated in my earlier posts, I run and operate a software company by the name of SNA Software LLC. Our Proteus Envision suite is built on modern power platform technology, constructed using MOSA principles, and automates data capture and transformation in accordance with open data governance principles.

Rather than a niche application focused on some portion of the project and portfolio domain, our solutions are built to leverage these modern technologies to achieve integration. With the recent FAR overhaul, which simplifies many of the regulatory requirements on PPM systems, such as earned value management (EVM) for contracts below $50M, an open system that supports a nimble and modular approach is needed. The shift in emphasis toward technical performance, schedule and resource management, and risk management becomes paramount.

With the implementation of the Cybersecurity Maturity Model Certification (CMMC) program, off-premises cloud usage is also a concern, given the recent controversy regarding Azure GCC High FedRAMP. Organizations need a flexible set of options when a foundational solution is suddenly no longer available, does not meet expectations, or ages out. Does the agency use on-premises solutions or a commercial cloud environment?

The use case here is to apply applications that automate the capture and transformation of data from any CPM scheduling application. Doing so allows organizations to forgo direct labor in transformation, avoiding error-ridden, long-lead-time brute-force data engineering, as well as the improper use of Excel as a systems management solution, which siloes data and creates bespoke single points of failure.

The combination of a modern power platform, MOSA, and open data governance is what the current environment demands. At the core of this approach is the overriding importance of data: its accuracy, transparency, scalability, and integration. Without good data, the application of new AI solutions will fail to meet expectations and deliver return on investment.

In summary: The Present Challenges in PPM

The most important issues in the PPM domain today revolve around the following:

  • The appropriate application and use of artificial intelligence solutions: the most useful utilization of this promising technology in an ecosystem that requires rigor.
  • The shift from PPM domain siloes: not only in terms of data or analytics, but also in terms of developing and expanding the business acumen of the workforce to be able to effectively use these advanced technologies.
  • The continued importance of assessment and management methods informed by powerful and flexible solutions in the area of large-scale project management.
  • The need for flexibility: to prevent lock-in of proprietary data solutions in a rapidly developing technology environment, and in identifying modular systems solutions to provide upgrades or interoperability rapidly.
  • The rising importance of other PPM indicators: especially those such as technical performance, risk management, and resource execution measures that presage the traditional down-the-line performance indicators in EVM.
  • The utilization of cloud or on-premises deployments, or a combination of the two, to address bandwidth and scaling issues as relevant datasets become larger and more complex with integration.
  • Finding strategies to overcome suboptimization in organizations resulting from rice bowls, fiefdoms, and silo-building.

Meeting these challenges and finding solutions to them will require collaboration and systems thinking combined with supporting technologies.

Shake it Out – Embracing the Future in Program Management – Part One: Program and Project Management in the Public Interest

I heard the song from which I derived the title to this post sung by Florence and the Machine and was inspired to sit down and write about what I see as the future in program management.

Thus, my blogging radio silence has ended as I begin to process and share my observations and essential achievements over the last couple of years.

My company, the conduit that provides the insights I share here, is SNA Software LLC. We are a small, veteran-owned company specializing in data capture, transformation, contextualization, and visualization. We do this in a way that removes significant effort from these processes, ensures reliability and trust, incorporates off-the-shelf functionality that provides insight, and empowers the user by leveraging the power of open systems, especially in program and project management.

Program and Project Management in the Public Interest

There are two aspects to the business world that we inhabit: commercial and government; both, however, usually relate to some aspect of the public interest, which is our forte.

There are also two concepts about this subject to unpack.
