
My team and I were recently approached by an organization asking about our experience with systems integration, with Critical Path Method (CPM) scheduling at the center. Such integration is foundational to PPM, but many practitioners miss the subtleties: the goal is to establish interrelationships across the relevant cross-domain datasets in a way that produces valid, actionable intelligence.
Success rests on applying a coherent, comprehensive automated solution to the set of processes and practices used to prioritize, plan, execute, monitor, and govern multiple projects and programs and their associated data. This discipline is known as project and portfolio management (PPM).
When constructing a large, complex project or group of projects, we begin with the project concept, project objectives, framing assumptions, stakeholder identification and read-in, and the identification of risks. This progression then extends to produce success criteria (within the context of key performance indicators or KPIs), the integrated master plan (IMP), the work breakdown structure (WBS), identification of resources within the plan, and finally the integrated master schedule (IMS). Earned value management (EVM), which may or may not apply, will then follow as an assessment of the value of the work being accomplished based on the performance measurement baseline (PMB).
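The EVM assessment mentioned above reduces to a handful of standard metrics computed against the PMB. A minimal sketch in Python, using illustrative numbers rather than data from any real project:

```python
def evm_metrics(pv, ev, ac, bac):
    """Standard earned value metrics from planned value (PV), earned value (EV),
    actual cost (AC), and budget at completion (BAC)."""
    cpi = ev / ac          # cost efficiency: value earned per dollar spent
    spi = ev / pv          # schedule efficiency: value earned vs. value planned
    return {
        "CV": ev - ac,           # cost variance
        "SV": ev - pv,           # schedule variance
        "CPI": cpi,
        "SPI": spi,
        "EAC": bac / cpi,        # estimate at completion (CPI-based forecast)
        "VAC": bac - bac / cpi,  # variance at completion
    }

# Hypothetical status-date values for a $1M performance measurement baseline.
m = evm_metrics(pv=500_000, ev=450_000, ac=480_000, bac=1_000_000)
```

Here CPI below 1.0 signals a cost overrun and SPI below 1.0 signals schedule slippage; the CPI-based EAC is only one of several accepted forecasting formulas.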
Among these artifacts, the single most important is the IMP. It captures the entire contractual and project scope, identifying program events, accomplishments, and accomplishment criteria, and it provides the opportunity for insight into proper integration across elements.
This is especially true for projects in which technical risk and performance are identified as key factors in the success criteria. The IMP is the necessary step for capturing those factors of technical risk and performance, which can then be reflected in detailed task performance within the IMS.
In the marketplace, there are few choices of CPM scheduling applications powerful enough to support complex projects. Among these are Microsoft Project, Oracle P6, and Open Plan Professional. There are some other entries that claim to use AI or other “modern” methods to analyze sequences of events, but the three listed provide reliable and understandable results that allow for effective management of the schedule activities and the underlying tasks. In the most sophisticated implementations, schedules will be resource-loaded.
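Whatever the tool, the underlying CPM computation is the same forward and backward pass over the activity network, yielding early and late dates, total float, and the critical path. A minimal, self-contained sketch over a hypothetical task network (durations in days):

```python
from collections import defaultdict

def cpm(tasks):
    """Forward/backward pass over {name: (duration, [predecessors])}.
    Returns (total float per task, project finish); zero-float tasks
    form the critical path."""
    order, resolved = [], set()
    while len(order) < len(tasks):               # topological order (schedules are DAGs)
        for t, (_, preds) in tasks.items():
            if t not in resolved and all(p in resolved for p in preds):
                order.append(t)
                resolved.add(t)
    es, ef = {}, {}                              # early start / early finish
    for t in order:
        dur, preds = tasks[t]
        es[t] = max((ef[p] for p in preds), default=0)
        ef[t] = es[t] + dur
    finish = max(ef.values())
    succs = defaultdict(list)
    for t, (_, preds) in tasks.items():
        for p in preds:
            succs[p].append(t)
    ls, lf = {}, {}                              # late start / late finish
    for t in reversed(order):
        lf[t] = min((ls[s] for s in succs[t]), default=finish)
        ls[t] = lf[t] - tasks[t][0]
    return {t: ls[t] - es[t] for t in order}, finish

# Hypothetical network: B and C both follow A; D needs both; E closes out.
tasks = {
    "A": (3, []), "B": (5, ["A"]), "C": (2, ["A"]),
    "D": (4, ["B", "C"]), "E": (1, ["D"]),
}
total_float, finish = cpm(tasks)
critical_path = [t for t in tasks if total_float[t] == 0]
```

Commercial engines add calendars, lags, constraint dates, and resource leveling on top of this core, but the critical-path logic itself is this simple.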
Achieving full integration of PPM elements across subdomains requires extending the core features of CPM scheduling applications to realize their full informative and business value. This includes integrating risk identification and management capabilities that include, but go beyond, simple Monte Carlo schedule risk analysis. It also includes automated cost and schedule analysis of the alignment between schedule activities and resource execution and distribution, as well as strategies and measures for risk handling.
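Monte Carlo schedule risk analysis, the simplest of these extensions, replaces single-point durations with three-point estimates and simulates the network many times to produce a confidence distribution for the finish date. A minimal sketch with hypothetical estimates (durations in days):

```python
import random

# Hypothetical three-point estimates: (optimistic, most likely, pessimistic, predecessors).
tasks = {
    "design": (8, 10, 16, []),
    "build":  (15, 20, 30, ["design"]),
    "test":   (4, 5, 12, ["build"]),
    "deploy": (1, 2, 4, ["test"]),
}
network = {t: spec[3] for t, spec in tasks.items()}

def finish_date(durations):
    """Forward pass; tasks are listed in topological order above."""
    ef = {}
    for t, preds in network.items():
        ef[t] = max((ef[p] for p in preds), default=0) + durations[t]
    return max(ef.values())

random.seed(42)
runs = sorted(
    finish_date({t: random.triangular(lo, hi, ml)  # note: mode is the 3rd argument
                 for t, (lo, ml, hi, _) in tasks.items()})
    for _ in range(10_000)
)
deterministic = finish_date({t: ml for t, (_, ml, _, _) in tasks.items()})
p80 = runs[int(0.80 * len(runs))]  # 80th-percentile finish
```

Because duration uncertainty is typically skewed toward overrun, the P80 finish lands later than the deterministic CPM date, which is exactly the insight a single-point schedule hides.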
Additional extensions include analytical queries that determine weaknesses in the schedule and whether foundational elements are properly tick-and-tied (schedule health). The ability to trace schedule tasks to specific work and technical performance measures provides the means to rapidly identify areas that require immediate action.
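A schedule-health query of the kind described can start as simply as flagging dangling logic, meaning tasks with no predecessor or no successor. A minimal sketch over a hypothetical task network:

```python
def schedule_health(tasks):
    """Flag dangling logic in {name: [predecessors]}: every task except the
    start milestone should have a predecessor, and every task except the
    finish should drive a successor."""
    has_successor = set()
    for preds in tasks.values():
        has_successor.update(preds)
    return {
        "no_predecessor": [t for t, preds in tasks.items() if not preds],
        "no_successor": [t for t in tasks if t not in has_successor],
    }

# Hypothetical network; "orphan" is disconnected and shows up in both lists.
tasks = {"start": [], "A": ["start"], "B": ["start"], "C": ["A", "B"], "orphan": []}
report = schedule_health(tasks)
```

Fuller health checks also look at high float, hard constraints, long lags, and negative float, but dangling logic is the first thing that invalidates a critical path.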
Finally, an integrated solution must capture data elements from across different CPM scheduling applications, or even within a uniform scheduling environment, and provide comprehensive reporting, Gantt charts, and visual analytics of key factors and elements, including EVM, systems engineering, contract compliance, and other relevant elements within the PPM ecosystem.
Applying Modern Systems Design to Integration in PPM
The most effective way to achieve integration across the PPM ecosystem is through the deployment of a modern power platform.
Key capabilities and components of power platform technology include:
- Low-code/no-code app deployment: visual configuration designers to create apps without heavy hand-coding.
- Integration layer: prebuilt schemas, connectors and tools to connect SaaS, on-prem systems, databases, and custom APIs.
- Data platform and modeling: a common data model based on open data principles that honor data sovereignty, metadata-driven storage, and low-code data manipulation.
- Analytics and dashboards: embedded BI/reporting to turn app data into actionable insights.
- Workflow automation: event- and trigger-driven automation (including RPA for UI automation).
- Governance, security, and lifecycle: role-based access, environment separation (dev/test/prod), ALM, monitoring, and audit.
- Extensibility: custom code extensions, SDKs, plug-ins, and support for CI/CD and developer tooling.
- Marketplace/connectors: pre-configured COTS functionality, reusable components, templates, and third-party integrations.
When we combine this technology with a modular open systems approach (MOSA) in application design and open data governance, we can realize the full intrinsic and business value of data while achieving maximum flexibility.
These principles first evolved within the systems engineering and model-based engineering communities. But the same benefits identified in this model for physical components within systems also apply to computer systems that control and analyze human systems, such as in PPM. Taking this approach also allows for greater integration between technical performance in systems research and development with the various other subdomains.
The benefits are significant; they include:
- Faster innovation: Modular components and open data enable parallel development, third‑party extensions, and rapid replacement of parts without system-wide redesign.
- Reduced vendor lock‑in: Standardized interfaces and governance let organizations mix vendors and swap modules, lowering dependence on single suppliers.
- Lower total cost of ownership: Reuse, incremental upgrades, and competitive procurement reduce lifecycle costs.
- Improved resilience and reliability: Fault isolation via modularity and the ability to hot‑swap components or roll back to previous modules improves uptime.
- Scalability and flexibility: Easily scale capacity or add capabilities (e.g., new analytics or data-capture modules) by plugging in compatible modules.
- Interoperability and integration: Standard interfaces and open data models simplify integrating third‑party analytics and partner systems.
- Faster regulatory and market response: Modular upgrades and open data make it easier to meet new compliance requirements or enable new services.
- Better analytics and optimization: Open, governed data enables advanced ML/AI, cross‑system optimization (load forecasting, predictive maintenance), and transparent KPIs.
- Enhanced security posture: Clear module boundaries and standardized interfaces simplify security reviews; data governance enforces access controls, provenance, and auditability.
- Ecosystem and marketplace development: Standards + open data foster third‑party marketplaces for modules, apps, and services, driving innovation and value capture.
- Sustainability and resource efficiency: Component reuse and incremental upgrades reduce waste and support circular‑economy practices.
A Practical Use Case: Microsoft Project
The discussion mentioned in the first paragraph of this post presents an ideal use case for this approach. Microsoft has announced that it plans to retire Microsoft Project Online on September 30, 2026.
What this means is that organizations that have invested in this CPM scheduling application will need to make a decision: stay within the Microsoft Project environment, or look at the other non-Microsoft CPM applications mentioned earlier. For public-sector project management organizations, further complexity arises from the source of the schedule data: whether it is organic, from suppliers, or a hybrid that involves both organic and contracted work.
As I have stated in earlier posts, I run a software company by the name of SNA Software LLC. The Proteus Envision suite is built on modern power platform technology, designed around MOSA principles, and automates data capture and transformation in accordance with open data governance principles.
Rather than a niche application focused on one portion of the project and portfolio domain, our solutions are built to leverage these modern technologies to achieve integration. With the recent FAR overhaul, which simplifies many of the regulatory requirements on PPM systems, such as earned value management (EVM) for contracts below $50M, an open system that supports a nimble, modular approach is needed. Technical performance, schedule and resource management, and risk management become paramount.
With the implementation of the Cybersecurity Maturity Model Certification (CMMC) program, off-premises cloud usage is also a concern, given the recent controversy regarding Azure GCC High FedRAMP status. Organizations need a flexible set of options when a foundational solution is suddenly no longer available, fails to meet expectations, or ages out. Should an agency use on-premises solutions or a commercial cloud environment?
The use case here is to deploy applications that automate the capture and transformation of data from any CPM scheduling application. Doing so allows organizations to forgo direct labor in transformation, avoiding both error-ridden, long-lead-time brute-force data engineering and the improper use of Excel as a systems management tool, which silos data and creates bespoke single points of failure.
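Automated capture and transformation of this kind amounts to mapping each tool's export fields into one common task model. A minimal sketch; the source names and field mappings below are illustrative, not the actual export schemas of any product:

```python
# Illustrative field mappings; real exports differ by tool and version.
FIELD_MAPS = {
    "msproject": {"Name": "task", "Duration": "duration", "Predecessors": "preds"},
    "p6":        {"task_name": "task", "orig_dur": "duration", "pred_list": "preds"},
}

def normalize(rows, source):
    """Map rows from a vendor-specific schedule export into one common task model."""
    mapping = FIELD_MAPS[source]
    return [{mapping[k]: v for k, v in row.items() if k in mapping}
            for row in rows]

# Hypothetical export row from one tool, normalized into the common model.
p6_rows = [{"task_name": "Pour footings", "orig_dur": 5, "pred_list": ""}]
common = normalize(p6_rows, "p6")
```

Once every source lands in the same model, reporting, schedule health, and risk analysis run identically regardless of which CPM tool produced the schedule.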
The combination of a modern power platform, MOSA, and open data governance is what the current environment demands. At the core of this approach is the overriding importance of data: its accuracy, transparency, scalability, and integration. Without good data, new AI solutions will fail to meet expectations or deliver a return on investment.
In summary: The Present Challenges in PPM
The most important issues in the PPM domain today revolve around the following:
- The appropriate application and use of artificial intelligence solutions: the most useful utilization of this promising technology in an ecosystem that requires rigor.
- The shift away from PPM domain silos: not only in data and analytics, but also in developing and expanding the business acumen of the workforce to use these advanced technologies effectively.
- The continued importance of assessment and management methods informed by powerful and flexible solutions in the area of large-scale project management.
- The need for flexibility: to prevent lock-in of proprietary data solutions in a rapidly developing technology environment, and in identifying modular systems solutions to provide upgrades or interoperability rapidly.
- The rising importance of other PPM indicators: especially those such as technical performance, risk management, and resource execution measures that presage the traditional down-the-line performance indicators in EVM.
- The utilization of cloud or on-premises deployments, or a combination of the two, to address bandwidth and scaling issues as relevant datasets grow larger and more complex with integration.
- Finding strategies to overcome suboptimization in organizations resulting from rice bowls, fiefdoms, and silo-building.
Meeting these challenges and finding solutions to them will require collaboration and systems thinking combined with supporting technologies.