Artificial Intelligence, Infrastructure, and Systemic Risk: Why Scaling Demands Project Management Maturity
Morgan State Project Management Magazine - Spring 2026
Baltimore, Maryland, USA
Ricardo Viana Vargas
João Henrique Jacinto
1. Introduction
For some time now, discussing artificial intelligence (AI) has felt almost obligatory. The technology gained prominence, headlines, investments, and a nearly inevitable narrative of continuous progress. For many leaders, the topic began to sound repetitive, as though everything relevant had already been said. Consolidated indicators, however, suggest the acceleration is more than narrative: the AI Index Report 2025, produced by Stanford’s Institute for Human-Centered AI (Stanford HAI, 2025), documents simultaneous expansion of investment, capacity, and organizational adoption, intensifying the pressure to convert activity into value. That pressure is likely to deepen in the near term, as market projections point to significant growth in global AI spending in 2026 (Gartner, 2026).
The problem is that, in recent months, something has shifted quietly but profoundly. Part of this shift relates to the materiality of infrastructure: in the report Electricity 2024: Analysis and Forecast to 2026, the International Energy Agency (IEA, 2024) projects a substantial increase in electricity demand associated with data centers, AI, and other digital workloads through 2026, repositioning energy and installed capacity as strategic variables—not merely operational ones.
What began to emerge was not a new feature or a technical advance but a structural questioning of the model underpinning the AI race—a questioning originating not from external critics but from within the ecosystem itself: investors, specialists, and companies directly involved in building this infrastructure. In the report Competition in Artificial Intelligence Infrastructure, the Organisation for Economic Co-operation and Development (OECD, 2025) describes how competition in AI infrastructure can generate concentration effects and dependencies along the chain (chips, cloud, and data centers), with implications for competition, resilience, and long-term strategic freedom.
The central point has shifted from what AI can do to how it is being financed, scaled, and sustained. When this kind of doubt arises, we are no longer discussing technology hype but the fundamentals of management, risk, and strategy. This reading aligns with the approach proposed in the Artificial Intelligence Risk Management Framework (AI RMF 1.0) from the National Institute of Standards and Technology (NIST, 2023), which treats AI risk as sociotechnical—something that must be governed throughout the entire lifecycle rather than merely “resolved” technically at a single project phase.
From a project management perspective, these dynamics resonate directly with the principles outlined in A Guide to the Project Management Body of Knowledge (PMBOK® Guide), which emphasizes that effective project delivery requires not only technical execution but also stakeholder engagement, integrated risk management, and governance tailored to the project’s context and complexity (Project Management Institute [PMI], 2021). It is precisely for this reason that the topic deserves a place at the table of leaders, project management offices (PMOs), and transformation managers: not to celebrate promises but to analyze decisions.
2. When the Entire Economy Depends on a Single Bet
One of the most significant signals of this moment is the degree of concentration that has formed around artificial intelligence. In simple terms, a significant share of recent growth, investment, and large-company valuation has come to depend on a single narrative: the belief that AI would justify virtually any volume of capital invested today.
This backdrop has measurable indicators. In the AI Index Report 2025, Stanford HAI (2025) records, for example, that private AI investment in the United States reached $109.1 billion in 2024, with a wide gap relative to other hubs and with even greater concentration in generative AI. In other words, this is not merely “attention”: it is capital flow at scale, allocated in a highly asymmetric manner.
This type of concentration is not only a market phenomenon. It is, above all, a classic portfolio management warning. When many decisions rest on the same premise—continuous growth, rising demand, guaranteed future returns—the entire system becomes more sensitive to any shift in expectations. If the bet works, everyone gains. If something deviates from the script, the impacts cease to be localized and become systemic. In portfolio terms, it is equivalent to raising the correlation among bets: the risk becomes non-diversifiable.
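The correlation mechanism described above can be made concrete with a stylized numerical sketch. All figures here are hypothetical, chosen only for illustration: the volatility of an equally weighted portfolio of bets depends on their average pairwise correlation, and as that correlation approaches one, adding more bets stops reducing risk.

```python
# Stylized illustration: portfolio volatility vs. correlation among bets.
# All numbers are hypothetical assumptions, not drawn from any cited source.

def portfolio_vol(n_bets: int, sigma: float, rho: float) -> float:
    """Volatility of an equally weighted portfolio of n_bets positions,
    each with standalone volatility sigma and pairwise correlation rho."""
    variance = (sigma ** 2 / n_bets) * (1 + (n_bets - 1) * rho)
    return variance ** 0.5

sigma = 0.30  # assumed 30% standalone volatility per bet
for rho in (0.0, 0.5, 0.9):
    vol = portfolio_vol(n_bets=20, sigma=sigma, rho=rho)
    print(f"rho={rho:.1f} -> portfolio volatility ~ {vol:.1%}")

# With rho = 0 the 20 bets diversify each other; with rho = 0.9 they behave
# almost as a single bet, so the risk is effectively non-diversifiable,
# which is the systemic pattern the text describes.
```

When every initiative in the portfolio rests on the same premise, the effective correlation rises, and the portfolio behaves like the single bet it has implicitly become.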
In the language of the PMBOK® Guide, this scenario calls for robust portfolio governance—the kind that evaluates strategic alignment, monitors interdependencies, and maintains the organizational capacity to adjust when environmental conditions change (PMI, 2021). Without such discipline, scaling AI becomes an exercise in amplifying exposure rather than managing it.
This reading is strengthened further when one examines the infrastructure supporting AI. In Competition in Artificial Intelligence Infrastructure, the OECD (2025) discusses how AI infrastructure tends to exhibit supply bottlenecks, switching barriers, and vertical relationships along the chain (chips, cloud, and data centers), reinforcing power asymmetries and structural dependencies—a design that intensifies concentration not only of value but also of capacity and control.
In corporate environments, this pattern is easily recognizable. Transformation programs that are excessively dependent on a single technology, a single vendor, or a single success scenario tend to exhibit the same fragility. The risk lies not only in project failure but also in the cascading effects it provokes on prior decisions, investments already made, and commitments assumed—especially when dependence becomes not only technological but also contractual and infrastructural, reducing the degrees of freedom for course correction. This type of flexibility loss is consistent with the dynamics of dependencies and lock-ins described by the OECD (2025) in its analysis of competition in AI infrastructure.
3. The Infrastructure Race: When Investing Becomes Reflex Rather Than Strategy
Faced with this concentration, the predominant reaction of large organizations was predictable: accelerate the race for infrastructure. More compute capacity, more data centers, more long-term contracts, more financial commitments. The implicit logic is simple and dangerous: investing less appears riskier than investing too much. This dynamic is not merely financial; it is also physical. The literature already discusses how data center expansion, particularly under cooling regimes and in specific regions, puts pressure on resources such as energy and water, with strong variation across local contexts (Mytton, 2021). From an energy perspective, even before the GenAI wave, analyses of global demand indicated that data center consumption is sensitive to workload growth and operational efficiency, underscoring that “capacity” is not cost-neutral (Masanet et al., 2020).
This movement also appears in corporate language itself. In the Form 10-K for the fiscal year ended December 31, 2024, Amazon describes the investment cycle and indicates its intention to maintain elevated capital expenditure levels, with emphasis on infrastructure to support AWS growth and new workloads (Amazon.com, Inc., 2025). Similarly, in the Annual Report 2025 (FY2025), Microsoft explicitly reports accelerated expansion of AI infrastructure, including the addition of data center capacity across multiple geographies, as a foundation for supporting growing demand for AI and cloud services (Microsoft Corporation, 2025).
The point is that this race is not “just technology”: it carries physical costs and real constraints. In Electricity 2024: Analysis and Forecast to 2026, the IEA projects that data center electricity demand could grow substantially through 2026, driven by AI and other digital workloads, reinforcing the material character (energy, cost, and limits) of this escalation (IEA, 2024). In parallel, studies in the digital sustainability line warn that the acceleration of AI/ML intensifies environmental tensions precisely because computational scale tends to pull energy and water consumption along with it, widening operational and expansion trade-offs (Iman, 2025).
From a project management standpoint, the PMBOK® Guide’s emphasis on progressive elaboration and iterative planning is directly relevant here (PMI, 2021). Projects and programs operating in high-uncertainty environments benefit from incremental resource commitment, informed by validated learning at each stage—a principle that stands in direct contrast to the reflex-driven, scale-first behavior observed in the AI infrastructure race.
The problem is that scale, by itself, does not resolve fundamental uncertainties. It merely amplifies decisions already made. When clarity about returns, margins, and model sustainability remains limited, scaling means increasing exposure before reducing doubt. In practice, this creates a paradox familiar to any experienced manager: the larger the upfront investment, the less flexibility there is to correct course later. The organization begins to protect the bet it has made—not necessarily because it is the best decision but because it has become too large to fail without severe consequences.
It is at this point that the discussion about artificial intelligence ceases to be technological and becomes, definitively, a discussion about governance, strategic discipline, and decision-making maturity—especially when one recognizes that AI risk must be treated as a sociotechnical issue managed throughout the lifecycle, not as a “post-hoc adjustment” (NIST, 2023).
4. The Inflection Point: When Scaling Is No Longer Sufficient
In every major technology wave, there comes a moment when the dominant logic begins to be questioned. In the case of artificial intelligence, that moment appears to be approaching. The idea that it would suffice to scale indefinitely—more data, more processing, more infrastructure—is beginning to encounter practical, economic, and strategic limits.
There are consistent signals of this transition. The AI Index Report 2025, produced by Stanford’s Institute for Human-Centered AI, describes a dynamic in which model scale and competitive intensity continue to advance, but performance differentials among frontier models tend to narrow rapidly across successive benchmarks and cycles—a pattern that, for managers, typically precedes diminishing marginal gains at the “frontier” of performance (Stanford HAI, 2025).
What is emerging, therefore, is not a critique of the technology itself but of the mental model that has guided decisions to this point. Over recent years, the prevailing belief was that investing more would resolve any bottleneck: better models would appear automatically, efficiency gains would continue to materialize, and returns would come as a natural consequence of scale. The critical point is that, even when efficiency gains appear, they can be neutralized by the growth of demand and usage—a classic mechanism discussed since Jevons, who analyzed in The Coal Question how efficiency can induce greater total consumption (Jevons, 1865).
The problem is that this reasoning begins to encounter diminishing returns. A point arrives where adding more resources generates ever-smaller marginal gains while costs continue to accelerate. A study by the Center for Security and Emerging Technology (CSET), in the report Scaling AI: Cost and Performance of AI at the Leading Edge, illustrates this asymmetry well by showing that relatively modest performance increases can require order-of-magnitude jumps in computational cost (Lohn, 2023).
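The cost asymmetry can be sketched with a toy model in the spirit of the pattern Lohn (2023) reports. The logarithmic curve and every number below are assumptions for illustration, not figures from that report: if performance rises with the logarithm of compute, each fixed increment of benchmark score costs an order of magnitude more compute.

```python
# Toy diminishing-returns curve: score rises with log10 of compute.
# The functional form and constants are illustrative assumptions only.
import math

def benchmark_score(compute: float, a: float = 10.0) -> float:
    """Hypothetical performance curve for a given compute budget."""
    return a * math.log10(compute)

base = 1.0  # arbitrary compute unit
for multiplier in (1, 10, 100, 1000):
    c = base * multiplier
    print(f"{multiplier:>5}x compute -> score {benchmark_score(c):5.1f}")

# Each additional 10 points of score requires 10x more compute, so the
# marginal gain per unit of spend shrinks by an order of magnitude per step.
```

Under any curve of this shape, "spend more" is a strategy with a built-in expiry date: the question shifts from how much to invest to where the marginal unit of investment still buys learning.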
This inflection is especially relevant for leaders and transformation managers because it mirrors a recurrent error in complex programs: believing that additional budget and effort compensate for conceptual flaws, absence of learning, or design problems. When the organization reaches this stage, the challenge ceases to be technical and becomes strategic. In the broader macroeconomic plane, Acemoglu (2024), in The Simple Macroeconomics of AI, argues that aggregate productivity gains associated with AI may be modest over time, precisely because costs, diffusion, organizational frictions, and effective task substitution matter—reinforcing the need for selectivity rather than scale alone.
The PMBOK® Guide provides a structured approach to this challenge through its emphasis on benefits realization management and the alignment of project outcomes with strategic objectives (PMI, 2021). Organizations that treat AI initiatives as projects requiring clear success criteria, staged investment decisions, and measurable value delivery are better positioned to navigate the transition from exploration to exploitation without falling into the trap of escalation of commitment.
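The value of staged investment decisions over upfront commitment can be shown with a stylized expected-value comparison. The probabilities and amounts below are hypothetical assumptions, not data from any cited source: a pilot that resolves uncertainty before the large commitment can turn a negative-expected-value bet into a positive one.

```python
# Stylized comparison of upfront vs. staged (gated) commitment.
# Assumed: a pilot fully reveals whether the initiative will pay off.
# All probabilities and amounts are hypothetical illustrations.

def expected_value_upfront(p: float, invest: float, payoff: float) -> float:
    """Commit everything now; payoff arrives with probability p."""
    return p * payoff - invest

def expected_value_staged(p: float, pilot: float, scale_up: float,
                          payoff: float) -> float:
    """Fund a pilot first; scale up (and earn the payoff) only on success."""
    return -pilot + p * (payoff - scale_up)

p, payoff = 0.4, 100.0  # assumed 40% success probability
print(f"upfront: {expected_value_upfront(p, invest=60.0, payoff=payoff):+.0f}")
print(f"staged:  {expected_value_staged(p, pilot=10.0, scale_up=50.0, payoff=payoff):+.0f}")

# Same total cost on the success path (10 + 50 vs. 60), but the staged
# version stops after a failed pilot, so its expected value is higher.
```

The gate is doing the work here: it converts irreversible exposure into an option, which is precisely what escalation of commitment destroys.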
Artificial intelligence thus enters a new phase: less dependent on volume and more dependent on applied research, focus, and clear direction. For organizations seeking to scale GenAI, this means treating governance and risk as part of the design itself, throughout the lifecycle, as made explicit by the Generative Artificial Intelligence Profile (NIST AI 600-1), which details risks and specific management actions for generative systems (NIST, 2024).
5. Oracle: When an AI Bet Begins to Strain the Business Model
It is in this context that Oracle’s recent trajectory becomes an emblematic case study. This is not a fragile or poorly managed company—on the contrary, Oracle historically operated with a strong software and services model, robust cash generation, and margin predictability. What draws attention is not “having AI ambition” but the shift in operational and financial profile: the company takes on characteristics typical of capital-intensive businesses (physical infrastructure, contracted capacity, long-term investments), displacing the discussion from the technological plane to the plane of governance and risk tolerance.
This inflection becomes clearer when one examines what the company itself communicated to the market. In the fourth-quarter and fiscal year 2025 earnings announcement, Oracle reported that operating cash flow for FY2025 was $20.8 billion (Oracle Corporation, 2025a). In the same release, the company highlighted that Remaining Performance Obligations (RPO) reached $138 billion, growing 41%, reinforcing the materiality of its backlog and future commitments (Oracle Corporation, 2025a). In terms of strategic narrative, this signals commercial strength and demand; in management terms, it also indicates that a significant portion of the growth thesis depends on consistent conversion of this pipeline, with disciplined execution over time.
The strain emerges when we combine these signals with the dynamics of infrastructure investment. Market analysis of the same earnings cycle emphasized that Oracle reported capex of approximately $21.2 billion in FY2025, associated with accelerated investment to sustain cloud/AI ambitions, and that this investment intensity pushed free cash flow into negative territory while the company signaled continuation of spending levels (Barron’s, 2025). From a governance standpoint, this is less about “a good or bad quarter” and more about a structural effect: the greater the volume of irreversible investment in capacity, the smaller the flexibility to adjust course if assumptions about demand, price, utilization, and margin do not materialize at the expected pace.
In the language of project management, the PMBOK® Guide warns of the dangers inherent in projects where sunk costs begin to drive decision-making rather than forward-looking value analysis (PMI, 2021). This is precisely the dynamic at play: when the bet ceases to be incremental and creates a lock-in effect. By committing capital and capacity, the organization reduces degrees of freedom for subsequent correction because “going back” carries a high cost and significant reputational and financial impact. This pattern aligns with the megaproject literature: Flyvbjerg (2014) synthesizes how large-scale undertakings tend to suffer from overcommitment, escalation of commitment, and difficulty in course correction as decisions become “too large” to reverse without significant losses.
This is where the Oracle case connects directly to the core of this paper: when AI becomes infrastructure and financial commitment, the risk shifts from “Does the model work?” to “Is the decision and execution system robust enough to sustain interdependencies?” This reading is consistent with the view that AI risk must be managed across the entire lifecycle and across multiple dimensions—technical, organizational, operational, and governance—not as a post-hoc adjustment (NIST, 2024). The warning, therefore, is not “The strategy is wrong” but that it displaces the company into a territory where many variables must go right simultaneously—and this is, for leaders and PMOs, a classic signal of the need for decision-making discipline.
6. When Everything Must Go Right Simultaneously: The Systemic Risk of AI
As investments in artificial intelligence deepen, a new type of fragility begins to form: the chain of dependencies. Infrastructure, financing, long-term contracts, user growth, revenue conversion, and continuous access to capital operate as coupled components—and the more coupled they become, the lower the capacity to isolate failures without affecting the whole.
This coupling is not an abstraction: it is sustained by physical and energy infrastructure that has entered the strategic equation. The IEA, in Electricity 2024: Analysis and Forecast to 2026, indicates that the expansion of data centers (including AI and crypto) pressures global electricity demand significantly through 2026, reinforcing that the “race” is not merely digital but material and operational (IEA, 2024).
In other words, it is not enough for the technology to work—it must work at volume, continuously, with cost and resources under control. When this condition becomes a premise, operations depend on multiple simultaneous variables (capacity, efficiency, energy price, hardware availability, credit, adoption, and monetization), and risk ceases to be localized and becomes systemic.
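How quickly coupling erodes reliability can be sketched with elementary probability. The figures are hypothetical and the independence assumption is a simplification: if an operation depends on several conditions holding simultaneously, each with probability p of holding in a given period, the chance that everything works is p raised to the number of conditions.

```python
# Stylized coupling arithmetic: probability that all of n independent
# conditions hold simultaneously. Numbers are hypothetical illustrations.

def all_conditions_hold(p: float, n: int) -> float:
    """Chance the whole chain works when each link holds with probability p."""
    return p ** n

p = 0.95  # assumed: each condition (capacity, energy, credit, adoption...)
for n in (1, 3, 7):
    print(f"{n} coupled conditions -> {all_conditions_hold(p, n):.1%} chance all hold")

# Seven coupled conditions at 95% each leave the system on its "perfect
# trajectory" only about 70% of the time: the risk is systemic, not local.
```

The managerial implication is that reliability must be bought at the level of the chain, not the link: decoupling one dependency often does more than hardening several.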
The OECD (2025), in Competition in Artificial Intelligence Infrastructure, describes how the underlying infrastructure (accelerator chips, cloud, data centers, and associated supply chains) tends to exhibit concentration, complexity, and structural barriers, which amplify dependencies and reduce degrees of freedom when supply, cost, or regulatory shocks occur.
In project management terms, the PMBOK® Guide’s treatment of complexity and systems thinking is directly applicable (PMI, 2021). The Guide recognizes that projects and programs exist within larger systems of interconnected elements and that effective management requires understanding these interdependencies, monitoring emergent risks, and maintaining the capacity for adaptive response. When these principles are ignored, and organizations treat AI programs as isolated technical initiatives, systemic vulnerability is the predictable result.
In the AI context, interdependency is amplified for two reasons. First, because the cost of scale and operation drives rigid commitments: if infrastructure is expensive, it must be intensely utilized to “close the books,” pressuring accelerated adoption even when organizational learning is still partial. Second, because the rate of post-experiment abandonment increases the chance of “breaks” in the chain. In a widely cited press release, Gartner (2024) associates the abandonment of GenAI projects after proof of concept with factors such as rising costs, low data quality or adequacy, and governance weaknesses—that is, not “just technology” but execution and management.
The RAND Corporation (2024), complementarily, organizes recurring “failure modes” in AI projects—such as process failures, human-technology interaction failures, and value expectation failures—reinforcing that a single broken link (process, adoption, or value alignment) can compromise the whole.
Here, an element worth exploring as “hidden systemic risk” enters the picture: even when efficiency gains exist, second-order effects can emerge. When discussing the hidden costs of AI/ML adoption and the paradox of exponential digital growth, Iman (2025) revisits the logic of the rebound effect associated with the Jevons paradox, where efficiency gains reduce relative cost but can amplify total demand, while also linking this cycle to pressures on energy and water in data center and AI system operations (Iman, 2025; Jevons, 1865). As a broader context, the digital sustainability agenda has been explored in earlier contributions, situating this discussion within a “green economy” and responsible digital transition perspective (Iman, 2023).
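The rebound mechanism is simple arithmetic, and a stylized example makes it concrete. The figures below are hypothetical, chosen only to illustrate the Jevons-style trap: if efficiency doubles but induced demand triples, aggregate consumption still rises.

```python
# Stylized rebound-effect arithmetic in the spirit of Jevons (1865).
# All workload and energy figures are hypothetical illustrations.

def total_energy(workload: float, energy_per_unit: float) -> float:
    """Aggregate consumption = units of work times energy per unit."""
    return workload * energy_per_unit

before = total_energy(workload=100.0, energy_per_unit=1.0)  # baseline
after = total_energy(workload=300.0, energy_per_unit=0.5)   # 2x efficiency, 3x demand
print(f"before: {before:.0f} units, after efficiency gain: {after:.0f} units")

# Efficiency doubled, yet total consumption grew 50%: per-unit gains were
# swamped by induced demand, the hidden systemic risk described in the text.
```

This is why efficiency metrics alone are insufficient as governance indicators: they must be paired with a view of aggregate demand.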
Taken together, the conclusion of this section is clear: the systemic fragility of AI does not arise because the technology “does not work” but because the scaling model requires many conditions (technical, financial, energy, organizational, and governance) to function simultaneously and continuously. It is precisely this type of arrangement that, historically, makes systems highly efficient “on a perfect trajectory” but abrupt and costly when conditions change.
7. The Warning for Leaders, PMOs, and Transformation Managers
It is at this point that the discussion definitively leaves the field of technology and enters the territory of leadership and management. Artificial intelligence does not revoke the basic principles of sound management; it heightens the need to apply them with rigor. This becomes explicit when reference frameworks treat AI as a sociotechnical risk (not merely technical) and orient governance throughout the lifecycle, as do the AI Risk Management Framework (NIST, 2023) and its GenAI complement, the Generative AI Profile (NIST, 2024). In parallel, the creation of management system standards for AI, such as ISO/IEC 42001:2023, signals that the topic is already being treated as a formal managerial discipline, with governance, process, and auditability requirements (International Organization for Standardization [ISO], 2023).
The first warning is clear: hype does not substitute for governance. When strategic decisions are made under euphoria, there is a tendency to minimize risks, underestimate complexities, and overestimate short-term returns—translating into inflated scopes, unrealistic schedules, and commitments difficult to sustain. The signals appear in execution statistics: by projecting that a significant share of GenAI initiatives will not survive past proof of concept, Gartner (2024) attributes this pattern to factors such as data quality, insufficient risk controls, and rising costs—execution bottlenecks and scaling conditions, not “absence of technology.” The causal analysis is consistent with the RAND Corporation’s (2024) diagnosis of recurring failures in AI projects associated with inadequate problem understanding, weak integration into workflows, and organizational misalignment—management and system design failures, not merely model failures.
The PMBOK® Guide speaks directly to this challenge. Its principles of stewardship, value delivery, and systems thinking remind practitioners that project success is not defined by technical output alone but by the value delivered to stakeholders within the constraints of risk tolerance and organizational capacity (PMI, 2021). In the context of AI, this translates into the imperative for clearly defined value gates, rigorous business case validation, and ongoing stakeholder engagement throughout the initiative lifecycle.
The second warning concerns scale. Scaling before learning is a classic error. Responsible AI adoption requires controlled experimentation, short learning cycles, and objective criteria for advancing, adjusting, or retreating. This principle gains substance when operational risk guides recommend risk identification, measurement, controls, and continuous monitoring before expanding exposure—precisely the operational logic proposed by the AI RMF 1.0 and reinforced by the Generative AI Profile for GenAI contexts (NIST, 2023; NIST, 2024). The critical point here is that “responsibility” is not merely abstract ethics: more robust governance reduces rework and increases delivery consistency by mitigating risks that derail initiatives during the transition from pilot to operation, according to Deloitte (2022).
The third warning concerns dependency. The greater the concentrated bet on a single technology, vendor, infrastructure, or model, the lower the capacity for adaptation. In volatile environments, flexibility is a strategic asset; losing it early exacts a high price later. This risk tends to worsen when the ecosystem closes around a few providers and infrastructure bottlenecks. The OECD (2025) analyzes precisely this dynamic when discussing competition and market structure in AI infrastructure, highlighting characteristics that reinforce dependencies along the chain (chips, cloud, data centers). In other words, as interdependency grows, portfolio management must migrate from “bet and scale” to “diversify, create options, and govern.”
In this scenario, PMOs and transformation managers have a central role: creating structures that filter euphoria, challenge assumptions, connect technology to verifiable benefits, and keep strategic options open. The lesson is well known in complex programs: the greater the ambition and complexity, the greater the need for governance and realism—and the megaproject literature systematizes how excess optimism, lock-in, and escalation of commitment degrade results and reduce the capacity for course correction (Flyvbjerg, 2014). Applied to AI, this translates into clear value gates, active risk management, maturity criteria for scaling, and governance that treats AI as organizational transformation—not as a “feature.”
8. Conclusion: Artificial Intelligence Demands Less Euphoria and More Strategic Maturity
Artificial intelligence is, without question, one of the most transformative technologies of our time. But the central point of this article is not the technical capability of models—it is the growing asymmetry between the velocity of investment and the governance maturity required to convert this wave into sustainable value. In macroeconomic terms, this becomes clearer when we observe that projections continue to indicate an accelerated cycle of global AI spending: in a press release, Gartner synthesizes the level of financial expectation pressuring organizations to scale rapidly (Gartner, 2026).
The challenge is that adoption growth does not automatically equal net productivity growth. In the working paper The Simple Macroeconomics of AI, Acemoglu (2024) argues that aggregate productivity gains tend to be more modest than the prevailing narrative suggests, especially when implementation costs, imperfect substitution, and limits of economic applicability are considered. This counterpoint is important because it narrows the margin for “grow first, justify later”: if the macro gain tends to be incremental, governance error becomes a waste multiplier.
Moreover, the discussion about scale cannot ignore its physical limits and second-order effects. In Electricity 2024: Analysis and Forecast to 2026, the IEA (2024) projects a substantial increase in electricity consumption associated with data centers (including AI), reinforcing that this transformation has energy materiality and systemic impact. Here a frequently underestimated point arises: efficiency does not guarantee a reduction in aggregate consumption. Iman (2025) discusses the rebound effect and the Jevons paradox, indicating that efficiency gains can reduce “cost per unit” and, paradoxically, incentivize expanded usage, causing total consumption to rise—precisely the type of trap that appears when governance treats infrastructure as an operational detail rather than a strategic variable (Iman, 2025; Jevons, 1865).
With this, the final question shifts from “Will AI transform?” to “With what architecture of incentives, competition, and dependencies will the transformation be built?” The OECD’s (2025) Competition in Artificial Intelligence Infrastructure report is relevant because it frames AI infrastructure as a terrain of competition and concentration (chips, cloud, data centers, and supply chains), showing that the risk is not merely corporate—it is structural, because dependency is distributed across a few actors and few bottlenecks.
It is precisely for this reason that, for the project management audience—PMOs, program managers, and transformation leaders—the practical conclusion is direct: competitive advantage lies not in “using AI” but in governing its adoption as a portfolio of bets under uncertainty. The PMBOK® Guide’s foundational principles provide the framework: stewardship of resources, attention to value delivery, stakeholder engagement, systems thinking, and adaptive resilience are not abstractions but actionable management disciplines that directly address the risks outlined in this article (PMI, 2021). In the megaproject literature, the comparison is inevitable: when bets grow rapidly with rigid dependencies and optimistic justifications, the organization tends to fall into well-known patterns. Flyvbjerg’s (2014) synthesis of megaproject overruns and fragilities when scale and complexity exceed decision-making discipline echoes throughout the AI infrastructure cycle. The PMI’s (2014) commentary on Flyvbjerg’s contribution further emphasizes this pattern of overcommitment and governance failure.
Therefore, traversing this cycle with maturity demands an explicit “management system” for risk, control, and continuous improvement—not merely good intentions. On the public-institutional side, the AI Risk Management Framework from NIST (2023) provides operational vocabulary for treating AI as sociotechnical risk (governance, measurement, mitigation, and monitoring). On the organizational management system side, the ISO/IEC 42001:2023 standard consolidates the topic as a formalizable practice, reinforcing that AI governance must be treated as a management capability (policies, roles, auditing, and improvement)—not as an IT appendage (ISO, 2023).
In synthesis, AI will remain at the center of the strategic agenda, but the differentiator will not be following the flow of euphoria. It will be disciplining scale, protecting degrees of freedom, reducing critical dependencies, and building measurable governance—so that the technology becomes a vector for value delivery (and not merely a vector for exposure). And it is precisely here that PMOs, project managers, and transformation leaders can serve as the “last line of defense”: filtering narratives, demanding criteria, connecting investment to results, and keeping the organization on the right side of the cycle. As the PMBOK® Guide reminds us, the ultimate purpose of project management is not to execute plans but to deliver value—and in the age of AI, that distinction has never been more consequential (PMI, 2021).
References
- Acemoglu, D. (2024). The simple macroeconomics of AI (NBER Working Paper No. 32487). National Bureau of Economic Research. https://www.nber.org/papers/w32487
- Amazon.com, Inc. (2025). Form 10-K (Annual report for period ended December 31, 2024). U.S. Securities and Exchange Commission. https://www.sec.gov/Archives/edgar/data/1018724/000101872425000004/amzn-20241231.htm
- Barron’s. (2025). Oracle to spend billions staking AI claim. There’s 1 key risk. https://www.barrons.com/articles/oracle-stock-earnings-ai-8df5cb1a
- Deloitte. (2022). State of AI in the enterprise (5th ed.). Deloitte Insights. https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/articles/state-of-ai-2022.html
- Flyvbjerg, B. (2014). What you should know about megaprojects and why: An overview. Project Management Journal, 45(2), 6–19. https://doi.org/10.1002/pmj.21409
- Gartner. (2024, July 29). Gartner predicts 30% of generative AI projects will be abandoned after proof of concept by the end of 2025 [Press release]. https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025
- Gartner. (2026, January 15). Gartner says worldwide AI spending will total $2.5 trillion in 2026 [Press release]. https://www.gartner.com/en/newsroom/press-releases/2026-1-15-gartner-says-worldwide-ai-spending-will-total-2-point-5-trillion-dollars-in-2026
- Iman, N. (2023). Digital sustainability: Paving the way toward a green economy? Sustainability and Climate Change, 16(4), 268–277. https://doi.org/10.1089/scc.2023.0069
- Iman, N. (2025). The hidden costs of intelligence: Artificial intelligence and machine learning adoption and the paradox of exponential digital growth. Sustainability and Climate Change, 18(3), 225–241. https://doi.org/10.1089/scc.2025.0017
- International Energy Agency. (2024). Electricity 2024: Analysis and forecast to 2026. https://www.iea.org/reports/electricity-2024
- International Organization for Standardization. (2023). Information technology — Artificial intelligence — Management system (ISO/IEC 42001:2023). https://www.iso.org/standard/81230.html
- Jevons, W. S. (1865). The coal question. Macmillan.
- Lohn, A. (2023). Scaling AI: Cost and performance of AI at the leading edge. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/scaling-ai
- Masanet, E., Shehabi, A., Lei, N., Smith, S., & Koomey, J. (2020). Recalibrating global data center energy-use estimates. Science, 367(6481), 984–986. https://doi.org/10.1126/science.aba3758
- Microsoft Corporation. (2025). Form 10-K (Annual report for period ended June 30, 2025). U.S. Securities and Exchange Commission. https://www.sec.gov/Archives/edgar/data/789019/000095017025100235/0000950170-25-100235-index.html
- Mytton, D. (2021). Data centre water consumption. npj Clean Water, 4, Article 11. https://doi.org/10.1038/s41545-021-00101-w
- National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0) (NIST AI 100-1). https://doi.org/10.6028/NIST.AI.100-1
- National Institute of Standards and Technology. (2024). Generative artificial intelligence profile (NIST AI 600-1). https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
- Oracle Corporation. (2025a). Oracle announces fiscal 2025 fourth quarter and fiscal full year financial results [Press release]. https://www.oracle.com/asean/news/announcement/q4fy25-earnings-release-2025-06-11/
- Oracle Corporation. (2025b). Form 10-K (Annual report for period ended May 31, 2025). U.S. Securities and Exchange Commission. https://www.sec.gov/Archives/edgar/data/1341439/000095017025087926/0000950170-25-087926-index.htm
- Organisation for Economic Co-operation and Development. (2025). Competition in artificial intelligence infrastructure. OECD Publishing. https://www.oecd.org/en/publications/2025/11/competition-in-artificial-intelligence-infrastructure_69319aee.html
- Project Management Institute. (2014). What you should know about megaprojects [Review of the article by B. Flyvbjerg]. PMI Thought Leadership Series.
- Project Management Institute. (2021). A guide to the project management body of knowledge (PMBOK® guide) (7th ed.). Project Management Institute.
- RAND Corporation. (2024). The root causes of failure for artificial intelligence projects and how they can succeed (Research Report RRA2680-1). https://www.rand.org/pubs/research_reports/RRA2680-1.html
- Stanford University, Institute for Human-Centered Artificial Intelligence. (2025). AI index report 2025. https://hai.stanford.edu/ai-index/2025-ai-index-report