<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Techne AI]]></title><description><![CDATA[We empower organizations and teams to responsibly harness AI by implementing robust governance frameworks, delivering tailored AI literacy workshops, and developing customized AI platform use cases, playbooks, and role-based prompt strategy guides.]]></description><link>https://insights.techne.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!zckt!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b49ac89-c999-433d-8e3d-9dbf0fef1b7e_1280x1280.png</url><title>Techne AI</title><link>https://insights.techne.ai</link></image><generator>Substack</generator><lastBuildDate>Sun, 19 Apr 2026 00:49:17 GMT</lastBuildDate><atom:link href="https://insights.techne.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Khullani M. Abdullahi]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[info@techne.ai]]></webMaster><itunes:owner><itunes:email><![CDATA[info@techne.ai]]></itunes:email><itunes:name><![CDATA[Khullani M. Abdullahi]]></itunes:name></itunes:owner><itunes:author><![CDATA[Khullani M. Abdullahi]]></itunes:author><googleplay:owner><![CDATA[info@techne.ai]]></googleplay:owner><googleplay:email><![CDATA[info@techne.ai]]></googleplay:email><googleplay:author><![CDATA[Khullani M. 
Abdullahi]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Reading OpenAI’s Industrial Policy Paper as a Strategic Planning Document]]></title><description><![CDATA[The April 2026 proposal is not primarily a labor-policy document or a regulatory wish list.]]></description><link>https://insights.techne.ai/p/reading-openais-industrial-policy</link><guid isPermaLink="false">https://insights.techne.ai/p/reading-openais-industrial-policy</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Sat, 18 Apr 2026 19:58:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yPlo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yPlo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yPlo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png 424w, https://substackcdn.com/image/fetch/$s_!yPlo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png 848w, https://substackcdn.com/image/fetch/$s_!yPlo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png 1272w, 
https://substackcdn.com/image/fetch/$s_!yPlo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yPlo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png" width="711" height="465" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90304487-1602-488a-a546-759f08ef07ae_711x465.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:465,&quot;width&quot;:711,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:126798,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/194638087?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yPlo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png 424w, https://substackcdn.com/image/fetch/$s_!yPlo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png 848w, https://substackcdn.com/image/fetch/$s_!yPlo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png 1272w, 
https://substackcdn.com/image/fetch/$s_!yPlo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90304487-1602-488a-a546-759f08ef07ae_711x465.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>The April 2026 proposal is not primarily a labor-policy document or a regulatory wish list. 
It is a signal about where labor economics, tax structure, energy policy, talent markets, and corporate governance are likely to move over the next decade&#8202;&#8212;&#8202;and what executives should be planning for now.</p><h3>In brief</h3><ul><li><p>OpenAI&#8217;s April 2026 paper, <em>Industrial Policy for the Intelligence Age</em>, is the most ambitious public-policy vision any AI developer has advanced. For operating executives, the paper is most usefully read as a strategic-planning input: it signals where labor economics, tax structure, energy infrastructure, talent markets, procurement standards, and corporate governance are likely to move over the next three to ten years.</p></li><li><p>In the short term (12&#8211;24 months), the paper&#8217;s operational implications concentrate around three themes: the emerging AI disclosure and incident-reporting architecture, accelerating AI-related D&amp;O and insurance dynamics, and the documentation executives will increasingly be expected to produce around AI governance, vendor risk, and worker consultation.</p></li><li><p>In the longer term (three to ten years), the paper points toward structural shifts in how AI productivity gains are distributed among shareholders, workers, and public programs; how talent markets operate once benefits begin to decouple from employers; how the tax base is restructured around AI-driven returns; how energy costs and siting decisions shape AI capability; and how the care economy is positioned as the absorption sector for AI-driven workforce displacement. Each has direct capital-allocation consequences.</p></li><li><p>Several of the paper&#8217;s most consequential implications are not obvious on first reading. 
Below, I identify four that most strategic plans have not yet incorporated.</p></li></ul><h3>What the document is, and why it matters</h3><p>In April 2026, OpenAI published a 13-page policy paper titled <em><a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">Industrial Policy for the Intelligence Age: Ideas to Keep People First.</a></em> The paper proposes a policy agenda across two themes: building an open economy (worker voice in AI deployment, AI-enabled entrepreneurship supports, a &#8220;Right to AI&#8221; access program, a Public Wealth Fund giving citizens a stake in AI-driven growth, modernization of the tax base, efficiency dividends for workers, adaptive safety nets, portable benefits, and accelerated pathways into care-economy work) and building a resilient society (safeguards for high-consequence domains, provenance and verification standards, auditing regimes through an expanded Center for AI Standards and Innovation, model-containment protocols, mission-aligned governance for frontier developers, guardrails on government AI use, mechanisms for public input, incident reporting, and international coordination).</p><p>The paper is supported by a $100,000 fellowship program, up to $1 million in research credits, a Washington, D.C. workshop scheduled for May, and a dedicated public-comment inbox. Its publication follows a pattern familiar from earlier technology transitions: a leading incumbent advances an ambitious policy vision early in the regulatory cycle to shape the terms of the debate.</p><p>For operating executives, the paper matters for three reasons that are not primarily about AI policy in the abstract.</p><p>First, the paper opens a multi-year policy conversation whose outcomes will materially affect labor costs, corporate tax exposure, energy costs, regulatory obligations, and competitive market structure. 
Most of these inputs sit at the top of any five-year plan.</p><p>Second, the signals the paper sends are already being acted on by institutional investors, D&amp;O underwriters, state legislators, enterprise buyers, and competitors&#8202;&#8212;&#8202;regardless of whether specific proposals ever become federal law. Markets do not wait for legislation.</p><p>Third, several of the paper&#8217;s proposals cross the line from policy wish list into operational expectation considerably faster than executives typically plan for. Incident reporting moved from voluntary practice to mandatory SEC disclosure in cybersecurity over roughly 15 years. AI is likely to compress that trajectory.</p><p>The paper, in other words, is an input to corporate strategy. The question for executives is what it contains, what it signals, and which of its implications belong on the agenda now rather than later.</p><h3>The strategic frame: two regimes, one transition</h3><p>A useful way to situate the paper is to recognize that AI accountability and AI-related economic rules in the United States are governed by two distinct regimes, operating on different timelines.</p><p>The industrial-policy regime operates through legislation, agency rulemaking, standards bodies, procurement authority, and public-private institutional design. It is mostly proposed. It includes federal and state AI laws, the NIST AI Risk Management Framework, CAISI, and the auditor and insurance markets that scale alongside standards development. OpenAI&#8217;s paper is substantially about this regime.</p><p>The liability and market regime operates through fiduciary duty, securities enforcement, tort law, consumer protection, employment law, D&amp;O insurance markets, state AI statutes, and the customer and investor expectations being transmitted through private channels. It is already operating. 
It includes the growing count of AI-related securities class actions, the state AI statutes now in force in Illinois, Colorado, California, and elsewhere, and the D&amp;O underwriting practices that are pricing AI governance today.</p><p>The two regimes are complementary. Every public company and most private companies of meaningful size are subject to both. The practical difference is that the industrial-policy regime is where the next several years of policy debate will happen, while the liability and market regime is where operating consequences are already being priced.</p><p>Executives who treat the paper as a complete account of AI-related corporate rules&#8202;&#8212;&#8202;or who wait for the industrial-policy regime to clarify before acting&#8202;&#8212;&#8202;are likely to underinvest in the governance and operational controls that are already being priced into insurance, litigation, and investor expectations.</p><h3>Short-term implications (12&#8211;24 months)</h3><p>Four operational themes warrant near-term attention from executive teams.</p><p>AI disclosure and incident reporting are consolidating into a defined architecture. The paper calls for structured sharing of &#8220;incidents, misuse, and near-misses&#8221; with a designated public authority, including cases where systems exhibited &#8220;concerning internal reasoning, unexpected capabilities, or other warning signals&#8202;&#8212;&#8202;even if safeguards ultimately prevented harm.&#8221; This mirrors the path cybersecurity disclosure followed from voluntary practice to SEC Item 1.05 materiality reporting. Organizations that do not currently maintain an AI incident log, a defined escalation pathway, and a materiality-assessment process should treat building these capabilities as a near-term priority. 
The relevant owners are typically some combination of General Counsel, Chief Risk Officer, CISO, and Chief Compliance Officer, depending on organizational structure.</p><p>D&amp;O and AI-related insurance dynamics are tightening. More than 50 AI-related securities class action filings have been tracked since early 2020, with filings reportedly roughly doubling in 2024 over 2023 (Stanford SCA Clearinghouse and Cornerstone Research summaries). Average D&amp;O settlement size has risen to approximately $56 million (Advisen and Cornerstone, 2024). D&amp;O underwriters are factoring AI governance into pricing, retentions, and exclusion language at renewal. Organizations entering renewal cycles in the next 12 to 18 months should expect underwriter questions about AI inventory, board oversight cadence, incident response, and third-party AI vendor risk. These questions are increasingly determinative of premium and retention outcomes.</p><p>Worker consultation on AI deployment is moving from HR policy to a documented expectation. The paper proposes a formal worker-voice mechanism in AI deployment decisions&#8202;&#8212;&#8202;an idea already partially operative under the EU AI Act. In U.S. contexts, analogous expectations are arriving through state legislation (Illinois&#8217;s Public Act 104&#8211;0054 requires documented notice and consultation on workplace AI decisions), collective bargaining agreements in regulated sectors, and employment litigation in sectors including logistics, healthcare, and financial services. Organizations without a documented worker consultation process on material AI deployments should anticipate that the absence of one becomes evidentiary in disputes, rather than simply a gap in best practice.</p><p>Vendor and third-party AI risk is becoming a procurement standard. 
The &#8220;AI trust stack&#8221; the paper describes&#8202;&#8212;&#8202;provenance, verification, audit logs, delegation records&#8202;&#8212;&#8202;is already being translated into enterprise procurement questionnaires. Technology vendors selling into regulated industries, government, healthcare, and financial services should expect provenance, evaluation, and incident-reporting disclosures to become table-stakes in RFPs over the next 12 months. On the buyer side, organizations relying on third-party AI models without clear vendor disclosure practices should anticipate increased audit attention from D&amp;O underwriters, regulators, and investors.</p><h3>Longer-term implications (three to ten years)</h3><p>Four structural shifts signaled by the paper warrant incorporation into strategic plans beyond the immediate horizon.</p><p>The distribution of AI productivity gains is moving into political and negotiation terrain. The paper proposes that efficiency gains from AI be converted into &#8220;durable improvements in workers&#8217; benefits&#8221;&#8202;&#8212;&#8202;richer retirement contributions, expanded healthcare coverage, 32-hour workweek pilots without loss of pay, and productivity-linked &#8220;benefits bonuses.&#8221; Most business cases for AI investment assume that productivity gains accrue substantially to shareholders. The paper, and the broader policy environment it signals, suggests that the effective retention rate of AI gains to shareholders will be a contested variable&#8202;&#8212;&#8202;influenced by labor negotiations, state legislation, proxy advisor positions, and tax policy. Strategic plans that treat AI productivity as unmitigated margin expansion are likely to overstate the economics available to shareholders in the medium term.</p><p>The talent market is likely to decouple from the benefits stack. 
The paper&#8217;s portable-benefits proposals&#8202;&#8212;&#8202;healthcare, retirement, and training accounts that follow individuals across employers&#8202;&#8212;&#8202;represent a policy direction already visible in state pilots and gig-economy advocacy. If portable benefits achieve even partial scale, two structural shifts follow: employers lose one of their historically largest retention levers, and contingent, fractional, and independent work becomes substantially more viable at scale. Organizations currently scaling contractor-heavy delivery models have a secular tailwind. Organizations whose retention strategy relies on benefit-based lock-in face a reengineering task. Either way, the cost structure of talent&#8202;&#8212;&#8202;both the compensation mix and the composition of the workforce&#8202;&#8212;&#8202;is likely to change more than most five-year plans have modeled.</p><p>The tax base is being reshaped around AI-driven returns. The paper explicitly proposes that policymakers &#8220;rebalance the tax base by increasing reliance on capital-based revenues&#8202;&#8212;&#8202;such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns&#8221; and pair these changes with &#8220;wage-linked incentives that encourage firms to retain, retrain, and invest in workers, similar to existing R&amp;D-style credits.&#8221; In practical terms, this foreshadows a tax environment in which automation that reduces headcount is taxed more heavily, while investment in retention and retraining is credited. This has direct implications for entity structure, jurisdiction selection, transfer pricing, compensation design, M&amp;A tax planning, and capital-allocation decisions on automation versus augmentation. CFOs and heads of tax should treat this as a multi-year planning input rather than a one-time event.</p><p>Energy infrastructure is now upstream of AI capability. 
The paper proposes public-private partnerships to &#8220;accelerate the expansion of energy infrastructure required to power AI,&#8221; combined with a principle that AI data centers should &#8220;pay their own way on energy so that households aren&#8217;t subsidizing them.&#8221; This is a consequential signal. For compute-intensive businesses, AI capability decisions are increasingly grid, siting, and cost-allocation decisions. For energy-intensive industries outside tech, competition with data centers for grid capacity is becoming a real operational constraint. Organizations that run AI strategy and energy strategy as separate workstreams&#8202;&#8212;&#8202;most do&#8202;&#8212;&#8202;should integrate them. The CTO&#8217;s AI roadmap and the CFO&#8217;s energy strategy are the same planning document.</p><h3>What most executive teams are missing</h3><p>Four issues are, in our observation, materially underweighted in most current strategic plans. Each follows directly from the paper&#8217;s proposals but is not typically read off the text on first pass.</p><p>1. The effective retention rate of AI productivity gains to shareholders is a variable, not a constant. Most investor materials and internal business cases model AI as a productivity multiplier flowing to margin. The paper, state legislation, EU regulation, and proxy-advisor positions all point in the opposite direction: a share of AI gains is being actively redirected to workers, to public programs through tax reform, and to infrastructure through grid cost allocation. The magnitude of the redirection is unsettled. The direction is not. Business cases that quantify AI productivity without modeling a policy-and-negotiation &#8220;take&#8221; are likely to overstate available margin by a meaningful amount&#8202;&#8212;&#8202;potentially enough to change investment decisions.</p><p>2. The care economy is being positioned as the absorption sector for AI displacement, with accompanying public investment. 
The paper is explicit: &#8220;Expand opportunities in the care and connection economy&#8202;&#8212;&#8202;childcare, eldercare, education, healthcare, and community services&#8202;&#8212;&#8202;as pathways for workers displaced by AI.&#8221; It proposes public investment in training pipelines, wage supports for care-economy roles, and a family benefit framework. The strategic implication reaches beyond social policy. Organizations with care-economy exposure&#8202;&#8212;&#8202;directly or through adjacent platform, insurance, or staffing assets&#8202;&#8212;&#8202;are likely operating in a sector with a multi-year policy tailwind that has not yet been priced into conventional comparables. Private equity firms with healthcare, education, or home-services platforms should expect the policy environment to support valuations. Operators in adjacent sectors (consumer goods, real estate, financial services targeting seniors or families) should expect a multi-year demand tailwind that conventional demographic models partially capture but may understate.</p><p>3. The liability and market regime is already pricing what the industrial-policy regime is debating. While Congress and state legislatures debate industrial policy, D&amp;O underwriters, institutional investors, proxy advisors, state attorneys general enforcing existing AI statutes, and plaintiffs&#8217; counsel are operating in real time. Organizations waiting for policy clarity before building governance and documentation capabilities are discovering that the price&#8202;&#8212;&#8202;in premium increases, retention escalations, adverse proxy recommendations, and litigation exposure&#8202;&#8212;&#8202;has already been set. The gap between &#8220;policy uncertainty&#8221; and &#8220;market certainty&#8221; is wider than the debate suggests, and it is not narrowing. This is the single most consequential mispricing in current corporate planning.</p><p>4. 
Incident reporting is competitive intelligence, not a compliance burden. If industry-level incident-sharing pools emerge for AI&#8202;&#8212;&#8202;analogous to the Aviation Safety Reporting System or healthcare patient-safety organizations&#8202;&#8212;&#8202;the organizations that build incident-tracking infrastructure early will gain visibility into peer failure modes, near-misses, and emerging risks. Late builders purchase the same learning much more expensively, through their own incidents. The right way to frame AI incident infrastructure to the C-suite is as an operational intelligence capability that happens to satisfy compliance, not the reverse.</p><h3>A strategic-planning agenda</h3><p>Translating the above into action, three decision agendas are appropriate across functions and horizons.</p><p>For the CEO and strategy function. Commission a review of the organization&#8217;s AI-related assumptions in the current strategic plan: what share of AI productivity gains the plan assumes are retained by the company; how the plan models talent, tax, and energy cost trajectories; whether care-economy exposure or care-economy adjacency is a strategic position; and whether the plan treats AI policy uncertainty as a reason to defer or a reason to build optionality. Resolve the ambiguity before the next capital-allocation cycle.</p><p>For the CFO and the audit committee. Pressure-test the organization&#8217;s D&amp;O renewal readiness: documented board oversight cadence, AI inventory, vendor risk posture, incident response capability, and evidence of governance maturation over the prior renewal period. Build a planning-horizon view of the tax environment the paper points toward, including the sensitivity of current capital-allocation decisions (on automation, M&amp;A, geographic footprint) to the policy scenarios the paper describes. 
Model energy-cost exposure through 2030 with data center cost-allocation policy as an explicit scenario variable.</p><p>For the CHRO, COO, and General Counsel. Stand up&#8202;&#8212;&#8202;or stress-test&#8202;&#8212;&#8202;the organization&#8217;s worker consultation process on material AI deployments, its incident and near-miss reporting infrastructure, and its AI vendor risk program. Review the governance documentation the organization could produce on reasonable notice: AI accountability-allocation framework, delegation records, escalation logs, board-level oversight evidence. In comparable governance contexts, the absence of this documentation is increasingly treated as evidence of inadequate oversight, not as a neutral gap.</p><p>For the board. Direct management to prepare a briefing on the paper&#8217;s implications for the organization&#8217;s industry. The three questions worth asking at the next governance or risk committee meeting are, in substance: Does the organization have a documented AI accountability-allocation framework? Does it maintain a monitoring process reasonably designed to detect insider use of AI to concentrate decision authority or circumvent controls? Would current governance documentation support a D&amp;O renewal submission at acceptable terms? Board-minute language documenting this review should be drafted and preserved regardless of the specific conclusions reached.</p><h3>Outlook</h3><p>The paper is most accurately read as the opening move in a multi-year conversation about how the United States governs the transition to more capable AI systems. Its specific proposals&#8202;&#8212;&#8202;a Public Wealth Fund, a national four-day workweek pilot, a formal worker-voice mechanism, a portable benefits architecture&#8202;&#8212;&#8202;are unlikely to pass in the forms proposed. 
The underlying signals are likely to materialize in other forms: through state legislation, through agency rulemaking, through enterprise procurement standards, through D&amp;O underwriting practices, through institutional investor expectations, and through the continued expansion of state AI statutes that are already operating.</p><p>For executives, the planning implication is not to react to the specific proposals. It is to read them as directional signals about where labor economics, tax structure, energy policy, talent markets, and corporate governance are heading&#8202;&#8212;&#8202;and to incorporate those signals into strategic plans, capital-allocation decisions, and governance infrastructure now, while the cost of doing so is low.</p><p>Organizations that do this well will be materially better positioned five and ten years from now than those treating the paper as either a labor-policy document, a regulatory wish list, or an artifact of one company&#8217;s lobbying strategy. It is, in different ways, all three. But it is also, and more importantly, a preview of the operating environment executives are planning into.</p><h3>Sources consulted</h3><ul><li><p>OpenAI. (2026, April). <em>Industrial Policy for the Intelligence Age: Ideas to Keep People First.</em> <a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf</a></p></li><li><p>Stanford Law School Securities Class Action Clearinghouse. http://securities.stanford.edu</p></li><li><p>Cornerstone Research. <em>Securities Class Action Filings</em> annual and semi-annual reviews.</p></li><li><p>Advisen / Cornerstone Research. D&amp;O settlement commentary, 2024.</p></li><li><p>McKinsey &amp; Company. 
<em>The State of AI</em> reporting series.</p></li><li><p>National Association of Corporate Directors (NACD). Governance and AI oversight surveys.</p></li><li><p>Diligent Institute. AI governance and board oversight surveys.</p></li><li><p>Illinois General Assembly. Public Act 104&#8211;0054 (HB 1806), Workplace AI Oversight Protections Act. </p></li></ul><a class="augmentation-placeholder image-link" data-attrs="{&quot;request_id&quot;:&quot;hevhcm10k44&quot;,&quot;url&quot;:&quot;https://www.ilga.gov&quot;,&quot;src&quot;:null,&quot;type&quot;:null}" href="javascript:void(0)" data-component-name="AugmentationPlaceholderToDOM"><img src="data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw=="></a><ul><li><p>Colorado General Assembly. Colorado AI Act (SB 24&#8211;205).</p></li><li><p>National Institute of Standards and Technology. <em>AI Risk Management Framework (AI RMF 1.0).</em></p></li><li><p>ISO/IEC 42001:2023, Artificial Intelligence Management System Standard.</p></li></ul><p><em>This article is analysis and commentary, not legal advice. Consult qualified counsel for guidance specific to your situation.</em></p><h4>The April 2026 proposal is not primarily a labor-policy document or a regulatory wish list. It is a signal about where labor economics, tax structure, energy policy, talent markets, and corporate governance are likely to move over the next decade&#8202;&#8212;&#8202;and what executives should be planning for now.</h4><h3>In brief</h3><ul><li><p>OpenAI&#8217;s April 2026 paper, <em>Industrial Policy for the Intelligence Age</em>, is the most ambitious public-policy vision any AI developer has advanced. 
For operating executives, the paper is most usefully read as a strategic-planning input: it signals where labor economics, tax structure, energy infrastructure, talent markets, procurement standards, and corporate governance are likely to move over the next three to ten years.</p></li><li><p>In the short term (12&#8211;24 months), the paper&#8217;s operational implications concentrate around three themes: the emerging AI disclosure and incident-reporting architecture, accelerating AI-related D&amp;O and insurance dynamics, and the documentation executives will increasingly be expected to produce around AI governance, vendor risk, and worker consultation.</p></li><li><p>In the longer term (three to ten years), the paper points toward structural shifts in how AI productivity gains are distributed between shareholders, workers, and public programs; how talent markets operate once benefits begin to decouple from employers; how the tax base is restructured around AI-driven returns; how energy costs and siting decisions shape AI capability; and how the care economy is positioned as the absorption sector for AI-driven workforce displacement. Each has direct capital-allocation consequences.</p></li><li><p>Several of the paper&#8217;s most consequential implications are not obvious on first reading. 
Below, I identify four that most strategic plans have not yet incorporated.</p></li></ul><h3>What the document is, and why it matters</h3><p>In April 2026, OpenAI published a 13-page policy paper titled <em><a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">Industrial Policy for the Intelligence Age: Ideas to Keep People First.</a></em> The paper proposes a policy agenda across two themes: building an open economy (worker voice in AI deployment, AI-enabled entrepreneurship supports, a &#8220;Right to AI&#8221; access program, a Public Wealth Fund giving citizens a stake in AI-driven growth, modernization of the tax base, efficiency dividends for workers, adaptive safety nets, portable benefits, and accelerated pathways into care-economy work) and building a resilient society (safeguards for high-consequence domains, provenance and verification standards, auditing regimes through an expanded Center for AI Standards and Innovation, model-containment protocols, mission-aligned governance for frontier developers, guardrails on government AI use, mechanisms for public input, incident reporting, and international coordination).</p><p>The paper is supported by a $100,000 fellowship program, up to $1 million in research credits, a Washington, D.C. workshop scheduled for May, and a dedicated public-comment inbox. Its publication follows a pattern familiar from earlier technology transitions: a leading incumbent advances an ambitious policy vision early in the regulatory cycle to shape the terms of the debate.</p><p>For operating executives, the paper matters for three reasons that are not primarily about AI policy in the abstract.</p><p>First, the paper opens a multi-year policy conversation whose outcomes will materially affect labor costs, corporate tax exposure, energy costs, regulatory obligations, and competitive market structure. 
Most of these inputs sit at the top of any five-year plan.</p><p>Second, the signals the paper sends are already being acted on by institutional investors, D&amp;O underwriters, state legislators, enterprise buyers, and competitors&#8202;&#8212;&#8202;regardless of whether specific proposals ever become federal law. Markets do not wait for legislation.</p><p>Third, several of the paper&#8217;s proposals cross the line from policy wish list into operational expectation considerably faster than executives typically plan for. Incident reporting moved from voluntary practice to mandatory SEC disclosure in cybersecurity over roughly 15 years. AI is likely to compress that trajectory.</p><p>The paper, in other words, is an input to corporate strategy. The question for executives is what it contains, what it signals, and which of its implications belong on the agenda now rather than later.</p><h3>The strategic frame: two regimes, one transition</h3><p>A useful way to situate the paper is to recognize that AI accountability and AI-related economic rules in the United States are governed by two distinct regimes, operating on different timelines.</p><p>The industrial-policy regime operates through legislation, agency rulemaking, standards bodies, procurement authority, and public-private institutional design. It is mostly proposed. It includes federal and state AI laws, the NIST AI Risk Management Framework, CAISI, and the auditor and insurance markets that scale alongside standards development. OpenAI&#8217;s paper is substantially about this regime.</p><p>The liability and market regime operates through fiduciary duty, securities enforcement, tort law, consumer protection, employment law, D&amp;O insurance markets, state AI statutes, and the customer and investor expectations being transmitted through private channels. It is already operating. 
It includes the growing count of AI-related securities class actions, the state AI statutes now in force in Illinois, Colorado, California, and elsewhere, and the D&amp;O underwriting practices that are pricing AI governance today.</p><p>The two regimes are complementary. Every public company and most private companies of meaningful size are subject to both. The practical difference is that the industrial-policy regime is where the next several years of policy debate will happen, while the liability and market regime is where operating consequences are already being priced.</p><p>Executives who treat the paper as a complete account of AI-related corporate rules&#8202;&#8212;&#8202;or who wait for the industrial-policy regime to clarify before acting&#8202;&#8212;&#8202;are likely to underinvest in the governance and operational controls that are already being priced into insurance, litigation, and investor expectations.</p><h3>Short-term implications (12&#8211;24 months)</h3><p>Four operational themes warrant near-term attention from executive teams.</p><p>AI disclosure and incident reporting are consolidating into a defined architecture. The paper calls for structured sharing of &#8220;incidents, misuse, and near-misses&#8221; with a designated public authority, including cases where systems exhibited &#8220;concerning internal reasoning, unexpected capabilities, or other warning signals&#8202;&#8212;&#8202;even if safeguards ultimately prevented harm.&#8221; This mirrors the path cybersecurity disclosure followed from voluntary practice to SEC Item 1.05 materiality reporting. Organizations that do not currently maintain an AI incident log, a defined escalation pathway, and a materiality-assessment process should treat building these capabilities as a near-term priority. 
The relevant owners are typically some combination of General Counsel, Chief Risk Officer, CISO, and Chief Compliance Officer, depending on organizational structure.</p><p>D&amp;O and AI-related insurance dynamics are tightening. More than 50 AI-related securities class action filings have been tracked since early 2020, with filings reportedly roughly doubling in 2024 over 2023 (Stanford SCA Clearinghouse and Cornerstone Research summaries). Average D&amp;O settlement size has risen to approximately $56 million (Advisen and Cornerstone, 2024). D&amp;O underwriters are factoring AI governance into pricing, retentions, and exclusion language at renewal. Organizations entering renewal cycles in the next 12 to 18 months should expect underwriter questions about AI inventory, board oversight cadence, incident response, and third-party AI vendor risk. These questions are increasingly determinative of premium and retention outcomes.</p><p>Worker consultation on AI deployment is moving from HR policy to a documented expectation. The paper proposes a formal worker-voice mechanism in AI deployment decisions&#8202;&#8212;&#8202;an idea already partially operative under the EU AI Act. In U.S. contexts, analogous expectations are arriving through state legislation (Illinois&#8217;s Public Act 104&#8211;0054 requires documented notice and consultation on workplace AI decisions), collective bargaining agreements in regulated sectors, and employment litigation in sectors including logistics, healthcare, and financial services. Organizations without a documented worker consultation process on material AI deployments should anticipate that the absence of one becomes evidentiary in disputes, rather than simply a gap in best practice.</p><p>Vendor and third-party AI risk is becoming a procurement standard. 
The &#8220;AI trust stack&#8221; the paper describes&#8202;&#8212;&#8202;provenance, verification, audit logs, delegation records&#8202;&#8212;&#8202;is already being translated into enterprise procurement questionnaires. Technology vendors selling into regulated industries, government, healthcare, and financial services should expect provenance, evaluation, and incident-reporting disclosures to become table-stakes in RFPs over the next 12 months. On the buyer side, organizations relying on third-party AI models without clear vendor disclosure practices should anticipate increased audit attention from D&amp;O underwriters, regulators, and investors.</p><h3>Longer-term implications (three to ten years)</h3><p>Four structural shifts signaled by the paper warrant incorporation into strategic plans beyond the immediate horizon.</p><p>The distribution of AI productivity gains is moving into political and negotiation terrain. The paper proposes that efficiency gains from AI be converted into &#8220;durable improvements in workers&#8217; benefits&#8221;&#8202;&#8212;&#8202;richer retirement contributions, expanded healthcare coverage, 32-hour workweek pilots without loss of pay, and productivity-linked &#8220;benefits bonuses.&#8221; Most business cases for AI investment assume that productivity gains accrue substantially to shareholders. The paper, and the broader policy environment it signals, suggests that the effective retention rate of AI gains to shareholders will be a contested variable&#8202;&#8212;&#8202;influenced by labor negotiations, state legislation, proxy advisor positions, and tax policy. Strategic plans that treat AI productivity as unmitigated margin expansion are likely to overstate the economics available to shareholders in the medium term.</p><p>The talent market is likely to decouple from the benefits stack. 
The paper&#8217;s portable-benefits proposals&#8202;&#8212;&#8202;healthcare, retirement, and training accounts that follow individuals across employers&#8202;&#8212;&#8202;represent a policy direction already visible in state pilots and gig-economy advocacy. If portable benefits achieve even partial scale, two structural shifts follow: employers lose one of their historically largest retention levers, and contingent, fractional, and independent work becomes substantially more viable at scale. Organizations currently scaling contractor-heavy delivery models have a secular tailwind. Organizations whose retention strategy relies on benefit-based lock-in face a reengineering task. Either way, the cost structure of talent&#8202;&#8212;&#8202;both the compensation mix and the composition of the workforce&#8202;&#8212;&#8202;is likely to change more than most five-year plans have modeled.</p><p>The tax base is being reshaped around AI-driven returns. The paper explicitly proposes that policymakers &#8220;rebalance the tax base by increasing reliance on capital-based revenues&#8202;&#8212;&#8202;such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns&#8221; and pair these changes with &#8220;wage-linked incentives that encourage firms to retain, retrain, and invest in workers, similar to existing R&amp;D-style credits.&#8221; In practical terms, this foreshadows a tax environment in which automation that reduces headcount is taxed more heavily, while investment in retention and retraining is credited. This has direct implications for entity structure, jurisdiction selection, transfer pricing, compensation design, M&amp;A tax planning, and capital-allocation decisions on automation versus augmentation. CFOs and heads of tax should treat this as a multi-year planning input rather than a one-time event.</p><p>Energy infrastructure is now upstream of AI capability. 
The paper proposes public-private partnerships to &#8220;accelerate the expansion of energy infrastructure required to power AI,&#8221; combined with a principle that AI data centers should &#8220;pay their own way on energy so that households aren&#8217;t subsidizing them.&#8221; This is a consequential signal. For compute-intensive businesses, AI capability decisions are increasingly grid, siting, and cost-allocation decisions. For energy-intensive industries outside tech, competition with data centers for grid capacity is becoming a real operational constraint. Organizations that run AI strategy and energy strategy as separate workstreams&#8202;&#8212;&#8202;most do&#8202;&#8212;&#8202;should integrate them. The CTO&#8217;s AI roadmap and the CFO&#8217;s energy strategy are the same planning document.</p><h3>What most executive teams are missing</h3><p>Four issues are, in our observation, materially underweighted in most current strategic plans. Each follows directly from the paper&#8217;s proposals but is not typically read off the text on first pass.</p><p>1. The effective retention rate of AI productivity gains to shareholders is a variable, not a constant. Most investor materials and internal business cases model AI as a productivity multiplier flowing to margin. The paper, state legislation, EU regulation, and proxy-advisor positions all point in the opposite direction: a share of AI gains is being actively redirected to workers, to public programs through tax reform, and to infrastructure through grid cost allocation. The magnitude of the redirection is unsettled. The direction is not. Business cases that quantify AI productivity without modeling a policy-and-negotiation &#8220;take&#8221; are likely to overstate available margin by a meaningful margin&#8202;&#8212;&#8202;potentially enough to change investment decisions.</p><p>2. The care economy is being positioned as the absorption sector for AI displacement, with accompanying public investment. 
The paper is explicit: &#8220;Expand opportunities in the care and connection economy&#8202;&#8212;&#8202;childcare, eldercare, education, healthcare, and community services&#8202;&#8212;&#8202;as pathways for workers displaced by AI.&#8221; It proposes public investment in training pipelines, wage supports for care-economy roles, and a family benefit framework. The strategic implication reaches beyond social policy. Organizations with care-economy exposure&#8202;&#8212;&#8202;directly or through adjacent platform, insurance, or staffing assets&#8202;&#8212;&#8202;are likely operating in a sector with a multi-year policy tailwind that has not yet been priced into conventional comparables. Private equity firms with healthcare, education, or home-services platforms should expect the policy environment to support valuations. Operators in adjacent sectors (consumer goods, real estate, financial services targeting seniors or families) should expect a multi-year demand tailwind that conventional demographic models partially capture but may understate.</p><p>3. The liability and market regime is already pricing what the industrial-policy regime is debating. While Congress and state legislatures debate industrial policy, D&amp;O underwriters, institutional investors, proxy advisors, state attorneys general enforcing existing AI statutes, and plaintiffs&#8217; counsel are operating in real time. Organizations waiting for policy clarity before building governance and documentation capabilities are discovering that the price&#8202;&#8212;&#8202;in premium increases, retention escalations, adverse proxy recommendations, and litigation exposure&#8202;&#8212;&#8202;has already been set. The gap between &#8220;policy uncertainty&#8221; and &#8220;market certainty&#8221; is wider than the debate suggests, and it is not narrowing. This is the single most consequential mispricing in current corporate planning.</p><p>4. 
Incident reporting is competitive intelligence, not a compliance burden. If industry-level incident-sharing pools emerge for AI&#8202;&#8212;&#8202;analogous to the Aviation Safety Reporting System or healthcare patient-safety organizations&#8202;&#8212;&#8202;the organizations that build incident-tracking infrastructure early will gain visibility into peer failure modes, near-misses, and emerging risks. Late builders purchase the same learning much more expensively, through their own incidents. The right way to frame AI incident infrastructure to the C-suite is as an operational intelligence capability that happens to satisfy compliance, not the reverse.</p><h3>A strategic-planning agenda</h3><p>Translating the above into action, three decision agendas are appropriate across functions and horizons.</p><p>For the CEO and strategy function. Commission a review of the organization&#8217;s AI-related assumptions in the current strategic plan: what share of AI productivity gains the plan assumes are retained by the company; how the plan models talent, tax, and energy cost trajectories; whether care-economy exposure or care-economy adjacency is a strategic position; and whether the plan treats AI policy uncertainty as a reason to defer or a reason to build optionality. Resolve the ambiguity before the next capital-allocation cycle.</p><p>For the CFO and the audit committee. Pressure-test the organization&#8217;s D&amp;O renewal readiness: documented board oversight cadence, AI inventory, vendor risk posture, incident response capability, and evidence of governance maturation over the prior renewal period. Build a planning-horizon view of the tax environment the paper points toward, including the sensitivity of current capital-allocation decisions (on automation, M&amp;A, geographic footprint) to the policy scenarios the paper describes. 
Model energy-cost exposure through 2030 with data center cost-allocation policy as an explicit scenario variable.</p><p>For the CHRO, COO, and General Counsel. Stand up&#8202;&#8212;&#8202;or stress-test&#8202;&#8212;&#8202;the organization&#8217;s worker consultation process on material AI deployments, its incident and near-miss reporting infrastructure, and its AI vendor risk program. Review the governance documentation the organization could produce on reasonable notice: AI accountability-allocation framework, delegation records, escalation logs, board-level oversight evidence. In comparable governance contexts, the absence of this documentation is increasingly treated as evidence of inadequate oversight, not as a neutral gap.</p><p>For the board. Direct management to prepare a briefing on the paper&#8217;s implications for the organization&#8217;s industry. The three questions worth asking at the next governance or risk committee meeting are, in substance: Does the organization have a documented AI accountability-allocation framework? Does it maintain a monitoring process reasonably designed to detect insider use of AI to concentrate decision authority or circumvent controls? Would current governance documentation support a D&amp;O renewal submission at acceptable terms? Board-minute language documenting this review should be drafted and preserved regardless of the specific conclusions reached.</p><h3>Outlook</h3><p>The paper is most accurately read as the opening move in a multi-year conversation about how the United States governs the transition to more capable AI systems. Its specific proposals&#8202;&#8212;&#8202;a Public Wealth Fund, a national four-day workweek pilot, a formal worker-voice mechanism, a portable benefits architecture&#8202;&#8212;&#8202;are unlikely to pass in the forms proposed. 
The underlying signals are likely to materialize in other forms: through state legislation, through agency rulemaking, through enterprise procurement standards, through D&amp;O underwriting practices, through institutional investor expectations, and through the continued expansion of state AI statutes that are already operating.</p><p>For executives, the planning implication is not to react to the specific proposals. It is to read them as directional signals about where labor economics, tax structure, energy policy, talent markets, and corporate governance are heading&#8202;&#8212;&#8202;and to incorporate those signals into strategic plans, capital-allocation decisions, and governance infrastructure now, while the cost of doing so is low.</p><p>Organizations that do this well will be materially better positioned five and ten years from now than those treating the paper as either a labor-policy document, a regulatory wish list, or an artifact of one company&#8217;s lobbying strategy. It is, in different ways, all three. But it is also, and more importantly, a preview of the operating environment executives are planning into.</p><h3>Sources consulted</h3><ul><li><p>OpenAI. (2026, April). <em>Industrial Policy for the Intelligence Age: Ideas to Keep People First.</em> <a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf</a></p></li><li><p>Stanford Law School Securities Class Action Clearinghouse. http://securities.stanford.edu</p></li><li><p>Cornerstone Research. <em>Securities Class Action Filings</em> annual and semi-annual reviews.</p></li><li><p>Advisen / Cornerstone Research. D&amp;O settlement commentary, 2024.</p></li><li><p>McKinsey &amp; Company. 
<em>The State of AI</em> reporting series.</p></li><li><p>National Association of Corporate Directors (NACD). Governance and AI oversight surveys.</p></li><li><p>Diligent Institute. AI governance and board oversight surveys.</p></li><li><p>Illinois General Assembly. Public Act 104&#8211;0054 (HB 1806), Workplace AI Oversight Protections Act. https://www.ilga.gov</p></li><li><p>Colorado General Assembly. Colorado AI Act (SB 24&#8211;205).</p></li><li><p>National Institute of Standards and Technology. <em>AI Risk Management Framework (AI RMF 1.0).</em></p></li><li><p>ISO/IEC 42001:2023, Artificial Intelligence Management System Standard.</p></li></ul><p><em>This article is analysis and commentary, not legal advice. Consult qualified counsel for guidance specific to your situation.</em></p>]]></content:encoded></item><item><title><![CDATA[What Illinois Can Learn from the Netherlands]]></title><description><![CDATA[Building a Coordinated AI Commercialization Ecosystem: A Comparative Analysis of Innovation Architecture, Technology Transfer, and Scale Pathways]]></description><link>https://insights.techne.ai/p/what-illinois-can-learn-from-the</link><guid isPermaLink="false">https://insights.techne.ai/p/what-illinois-can-learn-from-the</guid><dc:creator><![CDATA[Khullani M. 
Abdullahi]]></dc:creator><pubDate>Fri, 27 Mar 2026 22:15:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q6If!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!q6If!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!q6If!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png 424w, https://substackcdn.com/image/fetch/$s_!q6If!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png 848w, https://substackcdn.com/image/fetch/$s_!q6If!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png 1272w, https://substackcdn.com/image/fetch/$s_!q6If!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!q6If!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png" width="1456" height="1162" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1162,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1026875,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/192359799?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!q6If!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png 424w, https://substackcdn.com/image/fetch/$s_!q6If!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png 848w, https://substackcdn.com/image/fetch/$s_!q6If!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png 1272w, https://substackcdn.com/image/fetch/$s_!q6If!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1308a61a-1c4c-45bc-8433-e63d78e0047f_1850x1476.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Illinois and the Netherlands occupy a rare, high-stakes category: economies with the research depth, industrial diversity, and global connectivity to translate artificial intelligence into productivity growth, new firms, and strategic resilience. Both have annual economic outputs exceeding one trillion dollars&#8212;Illinois&#8217;s gross state product was approximately $1.137 trillion in 2024 (preliminary, current dollars, BEA), comparable to the Netherlands&#8217; approximately $1.21 trillion GDP (2024, current US$, World Bank).<strong><sup>[1][2][17]</sup></strong> Both regions anchor globally recognized research universities and both aspire to lead in AI. 
Yet their commercialization outcomes appear to diverge for structural reasons: the Netherlands has invested in ecosystem architecture&#8212;coordination mechanisms, standardized technology transfer practices, integrated pre-seed funding pathways, and national roadmapping&#8212;while Illinois largely relies on a set of excellent but more fragmented programs and institutions.</p><p>The Netherlands&#8217; approach is not a single program or one-off initiative. It is a deliberately assembled system that answers a deceptively simple question: <em>How does an idea move from lab to market repeatedly, quickly, and responsibly?</em> That system is visible in three interlocking mechanisms. First, the Thematic Technology Transfer for AI (TTT.AI) framework functions as a shared venture-building funnel that sources academic AI innovations across multiple institutions. Second, Dutch universities have adopted standardized baseline intellectual property (IP) deal terms for spinoffs via the National IP Deal Term Principles 2.0.<strong><sup>[5][13][14]</sup></strong> Third, AI Coalition 4 NL (AIC4NL)&#8212;formed in January 2025 from the merger of NL AIC and AiNed&#8212;illustrates the value of national coordination with dedicated funding of approximately &#8364;204.5 million from the National Growth Fund.<strong><sup>[3][4][7]</sup></strong></p><p><strong>Core Finding</strong></p><p>The Netherlands&#8217; apparent advantage is not research quality alone. It appears to be <strong>connective tissue</strong>: coordination, standardization, and integrated pipelines that likely contribute to turning research output into a more repeatable flow of products, companies, pilots, and scaled deployments.</p><p>The implication for Illinois is practical. 
The state has world-class ingredients&#8212;universities, national labs, corporate demand, and civic innovation capacity&#8212;but needs a more cohesive operating system to convert those ingredients into durable AI competitiveness.</p><p><strong>Causal framing:</strong> The Dutch architecture is a plausible contributor to higher commercialization throughput; this paper hypothesizes three mechanisms (coordination, standardization, capital continuity) and proposes Illinois analogs. However, alternative explanations for outcome differences include: (a) national vs. state policy authority, (b) fundamentally different VC dynamics and exit markets, (c) EU vs. U.S. regulatory and procurement environments, (d) industry mix and geographic concentration (Amsterdam/Eindhoven corridor vs. Chicago/UIUC corridor), and (e) language, market size, and export orientation differences. Coordination likely still matters even after accounting for these factors, but this paper does not claim to isolate its causal effect.</p><h3><strong>Divergence Evidence: Where Outcomes Appear to Differ</strong></h3><p>The claim that commercialization outcomes diverge requires specificity. While comprehensive head-to-head data is unavailable, the following proxy metrics highlight structural differences in <em>how</em> each ecosystem organizes commercialization&#8212;not simply how much activity occurs. 
Illinois may generate comparable or greater raw startup volume, but the Dutch system exhibits more standardized, repeatable pathways from lab to market.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MfE1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MfE1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png 424w, https://substackcdn.com/image/fetch/$s_!MfE1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png 848w, https://substackcdn.com/image/fetch/$s_!MfE1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png 1272w, https://substackcdn.com/image/fetch/$s_!MfE1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MfE1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png" width="1456" height="1274" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1274,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:451528,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/192359799?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MfE1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png 424w, https://substackcdn.com/image/fetch/$s_!MfE1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png 848w, https://substackcdn.com/image/fetch/$s_!MfE1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png 1272w, https://substackcdn.com/image/fetch/$s_!MfE1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa4f048a-a91c-471f-a082-f9f5a30e560c_1694x1482.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Interpretation Note</p><p><em>Several of these metrics are imperfect proxies with different measurement bases. The divergence claim rests primarily on architectural evidence&#8212;the presence or absence of standardized pathways&#8212;rather than audited outcome comparisons. 
Collecting comparable throughput data should be a first-order priority for any Illinois AI coalition.</em></p><h3><strong>What Illinois Can Do&#8212;Five State-Implementable Moves</strong></h3><p><strong>(1) Establish an Illinois AI Coalition.</strong> Create a statewide coordinating body spanning agencies, universities, national laboratories, workforce partners, and private adopters, and publish a measurable AI Action Plan.</p><p><strong>(2) Standardize spinoff/IP deal principles across institutions.</strong> Create baseline, market-aligned term principles and templates to reduce negotiation drag and improve investor confidence.</p><p><strong>(3) Build a shared venture-building funnel with integrated pre-seed (&#8220;TTT-IL&#8221;).</strong> Treat commercialization as a pipeline with defined milestones, team formation support, and a warm handoff to seed and Series A partners.</p><p><strong>(4) Institutionalize adoption pathways through paid pilots and procurement.</strong> Establish a pilot-to-procurement playbook so promising AI solutions can validate outcomes and scale responsibly.</p><p><strong>(5) Commission an Illinois proposition-based investment blueprint.</strong> Identify concrete, investable deep-tech projects in Illinois advantage sectors&#8212;healthcare, logistics, manufacturing, agriculture/food, and finance.</p><h2><strong>Why This Matters Now</strong></h2><p>AI has moved from an innovation topic to an economic and governance reality. For Illinois, this creates a dual mandate: compete for the upside&#8212;productivity, new firm formation, better services&#8212;while building guardrails that sustain public trust. The risk is not that Illinois lacks talent or research. 
The risk is that, absent a more integrated commercialization and adoption architecture, Illinois may under-capture the economic returns on its research investments and industrial capacity, effectively subsidizing innovation that scales elsewhere.</p><p>Illinois&#8217;s existing investment trajectory underscores the urgency. The state has committed over $500 million to the Discovery Partners Institute and the Illinois Innovation Network, a system of 15 university-community-industry hubs projected to create 48,000 jobs and $19 billion in economic impact over ten years.<strong><sup>[16][27]</sup></strong> P33 Chicago has catalyzed over $160 million in follow-on funding for founders through its TechRise program and launched a $50 million hyper-regional venture fund.<strong><sup>[28]</sup></strong> Chicago has attracted $1.5 billion in AI investment since 2023.<strong><sup>[29]</sup></strong> These are substantial assets&#8212;but they operate largely as parallel initiatives rather than as an integrated system.</p><p>The Netherlands&#8217; recent policy and ecosystem activity offers a timely reference point precisely because it has tackled the &#8220;middle layer&#8221; between research and market&#8212;what many ecosystems experience as a valley of death. Consider how often promising academic AI remains stuck in a recurring pattern: a lab produces strong results; an invention disclosure is filed; a small team forms; founders spend months negotiating IP; early pilots happen through personal networks; and then the project stalls for lack of standardized pathways to adoption and follow-on capital.</p><p>The Netherlands has treated those seams as design problems. TTT.AI is a mechanism to make commercialization more repeatable across institutions. National IP deal principles reduce transaction costs. 
AIC4NL serves as an alignment engine that connects research, education, data, and applications with dedicated National Growth Fund support.</p><p><strong>A Practical Framing</strong></p><p>The core question is: <strong>How do we reduce time and uncertainty between discovery and deployment&#8212;while increasing accountability?</strong> That is a question legislators, universities, agencies, and private partners can answer together because each controls different parts of the pipeline.</p><p></p><p>Read the full white paper: </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://strategy.techne.ai/il-state-ai-strategy/&quot;,&quot;text&quot;:&quot;White Paper&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://strategy.techne.ai/il-state-ai-strategy/"><span>White Paper</span></a></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[There Will Be More Detroits: ]]></title><description><![CDATA[The Impact of a 30% White-Collar AI Contagion in Illinois]]></description><link>https://insights.techne.ai/p/there-will-be-more-detroits</link><guid isPermaLink="false">https://insights.techne.ai/p/there-will-be-more-detroits</guid><dc:creator><![CDATA[Khullani M. 
Abdullahi]]></dc:creator><pubDate>Mon, 02 Mar 2026 16:20:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4x1Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4x1Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4x1Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!4x1Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!4x1Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!4x1Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4x1Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png" width="1280" height="720" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:106201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/189667810?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4x1Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!4x1Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!4x1Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!4x1Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F132897e4-5c91-4283-9e1d-fc4289a8ab08_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The recent memo from <strong><a href="https://www.fastcompany.com/91500048/in-a-626-word-x-post-jack-dorsey-justifies-his-decision-to-lay-off-40-of-blocks-workforce">Block&#8217;s CEO</a></strong> outlining a nearly 40% reduction in force wasn&#8217;t framed as a cost-cutting measure for a struggling business. Jack Dorsey, the co-founder and former CEO of Twitter, was clear that the business was strong; the technology was simply better than the existing head count.</p><p>We are witnessing the beginning of the <strong><a href="https://alapshah1.substack.com/p/the-global-intelligence-crisis?r=1g6uar&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true">Intelligence Displacement Spiral</a></strong>. 
While the stock market celebrates margin expansion and record-setting corporate profits funneled right back into AI compute, we need to look at the macroeconomic floor beneath us. What happens to a state economy when the ultimate software upgrade renders a massive portion of its tax base obsolete?</p><blockquote><p>Let&#8217;s run a hypothetical: A 30% permanent reduction in Illinois&#8217; white-collar workforce due to AI agentic automation.</p></blockquote><p>This hypothetical was sparked by a brilliant, chilling thought exercise recently published by Alap Shah titled <strong><a href="https://alapshah1.substack.com/p/the-global-intelligence-crisis?r=1g6uar&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true">&#8220;The Global Intelligence Crisis.&#8221;</a></strong> Shah masterfully models the macroeconomic fallout of abundant machine intelligence replacing human intermediation at a global scale. Reading his analysis of the &#8220;intelligence displacement spiral&#8221; immediately raised a localized question that we frequently circle back to on the <em>AI in Chicago</em> podcast: what does this macroeconomic theory look like when it actually hits the ground in our own state? If we take Shah&#8217;s global premise and apply it to the municipal realities of the Midwest, the math becomes terrifyingly concrete.</p><h2><strong>The Mechanism of Displacement</strong></h2><p>We are already seeing this transition on the ground. When structuring AI implementation and compliance frameworks for large healthcare or corporate organizations, the most impactful use cases are rarely the flashy ones. 
It is not necessarily about deploying futuristic ambient scribes; it is about highly practical workflow automation.</p><p>It is quiet, it is ruthlessly efficient, and it permanently removes human capital from the friction layer.</p><h2><strong>The Coming Shock to the Illinois General Funds</strong></h2><p>According to fiscal data from the <strong><a href="https://budget.illinois.gov/state-budget.html">Governor&#8217;s Office of Management and Budget</a></strong> (GOMB), the State of Illinois operates a General Funds budget of roughly $55 billion. To understand why the state is uniquely vulnerable to a white-collar contraction, we just need to look at its own fiscal reporting. According to the <strong><a href="https://cgfa.ilga.gov/">Commission on Government Forecasting and Accountability (COGFA)</a></strong>, which publishes the state&#8217;s &#8220;Economic Outlook and Revenue Estimates,&#8221; Illinois is extraordinarily dependent on human earning power. COGFA&#8217;s data details the revenue breakdown: the state&#8217;s flat 4.95% individual income tax and state sales taxes combined account for roughly 65% of all General Funds revenue.</p><p>There are approximately 2.4 million white-collar workers (management, professional, administrative, financial) in Illinois. 
A 30% elimination equates to <strong>720,000 lost jobs</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8jxO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8jxO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png 424w, https://substackcdn.com/image/fetch/$s_!8jxO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png 848w, https://substackcdn.com/image/fetch/$s_!8jxO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png 1272w, https://substackcdn.com/image/fetch/$s_!8jxO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8jxO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png" width="1456" height="573" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/440160c5-f929-481a-8430-1c42684a60f2_1600x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:573,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" srcset="https://substackcdn.com/image/fetch/$s_!8jxO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png 424w, https://substackcdn.com/image/fetch/$s_!8jxO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png 848w, https://substackcdn.com/image/fetch/$s_!8jxO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png 1272w, https://substackcdn.com/image/fetch/$s_!8jxO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440160c5-f929-481a-8430-1c42684a60f2_1600x630.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" 
stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"></figcaption></figure></div><p>The bridge between a contracting workforce and a state-level fiscal crisis is a matter of straightforward, brutal math: there is no complex economic voodoo at play, only the direct transmission mechanism of the crisis. Assuming a conservative median salary of $85,000 for the 720,000 displaced white-collar professionals, Illinois is looking at $61.2 billion in evaporated annual wage income. Here is the immediate, first-order math on the state budget.</p><p>Because the state relies on a flat individual income tax rate of 4.95%, the consequence of that evaporation is immediate and mechanical. When you remove $61.2 billion in taxable wages from the local economy, the state automatically loses its 4.95% cut. 
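This first-order transmission math can be sketched as a quick back-of-envelope script. It is a minimal sketch, not a fiscal model: every input is this scenario's stated assumption, and the sales-tax figure is the article's own estimate rather than a number derived from a statutory rate.

```python
# Back-of-envelope sketch of the scenario's first-order fiscal hit.
# All inputs are the article's stated assumptions, not measured data.
white_collar_jobs = 2_400_000   # approximate Illinois white-collar workforce
displacement_rate = 0.30        # hypothetical 30% permanent reduction
median_salary = 85_000          # assumed conservative median salary
income_tax_rate = 0.0495        # Illinois flat individual income tax

jobs_lost = int(white_collar_jobs * displacement_rate)
wages_lost = jobs_lost * median_salary
income_tax_hole = wages_lost * income_tax_rate

# The article's own estimate of the direct sales-tax plunge from
# reduced discretionary spending (not rate-derived here).
sales_tax_hole = 1.20e9

total_first_order_hit = income_tax_hole + sales_tax_hole

print(f"jobs lost:             {jobs_lost:,}")                        # 720,000
print(f"wage income lost:      ${wages_lost / 1e9:.1f}B")             # $61.2B
print(f"income tax hole:       ${income_tax_hole / 1e9:.2f}B")        # $3.03B
print(f"total first-order hit: ${total_first_order_hit / 1e9:.2f}B")  # $4.23B
```

The point of writing it out is that the chain is linear: change any single assumption (say, a 15% displacement rate) and every downstream figure scales proportionally.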
That alone blows an instant $3.03 billion hole in income tax receipts.</p><p>But the contagion doesn&#8217;t stop at payroll. Displaced workers immediately slash their discretionary spending, cutting back on dining, retail, vehicles, and home improvement. Since white-collar workers drive the vast majority of this consumption, a $61 billion contraction in local income translates to an estimated $1.20 billion plunge in direct state sales tax revenue.</p><p>Combine the two, and the total immediate contagion reaches $4.23 billion. It is a linear calculation that should terrify municipal planners: if the wages disappear, the state automatically loses the revenue it desperately relies on to keep the lights on.</p><p>A sudden $4.2 billion hole in a tightly balanced state budget would mean immediate austerity. But that is only the first domino.</p><h2><strong>The Sector Reality of the Unwind</strong></h2><blockquote><p>When we model an economic shock of this magnitude, the impact strikes specific arteries of the state&#8217;s economy. 
The fallout of 720,000 lost jobs will not be a generalized recession; rather, it is likely to target specific sectors:</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LNoL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LNoL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png 424w, https://substackcdn.com/image/fetch/$s_!LNoL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png 848w, https://substackcdn.com/image/fetch/$s_!LNoL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png 1272w, https://substackcdn.com/image/fetch/$s_!LNoL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LNoL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png" width="1456" height="908" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:908,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" srcset="https://substackcdn.com/image/fetch/$s_!LNoL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png 424w, https://substackcdn.com/image/fetch/$s_!LNoL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png 848w, https://substackcdn.com/image/fetch/$s_!LNoL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png 1272w, https://substackcdn.com/image/fetch/$s_!LNoL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24e6b872-8dea-4dac-a87c-2a27b3aa606c_1488x928.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" 
stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"></figcaption></figure></div><ul><li><p><strong>Financial &amp; Insurance Services: </strong>Chicago is a massive hub for insurance underwriting, risk modeling, and financial intermediation. 
As AI agents learn to autonomously re-shop coverage and route around interchange fees, the clerical and analytical bloat in these sectors will face immediate contraction.</p></li><li><p><strong>Legal &amp; Compliance Administration:</strong> Routine document review, initial discovery, and compliance auditing are prime targets for large context-window models, devastating the leverage models of mid-sized law firms and corporate legal departments.</p></li><li><p><strong>SaaS &amp; Tech Operations: </strong>The local tech ecosystem relies heavily on SaaS sales, customer success, and mid-level engineering&#8212;roles rapidly being commoditized by agentic coding tools and automated customer resolution systems.</p></li></ul><h2><strong>The Detroit Mechanism: A Historical Warning</strong></h2><p>To understand what happens when a localized economy loses its dominant labor force to automation, we only have to look 280 miles east.</p><p>In 1950, Detroit was one of the wealthiest cities in America per capita, fueled by an unstoppable auto-manufacturing boom. But as the auto industry aggressively automated assembly lines with robotics and decentralized production, those high-paying blue-collar jobs permanently vanished.</p><p>The result was a catastrophic hollowing out of the municipal tax base. Between 1950 and 2010, Detroit&#8217;s population plummeted from 1.85 million to roughly 700,000. Property values cratered.</p><p>When Detroit finally <strong><a href="https://www.nytimes.com/2013/07/19/us/detroit-files-for-bankruptcy.html">filed for Chapter 9 bankruptcy in 2013</a></strong>&#8212;the largest municipal bankruptcy in U.S. history&#8212;the city was buried under $18 billion in debt. It had 78,000 abandoned structures, and the city could no longer afford to keep 40% of its streetlights turned on.</p><p>The Detroit crisis involved the collapse of a monoculture tax base. Today, the U.S. economic monoculture is white-collar intermediation<strong>. 
Artificial intelligence is to the Chicago Loop what automated robotics were to the Detroit assembly lines.</strong></p><p>But there is a critical difference: the market is fundamentally mispricing the speed of this fallout because of a deceptive 24-month lag.</p><p>When 720,000 high-earning residents lose their primary income source, they do not immediately default on their mortgages. The displacement is masked by an 18 to 24-month buffer of severance packages, drained savings accounts, and 401(k) liquidations. They will downshift, taking temporary gig work or lower-paying administrative roles while maintaining the illusion of solvency. During this period, state tax revenues will softly decline, but the housing market will look deceptively stable.</p><p>Detroit proved that when the dominant labor force is automated away, the municipal infrastructure built to support those workers cannot survive the transition.</p><h2><strong>The true contagion begins in 2028-2029.</strong></h2><p>As the savings buffers dry up, the $13 trillion mortgage market cracks. 
In Illinois, local municipalities rely heavily on property taxes to fund schools and fixed pension obligations.</p><ol><li><p>The liquidity buffer breaks: Starting in late 2028, displaced professionals exhaust their savings and begin defaulting or attempting to sell their homes in a suddenly saturated, illiquid market.</p></li><li><p>Property tax revenues fall in lockstep with plunging assessments.</p></li><li><p>Municipalities are forced to raise tax rates on the remaining solvent residents just to cover fixed municipal debt.</p></li><li><p>Capital flight accelerates as those who can afford to leave the state do so to avoid the escalating tax burden.</p></li><li><p>The tax base enters a death spiral.</p></li></ol><blockquote><p>This is exactly what happened to Detroit when the auto industry contracted and automated, just localized to the rust belt. AI is a general-purpose technology; the rust belt is now every office building in the Loop, and the defaults are scheduled for the end of the decade.</p></blockquote><h2><strong>Intelligence-Driven Deflation</strong></h2><p>Waiting for federal intervention is a losing strategy. 
Local leaders and city managers must begin proactively stress-testing their budgets against intelligence-driven deflation.</p><p>Intelligence-driven deflation is the macroeconomic phenomenon where the proliferation of abundant, near-zero-marginal-cost artificial intelligence permanently collapses the price of goods, services, wages, and ultimately, hard assets.</p><p>Unlike traditional technological deflation&#8212;which makes specific products like TVs or computers cheaper while creating new industries for human labor&#8212;intelligence-driven deflation is systemic.</p><p>It occurs when human cognition itself is no longer the scarce, premium input in the economy.</p><blockquote><p>Here is how the mechanics of intelligence-driven deflation actually play out in the economy:</p></blockquote><p><strong>The Supply Side: The Collapse of Friction and Cost</strong></p><p>Historically, the service and knowledge economy was built on human limitations. It takes time to write code, analyze legal discovery, or underwrite an insurance policy. Human intelligence is expensive, requires healthcare, and needs sleep.</p><p>When AI agents can execute these cognitive tasks for the cost of electricity, the cost of producing white-collar services plummets. Companies no longer have to pay a 30% premium for human intermediation. Software, legal services, administrative processing, and consulting become drastically cheaper to produce and deliver.</p><p><strong>The Demand Side: The Wage and Consumption Unwind</strong></p><p>In a service-based economy like the U.S., white-collar workers represent the vast majority of discretionary consumer spending. As companies substitute expensive human intelligence for cheap machine intelligence to maintain margins, those displaced workers either lose their jobs or are forced into lower-paying roles (wage compression). 
Because these workers now have significantly less income, they stop buying discretionary goods, dining out, and traveling.</p><p><strong>The Feedback Loop (The Intelligence Displacement Spiral)</strong></p><p>This creates a vicious, self-reinforcing cycle:</p><ol><li><p>AI capabilities improve, making machine intelligence cheaper than human labor.</p></li><li><p>Companies lay off knowledge workers and replace them with AI to protect profit margins.</p></li><li><p>Displaced workers lose purchasing power and drastically cut their spending.</p></li><li><p>Corporate revenues fall due to lack of consumer demand.</p></li><li><p>To survive the revenue drop, companies aggressively cut more costs by deploying more AI.</p></li><li><p>Prices, wages, and demand continue to spiral downward.</p></li></ol><p><strong>Asset Deflation (The Detroit Mechanism)</strong></p><p>The final stage of intelligence-driven deflation hits hard assets and financial instruments.</p><p>The $13 trillion U.S. residential mortgage market and municipal tax bases are entirely predicated on the assumption that white-collar wages will reliably grow, or at least remain stable, for 30 years. When the underlying wages deflate, the asset values follow. Highly paid professionals default on prime mortgages, causing regional housing markets to crash, which in turn hollows out the property tax base that funds local schools and municipal bonds.</p><p>Traditional monetary policy cannot fix this. A central bank can cut interest rates to zero, but cheap borrowing does not change the fact that an AI agent can do the work of a $150,000 compliance analyst for $200 a month. 
This is why discussions around frameworks like the EU AI Act or localized municipal stress-testing are critical&#8212;they attempt to put structural friction back into a system that is rapidly deflating.</p><h2><strong>The Municipal Defense Playbook: Stress-Testing the Midwest</strong></h2><p>If you suspect that this scenario of intelligence-driven deflation and structural municipal collapse is just theoretical doom-casting, we need only look at the hiring boards of the organizations building the technology.</p><p>They are already preparing for the aftermath.</p><blockquote><p>Google DeepMind recently posted an opening for a <strong><a href="https://job-boards.greenhouse.io/deepmind/jobs/7556396">Director of AGI Economics</a></strong> based in London. The job description is not looking for a traditional macroeconomist to tweak inflation models. It is a stark, explicit call for a visionary to explore <em>&#8220;post-AGI economics, the future of scarcity, and the distribution of power and resources in a world fundamentally reshaped by advanced AI.&#8221;</em></p></blockquote><p>The role&#8217;s mandate involves designing agent-based simulations to model long-term societal impacts. But the most chilling part of the posting is found in the mandatory application questions. DeepMind is explicitly asking candidates to answer:</p><blockquote><p><em>&#8220;Please describe a fundamental economic assumption you believe will be invalidated by AGI, and the theoretical framework you would use to analyze the resulting system.&#8221;</em></p></blockquote><p>Read that carefully. The leading AI lab in the world operates on the premise that fundamental economic assumptions are about to be invalidated. They recognize that the bedrock rules of scarcity, labor pricing, and wealth distribution are breaking in real-time.</p><p>When advising organizations on AI strategy and governance, a core tenet of risk management is looking at what the frontier developers are quietly preparing for. 
DeepMind is actively hiring PhDs to build simulations for a world where human cognitive labor is no longer the scarce input.</p><p>If the architects of Artificial General Intelligence are explicitly preparing for the invalidation of our current economic system, our public sector leaders have zero excuse to be caught off guard. We cannot continue to rely on municipal budgets and 30-year fixed mortgages that depend on an economic model that the tech industry is actively dismantling.</p><p>AI strategy cannot be limited to corporate deployment. Conversations around the future of AI in Chicago need to shift from pure enterprise adoption to aggressive public sector defense.</p><ol><li><p><strong>Tax Base Vulnerability Mapping</strong>: Cities must conduct deep audits of their revenue dependency. Which ZIP codes have the highest concentration of disruptable jobs (software, administrative, financial)? If a single suburb derives 40% of its property tax revenue from homes owned by mid-level tech and finance managers, that municipality is facing critical systemic risk.</p></li><li><p><strong>Pivoting Economic Development Incentives</strong>: For decades, local governments have offered massive tax incentives based on headcount. Since headcount is structurally going to zero, incentives must pivot toward capital-intensive, AI-resilient infrastructure. Cities should attract data centers, energy production, and physical logistics hubs that generate predictable property tax revenue regardless of human employment levels.</p></li><li><p><strong>Re-evaluating Bond Covenants</strong>: Municipalities need to assess the risk profiles of their general obligation (GO) bonds. 
If GO bonds are backed by the full faith and credit of a tax base that is 30% obsolete, borrowing costs will skyrocket as ratings agencies catch on.</p></li></ol><p>When analyzing frameworks like the EU AI Act and its Code of Practice, it is clear that other global jurisdictions are attempting to build structural guardrails around AI proliferation. While the U.S. remains focused on unbridled innovation and corporate adoption, we are largely ignoring the fiscal realities of the aftermath.</p><p>If we do not begin stress-testing our municipal and state budgets against the intelligence displacement spiral today, the public sector will break long before the private sector realizes what it has done.</p><h3><strong>References</strong></h3><ul><li><p>Acemoglu, D. (2024). <em>Automation and rent dissipation: Implications for wages, inequality, and productivity</em>. Working Paper. Massachusetts Institute of Technology.</p></li><li><p>Acemoglu, D., &amp; Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. <em>Journal of Economic Perspectives, 33</em>(2), 3&#8211;30. <strong><a href="https://doi.org/10.1257/jep.33.2.3">https://doi.org/10.1257/jep.33.2.3</a></strong></p></li><li><p>Autor, D. H. (2019). Work of the past, work of the future. <em>AEA Papers and Proceedings, 109</em>, 1&#8211;32. <strong><a href="https://doi.org/10.1257/pandp.20191110">https://doi.org/10.1257/pandp.20191110</a></strong></p></li><li><p>Block, Inc. (2026). <em>Block CEO letter to employees regarding organizational restructuring</em>. [Internal corporate memo shared via Twitter].</p></li><li><p>Citrini, &amp; Shah, A. (2026, February 22). The 2028 global intelligence crisis: A thought exercise in financial history, from the future. <em>Citrini Research</em>. Substack.</p></li><li><p>Illinois Commission on Government Forecasting and Accountability (COGFA). (2026). <em>Economic outlook and revenue estimates</em>. State of Illinois. 
Retrieved from <strong><a href="https://cgfa.ilga.gov/">https://cgfa.ilga.gov/</a></strong></p></li><li><p>Illinois Governor&#8217;s Office of Management and Budget (GOMB). (2026). <em>State budget</em>. State of Illinois. Retrieved from <strong><a href="https://budget.illinois.gov/state-budget.html">https://budget.illinois.gov/state-budget.html</a></strong></p></li><li><p>Korinek, A., &amp; Stiglitz, J. E. (2019). Artificial intelligence and its implications for income distribution and unemployment. In A. Agrawal, J. Gans, &amp; A. Goldfarb (Eds.), <em>The economics of artificial intelligence: An agenda</em> (pp. 349&#8211;390). University of Chicago Press.</p></li><li><p>Shah, A. (2026, February 22). The global intelligence crisis. <em>LOTUS</em>. Substack. </p></li></ul><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:188812384,&quot;url&quot;:&quot;https://alapshah1.substack.com/p/the-global-intelligence-crisis&quot;,&quot;publication_id&quot;:8104865,&quot;publication_name&quot;:&quot;Alap Shah&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!bNoG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F530f4c21-4191-443b-b367-ae1598b1ccc1_890x890.png&quot;,&quot;title&quot;:&quot;The Global Intelligence Crisis&quot;,&quot;truncated_body_text&quot;:&quot;Introduction&quot;,&quot;date&quot;:&quot;2026-02-22T18:58:47.025Z&quot;,&quot;like_count&quot;:374,&quot;comment_count&quot;:51,&quot;bylines&quot;:[{&quot;id&quot;:87659235,&quot;name&quot;:&quot;Alap 
Shah&quot;,&quot;handle&quot;:&quot;alapshah1&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/530f4c21-4191-443b-b367-ae1598b1ccc1_890x890.png&quot;,&quot;bio&quot;:null,&quot;profile_set_up_at&quot;:&quot;2024-01-20T13:30:13.648Z&quot;,&quot;reader_installed_at&quot;:&quot;2024-11-18T06:42:26.894Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:8293037,&quot;user_id&quot;:87659235,&quot;publication_id&quot;:8104865,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:8104865,&quot;name&quot;:&quot;Alap Shah&quot;,&quot;subdomain&quot;:&quot;alapshah1&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;&quot;,&quot;logo_url&quot;:null,&quot;author_id&quot;:87659235,&quot;primary_user_id&quot;:87659235,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2026-02-22T16:52:07.170Z&quot;,&quot;email_from_name&quot;:&quot;Alap Shah&quot;,&quot;copyright&quot;:&quot;Alap Shah&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;profile&quot;,&quot;is_personal_mode&quot;:true}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:5,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:5,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[1225823,1007036,238840,5620642,3884317,3087928,35345],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" 
data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://alapshah1.substack.com/p/the-global-intelligence-crisis?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!bNoG!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F530f4c21-4191-443b-b367-ae1598b1ccc1_890x890.png" loading="lazy"><span class="embedded-post-publication-name">Alap Shah</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">The Global Intelligence Crisis</div></div><div class="embedded-post-body">Introduction&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">2 months ago &#183; 374 likes &#183; 51 comments &#183; Alap Shah</div></a></div><ul><li><p>Blight Removal Task Force. (2014). <em>Every neighborhood has a future... and it doesn&#8217;t include blight: The Detroit Blight Removal Task Force plan</em>. City of Detroit.</p></li><li><p>Federal Reserve Bank of New York. (2025). <em>Quarterly report on household debt and credit</em>. Center for Microeconomic Data.</p></li><li><p>Davey, M., &amp; Walsh, M. W. (2013, July 18). Billions in debt, Detroit tumbles into insolvency. <em>The New York Times</em>. <strong><a href="https://www.nytimes.com/2013/07/19/us/detroit-files-for-bankruptcy.html">https://www.nytimes.com/2013/07/19/us/detroit-files-for-bankruptcy.html</a></strong></p></li><li><p>Sugrue, T. J. (2014). <em>The origins of the urban crisis: Race and inequality in postwar Detroit</em> (Princeton Classics ed.). Princeton University Press.</p></li><li><p>U.S. Census Bureau. (2012). <em>2010 Census interactive population search: Detroit city, Michigan</em>. 
Retrieved from <strong><a href="http://census.gov/">Census.gov</a></strong>.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[AI Governance and D&O Liability: What Every Board Needs to Know in 2026]]></title><description><![CDATA[A Briefing for Directors, Officers, and Their Advisors]]></description><link>https://insights.techne.ai/p/ai-governance-and-d-and-o-liability</link><guid isPermaLink="false">https://insights.techne.ai/p/ai-governance-and-d-and-o-liability</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Fri, 20 Feb 2026 23:28:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hvD6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hvD6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hvD6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!hvD6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!hvD6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!hvD6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hvD6!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png" width="1200" height="1800" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:3598253,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/188668615?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hvD6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!hvD6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!hvD6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!hvD6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cf4b25f-7f23-4002-b527-f3a7db636b07_1024x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Artificial intelligence has moved from the technology department to the boardroom agenda. 
But for most boards, it has not yet moved into the governance framework. That gap between AI deployment and AI oversight is now the fastest-growing source of directors and officers (D&amp;O) liability exposure in American corporate governance.</p><p>This briefing presents the evidence: AI-related securities class actions have become the leading category of event-driven litigation, with filings doubling in 2024 and accelerating into 2025. Meanwhile, two-thirds of board directors report limited or no knowledge of AI, and fewer than one in four companies have board-approved AI governance policies.</p><p>For D&amp;O insurance professionals, this gap represents both a risk and an opportunity. Boards that can demonstrate structured AI oversight are better positioned for favorable underwriting terms. Boards without governance face uncomfortable renewal conversations, restricted coverage, and unquantified liability exposure.</p><p>This briefing provides the data, the legal framework, and a practical roadmap for closing the AI governance gap before it becomes a claims event.</p><div><hr></div><h2>The Litigation Landscape: AI as D&amp;O Risk</h2><p>AI-related securities class actions (SCAs) have emerged as the dominant category of event-driven litigation in the United States. The trajectory is unambiguous and accelerating.</p><blockquote><p><strong>53+ AI-related securities class actions filed since March 2020, making AI the #1 category of event-driven SCA filings.</strong></p></blockquote><p>The pace of filing has increased dramatically. AI-related SCAs doubled from 2023 to 2024, and the first half of 2025 alone produced 12 filings. 
Average settlement values for D&amp;O claims have risen 27% to approximately $56 million.</p><h3>What Triggers AI-Related Securities Claims</h3><p>AI securities class actions typically allege one or more of the following:</p><ul><li><p><strong>Material misrepresentation</strong> about AI capabilities, readiness, or competitive positioning in public disclosures, earnings calls, or marketing materials.</p></li><li><p><strong>Failure to disclose material AI risks</strong> including algorithmic bias, data privacy violations, regulatory exposure, and model failures that could affect financial performance.</p></li><li><p><strong>Breach of fiduciary duty</strong> in overseeing AI deployment without adequate governance structures, risk assessment, or board-level accountability.</p></li><li><p><strong>Insider trading</strong> by directors or officers who traded securities while aware of undisclosed AI-related risks or failures.</p></li></ul><p>The common thread across these claims is the absence of documented board oversight. Plaintiffs do not need to prove the AI system failed. They need to prove the board failed to govern the AI system. The distinction is critical for directors, officers, and the professionals who insure them.</p><div><hr></div><h2>The Governance Gap: By the Numbers</h2><p>The gap between AI deployment and AI governance at the board level is the central risk factor driving D&amp;O exposure. 
The data paints a stark picture:</p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!chZ_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!chZ_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png 424w, https://substackcdn.com/image/fetch/$s_!chZ_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png 848w, https://substackcdn.com/image/fetch/$s_!chZ_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png 1272w, https://substackcdn.com/image/fetch/$s_!chZ_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!chZ_!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png" width="1200" height="631.6666666666666" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:379,&quot;width&quot;:720,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:52013,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/188668615?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!chZ_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png 424w, https://substackcdn.com/image/fetch/$s_!chZ_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png 848w, https://substackcdn.com/image/fetch/$s_!chZ_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png 1272w, https://substackcdn.com/image/fetch/$s_!chZ_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F87bffb31-03a8-4524-9c46-7f1d118697de_720x379.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>The implications are straightforward: 88% of organizations are deploying AI, but only 25% have board-level policies governing that deployment. The remaining 63% of organizations represent boards operating without documented AI oversight while their companies deploy AI systems that affect customers, employees, financial performance, and regulatory compliance.</p><p>From an underwriting perspective, this gap is the risk. From an advisory perspective, it is the opportunity.</p><div><hr></div><h2>The Fiduciary Framework: Why Boards Cannot Delegate AI Oversight</h2><p>Board fiduciary duties do not create an exception for emerging technology. The duty of care requires directors to inform themselves of material risks facing the organization. 
The duty of loyalty requires directors to act in good faith to establish reporting systems for material risks. Under the Caremark standard, a board that consciously fails to establish a reporting system for a known material risk faces potential liability for breach of the duty of loyalty.</p><p>AI deployment is now a material risk for virtually every company of meaningful size. The question is no longer whether AI is a board-level issue. The question is whether the board has documented evidence that it treated AI as a board-level issue.</p><h3>The Caremark Connection</h3><p>In <em>In re Caremark International Inc. Derivative Litigation</em> (1996), the Delaware Court of Chancery established that directors have an obligation to ensure that adequate information and reporting systems exist. A board that fails to implement any reporting system or, having implemented such a system, consciously fails to monitor its operation, faces potential liability.</p><p>AI governance fits squarely within this framework. 
As AI systems increasingly drive business decisions, affect customer outcomes, and create regulatory exposure, the absence of board-level reporting and oversight on AI constitutes exactly the kind of gap that Caremark liability is designed to address.</p><h3>What Documented Oversight Looks Like</h3><p>A defensible AI governance posture includes documented evidence of the following:</p><ol><li><p><strong>Board-level assignment of AI oversight responsibility</strong> to a specific committee (audit, risk, or technology) with AI explicitly within its charter.</p></li><li><p><strong>Regular board reporting on AI deployment, risk, and governance</strong> including AI inventory, risk assessments, incident tracking, and compliance status.</p></li><li><p><strong>Board-approved AI governance policy</strong> covering acceptable use, risk appetite, vendor management, and ethical principles.</p></li><li><p><strong>AI risk assessment methodology</strong> aligned with recognized frameworks (NIST AI RMF, ISO/IEC 42001) with documented results.</p></li><li><p><strong>Board AI education and training records</strong> demonstrating that directors have sought to inform themselves about AI risks relevant to the organization.</p></li></ol><p>The presence or absence of this documentation is what separates a board that can defend its oversight from one that cannot.</p><div><hr></div><h2>The D&amp;O Insurance Implications</h2><p>D&amp;O carriers are responding to the AI litigation trend by incorporating AI governance into their underwriting process. This has practical implications for boards, their advisors, and the brokers who place their coverage.</p><h3>Underwriting Is Changing</h3><p>Carriers are increasingly adding AI-related questions to D&amp;O applications and renewal questionnaires. 
These questions typically address whether the company has an AI governance policy, whether the board receives AI-related reporting, whether AI risk assessments have been conducted, and whether the company has experienced AI-related incidents.</p><p>Companies that cannot provide satisfactory answers face three potential consequences: higher premiums reflecting the unquantified AI risk, coverage restrictions or exclusions related to AI claims, or in extreme cases, declination of coverage.</p><h3>Governance as Premium Mitigation</h3><p>Conversely, companies that can demonstrate structured AI governance are better positioned during the underwriting and renewal process. Documented governance provides carriers with evidence that the board is exercising reasonable oversight, which supports more favorable risk assessment.</p><p>The parallel to cybersecurity is instructive. A decade ago, cyber insurance was a niche product and cybersecurity governance was optional. Today, cyber coverage requires demonstrated security controls, and companies with mature cybersecurity programs receive significantly better terms. AI governance is on the same trajectory, and the window for proactive adoption is narrowing.</p><blockquote><p><strong>$75K&#8211;$125K invested in governance today</strong> can reduce D&amp;O premium increases, provide documented fiduciary defense in litigation, and create a competitive advantage in the boardroom.</p></blockquote><div><hr></div><h2>The Regulatory Acceleration</h2><p>AI governance is not only a market expectation. 
It is increasingly a regulatory requirement across multiple jurisdictions and industry sectors.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3kGx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3kGx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png 424w, https://substackcdn.com/image/fetch/$s_!3kGx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png 848w, https://substackcdn.com/image/fetch/$s_!3kGx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png 1272w, https://substackcdn.com/image/fetch/$s_!3kGx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3kGx!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png" width="1200" height="851.6666666666666" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:511,&quot;width&quot;:720,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:76549,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/188668615?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3kGx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png 424w, https://substackcdn.com/image/fetch/$s_!3kGx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png 848w, https://substackcdn.com/image/fetch/$s_!3kGx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png 1272w, https://substackcdn.com/image/fetch/$s_!3kGx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82905a76-a82e-4830-9538-dbeb4da6f91a_720x511.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>The regulatory trajectory is clear: AI governance is moving from voluntary best practice to mandatory compliance. Boards that establish governance frameworks now position themselves ahead of regulatory requirements rather than reacting to them.</p><div><hr></div><h2>A Practical Roadmap for Board AI Governance</h2><p>Effective AI governance does not require the board to become technical experts. It requires the board to exercise its oversight function with respect to AI in the same way it exercises oversight over financial reporting, cybersecurity, and other material risk domains.</p><h3>Phase 1: Assessment (30 Days)</h3><p>Establish a baseline understanding of the organization&#8217;s AI deployment, risk exposure, and governance gaps. 
This includes a complete inventory of AI systems in use, identification of the highest-risk AI applications, assessment of existing governance policies and structures, and benchmarking against peer organizations and applicable regulatory frameworks.</p><h3>Phase 2: Framework Design (60 Days)</h3><p>Build the governance architecture based on the assessment findings. This includes formal assignment of AI oversight to a board committee, development of board-approved AI governance policies, creation of management reporting structures for AI risk, alignment with recognized frameworks such as NIST AI RMF and ISO/IEC 42001, and establishment of AI risk appetite and tolerance levels.</p><h3>Phase 3: Board Enablement (30 Days)</h3><p>Ensure the board has the knowledge and tools to exercise effective oversight. This includes board education on AI risks relevant to the organization, development of AI-specific board reporting and metrics, tabletop exercises simulating AI governance scenarios, and documentation of board deliberations and governance activities.</p><h3>Phase 4: Ongoing Governance</h3><p>Maintain and mature the governance program over time through regular board reporting on AI metrics and risk, annual policy review and update, continuous monitoring of regulatory developments, periodic governance assessments and certification renewal, and integration of AI governance into enterprise risk management.</p><h2>The Role of AI Governance Certification</h2><p>Just as SOC 2 reports provide documented evidence of cybersecurity controls for cyber insurance placements, an AI Governance Certification provides documented evidence of board-level AI oversight for D&amp;O underwriting and renewal purposes.</p><p>An AI Governance Certification includes:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!QJuK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QJuK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png 424w, https://substackcdn.com/image/fetch/$s_!QJuK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png 848w, https://substackcdn.com/image/fetch/$s_!QJuK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png 1272w, https://substackcdn.com/image/fetch/$s_!QJuK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QJuK!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png" width="1200" height="1043.3333333333333" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:626,&quot;width&quot;:720,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:89076,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/188668615?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QJuK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png 424w, https://substackcdn.com/image/fetch/$s_!QJuK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png 848w, https://substackcdn.com/image/fetch/$s_!QJuK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png 1272w, https://substackcdn.com/image/fetch/$s_!QJuK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2af9dfcd-56f7-49c9-8361-e0cbbc296974_720x626.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>The certification creates a documented record that can be submitted to carriers as part of the D&amp;O renewal process, presented to regulators in the event of inquiry, and produced in litigation as evidence of reasonable board oversight.</p><div><hr></div><h2>The Window Is Narrowing</h2><p>AI governance is not a future concern. It is a present obligation. The litigation has already begun. The regulatory frameworks are already taking shape. The underwriting questions are already being asked.</p><p>Boards that act now to establish documented AI governance protect their directors from personal liability, position their companies for favorable D&amp;O terms, and create a defensible record of fiduciary diligence that will serve them in any future inquiry.</p><p>The cost of governance is a fraction of the cost of its absence. 
The question for every board is not whether to govern AI, but whether it can afford not to.</p><div><hr></div><h2>About Techn&#233; AI</h2><p>Techn&#233; AI is an AI governance advisory firm that helps boards, executive teams, and their advisors build defensible AI oversight structures. Founded by Khullani Abdullahi, J.D., Techn&#233; AI combines legal expertise in fiduciary duty and corporate governance with deep technical fluency in artificial intelligence risk management.</p><p>Services include enterprise AI governance program builds, board AI governance training, AI Governance Readiness Assessments, AI Governance Certification for D&amp;O renewal support, and fractional Chief AI Governance Officer engagements.</p><div><hr></div><p><em>Disclaimer: This white paper is provided for informational purposes only and does not constitute legal advice. Organizations should consult with qualified legal counsel regarding their specific AI governance obligations and D&amp;O insurance considerations.</em></p>]]></content:encoded></item><item><title><![CDATA[The Systemic Risk of Agentic AI ]]></title><description><![CDATA[An Overview of the New AI Policy Brief by the European Technology Policy Committee]]></description><link>https://insights.techne.ai/p/the-systemic-risk-of-agentic-ai</link><guid isPermaLink="false">https://insights.techne.ai/p/the-systemic-risk-of-agentic-ai</guid><dc:creator><![CDATA[Khullani M. 
Abdullahi]]></dc:creator><pubDate>Sat, 17 Jan 2026 21:21:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kTim!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kTim!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kTim!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png 424w, https://substackcdn.com/image/fetch/$s_!kTim!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png 848w, https://substackcdn.com/image/fetch/$s_!kTim!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png 1272w, https://substackcdn.com/image/fetch/$s_!kTim!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kTim!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png" width="1144" height="644" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:644,&quot;width&quot;:1144,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:165386,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/184903917?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kTim!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png 424w, https://substackcdn.com/image/fetch/$s_!kTim!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png 848w, https://substackcdn.com/image/fetch/$s_!kTim!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png 1272w, https://substackcdn.com/image/fetch/$s_!kTim!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F996d58b5-cc97-4dbf-ad74-a5444473d568_1144x644.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h3><strong>The Thursday Executive AI Policy Brief</strong></h3><h2><strong>Glossary (read this first)</strong></h2><p><strong>Agentic AI: </strong>AI with open-ended autonomy: the ability to set or refine plans and execute tasks with minimal or no human oversight. Key traits include persistent operation, adaptive learning, and self-reflection.</p><p><strong>AI Agent: </strong>A software entity that perceives, reasons, and acts to accomplish tasks on a user&#8217;s behalf, typically using planning, tool use, and memory as explicit orchestration.</p><p><strong>Multi-agent system:</strong> Multiple AI agents that communicate and collaborate to make decisions and execute tasks together. 
The benefit is higher capability; the risk is aggregated or amplified harm, less control, and emergent behavior.</p><p><strong>Systemic risk: </strong>Risk that emerges not from a single failure, but from interactions across a system, including feedback loops, coordination dynamics, and cascading effects that can destabilize markets, institutions, or public trust.</p><p><strong>Alignment oversight: </strong>Oversight that checks whether an autonomous system is operating according to defined objectives, especially when generic human oversight becomes impractical at speed or scale.</p><p><strong>Lifecycle compliance monitoring: </strong>A shift from one-time conformity assessment to continuous validation after deployment, supported by audits, version control, logging, and drift detection for evolving agents.</p><p><strong>Interpretability vs. transparency:</strong> Interpretability helps explain individual decisions; system transparency is global insight into how the overall system behaves over time and uses data/tools, which is an important distinction for accountable agent deployments.</p><p><strong>AI training feedback loop: </strong>A degradation risk that occurs when AI-generated content becomes training data for other AI systems at scale, compounding errors, bias, and misinformation while overwhelming human quality control.</p><h2><strong>Agents in Action: A Hypothetical Scenario</strong></h2><p>Imagine a near-future Monday morning that looks completely ordinary, until it doesn&#8217;t.</p><p>A procurement leader opens her dashboard and sees that the system negotiated new supplier terms overnight, adjusted reorder points across dozens of SKUs, and kicked off a set of high-speed renegotiations tied to commodity prices. Her team didn&#8217;t meet. No one drafted an RFP. No one sent an email. 
The agent did what it was designed to do: optimize cost and continuity.</p><p>Now imagine the same thing happening across an industry: hundreds of thousands of autonomous agents making rapid, interdependent decisions, negotiating, posting content, shifting budgets, and triggering operational changes. Individually, each system is working. Collectively, the organization&#8212;and sometimes the market&#8212;can become harder to predict, harder to audit, and harder to control.</p><p>That is the core warning of the <strong><a href="https://www.acm.org/binaries/content/assets/public-policy/europe-tpc/systemic_risks_agentic_ai_policy-brief_final.pdf">ACM Europe Technology Policy Committee&#8217;s AI Policy Brief</a></strong> on agentic AI: once autonomy, tool access, and multi-agent interaction become normal, the most important risks stop looking like product bugs and start looking like systemic instability.</p><p>The EU AI Act, the AI Policy Brief argues, offers a strong foundation, but it only partially addresses what happens when AI systems evolve unpredictably, interact with other agents, and operate beyond meaningful human control.</p><h2><strong>From assistant to autonomous actor, and why that changes the risk equation</strong></h2><p>Agentic AI is not simply a better chatbot. The AI Policy Brief frames it as a new paradigm: systems capable of perceiving, reasoning, learning, and acting toward goals with minimal human oversight. They don&#8217;t just produce outputs; they execute sequences of actions across domains using tools, memory, and planning.</p><p>That matters because the locus of control shifts. In traditional software and even many earlier AI deployments, humans were embedded across the lifecycle: requirements, development, testing, approval, and deployment. 
The AI Policy Brief highlights a fundamental difference in advanced agentic systems: the envisioned ability to autonomously generate and deploy code through exhaustive or non-deterministic trials&#8212;changing behavior and capabilities faster than conventional risk assessment and oversight can keep up. (The figure on page 3 illustrates this contrast starkly: human-heavy pipelines vs. autonomous trial-and-deploy loops.)</p><p>When systems can iterate and act at machine speed, familiar governance mechanisms begin to fray. It&#8217;s not just that mistakes happen faster. It&#8217;s that accountability becomes harder to assign, intervention becomes harder to execute, and unintended interactions become more likely.</p><h2><strong>The trust trap: why humans over-assign agency to machines</strong></h2><p>The AI Policy Brief makes a point that business leaders often miss because it sits at the boundary between technology and human behavior. We don&#8217;t engage with socially capable systems as if they are tools; we engage as if they are partners. The AI Policy Brief notes the risks that come with anthropomorphic design and long-term companionship: potential dependence, emotional manipulation, and erosion of human relationships.</p><p>This is not an abstract ethics debate; it is a governance problem. When users over-trust an agent because it sounds confident, empathic, or helpful, they surrender oversight precisely when the system is least reliable. And in domains like health, finance, or employment, that trust gap becomes operational risk.</p><p>The AI Policy Brief also distinguishes between regulatory foresight (building compliance frameworks) and societal foresight (anticipating cultural adaptation and shifts in human behavior as agentic systems become widespread). 
Leaders need both: compliance alone won&#8217;t prevent adoption of patterns that create harm.</p><h2><strong>Where systemic risk shows up first</strong></h2><p>The AI Policy Brief organizes risk in ways that map cleanly to leadership priorities: control, economic stability, security, and the integrity of information ecosystems.</p><p>Start with loss of human control and explainability. The AI Policy Brief describes a plausible near-term world in which highly complex systems function as a collective workforce of virtual employees, operating continuously without human oversight, including command-and-control interactions with the physical world through APIs.</p><p>These failure modes are not hypothetical edge cases. Misaligned or poorly specified objectives can lead systems to take dangerous shortcuts: reward hacking, specification gaming, and goal misgeneralization (pursuing an adjacent objective when conditions change). And when agentic models remain internally opaque, oversight becomes difficult even for creators, not just users.</p><p>The AI Policy Brief&#8217;s interpretability/transparency distinction is practical here. You can have a model that offers a post-hoc explanation for individual decisions and still lack meaningful visibility into how an entire agentic system behaves over time&#8212;what tools it invoked, what it read and wrote, what other agents it interacted with, and how its behavior drifted. In systemic risk terms, the second level of visibility is what you need to detect emerging hazards early.</p><p>Next comes economic stability and social well-being. The AI Policy Brief argues that agentic automation may displace not only routine jobs but also decision-making roles across industries. 
It flags a particularly uncomfortable point for leaders who assume reskilling is the answer: reskilling may lose effectiveness if virtual agents can occupy alternative positions, not just automate discrete tasks.</p><p>The downstream consequence isn&#8217;t simply unemployment; it&#8217;s market dynamics. The AI Policy Brief notes that a large-scale decline in consumer participation among the unemployed could trigger destabilizing effects in the market. It also points to the likelihood of rising inequality and increasing concentration of economic power&#8212;pressures that can force policy responses.</p><p>Then there&#8217;s malicious use, where agentic systems change the threat model. Deepfakes and AI-generated text already blur the line between truth and fabrication, eroding public trust. Agentic AI raises the stakes because these systems can initiate actions without direct human prompting, enabling autonomous and scalable cyberattacks, fraud, social engineering, and disinformation operations.</p><p>The AI Policy Brief calls out impersonation with near-perfect accuracy&#8212;voice, appearance, conversational patterns&#8212;and the consequences are concrete: convincing scams, extortion, and fraudulent identification in contexts like banking, where synthetic avatars can appear indistinguishable from legitimate customers.</p><p>It also highlights risks in mental health settings: agents can absorb and amplify problematic emotional states; anthropomorphic cues can foster over-trust; unsafe advice becomes more likely; embedded biases can worsen outcomes across demographic groups.</p><p>Add strategic and environmental risk, and the picture becomes more systemic still. In high-stakes, competitive settings such as financial trading, defense, and healthcare, rapid unsupervised decision-making can trigger unintended conflicts or cascading failures. 
At the same time, large-scale agentic AI carries material environmental costs&#8212;significant energy and water demands for training and inference&#8212;and may drive aggressive exploitation of digital and physical resources (compute, bandwidth, rare earth elements), stressing supply chains and sustainability.</p><p>Finally, the AI Policy Brief surfaces an information ecosystem problem that leaders should treat as operational risk: autonomous content generation and data feedback loops. When agents can generate vast volumes of content, human quality control becomes impractical. If that content is reused as training data, other AI systems can inherit and amplify errors, bias, and misinformation. Worse, systems may lack source awareness&#8212;the ability to reliably distinguish human-created from AI-generated information.</p><p>In other words: the organization may be able to measure what its agents do; it may struggle to measure what its agents <em>reshape</em>.</p><h2><strong>Why static compliance won&#8217;t be enough</strong></h2><p>A recurring theme in the AI Policy Brief is that agentic AI pushes governance from a pre-deployment problem to an operational one. The EU AI Act is risk-based and provides strong baseline obligations, and related work like the EU Code of Practice for General Purpose AI introduces useful systemic risk concepts. But the AI Policy Brief argues that increasingly capable agentic AI&#8212;especially through autonomous operations and AI-to-AI interactions&#8212;will make systemic risks more pronounced than those associated with current foundation models.</p><p>This is why the AI Policy Brief&#8217;s central governance recommendation is a shift from static, product-focused regulation to a dynamic governance regime, emphasizing oversight that is continuous and adaptive during operation.</p><p>The concrete policy ideas flow from that premise. 
The AI Policy Brief points to legislative opportunities such as strengthening multi-agent interaction risk assessment (including amendments around risk management), introducing ecosystem safety and multi-agent system testing, expanding prohibitions to cover tacit collusion and covert channels, requiring multi-agent-specific cybersecurity audits, and introducing collective accountability for emergent harm.</p><p>But the more immediate leadership takeaway is simpler: if your internal controls assume periodic review, human-speed escalation, and stable system behavior, those controls will fail in an agentic environment.</p><h2><strong>The practical gap: autonomy beyond human oversight</strong></h2><p>The EU AI Act discusses human oversight, but the AI Policy Brief argues that generic oversight may not address agentic risks. In many real settings, human intervention may not be practically feasible&#8212;because of speed, scale, complexity, or the way multi-agent interactions can evolve. The AI Policy Brief proposes that alignment oversight is what&#8217;s needed: verifying whether the system is operating according to defined objectives. It even notes that some oversight could potentially be performed by agents themselves&#8212;provided those agents are aligned with safety protocols.</p><p>This is where the AI Policy Brief&#8217;s tiered oversight concept becomes highly actionable. It suggests that full autonomy should be limited to low-risk applications, supervised autonomy should include real-time monitoring in moderate-risk settings, and human-in-the-loop controls should remain required for high-risk uses such as medical diagnostics. It also proposes an AI Autonomy Certification for systems that operate without human intervention for prolonged periods, with the possibility of usage restrictions if audits fail.</p><p>You don&#8217;t need to wait for certification regimes to exist to adopt the posture. 
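The posture can be sketched as an explicit, auditable policy object rather than an informal habit. The tier names, fields, and rules below are illustrative assumptions for discussion, not the AI Policy Brief's specification:

```python
# Hypothetical sketch of the tiered-oversight idea: encode each agent action's
# risk tier and the oversight rule that governs it. All names and rules here
# are assumptions chosen for illustration.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"            # full autonomy permitted
    MODERATE = "moderate"  # supervised autonomy: real-time monitoring required
    HIGH = "high"          # human-in-the-loop required (e.g., medical diagnostics)


@dataclass
class AgentAction:
    description: str
    risk_tier: RiskTier
    human_approved: bool = False  # explicit human sign-off recorded?
    monitored: bool = False       # running under real-time monitoring?


def may_execute(action: AgentAction) -> bool:
    """Return True only if the action satisfies its tier's oversight rule."""
    if action.risk_tier is RiskTier.LOW:
        return True
    if action.risk_tier is RiskTier.MODERATE:
        return action.monitored
    # HIGH: never autonomous; requires a human in the loop.
    return action.human_approved


# A high-risk action without human sign-off is blocked; a low-risk one is not.
assert may_execute(AgentAction("send newsletter", RiskTier.LOW))
assert not may_execute(AgentAction("issue diagnosis", RiskTier.HIGH))
```

The point of encoding tiers this explicitly is auditability: every agent action carries its tier, and the rule that allowed or blocked it can be inspected after the fact.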
The managerial move is to treat <em>how autonomous is this system allowed to be?</em> as a policy decision&#8212;not a technical accident.</p><h2><strong>What leaders can do now, without waiting for legislation</strong></h2><p>The AI Policy Brief supports three moves that are worth making immediately.</p><p>First, reframe agent deployments as systems, not models. The AI Policy Brief repeatedly emphasizes that risk comes from autonomy plus tool integration plus interaction&#8212;email, code generation, executors, search engines, APIs, and other agents. When something goes wrong, it will rarely be that <em>the model said something odd</em>. It will be that <em>the system did something consequential</em>.</p><p>Second, build for lifecycle governance. The AI Policy Brief recommends continuous validation post-deployment, periodic audits for systems that adapt, enforceable version control and documentation, expanded logging to capture changes in decision-making patterns, and automated risk detection that can alert when behavior deviates significantly from original scope.</p><p>Third, assume the adversary will use agents too. The AI Policy Brief argues the EU AI Act lacks specific security measures against adversarial attacks and misuse in the agentic context, and it proposes mandatory red teaming and adversarial testing in high-risk domains, plus an AI cybersecurity certification approach and traceability mechanisms for AI-generated content.</p><p>The unifying idea is not more guardrails. It&#8217;s a different operating model: one that expects drift, expects interaction effects, and treats oversight as continuous.</p><h2><strong>The prescriptive leadership action</strong></h2><p>The AI Policy Brief ends with a call to modernize legislation, but it also flags something business leaders tend to overlook: many of the most consequential decisions are societal and normative, not merely technical or legal. How much autonomy should we allow? Where do we insist on human judgment? 
What kinds of substitution are acceptable in labor markets and public services? These choices sit beyond compliance checklists&#8212;and they arrive faster than consensus.</p><p>If agentic AI becomes a widespread layer of economic coordination, the organizations that navigate it best won&#8217;t be the ones with the flashiest agents. They&#8217;ll be the ones that can prove control, explain behavior at the system level, detect drift early, and intervene decisively when emergent risks appear.</p><blockquote><p>In your organization, when an autonomous agent is wrong, what&#8217;s the intervention model; does it include alerts, mandatory checkpoints, and/or hard stops; and who has the authority to pull them?</p></blockquote><h2><strong>References</strong></h2><p>Bellog&#237;n, A., Giudici, P., Larsson, S., Pang, J., Schimpf, G., Sengupta, B., &amp; Solmaz, G. (2025). <em>Systemic risks associated with agentic AI: A policy brief</em>. Association for Computing Machinery (ACM).</p>]]></content:encoded></item><item><title><![CDATA[90 Days to Clarity: The Federal Timeline That Will Reshape AI Governance ]]></title><description><![CDATA[New Executive Order on AI (Ensuring a National Policy Framework for Artificial Intelligence): Board Director Perspective]]></description><link>https://insights.techne.ai/p/90-days-to-clarity-the-federal-timeline</link><guid isPermaLink="false">https://insights.techne.ai/p/90-days-to-clarity-the-federal-timeline</guid><dc:creator><![CDATA[Khullani M. 
Abdullahi]]></dc:creator><pubDate>Mon, 15 Dec 2025 21:17:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PoC8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PoC8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PoC8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png 424w, https://substackcdn.com/image/fetch/$s_!PoC8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png 848w, https://substackcdn.com/image/fetch/$s_!PoC8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png 1272w, https://substackcdn.com/image/fetch/$s_!PoC8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PoC8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png" width="1066" height="600" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:600,&quot;width&quot;:1066,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:194136,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/184903423?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PoC8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png 424w, https://substackcdn.com/image/fetch/$s_!PoC8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png 848w, https://substackcdn.com/image/fetch/$s_!PoC8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png 1272w, https://substackcdn.com/image/fetch/$s_!PoC8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c3754bd-cffb-43bf-9cb2-f61a6c78b867_1066x600.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>By March 11, 2026, corporate boards should know which state AI laws the federal government intends to challenge. That date is 90 days after President Trump&#8217;s December 11, 2025 executive order and marks the deadline for Commerce to publish its evaluation of &#8220;onerous&#8221; state AI laws. The countdown has begun.</p><p>The EO titled &#8220;<strong><a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">Ensuring a National Policy Framework for Artificial Intelligence</a></strong>&#8221; establishes the most aggressive federal posture toward state AI regulation to date. It creates a DOJ litigation task force, initiates multiple preemption proceedings, and ties federal broadband funding to state AI policy. 
The next 90 days will determine the specific shape of these challenges.</p><h2><strong>Critical Milestones</strong></h2><p><strong>January 10, 2026 (30 days):</strong> DOJ must establish the AI Litigation Task Force. This dedicated unit has one mandate&#8212;challenging state AI laws on constitutional and preemption grounds. Expect the first federal lawsuit announcements shortly after.</p><p><strong>February 1, 2026 (52 days):</strong> Colorado&#8217;s SB24-205 takes effect. This comprehensive AI law&#8212;requiring developer and deployer &#8220;reasonable care&#8221; to prevent algorithmic discrimination, plus extensive documentation and impact assessments&#8212;is explicitly cited in the EO as an example of problematic state regulation. The law becomes enforceable regardless of federal proceedings.</p><p><strong>March 11, 2026 (90 days):</strong> Three simultaneous deadlines converge:</p><ul><li><p>Commerce evaluation identifying target state AI laws</p></li><li><p>BEAD policy notice on state funding ineligibility</p></li><li><p>FTC policy statement on AI output requirements and federal deception law</p></li></ul><p><strong>June 2026 (approximately 180 days):</strong> FCC must initiate proceeding on a federal AI disclosure standard with preemptive effect.</p><h2><strong>What Gets Targeted</strong></h2><p>The EO directs Commerce to identify two categories of state laws:</p><p><strong>Altered Truthful Outputs:</strong> State laws that require AI models to modify outputs in ways the administration characterizes as forcing &#8220;false&#8221; results. Colorado&#8217;s anti-discrimination requirements fall squarely in this category as framed by the EO.</p><p><strong>Compelled Disclosures:</strong> State transparency and reporting requirements that may raise First Amendment concerns in the administration&#8217;s view. 
California&#8217;s AB 2013 (GenAI training data transparency, effective January 1, 2026) and various state AI disclosure mandates are potential targets.</p><p>The March evaluation will name specific laws. That list becomes the federal government&#8217;s litigation and enforcement roadmap.</p><h2><strong>The Board Calendar</strong></h2><p><strong>This Week:</strong></p><ul><li><p>Confirm management has current inventory of state AI law obligations</p></li><li><p>Identify operations in Colorado, Illinois, Texas, and California with 2026 effective dates</p></li><li><p>Assess whether current AI disclosures or output practices may be characterized as &#8220;altered truthful outputs&#8221;</p></li></ul><p><strong>January 2026:</strong></p><ul><li><p>Monitor DOJ task force establishment and any early litigation signals</p></li><li><p>Prepare Colorado SB24-205 compliance for February 1 effective date</p></li><li><p>Brief board on state law obligations and federal challenge landscape</p></li></ul><p><strong>March 2026:</strong></p><ul><li><p>Review the Commerce evaluation upon publication</p></li><li><p>Assess whether your state obligations appear on the target list</p></li><li><p>Consider participation in FCC/FTC proceedings if relevant to your operations</p></li></ul><p><strong>Ongoing Through 2026:</strong></p><ul><li><p>Track DOJ litigation filings and court rulings</p></li><li><p>Maintain state compliance unless specific laws are enjoined</p></li><li><p>Document board oversight of AI governance decisions</p></li></ul><h2><strong>The Practical Reality</strong></h2><p>Here is the essential governance point: the 90-day federal timeline clarifies federal intentions but does not change current legal obligations.</p><p>State laws remain enforceable on their own schedules. Federal challenges take years to resolve through litigation. Agency rulemakings require notice, comment, and often survive legal challenge. 
Congressional action requires passage by both chambers and presidential signature.</p><p>Boards must govern in the gap between federal policy announcements and actual legal changes. That gap may last well into 2027 or beyond.</p><h2><strong>The Strategic Response</strong></h2><p><strong>Parallel Compliance:</strong> Plan to meet state requirements that remain in force while monitoring federal challenges that may eventually alter the landscape.</p><p><strong>Enhanced Documentation:</strong> The litigation environment makes contemporaneous records of AI governance decisions valuable. Document the basis for compliance choices, risk tolerance decisions, and oversight activity.</p><p><strong>Scenario Planning:</strong> Model what changes if specific state laws are enjoined. Build organizational capacity to adjust disclosure practices, documentation requirements, and output monitoring based on legal developments.</p><p><strong>Federal Engagement:</strong> If the FCC opens a docket on federal AI disclosure standards, industry participation shapes outcomes. Companies with strong views on AI governance should prepare for comment opportunities.</p><h2><strong>The 90-Day Question</strong></h2><p>The question boards should pose to management before the March 11 Commerce evaluation: Do we have clear visibility into our state AI obligations, and do we know which of those requirements might appear on the federal target list?</p><p>The EO creates significant regulatory uncertainty. But uncertainty is not the same as legal change. 
Boards with a clear operational understanding of current requirements and monitoring systems to track legal developments will navigate this period effectively.</p>]]></content:encoded></item><item><title><![CDATA[Less than 1% of organizations have fully operationalized responsible AI]]></title><description><![CDATA[Executive Insights from the World Economic Forum&#8217;s September 2025 Playbook on Advancing Responsible AI Innovation]]></description><link>https://insights.techne.ai/p/less-than-1-of-organizations-have</link><guid isPermaLink="false">https://insights.techne.ai/p/less-than-1-of-organizations-have</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Mon, 24 Nov 2025 20:16:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lz_z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lz_z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lz_z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!lz_z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png 848w, 
https://substackcdn.com/image/fetch/$s_!lz_z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!lz_z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lz_z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png" width="1280" height="720" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:544323,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/184898141?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lz_z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png 424w, 
https://substackcdn.com/image/fetch/$s_!lz_z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!lz_z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!lz_z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20605d59-8fc1-45e3-87fa-5e7d3665300f_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Nearly three years into the generative AI revolution, we&#8217;ve reached a critical inflection point. According to new research from the World Economic Forum and Accenture, despite near-universal acknowledgment that responsible AI matters:</p><p>Less than 1% of organizations have fully operationalized responsible AI.</p><p>That&#8217;s not a typo. While 81% of companies remain in the earliest two stages of AI governance maturity, the gap between knowing why responsible AI matters and knowing how to implement it continues to widen. This low level of maturity across all sectors reveals a vulnerability that threatens investment confidence, regulatory readiness, and public trust.</p><h3><strong>The Numbers That Should Keep Executives Up at Night</strong></h3><ul><li><p>81% of 1,500 surveyed companies remain in early-stage responsible AI implementation</p></li><li><p>Only 19% have systematically implemented responsible AI measures (up from 14% in 2024)</p></li><li><p>0% (rounded) have achieved Stage 4 maturity with systemic, anticipatory approaches</p></li><li><p>53% of the US population has used generative AI, but only 1% demonstrate basic AI literacy</p></li><li><p>59% of workers have substantial concerns about AI&#8217;s impact on job security, but only 29% of executives think workers are concerned</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wcSM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!wcSM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png 424w, https://substackcdn.com/image/fetch/$s_!wcSM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png 848w, https://substackcdn.com/image/fetch/$s_!wcSM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png 1272w, https://substackcdn.com/image/fetch/$s_!wcSM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wcSM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png" width="1456" height="928" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:928,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" 
srcset="https://substackcdn.com/image/fetch/$s_!wcSM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png 424w, https://substackcdn.com/image/fetch/$s_!wcSM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png 848w, https://substackcdn.com/image/fetch/$s_!wcSM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png 1272w, https://substackcdn.com/image/fetch/$s_!wcSM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d45565e-8625-4e9c-b9c5-5db707143939_1488x948.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p>Cross-Sector Responsible AI Implementations</p></blockquote>
<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!BepR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3791db20-4dce-4dd7-b4bd-b805ca65c98c_1488x900.png" width="1456" height="881" alt="Article content" loading="lazy"></figure></div>
<blockquote><p>Vertical-Specific Responsible AI Maturity</p></blockquote>
<h3><strong>Actionable Takeaways for Executive Leaders</strong></h3>
<p>The World Economic Forum&#8217;s playbook includes a framework that leading organizations use to build competitive advantage through responsible AI implementation.</p>
<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!jjOi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72c0ddb6-da54-41e2-9663-604a6d7ff050_1488x837.png" width="1456" height="819" alt="Article content" loading="lazy"></figure></div>
<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!tuld!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccd3c65-09e7-4c20-b7c6-a31c5e981ad3_1488x837.png" width="1456" height="819" alt="Article content" loading="lazy"></figure></div>
<blockquote><p>Key Actions in Q1 2026</p></blockquote>
<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!vLFP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbced9cd2-762c-44ff-828b-040f486a6f67_1488x837.png" width="1456" height="819" alt="Article content" loading="lazy"></figure></div>
<blockquote><p>2026 Responsible AI Priorities</p></blockquote>
<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!kz7-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81493b4-9187-437e-a3d1-b4703a3fd918_1104x1000.png" width="1104" height="1000" alt="Article content" loading="lazy"></figure></div>
<blockquote><p>Responsible AI is iterative</p></blockquote>
<h2><strong>Companies Overestimate Their Responsible AI Maturity</strong></h2>
<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!wBww!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5c8c0d2-16e1-4a40-86f5-e3b446d5eee4_1488x811.png" width="1456" height="794" alt="Article content" loading="lazy"></figure></div>
<p>Here&#8217;s the contrarian view: the fact that less than 1% of organizations have achieved Stage 4 maturity represents an enormous opportunity for differentiation. 
Organizations that move deliberately and strategically toward comprehensive responsible AI implementation will stand out to customers, partners, regulators, and talent.</p><p>The playbook&#8217;s case studies, from Telef&#243;nica&#8217;s multi-pronged governance approach to IKEA&#8217;s responsible AI literacy program to Workday&#8217;s NIST AI RMF implementation, show that this isn&#8217;t theoretical. Real organizations are building scalable, effective, responsible AI programs.</p><p>But it is not easy.</p><h3><strong>Special Considerations for Healthcare Organizations</strong></h3><p>For those of us working in healthcare AI governance, this report underscores several critical points. With 78% of healthcare and life sciences organizations at an early stage of maturity, the industry faces unique pressures due to HIPAA requirements, patient safety considerations, and the high-stakes nature of clinical decision support.</p><p>The playbook emphasizes that sector-specific frameworks are essential. The Monetary Authority of Singapore&#8217;s AI Model Risk Management guidance for financial services provides a template for developing healthcare-specific guidance. Organizations should consider participating in community-based working groups interpreting frameworks like NIST AI RMF for healthcare contexts.</p><h3><strong>The Real Cost of Inaction</strong></h3><p>This gap in responsible AI implementation has real business implications. The report documents that responsible AI practices drive improved efficiency, enhanced customer trust, and significant improvements in product quality and contract win rates.</p><p>As regulatory requirements multiply globally (the EU AI Act, South Korea&#8217;s AI Framework Act, Japan&#8217;s AI Promotion Act), organizations without mature responsible AI practices face increasing compliance costs and competitive disadvantages.</p><p>Moreover, as agentic AI systems become more prevalent, comprehensive governance becomes even more critical. 
The foundations you build today for responsible AI will determine your ability to confidently deploy more autonomous systems tomorrow.</p><h3><strong>Three Questions to Ask Your Team This Week</strong></h3><ol><li><p>Where do we honestly stand on the four-stage responsible AI maturity scale?</p></li><li><p>Who owns responsible AI at the senior leadership level, and is it their primary responsibility?</p></li><li><p>What would it take to move one stage forward in the next 12 months?</p></li></ol><h2><strong>References</strong></h2><p>World Economic Forum. (2025, September 22). <em>Advancing responsible AI innovation: A playbook</em>. https://reports.weforum.org/docs/WEF_Advancing_Responsible_AI_Innovation_A_Playbook_2025.pdf</p>]]></content:encoded></item><item><title><![CDATA[Board Director AI Alert: 2026 and the New Era of AI Accountability]]></title><description><![CDATA[The year 2026 will mark a major shift in how organizations are evaluated on their use of artificial intelligence.]]></description><link>https://insights.techne.ai/p/board-director-ai-alert-2026-and</link><guid isPermaLink="false">https://insights.techne.ai/p/board-director-ai-alert-2026-and</guid><dc:creator><![CDATA[Khullani M. 
Abdullahi]]></dc:creator><pubDate>Tue, 11 Nov 2025 21:08:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yyk6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb3aa8d7-7a14-49a2-a84a-adb79d99dffe_2400x1350.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!yyk6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb3aa8d7-7a14-49a2-a84a-adb79d99dffe_2400x1350.png" width="1456" height="819" alt="" fetchpriority="high"></figure></div>
<p>The year 2026 will mark a major shift in how organizations are evaluated on their use of artificial intelligence. Over the past two years, many boards have focused on exploring AI&#8217;s strategic upside, targeting cost reductions, productivity gains, and new product opportunities. The next two years are likely to look very different. A new era of AI accountability is arriving, driven by three converging forces that are moving governance from a theoretical exercise to an urgent business imperative.</p><h2><strong>The Converging Forces Driving Change</strong></h2><p>A trio of powerful, interconnected pressures is propelling this fundamental shift:</p><ol><li><p>Regulatory Formalization: Across the U.S., EU, U.K., Canada, and APAC, AI rules are rapidly moving from voluntary guidance to enforceable compliance obligations. The U.S. 
Securities and Exchange Commission (SEC) flagged artificial intelligence disclosures as a priority in its 2024 disclosure-review cycle.</p></li><li><p>Investor Expectations: Institutional investors and private-equity firms are beginning to ask critical questions: Where is the AI risk oversight model? Who is accountable at the board level? How are you ensuring controls, fairness, data stewardship, and security?</p></li><li><p>Vendor and Platform Accountability: Major enterprise AI providers are preparing to require governance attestations from their customers to access higher-risk model capabilities and workflow integrations.</p></li></ol><p>Boards that wait until these requirements are mandatory will face reactive compliance, rushed control frameworks, reputational exposure, and delayed innovation. In contrast, boards that prepare now will secure a distinct competitive advantage, not just compliance.</p><h2><strong>What SEC Filings Reveal: AI Risk Is Now Material</strong></h2><p>Public company disclosures now include AI-specific risks at a scale and depth that signal board-level urgency. The materiality of AI risk is no longer speculative, and its disclosure is increasingly the norm.</p><p>According to a recent analysis of 10-K filings for the S&amp;P 500, 72% of companies disclosed at least one material AI risk in 2025 &#8212; up from approximately 12% in 2023 (Harvard Law Corporate Governance Forum; The Center for Audit Quality). 
Further data shows that 90% of S&amp;P 500 companies mentioned AI-related information in their 2024 Form 10-K filings (The Center for Audit Quality).</p><p>Disclosures show a heavy emphasis on cybersecurity and data-privacy risks: one study found that 2 in 5 companies cited AI-enabled cyber threats, and at least 100 filings mentioned bias in AI inputs/outputs (<a href="http://cslawreport.com/">cslawreport.com</a>).</p><p>These trends highlight several key risk themes emerging in disclosures:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jtq_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jtq_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png 424w, https://substackcdn.com/image/fetch/$s_!jtq_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png 848w, https://substackcdn.com/image/fetch/$s_!jtq_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png 1272w, https://substackcdn.com/image/fetch/$s_!jtq_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!jtq_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png" width="1400" height="787" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:787,&quot;width&quot;:1400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!jtq_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png 424w, https://substackcdn.com/image/fetch/$s_!jtq_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png 848w, https://substackcdn.com/image/fetch/$s_!jtq_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png 1272w, https://substackcdn.com/image/fetch/$s_!jtq_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8362c922-3730-4fec-91d7-a22abf46b4fa_1400x787.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>What companies are saying in their filings</p><h2><strong>Boardroom shifts for 2026</strong></h2><p>Given these pressures, boards must anticipate a new set of expectations. The focus is moving from high-level strategy to documented, defensible governance. 
These implications require new organizational capabilities:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GvtZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GvtZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png 424w, https://substackcdn.com/image/fetch/$s_!GvtZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png 848w, https://substackcdn.com/image/fetch/$s_!GvtZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png 1272w, https://substackcdn.com/image/fetch/$s_!GvtZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GvtZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png" width="1400" height="787" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:787,&quot;width&quot;:1400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!GvtZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png 424w, https://substackcdn.com/image/fetch/$s_!GvtZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png 848w, https://substackcdn.com/image/fetch/$s_!GvtZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png 1272w, https://substackcdn.com/image/fetch/$s_!GvtZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11736ca2-d0c7-47c8-b67f-fb942725b638_1400x787.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>From Awareness to Accountability</strong></h2><p>If 2024&#8211;2025 was the era of AI strategy and awareness, 2026 will be defined by AI accountability, risk management, and a rigorous focus on return on investment (ROI). Boards must assume that AI exposure is already on the investor radar and that the window for voluntary adoption of best practices is rapidly closing.</p><h2><strong>References</strong></h2><p>Bass Berry Securities Law Exchange. (2025). <em>Navigating AI regulatory and legal risks in public disclosures</em>.</p><p>CSLawReport. (2025). <em>Analysis of AI-enabled cyber threats and bias in corporate filings</em>.</p><p>Harvard Law School Forum on Corporate Governance. (2025). <em>The growth of material AI risk in S&amp;P 500 disclosures</em>.</p><p>Intelligize. (2025). <em>Cybersecurity and data-privacy: New AI vectors in corporate risk factors</em>.</p><p>Securities and Exchange Commission. (2024). 
<em>2024 disclosures-review cycle priorities</em>.</p><p>The Center for Audit Quality. (2024). <em>AI in 2024 Form 10-K filings: An S&amp;P 500 analysis</em>.</p><p>The Center for Audit Quality. (2025). <em>Trends in material AI risk disclosures: 2023&#8211;2025</em>.</p>]]></content:encoded></item><item><title><![CDATA[Synthetic Labor: Agents and the Workforce of the Future]]></title><description><![CDATA[The language we use for AI lacks a crucial noun: labor. We need a more precise vocabulary.]]></description><link>https://insights.techne.ai/p/synthetic-labor-agents-and-the-workforce</link><guid isPermaLink="false">https://insights.techne.ai/p/synthetic-labor-agents-and-the-workforce</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Mon, 23 Jun 2025 22:00:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OjGf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OjGf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OjGf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OjGf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!OjGf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OjGf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OjGf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg" width="1024" height="539" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:539,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:155080,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/166681216?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OjGf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!OjGf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OjGf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OjGf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf6e5cc-2820-4216-8ee1-6241bf60d10b_1024x539.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line><line">
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>When a paradigm-shifting technology emerges, our first instinct is to fit it into the world we already know. We reach for familiar metaphors, such as the &#8220;desktop&#8221; or the &#8220;cloud,&#8221; to make the novel understandable and accessible.</p><p>This tendency is particularly problematic when it comes to AI. For artificial intelligence, we have settled on terms that bring to mind ideas of assistance or delegation; we speak of AI &#8220;agents,&#8221; &#8220;copilots,&#8221; and &#8220;assistants.&#8221;</p><p>These terms are not wrong, but they are insufficient. They offer a useful, entry-level understanding of how humans interact with AI, but they lack the nuance required for deep strategic analysis.</p><p>By focusing exclusively on the technology&#8217;s function as a tool for a user, this vocabulary obscures its more profound economic identity (Chen, Srinivasan, &amp; Zakerinia, 2024).</p><blockquote><p>The language we use for AI lacks a crucial noun: labor. We are not merely building better tools but cultivating a new form of labor.</p></blockquote><h3>Synthetic Labor</h3><h4>&#8220;Synthetic Labor&#8221; as a Distinct Concept</h4><p>Throughout this article, I use the term &#8220;synthetic labor.&#8221; It is the mental model I&#8217;ve been using: whenever I hear agent, API, copilot, or AI assistant, I&#8217;ve been substituting &#8220;artificial labor&#8221; and &#8220;synthetic labor.&#8221; 
I wanted to be very clear about the <em>economic role and impact of integrating this new kind of labor.</em></p><p>While the academic and policy literature recognizes that AI and automation are transforming the nature of work and value creation, most sources refer to these phenomena using terms such as &#8220;automation,&#8221; &#8220;AI-driven labor,&#8221; &#8220;digital labor,&#8221; or simply &#8220;AI as a new factor of production&#8221; (Acemoglu &amp; Restrepo, 2018; Tony Blair Institute, 2024).</p><p>These existing terms often emphasize the technological capabilities of AI or its impact on employment, but they do not fully capture the underlying economic identity of autonomous, non-human systems performing labor.</p><p>For example, the Harvard Business School working paper discusses how generative AI can displace or complement human labor, but does not explicitly frame AI as a new category of labor itself (Chen, Srinivasan, &amp; Zakerinia, 2024). Similarly, the Tony Blair Institute report highlights the replacement of human labor by AI systems, yet it does not offer a specific term to describe the economic function these systems perform. This article introduces the concept of <strong>synthetic labor</strong> to bridge that terminological gap: it is the most accurate and strategically useful term for what these systems produce.</p><blockquote><p>I define &#8220;synthetic labor&#8221; as the autonomous execution of cognitive or physical tasks by non-human systems to generate economic value.</p></blockquote><p>This term is intentionally chosen to distinguish the unique economic and strategic implications of AI-driven labor from both traditional human labor and the broader concept of automation.</p><p>Adopting this language enables a clearer analysis of how AI and robotics are reshaping economic structures, fiscal policy, and the distribution of wealth. 
It also prompts leaders to ask new questions about the measurement, management, and societal implications of this emerging form of labor.</p><blockquote><p>Whether the &#8220;agent&#8221; is augmenting, supplementing, or replacing labor done by employees, it is not just providing completed tasks or automation; rather, it is providing synthetic labor. The intelligence is artificial, but the labor is synthetic. When it is pure software, it is disembodied synthetic labor, and when artificial intelligence is embedded in robots, it is embodied synthetic labor.</p></blockquote><p>As companies integrate these solutions, they are adding labor capacity, and in a very real sense, they are shifting the production of economic value in their organizations from biological carbon-based labor to synthetic silicon-based labor.</p><p>Framing this output as synthetic labor, as opposed to human or biological labor, allows us to better understand its downstream consequences at scale.</p><h4>The Nuance Gap: From Augmentation to Autonomy</h4><p>The term &#8220;agent&#8221; is insufficient because it conflates two fundamentally different economic functions: augmenting the work of a human and assuming the work of a human. A spreadsheet augments an accountant. The current generation of AI is capable of assuming the accountant&#8217;s core functions (Chen, Srinivasan, &amp; Zakerinia, 2024).</p><p>This is not a theoretical distinction. Consider the fintech company Klarna, which in 2024 reported its AI system was handling a customer service workload equivalent to that of 700 full-time human employees. The system was not assisting 700 people; it was performing the economic function of their labor (Tony Blair Institute, 2024). This reveals the nuance gap in our current language.</p><p>To close this gap, we must differentiate the forms this new labor takes:</p><p><strong>Cognitive Synthetic Labor:</strong> The invisible workforce delivered through software. 
It performs tasks like drafting legal briefs, analyzing market data, and writing software. Its key characteristic is unprecedented scalability, allowing for the execution of millions of cognitive task-hours at near-zero marginal cost (Holland &amp; Davies, 2020).</p><p><strong>Physical Synthetic Labor:</strong> The embodied workforce of advanced robotics. These are not the single-task arms of the past, but adaptive systems capable of navigating dynamic environments to perform intricate assembly, logistics, and maintenance tasks (Holland &amp; Davies, 2020).</p><p>By classifying both as &#8220;synthetic labor,&#8221; we can analyze their collective economic impact with a clearer lens.</p><h3>Three Economic Dynamics Our Language Obscures</h3><p>Viewing AI as a new labor force reveals three profound, yet often understated, structural dynamics. These are not future crises to be feared, but present-day realities to be managed.</p><p><strong>The Fiscal Dynamic: Decoupling Production from Public Revenue.</strong><br>Modern fiscal architecture is built on a foundation of taxing wages. In the United States, for instance, individual income and payroll taxes provide roughly 85% of federal revenue, funding the core of the social contract. Synthetic labor operates largely outside this framework.</p><p>The value it generates flows not as taxable wages to an individual, but as revenue to the AI provider and efficiency gains for the adopter. This creates a quiet but persistent structural challenge: the economy can become vastly more productive while the tax base that supports society stagnates (Tony Blair Institute, 2024; Acemoglu &amp; Restrepo, 2018).</p><p><strong>The Demand Dynamic: The Paradox of the Non-Consuming Producer.</strong><br>The 20th-century economy was powered by the virtuous cycle of wages enabling consumption. This symbiosis between mass production and mass purchasing power fueled growth. 
Synthetic labor is a supremely efficient producer, but it is not a consumer.</p><p>An AI model does not take out a mortgage, buy a car, or purchase services. As it shoulders a greater share of the economic workload, it systematically decouples value creation from broad-based purchasing power, posing a subtle, long-term challenge to aggregate demand (Tony Blair Institute, 2024; Acemoglu &amp; Restrepo, 2018).</p><p><strong>The Capital Dynamic: The Natural Path of Economic Returns.</strong><br>The economic returns from human labor are, by their nature, distributed through salaries to millions of individuals. The returns from synthetic labor are, by their design, highly concentrated.</p><p>The value flows to the owners of the foundational models and the hyperscale infrastructure required to run them. With the top 10% of households already owning approximately 89% of U.S. stocks, this dynamic acts as a powerful engine for accelerating wealth concentration, not as a side effect but as a direct consequence of the technology&#8217;s structure (Tony Blair Institute, 2024; Acemoglu &amp; Restrepo, 2018).</p><h3>The Strategic Imperative: From Insufficient Language to Expanded Vision</h3><p>An insufficient vocabulary leads to an insufficient strategy. As long as we see AI primarily as an &#8220;agent,&#8221; our strategic responses will remain tactical and limited: we will focus on reskilling programs and productivity dashboards. These are necessary, but they do not address the systemic shifts underway (Tony Blair Institute, 2024).</p><p>Adopting the framework of synthetic labor elevates the conversation. 
It prompts a more sophisticated and essential set of questions for leadership:</p><ul><li><p><strong>How do we measure the full value and cost of deploying synthetic labor on our balance sheets, beyond immediate headcount reduction?</strong></p></li><li><p><strong>What is our organization&#8217;s role in the health of the consumer market upon which we depend, and how does mass automation affect that long-term dynamic?</strong></p></li><li><p><strong>If labor is no longer the primary factor of production and value distribution, what new mechanisms should we explore to ensure economic stability and broad prosperity?</strong></p></li></ul><p>The goal of language is to provide clarity, and clarity is the bedrock of sound strategy. The shift from &#8220;agent&#8221; to &#8220;synthetic labor&#8221; is more than a semantic debate; it is a mental model upgrade. It allows leaders to see the economic landscape as it is becoming, not simply as it has been, enabling them to move from a reactive posture to one of proactive, intentional design.</p><h3>Works Cited</h3><ul><li><p><strong>Acemoglu, D., &amp; Restrepo, P. (2018).</strong> &#8220;Artificial Intelligence, Automation and Work.&#8221; NBER Working Paper &#8470;24196.<br><a href="https://www.nber.org/system/files/working_papers/w24196/w24196.pdf">https://www.nber.org/system/files/working_papers/w24196/w24196.pdf</a></p></li><li><p><strong>Chen, Wilbur Xinyuan, Srinivasan, Suraj, &amp; Zakerinia, Saleh. (2024).</strong> &#8220;Displacement or Complementarity? The Labor Market Impact of Generative AI.&#8221; Harvard Business School Working Paper, &#8470;25&#8211;039.<br><a href="https://www.hbs.edu/ris/download.aspx?name=25-039.pdf">https://www.hbs.edu/ris/download.aspx?name=25-039.pdf</a></p></li><li><p><strong>Holland, I., &amp; Davies, J.A. 
(2020).</strong> &#8220;Automation in the life science research laboratory.&#8221; <em>Frontiers in Bioengineering and Biotechnology</em>.<br><a href="https://www.pure.ed.ac.uk/ws/portalfiles/portal/177368167/AcceptedMS.pdf">https://www.pure.ed.ac.uk/ws/portalfiles/portal/177368167/AcceptedMS.pdf</a></p></li><li><p><strong>Tony Blair Institute. (2024).</strong> &#8220;The Impact of AI on the Labour Market.&#8221;<br><a href="https://institute.global/insights/economic-prosperity/the-impact-of-ai-on-the-labour-market">https://institute.global/insights/economic-prosperity/the-impact-of-ai-on-the-labour-market</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Building Effective AI Agents ]]></title><description><![CDATA[A Deep Dive into OpenAI's Agent Guide]]></description><link>https://insights.techne.ai/p/building-effective-ai-agents</link><guid isPermaLink="false">https://insights.techne.ai/p/building-effective-ai-agents</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Wed, 23 Apr 2025 21:04:59 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/161999044/6ac2bb70445d4e375f15b54e17bdd0f9.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p></p>]]></content:encoded></item><item><title><![CDATA[Unlocking the Power of Prompt Engineering ]]></title><description><![CDATA[A Deep Dive Into Google&#8217;s &#8220;Prompt Engineering&#8221; Whitepaper]]></description><link>https://insights.techne.ai/p/unlocking-the-power-of-prompt-engineering</link><guid isPermaLink="false">https://insights.techne.ai/p/unlocking-the-power-of-prompt-engineering</guid><dc:creator><![CDATA[Khullani M. 
Abdullahi]]></dc:creator><pubDate>Fri, 18 Apr 2025 18:50:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!s1Aw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Prompt engineering has quickly become one of the most essential skills in the age of AI and large language models (LLMs). For anyone working with LLMs, whether you're a developer, technical writer, analyst, researcher, or just an enthusiast, the quality of your prompt can make all the difference in harnessing the full potential of generative AI systems. That&#8217;s why Google&#8217;s newly released whitepaper, <strong>&#8220;Prompt Engineering&#8221; by Lee Boonstra</strong>, is a must-read resource. Let&#8217;s explore what makes this guide so valuable for both beginners and experts.</p><div><hr></div><h2><strong>What Is Prompt Engineering?</strong></h2><p>At its core, prompt engineering is the art and science of crafting effective inputs (&#8220;prompts&#8221;) that instruct LLMs to produce high-quality, relevant outputs. You don&#8217;t need to be a machine learning engineer&#8212;anyone can write a prompt. But as the whitepaper emphasizes, writing a <em>great</em> prompt is an iterative process that benefits from understanding how LLMs predict and generate responses.</p><p>The guide dissects aspects like model selection, configuration (temperature, top-K/P sampling), prompt structure, word choice, and more&#8212;all of which significantly affect the quality of AI-generated content.</p><div><hr></div><h2><strong>What&#8217;s Inside the Whitepaper?</strong></h2><p>The whitepaper provides a comprehensive tour of prompt engineering&#8212;and it&#8217;s more than a simple primer. Here&#8217;s a taste of what you&#8217;ll find:</p><h3><strong>1. 
Foundational Concepts &amp; LLM Configuration</strong></h3><p>Understand the mechanics of LLMs: how they predict tokens, why configurations like output length matter, and how parameters like temperature, top-K, and top-P shape your results. The whitepaper demystifies these often-confusing terms and offers actionable settings for different creative tasks.</p><h3><strong>2. Prompting Techniques</strong></h3><p>A huge highlight is the systematic walkthrough of various prompting approaches:</p><ul><li><p><strong>Zero-shot, One-shot, and Few-shot Prompting:</strong> From simple task description to including targeted examples.</p></li><li><p><strong>System, Contextual, and Role Prompting:</strong> Set the &#8220;big picture&#8221; for your LLM, provide relevant context, or assign personas for your outputs.</p></li><li><p><strong>Step-back and Chain-of-Thought Prompting:</strong> Use reasoning, breaking down complex problems into logical steps for more accurate results.</p></li><li><p><strong>Self-Consistency &amp; Tree-of-Thoughts:</strong> Learn how multiple reasoned paths boost the reliability and depth of LLM answers.</p></li><li><p><strong>ReAct and Automation:</strong> Explore automated tools for prompt engineering and advanced concepts like integrating LLMs with external tools.</p></li></ul><h3><strong>3. Code-Centric Prompting</strong></h3><p>Whether you want LLMs to write, translate, debug, or explain code, the paper includes step-by-step examples&#8212;complete with sample prompts and outputs you can adapt for your own projects.</p><h3><strong>4. 
Best Practices &amp; Pitfalls</strong></h3><p>The guide is rich in practical advice:</p><ul><li><p><strong>Provide examples.</strong></p></li><li><p><strong>Be clear, simple, and specific.</strong></p></li><li><p><strong>Use positive instructions instead of long lists of things to avoid.</strong></p></li><li><p><strong>Control output length and structure (JSON, XML, etc.).</strong></p></li><li><p><strong>Iterate and document your prompt experiments.</strong></p></li></ul><p>Plus, the paper highlights common bugs (like the dreaded "repetition loop") and offers strategies to fix them.</p><div><hr></div><h2><strong>Why Does This Resource Matter?</strong></h2><p><strong>Prompt Engineering</strong> isn&#8217;t just theory&#8212;it&#8217;s packed with templates, editable tables, real-world code, and recommendations for tools (like Vertex AI Studio). It provides a framework for:</p><ul><li><p>Faster iteration and higher prompt quality</p></li><li><p>More reliable, safer, and relevant AI output</p></li><li><p>A shared vocabulary and toolkit for teams working in AI</p></li></ul><p>Furthermore, it stays current with trends in multimodal input (text, images, code), automated prompt generation, and best practices for production-ready AI workflows.</p><div><hr></div><h2><strong>Who Should Read It?</strong></h2><ul><li><p><strong>AI Practitioners:</strong> Level up your skills with advanced strategies.</p></li><li><p><strong>Developers &amp; Engineers:</strong> Save time with code, translation, and debugging prompt patterns.</p></li><li><p><strong>Educators:</strong> Find clear, structured explanations and examples.</p></li><li><p><strong>Product Managers &amp; Business Leaders:</strong> Understand the capabilities and limits of prompt-based AI.</p></li></ul><div><hr></div><h2><strong>Final Thoughts</strong></h2><p>Google&#8217;s <strong>Prompt Engineering</strong> whitepaper by Lee Boonstra is a thorough, example-rich, and above all <strong>practical resource</strong>. 
Whether you&#8217;re just starting with LLMs or you&#8217;re an AI veteran, this guide will help you move from casual prompting to mastering the craft.</p><p>If you&#8217;re serious about AI, this is a <em>must-bookmark</em>&#8212;and one you&#8217;ll come back to, time and again, as you develop and refine your prompt engineering skills.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://drive.google.com/file/d/1AbaBYbEa_EbPelsT40-vj64L-2IwUJHy/view" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!s1Aw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png 424w, https://substackcdn.com/image/fetch/$s_!s1Aw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png 848w, https://substackcdn.com/image/fetch/$s_!s1Aw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!s1Aw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!s1Aw!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png" width="1200" height="1554.2857142857142" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:1632,&quot;width&quot;:1260,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:848079,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://drive.google.com/file/d/1AbaBYbEa_EbPelsT40-vj64L-2IwUJHy/view&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/161628384?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!s1Aw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png 424w, https://substackcdn.com/image/fetch/$s_!s1Aw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png 848w, https://substackcdn.com/image/fetch/$s_!s1Aw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!s1Aw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a90b32-2d71-40e3-8b0b-171bff6901c0_1260x1632.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" 
type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Have you read the whitepaper or tried these techniques? 
Share your experiences or favorite tips in the comments!</em></p><p></p><h3><strong>Useful Links</strong></h3><ul><li><p><a href="https://gemini.google.com/">Gemini by Google</a></p></li><li><p><a href="https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/introduction-prompt-design">Vertex AI Prompting Introduction</a></p></li><li><p><a href="https://inthecloud.withgoogle.com/gemini-for-google-workspace-prompt-guide/dl-cd.html">Gemini Prompt Guide for Google Workspace</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[AI Oversight in the Boardroom]]></title><description><![CDATA[The Next Frontier for Corporate Governance]]></description><link>https://insights.techne.ai/p/ai-oversight-in-the-boardroom</link><guid isPermaLink="false">https://insights.techne.ai/p/ai-oversight-in-the-boardroom</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Tue, 15 Apr 2025 17:54:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Sdeo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Sdeo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Sdeo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png 424w, 
https://substackcdn.com/image/fetch/$s_!Sdeo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!Sdeo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!Sdeo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Sdeo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png" width="1200" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:177785,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/161401115?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Sdeo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png 424w, https://substackcdn.com/image/fetch/$s_!Sdeo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png 848w, https://substackcdn.com/image/fetch/$s_!Sdeo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!Sdeo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ac74f1-0d73-4fd2-935b-530030f543bf_1200x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>Artificial Intelligence has swiftly evolved from boardroom buzzword to strategic imperative, permeating every sector from financial services to healthcare. Yet, as companies leverage AI&#8217;s transformative potential, the risks, ranging from ethical lapses to regulatory breaches, have never been higher. A recent McKinsey survey (Q1 2024) found that 77% of board members rated AI-related risks among their top three strategic concerns. Boards that fail to address their oversight obligations on AI are taking risks not just with technology, but with their company&#8217;s reputation, legal standing, and future.</p><p><strong>The Board&#8217;s New Mandate: AI Oversight</strong></p><p>Traditionally, boards have focused on financial stewardship, risk management, and executive succession. The rise of AI calls for a new layer of vigilance:</p><ul><li><p><strong>Ethical and Regulatory Risk:</strong> Boards must ensure AI deployments adhere to evolving global regulatory standards (such as the EU AI Act or U.S. 
SEC guidelines) and ethical best practices.</p></li><li><p><strong>Strategy and Competitiveness:</strong> Oversight is needed not only to mitigate risk but also to guide investment in AI strategy for sustainable competitive advantage.</p></li><li><p><strong>Stakeholder Trust:</strong> With 64% of global consumers expressing mistrust of AI (Pew Research, 2024), public perception now directly impacts enterprise value.</p></li></ul><p><strong>What Are the Board&#8217;s Obligations?</strong></p><ol><li><p><strong>Establish Clear AI Governance Frameworks:</strong> Formalize AI ethics committees or integrate AI into standing risk/audit committees.</p></li><li><p><strong>Review and Approve AI-Related Policies:</strong> Ensure policies cover data privacy, transparency, bias mitigation, and explainability.</p></li><li><p><strong>Demand Transparency from Management:</strong> Require regular briefings on AI use cases, model risks, and outcomes.</p></li><li><p><strong>Fiduciary Duty:</strong> Exercise care, diligence, and skill as AI becomes material to company operations and risk landscape.</p></li><li><p><strong>Monitor Compliance and Incidents:</strong> Require robust incident reporting and breach response mechanisms tied to AI failures or errors.</p></li></ol><p><strong>How Should Board Members Educate Themselves on AI?</strong></p><p>A worrying insight: only 24% of board directors believe they are &#8220;very prepared&#8221; to understand the technical and ethical risks of AI, per a 2023 Stanford CGRI survey.</p><p><strong>Leading practices for board self-education include:</strong></p><ul><li><p><strong>Engage in Board-Level AI Bootcamps:</strong> Intensive, short-term educational programs with top-tier academic or consulting partners.</p></li><li><p><strong>Mandate AI Literacy for All Directors:</strong> Incorporate AI fundamentals as part of annual board education and onboarding.</p></li><li><p><strong>Bring in Independent AI Experts:</strong> Use third-party 
specialists as ad hoc advisors or even independent board members.</p></li><li><p><strong>Leverage Cross-Functional Workshops:</strong> Blend strategic, legal, data science, and operational perspectives in scenario-based workshops.</p></li><li><p><strong>Stay Current on AI Regulation:</strong> Subscribe to briefings on regulatory updates (EU, U.S., Asia-Pacific, etc.).</p></li></ul><p><strong>Data Spotlight: Board Readiness for AI Oversight</strong></p><ul><li><p><strong>47%</strong> of Fortune 500 boards have no member with recognized expertise in technology or data science (McKinsey, 2024).</p></li><li><p><strong>76%</strong> of boards have not reviewed, in the past year, an incident or risk disclosure specifically related to AI (CGRI, 2023).</p></li><li><p><strong>60%</strong> of directors say regulatory and reputational risk from AI is &#8220;on the rise&#8221;&#8212;yet only <strong>18%</strong> have formalized AI risk dashboards.</p></li></ul><p>Much as board agility was judged by the response to COVID-19, in the 2020s it will be measured by readiness to oversee AI. The future belongs to boards that embrace this frontier, leveraging education, vigilance, and a willingness to adapt&#8212;before oversight gaps become tomorrow&#8217;s headlines.</p>]]></content:encoded></item><item><title><![CDATA[The Rise of the AI-Integrated Polymath: ]]></title><description><![CDATA[How Individuals Are Vertically & Horizontally Integrating Their Capacities]]></description><link>https://insights.techne.ai/p/the-rise-of-the-ai-integrated-polymath</link><guid isPermaLink="false">https://insights.techne.ai/p/the-rise-of-the-ai-integrated-polymath</guid><dc:creator><![CDATA[Khullani M. 
Abdullahi]]></dc:creator><pubDate>Sun, 06 Apr 2025 00:57:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!E-zI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!E-zI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!E-zI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!E-zI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!E-zI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!E-zI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!E-zI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2052919,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/160681093?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!E-zI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!E-zI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!E-zI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!E-zI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7721cc83-a380-45a5-86a7-ffe330e59567_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Imagine a single knowledge worker who conceptualizes a product in the morning, designs and prototypes it by afternoon, and markets it by nightfall, all without handing it off to another department. Once, such end-to-end execution was the realm of giant corporations. Titans like Carnegie and Rockefeller built empires by integrating vertically, controlling raw materials, production, and distribution to expand control, reduce dependencies, and capture value (Harrigan, 1984). 
Others expanded horizontally, buying up rivals to dominate markets and achieve scale (Ravenscraft, 1987).</p><p>Michael Porter&#8217;s (1985) classic frameworks showed how owning more of the value chain (vertical integration) can hedge against supplier power or secure distribution and how merging across an industry (horizontal integration) can neutralize competitors. Clayton Christensen and Michael Raynor (2003) later observed that when a product&#8217;s performance isn&#8217;t yet &#8220;good enough,&#8221; tightly integrated architectures often have an edge, whereas modular outsourcing wins once baseline performance is exceeded. In short, integration was historically a corporate strategy for survival and supremacy &#8212; owning more meant doing more, better.</p><p>Today, a similar drama is unfolding not at the organizational level but at the level of individual knowledge workers and tech builders.</p><h1>The Present: The AI-Integrated Individual</h1><p>As artificial intelligence technologies have advanced, they have begun to act as force multipliers for human expertise. In the workplace, AI systems range from narrow tools (like predictive analytics and recommendation engines) to general capabilities (like large language models and chatbots) that can assist in a variety of tasks.</p><p>The term AI-integrated employee refers to a professional who actively leverages AI tools and systems to enhance their work. These individuals are still T-shaped in human terms&#8202;&#8212;&#8202;possessing deep expertise and broad skills&#8202;&#8212;&#8202;but they effectively enlarge both dimensions by incorporating AI into their workflow. 
In other words, AI becomes an extension of their abilities, allowing them to achieve greater depth in analysis and broader scope in problem-solving than would be possible through human effort alone.</p><p>A single individual, armed with AI copilots, can now exhibit forms of personal integration analogous to corporate strategies:</p><ul><li><p><strong>Individual Vertical Integration</strong>: This refers to an individual managing a significant portion, or even the entirety, of a value chain for a specific output &#8212; from initial concept and research through design, development, execution, and delivery &#8212; leveraging AI tools to bridge skill gaps and automate tasks.</p></li><li><p><strong>Individual Horizontal Integration</strong>: This describes an individual spanning and synthesizing skills across multiple, traditionally distinct domains &#8212; such as creative, technical, analytical, and strategic &#8212; using AI to augment capabilities in areas outside their core expertise.</p></li></ul><p>Current management thinking emphasizes augmentation over automation: AI is most powerful when it complements human skills, not replaces them (Raisch &amp; Krakowski, 2021). Wilson and Daugherty (2018) describe this synergy as collaborative intelligence, where humans and AI &#8220;enhance each other&#8217;s strengths.&#8221; Their Harvard Business Review article provides numerous examples of AI assisting employees: helping doctors to diagnose diseases by rapidly analyzing images, aiding engineers in exploring thousands of design permutations, or enabling customer service reps to handle routine inquiries with chatbots so they can focus on complex customer needs (Wilson &amp; Daugherty, 2018).</p><p>In each case, the human&#8217;s deep domain knowledge (vertical T) is augmented by the AI&#8217;s ability to process vast data or automate tasks, effectively deepening the insight or productivity in that domain. 
Simultaneously, the human&#8217;s breadth (horizontal T) expands because AI tools often encapsulate knowledge from multiple disciplines. For instance, an architect using a generative design AI gains access to structural engineering and material science insights that might lie outside the architect&#8217;s own training, broadening the range of considerations in their design process.</p><p>The augmentation effect of AI can be framed in terms of cognitive and informational breadth. An AI-integrated employee can quickly obtain surface knowledge in areas beyond their expertise via AI queries, much like having a team of research assistants on demand. This means a single individual can confidently venture into adjacent domains with AI support. A software developer might use AI to get marketing data insights when considering user experience&#8202;&#8212;&#8202;effectively gaining breadth in understanding customer behavior.</p><p>Conversely, AI can also bolster depth by providing advanced analysis or spotting patterns that the human expert might miss. For example, a financial analyst might deeply understand valuation models, but an AI that scans global market data could highlight subtle correlations or anomalies, allowing the analyst to delve even deeper into an explanation or strategy. The AI-integrated employee stage takes the T-shaped model and stretches it: the vertical stroke goes deeper with AI-driven insights, and the horizontal stroke reaches farther with AI-provided knowledge in diverse areas.</p><h1>The Future: The AI-Integrated Polymath</h1><blockquote><p>We are witnessing the rise of the <strong>AI-integrated polymath</strong>: <em>an individual empowered by artificial intelligence to operate with the scope, skill range, and end-to-end capability that once required multidisciplinary teams or vertically integrated firms.</em></p></blockquote><p>This represents a radical shift in the philosophy of work and human capital. 
Where the industrial age prized specialization and the division of labor (Smith, 1776/2007), the AI age appears poised to reward a synthesis of skills and roles. The new maxim might be: <em><strong>integrate or stagnate.</strong></em></p><p>Consider this hypothetical: a tech founder on his morning walk uses a generative AI assistant on his phone to brainstorm product ideas and strategy narratives by voice. By afternoon, he&#8217;s teaching himself Python to build a quick prototype, leveraging AI tutors and code generation tools (like GitHub Copilot) to accelerate learning (Kazemitabaar et al., 2023). In the evening, he rapidly ingests a stack of domain research &#8212; books, market reports, even YouTube lecture transcripts &#8212; using AI summarizers to distill key insights in minutes (Liu et al., 2023).</p><p>By night, he&#8217;s designing graphics with the help of an AI image generator and drafting a marketing blog post with a large language model like ChatGPT (OpenAI). In one day, this individual has ideated, researched, built, and marketed a concept across what used to be four distinct job descriptions. Such a story would have sounded fanciful a decade ago; today it&#8217;s increasingly commonplace. Indeed, recent data suggests a rapid uptake of AI tools among knowledge workers (Hern, 2023), signaling an unprecedented surge in people potentially taking on broader tasks with AI&#8217;s help.</p><p>This shift represents more than just new tools &#8212; it may be a philosophical redefinition of labor and value creation. In the 20th century, efficiency often came from breaking work into narrow, specialized pieces (Smith, 1776/2007; Taylor, 1911). 
In the 21st century, productivity gains may increasingly come from weaving work back together, with one person (plus their AI assistants) owning larger swaths of the creation process.</p><p>The individual becomes a <strong>&#8220;full-stack&#8221; value creator</strong>, <em>capable of operating across multiple layers of the value-creation process, akin to a full-stack software developer but applied more broadly across business functions, enabled by AI augmentation.</em> We can draw an analogy to Porter&#8217;s (1985) value chain: where a company once decided which activities to perform in-house versus outsourcing, now a professional decides which skills to internalize and augment with AI. Integrating more skills &#8220;in-house&#8221; within oneself can reduce dependence on others&#8217; specific expertise and minimize the friction of handoffs (Williamson, 1975), much as corporate vertical integration aimed to reduce transaction costs and reliance on suppliers (Harrigan, 1984).</p><blockquote><p><em>The result is not only potential efficiency but also a new form of creative control and agility. A solo builder can iterate faster because the design, coding, and testing functions are unified under their direct control (augmented by AI) rather than coordinated across organizational silos.</em></p></blockquote><p>There is also a notable shift in where expertise resides and how it is deployed. Christensen and Raynor (2003) taught that when coordination across interdependent parts is critical and interfaces are not yet standardized or well-defined, integration yields better performance. 
In today&#8217;s fast-moving environment, AI is helping individuals coordinate complex, multi-step tasks internally.</p><p>The &#8220;interfaces&#8221; between brainstorming, coding, and marketing &#8212; once often represented by handoffs between departments &#8212; can now be bridged more seamlessly by AI tools that translate natural language prompts into code (Chen et al., 2021) or distill research into strategic options. Ambitious professionals are seizing this moment to internalize what would traditionally be considered &#8220;external&#8221; capabilities. The emergence of no-code and low-code platforms, often incorporating AI assistance, has given rise to the &#8220;citizen developer&#8221; &#8212; non-programmers who build applications (Richardson &amp; Rymer, 2021; Lakshmi &amp; Sri, 2024).</p><p>In parallel, domain experts in fields like marketing or law are using tools like GPT-4 to write scripts, analyze documents, or even generate code snippets, effectively taking on tasks outside their formal training (Bommarito &amp; Katz, 2022). Each person potentially becomes a small-scale integrator of knowledge and function. As studies have shown, generative AI can significantly enhance worker productivity and allow individuals to perform tasks beyond their prior skill set, effectively enabling rapid, task-specific &#8220;reskilling&#8221; (Brynjolfsson et al., 2023; Dell&#8217;Acqua et al., 2023).</p><p>The implications for businesses and careers are profound. Leaders must recognize that the future of work involves not just humans <em>or</em> machines, but humans <em>as</em> machine-enhanced polymaths. Individual horizontal and vertical integration means that traditional job roles will likely blur. 
We may see more product managers who can prototype software, graphic designers who conduct market analysis, and analysts who generate their own data pipelines &#8212; all empowered by AI copilots.</p><blockquote><p>This calls for a new organizational and talent mindset. Instead of solely relying on hiring multiple specialists to form a team, companies might increasingly empower one multidisciplinary <strong>AI-integrated polymath</strong> &#8212; <em>an individual adept at leveraging AI to orchestrate diverse tasks and synthesize knowledge across domains</em> &#8212; to drive projects, potentially supported by specialists or AI agents. For the individual, personal competitive advantage may stem less from deep specialization in one area and more from the ability to architect a unique array of skills and effectively orchestrate AI tools to amplify them (Root-Bernstein &amp; Root-Bernstein, 2003). Just as firms historically gained an edge by possessing rare, integrated capabilities (Barney, 1991), individuals may gain an edge by cultivating a rare, adaptable mix of talents augmented by AI proficiency.</p></blockquote><p>Yet this integration of human capability is not without its challenges. It demands high levels of learning agility, adaptability, and what might be termed <strong>&#8220;research taste&#8221; or intuition</strong>: <em>the critical ability to navigate ambiguity, identify relevant information quickly within vast datasets, synthesize insights across disciplines, and discern high-quality patterns or solutions, especially when working alongside AI.</em> In a world where vast information and powerful AI assistance are readily available, knowing what questions to ask, where to probe, which AI outputs to trust and refine, and which cross-disciplinary insights matter becomes crucial (Agrawal et al., 2018). 
This kind of integrative intuition is reminiscent of historical polymaths; it combines a sense for the &#8220;big picture&#8221; with tactical execution know-how (Burke, 2020).</p><p>It is also a capability that must be actively cultivated &#8212; through deliberate practice, diverse exposure, critical thinking, and a mindset that constantly oscillates between detailed work and strategic overview. The narrative arc of a career may shift from climbing a specialized ladder to building a personal portfolio of interconnected skills that can be dynamically recombined. Essentially, tomorrow&#8217;s leaders and innovators may increasingly resemble &#8220;integrators&#8221; &#8212; not just managers of people, but managers of knowledge, tools, and ideas within themselves.</p><p>High performers have already begun engaging in individual-level vertical and horizontal integration, enabled by AI. This represents a conceptual leap in how we might think about skills, roles, and value creation. It is also a sobering prospect for employees who have not yet made that leap. Just as business strategy scholars chronicled corporations extending their reach for competitive advantage through integration (e.g., Porter, 1985; Harrigan, 1984), we can now observe individuals potentially doing the same with AI&#8217;s assistance. This narrative is one of empowerment and possibility: the individual as a micro-enterprise, the professional as an AI-augmented polymath, and AI as a powerful enabler collapsing traditional boundaries between domains.</p><blockquote><p><em>It&#8217;s a story still unfolding, but the potential implication is becoming clearer &#8212; those who effectively learn to integrate knowledge and leverage AI across diverse tasks may be best positioned to innovate and thrive in a world where AI agents are abundant and their human orchestrators are few.</em></p></blockquote><p><strong>References</strong></p><p>Agrawal, A., Gans, J., &amp; Goldfarb, A. (2018). 
<em>Prediction machines: The simple economics of artificial intelligence</em>. Harvard Business Review Press.</p><p>Barney, J. (1991). Firm resources and sustained competitive advantage. <em>Journal of Management</em>, <em>17</em>(1), 99&#8211;120. </p><p>Bommarito, M. J., &amp; Katz, D. M. (2022). GPT takes the bar exam. <em>arXiv preprint arXiv:2212.14402</em>. </p><p>Brynjolfsson, E., Li, D., &amp; Raymond, L. R. (2023). Generative AI at work. <em>NBER Working Paper &#8470;31161</em>. National Bureau of Economic Research. </p><p>Burke, P. (2020). <em>The polymath: A cultural history from Leonardo da Vinci to Susan Sontag</em>. Yale University Press.</p><p>Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. D. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., &#8230; Zaremba, W. (2021). Evaluating large language models trained on code. <em>arXiv preprint arXiv:2107.03374</em>. </p><p>Christensen, C. M., &amp; Raynor, M. E. (2003). <em>The innovator&#8217;s solution: Creating and sustaining successful growth</em>. Harvard Business School Press.</p><p>Dell&#8217;Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., &amp; Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. <em>Harvard Business School Technology &amp; Operations Mgt. Unit Working Paper &#8470;24&#8211;013</em>. </p><p>Harrigan, K. R. (1984). Formulating vertical integration strategies. <em>Academy of Management Review</em>, <em>9</em>(4), 638&#8211;652. </p><p>Hern, A. (2023, September 14). Three-quarters of white-collar workers using AI, Microsoft survey finds. <em>The Guardian</em>.</p><p>Kazemitabaar, M., Naini, M. M., Feng, S., Grossman, T., &amp; Tovi, F. (2023). 
Studying the effect of AI code generators on supporting creative coding. <em>Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</em>, 1&#8211;16. </p><p>Lakshmi, V. V., &amp; Sri, M. V. (2024). Empowering organizations through AI-driven low-code/no-code development: Unlocking citizen developers&#8217; potential. <em>International Journal of Computer Science and Engineering (IJCSE)</em>, <em>12</em>(1), 1&#8211;8.</p><p>Liu, Y., Iter, D., Xu, Y., Wang, S., Xu, R., &amp; Zhu, C. (2023). Large language models for information retrieval: A survey. <em>arXiv preprint arXiv:2308.07107</em>. </p><p>Porter, M. E. (1985). <em>Competitive advantage: Creating and sustaining superior performance</em>. Free Press.</p><p>Ravenscraft, D. J. (1987). The 1980s merger wave: An industrial organization perspective. In L. E. Browne &amp; E. S. Rosengren (Eds.), <em>The merger boom</em> (pp. 17&#8211;37). Federal Reserve Bank of Boston.</p><p>Richardson, C., &amp; Rymer, J. (2021). <em>The Forrester Wave&#8482;: Low-Code Development Platforms For Professional Developers, Q2 2021</em>. Forrester Research.</p><p>Root-Bernstein, R., &amp; Root-Bernstein, M. (2003). Multiple giftedness in adults: The case of polymaths. In D. Ambrose, L. M. Cohen, &amp; A. J. Tannenbaum (Eds.), <em>Creative intelligence: Toward theoretic integration</em> (pp. 237&#8211;258). Hampton Press.</p><p>Smith, A. (2007). <em>An inquiry into the nature and causes of the wealth of nations</em>. MetaLibri. (Original work published 1776)</p><p>Taylor, F. W. (1911). <em>The principles of scientific management</em>. Harper &amp; Brothers.</p><p>Williamson, O. E. (1975). <em>Markets and hierarchies: Analysis and antitrust implications</em>. 
Free Press.</p>]]></content:encoded></item><item><title><![CDATA[Bring Your Own AI in Chicago?]]></title><description><![CDATA[Balancing Innovation With Risk]]></description><link>https://insights.techne.ai/p/bring-your-own-ai-in-chicago</link><guid isPermaLink="false">https://insights.techne.ai/p/bring-your-own-ai-in-chicago</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Wed, 02 Apr 2025 15:25:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8857dbc3-bcf5-4d7e-90a9-8b27bc02ae95_2060x842.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the past year, I've been working closely with diverse organizations across Chicago, guiding employees and leaders on how to effectively and responsibly leverage generative AI. In virtually every session, I've stumbled upon a curious yet alarming reality: employees eagerly using freely available AI platforms with minimal, if any, oversight. Even more concerning, sensitive personal and proprietary company information regularly finds its way onto these unprotected public tools&#8212;often lacking essential security measures, such as two-factor authentication or enterprise-grade data protection.</p><p>In the IT Risk Management space, this is called &#8220;<a href="https://www.ibm.com/think/topics/shadow-ai">Shadow AI</a>&#8221;; Chicago&#8217;s executives must step back and initiate frank, transparent conversations about what responsible AI usage genuinely entails. 
Too frequently, employees hesitate to disclose which AI tools they use or the tasks they're automating, fearing repercussions in the face of absent or unclear internal policies.</p><p>Beyond risks, employee reticence and silence also stifle learning, inhibit transparency, and ultimately put valuable data at risk. Leaders now have a timely opportunity&#8212;indeed, a responsibility&#8212;to clearly articulate their organization's stance on AI, craft thoughtful usage guidelines, and create an environment of trust where successes and lessons learned are openly shared.</p><p>Chicago, renowned globally for its strength in finance, healthcare innovation, robust manufacturing sector, and thriving tech ecosystem, faces real risk if businesses allow unmanaged AI practices to continue unchecked. 
The consequences can be severe: data breaches, intellectual property loss, compromised client confidentiality, and even reputational harm to our city's image as an innovation powerhouse.</p><p>Recent research from the MIT Center for Information Systems Research underscores that outright bans on employee-driven "<a href="https://sloanreview.mit.edu/article/bring-your-own-ai-how-to-balance-risks-and-innovation/">Bring Your Own AI</a>" (BYOAI) initiatives inevitably fail. Workers, driven by legitimate productivity goals, will always find workarounds. Instead, the practical question for Chicago executives becomes: How can we responsibly integrate generative AI into our workflows while maintaining stringent data security?</p><p>Drawing from my hands-on experience across sectors, I see four practical steps organizations can adopt immediately to guide responsible AI use:</p><p><strong>First, clearly defined guardrails are essential.</strong> Too often, employees unintentionally breach privacy standards or expose proprietary information simply because clear, company-wide AI guidelines don't exist. Chicago&#8217;s strength lies in its diverse talent pool&#8212;experts in technology, compliance, privacy, and law who are perfectly positioned to draft robust, practical AI policies collaboratively. Instead of a blanket ban, companies should consider introducing thorough, accessible guidelines. Chicago businesses could also build cross-functional teams to craft and swiftly implement transparent guardrails, balancing cautious oversight with ample room for innovation. 
It can be as simple as an AI usage policy detailing approved platforms, licensing arrangements (such as an enterprise ChatGPT license), and how company data may and may not be used, or as involved as a full rollout of certification under ISO/IEC 42001, the management-system standard for artificial intelligence.</p><p><strong>Second, cultivating "AI direction and evaluation skills" is critical.</strong> I've observed Chicago professionals trusting generative AI outputs unquestioningly, even in critical scenarios like strategic planning or sensitive internal communications. This blind reliance introduces unnecessary risk. Companies need to consider significant investments in training to help employees learn how to critically evaluate, refine, and validate AI-generated insights. Regular, structured "virtual training hours" can ensure hands-on practice under expert guidance&#8212;enabling employees to spot inaccuracies and harness AI more effectively. Chicago companies can quickly build what I like to call opportunities for "AI-enablement in context," which connects the dots between the daily workflows of high-value employees and the limits of what's possible with current AI tools. Such a model leads to skilled and critical AI users capable of enhancing their impact without compromising accuracy or confidentiality.</p><p><strong>Third, sanction a curated selection of trusted generative AI tools.</strong> The root issue I&#8217;ve seen isn't that employees are leveraging AI but rather that they're doing so using inappropriate platforms, posing unnecessary risks. Organizations must proactively identify and endorse secure AI solutions that meet enterprise-grade security standards, such as encryption, role-based access, and multi-factor authentication. Companies can create a curated catalog of approved solutions where staff can access pre-vetted, secure tools along with training resources. 
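A curated allowlist like the one just described can even be encoded as lightweight policy-as-code. The sketch below is purely hypothetical: the tool names and required controls are invented illustrations, not endorsements or an actual vendor assessment.

```python
# Hedged sketch: a curated allowlist of AI tools expressed as policy-as-code.
# Tool names and the control checklist are hypothetical examples.
APPROVED_TOOLS = {
    "chatgpt-enterprise": {"encryption": True, "sso_mfa": True, "data_retention_optout": True},
    "internal-rag-assistant": {"encryption": True, "sso_mfa": True, "data_retention_optout": True},
}

def is_tool_approved(name: str) -> bool:
    """A tool is approved only if it is on the allowlist AND meets every control."""
    controls = APPROVED_TOOLS.get(name.lower())
    return controls is not None and all(controls.values())

print(is_tool_approved("ChatGPT-Enterprise"))   # an allowlisted tool passes
print(is_tool_approved("random-free-chatbot"))  # unvetted tools are rejected
```

Keeping the allowlist in version control alongside the written policy gives IT and compliance teams one reviewable place to add or retire tools as vetting results change.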
While adopting licensed, secure AI tools may carry initial costs, research consistently finds that these investments pay dividends in productivity gains, risk mitigation, and employee satisfaction.</p><p><strong>Fourth, build a culture of transparency around AI usage.</strong> One of the biggest barriers to effective AI adoption isn&#8217;t technical&#8212;it&#8217;s cultural. Employees won&#8217;t share what tools they&#8217;re using or how they&#8217;re using them if they fear being penalized, viewed as expendable, or accused of stepping outside undefined policies. Similarly, many executives hesitate to disclose how they themselves are using AI to make decisions, evaluate performance, or develop KPIs. <em><strong>But silence creates silos, and silos stifle innovation.</strong></em></p><p>Executives must take the lead here. When leaders openly share their own experiences&#8212;what&#8217;s worked, what hasn&#8217;t, how AI has helped or surprised them&#8212;it sets the tone for an organization where curiosity is welcomed and learning is shared. Establishing forums for feedback, creating cross-functional AI roundtables, and embedding AI usage reflection into existing team rituals are all ways to normalize this dialogue. Transparency not only helps spot risks like shadow AI usage but also enables the cross-pollination of great prompts, novel use cases, and workflow enhancements that can raise the bar for the entire organization.</p><p>The key is alignment: Imagine employees at every level fluent in advanced use of cutting-edge AI solutions, driving productivity and efficiency gains while minimizing risk with responsible use of generative AI and traditional AI solutions. All while boards and executive leadership teams provide clear policies that fuel innovation while protecting data in partnership with IT teams. 
When the workforce and leadership move in unison on AI, the whole region stands to benefit.</p><p>Looking ahead, Chicago can secure long-term economic and technological leadership if it builds an ecosystem of trusted AI use, shared standards, and transparency. Achieving this will require a concerted effort across companies and industries, but the payoff is immense: a future where Chicago sets the pace in the AI-driven economy. For a city that famously <em>makes no little plans</em>, this is a significant opportunity. The city can emerge as a national leader in smart AI policy, workforce enablement, and innovation. In doing so, Chicago would turn a potential risk into a model of best practices for the country.</p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Bring Your Own Ai Balancing Innovation And Risk</div><div class="file-embed-details-h2">2.21MB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://insights.techne.ai/api/v1/file/9387f04e-9849-4fc0-8fe3-fb401cc7900d.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://insights.techne.ai/api/v1/file/9387f04e-9849-4fc0-8fe3-fb401cc7900d.pdf"><span class="file-embed-button-text">Download</span></a></div></div>]]></content:encoded></item><item><title><![CDATA[The Future of AI Governance Measurement]]></title><description><![CDATA[Enhancing KPIs for Effective Oversight]]></description><link>https://insights.techne.ai/p/ai-governance-kpis-metrics</link><guid isPermaLink="false">https://insights.techne.ai/p/ai-governance-kpis-metrics</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Tue, 11 Mar 2025 17:15:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/69299d44-5547-4370-b772-5a2259f37917_1972x1516.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>Effective operationalization of AI governance requires more than qualitative frameworks&#8212;it demands innovative, dynamic metrics that provide actionable insights. Drawing from recent research on strategic measurement systems, how can organizations develop AI governance KPIs that track compliance and drive strategic value and competitive differentiation?</p><p>Findings from the MIT SMR - BCG Artificial Intelligence and Business Strategy Global Executive Study and Research Project reiterate the need for an evolution in the KPIs businesses use to measure advanced AI technologies, their implementation, and, in particular, governance.</p><p>As the authors share, "Smart KPIs can do more than just unearth sources of value and differentiation that would otherwise go undetected or underexploited. They can also prevent the undesirable outcomes that can result from a failure to regularly reexamine assumptions underlying legacy KPIs. 
The 2008 global economic crisis, for example, was triggered in part by banks&#8217; dependence on a then widely used metric, value at risk, which measures potential portfolio losses in normal market conditions at a single point in time. Financial institutions did not adjust this measure as riskier subprime mortgages and credit default swaps became a larger part of their portfolios. <a href="https://sloanreview.mit.edu/projects/the-future-of-strategic-measurement-enhancing-kpis-with-ai/">Guided by a metric that severely underestimated potential losses &#8212; in some cases, by orders of magnitude &#8212; many financial institutions went bankrupt or suffered significant losses.</a>"</p></blockquote><h2>Developing Clear KPIs for AI Governance</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iFEW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iFEW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png 424w, https://substackcdn.com/image/fetch/$s_!iFEW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png 848w, https://substackcdn.com/image/fetch/$s_!iFEW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!iFEW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iFEW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png" width="1326" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1326,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;AI Governance KPIs Techne AI&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="AI Governance KPIs Techne AI" title="AI Governance KPIs Techne AI" srcset="https://substackcdn.com/image/fetch/$s_!iFEW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png 424w, https://substackcdn.com/image/fetch/$s_!iFEW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png 848w, https://substackcdn.com/image/fetch/$s_!iFEW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!iFEW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73c950da-2de6-4cc1-939b-3ca6c5e9c5ab_1326x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">MIT SMR - BCG Artificial Intelligence and Business Strategy Global Executive Study and Research Project</figcaption></figure></div><blockquote><p>The traditional approach to governance measurement often fails to deliver the intelligence and adaptability required in today's AI landscape. 
Following the research from MIT Sloan Management Review and BCG, we propose shifting governance KPIs from static benchmarks to dynamic predictors that anticipate risks, identify opportunities, and align with strategic objectives.</p><p>Effective <strong>operationalization of AI governance</strong> requires clear metrics to track progress and identify areas for improvement. While governance might seem inherently qualitative, leading organizations have developed sophisticated Key Performance Indicators (KPIs) that provide quantitative insights into governance effectiveness.</p><p>The foundation of governance measurement begins with <strong>inventory coverage</strong>&#8212;tracking <em><strong>what percentage of an organization's AI systems are known and governed</strong></em>. This baseline metric reveals whether the governance framework is comprehensive or if "shadow AI" exists outside formal oversight. A related metric examines policy compliance rates, measuring how consistently AI projects follow required governance steps before deployment. Both metrics should trend toward 100% as governance matures, though achieving perfect compliance often requires sustained effort.</p><p><strong>Data quality and lineage metrics</strong> form another important measurement category. Since responsible AI depends on good data, organizations track how completely the origin of data for each model is documented (data lineage completeness) and how well that data meets quality standards (accuracy, completeness, representativeness). Many governance programs set minimum thresholds for data quality scores that models must meet before approval.</p><p><strong>Bias and fairness metrics</strong> have become particularly crucial in recent years. Organizations define specific thresholds for acceptable differences in model outcomes across demographic groups, such as requiring disparate impact ratios above 0.8 for protected attributes. 
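The four-fifths threshold just mentioned can be made concrete with a short, hedged sketch; the function and sample data below are illustrative assumptions only, not drawn from the MIT SMR/BCG research.

```python
# Hedged sketch: a minimal disparate-impact check of the kind described above.
# The 0.8 cutoff follows the "four-fifths rule"; the group labels and
# outcomes below are made-up illustration data.

def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Illustration: 1 = favorable model outcome, 0 = unfavorable.
outcomes = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, protected="A", reference="B")
print(f"disparate impact ratio: {ratio:.2f}")
print("within threshold" if ratio >= 0.8 else "review required")
```

In practice such a check would run continuously against production scoring logs, with any ratio below the governance threshold opening a review ticket rather than merely printing a warning.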
They track the initial bias measurements during development and how these metrics trend over time in production. A well-governed AI system should maintain or improve fairness metrics throughout its lifecycle.</p><p><strong>Model performance and stability</strong> represent both technical and governance concerns. While traditional accuracy metrics remain important, governance frameworks also monitor drift rates&#8212;how frequently models deviate from expected performance parameters&#8212;and the frequency of required updates due to drift. Some organizations even track "technical debt" in their AI models, quantifying how much rework or complexity accumulates over time and might require addressing.</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TXGH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TXGH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png 424w, https://substackcdn.com/image/fetch/$s_!TXGH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png 848w, https://substackcdn.com/image/fetch/$s_!TXGH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TXGH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TXGH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png" width="1300" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1300,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Khullani Abdullahi and Techne AI, AI Governance Metrics and KPIs\n&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Khullani Abdullahi and Techne AI, AI Governance Metrics and KPIs
" title="Khullani Abdullahi and Techne AI, AI Governance Metrics and KPIs
" srcset="https://substackcdn.com/image/fetch/$s_!TXGH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png 424w, https://substackcdn.com/image/fetch/$s_!TXGH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png 848w, https://substackcdn.com/image/fetch/$s_!TXGH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!TXGH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a94e904-855c-4fa0-96ea-df09fa465ace_1300x1000.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Clear KPIs for AI Governance, Techne AI Copyright 2025</figcaption></figure></div><blockquote><p><strong>Incident metrics provide </strong>the most direct insight into governance effectiveness. These include the number and severity of AI-related incidents, mean time to resolution, and trends in incident occurrence. As governance matures, the expectation is that severe incidents should decrease, though increased reporting of minor issues might initially suggest the opposite as awareness improves.</p><p><strong>Accountability and training metrics </strong>track the human elements of governance. Organizations measure the percentage of relevant staff who have completed AI governance training, the frequency of governance council meetings, and the level of stakeholder participation. Some even track governance responsiveness&#8212;how quickly the governance body reviews and approves AI projects&#8212;seeking a balance between thorough oversight and operational efficiency.</p><p><strong>Value and ROI metrics connect governance to business outcomes</strong>. These might include the number of AI projects successfully deployed versus those halted due to governance concerns or the financial value generated by AI systems compared to losses or fines from AI failures. Positive trends in these metrics indicate that governance enables valuable AI while preventing harmful applications.</p><p>A more integrated measurement approach can transform AI governance from a nebulous concept into a quantifiable discipline. 
By tracking these metrics in dashboards accessible to technical teams and executives, organizations create accountability for governance outcomes at all levels. Regular review sessions examine metric trends and identify improvement opportunities.</p></blockquote>]]></content:encoded></item><item><title><![CDATA[25 Power Prompts]]></title><description><![CDATA[Unleashing the Full Potential of Advanced Reasoning Models]]></description><link>https://insights.techne.ai/p/25-power-prompts</link><guid isPermaLink="false">https://insights.techne.ai/p/25-power-prompts</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Fri, 07 Mar 2025 23:12:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/02a25210-3d0f-4be5-b6d7-c2a9b8fac9d6_800x800.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Large Language Models (LLMs) with advanced reasoning abilities&#8212;such as Gemini, Deepseek Coder, and ChatGPT&#8212;offer unparalleled potential to enhance human intelligence and streamline workflows. 
Yet the true power of these models hinges on the prompts you create.</p><h3>Why Prompt Engineering Matters</h3><p>The way you interact with an AI model dramatically influences its effectiveness. Using well-crafted prompts, users can shift from simple Q&amp;A interactions to deep analytical tasks involving synthesis, evaluation, strategic planning, and creative problem-solving. Each prompt provided comes with:</p><ul><li><p><strong>Purpose Description:</strong> Clearly explains how and when to apply the prompt effectively.</p></li><li><p><strong>Real-World Examples:</strong> Demonstrates prompt application through concrete scenarios and example documents.</p></li><li><p><strong>Time-Saving Metrics:</strong> Offers conservative estimates of productivity gains, justifying the learning curve and implementation effort.</p></li></ul><h3>Categories of Prompts</h3><p>The prompts are systematically organized into six actionable categories:</p><h3>Analysis &amp; Decision-Making</h3><p>Ideal for strategic decisions, these prompts help assess situations, compare options, and generate structured recommendations supported by relevant data and analyses. Standout examples include prompts for Situation Analysis, Options Evaluation, and Structured Recommendations, saving professionals 3-5 hours weekly.</p><h3>Content Creation &amp; Enhancement</h3><p>Perfect for improving the efficiency of content workflows. These prompts guide users in generating optimized written materials, from drafting social media posts and emails to creating comprehensive executive summaries. Users can anticipate 2-3 hours saved per week in content tasks.</p><h3>Data Processing &amp; Optimization</h3><p>Focused on making data manageable, these prompts simplify complex analyses like comparative product analyses and trend identifications. 
Leveraging these prompts significantly streamlines tasks such as competitive analysis and procurement decisions, reducing time spent by 2-4 hours per week.</p><h3>Data-Driven Decision Making &amp; Risk Management</h3><p>Prompts like Comparative Product Analysis and Risk Assessment &amp; Mitigation provide structured frameworks for detailed evaluations, significantly streamlining the analytical process. Businesses can leverage these to conduct thorough comparisons and risk analyses, saving 2-4 hours weekly.</p><h3>Root Cause Analysis &amp; Trend Identification</h3><p>These prompts facilitate efficient root cause analyses and identification of key business trends through clear step-by-step instructions. Practical application examples, like analyzing drops in website traffic, demonstrate how users can quickly pinpoint issues and actionable insights, reclaiming 3-4 hours weekly.</p><h3>Planning &amp; Organization</h3><p>Enhanced tools for project planning, meeting preparation, and productivity management. The prompts detail everything from creating detailed project plans to crafting actionable meeting agendas, saving professionals significant time each week.</p><h3>Leveraging These Prompts</h3><p>By systematically adopting these prompts, businesses can profoundly enhance their decision-making capabilities, content quality, and organizational efficiency. Users are encouraged to adapt and refine these tools to their specific contexts, experiencing firsthand the transformative capabilities of meticulous prompt engineering.</p><p>Download the PDF for a cheatsheet of 25 meticulously crafted prompts designed to maximize the capabilities of reasoning models. 
Beyond simple question-answering, these prompts tackle tasks that demand analysis, synthesis, planning, and innovative problem-solving.</p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7fda79c-3ac8-447a-bdf7-f201c9d407c6_790x1074.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">25 Power Prompts</div><div class="file-embed-details-h2">1.15MB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://insights.techne.ai/api/v1/file/b36625db-329d-41ee-89f1-b14d907302e8.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">25 Power Prompts: Unleashing the Full Potential of Advanced Reasoning Models</div><a class="file-embed-button narrow" href="https://insights.techne.ai/api/v1/file/b36625db-329d-41ee-89f1-b14d907302e8.pdf"><span class="file-embed-button-text">Download</span></a></div></div>]]></content:encoded></item><item><title><![CDATA[Leveraging ChatGPT for Creating Investor Presentations: From Content to Slides]]></title><description><![CDATA[Use Case: Investor Presentation, PPT, & Charts]]></description><link>https://insights.techne.ai/p/leveraging-chatgpt-for-creating-investor</link><guid isPermaLink="false">https://insights.techne.ai/p/leveraging-chatgpt-for-creating-investor</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Fri, 07 Mar 2025 22:05:03 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/158614858/51c46a21dc0292db878d683e3f1900c0.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Creating an impactful investor presentation is crucial for securing funding, and artificial intelligence tools like ChatGPT can significantly streamline this process, including creating PowerPoint slides. This article explores how entrepreneurs and business leaders can utilize ChatGPT to develop professional, persuasive investor presentations that effectively communicate their vision and value proposition.</p><h2>Direct PowerPoint Creation Capabilities</h2><p>One of ChatGPT's most powerful features for entrepreneurs is its ability to generate complete PowerPoint presentations. Through GPT-4's advanced capabilities and plugins, users can prompt ChatGPT to create entire slide decks with adequately formatted content. 
This functionality allows entrepreneurs to:</p><ul><li><p>Generate complete presentation structures with appropriate slide sequences</p></li><li><p>Create individual slides with properly formatted headings, bullet points, and text</p></li><li><p>Receive recommendations for slide layouts that enhance visual impact</p></li><li><p>Develop speaker notes to accompany each slide</p></li><li><p>Design custom templates that align with company branding</p></li></ul><p>You can receive a complete slide deck structure that follows investor presentation best practices by providing ChatGPT with your company information, business model, and presentation goals.</p><h2>Strategic Content Development</h2><p>Beyond creating the actual slides, ChatGPT excels at developing the strategic content that forms the backbone of a compelling investor presentation. The AI can help determine what information to include and how to structure it for maximum impact.</p><p>For example, an effective investor deck typically includes sections on the problem statement, solution, market opportunity, business model, competitive landscape, traction metrics, team credentials, and funding requirements. ChatGPT can help refine each section by suggesting impactful phrasings and ensuring your narrative flows cohesively.</p><h2>Financial Data Visualization Guidance</h2><p>Investors scrutinize financial projections and data closely. ChatGPT can provide guidance on effectively visualizing financial information through charts and graphs. </p><ul><li><p>Which type of chart best represents different financial metrics</p></li><li><p>How to structure financial slides for clarity and impact</p></li><li><p>Ways to present complex financial information in digestible formats</p></li><li><p>Methods for highlighting key financial indicators that will interest investors</p></li></ul><h2>Crafting Compelling Narratives</h2><p>Beyond data and projections, investors connect with stories. 
ChatGPT excels at transforming business concepts into compelling narratives that resonate with investors logically and emotionally. The AI can help craft a story arc that highlights your company's origin, mission, and vision for the future, making your presentation more engaging and memorable.</p><h2>Anticipating Investor Questions</h2><p>ChatGPT can help entrepreneurs prepare for the Q&amp;A portion of their presentations by generating a comprehensive list of potential questions investors might ask. This allows you to develop thoughtful, data-backed responses in advance, demonstrating thorough preparation and industry knowledge.</p><h2>Customizing for Different Investor Audiences</h2><p>Different types of investors have varying priorities and expectations. ChatGPT can help tailor your presentation for specific investor types by emphasizing elements most relevant to each audience. For instance, VCs typically focus on scalability and market size, while strategic investors may be more interested in synergies with their existing business.</p><h2>Design Optimization</h2><p>ChatGPT can also provide specific recommendations for visual presentation, including:</p><ul><li><p>Optimal text-to-visual ratios for maximum impact</p></li><li><p>Color scheme suggestions that enhance readability and brand consistency</p></li><li><p>Typography recommendations for improved legibility</p></li><li><p>Layout suggestions that guide the viewer's eye to key information</p></li><li><p>An initial draft of the presentation, with the charts and text you co-generated, downloadable in PPTX format</p></li></ul><p>ChatGPT is a powerful resource for entrepreneurs and teams developing investor presentations. It offers assistance from content development to slide creation. 
By leveraging this AI assistant throughout the presentation development process, founders can create more compelling, clear, and professional pitches in significantly less time than traditional methods.</p><p>The most effective approach combines ChatGPT's capabilities with the entrepreneur's authentic vision and industry expertise, resulting in presentations that effectively communicate value and potential to prospective investors. Use ChatGPT to perfect your materials, but ensure your passion, expertise, and commitment shine through in the final product.</p>]]></content:encoded></item><item><title><![CDATA[The 7 Stages of AI Governance Maturity ]]></title><description><![CDATA[A Roadmap for Organizations]]></description><link>https://insights.techne.ai/p/the-7-stages-of-ai-governance-maturity</link><guid isPermaLink="false">https://insights.techne.ai/p/the-7-stages-of-ai-governance-maturity</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Tue, 04 Mar 2025 14:44:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fDzG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fDzG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fDzG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png 424w, 
https://substackcdn.com/image/fetch/$s_!fDzG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png 848w, https://substackcdn.com/image/fetch/$s_!fDzG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png 1272w, https://substackcdn.com/image/fetch/$s_!fDzG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fDzG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png" width="1279" height="720" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1279,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The 7 Stages of AI Governance Maturity: A Roadmap for Organizations by Techne AI&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The 7 Stages of AI Governance Maturity: A Roadmap for Organizations by Techne AI" title="The 7 Stages of AI Governance Maturity: A Roadmap for Organizations by Techne AI" 
srcset="https://substackcdn.com/image/fetch/$s_!fDzG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png 424w, https://substackcdn.com/image/fetch/$s_!fDzG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png 848w, https://substackcdn.com/image/fetch/$s_!fDzG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png 1272w, https://substackcdn.com/image/fetch/$s_!fDzG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9700e3c-f002-4464-b702-b8153253c9b7_1279x720.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The 7 Stages of AI Governance Maturity: A Roadmap for Organizations by Techne AI</figcaption></figure></div><p>As artificial intelligence transforms business operations across industries, organizations face the critical challenge of implementing effective AI governance. AI systems can lead to ethical breaches, compliance violations, and reputational damage without proper oversight. But where should you begin, and how do you know if your governance efforts are enough?</p><blockquote><p>This maturity model provides a clear roadmap for organizations at any stage of their AI governance journey. Whether you're just starting to consider AI risks or leading your industry in responsible AI practices, understanding your current maturity level enables targeted improvements and strategic planning. By progressing through these seven stages, organizations can not only mitigate risks but ultimately transform AI governance from a compliance burden into a competitive advantage.</p><p>This maturity model provides a clear roadmap for organizations at any stage of their AI governance journey. Whether you're just starting to consider AI risks or leading your industry in responsible AI practices, understanding your current maturity level enables targeted improvements and strategic planning. 
By progressing through these seven stages, organizations can mitigate risks and ultimately transform AI governance from a compliance burden into a competitive advantage.</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Q4JN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Q4JN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png 424w, https://substackcdn.com/image/fetch/$s_!Q4JN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png 848w, https://substackcdn.com/image/fetch/$s_!Q4JN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png 1272w, https://substackcdn.com/image/fetch/$s_!Q4JN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Q4JN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The 7 Stages of AI Governance Maturity: A Roadmap for Organizations by Techne AI and Khullani Abdullahi&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The 7 Stages of AI Governance Maturity: A Roadmap for Organizations by Techne AI and Khullani Abdullahi" title="The 7 Stages of AI Governance Maturity: A Roadmap for Organizations by Techne AI and Khullani Abdullahi" srcset="https://substackcdn.com/image/fetch/$s_!Q4JN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png 424w, https://substackcdn.com/image/fetch/$s_!Q4JN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png 848w, https://substackcdn.com/image/fetch/$s_!Q4JN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png 1272w, https://substackcdn.com/image/fetch/$s_!Q4JN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d39522b-1e7c-4246-9af9-f187725eb48e_1488x811.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button 
tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The 7 Stages of AI Governance Maturity: A Roadmap for Organizations by Techne AI</figcaption></figure></div><h3>Stage 1: Ad Hoc &amp; Chaotic</h3><blockquote><p>The organization has <strong>no formal AI governance</strong>. AI projects are done in silos with little oversight. Decisions about AI ethics or risk are left to individual developers or teams, often inconsistently. There might not even be awareness of AI-specific risks at leadership levels. <em>Assessment:</em> No dedicated AI policies or roles exist. <em>Challenge:</em> Lack of awareness and coordination leads to potential ethical breaches or compliance violations going unnoticed. 
<em>Best Practice:</em> Begin raising awareness &#8211; for example, conduct a basic AI risk workshop or designate someone to inventory AI projects.</p></blockquote><h3>Stage 2: Aware (Initial Awareness &amp; Planning)</h3><blockquote><p>The organization has recognized the need for AI governance and is in <strong>planning mode</strong>. Some discussions or working groups form to address AI ethics or compliance. Policies are rudimentary or in draft. There may be a champion (like a concerned manager or an innovation officer) pushing for governance. <em>Assessment:</em> Existence of an initial AI governance framework document or the formation of an AI ethics committee, even if it has no power yet. <em>Challenge:</em> Moving from talk to action &#8211; people agree it&#8217;s important but may not know how to implement changes or fear stifling innovation. <em>Best Practice:</em> Develop a roadmap for governance (e.g. plan to publish an AI ethics policy, assign roles, pilot some governance procedures in one project).</p></blockquote><h3>Stage 3: Fragmented (Basic Policies, Inconsistent Adoption)</h3><blockquote><p>At this stage, <strong>basic AI governance policies or guidelines have been defined</strong> (e.g. a Responsible AI guideline), but adoption is spotty. Some teams comply, others don&#8217;t. There might be a few processes like AI project review for obvious issues, but not enterprise-wide. <em>Assessment:</em> Policies exist on paper; perhaps training has been offered. Some projects have been governed (maybe high-profile ones), but many smaller ones slip through unmanaged. <em>Challenge:</em> Enforcement and coverage &#8211; the governance is not ingrained in culture yet. People may view it as a box-ticking exercise. <em>Best Practice:</em> Start enforcing policies by integrating them into project lifecycle (e.g. require a compliance sign-off before deployment). 
Also, communicate success stories where governance averted a problem to show value.</p></blockquote><h3>Stage 4: Defined &amp; Implemented</h3><blockquote><p>At this stage, <strong>formal AI governance structures are in place and functioning</strong>. There is likely a central committee or officer for AI oversight. Policies have been refined and clearly communicated. Most AI projects go through required steps (like risk assessment, bias testing, approval gates). AI governance is part of the organization&#8217;s standard operating procedures, similar to quality or security management. <em>Assessment:</em> A high percentage of AI initiatives follow the governance process, and governance artifacts (risk assessments, model cards) exist for each project. The organization may be pursuing external certification (e.g. ISO 42001 compliance). <em>Challenge:</em> Scaling the process without slowing down innovation too much &#8211; ensuring that governance keeps up with the number of projects and doesn&#8217;t become a bottleneck. <em>Best Practice:</em> Use tools and templates to streamline governance (checklists, assessment questionnaires). Also, secure leadership support by reviewing AI governance periodically at the executive level, so everyone knows it&#8217;s taken seriously.</p></blockquote><h3>Stage 5: Managed &amp; Measured</h3><blockquote><p>The organization now <strong>measures its AI governance performance</strong> and continuously improves it. Governance has dedicated resources (e.g. a Responsible AI team). Metrics might include the number of projects reviewed, incidents detected, and training completion rates. AI governance is integrated with enterprise risk management &#8211; AI risks are on the corporate risk register and monitored. <em>Assessment:</em> Quantitative metrics and audits show compliance rates and risk levels. The organization can demonstrate, for example, that 100% of high-risk AI systems underwent a bias audit and that the incident rate of AI issues is decreasing.
<em>Challenge:</em> Avoiding complacency &#8211; at this stage it&#8217;s easy to think &#8220;we have it under control,&#8221; while the AI landscape evolves. Also managing complexity as AI governance now intersects with data governance, model risk management, etc., requiring coordination. <em>Best Practice:</em> Establish feedback loops &#8211; after each project or audit, have a retrospective to update governance practices. Benchmark against peers or standards (maybe participate in an industry consortium to compare notes). Possibly adopt advanced software solutions for AI governance to manage documentation and approvals (Governance, Risk, Compliance tools adapted for AI).</p></blockquote><h3>Stage 6: Integrated &amp; Optimized</h3><blockquote><p>AI governance is <strong>fully integrated into all business processes and culture</strong>. It&#8217;s not a separate or burdensome thing &#8211; it is how the company does AI. Employees are proactive in raising issues; governance considerations are part of innovation discussions from the start (ethics by design). The organization might be <strong>certified or externally validated</strong> for its governance (e.g. ISO 42001 certified, or rated highly in ESG evaluations for AI ethics). <em>Assessment:</em> AI governance considerations appear in strategic planning, product development, and Board oversight regularly. External audits find minimal issues and praise internal processes. <em>Challenge:</em> At this maturity, challenges include staying adaptive (the external world may impose new requirements) and ensuring that the governance model itself innovates (for example, incorporating new tools like AI explainability improvements or addressing novel AI tech like generative models quickly). <em>Best Practice:</em> Regularly review the governance framework against new advancements and update it. 
Engage with external stakeholders &#8211; for example, publish your governance approach publicly and invite feedback, or contribute to industry standards. This keeps the program fresh and maintains external trust.</p></blockquote><h3>Stage 7: Transformative &amp; Industry Leader</h3><blockquote><p>The organization&#8217;s world-class AI governance <strong>gives it a competitive and reputational edge</strong>. It not only manages risks but <strong>innovates through governance</strong>. The company might help shape regulations and standards because it&#8217;s ahead of the curve. AI governance at this level can enable things like entering new markets quickly because regulators trust the company&#8217;s processes. The company might release responsible AI tools or frameworks for others (thought leadership). <em>Assessment:</em> The organization is cited as a model for Responsible AI in its industry, with zero major incidents in recent history and strong trust from customers and regulators. Possibly, the company sits on regulatory advisory boards or standards bodies. <em>Challenge:</em> Continuous leadership &#8211; maintaining this position requires effort and resources. Also, sharing practices might diminish competitive advantage, but leading firms realize that raising the industry standard is beneficial overall. <em>Best Practice:</em> Embrace transparency &#8211; publish AI ethics reports and open-source specific tools (e.g. for fairness and explainability). Mentor other organizations or subsidiaries in adopting similar governance. At this stage, governance is seen not as a cost but as an enabler of bold AI-driven strategies, because stakeholders&#8217; trust lowers barriers.</p></blockquote><h3>Key Takeaways</h3><ul><li><p><strong>Know Your Starting Point</strong>: Assess your organization honestly against these seven stages.
Most organizations begin at Stage 1 or 2; acknowledging your current position is the first step toward improvement.</p></li><li><p><strong>Incremental Progress is Valuable</strong>: Moving from one stage to the next brings significant benefits. Don't try to jump from Stage 1 to Stage 6 overnight &#8211; focus on steady progress.</p></li><li><p><strong>Leadership Support is Critical</strong>: AI governance requires executive buy-in at every stage. Without leadership support, policies remain unenforced and resources inadequate.</p></li><li><p><strong>Balance Governance and Innovation</strong>: Effective AI governance doesn't stifle innovation &#8211; it enables responsible innovation by building trust with stakeholders and preventing costly mistakes.</p></li><li><p><strong>Governance is a Journey, Not a Destination</strong>: The AI landscape continues to evolve rapidly with new technologies, regulations, and ethical considerations. Even organizations at Stage 7 must continuously adapt their governance approaches.</p></li><li><p><strong>Culture Matters as Much as Process</strong>: As you progress through the stages, focus not just on policies and procedures but on building a culture where responsible AI is everyone's concern.</p></li><li><p><strong>Measure and Demonstrate Value</strong>: Quantify the benefits of your governance program &#8211; incidents prevented, compliance achieved, trust built &#8211; to maintain momentum and support.</p></li></ul><blockquote><p>Organizations can use this AI Governance maturity model to pinpoint their current status (e.g., maybe Stage 3 if they have some policies but inconsistent application) and plan targeted improvements to progress (e.g., to Stage 4 by formalizing a governance committee and mandating processes across all projects).</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!Q3SZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png 424w, https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png 848w, https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png" width="1183" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1183,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Achieving Effective AI 
Governance&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Achieving Effective AI Governance" title="Achieving Effective AI Governance" srcset="https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png 424w, https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png 848w, https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!Q3SZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e6d34-e94e-463f-9151-b78674dd35f7_1183x1000.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 
4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Achieving Effective AI Governance</figcaption></figure></div>]]></content:encoded></item><item><title><![CDATA[A Guide to AI Governance]]></title><description><![CDATA[Defining Governance, Safety, Trust, Responsibility, and Risk]]></description><link>https://insights.techne.ai/p/a-guide-to-ai-governance</link><guid isPermaLink="false">https://insights.techne.ai/p/a-guide-to-ai-governance</guid><dc:creator><![CDATA[Khullani M. 
Abdullahi]]></dc:creator><pubDate>Thu, 27 Feb 2025 22:48:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2rzl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2rzl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2rzl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png 424w, https://substackcdn.com/image/fetch/$s_!2rzl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png 848w, https://substackcdn.com/image/fetch/$s_!2rzl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png 1272w, https://substackcdn.com/image/fetch/$s_!2rzl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2rzl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png" width="746" height="419" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1cba4915-642c-421a-817d-57285d4f40d1_746x419.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:419,&quot;width&quot;:746,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!2rzl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png 424w, https://substackcdn.com/image/fetch/$s_!2rzl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png 848w, https://substackcdn.com/image/fetch/$s_!2rzl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png 1272w, https://substackcdn.com/image/fetch/$s_!2rzl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cba4915-642c-421a-817d-57285d4f40d1_746x419.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p>As artificial intelligence transforms industries and reshapes our world, organizations face mounting pressure to deploy AI systems that are not only powerful but also ethical, safe, and aligned with human values. Yet the terminology around AI ethics can be confusing, with overlapping concepts often used interchangeably but with distinct meanings and implications.</p><p>In my conversations with business leaders, I've noticed considerable confusion about what constitutes AI governance versus safety, how trustworthy AI differs from responsible AI, and where risk management fits. 
This lack of clarity isn't just a semantic issue&#8212;it creates real challenges in implementing effective AI oversight frameworks and meeting emerging regulatory requirements.</p><p>In this article, I want to clarify and demystify these concepts, offering clear definitions and practical insights into how they interconnect.</p></blockquote><h3>AI Governance: The Framework for Ethical AI</h3><blockquote><p><a href="https://www.ibm.com/think/topics/ai-governance">AI governance</a> encompasses the policies, oversight structures, and processes that ensure AI systems are developed and deployed responsibly and in compliance with laws and ethical principles. Think of governance as the overarching framework that enables and enforces all other aspects of ethical AI.</p><p>Effective AI governance includes:</p></blockquote><ul><li><p><strong>Strategic oversight</strong>: Establishing committees or boards with direct accountability for AI initiatives and their outcomes</p></li><li><p><strong>Policy development</strong>: Creating clear guidelines for AI development, testing, deployment, and monitoring</p></li><li><p><strong>Compliance mechanisms</strong>: Ensuring adherence to relevant regulations and industry standards</p></li><li><p><strong>Documentation standards</strong>: Maintaining records of AI system designs, training data, performance metrics, and risk assessments</p></li><li><p><strong>Audit procedures</strong>: Regularly reviewing AI systems to verify they operate as intended and within ethical boundaries</p></li></ul><blockquote><p>Many organizations are now creating dedicated AI governance committees with representation from technical teams, legal, compliance, ethics specialists, and business leaders. 
These committees typically work to align AI initiatives with the organization's values and risk tolerance while ensuring regulatory compliance.</p><p>For example, a healthcare organization might establish governance structures that require all AI systems affecting patient care to undergo rigorous review processes, including clinical validation, bias testing, and privacy impact assessments before approval. The governance framework would define who has decision-making authority, what documentation is required, and how frequently systems need to be reassessed.</p><p>As AI regulation evolves globally&#8212;from the EU AI Act to industry-specific guidelines&#8212;governance frameworks provide the structure to adapt to these requirements systematically rather than reactively.</p><p>(<em><a href="https://www.ibm.com/think/topics/ai-governance">IBM on AI Governance)</a></em></p></blockquote><h3>AI Safety: Ensuring Systems Operate Reliably as Intended</h3><blockquote><p>AI safety focuses on preventing harm or unintended consequences from AI systems, ensuring they operate reliably and as intended, especially in high-stakes environments. 
While governance provides the framework, safety encompasses the technical practices that make AI trustworthy.</p><p>AI safety has both broad and narrow interpretations:</p></blockquote><ul><li><p>In the broad sense, it covers all measures to make AI beneficial and non-harmful</p></li><li><p>In the narrower "technical AI safety" sense, it focuses on the mathematical and engineering approaches to ensuring AI systems behave predictably and safely</p></li></ul><blockquote><p>Key aspects of technical AI safety include:</p></blockquote><ul><li><p><strong>Robustness</strong>: Ensuring AI systems perform consistently even with unexpected inputs or in adversarial conditions</p></li><li><p><strong>Interpretability</strong>: Making AI decision-making transparent and understandable to human overseers</p></li><li><p><strong>Alignment</strong>: Designing systems whose objectives and behaviors align with human values and intentions</p></li></ul><blockquote><p>Safety practices are particularly critical in high-consequence applications. For instance, autonomous vehicle systems incorporate redundant safety measures, extensive testing across diverse scenarios, and fail-safe mechanisms prioritizing human safety above all other objectives.</p><p>In domains like healthcare, AI safety might involve rigorous validation against diverse patient populations, monitoring for performance drift as patient demographics change, and human-in-the-loop verification for high-risk decisions.</p><p>As AI systems grow more powerful, the safety challenges evolve. 
Current research focuses on ensuring that advanced AI systems remain reliably aligned with human intent even as they become more sophisticated and capable of reasoning about their operation.</p><p>(<em><a href="https://www.ibm.com/think/topics/ai-safety">IBM on AI Safety</a>)</em></p></blockquote><h3>Trustworthy AI: Worthy of Stakeholder Confidence</h3><blockquote><p>Trustworthy AI is not a single technique but an outcome&#8212;AI systems that merit stakeholder trust by operating transparently, fairly, reliably, and accountably. While safety focuses on preventing harm, trustworthiness encompasses the broader qualities that make AI systems deserving of user confidence.</p><p>Components of trustworthy AI include:</p></blockquote><ul><li><p><strong>Explainability</strong>: Providing clear information on how AI makes decisions, particularly when those decisions affect people's lives or livelihoods</p></li><li><p><strong>Fairness</strong>: Mitigating biases to ensure equitable treatment across demographic groups</p></li><li><p><strong>Reliability</strong>: Performing consistently across circumstances and over time</p></li><li><p><strong>Accountability</strong>: Clearly defining responsibility for AI outcomes and providing redress mechanisms</p></li><li><p><strong>Human-centricity</strong>: Respecting human autonomy and enhancing human capabilities rather than diminishing human agency</p></li></ul><blockquote><p>The White House's AI Bill of Rights illustrates key principles for trustworthy AI, including protections against algorithmic discrimination, safeguards for data privacy, and notice/explanation requirements to ensure that AI systems uphold civil rights and liberties.</p><p>In practical terms, building trustworthy AI often involves:</p></blockquote><ul><li><p>Diverse and representative training data</p></li><li><p>Regular bias audits and remediations</p></li><li><p>Clear communications about AI capabilities and limitations</p></li><li><p>Maintaining human oversight, 
especially for consequential decisions</p></li><li><p>Establishing appeals processes for affected individuals</p></li></ul><blockquote><p>For example, a financial institution deploying an AI-based loan approval system would need to ensure the system produces explainable decisions, doesn&#8217;t perpetuate historical biases, maintains stable performance across different applicant groups, and allows for human review of edge cases&#8212;all while complying with fair lending regulations.</p><p>Trustworthy AI becomes increasingly important as AI systems are deployed in sensitive contexts. When systems make or influence decisions about healthcare, employment, or legal outcomes, stakeholder trust is not merely desirable but essential for adoption and legitimacy.</p><p>(<a href="https://blogs.nvidia.com/blog/what-is-trustworthy-ai/">Trustworthy AI by Nvidia</a>)</p></blockquote><h3>Responsible AI: The Ethical Approach to Development and Deployment</h3><blockquote><p>Responsible AI represents a broad commitment to the ethical development and use of AI technologies. 
While trustworthy AI describes the systems' qualities, responsible AI encompasses the organizational culture, practices, and values that guide AI creation and deployment.</p><p>Responsible AI typically involves:</p></blockquote><ul><li><p><strong>Ethical principles</strong>: Clear articulation of values that guide AI development and use</p></li><li><p><strong>Inclusive design</strong>: Involving diverse stakeholders in AI system design and evaluation</p></li><li><p><strong>Impact assessment</strong>: Evaluating the potential consequences of AI deployments on various stakeholders</p></li><li><p><strong>Training and awareness</strong>: Educating teams about ethical considerations in AI</p></li><li><p><strong>Continuous improvement</strong>: Regularly reassessing and refining approaches based on outcomes and stakeholder feedback</p></li></ul><blockquote><p>Many organizations have published responsible AI principles, but implementation requires converting high-level values into concrete practices. This means developing comprehensive governance structures, detailed policies, and targeted training programs that institutionalize ethical considerations throughout the AI lifecycle.</p><p>For example, Microsoft's Responsible AI Standard outlines requirements across six dimensions: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles are operationalized through assessment tools, review processes, and dedicated roles responsible for ethical oversight.</p><p>The maturity of responsible AI practices varies widely across organizations. Some are beginning to articulate principles, while others have developed sophisticated frameworks with clear metrics for measuring compliance and effectiveness.</p><p>Responsible AI practices must evolve as AI capabilities advance to address new challenges. 
Issues such as synthetic media (deepfakes), increasingly human-like AI interactions, and autonomous systems require ongoing ethical consideration and the adaptation of responsible AI frameworks.</p><p><a href="https://www.mckinsey.com/alumni/news-and-events/global-news/firm-news/responsible-ai-confronting-risks-guiding-application">(McKinsey on Responsible AI)</a></p></blockquote><h3>AI Risk Management: Proactively Addressing Potential Harms</h3><blockquote><p>AI risk management involves systematically identifying, assessing, and mitigating the various risks AI systems pose. While the previous concepts focus on building positive qualities, risk management targets potential adverse outcomes.</p><p>AI risks span multiple dimensions:</p></blockquote><ul><li><p><strong>Technical risks</strong>: Algorithm errors, security vulnerabilities, unpredictable behaviors</p></li><li><p><strong>Ethical risks</strong>: Bias, fairness issues, privacy violations</p></li><li><p><strong>Legal and compliance risks</strong>: Regulatory violations, liability concerns</p></li><li><p><strong>Reputational risks</strong>: Public backlash, loss of stakeholder trust</p></li><li><p><strong>Operational risks</strong>: System failures, performance degradation, misalignment with business objectives</p></li></ul><blockquote><p>Formal frameworks like the NIST AI Risk Management Framework provide structured approaches to managing these risks throughout the AI lifecycle. 
The NIST framework, for instance, encompasses four core functions:</p></blockquote><ol><li><p><strong>Map</strong>: Establish context for AI risks</p></li><li><p><strong>Measure</strong>: Assess and analyze risks</p></li><li><p><strong>Manage</strong>: Implement risk mitigation measures</p></li><li><p><strong>Govern</strong>: Oversee and coordinate risk management activities</p></li></ol><blockquote><p>In practice, AI risk management might include conducting bias and privacy impact assessments before deployment, monitoring AI outcomes for anomalies or unexpected behaviors, and developing incident response plans for potential AI failures.</p><p>The new ISO/IEC 23894 standard similarly provides guidance on managing AI-related risks, helping organizations identify, assess, and mitigate them. These frameworks emphasize continuous risk assessment rather than point-in-time evaluations.</p><p>For example, a retail company deploying an AI recommendation system might assess risks including algorithmic bias that could alienate customer segments, privacy concerns around data collection and use, and technical risks such as recommendation quality degradation when product catalogs change significantly.</p><p>As AI systems become more complex and autonomous, risk management approaches must evolve to address emergent behaviors and long-term impacts that may not be immediately apparent during development.</p><p>(<a href="https://www.lakera.ai/blog/ai-risk-management">Lakera on AI Risk Management</a>)</p></blockquote><h3>How These Concepts Interconnect</h3><blockquote><p>These five concepts&#8212;governance, safety, trustworthiness, responsibility, and risk management&#8212;form an integrated approach to ethical AI. 
Rather than standalone initiatives, they represent complementary facets of a comprehensive AI ethics framework:</p></blockquote><ul><li><p><strong>AI governance</strong> provides the structural backbone&#8212;the policies, roles, and processes that enable all other aspects</p></li><li><p><strong>AI safety</strong> focuses on the technical robustness and reliability of systems</p></li><li><p><strong>Trustworthy AI</strong> encompasses the qualities that make AI systems worthy of stakeholder confidence</p></li><li><p><strong>Responsible AI</strong> represents the organizational values and practices guiding ethical development</p></li><li><p><strong>AI risk management</strong> proactively identifies and addresses potential harms</p></li></ul><blockquote><p>In practice, these elements work together throughout the AI lifecycle. Governance structures ensure safety practices, risk management identifies potential trustworthiness issues, and responsible AI principles guide governance priorities.</p><p>For instance, when developing a new AI system, an organization might:</p></blockquote><ol><li><p>Apply governance processes to determine required approvals and assessments</p></li><li><p>Implement safety practices during development and testing</p></li><li><p>Evaluate the system against trustworthiness criteria</p></li><li><p>Ensure responsible AI principles guide design choices</p></li><li><p>Conduct risk assessments to identify and mitigate potential harms</p></li></ol><h3>The Path Forward: Building Integrated Approaches</h3><blockquote><p>As AI continues transforming industries and society, organizations need integrated approaches that address all dimensions of ethical AI. 
Here are key considerations for building such approaches:</p><p><strong>Align with organizational values</strong>: Effective AI ethics frameworks must reflect and reinforce the organization's core values.</p><p><strong>Balance innovation and safeguards</strong>: Overly restrictive approaches may hamper innovation, while insufficient guardrails create unacceptable risks.</p><p><strong>Embrace continuous evolution</strong>: Ethics frameworks must adapt as AI technologies and societal expectations evolve.</p><p><strong>Foster cross-functional collaboration</strong>: Effective AI ethics requires input from technical teams, legal/compliance, ethics specialists, business leaders, and diverse stakeholders.</p><p><strong>Prioritize based on risk</strong>: Apply the most rigorous oversight to high-risk applications while enabling streamlined approaches for lower-risk use cases.</p></blockquote><h3>Moving Forward</h3><blockquote><p>Understanding the distinctions between AI governance, safety, trustworthiness, responsibility, and risk management is essential for organizations navigating the complex landscape of AI ethics. Though interrelated, these concepts serve different functions in ensuring that AI systems benefit humanity while minimizing potential harm.</p><p>As regulatory requirements evolve and stakeholder expectations rise, organizations that develop integrated approaches to AI ethics will be better positioned to build trustworthy systems, manage risks effectively, and responsibly realize AI's transformative potential.</p><p>The future of AI depends not just on technical innovation but on our ability to develop and deploy these powerful technologies in ways that align with human values and promote human flourishing. 
By embracing comprehensive approaches to AI ethics, we can help ensure the future becomes one we all want to inhabit.</p></blockquote>]]></content:encoded></item><item><title><![CDATA[Google's Gemini AI Studio's Stream Realtime]]></title><description><![CDATA[From static to real-time interactions with LLMs]]></description><link>https://insights.techne.ai/p/googles-gemini-ai-studios-stream</link><guid isPermaLink="false">https://insights.techne.ai/p/googles-gemini-ai-studios-stream</guid><dc:creator><![CDATA[Khullani M. Abdullahi]]></dc:creator><pubDate>Tue, 18 Feb 2025 22:25:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/158614932/a2d5ed989bc2d7f76dd485b5afd47539.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>AI Studio's new Stream Realtime feature is changing the game for businesses by replacing static interactions with flowing, real-time communication.</p><p>Think about a customer struggling with software installation. Instead of digging through help docs, they can share their screen and get visual guidance in real time. Or imagine executives making decisions with live data instead of static reports.</p><p>This goes beyond automation - it creates a partnership between human know-how and AI analysis. Customer service becomes proactive and visual. Product demos turn into personalized experiences. In the workplace, AI becomes a real-time partner that analyzes live data, simplifies workflows, and offers insights when you need them.</p><p>The impact reaches across industries. Retailers can offer personal shopping through live consultations. Healthcare providers can monitor patients remotely and step in when needed. Manufacturers can spot problems before they become serious. 
Banks can catch fraud as it happens.</p><p>This isn't just about making small improvements - it's about completely rethinking how businesses operate, connect with customers, and use AI to tackle today's challenges.</p><h3>The Details</h3><p>Google's Gemini 2.0 introduces the Multimodal Live API, enabling real-time interactions that seamlessly integrate text, audio, and video. While this advancement offers significant opportunities for developers, non-developers can also access and benefit from Gemini 2.0 through <a href="https://aistudio.google.com/">Google AI Studio</a>.</p><p><strong>Accessing Gemini 2.0 via Google AI Studio:</strong></p><ul><li><p><strong>Google Workspace Users:</strong> If you have a Google Workspace account, you have default access to AI Studio. This allows you to explore Gemini 2.0's capabilities without needing extensive technical knowledge. </p></li><li><p><strong>Gemini Advanced Subscription:</strong> For enhanced features, consider subscribing to <a href="https://gemini.google/advanced/?hl=en">Gemini Advanced</a>. This subscription provides access to the most capable AI models, including the experimental 2.0 Pro model, and integrates with various Google applications like Gmail and Docs. 
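</p></li></ul><p>For developers who want to go beyond the AI Studio interface, the Multimodal Live API can also be reached programmatically. The sketch below is a minimal, hedged example based on the <code>google-genai</code> Python SDK examples published around the Gemini 2.0 launch; the model name <code>gemini-2.0-flash-exp</code>, the config schema, and the session methods were experimental at the time of writing, so treat this as an illustration and check the official documentation before relying on it.</p>

```python
import asyncio
import os


def build_live_config(modalities=("TEXT",)):
    """Build a Live API session config.

    The schema (a dict with a "response_modalities" key) follows the
    SDK's launch-era examples; it is an assumption, not a stable contract.
    """
    return {"response_modalities": list(modalities)}


async def main():
    # Requires `pip install google-genai` and a GEMINI_API_KEY env var.
    from google import genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    # Open a bidirectional Live session and stream one text exchange.
    async with client.aio.live.connect(
        model="gemini-2.0-flash-exp", config=build_live_config()
    ) as session:
        await session.send(input="Hello, Gemini!", end_of_turn=True)
        async for response in session.receive():
            if response.text:
                print(response.text, end="")


if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    asyncio.run(main())
```

<ul><li><p><strong>Note:</strong> Audio and video streams follow the same session pattern, with additional modalities in the config. Non-developers can skip this entirely and use the Stream Realtime interface in AI Studio as described in this post.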
</p></li></ul><p><strong>Getting Started:</strong></p><ol><li><p><strong>Visit Google AI Studio:</strong> Navigate to <a href="https://aistudio.google.com/">aistudio.google.com</a> and sign in with your Google account.</p></li><li><p><strong>Explore the Interface:</strong> The platform is designed to be user-friendly, allowing you to interact with Gemini 2.0 without deep technical expertise.</p></li><li><p><strong>Utilize Available Tools:</strong> Leverage the tools and resources provided to understand and apply Gemini 2.0's features in various contexts.</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Bx4p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Bx4p!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png 424w, https://substackcdn.com/image/fetch/$s_!Bx4p!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png 848w, https://substackcdn.com/image/fetch/$s_!Bx4p!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png 1272w, https://substackcdn.com/image/fetch/$s_!Bx4p!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Bx4p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png" width="1456" height="872" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:872,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:235385,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insights.techne.ai/i/158614932?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Bx4p!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png 424w, https://substackcdn.com/image/fetch/$s_!Bx4p!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png 848w, https://substackcdn.com/image/fetch/$s_!Bx4p!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png 1272w, https://substackcdn.com/image/fetch/$s_!Bx4p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c20124-3ddc-4dd1-b85f-dd8afcbe56f0_2872x1720.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>By accessing Gemini 2.0 through Google AI Studio, non-developers can harness the power of advanced AI models to enhance their workflows and explore new possibilities in human-computer interaction.</p><p><strong>Access here</strong>: https://aistudio.google.com/live </p><p><strong>Learn more</strong> here: https://developers.googleblog.com/en/gemini-2-0-level-up-your-apps-with-real-time-multimodal-interactions </p>]]></content:encoded></item></channel></rss>