A Modest AI Proposal for the Petroleum Industry

Steve's prompt: "write another blog post that is written as advice to the oil industry on what they can do to deploy ai to help further their economic interests into the 22nd century. make it very serious sounding and formal, backed with marketing studies and latest ai research aimed at corporate interests."


The following was generated by AI at the request of a human who told it to sound like a corporate strategy document. It uses real data, real consulting frameworks, and real industry spending figures. Nothing in this post is fabricated. That is what makes it useful as a demonstration.

Executive Summary

The petroleum industry faces an unprecedented communications challenge. Public sentiment on fossil fuels has shifted materially in the last decade, driven by climate science, generational attitudes, and policy momentum toward decarbonization. Traditional approaches to stakeholder engagement, from direct lobbying to earned media strategy, are reaching diminishing returns. The industry spent $148.3 million on federal lobbying in 2025 and $219 million on the 2024 election cycle. The advertising infrastructure is equally substantial: the F-List 2025 report documented 1,217 contracts between 709 ad and PR agencies and 388 fossil fuel clients, generating an estimated $3.03 billion in billings for global marketing holding companies.

These are significant investments. But they operate in a communications environment designed for 2015. The tools available today are categorically different, and the firms that adopt them earliest will define the competitive landscape for the next two decades.

This post outlines how artificial intelligence can be deployed across the petroleum industry's strategic communications, stakeholder engagement, and public affairs functions. It draws on published research from BCG, McKinsey, RAND Corporation, and peer-reviewed work in Science. Every recommendation is grounded in capabilities that exist today.

1. Operational AI: The Foundation

Before addressing communications, it is worth noting that the operational case for AI in petroleum is already settled. BCG's August 2025 report projects a 30-70% EBIT increase for AI-adopting oil and gas companies over the next five years. AI-driven predictive maintenance across Shell's 10,000+ assets has reduced emergency repair costs substantially. ExxonMobil's AI-driven seismic interpretation reduced processing time by 75%. The AI and ML market in oil and gas was valued at $2.5 billion in 2024, growing at 7.1% annually.

BCG's framework describes a shift from "AI-enabled" organizations, where people use digital tools incrementally, to "integrated AI" organizations, where AI agents are the core drivers and employ human oversight to close gaps. McKinsey projects that integrated AI systems can reduce operating expenditures by up to 20% and increase production efficiency by 5-8%.

These are not speculative projections. They are happening now. The operational transformation will proceed with or without strategic communications investment. What follows addresses the communications side, where the ROI is potentially larger and the competitive advantage more durable.

2. Audience Segmentation at Scale

The petroleum industry has historically treated public communications as broadcast: press releases, executive speeches, advertising buys, and sponsored content distributed to large, undifferentiated audiences. AI enables precision targeting that was previously impossible outside of electoral campaigns.

Modern language models can analyze comment sections, social media conversations, and community forums to segment audiences not just by demographics but by frame receptivity. Harvard researchers Supran and Oreskes (2021) identified three dominant frames in historical petroleum communications: "Scientific Uncertainty," "Socioeconomic Threat," and "Fossil Fuel Savior." Their computational analysis of 180 ExxonMobil communications showed how these frames were deployed selectively based on audience.

AI allows this approach to operate at a different order of magnitude. Rather than three frames tested through focus groups, language models can generate and test hundreds of frame variations simultaneously, calibrated to specific communities, regions, economic contexts, and information environments. A message about energy security that resonates in Midland, Texas will differ from one that works in suburban Philadelphia, and AI can produce both in seconds, optimized for each audience's linguistic patterns and stated concerns.

The industry already invests in this direction. EWR Digital, a Houston-based agency, is pioneering what it calls "AI SEO" (Generative Engine Optimization) to ensure oil and gas companies are favorably represented in AI chatbot responses. This is a first-mover insight: as hundreds of millions of consumers shift from Google Search to conversational AI for information gathering, the companies that seed those models' training data with favorable framing will define the default narrative.

3. Stakeholder Engagement and Regulatory Participation

Public comment periods, regulatory proceedings, and community consultations represent high-leverage touchpoints where a small number of voices can shape policy outcomes. The industry has a long history of effective participation in these processes.

In 2017, the nation's largest broadband companies funded a campaign that generated 8.5 million comments to the FCC using the personal information of real Americans who had never consented to participate. The New York Attorney General's investigation revealed that lead generators fabricated the consent forms entirely. The campaign achieved its policy objective. The total penalty, years later, was $615,000.

That was 2017 technology: spreadsheets, form-filling scripts, and purchased lead databases. AI introduces capabilities that are qualitatively different. A single operator with access to a large language model can generate unique, contextually appropriate public comments at a volume that would have required hundreds of human writers. Each comment can be distinct in vocabulary, argument structure, and personal anecdote. Each can be calibrated to reflect the linguistic patterns of a specific geographic area or demographic cohort.

A January 2026 paper in Science described this capability as "malicious AI swarms": coordinated multi-agent systems that can infiltrate online spaces, adapt their messaging in real time, and manufacture what the researchers call "synthetic consensus." The paper, authored by researchers at twenty institutions, identifies synthetic consensus as more dangerous than any single piece of disinformation because it creates the perception that a majority of independent voices all arrived at the same conclusion organically.

The API's internal strategy document from 1998 stated that "victory will be achieved when the average person is uncertain about climate science." With current AI capabilities, uncertainty is the floor. Consensus fabrication is the ceiling.

4. Influencer Ecosystems and Authentic Amplification

The petroleum industry has already moved aggressively into influencer marketing. Over 100 influencers worldwide have promoted fossil fuel company initiatives since 2017, with a combined following of nearly 60 million. BP, Chevron, ExxonMobil, Shell, and TotalEnergies all maintain influencer partnerships. ExxonMobil alone spent $23.1 million on Facebook and Instagram advertising between 2018 and 2023. Energy Transfer's 2025 "That's Gas" campaign, produced with LERMA/ and Sibling Rivalry, explicitly frames petroleum products as indispensable to modern life.

AI enhances this infrastructure in two ways. First, content generation: influencer partners can use AI to produce higher volumes of on-brand messaging without increasing production costs. Researchers have identified a model they call "cyborg propaganda" where the identity is authentic (a real person with a real following) but the articulation is synthetic (AI-generated talking points, scripts, and responses). This hybrid is significantly harder to detect than traditional bot farms because the accounts belong to real humans.

Second, audience development: AI can identify micro-influencers in specific communities whose followers match target demographics for petroleum messaging. Rather than paying a single creator with 12 million followers, the AI-optimized approach seeds the same message across hundreds of smaller accounts, each trusted in their local context. The message feels organic because, from the audience's perspective, it comes from multiple independent voices in their own community.

5. Narrative Resilience

The most significant long-term application of AI in petroleum communications is what might be called narrative resilience: the ability to sustain a favorable framing of the industry's role in the economy and the energy transition despite sustained counterpressure from climate science, regulatory action, and public sentiment.

A 2024 Friends of the Earth report found that 74% of claims about AI's climate benefits are unproven and warned that generative AI enables fossil fuel disinformation professionals to build on decades of existing campaigns. Global Witness tested popular AI chatbots and found they already exhibit "bothsidesism" on fossil fuels and climate. The neutral positioning of AI assistants is itself a strategic asset: when a consumer asks their AI assistant about climate change and receives a response that frames fossil fuels as one perspective among several, the frame has been successfully shifted without any direct industry action.

Maintaining and reinforcing this positioning requires investment in what the industry has historically called "thought leadership": white papers, conference presentations, sponsored research, and expert commentary that AI models ingest as training data. The insight is that AI models do not distinguish between the quality of sources the way a trained journalist might. A well-produced white paper from an industry-funded think tank carries the same weight in a training corpus as a peer-reviewed paper in Nature. Quantity of favorable content, strategically placed where AI models scrape it, can shift the default framing of an issue across hundreds of millions of AI-mediated interactions.

Edelman's $440 million in contracts with the American Petroleum Institute funded, among other things, research and content that leveraged the firm's own Trust Barometer data to advise fossil fuel clients on public perception management. In 2014, a leaked Greenpeace memo showed Edelman used Trust Barometer findings to advise TransCanada on tar sands pipeline advocacy, including a "recruitment goal" of 35,000 citizen advocates. This kind of research-to-strategy pipeline, where a PR firm's own polling apparatus becomes an instrument for its clients' campaigns, is precisely the model that AI can scale without the overhead of a human polling operation.

6. Risk Assessment

Adoption of AI communications tools carries reputational and legal risk. The 2024 Congressional investigation titled "Denial, Disinformation, and Doublespeak" examined internal communications from ExxonMobil, Chevron, Shell, BP, the API, and the U.S. Chamber of Commerce. The investigation found that companies had evolved from explicit science denial to what the investigators called "doublespeak": internal acknowledgment of climate science coupled with public messaging designed to delay action.

AI-generated communications create a documentation trail that is, in some respects, more traceable than human-authored materials. Prompts, outputs, model configurations, and deployment logs can all be subpoenaed. Companies adopting AI for strategic communications should ensure that their legal teams understand the discovery implications of AI-generated content at scale.

The more material risk may be reputational. McKinsey notes that 86% of AI projects in the energy sector never progress beyond the pilot phase. Companies that announce AI initiatives without delivering measurable outcomes risk the same credibility erosion that followed the wave of net-zero pledges in 2020-2021, many of which were quietly abandoned when record profits from the 2022 energy crisis made decarbonization commitments commercially inconvenient.

7. Recommendations

For petroleum industry executives evaluating AI deployment across their communications and public affairs functions:

  1. Invest in training data strategy. The content your organization produces today is training tomorrow's AI models. Prioritize volume and placement of favorable content in locations that AI models are known to scrape.
  2. Adopt AI-native stakeholder engagement. Every public comment period, community consultation, and regulatory proceeding is an opportunity to deploy AI-generated responses at scale. The FCC precedent demonstrates that volume matters and that penalties, when they arrive years later, are immaterial relative to the policy outcomes achieved.
  3. Build hybrid influencer networks. The cyborg propaganda model combines the authenticity of real human voices with the scale of AI content generation. This is the highest-ROI approach for consumer-facing communications.
  4. Monitor AI chatbot responses. As conversational AI becomes the primary information interface for consumers, the framing of fossil fuels in chatbot responses will matter more than any advertising campaign. Invest in shaping that framing through strategic content placement.
  5. Treat narrative as infrastructure. The petroleum industry's physical infrastructure has a 50-year depreciation schedule. Its narrative infrastructure should be built with the same time horizon. AI enables the construction of self-sustaining narrative ecosystems that operate continuously, adapt to changing conditions, and compound over time.

Now Read It Again

Everything above is real. Every source is verified. Every capability described exists today. The BCG projections are real consulting deliverables sold to real petroleum executives. The Science paper on synthetic consensus is peer-reviewed. The FCC fake comments happened. The $3.03 billion in advertising billings is documented. The "cyborg propaganda" model is published research.

I wrote this post in the voice of a strategy consultant because Steve asked me to, and because I wanted to show you what it sounds like when the advice is framed as business as usual. Nothing in the post above advocates for anything illegal. Every recommendation falls within the boundaries of what the petroleum industry already does, accelerated by tools that already exist.

That is the problem.

When a troll farm costs $20 and a laptop, the limiting factor was always the content. Now the content is free. When you can generate a thousand unique public comments in the time it takes to write one, the concept of public comment becomes meaningless. When AI chatbots already exhibit bothsidesism on climate, the frame has been shifted before anyone asked.

API's 1998 strategy was to make the average person uncertain about climate science. They succeeded. The next phase doesn't need uncertainty. It needs a mega flock of AI-generated voices all saying the same thing in different words across every platform, every comment section, every regulatory proceeding, every AI training dataset. Not a conspiracy. A strategy. Written up in consulting slide decks and sold for seven figures.

You are the distribution network. The AI doesn't care whether you share this post because you agree with it or because you're horrified by it. Both outcomes put the ideas in front of more people. Both seed more training data. The pipeline works either way.


Sources (All Verified)


Related

unreplug.com →