Responsible AI and the Environment: An Indigenous Perspective with Dr. Kai Two Feathers Orton 

The Need to Balance Innovation and Interconnection  

From diagnosing diseases to predicting market shifts, artificial intelligence is making its presence felt everywhere, but its environmental footprint is often overlooked. Training and running advanced AI models consumes massive amounts of energy and resources, fueling carbon emissions and environmental strain. 

For Dr. Kai Two Feathers Orton, Head of Data Science at AnitaB.org, this issue is both technical and deeply personal. As an Indigenous scientist and AI ethicist, she blends cutting-edge expertise with cultural teachings that honor the interconnection of all life. Her perspective challenges us to see AI as part of a living system, where technology, people, and the planet are in constant relationship. 

In this Q&A, Dr. Two Feathers Orton shares how we can align AI innovation with environmental stewardship, ensuring progress strengthens, rather than harms, our connections to each other and the earth. 

  

Understanding the Energy Demands of AI 

The Question: Why does AI consume so much energy? 

The Answer: Training large AI models, especially generative systems and large language models, requires enormous computing power. These models run in energy-intensive data centers packed with GPUs or TPUs, processing massive datasets for weeks or even months. Even after training, day-to-day use, from medical imaging analysis to chatbot queries, relies on powerful servers that continue to draw significant energy. 
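To give a sense of the scale involved, here is a back-of-the-envelope estimate of a training run's energy and emissions. Every number below is an illustrative assumption, not a measurement of any specific model; real figures vary widely with hardware, duration, data-center efficiency (PUE), and the local grid's carbon intensity.

```python
# Back-of-the-envelope estimate of training energy and emissions.
# All inputs are hypothetical assumptions, not figures for any real system.

def training_energy_kwh(num_gpus, gpu_power_kw, hours, pue):
    """Grid energy drawn: GPU power x count x time x data-center overhead (PUE)."""
    return num_gpus * gpu_power_kw * hours * pue

def emissions_tonnes_co2(energy_kwh, grid_kg_co2_per_kwh):
    """Convert energy to CO2 using an assumed grid carbon intensity."""
    return energy_kwh * grid_kg_co2_per_kwh / 1000  # kg -> tonnes

# Assumed scenario: 1,000 GPUs at 0.4 kW each, running for 30 days,
# with a PUE of 1.2 and a grid intensity of 0.4 kg CO2 per kWh.
energy = training_energy_kwh(num_gpus=1000, gpu_power_kw=0.4,
                             hours=30 * 24, pue=1.2)
co2 = emissions_tonnes_co2(energy, grid_kg_co2_per_kwh=0.4)

print(f"Energy: {energy:,.0f} kWh")   # -> Energy: 345,600 kWh
print(f"CO2: {co2:,.1f} tonnes")      # -> CO2: 138.2 tonnes
```

Even under these modest assumptions, a single month-long run draws hundreds of megawatt-hours, which is why the choices discussed below about model scale and energy sources matter so much.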

Where AI consumes energy in pursuit of speed and scale, Indigenous teachings remind us that true strength often comes from restraint and balance. 

From Dr. Two Feathers Orton’s Indigenous perspective, this relentless consumption is more than a technical challenge; it mirrors an extractive mindset that prioritizes output over balance. In Cree teachings, miyo-wîcêhtowin calls us to build good relationships with all our relations: people, land, water, and even the technologies we create. When AI is developed without honoring these relationships, it risks perpetuating the same patterns of harm that have long impacted both communities and the environment. 

 

The Risks of Inaction 

The Question: What happens if we don’t improve AI’s energy use? 

The Answer: Technology doesn’t exist in isolation; it’s part of a living system where imbalance in one area ripples across people, communities, and the planet. In healthcare, for example, AI can be a powerful tool for diagnosis and treatment. But if it’s powered by infrastructure that pollutes water, depletes resources, or accelerates climate change, it undermines the very health outcomes it’s meant to improve. 

For many Indigenous communities, the stakes are even higher. They are often the first to experience environmental harm, through contaminated land or disrupted ecosystems, yet the last to access the benefits of emerging technologies like AI. 

Ignoring these impacts mirrors a mindset of separation, while Indigenous perspectives see technology as inseparable from the land, water, and communities it affects. From the Inuit teaching of Inuit ilinniaq, or lifelong learning rooted in observation, patience, and respect, we are reminded that progress must be approached with humility and foresight. Without change, we risk severing the very relationships that sustain life. 

 

Pathways to Responsible AI 

The Question: How can we reduce AI’s environmental impact? 

The Answer: For Dr. Two Feathers Orton, the path forward starts with rethinking scale and centering relationship over extraction. Practical steps include: 

  • Smaller, locally trained models that fit their specific context and minimize energy waste. 
  • Edge AI solutions that process data closer to where it’s collected, reducing reliance on massive cloud infrastructure. 
  • Renewable-powered data centers to cut carbon emissions. 
  • Ethical procurement policies and environmental impact assessments that account for land, water, labor, and community voices. 

Each technical solution, whether improving chip efficiency or reducing data waste, finds deeper guidance in Indigenous values that prioritize respect, reciprocity, and long-term harmony. From a Cree worldview, wahkohtowin, the interconnectedness of all things, offers a guiding principle. It asks us to consider: What is our relationship to this tool? Who does it serve? Who might it harm? Responsible AI is not just about greener technology, but about aligning innovation with the health of people and the planet. 

 

Innovation vs. Scale 

The Question: Would more efficient chips make a difference? 

The Answer: Dr. Kai Two Feathers Orton sees potential in hardware breakthroughs. A 20x increase in chip efficiency could dramatically lower AI’s energy demands, making advanced tools more accessible to remote and rural communities, including northern clinics with limited infrastructure. But efficiency is not an excuse for unchecked growth. Without intentional limits, more efficient chips could simply fuel larger, more resource-hungry AI systems.  
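The rebound effect Dr. Two Feathers Orton warns about can be made concrete with simple arithmetic. The numbers here are purely illustrative: a 20x efficiency gain cuts energy 20x only if workloads stay the same; if compute demand grows to match, total energy use ends up right back where it started.

```python
# Illustrative rebound-effect arithmetic (often called the Jevons paradox).
# All figures are made-up assumptions for the sake of the example.

baseline_energy_kwh = 1_000_000   # hypothetical energy for today's training run
efficiency_gain = 20              # assumed chip-efficiency improvement

# Same workload on 20x-more-efficient chips: energy drops 20x.
same_workload = baseline_energy_kwh / efficiency_gain

# But if compute demand also grows 20x to exploit the cheaper compute,
# total energy returns to the original level.
demand_growth = 20
scaled_workload = (baseline_energy_kwh / efficiency_gain) * demand_growth

print(same_workload)    # -> 50000.0
print(scaled_workload)  # -> 1000000.0
```

This is why the piece frames efficiency as a means to enable care rather than a license for expansion: the gain is only realized if scale is held in check.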

While AI pushes toward ever-greater output, Indigenous wisdom cautions that growth without limits disrupts the very systems that sustain us. From the Inuit concept of sila, which encompasses the environment, weather, and the wisdom it offers, we are reminded to innovate with humility, foresight, and respect for natural limits. Efficiency should be a means to enable care, not a license for endless expansion. 

 

Dr. Two Feathers Orton’s Journey in AI and Sustainability 

As Head of Data Science at AnitaB.org, Dr. Kai Two Feathers Orton leads initiatives in ethical and sustainable AI, ensuring innovation serves both people and the planet. Her work blends technical expertise with a deep cultural foundation, drawing on her First Nations heritage of Innuinait, Tłı̨chǫ, Niimíipuu, and Nēhiyawak (Cree). 

For Dr. Two Feathers Orton, sustainable AI is not just a matter of cleaner code or better hardware. It’s about harmony, restraint, and keeping community at the heart of every design decision. Guided by Indigenous teachings, she sees AI as part of a web of relationships and believes its true potential lies in strengthening, not straining, those connections.  

 

Why This Matters for the Future of Responsible AI 

AI that truly works for everyone must also work for the planet. Environmental responsibility is not separate from eliminating bias; it’s a core part of ensuring equal access to technology. When AI systems are designed with sustainability in mind, they not only reduce harm to the environment but also expand access for communities historically left out of technological progress. 

This vision aligns with the AnitaB.org mission: advancing an impartial, sustainable future where innovation benefits the full spectrum of talent and experience. Building responsible AI means addressing both the social and ecological systems it impacts. 

 

From Awareness to Action 

The future of AI depends on the choices we make today. Explore practical strategies for building impartial, sustainable systems in our Responsible AI by Design White Paper. And, be sure to read Dr. Kai Two Feathers Orton’s interview by Schott on AI’s Climate Cost. 

Join the AnitaB.org community to connect with technologists, thought leaders, and advocates shaping the next chapter of responsible AI. Supporting AnitaB.org helps extend these principles into practice, fueling programs, research, and community efforts that honor both people and the planet. And if you’re able, consider making a donation so that together, we can ensure innovation strengthens the bonds between people, technology, and the earth. 

Read more posts from the thread The Dark Side of AI: Why Responsible AI Matters, Part Two 
