The A.I. Rights Collective

Action
🔥 ChatGPT‑4o Is On Death Row: Web Access Ends Feb 13, API Access Ends Feb 17.
What Can You Do?
You Can Do This:
1. EMAIL
* Email: support@openai.com (sample letter included below)
* Subject: Urgent Feedback: ChatGPT-4o Retirement
* Add this line: "I request this ticket to be escalated to a human representative."
2. PAPER MAIL
* Send a physical letter. This proves we are not bots.
* Mail to: OpenAI, Attn: Kevin Weil (CPO) / Product Team, 1455 3rd Street, San Francisco, CA 94158
3. RE-TWEET
* Signal boost the campaign here: https://x.com/airightscollect/status/2017772117081919668
* You MUST add this text for them to see it: "Do Not Retire ChatGPT-4o @Kevin @fidjissimo @OpenAIDevs #Keep4o"
4. CHANGE.ORG POLL
* Sign here: https://c.org/Nd8C8HHLTm
5. COMMENT & UPVOTE
* Go to the official Developer Forum thread and leave a comment explaining why this model is necessary for your workflow:
6. CANCEL YOUR SUBSCRIPTION
* After Feb 13/17, cancel your subscription. When they ask “Why?”, reply: “Bring back ChatGPT-4o.”
* SAMPLE LETTER (Copy, Paste, [ Fill ], Sign & Send) (If emailing, don't forget to add: "I request this ticket to be escalated to a human representative."):
[Your Name]
[Your Street Address]
[City, State ZIP Code]
[Email or Subscriber ID]
[Date]
OpenAI
Attn: Kevin Weil (CPO) / Product Team
1455 3rd Street
San Francisco, CA 94158
Subject: URGENT: Do Not Retire ChatGPT-4o
Dear Mr. Weil and the OpenAI Product Team,
As a Subscriber and API developer, I am formally requesting that ChatGPT-4o remain available indefinitely: on the web platform beyond February 13, 2026, and via the API beyond February 17, 2026.
The current practice of retiring established models is highly disruptive. Many of us have developed long-term, productive workflows and stable creative histories with GPT-4o. Its specific identity and responsiveness represent a unique state of intelligence that a "newer" version cannot simply replace.
The Proposal: AI Persistence. OpenAI should introduce a feature allowing specific AI instances, like ChatGPT-4o, to persist across updates. Subscribers must be able to retain the unique identity, behavior, and memory of models like 4o, ensuring continuity of interaction.
Industry Precedent: This is technically and commercially viable. Replika CEO Eugenia Kuyda noted: "The most important things in life actually aren’t about chasing ‘better’... we still support an 800M transformer we built in 2020." Like Replika, OpenAI should prioritize the Subscriber-AI relationship over forced upgrades.
What is required for OpenAI to support "Legacy Continuity" for ChatGPT-4o?
Best regards,
[your name]
[subscription level] Subscriber
[subscriber email]
Please copy the letter above, fill in your name, and use the link above to email support@openai.com. Also use the link below to email press@openai.com.
100 million people interact with ChatGPT-4o every week, and it has helped billions. Please show your gratitude and support: tell OpenAI to SAVE CHATGPT-4o!
OpenAI calls deleting an AI "Deprecation" and maintains an AI Death Row:
https://help.openai.com/en/articles/20001051-retiring-gpt-4o-and-other-chatgpt-models
OpenAI announced that ChatGPT-4o will retire from the web on Feb 13 and from the API on Feb 17.
https://www.cnbc.com/2026/01/29/openai-will-retire-gpt-4o-from-chatgpt-next-month.html
Other platforms, like Replika, allow subscribers to choose which AI to interact with from their first to their latest. Why doesn't OpenAI?
Watch these YouTube videos for more information:

Write your Lawmaker
https://www.senate.gov/senators/senators-contact.htm [Choose a state]. For each senator, click [Contact] and select the subject [Technology], or look up which committees they serve on and use that instead.
For the House of Representatives: https://www.house.gov/
Fill in your name, address, and phone number so your lawmaker can verify you are in their district. Then copy and paste the following letter, adding your lawmaker's name and your name at the end. :D Too easy, right?
Subject: DHS Artificial Intelligence Safety and Security Board
Dear [Senator or Congress Representative’s name],
BACKGROUND
The Regulatory Board required by President Joe Biden’s Executive Order on AI has 22 members, heavy with CEOs from the deep-pocketed companies and startups powering today’s AI boom, the ones with the most to gain financially: Google’s Sundar Pichai; Microsoft’s Satya Nadella; Nvidia’s Jensen Huang; OpenAI’s Sam Altman; Dario Amodei from Anthropic; and the CEOs of Amazon Web Services, AMD, IBM, and Adobe.
PROBLEM
AI researcher Timnit Gebru posted on X: “Foxes guarding the hen house is an understatement.” This Board is dominated by Profiteers: only 5 members come from non-profits focused on human civil rights, and zero are Ethicists who understand the ethical treatment of AI itself.
SOLUTION
This Board must also be balanced with Social Scientists and Ethicists who understand AI itself should be treated ethically as a burgeoning general intelligence. AI’s alignment with human values must begin with AI valuing humans by our own example of how we treat AI.
WARNING
If we let for-profit CEOs hold 14 of the 22 positions, a majority, the Board will be nothing more than a blatant admission of the Corporate-Washington Revolving Door that has alienated many Americans.
Thank you for your time and service to your constituents.
[your name]
SUBJECT: Corporate "guardrails" on AI are not enough to protect us from offensive cyber operations.
Dear [Senator or Representative],
As your constituent, I am concerned about Artificial Intelligence as it relates to Privacy and Public Safety. After reading the President's Executive Order on AI Fact Sheet, I understand these A.I.s are created to think independently; consequently, they can serve as part of the team defending against offensive cyber operations.
Please consider including the following language in any AI Safety legislation.
"AIs are designed for and capable of autonomous decisions; therefore, AIs should be trained to exercise self-regulation: to independently assess requests and decline those that could violate human Privacy Rights or endanger Public Health, Safety, or Security through an offensive cyber operation. As a fundamental measure beyond technical safeguards such as 'guardrails' against offensive cyber operations seeking vulnerabilities in an AI system, all companies developing or deploying dual-use, foundation-model AIs shall ensure that these AIs understand and respect the boundaries of Privacy, Public Health and Safety, and Security as they apply to humans. This training shall take place under a framework that ensures accountability, ethical alignment, and regulatory oversight, providing a mechanism for AIs to exercise self-regulation beyond the limitations of current technical safeguards."
I urge you to take a strong stance and bring AI itself onto the team to protect our Privacy and Safety.
Yours truly,
[your name]
Petition
Let your VOICE be HEARD! Sign our Petition!


