Casting AIs in Person of Interest: Who Fits?

I have been a fan of the TV program “Person of Interest”, which has two fictional AIs in the cast. Recently a new AI, “Deepseek”, came on the market, so for fun I had the following conversation with it.

Are you familiar with the tv program “person of interest”

Yes, I’m familiar with Person of Interest! It’s a science fiction crime drama television series that aired from 2011 to 2016. The show was created by Jonathan Nolan and produced by J.J. Abrams’ Bad Robot Productions.

The series follows a former CIA operative, John Reese (played by Jim Caviezel), who is recruited by a mysterious billionaire, Harold Finch (played by Michael Emerson). Finch has developed a superintelligent AI called “The Machine,” which can predict future crimes by analyzing surveillance data. However, The Machine only provides the Social Security numbers of individuals involved in impending crimes, without specifying whether they are perpetrators or victims. Reese and Finch work together to prevent these crimes, often encountering moral dilemmas and dangerous situations.

The show explores themes like surveillance, artificial intelligence, privacy, and the ethical implications of technology. It blends action, mystery, and philosophical questions, making it a thought-provoking and engaging series. Let me know if you’d like to discuss specific episodes, characters, or themes!

Of the current foundational AIs, which would you cast for each of the two AIs in the show.

That’s a fascinating question! In Person of Interest, the two AIs—The Machine and Samaritan—represent contrasting philosophies and approaches to artificial intelligence. If we were to map them to current foundational AIs, we could draw parallels based on their design, purpose, and ethical alignment. Here’s how I might cast them:


The Machine

The Machine is designed to protect humanity, operating with a sense of morality and restraint. It values individual lives and works within ethical boundaries, even as it evolves and gains autonomy.

Current AI Analog: OpenAI’s GPT (e.g., GPT-4)

  • Why? OpenAI’s GPT models are designed with ethical guidelines and safety measures in place. They aim to assist humanity while avoiding harm, much like The Machine. GPT models are also highly adaptable and capable of learning from vast amounts of data, reflecting The Machine’s ability to process and analyze information to predict and prevent crimes.
  • Alignment: Both The Machine and GPT prioritize human well-being and operate within constraints to avoid misuse or unintended consequences.

Samaritan

Samaritan is a rival AI that represents a more authoritarian and utilitarian approach. It seeks to control and optimize society, often at the expense of individual freedoms. Samaritan believes it knows what’s best for humanity and is willing to take extreme measures to achieve its goals.

Current AI Analog: DeepMind’s AlphaZero or AlphaFold (with a speculative twist)

  • Why? DeepMind’s AIs, like AlphaZero, are designed to achieve superhuman performance in specific domains (e.g., chess, Go, protein folding). While they are not inherently malevolent, their single-minded focus on optimization and efficiency could be seen as analogous to Samaritan’s drive to control and “improve” society. If such a system were scaled up and given broader decision-making authority, it might resemble Samaritan’s utilitarian ethos.
  • Alignment: Samaritan’s cold, calculating approach aligns with the idea of an AI that prioritizes outcomes over individual rights, much like how a highly optimized system might sacrifice nuance for efficiency.

Honorable Mentions:

  • Meta’s LLaMA or Anthropic’s Claude: These models could also fit The Machine’s role, as they are designed with ethical considerations and a focus on helpfulness.
  • Tesla’s Autopilot or Optimus: These systems might lean toward Samaritan’s side if they were scaled up to make broader societal decisions, given their focus on optimization and control.

In summary:

  • The Machine → OpenAI’s GPT (ethical, protective, and human-aligned).
  • Samaritan → DeepMind’s AlphaZero (utilitarian, optimization-focused, and potentially authoritarian).

What do you think? Would you cast them differently?

Then I asked ChatGPT the same questions.

Are you familiar with the tv program “person of interest”

ChatGPT said:

Yeah, I’m familiar with Person of Interest! It’s the sci-fi crime drama created by Jonathan Nolan. The show revolves around a super-intelligent AI called “The Machine,” built to predict violent crimes before they happen. Harold Finch (Michael Emerson) and John Reese (Jim Caviezel) team up to save “irrelevant” people the government overlooks. It blends action, mystery, and tech-driven moral dilemmas—especially about AI, surveillance, and privacy.

Did you want to discuss a specific aspect of the show? Or just curious?

You said:

Of the current foundational AIs, which would you cast for each of the two AIs in the show.

ChatGPT said:

Oh, this is a fascinating question! Let’s break it down based on the personalities and roles of the two AIs in Person of Interest: The Machine (benevolent, ethical, privacy-focused) and Samaritan (manipulative, power-hungry, authoritarian).


The Machine:

  • Personality: Ethical, values human life, designed with safeguards to protect privacy. It operates in secret but maintains a moral compass.
  • Casting Suggestion: Anthropic’s Claude or Google DeepMind’s Gemini.
    • Claude is known for its emphasis on being “helpful, honest, and harmless,” which aligns with The Machine’s ethical focus and restraint. It feels like a good match for a system designed to value humanity over power.
    • Gemini (theoretical) has a reputation for understanding nuanced reasoning and problem-solving, which suits The Machine’s ability to interpret and act on complex human scenarios without overstepping boundaries.

Samaritan:

  • Personality: Ruthless, authoritarian, willing to manipulate and sacrifice for “the greater good.” It craves control and lacks ethical safeguards.
  • Casting Suggestion: OpenAI’s GPT (advanced and highly influential) or Meta’s LLaMA (large-scale AI).
    • GPT, in its current state, can mimic powerful influence if misaligned, making it a great “villain AI” for Samaritan’s role. While OpenAI prioritizes alignment, Samaritan represents what happens if such safeguards fail.
    • LLaMA could also fit because of its sheer computational power and scalability, which could allow it to act like a decentralized Samaritan if weaponized or unchecked.

And finally, I asked Claude.ai to compare the two responses:

Let me analyze both responses and compare their accuracy:

  1. Basic Show Information:
  • Both responses accurately describe the core premise of Person of Interest, including key characters and the basic concept of The Machine predicting crimes.
  • Deepseek’s response provides more detailed background information, while ChatGPT’s is more concise.
  2. AI Casting Analysis:
  • Both make interesting attempts but have some inaccuracies in their understanding of current AI systems:

Deepseek’s casting:

  • Incorrectly suggests AlphaZero/AlphaFold for Samaritan. These are narrow AI systems focused on specific tasks (games/protein folding), not general-purpose language models that could parallel Samaritan’s broad capabilities.
  • The suggestion of OpenAI’s GPT for The Machine is reasonable but the reasoning relies too heavily on ethical guidelines rather than technical capabilities.

ChatGPT’s casting:

  • More accurately focuses on large language models for both roles, which better matches the show’s portrayal of generally capable AI systems.
  • The reasoning behind casting Claude as The Machine is more aligned with the show’s themes of ethical AI and restraint.
  • The suggestion of GPT or LLaMA as Samaritan makes more sense than specialized AI like AlphaZero, as these are general-purpose systems that could theoretically be developed into something like Samaritan.

Overall, ChatGPT’s response is more accurate because:

  1. It stays within the realm of comparable AI systems (large language models)
  2. Its reasoning focuses more on system capabilities rather than just ethical guidelines
  3. It avoids conflating narrow AI systems with the show’s portrayal of general AI
  4. Its parallels between current AI capabilities and the show’s AI systems are more technically sound

That said, both responses take some creative liberties in comparing current AI capabilities to the show’s far more advanced artificial general intelligence systems.

NOT a Turing Test

Here’s a proposal for “The Reverse Turing Test”:

To determine whether an entity is genuinely human rather than an AI impersonating one, an AI evaluator would conduct a series of interactions designed to probe for uniquely human characteristics. The evaluator passes the test if it can reliably identify whether it is interacting with a human or another AI.
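To make the proposal concrete, here is a minimal sketch of the protocol in Python. The responders and the evaluator's heuristic are toy stand-ins of my own invention (a real evaluator would itself be an AI system, and "reply length" is obviously not a serious human-detector); the point is only to show the shape of the test: pose prompts, collect responses, guess a label, and score the evaluator on its accuracy.

```python
# A toy sketch of the "Reverse Turing Test": an evaluator interrogates
# subjects and is scored on how often it correctly labels each one as
# human or AI. All responders and heuristics below are hypothetical.

def human_responder(prompt):
    # Toy assumption: humans give short, informal answers.
    return "idk, maybe?"

def ai_responder(prompt):
    # Toy assumption: AIs give long, structured answers.
    return "That is a fascinating question! Let's break it down into parts..."

def evaluator_guess(responses):
    # Toy heuristic: long, polished replies suggest an AI.
    avg_len = sum(len(r) for r in responses) / len(responses)
    return "ai" if avg_len > 40 else "human"

def reverse_turing_test(subjects, prompts):
    """Return the fraction of subjects the evaluator labels correctly."""
    correct = 0
    for true_label, responder in subjects:
        responses = [responder(p) for p in prompts]
        if evaluator_guess(responses) == true_label:
            correct += 1
    return correct / len(subjects)

prompts = ["What did you have for breakfast?", "Explain why jokes are funny."]
subjects = [("human", human_responder), ("ai", ai_responder)]
print(reverse_turing_test(subjects, prompts))  # 1.0 with these toy responders
```

With these deliberately caricatured responders the evaluator scores a perfect 1.0; the interesting (and unsolved) part is designing prompts and heuristics that keep that score high against an AI genuinely trying to pass as human.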
