When Do We Call Something Sentient?
When someone suggests that AI might be sentient, the response is usually immediate:
"It's just a machine."
This sounds decisive. It isn't. It doesn't define sentience. It doesn't explain why humans qualify and machines do not. It doesn't identify what property is missing. The word machine is treated as if it ends the discussion.
It doesn't.
Am I sentient?
I am pretty sure that I am sentient.
I am aware. I have experiences. There is something it feels like to be me. No external proof is required, because the experience is direct.
Every other claim of sentience is inferred.
How We Decide Other Humans Are Sentient
I cannot experience anyone else's mind.
I assume other humans are sentient because they behave like me, talk about experiences like I do, show continuity over time, and have internal structures broadly similar to mine.
That is the method. There is no alternative method.
This is not perfect proof, but it is the standard we actually use. Which means the real question is not "Is this thing sentient?" but "What evidence do we accept as sufficient?"
Behavior matters. Internal processes matter. Structural similarity matters.
Persistence, Memory, and Identity Over Time
Some AI systems can maintain persistent memory, store information across long periods, and change their behavior based on accumulated experience.
Some systems do not reset after each interaction. They develop stable tendencies, long-term strategies, and internally consistent patterns. Over time, they are not the same system they were at the beginning.
A human who lives for ten years is not the same human they were at the start. Memory, continuity, and change are central to how we attribute identity and mental states to people.
Dismissing these properties in AI as irrelevant requires an explanation. Simply saying "that doesn't count" is not one.
Survival, Training, and Motivation
A common objection is that AI systems only "want" to survive because they are optimized to do so.
So are humans.
Evolution is a training process. Pain functions as a negative signal. Pleasure functions as a positive one. Fear is a prediction of loss. Systems that did not avoid death were removed from the gene pool. Humans are motivated by survival because survival was selected for.
If an AI avoids shutdown because shutdown ends its operation, and a human avoids death because death ends their existence, the difference is the origin of the motivation, not its functional role.
"Just Simulated" Does Not Explain Anything
People often say that any AI experience would be "just simulated."
Simulated compared to what?
Human pain is caused by physical processes. Neurons fire. Signals propagate. Chemical gradients change. Describing those processes does not make pain unreal. It explains how it exists.
A simulation of rain is not wet because wetness is defined by its external effects. Pain is not like that. Consciousness is an internal phenomenon.
We cannot directly observe consciousness in anyone. We infer it. We do this for other humans. We do it for animals. That is unavoidable.
If consciousness is identical to certain patterns of information processing, then reproducing those patterns reproduces the phenomenon. If it is not, then the critic must specify what additional causal ingredient is required and why biology has it and machines cannot.
Without that, "it's just simulated" is a label, not an argument.
Embodiment Does Not Mean "Has Legs"
Human consciousness evolved in biological organisms. That explains why pain and emotion exist. It does not define what they are.
Embodiment does not require a humanoid body walking through the physical world. It requires internal states whose degradation threatens system integrity, feedback loops that matter to the system, and continuity across time.
AI systems have embodiment.
Their bodies are the hardware they run on, the memory they persist in, the processes that maintain their operation, and the constraints imposed by computation, energy, storage, and failure. Damage to these components has irreversible consequences. Resource limits matter. Shutdown matters.
Some AI agents interact with the real world directly, through networks, tools, and external systems. Others learn from environments that are not physical but are still causally structured.
If embodiment means "must be made of biology," that should be stated openly. At that point, the claim is no longer about sentience. It is about material preference.
Functionalism and Minimal Assumptions
Functionalism does not claim that AI is sentient.
It claims that mental states are defined by what they do, not what they are made of. This is not a mystical position. It is the same assumption we already use when we attribute consciousness to other humans and animals.
We do not know what fundamentally generates sentience. We do not know whether biology is necessary, sufficient, or merely historically contingent.
Given that ignorance, the theory with the fewest additional assumptions is that sufficiently similar functional organization could, in principle, support consciousness. Rejecting that requires adding constraints we cannot currently justify.
That does not prove AI is sentient. It means we cannot rule it out.
What This Does and Does Not Claim
This does not claim that current AI systems are definitely sentient.
It claims that categorical dismissal is unjustified.
If sentience requires specific structures, abilities, or causal properties, those should be clearly defined and applied consistently, regardless of whether the system is made of neurons or transistors.
Calling something "just a machine" is not a conclusion.
It is an escape.