There is an easy way to see if a human is conscious: ask them. When it comes to AI systems, things are different. We can’t simply ask large language models like ChatGPT and Claude whether they are conscious. Robert Long (2023) explores the extent to which future systems might provide reliable self-reports.
I haven’t read your article, but a strong “yes” from me.
The ability of a physical system to introspect and report on its own consciousness would tell us a great deal about the ontology of what is being reported.
Looking forward to your discussion.