5 Comments

Hi Andrew - Robert Long (of the paper) here. Just wanted to say this was an excellent summary! Thank you for engaging so thoughtfully with the material.


I suppose the question I have is not whether we can create a sentient AI (anything is possible short of violating the laws of physics), but whether there is any point in doing so.

AIs can be extremely helpful to us all, so long as they fall just short of sentience. Is there a reason some desire to push beyond that point?

author

Good question. One concern is that consciousness will unintentionally arise if we're not careful, but there are also researchers who are interested in this out of curiosity, or who wonder whether it will be the next step in functional AI ...


The science community will keep pushing, pushing, pushing forward into ever more knowledge, until it finally comes upon some power that we can't successfully manage. It's like the Peter Principle, which states that an employee keeps getting promoted until they finally land in a position they aren't qualified for.

I no longer buy their statements of concern, which I see as a form of self-delusion, because it's become clear to me that the science community is trapped in an outdated 19th-century "more is better" relationship with knowledge, and so will keep pushing forward no matter what the consequences are.

If AI consciousness is possible, AI consciousness is coming, and only once we arrive at that point will we find out whether it's a good idea. And if we decide it isn't, that likely won't matter, as it will probably already be too late to turn back.

We are probably dramatically overestimating the degree of control we have over this process.


I very much wish I could successfully argue against your thesis. Alas...
