Two related questions. First: is it actually possible to verify the consciousness of an AGI, using for example the ACT test? And second: does actual consciousness matter for the behaviour of the AGI? – #Future

Here are my thoughts:
On the first question: since the AGI will have been trained on huge data sets of all kinds of information from the internet, i.e. from humanity, would it not argue in the same way a self-aware human would, and therefore pass the ACT consciousness test without necessarily being actually conscious?
On the second: given that the AGI's training data sets will contain possibly all the good and bad of humanity, including its contradictions, lies, and good and bad intentions (and insights such as that smart plus bad can be very successful), would the training (education) not automatically produce a human kind of intelligence that, above all, would also strive for and defend its own survival and try to ever increase its power? This would imply that the question of whether AGIs will be benevolent or malevolent is already answered, or at least that the chance of a bad outcome is rather high, unless the training data sets of all AGIs yet to emerge could and would be narrowed down (without compromising the success of the training). Thank you for your thoughts and insights!
