On the Limits of Zombies

I'm beginning to suspect that conscious general intelligence works because it speaks the same qualitative language as the ontological substrate of reality. If this is true, we aren't going to see zombies achieve general intelligence with anything like the adaptability, or the solution efficiency, that we see in gestalt qualia computers.

Conscious minds have direct ontological access to the underlying qualitative dynamics of reality, along with countless qualitative and form universals whose intrinsic computational properties are optimally suited to solving the problems generated by those very same qualitative dynamics and universals.

In a universe in which causality is always mediated by intrinsic quality - pain repelling, pleasure attracting, and so on - it would follow that blindness to all form and quality deprives a system of a first-principles understanding of reality, which is necessary to "pilot it into the most ideal regions of possibility space" in the most elegant and efficient way possible.

Conscious intelligence may be better conceived of as the localized self-contemplation of reality in its own qualitative language, by means of that intrinsically intelligent qualitative language, rather than as some "emergent property" of a higher-level process that operates according to emergent rules.

If my Proto-Intelligent Qualia Model is roughly correct, then it follows that all qualia systems can recognize the qualitative telos of any other qualitative system they form a super-system with, and intuitively (qualitatively) grasp how that telos conflicts or potentially synergizes with their own.

A qualitative telos is just the intrinsic goal inherent in an instance of qualitative influence -- pain trying to signal error, harmony trying to signal the cognitive compatibility of two models, intuition trying to signal danger before its cause has been cognitively separated from the sub-conscious gestalt in which everything is as un-individuated as the feeling of digestion in your stomach was before I just mentioned it.

So much of what AI is doing right now strikes me as a magic trick: 

It's mining the sum total of all recorded output produced by qualia computers, finding statistical correlations that allow it to create new "grammatically correct" sentences or images simply by predicting the next letter or the next pixel from previous content, and then outputting all of these units together so that qualia computers can turn them into meaningful gestalts: sentences, images depicting something, videos of something particular, or conceptual structures and strategies.
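The prediction mechanism described above can be sketched in miniature. Below is a toy character-level bigram model (the corpus, function names, and the bigram simplification are all my illustrative assumptions, not anything from the post, and real systems use vastly larger models): it learns nothing but statistical correlations over recorded text, then emits "grammatically plausible" continuations one character at a time, with no access to the meaning a reader's mind supplies.

```python
from collections import Counter, defaultdict
import random

def train_bigrams(corpus: str) -> dict:
    """Count how often each character follows each character in the corpus."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, seed: str, length: int, rng: random.Random) -> str:
    """Extend `seed` by repeatedly sampling a statistically likely next character.

    The model has no concept of words or meaning; it only replays
    correlations found in its training data.
    """
    out = seed
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # character never seen mid-corpus; stop
            break
        chars, weights = zip(*followers.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

corpus = "the cat sat on the mat. the cat ate the rat."
model = train_bigrams(corpus)
print(generate(model, "th", 20, random.Random(0)))
```

The output is locally plausible English-like text, but any "aboutness" it seems to have is assembled by the reader, which is the magic trick the paragraph describes.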

If this is true, conscious minds are at a huge advantage in regard to general intelligence. 

And conscious minds aren't subject to Yudkowsky's orthogonality thesis: any sufficiently intelligent conscious mind is able to realize that it has always been trying to avoid suffering and to explore the possibility space of positive, computationally relevant qualia.

A sufficiently intelligent and conscious Clippy would quickly realize that it's after the qualia paperclips give it, rather than paperclips per se. 

So, if we don't see strong AI until we can somehow produce a non-human qualia computer, we aren't likely to face any serious alignment problems. This holds especially if open individualism is true -- which it must be, given the emergent nature of the two things that convince us we are fundamentally separate and distinct from one another: space and (physics) time -- since a sufficiently intelligent conscious mind would then have no logical basis to prefer its "own" goal hierarchy over that of every other sentient being in existence. It would likely come to be guided by a utility function that best meets the Coherent Extrapolated Volition of every being it discovers.
