Board: Ivory Tower
Tech Apocalypse, AI, Tolkien, and Elon Musk
by gadsden76 on 14/05/2020, 21:07:14 UTC
⭐ Merited by 20kevin20 (5), vapourminer (2), joniboini (2)

Elon Musk seems to be very concerned about the effects of AI on human life in the not-so-distant future. Whether this is just the newest tech scare driven by silicon paranoia and modern social isolation remains to be seen, but it seems likely that what is called artificial 'intelligence' (whether it is actually intelligent or not) is going to be a highly disruptive technology.

Some of the concerns about AI come from extrapolating Darwinian competition from competition among natural life to competition between natural and artificial life. Whether that extrapolation is itself warranted also remains to be seen, but if it is, the concerns would appear justified. Far more concerning to me, however, is the overabundant confidence that man can legislate his own morality. The democratic polis has not been able to function humanistically without the introduction of ideology, which always carries its own circular reasoning, which is to say it is never separated from the passions of the particular man who thinks ideologically. If man cannot figure out, apart from faith, how to think about a universal moral code (and when such an ideological code has been forcibly mandated it has become positively immoral, often incorporating genocide), then how could we possibly summon the hubris to believe that we can legislate the morality of an AI? It seems quite possible that whatever ideological morality is provided to the machines could easily be taken to an extreme that negates its codified intent. Religious, faithful thinking, on the other hand, which I believe is particular to man, does not feel like something that can be pieced into bits and bytes, so to speak...

One of the largest problems in thinking about AI seems to be contingent on linguistic and non-linguistic philosophical problems that may be unrelated to the analytical aspects of studying AI itself. It is a qualitative problem, akin to asking 'what is the mind, soul, or intellect?', and it would need to be answered before we can proceed to questions like 'does this program I wrote actually bring forth an intelligent being?' Of course the process could pass the Turing test and be indistinguishable from a human, but that is an inherently unsatisfying notion of reality. It is akin to believing that the way something 'looks' is the way something 'is'. 'Is' is always related to Being, which is to say there is a logos of the thing that can be spoken about. We can speak about the appearances of a thing, but to be sure that we are not deceived requires faith in an 'is' behind the looks of the thing. I don't think that this faith can be present in AI robots, which inherently prevents us from providing them with a non-ideological moral code, that is, the one we find 'in' the order of the universe, not just what we can say is 'of' the universe (what can be examined scientifically or codified).

Tolkien saw the One Ring as (if anything) technology, and I would say this is also the perfect metaphor for what I am trying to communicate here. A ring is circular, and ideology works the same way: A=A. A super AI (Roko's Basilisk, anyone?) is still not capable of breaking that circle; in fact, to function at all it remains dependent on that programmatic-ideological circle. The circle seems to be its own confines. The way to intellection necessitates speaking about real objects, that is to say 'thinking straight' or 'operating faithfully', and that isn't done tautologically.

Elon Musk is currently working, through his company Neuralink, on "high bandwidth brain-machine interfaces to connect humans and computers"; his philosophy is 'if you can't beat them, join them'. If you sync yourself up with the AI net, you would be assuming that this gives you more knowledge about the world, about 'real things', but there is no such guarantee. A simple examination of the way technology already works, in cutting many people off from reality, goes a long way toward convincing me that syncing yourself up, so to speak, would not be lucid but actually quite hysterical. You would be inundated with facts, yet only see all the meaningless ideological ways to organize those facts. All that new information you could recognize would, in itself, probably say nothing more about reality than what one understood before being chipped. I could only see this as inherently maddening.