Welcome to Jose's Read Only Forum 2023.
 

Computers versus Common Sense

Started by Charles Pegge, December 05, 2007, 12:18:44 AM



Charles Pegge

This talk is about the Cyc reasoning engine and all the complexities associated with understanding natural language and making inferences. It is an easy-to-follow and entertaining presentation with minimal jargon.

Cyc has been around for over 20 years and has a large ontology base incorporating thousands of rules and micro-theories, and a knowledge base large enough to do intelligent searches using Google and AltaVista.

Its most immediate application is risk assessment for national security.

Dr Douglas Lenat
CyCorp

http://video.google.com/videoplay?docid=-7704388615049492068&q=engedu


Kent Sarikaya

I haven't watched the video yet, but it sounds fascinating and I will.

I had to write before even watching because... being a huge sci-fi fan (well, I was, until I became totally absorbed with coding)... I used to love the show Babylon 5, and if any of you remember,
there was a Psi Corps in there, spelled differently. It still sounds neat to hear the name in the real world :)

Thanks for the link!

Charles Pegge

It made me appreciate how difficult natural languages are. Douglas Lenat reckons we are some way off fully fledged linguistic AI. We might have it around 2020. But it will be a step as significant as when humans first started using speech to communicate.

Kent Sarikaya

I finally had some time to watch this video. Wow!
I am happily surprised at how advanced a system they already have, and to be on the verge of self-learning now is a wonderful thing.

To think that by 2020, we could have true AI is really something to be excited about. Very inspiring to think of what can be around the corner!!

Donald Darden

I never trust any scientist's estimate of what is possible or when it will be available. As a coder, I've come to appreciate the difficulties of just finding common ground between one computer dialect and another, even if they are all supposed to be members of the same language group, such as Basic. We are often forced to adopt a very limited set of operations that are similar enough to allow for a common method of expression.

The range of human experience, background, cultural influences, and circumstances is far broader than any variances in computer architecture and structure, and there are not only major language groups to consider, but the local dialects and colloquialisms that have to be dealt with, in addition to the specialized languages that grow up around different disciplines.  There is also the issue of reinforcement and feedback, which is essential for AI to work.  How can a machine know what criteria to use to judge the adequacy of a given translation, and do its determinations reflect those of the majority or the most knowledgeable among us?  Who is to judge?

There is no one way or right way to say anything.  You can start with a topic like whether or not it is possible for humans to think constructively without a language.  When I think, it is always in terms of words that I know, and often in the form of how I would explain this out loud to others, or write it out to be read and understood.  Perhaps intuition is only thought that we cannot quite verbalize because we have not yet found words to adequately express it.

But computers do not think, and whatever they do, it is in response to code that humans use to create some emulation.  And man is going to code based on his notion of what is desirable, and within his scope of time and effort.  In other words, why create a virtual or artificial form of single-cell structures if we then have to wait a few million years to see if any mutate through random change to become multi-celled creatures?  We don't have time for that.  We will instead forge the equivalent of multi-cell organisms and see if we can tweak them to leap to the next state, which is to demonstrate some property of life, such as self-repair, self-replication, awareness, defense or offense capability, or whatever.  But by tweaking our mechanism, we are imposing our perception of what is possible and trying to bias the game to substantiate our beliefs.

If you are an old science fiction fan, you can understand that science has really been disappointing to its followers.  Look at 2001: A Space Odyssey for an idea of where some people thought we should be by now.  But in terms of actually taking man to other planets or even to the stars, we are further from those goals than ever.  The advances of the last few decades that seem to have brought us closer to those goals were all made by machines, because we are unable to make those journeys on our own.

So it is not that machines and computers are not important and useful tools for us, but the idea that machines will someday just take over, or that they will achieve a state of total independence, or that they will ever become the equals of man doesn't seem to have any real basis, other than fanciful thinking along the lines of "what if...".

I'm certain in my own mind that the forces that will tear the fabric of society and lay waste to the underpinnings of technology are becoming more evident and clearly defined over time, and that expectations that we will somehow continue along the paths that the more optimistic among us have envisioned will come to naught.  Man will never leave this planet en masse, never colonize other worlds, never know if he is alone amongst all the stars and galaxies, and will perish when his benevolent world is pushed beyond its recuperative limits and he is proved incapable of adapting sufficiently in order to survive.

If you have read enough science fiction, you will know that even this end has been contemplated before, and that there are some who believe that future civilizations, if man survives, will be constrained against taking the unnatural path of technology, and forced to live according to some natural code that will exclude anything that results in a pollutant.