andrewducker ([personal profile] andrewducker) wrote 2009-03-06 04:01 pm

Delicious LiveJournal Links for 3-6-2009

[identity profile] nmg.livejournal.com 2009-03-06 04:44 pm (UTC)
Re: Wolfram, I'm getting a bit of a nouveau-AI-flavoured Cyc feel about this.

[identity profile] the-locster.livejournal.com 2009-03-06 10:09 pm (UTC)
Uh huh. Of course, knowledge by itself isn't strong AI; it's part of what's needed for strong AI. The ability to learn and construct knowledge for oneself is key, but perhaps the big thing that's often overlooked is executive control: choosing what to learn, do and think about. And motivation and incentive: what form do these take in a machine intelligence with no need to reproduce?

Something like this could provide a pretty neat search engine, which in turn would accelerate all kinds of research, much like Google and the internet have done in recent years. It's perhaps another step towards strong AI. Ultimately, of course, we still need more powerful computers; you just can't run something as smart as a human on today's (standard) computer hardware, even if it does operate in a fundamentally different way. Information processing isn't intelligence, but a crapton of it is required for intelligence.

[identity profile] the-locster.livejournal.com 2009-03-07 06:19 pm (UTC)
"Meaning" requires AI, from my understanding. The ability to parse plain English generally requires likewise.

I agree. You need intelligence to determine context and unravel the true meaning. It's not simple parsing but an iterative process (between high-level thought and low-level parsing) that homes in on the correct or most likely meaning (gradient descent, minimizing error, etc.).
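
(For what it's worth, here's a toy Python sketch of the sort of error-minimizing loop I have in mind. The error function and the "target meaning" are made up purely for illustration; this isn't how Wolfram's system, or any real parser, actually works.)

    def error(guess, target=3.0):
        # Made-up error: squared distance between the current guess and the "true" meaning.
        return (guess - target) ** 2

    def gradient(guess, target=3.0):
        # Derivative of the error with respect to the guess.
        return 2 * (guess - target)

    guess = 0.0          # crude initial parse
    learning_rate = 0.1  # how far each corrective step moves us

    for step in range(50):
        guess -= learning_rate * gradient(guess)   # step downhill, reducing the error

    print(guess, error(guess))  # ends up very close to 3.0, with near-zero error

The point is just the shape of the process: repeated small corrections driven by a measure of how wrong the current interpretation is.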

I suspect Occam's razor largely applies here. Given the competing hypotheses that Wolfram has cracked it vs. Wolfram is a nutball, my money is on option B :)

[identity profile] jccw.livejournal.com 2009-03-07 01:30 pm (UTC)
Yeah, this does sound a lot like Cyc and/or the "Semantic" Web reheated, neither of which is anywhere near working. The problem with Wolfram is that he seems even more susceptible to "not invented here" syndrome than most smart people.

I was just talking to an AI prof here who said something interesting about how the Cyc people started out totally ad hoc, just hacking away, and eventually *needed* to adopt better principles for modular and higher-order reasoning: all the things that we crazy PL and logic researchers have been going on about.