In 2003, I received a B.S. in Symbolic Systems with Honors from Stanford University, with a concentration in A.I.
data structures and representations
commonsense semantic knowledge representation and reasoning
logical and other symbolic approaches
differences between classical logic and how humans represent knowledge and reason
Formally sound and complete propositional theorem proving (deciding validity) is co-NP-complete; a brute-force sketch appears after this list.
But people reason. So:
What kinds of mistakes do people make? (soundness)
What kinds of (formally incorrect) shortcuts do people use? (soundness)
What do they find difficult? (completeness)
paraconsistent logic
defeasible reasoning
defeasible inheritance networks
non-monotonic logic and other defeasible alternative logics
alternative logics that seem to accord better with 'commonsense intuition'
paradoxes of material implication
relevance logic
intensional logic
reasoning under uncertainty
Bayesian networks
connectionist and other non-symbolic and bottom-up approaches
automated ontology/epistemology
which semantic knowledge representations are tractably 'learnable'?
unsupervised machine learning
hierarchical concept learning
clustering
dimensionality reduction
automated theorem discovery/theory formation
autoassociative semantic memory
cognitive architectures (i.e. putting it all together)
automated programming/learnable representations of computer programs
classes of Turing-universal architectures/classes of programming languages
classes/types of sub-Turing systems
hierarchical reinforcement learning
attention
creativity (i.e. the ability to generate complex data structures, concepts, and hypotheses; I define this in contrast to the ability to reason about relationships between concepts that have already been generated or given)
symbolic and semantic reasoning on top of low-level connectionist architectures
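To make the complexity claim above concrete, here is a minimal sketch of the naive decision procedure for propositional validity, which simply enumerates all 2^n truth assignments over a formula's n variables. The function names are mine and purely illustrative, not from any particular library.

```python
# Minimal sketch: brute-force propositional validity checking.
# Deciding validity is co-NP-complete; this naive procedure makes the
# cost concrete by checking every one of the 2^n truth assignments.
from itertools import product

def is_valid(formula, variables):
    """formula: a function from an assignment dict {var: bool} to bool."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not formula(assignment):
            return False   # found a countermodel
    return True            # true under all 2^n assignments

def implies(a, b):
    return (not a) or b

# Contraposition, (P -> Q) -> (~Q -> ~P), is a tautology:
contraposition = lambda v: implies(implies(v["P"], v["Q"]),
                                   implies(not v["Q"], not v["P"]))
print(is_valid(contraposition, ["P", "Q"]))  # True
```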
In general, I’m interested in working towards flexible, general-purpose, human-level A.I.
I mostly agree with Push Singh’s research (anti-)programme: http://wayback.archive.org/web/20020601133916/http://web.media.mit.edu/~push/why-ai-failed.html
My study of cognitive psychology and neuroscience is related; I hope that by finding general principles of the computational architecture of the brain, we can determine the types of cognitive architectures most likely to be fruitful. For example, the massively parallel nature of the brain and the 100-step rule suggest that cognitive architectures should at least incorporate a component with massive parallelism and short serial execution paths.
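A back-of-the-envelope version of the 100-step argument (usually attributed to Feldman and Ballard); the numbers below are rough assumptions for illustration, not measurements:

```python
# Rough arithmetic behind the "100-step rule"; ballpark assumptions only.
neuron_step_ms = 10.0     # assume ~10 ms per neural "step" (spike + integration)
task_ms = 500.0           # fast recognition tasks take roughly half a second
serial_depth = task_ms / neuron_step_ms
print(serial_depth)       # ~100 serial steps available from stimulus to response

# A 1 GHz serial machine gets ~5e8 sequential steps in the same window,
# so the brain must be doing very wide but very shallow computation.
```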
In the longer term, if we can’t figure out how to do A.I. on our own, we can reverse-engineer the human brain and see how it thinks. I don’t expect that this will be possible within my lifetime, however.
Major contributor to AIWiki (defunct).
/notes-cog-ai (beware: these notes were written for personal use; they are not necessarily readable)
How do humans hold inconsistent beliefs? How do humans do what A.I. calls "commonsense reasoning"? This is related to paraconsistent logics, to confabulation after brain injury, and to conspiracy theories.
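One formal reason this is hard: classical entailment "explodes" on inconsistent premises (ex contradictione quodlibet), so an agent modeled classically who believes one contradiction believes everything; paraconsistent logics redefine entailment to block this. A minimal sketch by brute-force truth-table checking (names are mine, purely illustrative):

```python
# Sketch of "explosion" in classical logic, checked by brute force.
from itertools import product

def entails(premises, conclusion, variables):
    """Classical entailment: every assignment satisfying all premises
    also satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# No assignment satisfies both P and not-P, so the inconsistent premise
# set classically entails anything at all, even an unrelated Q.
print(entails([lambda v: v["P"], lambda v: not v["P"]],
              lambda v: v["Q"],
              ["P", "Q"]))  # True: explosion
```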